Book your place now

Opening Keynote – 10:00-10:45

Starting Your AI Journey

“Andrew takes a pragmatic, hype-free approach to explaining artificial intelligence and how businesses can use it today. He will introduce his AI Framework, which describes in non-technical language the eight core capabilities of artificial intelligence (AI). Each of these capabilities, ranging from image recognition through natural language processing to prediction, will be explained using real-life examples, along with how each can be applied in a business environment. Andrew will then describe how an organisation can start its AI journey and build its own AI strategy.”

Andrew Burgess

Adviser, Speaker & Writer on Artificial Intelligence & Robotic Process Automation

AJBurgess

Panel Session 10:45 – 11:45


Transforming content through data science and AI – assessing the impact and landscape

From large content systems to small boutique publishing operations, data science has the potential to transform not only our understanding of our audience, but also the way we create and manage our content, making it more relevant and agile. Our speaker and panellists will discuss the potential impact of this new technology on the world of content.

Michael Puscar

Founder,

Oiga Technologies

Ann Michael

President,

Delta Think

Alice Zimmermann

Google Assistant Partnerships, UK, Google

Martha Sedgwick

Executive Director of Product Innovation, SAGE Publications

Michael Head

Senior Research Fellow, University of Southampton

Using data science tools to drive new business – an overview of the opportunities


12:15 – 13:15


Content for the voice-activated world

Conversation is shifting. Hear from Google’s Alice Zimmermann as she looks at trends, best practices and new approaches for creating content that maximises the potential of the voice-enabled world.

Alice Zimmermann

Google Assistant Partnerships, UK, Google


The future is here: how to deliver answers from smart content with artificial intelligence tools

Elsevier has been on a journey over the last 20 years to create Smart Content that allows our customers to derive answers to complex questions through our products and services.

We use a range of semantic and AI-inspired techniques to create enriched Smart Content. This work involves multi-disciplinary teams combining expertise in knowledge curation, data science and specific subject domains. One of the services we offer is sharing the insights gained through this multi-disciplinary process as consultancy from our professional services team. The operational work is also supported by our technology team’s expertise in managing big data at scale, delivering performant responses through a range of approaches in our technology stack.

In this talk I will describe how AI and machine learning are used in our production processes, and how these techniques can be employed to process Smart Content and derive insights across academic science and commercial R&D contexts.

Using examples from current work, I will show best practice in achieving results with these tools, and describe upcoming developments that I hope will both inform and inspire you to adopt these approaches in your own organisation.
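
To make the enrichment idea concrete, here is a minimal sketch of one common dictionary-driven technique: curated vocabulary terms are matched in text and recorded as concept annotations. The vocabulary, concept identifiers and helper function are illustrative assumptions for the sketch, not a description of Elsevier's pipeline.

```python
# Minimal sketch of dictionary-driven semantic enrichment: a curated
# vocabulary maps surface terms to concept identifiers, and each match
# is recorded as a machine-readable annotation alongside the text.
# Vocabulary and IDs are invented for illustration.
import re

vocabulary = {
    "imatinib": "CHEMBL941",          # hypothetical curated concept IDs
    "tyrosine kinase": "GO:0004713",
}

def enrich(text):
    """Return concept annotations for every vocabulary term in the text."""
    annotations = []
    for term, concept_id in vocabulary.items():
        for match in re.finditer(re.escape(term), text, re.IGNORECASE):
            annotations.append({
                "term": match.group(0),
                "concept": concept_id,
                "start": match.start(),
                "end": match.end(),
            })
    return annotations

print(enrich("Imatinib inhibits the BCR-ABL tyrosine kinase."))
```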

Jabe Wilson

Consulting Director, Text and Data Analytics, Elsevier

“Recycling in the digital age: creating analytics tools from a discovery database”

“Abstracting and Indexing databases have been used for decades to allow researchers to discover scientific publications more easily and effectively. Now with technological advances such as semantic enrichment, knowledge graphs and big data, these can be repurposed to provide higher level insights into scientific and industry trends and relationships – reusing the same data to solve new customer problems. This talk will demonstrate how the Institution of Engineering and Technology has achieved this, delivering a powerful tool to customers worldwide, to help them make informed business decisions.”

Tim Aitken

Product Manager, Inspec, IET


Applying machine learning to educational content to get actionable data

The K-12 educational publishing sector has experienced its fair share of digital disruption in the past few years. Some publishers have reacted well; others have not.

Daniel will share with you some of the hard-won insights and anecdotes from over 8 years of in-house and consultancy experience with some of the world’s largest educational publishers making the journey from traditional print to data-driven digital content.

In 2016, Adaptemy took a major publisher through the transformation to structured XML authoring. Daniel will review how Adaptemy used machine learning to create practical, actionable data from this content across the organisation: for editorial teams, the data is used to evaluate learning efficacy, usage and redundancy patterns, quality issues in the content, and author performance. For sales and marketing, it provides data on improvements to learning outcomes and customer engagement. For customer support, the content data is used to define intervention strategies that drive retention and customer satisfaction.

As data scientists, we need to improve how we communicate the transformative value of data-driven publishing to organisations.

Daniel will conclude by presenting the “Publisher Maturity Model”, which provides a framework for evaluating a publisher’s organisational and technical readiness to move to data-driven publishing.

Daniel McCrea

Head of Publisher Services, Adaptemy

Data-driven product sessions
14:15 – 15:00

Learn about, assess and discover which products have the potential to have the biggest impact on the creation, distribution and consumption of professional content.

Presentation 1:

Nishchay Shah, CTO, Editage

Ada – A Manuscript “Readability Assessment” Screening Solution for Publishers

The scholarly communications landscape is quickly adopting new technologies and integrating automated solutions to help make editorial decision-making processes easier for publishers. Editage, a division of Cactus Communications, has launched Ada, an automated document assessment solution specifically designed to help editors judge the “readability” of scientific content in research papers.

Ada is an API-connected screening tool that uses grammatical, word-semantic and dictionary rule-driven algorithms to assess the readability of a scientific manuscript. Whether at the point of manuscript receipt or at any stage of the editorial process, Ada can be calibrated and used to guide decisions about whether a manuscript is worth taking forward. It offers guidance in the form of a readability assessment grade, as well as indications of compliance with pre-set ethical criteria such as plagiarism checks, standard guideline disclosures and/or an internal checklist of language inclusions; the latter can be used as a proxy for the scope of a journal. As such, Ada saves editorial triage time and offers a calibrated, standardized and objective assessment of how to move forward with a manuscript, in under one minute.
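
As a rough illustration of the general approach, the sketch below applies a standard readability formula and maps it to an editorial grade. The textstat library, the thresholds and the grade labels are assumptions made for this sketch, not Ada's actual algorithms or API.

```python
# Minimal sketch of a rule-driven readability screen. Thresholds and
# grade labels are illustrative assumptions, not Editage's algorithms.
import textstat  # pip install textstat

def readability_grade(manuscript_text):
    """Map a standard readability score onto a coarse editorial grade."""
    score = textstat.flesch_reading_ease(manuscript_text)
    if score >= 30:        # dense scientific prose typically scores low
        return "acceptable"
    if score >= 10:
        return "needs language editing"
    return "refer back to author"

abstract = (
    "We investigated the catalytic activity of the enzyme under varying "
    "pH conditions and report a twofold increase in reaction yield."
)
print(readability_grade(abstract))
```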

Presentation 2:

Jem Rayfield – Chief Solution Architect at Ontotext

“Towards data-driven publishing – leveraging knowledge graphs and text analytics to enable new business opportunities”

In our digital age, contributors and consumers are overwhelmed by an ocean of articles containing millions of field-specific scientific concepts. Researchers and readers alike find it increasingly challenging to discover the most relevant content they need. Consumers also want to be able to see how information is related to a variety of scholarly domains and not just an isolated area of research.

By leveraging AI and cognitive technologies, publishers can create smarter, faster and easier content publishing workflows on the one hand, and smarter, faster and easier content consumption journeys for readers on the other.

Richer, linked content also increases business opportunities for publishers to reuse and up-sell both their existing and legacy content by addressing their readers’ needs more efficiently.
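
A minimal sketch of the underlying idea follows, using rdflib and an invented vocabulary in place of a production triple store; none of this is Ontotext's actual schema, but it shows how linked content supports cross-domain discovery and reuse.

```python
# Minimal sketch of content linked in a knowledge graph and queried
# with SPARQL. The ex: vocabulary is invented for illustration.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/pub/")
g = Graph()

# Articles are linked to the concepts they mention, so a reader of one
# article can discover related work, including legacy content.
g.add((EX.article1, EX.mentions, EX.crispr))
g.add((EX.article2, EX.mentions, EX.crispr))
g.add((EX.article2, EX.title, Literal("Gene editing in wheat")))

# "Which other articles share a concept with article1?"
results = g.query("""
    PREFIX ex: <http://example.org/pub/>
    SELECT DISTINCT ?other WHERE {
        ex:article1 ex:mentions ?concept .
        ?other ex:mentions ?concept .
        FILTER(?other != ex:article1)
    }
""")
for row in results:
    print(row.other)
```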

Presentation 3:

Claire Merritt – Corporate Account Manager at JoVE

“Leveraging the power of video to resolve the reproducibility crisis”

Researchers at leading biopharmaceutical companies have reported struggling not only to reproduce other scientists’ experiments, but their own as well. The traditional text protocol leaves much to the imagination and, given the high cost of time and money wasted in the lab, visualized experiments are changing the way biomedical R&D is done.

JoVE offers a unique solution to the reproducibility crisis with a comprehensive video protocol platform. With over 20,000 authors from leading biopharmaceutical companies and research universities worldwide, JoVE’s subscriptions can be customized to match each company’s R&D pipeline. Users are granted access to the platform through IP and domain authentication, making it seamless to share knowledge with colleagues at different sites.

Track 1
Building compelling, user-focused, data-driven content products


15:00 – 16:00

Anatomy of a modern data-driven content product



Customer requirements are rapidly evolving: users’ expectations are set by the online interactions they have with the likes of Google, Amazon and Apple. Content products need to focus on user needs and harness new technologies to remain relevant and to go beyond being just ‘grab and go’ stops on a user’s information journey.

This presentation will explain how leading organisations are using modern technology, new development approaches and user-centred design to deliver compelling digital content products. We will explore the interaction between people, process and technology, and take a specific look at how data drives success in a modern content product.

We will dissect the modern data-driven content platform, talking through the different technology components and how they come together to deliver a user-focused, feature-rich information product.

Sam Herbert

Co-founder, 67 Bricks

Re-invigorating a middle-aged publisher with machine learning, AI and open data

IFIS Publishing is a scientific publisher that started producing an abstracting and indexing database 50 years ago. It now operates in a highly competitive market, with many new entrants – including Google Scholar – appearing over the last 10 years. In response, IFIS formed a partnership with one of India’s leading informatics companies, Molecular Connections (MC). Over the last 8 years, IFIS and MC have worked together to apply big data technologies and approaches – ML, AI, open data and linked data stores – to IFIS’s huge legacy database. This has re-invigorated IFIS’s services, introducing cost efficiencies and new, customer-focused offerings. IFIS and MC’s work was shortlisted for an ALPSP innovation award last year.

Jonathan Griffin

Managing Director, IFIS Publishing

Jignesh Bhate

CEO, Molecular Connections

Re-inventing online access to UK case law

How legal content is accessed and used is changing. Hear from Daniel how ICLR put data at the centre of their new online product to deliver a compelling user experience and a platform for future innovation.

Daniel Hoadley

Head of Marketing, Incorporated Council of Law Reporting for England and Wales

Track 2

Automating content workflows – how AI and machine learning can create leaner, faster publishers


15:00 – 16:00

Beyond publishing: using data to understand the hidden research impact

Researchfish has been collecting impact data on research projects for a decade, tracking over £50 billion worth of awards annually from 160 organisations globally. The data collected goes far beyond publications to include knowledge exchange activities, collaborations, policy influences, public engagement, and economic and industrial impact. The depth and breadth of the information held lend themselves to extensive analysis, delivering insights into the global impact of research.

We shall present the work we are doing on our data set, spanning publications and beyond, to draw a unique picture of the global research landscape, and describe how we are working to:

Increase understanding of what drives impact

Communicate impact effectively

Understand what lies beneath impact data and metrics.

Ross Pullar

Product Manager, researchfish


Applying machine learning to taxonomy creation at GOV.UK

For years, GOV.UK had been trying to structure its content by developing a comprehensive topic taxonomy. Progress was hampered by the volume of content, the size of the domain, varying levels of understanding and buy-in from stakeholders and, frankly, conventional thinking. Traditional taxonomy development methods were failing; a rapid and radical change in approach was needed. In June 2017, data scientists and content strategists were brought onto the existing multidisciplinary team full time, and within 6 months the team had developed a comprehensive topic taxonomy and increased the percentage of GOV.UK content tagged to it from ~30% to 86% using supervised machine learning. The team is now building a new governance framework and continuing to explore further opportunities to build data science methodologies into its work.
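
For readers wondering what supervised tagging looks like in practice, here is a minimal sketch of a text-classification pipeline; the training examples, topic labels and model choice are invented for the sketch and do not describe GDS's actual implementation.

```python
# Minimal sketch of supervised taxonomy tagging as text classification.
# Training data, topic labels and model choice are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Content items already tagged by editors serve as training data.
train_texts = [
    "Apply for a passport online or renew an existing one",
    "Income tax rates and personal allowances for this year",
    "How to register a birth and order a birth certificate",
]
train_topics = ["travel", "money", "births-deaths-marriages"]

# TF-IDF features feeding a linear classifier: simple and auditable.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_topics)

# Untagged content gets a suggested taxonomy branch and a confidence
# score, so low-confidence suggestions can be routed to human review.
probs = model.predict_proba(["Check the tax due on your company car"])[0]
best = probs.argmax()
print(model.classes_[best], round(float(probs[best]), 2))
```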

Ellie King

Data Scientist, UK Government Digital Service

Machine learning for subject-index extraction from scholarly texts and tagging in LaTeX


The function of a subject index is to provide the user with an efficient means of tracing information: the index points to the relevant information within the material and to important concepts that are significant to the user. Automated indexing software builds indexes whose raw output is a list of words and phrases, sometimes useful in the early stages of building an index. We propose to extend this initial index material using recent advances in key-phrase extraction from scholarly texts. We apply a collocational chains method to extract meaningful phrases, then use subject corpora to create a content knowledge base and apply this knowledge to identify important key concepts within a collection of articles or a book. Our data-driven technique allows us to compile subject indexes for books and journal issues that are useful to readers with minimal or no editing and that conform to index-creation standards. Finally, we identify the actual location of these concepts within the text and tag them with LaTeX typesetting commands.
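
To illustrate just the final tagging step, here is a minimal sketch assuming the key phrases have already been extracted; the phrase list and helper function are hypothetical, and a real pipeline would also handle inflected forms and overlapping phrases.

```python
# Minimal sketch: once key phrases are known, mark each occurrence with
# a LaTeX \index command so makeindex can build the subject index.
import re

key_phrases = ["neural network", "gradient descent"]  # hypothetical output

def tag_index_terms(latex_source, phrases):
    """Insert \\index{...} after each occurrence of each phrase."""
    for phrase in phrases:
        pattern = re.compile(re.escape(phrase), re.IGNORECASE)
        latex_source = pattern.sub(
            lambda m, p=phrase: m.group(0) + r"\index{" + p + "}",
            latex_source,
        )
    return latex_source

sample = "A neural network is usually trained with gradient descent."
print(tag_index_terms(sample, key_phrases))
# -> A neural network\index{neural network} is usually trained with
#    gradient descent\index{gradient descent}.
```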

Vidas Daudaravicius

Research Manager, UAB VTeX

Afternoon Keynote


16:30 – 17:15

“My Artificial Muse”: how can AI collaborate with humans in creative and artistic processes?

“My Artificial Muse” is a performance by Albert Barqué-Duran, Mario Klingemann and Marc Marzenit, premiered at Sónar+D (2017) and now on a world tour, exploring how an artificial neural network can collaborate with humans in creative and artistic processes. What is a Muse? Who can be a Muse? Where can we find a Muse? Can a Muse be “artificial”? Does it need to be “physical”? Can a computer-generated Muse be as inspiring as a human-like one? By destroying the classic concept of the Muse, are we creating something better? The artistic fruit of artificial intelligence (computational creativity) is a growing area of research and is increasingly seeping into the public consciousness. We will discuss how to integrate artificial intelligence as a creative collaborator in artistic processes.

Albert Barqué-Duran, PhD

Postdoctoral Fellow in Cognitive Science & Artist, City, University of London

Book your place now