Braybrooke Road, Wargrave, Berkshire, RG10 8DU. UK
+44 1628 947950
comms@contechlive.com

ConTech 2018 Sessions – 29th November

Book your place now

Opening Keynote – 10:00-10:45

Starting Your AI Journey

“Andrew takes a pragmatic and hype-free approach to explaining artificial intelligence and how it can be utilised by businesses today. Andrew will introduce his AI Framework, which describes in non-technical language the eight core capabilities of artificial intelligence. Each of these capabilities, ranging from image recognition, through natural language processing, to prediction, will be explained using real-life examples, along with how they can be applied in a business environment. Andrew will then go on to describe how an organisation can start its AI journey and build its own AI strategy.”

Andrew Burgess

Adviser, Speaker & Writer on Artificial Intelligence & Robotic Process Automation, AJBurgess

Panel Session 10:45 – 11:45

Transforming content through data science and AI – assessing the impact and landscape

From large content systems to small boutique publishing operations, data science has the potential to transform not only our understanding of our audience but also the way we create and manage our content, making it more relevant and agile. Our speaker and panellists will discuss the potential impact of this new technology on the world of content.

Michael Puscar

Founder, Oiga Technologies

Ann Michael

President, Delta Think

Alice Zimmermann

Google Assistant Partnerships, UK, Google

Martha Sedgwick

Executive Director of Product Innovation, SAGE Publications

Michael Head

Senior Research Fellow, University of Southampton

Track 1
Using data science tools to drive new business – an overview of the opportunities
12:15 – 13:15

Content for the voice-activated world

Conversation is shifting. Hear from Google’s Alice Zimmermann as she looks at trends, best practices and new approaches for creating content that maximises the potential of the voice-enabled world.

Alice Zimmermann

Google Assistant Partnerships, UK, Google

The future is here: how to deliver answers from smart content with artificial intelligence tools

Elsevier has been on a journey over the last 20 years to create Smart Content that allows our customers to derive answers to complex questions through our products and services. We use a range of semantic and AI-inspired techniques to create enriched Smart Content, drawing on multi-disciplinary teams with expertise in knowledge curation, data science and the relevant subject domains. One of the services we offer is sharing the insights gained through this multi-disciplinary process, in the form of consultancy from our professional services team. This operational work is also supported by our technology team’s expertise in managing Big Data at scale, delivering performant responses through a range of approaches in our technology stack. In this talk I will describe how AI and machine learning are used in our production processes, and how these techniques can be employed to process Smart Content and derive insights across academic science and commercial R&D contexts. Using examples from current work, I will show best practice in achieving results with these tools and describe upcoming developments that I hope will both inform and inspire you to apply these approaches in your own organisation.

Jabe Wilson

Consulting Director, Text and Data Analytics, Elsevier

From products to services – how data technology puts customers at the centre of our business

Coming soon

Tim Aitken

Product Manager Inspec, IET

Applying machine learning to educational content to get actionable data

The K-12 educational publishing sector has experienced its fair share of digital disruption in the past few years. Some publishers have reacted well, others not. Daniel will share some of the hard-won insights and anecdotes from over 8 years of in-house and consultancy experience with some of the world’s largest educational publishers making the journey from traditional print to data-driven digital content. In 2016 Adaptemy took a major publisher through the transformation to structured XML authoring. He’ll review how machine learning was used to create practical, actionable data from this content across the organisation: for editorial, the data is used to evaluate learning efficacy, usage and redundancy patterns, quality issues in the content and author performance; for sales and marketing, it includes data on improvements to learning outcomes and customer engagement; and for customer support, it is being used to define intervention strategies that drive retention and customer satisfaction. As data scientists we need to improve how we communicate the transformative value of data-driven publishing to organisations. Daniel will conclude by presenting a “Publisher Maturity Model”, which provides a framework for evaluating the organisational and technical readiness of a publisher to move to data-driven publishing.

Daniel McCrea

Head of Publisher Services, Adaptemy

Data-driven content product demos
14:15 – 15:00

Learn about, assess and discover which products have the potential to have the biggest impact on the creation, distribution and consumption of professional content.

Presentation 1: Nishchay Shah, CTO, Editage
Ada – A Manuscript “Readability Assessment” Screening Solution for Publishers

Track 1
Building compelling, user-focused data-driven content products
15:00 – 16:00

Anatomy of a modern data-driven content product
Customer requirements are rapidly evolving, and users’ expectations are set by the online interactions they have with the likes of Google, Amazon and Apple. Content products need to focus on user needs and harness new technologies to remain relevant and to go beyond being just ‘grab and go’ stops on a user’s information journey.
This presentation will explain how leading organisations are using modern technology, new development approaches and user-centred design to deliver compelling digital content products. We will explore the interaction between people, process and technology, and take a specific look at how data drives success in a modern content product.
We will dissect the modern data-driven content platform, talking through the different technology components and how they come together to deliver a user-focused, feature-rich information product.

Sam Herbert

Co-founder, 67 Bricks

Re-invigorating a middle-aged publisher with machine learning, AI and open data

IFIS Publishing is a scientific publisher which started producing an abstracting and indexing database 50 years ago. It is now in a highly competitive market, with many new entrants – including Google Scholar – appearing over the last 10 years. In response, IFIS formed a partnership with one of India’s leading informatics companies, Molecular Connections (MC). Over the last 8 years, IFIS and MC have worked together to apply big data technologies such as machine learning, AI, open data and linked data stores to IFIS’s huge legacy database. This has re-invigorated IFIS’s services, introducing cost efficiencies and new, customer-focused services. IFIS and MC’s work was shortlisted for an ALPSP innovation award last year.

Jonathan Griffin

Managing Director, IFIS Publishing

Jignesh Bhate

CEO, Molecular Connections

Coming Soon

Daniel Hoadley

Head of Marketing, Incorporated Council of Law Reporting for England and Wales

Track 2
Automating content workflows – how AI and machine learning can create leaner, faster publishers
15:00 – 16:00

Deploying AI and deep learning to better analyse research projects and discover new collaborations

Chronos has established a global workflow that generates efficiencies for researchers, universities, academic publishers and funders in publishing, reporting on and analysing research output. In extending the functionality of the platform, we are beginning to provide data and access to content that enable deeper analysis of the research performed, so that AI can be applied to track original research, lab results, practices and the patterns that emerge from them, helping the entire research community to identify potential breakthroughs and to clarify future funding and research decisions.

Christian Grubak

CTO, Chronos

Applying machine learning to taxonomy creation at GOV.UK

GOV.UK had been trying for years to structure its content by developing a comprehensive topic taxonomy. What made it hard was the volume of content, the size of the domain, varying levels of understanding and buy-in from stakeholders and, frankly, conventional thinking. Traditional taxonomy development methods were failing; a rapid and radical change in approach was needed. In June 2017, data scientists and content strategists were brought onto the existing multidisciplinary team full time, and within six months the team had developed a comprehensive topic taxonomy and increased the percentage of GOV.UK content tagged to it from ~30% to 86% using supervised machine learning. The team is now building a new governance framework and continuing to explore further opportunities to build data science methodologies into their work.
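
To illustrate the kind of approach described above, here is a minimal sketch of supervised topic tagging; it assumes scikit-learn and uses placeholder labels and training examples, not GDS’s actual taxonomy or pipeline.

# A minimal, hypothetical sketch of supervised topic tagging (not GDS's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Content that editors have already tagged by hand (illustrative examples only).
documents = [
    "How to renew your passport online",
    "Income tax rates and personal allowances",
    "Apply for a visa to visit the UK",
    "Self Assessment tax return deadlines",
]
labels = ["travel", "tax", "travel", "tax"]

# TF-IDF features feeding a linear classifier: a common baseline for
# suggesting taxonomy topics for untagged pages.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(documents, labels)

# Suggest a topic for an untagged page; a human reviewer confirms the tag.
print(model.predict(["Check what tax code you should be on"]))  # e.g. ['tax']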

Sona Hathi

Content Strategist, UK Government Digital Service

Ellie King

Data Scientist, UK Government Digital Service

Machine Learning for subject index extraction from scholarly texts and tagging in LaTeX

The function of a subject index is to provide the user with an efficient means of tracing information. The index refers to the relevant information within the material and to the important concepts that are significant to the user. Automated indexing software builds indexes, and the results are lists of words and phrases that are sometimes useful in the early stages of building an index. We propose to extend this initial index material with recent advances in key-phrase extraction from scholarly texts. We apply a collocational chains method to extract meaningful phrases. Then we use subject corpora to create a content knowledge base and apply this knowledge to identify important key concepts within a collection of articles or a book. Our data-driven technique allows us to compile subject indexes for books and journal issues that, with minimal or no editing, are useful for users and conform with index-creation standards. We identify the actual location of these concepts within the text and tag them with LaTeX typesetting commands.
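
As a simplified illustration of the final tagging step, the sketch below wraps occurrences of extracted key phrases in LaTeX \index{} commands so an index can be typeset; the phrase list is a placeholder, and the collocational-chain extraction itself is not reproduced here.

# A simplified, hypothetical sketch of tagging key phrases with LaTeX \index{} commands.
import re

key_phrases = ["machine learning", "subject index"]  # placeholder phrases

def tag_with_index(latex_source, phrases):
    # Insert \index{...} after each occurrence of a key phrase.
    for phrase in phrases:
        pattern = re.compile(re.escape(phrase), flags=re.IGNORECASE)
        latex_source = pattern.sub(
            lambda m, p=phrase: m.group(0) + "\\index{" + p + "}", latex_source
        )
    return latex_source

text = "Machine learning methods can automate subject index creation."
print(tag_with_index(text, key_phrases))
# Machine learning\index{machine learning} methods can automate
# subject index\index{subject index} creation.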

Vidas Daudaravicius

Research Manager, UAB VTeX

Afternoon Keynote
16:30 – 17:15

“My Artificial Muse”: how can AI collaborate with humans in creative and artistic processes?

“My Artificial Muse” is a performance by Albert Barqué-Duran, Mario Klingemann and Marc Marzenit, premiered at Sónar+D (2017) and now on a world tour, exploring how an artificial neural network can collaborate with humans in creative and artistic processes. What is a Muse? Who can be a Muse? Where can we find a Muse? Can a Muse be “artificial”? Does it need to be “physical”? Can a computer-generated Muse be as inspiring as a human-like one? By destroying the classic concept of a Muse, are we creating something better? The artistic fruit of artificial intelligence (computational creativity) is a growing area of research and is increasingly seeping into the public consciousness. We will discuss how to integrate artificial intelligence as a creative collaborator in artistic processes.

Albert Barqué-Duran, PhD

Postdoctoral Fellow in Cognitive Science & Artist, City, University of London

Book your place now