Using Natural Language Generation to Bootstrap Empty Wikipedia Articles: A Human-centric Perspective

Tracking #: 2402-3616

This paper is currently under review
Lucie-Aimée Kaffee
Pavlos Vougiouklis
Elena Simperl

Responsible editor: 
Philipp Cimiano

Submission type: 
Full Paper
Abstract: 
Nowadays, natural language generation (NLG) is used in everything from news reporting and chatbots to social media management. Recent advances in machine learning have made it possible to train NLG systems that aim for human-level performance in text writing and summarisation. In this paper, we propose such a system in the context of Wikipedia and evaluate it with Wikipedia readers and editors. Our solution builds upon the ArticlePlaceholder, a tool used in 14 under-served Wikipedias, which displays structured data from the Wikidata knowledge base on empty Wikipedia pages. We train a neural network to generate text from the Wikidata triples shown by the ArticlePlaceholder, and explore how Wikipedia users engage with it. The evaluation, which includes an automatic, a judgement-based, and a task-based component, shows that the text snippets score well in terms of perceived fluency and appropriateness for Wikipedia, and can help editors bootstrap new articles. It also hints at several potential implications of using NLG solutions in Wikipedia at large, including content quality, trust in technology, and algorithmic transparency.
Full PDF Version: 
Under Review