Building and Mining Knowledge Graphs for Newsroom Systems

Tracking #: 2179-3392

Arne Berven
Ole A. Christensen
Sindre Moldeklev
Andreas Opdahl
Kjetil J. Villanger

Responsible editor: 
Ruben Verborgh

Submission type: 
Full Paper
Abstract: 
Journalism is challenged by digitalisation and social media, resulting in lower subscription numbers and reduced advertising income. Information and communication techniques (ICT) offer new opportunities. The paper explores how social, open, and other data sources can be leveraged for journalistic purposes through a combination of knowledge graphs, natural-language processing (NLP), and machine learning (ML). Our focus is on how these and other heterogeneous data sources and techniques can be combined into a flexible architecture that can evolve and grow to support the needs of journalism in the future. The paper presents the state of our architecture and its instantiation as a prototype we have called News Hunter. Plans and possibilities for future work are also outlined.
Solicited Reviews:
Review #1
By Jesús Arias Fisteus, submitted on 23/May/2019
Review Comment:

This work presents a prototype of a system that applies natural language processing, machine learning and semantic web technologies to enhance the editorial process of news. The system is being developed in collaboration with a company that develops news production software for the international news industry. The system includes components for the whole process: gathering data from sources (news sites, social networks, etc.), storing those items, translating them, extracting and storing semantic annotations from them, classifying them, augmenting them with related information, detecting events, helping reporters create new news items, etc.

The topic of the paper is within the scope of the journal. It is well written, well structured and easy to read. I find the described system to be quite interesting and comprehensive, trying to improve the whole life cycle of a news item. However, I don't feel the paper meets the requirements for a regular paper, as explained below. I wonder whether it would make more sense to adapt it for publication as an "Application report", since it would in my opinion fit the evaluation criteria of the journal for that category.

The architecture of the whole system can, up to a point, be considered original and the main contribution of the paper, although none of its components is especially novel, except, perhaps, the ability to apply the system to early drafts of news items on which a journalist is working. The architecture is composed of more or less standard modules for this kind of system (data gathering, automated translation, classification, named entity recognition and disambiguation, metadata storage in graph format, sentiment analysis, etc.). In the current prototype, each of these modules is built on top of already existing software or services, and they communicate with each other through REST APIs.

I also find the evaluation of the system to be weak. The paper presents it from three points of view:

- Section 7.1 (proof-of-concept evaluations) describes just software tests (functional tests, integration tests, etc.) that were applied to the software, which don't add any scientific value. Readers would expect the prototype of a system such as this to be properly tested, so no mention of it is needed. I recommend removing this section.

- Section 7.2 (evaluations with human participants) reports on the use of the system by 6 professional users in the news domain. They provided extensive feedback on the main features of the system, some of it just about the user interface, but some, much more valuable, about modules not producing their results at the desired quality level. This feedback can help to identify limitations in the state-of-the-art algorithms behind those features, and potential lines of further research. I find this part of the evaluation interesting and useful.

- Section 7.3 (evaluations of algorithms) reports on the performance of an apparently arbitrary subset of modules within the system (translation, single labelling, multi labelling and event detection). There is no explanation of why those modules get evaluated while others don't. The evaluation of the translation system is based on just 2 documents. The evaluation of the event detection system is also quite weak. For the other two modules larger datasets were used, but there is no explanation of how the experiments were carried out (size of the training, validation and test sets, whether cross-validation was used, etc.).

Other issues:

- I think the title is misleading. One would expect the paper to be focused on building knowledge graphs from the gathered documents, but the paper describes the whole system and there is no special focus on how the knowledge graph is built.

- I find section 3.3 about design science to be quite abstract and not really contributing to the paper.

- I also believe that section 3.4 about the development methodology and the development tools (e.g. source code editors) used can be removed or largely reduced, since it doesn't add any value to the paper.

Review #2
Anonymous submitted on 14/Jun/2019
Review Comment:

Thank you for this submission, in which you describe a system called Newshunter, developed by a research lab together with a company.

Instead of taking a deep dive by looking at very specific aspects of the paper, I'll keep this short review on a rather general level.

(1) originality: the paper describes the implementation of a system that is specifically aimed at journalists. Many similar systems exist and have existed for about 15 years. The authors state, on page 1: "Our research goal is to understand whether and how information and communication techniques (ICTs) such as knowledge graphs, natural-language processing, and machine learning can be combined to make social, open, and other data sources more readily available for journalistic work." The important words in this quote are "whether" and "how". Many successful companies and products have been built around concepts and architectures that are extremely similar to Newshunter – so the answer to this research question is, quite evidently: "Of course." In terms of originality, the paper presents nothing that can be considered novel. While the architecture design and implementation of the prototype has surely been an interesting endeavour for the team involved, and surely also for the company, many similar approaches exist and have existed for some time.

Later, on page 3, the research question is further explained as follows: "can heterogeneous data sources and techniques be combined into an architecture that supports the needs of modern journalism?" – here, the important phrase is "needs of modern journalism". Unfortunately, the authors never explicitly state what the "needs of modern journalism" are. If there has been a requirements analysis exercise, it should be described and explained, in detail. As it currently stands, the paper comes across as a larger implementation project, in which various existing third-party components have been taken off various shelves and integrated into a prototype that was then tested with "two journalists and four domain experts" (section 7.2). The very low number of participants in this study and the study itself (a simple textual description of the subjects' observations) should be significantly extended in a future version of the paper. As of now, the observations can only be considered anecdotal (who are the subjects? what is their background? how was the user study performed? how was it analysed? why only six people? couldn't the company provide a group of beta testers? etc.).

Follow-up question: what is a "domain expert" in this context? Aren't "experts" in the "domain" of journalism, journalists? How are the two journalists and the four domain experts different? Another comment on the evaluation: this is, essentially, a user study in which stakeholders are asked to use and describe features of a prototype. The six users are surely not interacting with the backend, so they must be using the graphical user interface, but the description of the GUI is only a very small part of the paper (one paragraph, section 6.12). The user study, as described in the paper, is not an adequate way to evaluate the overall system.

There is another discrepancy, and it relates to the title of the paper: "Building and Mining Knowledge Graphs for Newsroom Systems". Only a very small part of the paper is about knowledge graphs. Moreover, it is not about "newsroom systems" (plural) but about one specific system.

The related work section misses several of the large current EU projects on ICT, news and journalism, most notably SUMMA (University of Edinburgh). In addition, none of the recent papers of VU Amsterdam on a similar project are cited (Piek Vossen). The same goes for the recent NLP and Journalism workshops, in which lots of papers that are highly relevant for Newshunter have been presented. Likewise, the whole "digital journalism" area is completely missing, for example, the survey papers by Neil Thurman, which could actually have been used in part to inform the design of Newshunter. In fact, the list of references is rather outdated: most references are from the 2000s, while there are three from 2015, eight from 2016, two from 2017, four from 2018, and two from 2019. While the authors emphasise in a number of places that the field is very dynamic and developing rapidly, the list of references suggests a different situation.

In terms of originality, the paper does not address any of the aspects that are currently being discussed in the area of ICT for journalism, some of which are the detection of online misinformation campaigns, fake news detection, troll/social bot detection, hate speech detection, automatic storyline identification.

(2) significance of the results: with regard to the paper as it has been submitted, there are no significant results.

(3) quality of writing: there are many typos and ungrammatical sentences; the paper is in various parts very verbose and repetitive.

Review #3
Anonymous submitted on 18/Jun/2019
Major Revision
Review Comment:

The paper describes the architecture and provides details on the implementation of the News Hunter system prototype, which combines multiple features that assist journalistic work under a unified platform.
As a tool, News Hunter offers a promising and well thought-out solution for meaningful support during the research and creation of news content.
However, there are significant aspects that are not adequately presented.
Firstly, the presented research hypothesis seems somewhat misformulated. The individual technologies incorporated in News Hunter have already proven useful and impactful for journalists. This is also evident from the background section, where multiple systems incorporate some of the technologies also present in News Hunter. There are, however, other aspects, such as multilingual news aggregation and summarisation, content enrichment and source provenance, that are not adequately covered.
The innovation of News Hunter lies mostly in the fact that these technologies are elegantly and efficiently combined in a usable framework. Part of that achievement is the processing of the harvested data, its ultimate representation as linked data and its exposure via a unified graph database. In my opinion, and for this specific journal, the paper should provide more details on the design and implementation of the graph database and of the lifter and retriever components, to make clear how the transition to linked data is realised.
Regarding the research value of the individual components that comprise News Hunter, it seems that most of the components use off-the-shelf solutions and libraries. If there were any modifications or API extensions in order for the components to work with the graph database, the authors should briefly mention them.
Details that are purely referring to the implementation process, such as section 3.4, are rather superfluous and could be omitted.
The evaluation of the platform is also limited, in the sense that it does not lead to directly actionable guidelines. As reported in section 7.2, there are many features that are desirable to end users but should be improved. The authors should discuss their ideas for improving such features while maintaining the integrity and efficiency of the proposed architecture.
Overall, the main advantage of the paper is that it describes a potentially useful and well-designed platform, which makes meaningful use of semantic web concepts and technologies. However, in its present form the manuscript lacks direction and does not focus on the details relevant to a semantic web audience. Therefore, I would suggest an extensive restructuring of the paper, in order to provide further details on the impact of the linked data approach, its value for research and end-user audiences and its effect on the efficiency and extensibility of the platform.