Review Comment:
In this manuscript, a topic modeling analysis of the semantic web research field is performed. To do so, the authors employ a set of three software tools and compare their results with the topics they themselves extracted from three seminal papers.
The main idea is interesting, and the paper is well organized.
In what follows, some comments and suggestions are listed:
- The authors mainly perform a science mapping analysis (a kind of scientometric analysis), but no comment on or reference to this research field is present.
- The corpus is not well defined. How many documents were retrieved?
- Why is the last year covered 2015? We are now in 2019.
- There are many science mapping software tools specifically designed for topic analysis of scientific corpora, for example SciMAT, VOSViewer, and CiteSpace. Why did the authors not use any of these?
- Why did the authors use three different software tools to perform topic modeling?
- Why does Rexplore use a different corpus? This does not make sense: a different corpus should uncover different topics.
- As the authors claim, the 20 most important topics detected by each software tool do not match. As I argued above, if different corpora are employed, it is only logical that the results will differ.
- The utility of this manuscript is not clear. The authors first identify the topics using expert opinion, and then try to validate those results using automatic tools. Perhaps the focus of the paper is wrong: rather than validating their expert-derived topics, the authors should try to extract the main concepts covered in the research field. The validation of expert opinion against the results of automatic methods could be developed in a separate paper.
- The authors should consider developing a science mapping analysis based on co-word analysis and, at the very least, compare its results with those obtained in this manuscript.