Review Comment:
In this work, the triple confidence-aware encoder-decoder model is applied to commonsense knowledge graph completion (CKGC), with the aim of supporting software and AI development by helping to predict software actions. What is proposed is a single model that couples an encoder and a decoder to operate on knowledge triples. As described in the article, triple confidence is measured primarily through the encoding model, a confidence-aware relational graph encoder proposed for large-scale data use. The decoding stage differs from the encoding stage, primarily because the encoding stage optimizes the entity embeddings, obtaining low-dimensional entity representations that combine semantic and structural information.
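To make the encoder idea concrete, a minimal sketch is given below, assuming a simple confidence-weighted neighbour aggregation; this is not the authors' actual implementation, and the function name `conf_weighted_encode` and the toy data are purely hypothetical:

```python
import numpy as np

def conf_weighted_encode(entity_emb, triples, confidences):
    """Hypothetical confidence-weighted neighbour aggregation.

    entity_emb  : (num_entities, dim) array of initial embeddings
    triples     : list of (head, relation, tail) index triples
    confidences : per-triple confidence scores in [0, 1]
    Returns updated embeddings in which each entity's vector is a
    confidence-weighted average of itself and its neighbours.
    """
    updated = entity_emb.copy()
    weight_sum = np.ones(len(entity_emb))  # self-loop weight of 1
    for (h, _r, t), c in zip(triples, confidences):
        # a neighbour contributes in proportion to the confidence
        # of the triple that links it to the entity
        updated[h] += c * entity_emb[t]
        updated[t] += c * entity_emb[h]
        weight_sum[h] += c
        weight_sum[t] += c
    return updated / weight_sum[:, None]

# toy example: 3 entities, 4-dimensional embeddings, 2 triples
emb = np.random.rand(3, 4)
triples = [(0, 0, 1), (1, 1, 2)]
confs = [0.9, 0.3]
print(conf_weighted_encode(emb, triples, confs))
```

A decoder (e.g. a ConvTransE-style scorer) would then score candidate triples over these confidence-aware embeddings.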
With these parameters, the results of implementing the triple confidence-aware encoder-decoder model on the Google Knowledge Graph show that the authors' new model achieves higher performance. Finally, the authors state that theirs is the first work to introduce commonsense triple confidence into CKGC. The confidence is used to help the model integrate neighbor entity information and thereby learn a more accurate semantic representation. Overall, their model achieves better results.
--------------------------------------------
The following comments address originality, significance of results, and quality of writing.
A. Originality: This paper builds on a cumulation of work carried out over the years, drawing on several sources as well as a well-known brand name such as Google to formulate the paper. Much has already been covered on this topic, especially in information sciences and neurotechnology, chronicling considerable development, including the use of the ConvTransE model to decode the results.
B. Significance of the Results: The significance stems from the decoding structure adopted by the authors, which leads to better results for this task. Moreover, the in-depth research and results reveal several instances of incompleteness in the training samples, which in turn affects the confidence scores.
C. Quality of Writing: The writing is fairly detailed in its explanations and reasoning, drawing on many different sources to support the paper's arguments. Starting with an explanation of why the research is necessary, and using multiple examples, figures, and charts, the authors explain their work well. Overall, the quality of writing is good and requires only minor proofreading (spellchecks, etc.).
------------------------------------------------------
The main comments for improvement are as follows.
--- Introduction section needs a motivating example in order to provide a solid foundation for the research problem. It also needs to include the Layout of the Paper, towards the end.
--- In the Experiments and Results section, a discussion on results should be included as a separate subsection.
--- Some perspective on Applications needs to be offered, e.g. in the Experiments and Results section as yet another subsection, or as an opening paragraph in the Conclusions, or in a similar place.
--- Conclusions need to be further elaborated in order to emphasize the authors' contributions and highlight future work. Include a bulleted list for each of these aspects to enhance reader appeal.
--- Related Work section needs more discussion; it seems rather terse for a journal article.
Include the following additional references, which will further enhance the article. Please look up the full author lists, page numbers, etc.; I am only including et al. here.
1) Tandon et al. Commonsense Knowledge for Machine Intelligence, ACM SIGMOD Record, 2017, https://dl.acm.org/doi/abs/10.1145/3186549.3186562
2) Tao et al. A Confidence-Aware Cascade Network for Multi-Scale Stereo Matching of Very-High-Resolution Remote Sensing Images, Remote Sensing 2022, doi: 10.3390/rs14071667
3) Puri et al. Commonsense Based Text Mining on Urban Policy, LREV journal, 2022, https://link.springer.com/article/10.1007/s10579-022-09584-6
4) Xie et al. Detect Incorrect Triples in Knowledge Base Based on Triple Confidence Evaluation, ICIBE 2017 (International Conference on Industrial and Business Engineering), doi: 10.1145/3133811.3133829
5) Onyeka et al. Using commonsense knowledge and text mining for implicit requirements localization, IEEE ICTAI 2020 (Intl. Conf. on Tools with Artificial Intelligence), https://ieeexplore.ieee.org/abstract/document/9288192
6) Antifakos et al. Towards improving trust in context-aware systems by displaying system confidence, MobileHCI Conference 2005, doi: 10.1145/1085777.1085780.
--------------------------------
Making revisions based on the above comments will improve the article. The revised article, after major revisions, can then be reconsidered for publication. The authors have done a very good job and are highly encouraged to submit a revision, which will further enhance the appeal of the article.