Semantic Web and its Role in Facilitating ICT Data Sharing for the Circular Economy: State of the Art Survey

Tracking #: 3381-4595

Authors: 
Anelia Kurteva
Kathleen McMahon
Alessandro Bozzon
Ruud Balkenende

Responsible editor: 
Agnieszka Lawrynowicz

Submission type: 
Survey Article
Abstract: 
The exponentially growing digitisation of services that drive the transition from Industry 4.0 to Industry 5.0 has resulted in a rising demand for materials for ICT hardware manufacturing. The environmental pressure, CO2 emissions (including embodied energy) and delivery risks of our digital infrastructures are increasing. A solution is to transition from a linear to a circular economy (CE), through which materials that were previously disposed of as waste are re-entered into product life-cycles through processes such as reuse, recycling, remanufacturing and repurposing. However, the adoption of the CE in the ICT sector is currently limited due to the lack of tools that support knowledge exchange between sustainability, ICT and technology experts in a standardised manner, and the limited data availability, accessibility and interoperability needed to build such tools. Further, the already existing knowledge of the domain is fragmented into silos, and the lack of a common terminology restricts its interoperability and usability. These issues also lead to transparency and responsibility problems along the supply chain. For many years now, the Semantic Web has been known to provide solutions to such issues in the form of ontologies and knowledge graphs. Several semantic models for the ICT, materials and CE domains have been built and successfully applied to solve complex problems such as predictive maintenance. However, a systematic analysis of the existing semantic models in these domains is lacking. Motivated by this, we present a literature survey of existing ontologies for ICT, materials and the CE, their possible applications and limitations, and current CE standardisation efforts that can help guide its further implementation.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Eva Blomqvist submitted on 19/Apr/2023
Suggestion:
Major Revision
Review Comment:

This is a survey paper, focusing on ontologies for modelling data about ICT devices and the materials they contain, in order to facilitate data sharing for the Circular Economy (CE). The survey results consist of a set of ontologies for modelling (i) ICT devices, (ii) materials and material composition, and (iii) CE concepts and strategies. I find this topic really interesting, and I agree with the authors that semantic technologies are a key enabler in the CE domain, since CE requires data sharing not only across organisations, but also across domains and for various unforeseen purposes. As such I find this paper highly valuable and timely, and I am sure it would interest many readers both in the semantic web community and beyond.

However, the paper also has several weaknesses that I think need to be addressed before it can be published. It starts already with the paper title and the motivation and introduction in Section 1. The paper title claims that the paper will analyse the role of the semantic web for data sharing in CE, while the paper is in fact specifically targeting ONE specific technology, i.e. ontologies, and does not even really analyse that technology's role for data sharing in CE, but merely surveys existing models. A better title could be something like “A survey of ontologies for facilitating data sharing about ICT devices and components in the Circular Economy”. The current title, together with the long motivation and explanation of what CE is and why it is important in the ICT domain, gives the reader wrong expectations when reading the rest of the paper. I acknowledge that at the end of the abstract, and at the end of Section 1, the authors do briefly mention that this is a survey of ontologies only, but this is easy to miss when the title says something else.

Further, I think the other main weakness of the paper is the methodology. I am not even entirely sure whether the problem lies in how the actual survey was carried out or merely in how it is presented in the paper, but at least the presentation is a problem. Section 2 of the paper presents the survey methodology, but this section is entirely detached from the rest of the paper. Figure 2 is nice, but it is not clear how each sub-step was performed, in what order, by whom, on what sources, etc. Also, there is no explicit connection between any of the steps and the results presented in the paper, although for some parts the reader can guess (e.g. step 2.3 probably resulted in Table 2). But for some parts I cannot find any discussion in the paper at all, such as for step 3.2, which sounds as if it would result in a survey of tools (potentially using ontologies, but not necessarily) in CE. Also, the authors refer to PRISMA for guidelines on reporting surveys; however, first of all, the current PRISMA site suggests referring to their updated version from 2020 and not the 2009 version that is cited there (see [1]). Second, I do not agree that the reporting of this survey actually follows the PRISMA guidelines. For instance, in PRISMA 2020 you can find a 27-item checklist for the paper, listing things to be reported in various sections, such as that the methods section needs to make explicit the inclusion and exclusion criteria used in the survey, the selection process, risk of biases, and limitations of the study. Of course, some items are not really applicable, since PRISMA is mainly for the medical domain, but I would like to see the items that are in fact applicable actually being included and discussed in this paper. This would clarify several vague statements in the paper, such as that parts of the survey are “based on” other surveys - what does that actually mean? That you included all the results they found and then added your own? Or that you simply reuse their survey results and did not add your own? (See for instance the introductory sentence of section 4.2 for such a statement.) In the analysis section, the authors also need to describe the assessment method in much more detail, and the criteria for arriving at Tables 6-8 are missing, e.g. what does it mean to “evaluate against” a competency question? What if a more general concept and property is available, but not exactly the one in the CQ? Does that give a “yes” or a “no” for the CQ? Granularity (in the text mistakenly called expressivity) is discussed briefly, but the text only states that the ontologies differ in their level of detail and granularity, without actually saying how this was handled when producing the tables. Also, the delimitation to only focus on laptop computers needs to be made clearer in the paper - how does that affect the results? Is this survey even applicable to the whole ICT domain then? Why/why not? Overall, the paper is methodologically weak, and this is the main thing that needs to be improved in order to make the paper publishable.
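
A concrete way to make "evaluate against a competency question" explicit is to operationalise each CQ as a SPARQL ASK query and run it against the ontology, which yields a reproducible yes/no per CQ and forces any near-miss policy (e.g. accepting a more general property) to be stated up front. Below is a minimal, hypothetical sketch using Python and rdflib; the ontology URL and the ex:Device/ex:Component names are placeholders, not terms from any surveyed ontology.

```python
# Hypothetical sketch: a competency question rewritten as a SPARQL ASK query.
# All IRIs below are placeholders, not taken from the surveyed ontologies.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDFS

EX = Namespace("https://example.org/ict#")

g = Graph()
g.parse("https://example.org/ict-ontology.ttl", format="turtle")  # placeholder URL

# CQ: "Which components does a device contain?" -- counts as a "yes" only if
# some object property links (a subclass of) Device to (a subclass of) Component.
cq = """
ASK {
  ?p a owl:ObjectProperty ;
     rdfs:domain ?d ;
     rdfs:range  ?c .
  ?d rdfs:subClassOf* ex:Device .
  ?c rdfs:subClassOf* ex:Component .
}
"""
res = g.query(cq, initNs={"owl": OWL, "rdfs": RDFS, "ex": EX})
print("CQ satisfied:", res.askAnswer)  # strict yes/no; near-misses need an explicit policy
```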

The standards section is also a bit detached from the rest of the paper. How is this connected to the ontology survey? Do the ontologies follow the standards? Did the standards come after the ontologies? Are there clashes?

Finally, I also find the ontology survey to include some dubious assessments. One thing that is mentioned several times is the tool used to develop the ontologies - how is this relevant? Would you, in a survey of software libraries, include the IDE that was used to develop them, e.g. “this was coded using NetBeans IDE”? Probably not, unless that has some impact on the resulting artefact. However, I fail to see how this is relevant for the ontologies. Further, there seems to be some confusion about the ontology languages and the notion of “expressivity”. First of all, RDFS and OWL are ontology languages (and there are others), but RDF is not. Overall, I fail to see how you could even express an ontology using RDF only; you would basically lose the whole idea of a formal semantics, since RDF is only plain triples, without any further possibility to express meaning. However, since both OWL and RDFS build on top of RDF and can be expressed as RDF graphs, e.g. for sharing on the web or in a triple store, what I suspect is that when some ontologies in the survey are listed as RDF ontologies, this actually just means they are available in some RDF serialisation format. Similarly, there seems to be some confusion about the OWL versions. OWL2 is simply the current version of OWL, so any current ontology in OWL can probably be said to be in OWL2; it is not some specific language separate from OWL itself. OWL DL, on the other hand, is a certain subset of OWL that limits the expressivity of OWL(2), so that says something different. Also, the term “expressivity” seems to be used in a strange way in the paper, i.e. in some cases I interpret it as meaning the level of detail of the ontology (e.g. depth of taxonomy, or granularity of modelling) rather than the actual expressivity used for the logical modelling, where the latter should instead indicate an OWL profile (or a specific DL, perhaps, if you want to be more precise). While the confusion around the term expressivity should definitely be resolved in the text, I acknowledge that it will probably be very difficult for the authors to determine the actual expressivity of each ontology in this survey (especially since many are not even available online). I would therefore suggest to merely state as much information as possible for each ontology, but be clear about the cases where this is unknown. I think the most relevant information is whether it is an RDFS or OWL ontology, and whether it adheres to a specific OWL profile (e.g. EL, QL, RL) or is simply in OWL DL. For the accessible ontologies the authors should be able to determine this by examining the ontology itself, and/or using some tool to assess it.
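
To gather the "as much information as possible" suggested above for the accessible ontologies, one lightweight option is to inspect which schema vocabulary a file actually uses, which at least separates "RDFS-only" from "uses OWL". A rough sketch with Python and rdflib follows (the URL is a placeholder); determining the exact OWL 2 profile (EL/QL/RL/DL) would still require a dedicated checker such as the OWL API's profile validators or the ROBOT tool.

```python
# Rough heuristic sketch: classify an ontology file by the schema vocabulary
# it uses. The URL is a placeholder. This only separates RDFS-only files from
# files using OWL vocabulary; exact OWL 2 profile checking needs a real tool.
from rdflib import Graph
from rdflib.namespace import OWL, RDFS

g = Graph()
g.parse("https://example.org/some-ontology.ttl", format="turtle")

owl_terms = {t for triple in g for t in triple if str(t).startswith(str(OWL))}
rdfs_terms = {t for triple in g for t in triple if str(t).startswith(str(RDFS))}

if owl_terms:
    print("Uses OWL vocabulary, e.g.:", sorted(str(t) for t in owl_terms)[:5])
elif rdfs_terms:
    print("RDFS-only: subclass axioms, domains/ranges, labels")
else:
    print("Plain RDF data: no schema-level vocabulary found")
```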

Detailed comments/questions and minor issues, in order of appearance:

- Footnotes 1 and 2 are more or less repetitions of each other.

- Does table 1 come directly from reference [15]? There has been a lot of discussion on how FAIR is actually to be implemented for ontologies, see for instance [2] below. A bit more discussion on this would benefit the paper. On the other hand, FAIR is not really used to assess the ontologies later on (apart from “are they online or not”), so this could also be better connected to the survey itself.

- Table 2 is nice and valuable, but quite long, so potentially it would fit better in an appendix and only a few examples included in the main text. I also have some doubts when reading parts of the table, such as how the key concepts were determined? Why is “device” not a key concept of Q1 and Q2, but of Q3 and several others? In this section it is also not so clear that you are only focusing on laptops, because you talk about ICT devices in general, and additionally some of the key concepts are NOT laptops, such as the router and switch listed for Q1. Why are they key for laptops, if phones and tables are not? Further, what is the difference between device and hardware in this context? Q6 uses the term device, while Q7 uses hardware, do they refer to the same thing? What is status and grade in Q8 and Q9? It is interesting that you consider software to be a component of a device, i.e. you are not only covering physical objects as components here, or am I misinterpreting Q10? Why is location not a key concept of Q15? Q19-22 state “is used” - is used in what, the component or the device? Q31 - seems random to only focus on USB-ports, why not other ports? Q38 - how is warranty a physical property? I am also not sure about memory capacity in Q41, I would have intuitively put that as a computational property. On the other hand, how is CO2 footprint (Q44) a computational property? Then comes a set of CQs that relate to cost, stock etc. that seem to be totally missing the contextual nature of these properties. In my opinion it doesn’t make sense (or at least it will be practically infeasible to get the data) to model the total world-wide stock of an item, instead I guess what could be captured in the stock of the type of item at some certain reseller, manufacturer etc. Similarly the cost is also not a universal thing, something may cost a certain amount to manufacture in one factory, and another amount in another factory. The product will cost x at a certain retailer, and y at another store etc. Similarly the material cost, is that the actual amount of money spent, or the potential cost based on what components are included? Q56-57 also seem context-dependent, in this case related to an actor, i.e. recommended by someone, or selected by some organisation? Q59 is unclear, what does this mean? Is it per device? Per type of device? Per organisation?
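
To make the contextual point concrete, here is a minimal, purely illustrative rdflib sketch of the standard n-ary relation pattern: stock is reified as a record tied to a specific reseller and date, rather than modelled as a universal property of the item. All class and property names are hypothetical, not drawn from the surveyed ontologies.

```python
# Hypothetical sketch (placeholder names): modelling stock as context-dependent
# via a reified StockRecord, instead of a single world-wide quantity per item.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("https://example.org/ce#")
g = Graph()
g.bind("ex", EX)

record = EX["stock-record-001"]
g.add((record, RDF.type, EX.StockRecord))
g.add((record, EX.itemType, EX.LaptopModelX))   # a type of item, not one device
g.add((record, EX.heldBy, EX.SomeReseller))     # stock is relative to an actor
g.add((record, EX.quantity, Literal(42, datatype=XSD.integer)))
g.add((record, EX.recordedOn, Literal("2023-04-19", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```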

- Table 2 is nice and valuable, but quite long, so potentially it would fit better in an appendix, with only a few examples included in the main text. I also have some doubts when reading parts of the table, such as: how were the key concepts determined? Why is “device” not a key concept of Q1 and Q2, but of Q3 and several others? In this section it is also not so clear that you are only focusing on laptops, because you talk about ICT devices in general, and additionally some of the key concepts are NOT laptops, such as the router and switch listed for Q1. Why are they key for laptops, if phones and tablets are not? Further, what is the difference between device and hardware in this context? Q6 uses the term device, while Q7 uses hardware; do they refer to the same thing? What are status and grade in Q8 and Q9? It is interesting that you consider software to be a component of a device, i.e. you are not only covering physical objects as components here - or am I misinterpreting Q10? Why is location not a key concept of Q15? Q19-22 state “is used” - used in what, the component or the device? Q31 - it seems random to only focus on USB ports; why not other ports? Q38 - how is warranty a physical property? I am also not sure about memory capacity in Q41; I would have intuitively put that as a computational property. On the other hand, how is CO2 footprint (Q44) a computational property? Then comes a set of CQs that relate to cost, stock etc. that seem to be totally missing the contextual nature of these properties. In my opinion it doesn’t make sense (or at least it will be practically infeasible to get the data) to model the total world-wide stock of an item; instead, I guess what could be captured is the stock of a type of item at some certain reseller, manufacturer, etc. Similarly, the cost is also not a universal thing: something may cost a certain amount to manufacture in one factory, and another amount in another factory. The product will cost x at a certain retailer, and y at another store, etc. Similarly the material cost - is that the actual amount of money spent, or the potential cost based on what components are included? Q56-57 also seem context-dependent, in this case related to an actor, i.e. recommended by someone, or selected by some organisation? Q59 is unclear - what does this mean? Is it per device? Per type of device? Per organisation?

- In the next sentence you say “rarely available online” when discussing that the other things described in papers are not accessible - this is vague: are they online or not?

- In section 4.3.2 you mention that the authors have developed a library that somehow extracts parts of an ontology, if I understand correctly? The implications of this should be discussed further. If non-standard methods of reuse are applied, then that may affect the FAIR-ness and reusability of the ontologies.

- Section 4.2.5: I am not sure that you really mean that one has to have studied philosophy to reuse EMMO? Do you mean that it is too abstract perhaps, and that you need a certain level of knowledge on modelling principles and basic ontological distinctions to use it?

- Section 4.2.8 mentions MDO, but you have not described MDO yet; better to put this section later, after the MDO description.

- The first paragraph of 4.2.11 is a bit strange. The second sentence seems to state the obvious - what would be the alternative? Not modelling materials in material ontologies? And I am not sure how EWC comes into the picture; it was not in the list of surveyed knowledge structures.

- What do you mean by “build in an ‘ad hoc’ manner” at the bottom of page 19? Without following a proper methodology? Without having a clear goal? Without having a verified set of requirements? Or something else?

- Page 20, “The work of Sauter et al. [38] has a limited expressivity… (see Table 8)” - Table 8 does not say anything about expressivity.

- I am not a domain expert so I may be wrong, but isn’t the standard called BS 8001 (and not BIS 8001) and the publisher BSI?

- Some references are incomplete, even missing the year (e.g. [40]), online references missing access dates, and the format is not consistent (e.g. sometimes years in parenthesis, sometimes not).

Language issues:

- Overall “build” is used in many places in the text, where the correct inflected form should be “built”.
- An additional issue is that quotes are used in many places where you do not really need them - remove the quotes and reformulate in your own words instead!
- Section 4.1.5: Paatent -> Patent
- Section 4.1.6: It can be also be -> It can also be
- Bottom of page 12: exiting -> existing
- Section 4.3.1: three level of -> three levels of
- Section 4.3.3: adoptation - do you mean adoption or adaptation?

[1] http://prisma-statement.org/PRISMAStatement/PRISMAStatement.aspx
[2] Poveda-Villalón, M., Espinoza-Arias, P., Garijo, D., & Corcho, O. (2020). Coming to terms with FAIR ontologies. In Knowledge Engineering and Knowledge Management: 22nd International Conference, EKAW 2020, Bolzano, Italy, September 16–20, 2020, Proceedings 22 (pp. 255-270). Springer International Publishing.

Review #2
By Ben De Meester submitted on 20/Apr/2023
Suggestion:
Major Revision
Review Comment:

### High-level Review

This Survey Article is a combination of three things.
1: A survey on (Semantic) CE ontologies and tools, focussing on the ICT data use case.
2: An evaluation of CE ontologies based on competency questions.
3: A survey on CE standards.
I think a couple of larger issues require a major revision of this paper.

First, it is unclear to me why the ontology evaluation is part of this survey, and why it is put in between the ontology survey and the standards survey.
Given this content, I would have expected a different structure: combining the two surveys and extracting _all_ data models (whether described in an ontology or in a standard), and then evaluating all those data models based on the competency questions. That would show which standards could learn from or be combined with which ontologies, and vice versa.

Second, I find the survey methodology not well argued. It is claimed to follow PRISMA, but I could find little to no overlap with PRISMA.
To make this work reproducible, a clear argumentation of which keywords were used is essential; however, this is lacking.
In fact, not even the full list of keywords is given, which is an absolute minimum in my opinion.

Third, how the requirements came to be is not well argued.
"Based on this and on our collaboration with the refurbishment industry" makes it sound ad hoc,
which gives very little weight to the evaluation in Section 5.
If it can't be well argued why this list of requirements is representative of the use case,
I don't see much value in Section 5.
Maybe having it externally reviewed (3-5 experts should be enough to make supported claims) could add some more credibility?

I think the analysis is well done and thorough; one remark: an ontology is a _shared_ domain model (if it is not shared across stakeholders, data interoperability is hampered), and the analysis text seems to suggest that creating a complete CE ontology is a matter of desk research. However, I don't believe that just creating or linking all concepts so that all competency questions are answered is enough. This needs shared support from a larger community. I'd be interested to see how the authors aim to tackle that.

I also miss a link between the standards survey results and the ontology survey results: is there anything that can be discussed when comparing the work of the standardization bodies with the work of the (academic) ontologies?

I'll also provide the review for the following dimensions:

#### (1) Suitability as introductory text, targeted at researchers, PhD students, or practitioners, to get started on the covered topic.

The introduction is very good, so the paper is very suitable as a starting publication. However, the structure of the next sections (specifically Section 4) should be improved to tell a more consistent story.

#### (2) How comprehensive and how balanced is the presentation and coverage.

It's very hard to tell how comprehensive the survey is (one of its largest drawbacks),
as no reproducibility parameters were provided (e.g., a filled-in PRISMA checklist).

#### (3) Readability and clarity of the presentation.

Clarity of the presentation varies. Some paragraphs are very clear, others lack proper structure or contain misspellings/sloppy writing.

#### (4) Importance of the covered material to the broader Semantic Web community.

To the broader Semantic Web community, I don't think this work is of huge importance per se: the results are focused on a specific niche.
It is however relevant, and the idea of adding some kind of evaluation when doing an ontology survey is interesting.
Using some (sub)tasks of ontology engineering methods such as NeOn to evaluate your survey results
might lead to a new kind of 'ontology survey method'.
However, that was not the focus of this work and was thus not detailed.

### Detailed review

#### Introduction

- I find the first paragraph of the introduction very clear and understandable for a wide audience, well done. I would expect there to be a reference for these definitions?
- "are being used on average for 3 and 4-5 year" --> I only read "3-4 jaar" in the reference ([5] p4), why you write "3 and 4-5" is unclear to me
- The paragraph at line 7 of page 3 could be rephrased a bit to make your point clearer. Are you saying that the adoption of CE in the ICT sector is limited due to the high computational power needed to generate, process, store and maintain data? I find that a weak argument, missing any reference. So either make it clear, or maybe remove it. It also contradicts your next paragraph, which basically argues for _more_ data.
- "This additional concerns due to the lack of transparency and traceability along the resulting fragmented data value chain." --> it is unclear to me what is meant by this.
- The paragraph of p3 lines 16-32 is a bit all over the place: it makes multiple points that are not necessarily closely connected. I would suggest clarifying up front which challenges you will discuss, and then discussing them in detail. E.g., the last sentence comes out of the blue, confusing me about how it connects with the previous parts.
- "Through the years, research has shown the benefits of ontologies for solving the issues of data heterogeneity, interoperability, misinterpretation and lack of contextual meaning has been the main aim" --> there's something wrong with the structure here.
- "From technology perspective, to be able to deal with the unprecedented amount of heterogeneous data when developing CE sustainability assisting tools, especially for ICT data sharing, more and more experts have been utilising the Semantic Web" --> there's something wrong with the structure here
- "Linked Data can function as an exchange medium for the CE driving the "push and pull" between diverse industry resources." [39]. --> This is just a citation without context. It reads very sloppy.
- "We believe that our work is also helpful for non-experts knowledge engineers as it presents the successful utilisation of the Semantic Web for ICT and material data sharing in the CE" --> I find this a weak argumentation: it is not clarified _how_ 'presenting sucessful utilisation' is helpful for knowledge engineers

#### Methodology

- first paragraph: when listing the steps, you mix past tense and present tense.
- "Step 3 and 4 focused on collecting and analysing relevant existing work. Next, in Step 4," --> there's something wrong with the structure here
- Figure 2: the Conclusions step should be numbered 6, not 5.
- ❗ there's something unclear here: you state you do a systematic review following PRISMA. However, the steps of fig 2 neither resemble nor give the same information as PRISMA's flow diagram (http://www.prisma-statement.org/PRISMAStatement/FlowDiagram.aspx). I also cannot find a filled-in PRISMA checklist (http://www.prisma-statement.org/documents/PRISMA_2020_checklist.pdf). It is unclear to me how you can then claim to follow PRISMA.
- ❗ Only a subset of the keywords is given, making it almost impossible to reproduce the results.

#### Requirements for an ICT Ontology for the CE

- In general, the categorization of the requirements does not feel well introduced. How did you come up with this categorization? Who validated this?
- Q38-Q40: why are warranty claims categorized under 'Physical' properties? Maybe they belong under 'Commercial Properties'? (see point below)
- Q41: why isn't this under 'Device Components'?
- Q45-Q48: it baffles me that these are categorized as 'Computational Properties', don't you need an additional 'ICT Commercial Properties' category for these?

#### State of the Art

- I feel that presenting, in a table, the 7 attributes introduced at the beginning of this section for each piece of related work would largely increase the value of this paper, as it would make comparing existing works much simpler.
- The description of the Semantic Models for ICT is a bit questionable: it's not fully consistently described, and some claims are misleading in my opinion, e.g. it is stated that "concepts from VANN have been reused", but VANN is an ontology metadata model, used to describe, e.g., the preferred prefix to use for the ontology. As such, it has nothing to do with the actual data model. So yes, technically, VANN is reused, but VANN is not reused for the actual model, so I suggest removing these kinds of claims to avoid confusion (see the sketch after this list).
- It's confusing that Table 3 presents different characteristics than the characteristics presented in the beginning of this section. I'd align them and make sure they're all represented in the table. I would personally also put the table before going through all the individual ontologies: this allows the reader to first get an overview, and then dive into details.
- " JSON-LD data model" --> A JSON-LD artifact is not a data model, it's a JSON serialization for a piece of semantic data.
- "transformed the OWL version into a JSON-LD data model" --> "provided an OWL model using JSON-LD"
- "it follows RDF serialisation" --> RDF is a framework to represent graph data, and has multiple serializations such as JSON-LD and Turtle. I would rephrase to, e.g., "it is described using RDF" (although I'm curious about the ontology modeling language itself: I assume it is in fact OWL?)
- The description of the Semantic Models for Materials could be made more consistent, e.g. some mention which OWL profile is used (OWL2 DL), others just state OWL (without specifying the version or profile), and others state RDF.
- Given Table 4, I'm wondering whether this section could be made clearer by removing some or all text that is also available in the table. Same comment as above: introduce the table first, then go into details.
- 4.2.11 " Newer ontologies such as MAMBO [82] and MDO [84] follow OWL2 and OWL2 DL Semantic Web standards" --> this makes it seem as if, e.g., EMMO, which is tagged as "OWL", doesn't follow OWL2 Semantic Web Standards, whilst in fact it does. I would clarify the difference between OWL, OWL2, and OWL2 DL (I _think_ you can state all as "OWL2", but I'm not sure).

#### Conclusion

- The conclusion is clear but remains very general; I was hoping for crisper conclusions, e.g. 'high-level concept X is not well supported in any ontology', or 'there is large consensus on how to model X, but very diverse ways to model Y'. Also, a remark such as 'we notice that documentation is lacking' is very broad: can you propose some mitigation strategies, or some discussion on why you think that is?

### Minor/Typos

- I personally prefer consistent Oxford Comma usage
- "non-experts knowledge engineers" --> "non-expert knowledge engineers"
- "understating" --> "understanding"
- "furter" --> "further"
- footnote 10 has some LaTeX commands in it
- 4.2.1. -> double `.` at the end.
- "the can be seen" --> "can be seen"
- "all other ontologies we built manually" --> "all other ontologies were built manually"
- " it is s unclear" --> " it is unclear"
- "they have been build" --> "they have been built"

Review #3
By Agneta Ghose submitted on 21/Apr/2023
Suggestion:
Major Revision
Review Comment:

The manuscript "semantic web and its role in facilitating ICT data sharing for circular economy: state of the art survey" presents an overview of existing ontologies for devices and materials used for information, communication, and technology (ICT); and circular economy (CE). The topic covered by the manuscript is of utmost necessity and can support researchers working in these domains particularly with respect to ensuring the interoperability of the related data(bases). There are several barriers in the path of adopting sustainability in the ICT sector. Some of these barriers can be mitigated with tools to support knowledge exchange. The authors suggest that use of semantic web can mitigate barriers to knowledge exchange and ensure shared data is increasingly Findable, Accessible, Interoperable and Reusable.
The authors have provided an overview of several taxonomies and ontologies shared in the specific domains. However, the study lacks the authors' reflections on the state of the art of the available ontologies. Adding a critical reflection, as well as improving the structure and language of the manuscript, could enhance the quality of the manuscript. Detailed comments are given below:
In general, Tables 3-5 succinctly provide an overview of the different ontologies, hence a paragraph for each ontology reviewed isn't required unless something additional is provided. A suggestion would be to present a more elaborate and reflective summary in sections 4.1.11, 4.2.11, 4.3.5 and 4.4.4, elaborating the key findings, usage and shortcomings of one or more of the ontologies reviewed.
I strongly recommend getting the paper proofread to improve the language and sentence structure. Please focus on making statements that are understandable while using numbered references: the key is to paraphrase the key issue and then add the sources.
Abstract:
- What do you mean by ‘delivery risks’ of our digital infrastructure?
Introduction
- Pg 3 ln 7 Provide a reference, if possible, for the statement "adoption of CE in the ICT sector is currently limited"
- Pg 3 ln 35 Could you briefly provide concrete examples from citations [16-21] on how ontologies have been used to solve issues with data heterogeneity and improve interoperability?
Table 1 presents current challenges for ICT and CE in relation to meeting the FAIR principles. The possible solutions from the Semantic Web could be improved by providing concrete examples based on some of the ontologies reviewed in this manuscript. Hence this table could also be placed in another section of the study and not necessarily in the introduction.
In the introduction, it would be great if you could highlight the sustainability issues related to ICT devices such as laptops (e.g. LCA studies). Identify the issues these studies have had in relation to access to knowledge, and how better access could improve the analysis (e.g. reduce the use of generic info, reduce uncertainty, etc.), in order to sharpen the problem analysis of your study and give a clear research question.
Methodology
Pg 5 Ln 9 What are the key objectives of these projects and why were they relevant for this study?
Pg 5 ln 35 'Each component has history which is a record of its material…' I am wondering if there is a better word than 'history' to be used in this context, for example 'an inventory'.
Pg 6 Ln 29 Please reformulate the text in brackets.
Pg 7 Competency questions. Have there been any considerations of data security? In my research experience with environmental assessments, some product manufacturers are not keen on sharing data openly. In such conditions, data is often anonymised or shared in aggregated form. I am not entirely sure how this works for ICT devices, but given that you have several questions on brand, location, organisation, I assume the data shared using these ontologies would not be confidential.
Pg 8 Q44 I don't think carbon footprint is a computational property. Perhaps it is a competency question for CE or device properties.
State of the Art
Pg 9 ln 34 'object property which relates'… relates to what?
Pg 9 ln 39 It is unclear what information this is based on and how it links to the sources given in [47] and [48]. Moreover, this sentence construction is not very good when you are using numbered references. Try to provide the context in the references, or at least the names of the authors.
Pg 10 ln 36 Please correct the sentence structure.
Pg 12 ln 24 Please elaborate on what is meant by 'top-level ontology'
Pg 12 Ln 37 Such sentences are not recommended when using numbered referencing. Please adapt accordingly
Pg 14 ln 1 ‘….as problematic.’ Why?
Pg 17 Section 4.3 I recommend reviewing 'A core ontology for modelling life cycle sustainability assessment on the Semantic Web' (https://doi.org/10.1111/jiec.13220). Its focus is on providing a semantically linked ontology for LCA models, which are often used to assess circular production systems.
Pg 20 Ln 29 "Common pitfalls….." Good! Now provide examples of these challenges.
Pg 22 Section 6.4 The current status of standards could be presented in the problem analysis (introduction) of the paper. In general, Section 6, Overview of CE standards, does not fit into the flow of the paper, as the authors do not really use any analytical method to assess the standards.
In general, it is recommended to provide the full forms of all abbreviations, including those that are well known (e.g. ICT). A good practice would be to do so when they are first used.