Cloud Service Description Ontology: Construction, Evaluation and Querying

Tracking #: 1654-2866

Khouloud Boukadi
Molka Rekik
Hanêne Ben-Abdallah
Walid Gaaloul

Responsible editor: 
Freddy Lecue

Submission type: 
Full Paper

Cloud federation systems have recently emerged as a scalable delivery model that interconnects services from several cloud providers for load balancing and accommodating spikes in demand. Among the challenges that this delivery model faces is the complexity of service selection, which is due to the heterogeneity of cloud service descriptions across the federation. To ease this complexity, it is crucial to harmonize cloud service descriptions. Towards this end, we herein propose a Cloud Service description Ontology (CSO) that we modeled based cloud standards. CSO covers functional and non-functional capabilities of the three main cloud provision models (IaaS, PaaS, SaaS). To populate CSO, we defined a set of semantic mapping rules to collect instances from cloud providers' web pages. In addition, to insure the quality of CSO, we propose an evaluation approach that detects and corrects consistency, redundancy and incompleteness errors. Furthermore, to show the correctness and the inference power of CSO, we present the results of an experimental evaluation that measured the precision and recall ratios of CSO query results.
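The evaluation step described in the abstract (detecting redundancy and incompleteness errors in the populated ontology) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation; all class and property names (e.g. `IaaSService`, `hasPrice`) are hypothetical, not CSO's actual vocabulary:

```python
# Toy sketch of two ontology-evaluation checks over RDF-style triples:
# (1) redundant type assertions already entailed by a subclass axiom,
# (2) incomplete instances missing a required property.

TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

triples = {
    ("VM1", TYPE, "IaaSService"),
    ("VM1", TYPE, "CloudService"),            # redundant: entailed by the subclass axiom
    ("IaaSService", SUBCLASS, "CloudService"),
    ("VM1", "hasPrice", "0.05USD/h"),
    ("App1", TYPE, "SaaSService"),
    ("SaaSService", SUBCLASS, "CloudService"),
    # App1 has no hasPrice triple -> incompleteness error
}

REQUIRED = {"hasPrice"}  # properties every service instance must carry

def redundant_type_assertions(triples):
    """(instance, superclass) pairs asserted although a more specific type entails them."""
    sub = {(s, o) for s, p, o in triples if p == SUBCLASS}
    types = {}
    for s, p, o in triples:
        if p == TYPE:
            types.setdefault(s, set()).add(o)
    return {(inst, sup) for inst, ts in types.items()
            for sup in ts for t in ts if (t, sup) in sub}

def incomplete_instances(triples):
    """Instances missing at least one required property."""
    insts = {s for s, p, o in triples if p == TYPE}
    have = {(s, p) for s, p, o in triples}
    return {i for i in insts for r in REQUIRED if (i, r) not in have}

print(redundant_type_assertions(triples))  # {('VM1', 'CloudService')}
print(incomplete_instances(triples))       # {'App1'}
```

In a real pipeline these checks would run over the ontology's instance data (e.g. via a reasoner or SPARQL queries) rather than an in-memory set, but the logic of the two error classes is the same.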
Reject (Two Strikes)

Solicited Reviews:
Review #1
Anonymous submitted on 10/Aug/2017
Major Revision
Review Comment:

This is a resubmission of SWJ1522, which I reviewed in late 2016. Hence, I will focus on how the comments from the reviews of the original submission have been taken into account, and not discuss the complete paper again.

In my original review, I had four major concerns: First, I was not able to understand all of the design decisions the authors made, and, second, I was missing a clear delimitation from the state of the art. Third, I did not see how specific aspects of cloud federations were actually covered by the proposed CSO. Fourth, I had a number of concerns regarding the evaluation.

In general, the paper has been improved a lot, especially with the discussion of design decisions and the clear delimitation from the state of the art. However, I still think that cloud federation plays only a secondary role within the paper, despite some improvements. Also, the evaluation still does not present the level of technical detail I'd expect (see also the comments from reviewer 2 regarding the original submission; in my opinion, the authors should have fully incorporated the corresponding comments by reviewer 2).

Overall, the paper has been improved by quite some degree, but given that there are still a number of shortcomings, it still requires a "major revision".

Further important comments:
* Section 3: "the remaining concepts are relatively intuitive" is not reproducible without any further documentation.
* Abbreviations: When you introduce an abbreviation, stick to it, i.e., don't repeat the full form again (e.g., CSO is introduced, then the full form is used, then the abbreviation again; VM is first used, then introduced, then the full form and the abbreviation are used interchangeably; CSA is introduced several times). These are just examples: The authors need to check all abbreviations throughout the paper.
* If there is no particular reason to write something uppercase, then the authors should stick to lowercase writing. Especially, uppercase and lowercase writing should not be mixed. For instance, in Section 4, "forum" (and some other words) are written in both lowercase and uppercase, without a particular reason for this. Again, this is just an example: The authors need to check this throughout the paper.
* The literature list needs to be unified: For instance, sometimes, the DOI is mentioned, but mostly, this is not the case; sometimes, the editors are mentioned, but mostly, this is not the case; sometimes, the publisher is mentioned, but mostly, this is not the case; sometimes, terms like "Journal" are abbreviated, sometimes they aren't; for some entries, page information is missing; for [32] it's "O'Reilly", not "OReilly"; the last access to websites should be given in English, not in French; for [58], pretty much all information is missing. Again, these are just some examples: The authors need to carefully check all references and unify their content and layout.
* Still, the authors do not make use of a vector format for their graphics (e.g., EPS, PDF), which are therefore not really scalable or well-depicted.

There are still quite a number of smaller mistakes in the paper, which I have commented on below.

Minor comments:
* Abstract: "we modeled based cloud" => "we modeled based on cloud standards"
* Throughout the paper, the term "insure" is used when the authors actually mean "ensure"
* Throughout the paper (also in listings), wrong opening quotation marks are used, though not everywhere.
* Introduction: What are "dockers"? You actually mean "docker containers" (or simply "containers")
* Introduction: I am missing a reference for the sentence "This nontrivial means is further complicated..."
* There is no need to add round brackets to a group of references
* Table 1: "services" => "service"; "standard-" => "standard"
* Section 2: "either did not deal with the ontology instantiation, or did it" => "either do not deal with the ontology instantiation, or do it"
* Section 3: "hould" => "should"; "The-thus" => "The thus"; "in accordance with to" => "in accordance with"; "through the TOSCA's concepts" => "through TOSCA's concepts"; "licenses and users number" => "licenses and user number"; "perceptive" => "perception"; "documents, questionnaire" => "documents, questionnaires"; "certifies organization's" => "certifies an organization's"; "cloud providers URI" => "cloud providers' URI"
* Section 4: "with certain" => "with a certain"; "providers related" => "provider related"
* Section 4: "and has no Cloud Security Alliance" => it seems as if there is something missing in this sentence, since the sentence does not really make a lot of sense at the moment.
* Section 4.1: "expect of the NIST SP800-88" => again, something is wrong with the sentence structure (maybe a missing or wrong word), because the meaning of this remains unclear.
* Section 5: "Inconsistency occurs when one the two cases" => Missing word, the sentence does not make sense at the moment
* Throughout the paper, both "super-class" and "super class" are used. This should be unified. Also, what is the difference between a "super-class" and a "base class"? If there is none, please unify the terminology.
* Section 5: "using the Apache Jena" => "using Apache Jena"; "Section 5.3.1)" => "Section 5.3.1"
* Figure 7: "properties :hasEssentialCharacteristic" => "properties: hasEssentialCharacteristic"
* Section 6: "with the lowest rate" => "with a low rate"; "Orange teleco" => "Orange telecom"; "also wished" => "also wishes" (don't change the tense!)
* Section 7: "anti-patterns detection" => "anti-pattern detection"; "discovery is tested" => "discovery was tested"

Review #2
By Paola Grosso submitted on 26/Sep/2017
Minor Revision
Review Comment:

Dear authors
I have re-read the revised version of your submission. I have to say that in general, I find you have addressed the issues my fellow reviewers and I have raised. I will comment on these points in a moment.

Still, you have not managed to convince me that your results are significant, as once more, you have not shown the suitability of your ontology to address 'real' use cases. I had made an extensive comment on evaluation last time. I expected it to be properly addressed, but your new example (section 6.2) is, if anything, even less relevant. It is unclear what this experiment proves. The service you describe is the acquisition of cloud services from _one_ external partner. There is no federation of cloud services, Figure 8 does not show a 'federation'.
In my opinion, you need to show how CSO addresses a more complex problem in an actual federation of providers, with a more relevant use case. This motivates me to request an additional revision of your paper. Only in this way can the 'credibility' of your approach be fully substantiated.

For the rest.
I do see you have expanded on the Federation properties (Fig. 2 and Section 4.2). How these properties are used in a real scenario would (see above) be of interest to me.

You have properly enhanced the security aspects of your ontologies.
I appreciate the effort you made to better model PaaS and SaaS clouds, as well as introducing containers in the ontology.

Review #3
By Maria Maleshkova submitted on 04/Jan/2018
Minor Revision
Review Comment:

The authors present a cloud service ontology (CSO) that aims to address the need for formally describing cloud services. The paper has greatly improved since the initial submission. Below are some additional recommendations for further refining the paper.

1. Abstract. “that we modelled based cloud” —> that we modelled based ON
2. Introduction, page 2. The problem with the heterogeneity of services is not only the description but also the way of communication, how exactly calls are done, how authentication is done, etc.
3. Introduction, page 3. contribution 2, why is the focus only on easing discovery? Can these rules not also be used for composition, mediation, invocation…?
4. Introduction, page 3. contribution 4, what exactly is "an inference that allows…"? Inference rules or a reasoning approach? What exactly do you mean? The way that 4. and 2. are currently described, it is not very clear what the essential difference between the two contributions is. Consider rephrasing.
5. Related work, page 4. Table 1 — use author-and-year abbreviations for the referenced approaches; in this way it would be clear which approaches you are talking about. Alternatively, introduce an additional column with the name of the approach. Currently the reader has to go back and forth between the table and the list of references. A reference to USDL is missing.
6. Related work, page 4. What is the difference between cloud services and Web services in terms of modelling? Why are you not considering WS approaches too?
7. page 5, do you use these observations to check that the CSO fulfils them? Why state them here and not show that the CSO actually satisfies these?
8. page 9, in terms of concepts that have most probably already been defined in other ontologies (QoS and reliability, availability, portability, …), do you reuse existing classes, import them, or redefine them? Why not import from an existing QoS ontology?
9. page 15, second row of the table is out of bounds
10. page 17, "Besides, they can be used to detect anomalous support features" — what do you mean? Inconsistencies? Errors? What are anomalous features?
11. page 18-19, make the listings in the boxes referenceable and give them a caption.
12. page 18, “Medium_Certification” out of bounds
13. page 18, delete space after “High_Protection”
14. page 19, listing in the box out of bounds
15. page 20, "Low_FederationPortability" out of bounds; use a \sloppy paragraph
16. page 20. How adaptable/adjustable are these rules if a user wants to change them based on a specific use case?
17. page 23. Maybe better to introduce the anti-patterns first, and then give examples. Currently, it is a bit confusing.
18. page 30. “Section 5.3.1)” the opening bracket is missing.
19. page 32. Make the capitalisation of section titles consistent — all nouns with a capital letter
20. page 32. Footnote 15, problem with "."
21. page 34, Figure 8, especially the bottom part is not readable