Review Comment:
The previous points for minor revision are listed below. New comments are annotated with '=>'.
* It is unclear how tasks are created: on the one hand, this seems to be done automatically by combining several sources; on the other hand, it seems to be a manual task. Since the paper refers to an article that is still under review, it is unclear how this is accomplished.
=> Just to be sure: do the learning tasks originate from previous work [25], or were they created by teachers as suggested in section 5? Consider adding the role of the teachers / learning experts in Fig. 1.
* The task and site recommendation (and ranking) are handled by a commercial service. This means that it is unclear how requirement R3 is satisfied. Also, it is not motivated why no own or open-source implementation was used. Furthermore, the recommender seems to take preferences into account, but it is unclear how that works.
=> This is added now; how hard would it be to copy this functionality?
* Provide more evidence about the quality of the system. E.g., how did the teachers validate the pedagogical interest of the system, and what was the outcome?
=> ok
Detailed comments:
Consider adding the word "about" in the title after Learning. => ok
Also, the term semantic in the title is confusing, which part of the application is (about) semantic(s)? => ok
According to [13], learning in a ubiquitous learning environment is conducted through interactions among three essential communicative elements: social humans, objects in the real world, and artifacts in virtual space. Can you elaborate on how these essential elements are embedded in Casual Learn? => ok
Can you elaborate on the semantics of the term learning? You describe ubiquitous learning, informal learning, contextualized learning, authentic learning and geolocalized learning. => So the semantic technology is used to retrieve context- / location-relevant data?
Also, consider clarifying the relation between learning tasks and learning activities. => To me the difference between learning tasks and learning activities is not clear. Are these the same or different things?
In 1. you describe that there is a task dataset published as Linked Open Data. In 2. you describe how this task dataset was retrieved and integrated from various sources. Were the (4) teachers involved in this process? Also, how can this be maintained? => not addressed
Later in 2. you describe the process to create tasks and refer to [19], which is still under review, so it is unclear how this mechanism works. => ok
Did the task generation take into account the level of secondary-school students? In section 5 you mention that all teachers designed learning activities. Consider describing this process. Is there a task editing system? => not addressed. Just to be sure: do the learning tasks originate from previous work [25], or were they created by teachers as suggested in section 5? Consider adding the role of the teachers / learning experts in Fig. 1.
In 1., consider mentioning the related (geo) systems from section 6, where you claim that there are few visualization tools. => ok
2. describes the notion of "answerType". Is there a mechanism where teachers can inspect the answers and, e.g., give a rating? => still not clear.
Also, is there a multiple-choice type (answerType)? => Only text or image?
2. Why are points (lat, lon) used as the georeference? Would polygons not give a richer experience, for example to refer to a large object, street, neighbourhood, or area? => addressed (not available in the Semantic Web), but these are available in other (non-Semantic-Web) sources, such as OSM.
2. Why is the georeference attached to the context and not also to the tasks? => ok
3. lists the requirements for the system. R3 demands that the learners' preferences are taken into account. What kind of preferences are these? In 4.1 it is mentioned that preferences should be taken into account. Consider elaborating on this mechanism. => ok
4.1 contains the first mention of user rating. This seems to be important in the recommendation process. Can you elaborate on user preferences, user rating, and recommendation? => not addressed; perhaps introduce the function of rating before 4.1?
4.2. Consider adding the notion of geofence. => ok
4.3 mentions that the source code is open source, but Recombee (the recommender server) is a commercial / closed-source service. So an important part of the system is delegated to a commercial party. Consider describing how the actual recommender works and listing possible open-source alternatives. => ok
5. mentions that teachers assess the results of the experience. Do you mean that teachers had access to accomplished tasks? According to 4.1, images and videos are not stored in the answer database. => ok
5. describes that, based on a usability study, improvements are included in this version of the system. On what version was the usability study based? Furthermore, can you specify what the issues / improvements are?
Consider quantifying a typical Casual Learn session, e.g., how long does a session take and how many tasks need to be carried out? => ok
5. Congratulations on the prizes and attention. Consider adding the reasons behind these compliments. => ok
5. Can you elaborate on the interviews with the users of the system? => ok
6. Consider splitting the discussion section into related work and a discussion of the (potential) capabilities, limitations of the tool, and future work. => ok
6. Did you consider recommending more information about a cultural heritage site by presenting various links (linked data) to other related sites? => ok
The last sentence: "Casual Learn can take this information into account to recommend tasks according to the learners' personal interests." Does this mean that Casual Learn can already do this, or is this future work? => ok