Towards Multilingual Semantic Annotation for Sign Language

Tracking #: 2691-3905

This paper is currently under review
Pablo Calleja
Maria Poveda
Elena Montiel-Ponsoda

Responsible editor: 
Guest Editors of Advancements in Linguistics Linked Data 2021

Submission type: 
Application Report
Abstract: 
Sign languages can be considered under-resourced languages when compared to spoken languages. Their digital representations are also minimal, since there is no globally accepted standard for representing signs, and only tentative progress has been made on individual sign languages in isolation. In addition, multilingualism in sign languages is a poorly addressed research area. Some approaches and resources developed in recent years take advantage of available dictionaries or lexicons for spoken languages. Building on those initiatives, in this contribution we propose an ontology to model signs as captured in videos, together with their transcriptions in written languages. Additionally, we have developed a web service to annotate those video segments and their transcriptions with BabelNet synsets. This allows different sign languages to be connected through a BabelNet synset acting as a pivot. The web service is part of a crowdsourcing platform developed in the context of the EasyTV project to provide audio-visual services to people with disabilities. In that context, the web service has been used to populate a multilingual sign language knowledge base and has been validated with encouraging results. To the best of our knowledge, this work is one of the first attempts to conceptualise the representation of sign languages in the Semantic Web.
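The synset-as-pivot idea described in the abstract can be sketched as follows. This is an illustrative approximation only, not the paper's implementation: the synset IDs, video URLs, and glosses below are hypothetical examples, and the real system annotates video segments via a web service backed by an ontology rather than in-memory records.

```python
# Minimal sketch of the pivot mechanism: signs from different sign
# languages (here LSE, Spanish Sign Language, and GSL, Greek Sign
# Language) are annotated with the same BabelNet synset, which then
# links them across languages. All identifiers below are hypothetical.
from collections import defaultdict

# Each annotation ties a video segment in one sign language to a synset.
annotations = [
    {"sign_language": "LSE", "video": "https://example.org/lse/house.mp4",
     "gloss": "CASA", "synset": "bn:00000001n"},   # placeholder synset for "house"
    {"sign_language": "GSL", "video": "https://example.org/gsl/house.mp4",
     "gloss": "SPITI", "synset": "bn:00000001n"},
    {"sign_language": "LSE", "video": "https://example.org/lse/dog.mp4",
     "gloss": "PERRO", "synset": "bn:00000002n"},  # placeholder synset for "dog"
]

# Index annotations by synset: the synset acts as the pivot node.
by_synset = defaultdict(list)
for ann in annotations:
    by_synset[ann["synset"]].append(ann)

def equivalents(synset_id):
    """Return all sign-language renderings linked to the same synset."""
    return [(a["sign_language"], a["gloss"]) for a in by_synset[synset_id]]

print(equivalents("bn:00000001n"))  # → [('LSE', 'CASA'), ('GSL', 'SPITI')]
```

Because the link runs through the synset rather than through pairwise sign-to-sign mappings, adding a new sign language only requires annotating its videos against existing synsets; alignments with every other language in the knowledge base follow automatically.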
Full PDF Version: 
Under Review