Abstract
Sign Languages (SL) are the main tools used by Deaf people for access to information (AI), an essential issue for their social inclusion. Information Systems (IS) play a key role in this access, but in some cases they fail by not considering the needs of the Deaf, such as a Human-Computer Interaction (HCI) with communication in SL. The Automatic SL Recognition (ASLR) area has developed algorithms to solve technical problems, but there is still a need to develop HCI tools for users in real contexts. This paper presents a context-based collaborative framework through which Deaf people create and update SL databases, improving the development process of ASLR systems from the HCI perspective.
Keywords
- Context-based framework
- Tools for the Deaf
- Collaborative methods
- Sign language
- Datasets
- Real users
- Automatic recognition
1 Introduction
Language is a powerful tool for the acquisition of cultural values, the inclusion of individuals in society, the exercise of their citizenship, and their access to information and scientific knowledge [12, 21].
The lack of access to information has been a serious problem faced by Deaf people, who have tried for many years to achieve their inclusion in society and the exercise of their citizenship.
The Deaf have their own culture and identity, characterized by a Sign Language (SL) as their main tool for communication and interaction. However, because of society's poor knowledge of SL and the lack of information available in SL, the Deaf are excluded from access to knowledge [6, 21].
Information Systems (IS) play an essential role in breaking this accessibility barrier. However, in many cases IS fail to provide real access to information, mainly by not considering the real needs of the Deaf, such as features related to communication with the interface, namely natural input and information in SL.
Thus, the problem is the lack of a more natural Human-Computer Interaction (HCI) for the Deaf: an interaction based on SL.
A research field that has tried to develop computational services based on SL is Computer Vision (CV), which for over 30 years has developed algorithmic strategies for Automatic Sign Language Recognition (ASLR) systems, a computational basis necessary for building HCI through SL [6].
Regarding ASLR, CV studies have focused on algorithms that solve the technical problem for discrete domains of sign sets, but applying these resources to design an HCI through SL in a real context of use is still far away [7]. The inclusion of an HCI approach can improve the development process of ASLR systems by, among other things, making the needs of the Deaf and the structure of SL clearly understood [6, 7].
Antunes et al. (2011) [7] reviewed several CV studies, providing an overview of the development process of ASLR systems in order to identify limitations related to the lack of an HCI approach, describing categories and their problems, such as: object of research, approaches, SL databases and technologies.
A framework to support the development process of ASLR systems, based on the HCI perspective, was then proposed [7]. This framework describes the needs of the Deaf, details the SL structure, and lists the factors to be considered regarding technology (e.g. not using sensors that restrict natural movements). In addition, García et al. (2013) [22] presented an HCI architecture with a larger scope and methodological details to assist the development of tools for the Deaf.
However, the lack of a strategy to build SL databases that considers an HCI approach is still a problem. The SL database is a necessary initial step of the ASLR development process, because it is the resource used to train and test the pattern recognition algorithms, and its quality impacts the end solution. It is therefore important that the database also reflects an HCI approach.
Thus, some factors should be considered, such as: the involvement of real users (Deaf people) and their common contexts (e.g. learning), low-cost technologies (e.g. webcams), and an adequate methodology and criteria to define the signs and their descriptions, among others.
This paper presents a conceptual framework to support the building of SL databases based on contextual and collaborative activities of Deaf communities, considering HCI factors such as context, user needs, SL and Deaf culture. In this way, ASLR systems can work with natural data from a real context of use.
The main contribution is the design of a framework to improve the development process of ASLR systems, providing a strategy for building or using SL databases from an HCI perspective. This paper discusses the Deaf and SL, contextual activities, a methodology for choosing and describing signs, and an algorithmic strategy to minimize the size of the database.
In addition, the framework includes an approach to adjust and use an existing SL database in the process. This approach can also be used as a model for continuous and iterative training/testing of the ASLR system in real environments, improving the system with the collaboration of end users.
2 Theoretical Background
The goal of Human-Computer Interaction is to develop systems that are easy to use and address the real needs of users in their contexts. In this sense, it is essential to involve users in the process in order to understand the real requirements for building appropriate interactive systems [30].
Social, cultural and linguistic aspects should be part of the interaction design process. The focus on users is key to understanding how they perform their tasks, their ways of use, etc. [30].
2.1 Context, Collaboration and Framework
Context consists of understanding the users, their needs, their main activities, and their knowledge and use of technology in a real environment [24].
The context involves the environment itself, the situation, and the activities performed by a group of people. Considering the context of use makes it possible to improve human-computer interaction and to develop more useful applications [16]. However, determining the context is not simple, because users interact in many social environments with different goals, technologies and results [5].
Context can be classified into four types: Activity, Identity, Location and Time. These categories aim to assist in the description of the context: the tasks and actions (activity), the environment (location), the time when these activities occur, and the people involved (identity) [4].
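As an illustration, a minimal sketch of how these four categories might be captured as a single record during a collaborative session; all field and value names are our own hypothetical choices, not part of [4]:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ContextRecord:
    """One observation of a context of use, following the four categories of [4]."""
    activity: str    # tasks and actions performed, e.g. "phonology discussion"
    identity: list   # people involved, e.g. Deaf students, mediator, interpreter
    location: str    # environment, e.g. "university classroom"
    time: datetime = field(default_factory=datetime.now)  # when the activity occurs

# Illustrative instance mirroring the case study of Sect. 4.1
session = ContextRecord(
    activity="discussion of SL Phonology concepts",
    identity=["Deaf undergraduate students", "mediator", "assistant"],
    location="classroom",
)
```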
Thus, an adequate understanding of the users' activities in practice and of their relationship with technology is crucial to build context-based applications [33].
Collaboration means working together with the intention of sharing goals and contributing to problem solving. Collaboration in a local activity involves processes such as communication, negotiation, sharing, coordination, etc. [9].
Collaboration consists of coordinated activities with shared tools to perform a task continuously. This process of interaction enables the exchange of knowledge, the discussion of solutions and the building of consensus [37].
A framework consists of a conceptual schema or a specific domain model that describes its situations, its properties and its relationships. Thus, a framework can be used to communicate ideas, define domains, describe a context, and represent methods and processes in the development of a system [32, 35].
2.2 The Deaf
In the context of this research, a Deaf person is an individual who belongs to a minority community characterized by its own identity and culture, defined by the use of an SL as the mother language for communication and social interaction.
The Deaf encounter difficulties in performing even the simplest tasks of daily life: general access to information, medical appointments, the purchase of medications at drugstores, finding educational materials in their language, etc.
To minimize the accessibility barrier and promote inclusion, many Deaf communities rely on social gatherings to share information, local study groups, online collaborative activities (due to geographical separation), etc.
2.3 Sign Language (SL)
SL is the natural language of Deaf people, a resource used for communication, education, etc. SLs are complete linguistic systems, characterized by the gestural-visual modality, that allow the Deaf to develop all their linguistic potential [12, 20, 21].
Since society has little knowledge of SL and little knowledge is available in SL, Deaf people are constantly excluded from society. Therefore, IS have an important role in providing resources in SL for real accessibility of information and knowledge.
2.4 SL Phonology
The linguistic defense of SLs as natural languages started with Stokoe (1960), who conducted research on the signs used in Deaf communication and showed that SLs have all the linguistic features of a natural language [31].
That study showed that American Sign Language (ASL) has three parameters used in a finite number of combinations to constitute the signs: the handshape, the location and the movement [36]. Subsequent studies described another property: the orientation of the hand palm (OP) [10, 20].
Later, Baker (1983) [8] and others described the Non-Manual Expressions (NME) as a distinctive unit: movements of the face, eyes, torso and head.
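To make the parameter inventory concrete, the sketch below models the five parameter classes described above as a simple record; all value labels are hypothetical placeholders, not the actual inventory of any SL or of any published model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignParameters:
    """The five phonological parameter classes of a sign (Stokoe and later work)."""
    handshape: str                    # hand configuration
    location: str                     # place of articulation in the signing space
    movement: str                     # movement primitive
    palm_orientation: str             # orientation of the hand palm (OP)
    non_manual: Optional[str] = None  # NME: face, eyes, torso and head movements

# Placeholder example (labels are illustrative, not a real sign description)
example = SignParameters(
    handshape="flat-hand",
    location="chin",
    movement="downward-arc",
    palm_orientation="palm-up",
    non_manual="raised eyebrows",
)
```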
From these studies, which developed phonological models based on parameter classes, a new branch of structures emerged: the segment-based models.
The Movement-Hold (MH) model [27] states that signs are formed by two types of segments: Holds, segments without movement, and Movements.
Later, other models appeared with more specific features for the phonological structure: Hand Tier (HT), Moraic (sub-units in moras), Dependency Phonology (concepts of locations and sub-spaces), Visual Phonology (geometric and mathematical features) and the Prosodic model [6, 10].
3 HCI by Sign Language
3.1 Examples of Tools Without SL Interaction
ASL Browser [2] is a sign search tool that classifies signs by their association with the letters of the oral language. When the user selects a word in English, a video of the sign and its meaning is presented. This IS leaves out potential users who do not know the written language of their country, and is of little use for finding a sign the Deaf user has never seen before.
Spread the Sign [3] offers a free collection of terms in many languages. To use the system to search for a synonym or an equivalent sign, the user must input a keyword into the application, but this input is not in SL.
The Acesso Brasil dictionary [1] offers a search by handshape, but, due to its lack of usability, the results are presented as a list of words in Portuguese, the same limitation as the ASL Browser.
3.2 Automatic SL Recognition Systems (ASLR)
CV research has produced several studies and strategies for the technical problem of recognition. However, these studies have not applied their resources to create tools for the end user. A literature review [7] presented an overview of the common limitations related to the lack of an HCI approach in ASLR systems.
Inadequate Object of Research. Most methodological approaches have focused on computational techniques in which the Deaf were not included. If the purpose is to promote access to information, an approach focused on the Deaf is required to learn their real needs, cultural aspects and conditions of use, aiming at an adequate computational treatment of SL.
Inadequate Methodological Approaches. The common whole-word approach consists of an isolated dataset of signs represented by matching signs to words in the spoken language. The system is trained to recognize this set. The problem is that such systems are limited to the set on which they were trained. Since the language can produce an unbounded number of signs, this approach is inadequate for large vocabularies.
SL Databases. A key factor in the development of an efficient ASLR system is the use of a robust database for training and testing the algorithms. In the studies reviewed, the databases were not built following a methodology from the HCI perspective. The recurrent problems are shown in Table 1.
The Purdue RVL-SLLL [28] classified its ASL data by handshapes and movements, signs and sentences. The first classification (handshapes and movements) is important because the most adequate approach is to recognize the sub-units (phoneme model) before processing isolated signs or full sentences. The database consists of 2576 videos covering 39 motion primitives, 62 handshapes, and sentences.
The RWTH-BOSTON-400 database has 843 sentences, several signers and subsets for training, development and testing [17]. This database also relies on a controlled environment and on the whole-word model. For the authors, “it is still unclear how best to approach recognition of these articulatory parameters”.
The BSL project [34] uses a methodology related to sociolinguistics and corpus linguistics. The project includes native signers who told short personal stories (users grouped by age and location), but a software tool (not a computational model) was used to annotate the signs.
Dicta-Sign [18] is a project that involves database collection, ASLR, and animation and translation for the Internet. The prototype of the ASLR system uses a depth sensor for recognition. The dataset was described in a computational model (helping to generate animations), but it has a low level of detail because it is based on an SL writing system.
The other databases found consist of SL corpora containing collections of video conversations [15, 19, 25], isolated signs [11, 25, 26], isolated handshapes [19, 26], special cameras and sensors [15], recordings with multiple synchronized cameras [15, 25], or a focus only on linguistic research [19, 34]. Melnyk et al. (2014) [29] presented a review of other databases from the ASLR perspective, but from the HCI standpoint the same limitations apply.
The problems of these databases are related either to disregarding the context of use or to controlled environments that restrict the natural movements of the user. Another usual problem is related to the conceptual approach: (a) the use of the whole-word model, and (b) the selection of signs randomly and without criteria.
In such cases, even a database with a large number of signs cannot cover the whole SL in terms of sub-units. In addition, the use of sets without similarity between the signs in the training and testing of the application produces a system with low accuracy in the results and, consequently, a poorer user experience.
The lack of a computational model to describe the signs and their sub-units is evident. The use of the phoneme-based approach with a robust computational model in the development process of ASLR systems [7] can improve the user experience in real environments, with more accurate and complete systems.
4 Context-Based Collaborative Framework
The framework (Fig. 1) describes the computational infrastructure (approach, storage, sharing, etc.) that assists all stages of the process: the context of use and the activity, the collaborative approach, the SL database and the continuous improvement. Each module of the framework assists the others, providing an integrated approach to build an SL database.
In the following sections each module is detailed, presenting some results of its application in a real context of the Deaf to validate the strategy.
4.1 Context and User Activities
This module is intended to describe a specific context of a Deaf community. The description must specify: (a) the profiles, (b) the environment, (c) the activity carried out in this context, and (d) the time. In the case study used to validate this module, we instantiated it as follows:
Profile. The users are Deaf students in an undergraduate course on Libras (Brazilian Sign Language). This profile was chosen due to the easy access to these students within the university. However, any user profile can be chosen, if properly connected with the activity, the environment and the time.
Environment. We performed the activities in the classroom, a common room used daily by the students. It is worth mentioning that, due to the visual nature of SL, classrooms are commonly organized in a way that facilitates conversation and gives all students a view of the current “speaker” (usually an interpreter).
Activity. The discussion and application of the concepts learned in the classroom, in this case SL Phonology. Any activity could be chosen. During the activities (discourse), the isolated signs must be captured and saved by the coordinator.
Time. Meetings occurred as complementary activities, some days after the corresponding lessons. This brought benefits to the Deaf students, who practiced the concepts and counted these meetings as extra activities for their course.
4.2 Collaborative Approach
The collaborative strategy can be local or online, supported by a system. The key issue is to carry out the planned contextual activity, saving the discourse and the generated signs. For the experiment we used local meetings. If the online mode is chosen, a platform such as InCoP [38] is needed to support the collaboration.
Coordination is conducted by a mediator (task organization, activity description and control) and an assistant (operational tasks). If at least one actor does not communicate in SL, an interpreter must attend. The interpreter must belong to the same community, to avoid communication problems due to regionalisms and slang.
In the cooperation process, the mediator should supervise the discussion and, when necessary, help build consensus. During the activity, the discourse and the interactions must be recorded in the database.
We use the phonology context based on the CCKC (Collaborative Consensus and Knowledge Creation) process [23] to define isolated signs. This approach interconnects the context, the collaboration and the computation to generate a robust, contextual and representative database. Some of the activities carried out along the process are shown in Fig. 2.
4.3 SL Database
Approach. The phoneme-based model [7] consists of segmenting the signs into the sub-units described in the SL structure: the phonetic sub-units (a finite set used to create the signs). Thus, it is possible to build an ASLR system that recognizes these parameters, obtaining a generic service that works even with signs included later. This approach allows the generation of a representative database, since it can be built from a set of signs that covers all the sub-units composing the phonological tree leaves.
Description. Each isolated sign should be described through a computational model that represents the structure and rules of SL phonology. This model must have a high level of detail, because it must differentiate very similar signs with different meanings in SL.
For this task we used CORE-SL (Computational Representation Model of the SL) [6], which defines each sub-unit, provides examples, has a high level of detail and is based on an HCI approach for natural interaction through SL.
Storage and Retrieval. At the end of each activity, each isolated sign should be stored in the database with its video and the corresponding description, creating a reference to the sign used in the discussions.
For storage we used the system presented with CORE-SL [7], which allows the upload of the sign video and its description. Additionally, the system has a search engine to retrieve signs and groups of signs based on sub-units.
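The internal API of the CORE-SL search engine is not described here; the sketch below merely illustrates the kind of sub-unit-based retrieval it provides, with signs simplified to sets of sub-unit labels (all names and data are ours, for illustration only):

```python
def find_signs_by_subunits(database, query):
    """Return the IDs of stored signs whose description contains every queried
    sub-unit. `database` maps a sign ID to the set of sub-unit labels in its
    description, a deliberate simplification of the real model."""
    return [sign_id for sign_id, subunits in database.items() if query <= subunits]

# Placeholder data with made-up sub-unit labels
db = {
    "sign_001": {"handshape:flat", "movement:arc", "location:chin"},
    "sign_002": {"handshape:index", "movement:circular", "location:neutral"},
}
print(find_signs_by_subunits(db, {"handshape:flat", "location:chin"}))  # ['sign_001']
```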
For the experiment we used a conventional, low-cost video camera of the kind frequently used in the activities of the Deaf, such as classroom video chat.
Min-Max Approach. A problem of the existing databases is the lack of a method to determine whether the sign set is representative (i.e., covers the sign-creation possibilities). The objective is to achieve this completeness with a Minimum-Maximum Set of Signs (MMSS), which minimizes the number of signs while maximizing the coverage of the sub-units of the computational model, in order to reduce the complexity, the training cost and the acquisition of signs from new users.
The Min-Max approach is defined as follows: given as input a set \(E = \{e_1, e_2, \ldots, e_n\}\) of CORE-SL sub-units and a sign set (dictionary) \(S = \{s_1, s_2, \ldots, s_m\}\), where each sign is described by a combination of elements of \(E\), find a set \(C \subseteq S\) such that \(|C|\) is minimum and the elements of \(C\) cover the maximum number of sub-units of \(E\).
The MMSS can be modeled as the Set Cover Problem (SCP). Since the SCP has no known algorithm that computes the optimal solution in polynomial time, one solution is to use an approximation algorithm, which finds an approximate solution efficiently [14]. We therefore used a greedy algorithm [13].
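A minimal sketch of that greedy heuristic applied to the MMSS, assuming signs are represented as sets of sub-unit labels (the representation and names are ours): at each step it picks the sign covering the most still-uncovered sub-units, stopping when no sign adds coverage.

```python
def greedy_mmss(signs, universe):
    """Greedy approximation of the Minimum-Maximum Set of Signs (MMSS).

    `signs` maps each sign of the dictionary S to the set of sub-units of E
    that describe it; `universe` is the sub-unit set E. Returns a small set C
    of signs covering as many sub-units of E as the dictionary allows; the
    greedy heuristic gives a logarithmic approximation for set cover [13, 14].
    """
    uncovered = set(universe)
    chosen = set()
    while uncovered:
        # pick the sign that covers the largest number of uncovered sub-units
        best = max(signs, key=lambda s: len(signs[s] & uncovered), default=None)
        if best is None or not signs[best] & uncovered:
            break  # no remaining sign covers any new sub-unit
        chosen.add(best)
        uncovered -= signs[best]
    return chosen
```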

The MMSS can be applied within the CORE-SL system, which keeps track of the sub-units not yet instantiated by any sign. Thus, during the CCKC process, this feature can be used to select signs for the sub-units not yet instantiated.
4.4 Continuous Improvement
The database generated in one activity can be continuously improved. This can be done by: (a) applying the framework to other groups and activities (using the same database); (b) including the framework in the ASLR system, which can be provided as a service/tool for the end user (Deaf communities).
In the service, the system can use the collaboration of the users, applying continuous iterative testing to improve the database and the system, iteratively building a better interaction experience for the Deaf.
For instance, after processing the user's search (in SL), the system can return a result and ask the user whether it is correct. If it is incorrect, the system (see the sketch after this list):
1. can show a list of signs to the user and request the correct sign;
2. can allow the user to record a new example of the sign via webcam;
3. can allow the user to update the sign's description in the computational model.
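A hypothetical sketch of how a service might dispatch these three corrective actions; every function name and body below is a placeholder standing in for the actual system:

```python
def show_sign_list(sign_id):
    print(f"Show candidate signs; the user selects the correct one for {sign_id}")

def record_new_example(sign_id):
    print(f"Record a new webcam example of {sign_id} for later retraining")

def update_description(sign_id):
    print(f"Open the computational-model description of {sign_id} for correction")

# Map each corrective action of the list above to a handler
CORRECTIVE_ACTIONS = {1: show_sign_list, 2: record_new_example, 3: update_description}

def handle_incorrect_result(action, sign_id):
    """Dispatch one of the three corrective actions when the user marks a result wrong."""
    CORRECTIVE_ACTIONS[action](sign_id)
```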
Initially, the ASLR system is trained with the SL database. The MMSS can also be used to generate the training and test sets, for example, by customizing the algorithm to include one or more signs per parameter in the result. In the test step, the system receives a sign (from the training set), processes it, generates the description (CORE-SL) and evaluates the similarity of the result against the SL database. If no match is found, the system is trained again (iterative process), as shown in Fig. 3.
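A hedged sketch of this iterative train/test cycle of Fig. 3, with the recognizer, the CORE-SL descriptor, the similarity function and the retraining step all passed in as placeholder components (none of these names come from the paper):

```python
def iterative_training(train_set, database, recognize, describe, similar, retrain,
                       max_rounds=10):
    """Train/test loop: each training sign is recognized, described in the
    computational model and checked against the SL database; the system is
    retrained on the failures until every description finds a match."""
    for _ in range(max_rounds):
        failures = [video for video in train_set
                    if not any(similar(describe(recognize(video)), ref)
                               for ref in database)]
        if not failures:
            break          # every training sign matched a stored description
        retrain(failures)  # iterate: retrain on the examples that failed
```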
In the real environment, the user searches for a sign in the system. This input is processed and the results are evaluated. A list of candidate signs is then returned (based on a similarity function). Through this list, the user can take the actions previously mentioned to improve the system. In this way, the human-computer interaction improves with use.
4.5 Using Related Literature SL Databases
The framework can also be applied to SL databases from the related literature. In this case, the selected SL database should be incorporated into the CORE-SL system; then the framework can be applied with real users, for example in a local or online activity to describe the signs. Our hypothesis is that an existing database can be used to create new phonological knowledge, or even knowledge of other SLs (since sub-units are universal).
5 Conclusions
The present work describes the requirements for the appropriate development of SL databases, regarding both the correctness of the computational linguistic treatment and attention to the real needs of Deaf users in associated applications or services. From our perspective, CV recognition protocols have to be adapted to this framework whenever ASLR systems aim to support these communities (Deaf people) in their activities in the real world.
The framework proved to be an adequate method to create natural SL databases from contextual and collaborative activities with Deaf people.
Since the phoneme-based approach requires signs for all sub-units (ensuring completeness), a controlled, manual process would be costly and time-consuming. A collaborative approach (local or online) can therefore minimize the time needed to create the database, as well as the size of its sign set.
Furthermore, the context-based approach provides more robust insights than a controlled environment, such as real discourse situations, regionalisms, similar signs and interpersonal variations.
These insights should be considered in the development process of ASLR systems, in order to provide more quality and accuracy for the service, improving the user experience during human-computer interaction through SL.
The HCI approach supported by the framework aims to develop an end-user ASLR service. Thus, the service can be continually improved by its use in real environments with the cooperation of the end users.
References
Acesso Brasil, May 2011. http://acessobrasil.org.br/libras
ASL Browser, May 2011. http://aslbrowser.commtechlab.msu.edu
Spread the Sign (2015). http://www.spreadthesign.com/gb/
Abowd, G.D., Dey, A.K.: Towards a better understanding of context and context-awareness. In: Gellersen, H.-W. (ed.) HUC 1999. LNCS, vol. 1707, pp. 304–307. Springer, Heidelberg (1999)
Ackerman, M., Darrell, T., Weitzner, D.J.: Privacy in context. Hum.-Comput. Interact. 16(2–4), 167–176 (2001)
Antunes, D.R.: Um Modelo de Descrição Computacional da Fonologia da Língua de Sinais Brasileira. Master’s thesis, Pós-Graduação em Informática, UFPR (2011)
Antunes, D.R., Guimarães, C., García, L.S., Oliveira, L.E.S., Fernandes, S.: A framework to support development of sign language human-computer interaction: building tools for effective information access and inclusion of the deaf. In: Proceedings of the Fifth IEEE International Conference on Research Challenges in Information Science, pp. 126–137 (2011)
Baker, C.A.: Microanalysis of the nonmanual components of questions in American Sign Language. Ph.D. thesis, University of California, Berkeley (1983)
Barros, L.: Suporte a ambientes distribuídos para aprendizagem cooperativa. Ph.D. thesis, COPPE/UFRJ, Rio de Janeiro (1994)
Brentari, D.: A Prosodic Model of Sign Language Phonology. A Bradford book - MIT Press, London (1998)
Bungerot, J., Stein, D., Dreuw, P., Ney, H., Morrissey, S., Way, A., van Zijl, L.: The ATIS sign language corpus (2008)
Chomsky, N.: Knowledge of Language: Its Nature, Origin and Use. Praeger Publishers, New York (1986)
Chvátal, V.: A greedy heuristic for the set-covering problem. Math. Oper. Res. 4(3), 233–235 (1979)
Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd edn. The MIT Press, Cambridge (2001)
Crasborn, O., Zwitserlood, I.: The corpus NGT: an online corpus for professionals and laymen. In: Workshop of Representation and Processing of Sign Languages (2008)
Dey, A.K.: Understanding and using context. Pers. Ubiquit. Comput. 5(1), 4–7 (2001). http://dx.doi.org/10.1007/s007790170019
Dreuw, P., Neidle, C., Athitsos, V., Sclaroff, S., Ney, H.: Benchmark databases for video-based automatic sign language recognition. In: LREC 2008, ELRA (2008)
Efthimiou, E., Fotinea, S.E., Hanke, T., Glauert, J., Bowden, R., Braffort, A., Collet, C., Maragos, J., Goudenove, F.: Dicta-Sign: sign language recognition, generation, and modelling: a research effort with applications in Deaf communication. In: Proceedings of the Language Resources and Evaluation Conference Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies (2010)
Efthimiou, E., Fotinea, S.-E.: GSLC: creation and annotation of a Greek sign language corpus for HCI. In: Stephanidis, C. (ed.) HCI 2007. LNCS, vol. 4554, pp. 657–666. Springer, Heidelberg (2007)
Felipe, T.A.: Os Processos de Formação de Palavra na Libras. ETD - Educação Tematica Digital 7(2), 199–216 (2006)
Fernandes, S.: Educação de Surdos, 2nd edn. Editora Ibpex, Curitiba (2011)
García, L.S., Guimarães, C., Antunes, D.R., Fernandes, S.: HCI Architecture for Deaf Communities Cultural Inclusion and Citizenship. In: Proceedings of the 15th International Conference on Enterprise Information Systems - ICEIS 2013, vol. 3, pp. 68–75. Angers, France, July 2013
Guimarães, C., Antunes, D.R., Fernandes, S., García, L.S., Miranda, A.J.: Empowering collaboration among the deaf: Internet-based knowledge creation system. In: Proceedings of the IADIS International Conference on WWW/Internet 2011, pp. 137–144 (2011)
Jacko, J.A., Sears, A. (eds.): The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. L. Erlbaum Associates Inc., Hillsdale (2003)
Johnston, T., Schembri, A., Adam, R., Napier, J., Thornton, D.: Auslan Signbank: the Auslan Lexical Database (2015). http://www.auslan.org.au/
Kumar, E., Kishore, S.R.C., Kishore, P.V.V., Kumar, P.: Video audio interface for recognizing gestures of Indian Sign Language. Int. J. Image Process. (IJIP) 5(4), 479–503 (2011)
Liddell, S.K., Johnson, R.E.: American sign language: the phonological base. In: Valli, C., Lucas, C. (eds.) (org.) Linguistics of American Sign Language: an introduction. Clerc Books/Gallaudet Press, Washington, D.C. (2002) (1989)
Martinez, A., Wilbur, R., Shay, R., Kak, A.: Purdue RVL-SLLL ASL database for automatic recognition of American Sign Language. In: Proceedings of the Fourth IEEE International Conference on Multimodal Interfaces, pp. 167–172 (2002)
Melnyk, M., Shadrova, V., Karwatsky, B.: Towards computer assisted international sign language recognition system: a systematic survey. Int. J. Comput. Appl. 89(17), 44–51 (2014)
Preece, J., Rogers, Y., Sharp, H.: Interaction Design, 1st edn. John Wiley & Sons Inc., New York (2002)
de Quadros, R.M., Karnopp, L.B.: Língua de Sinais Brasileira: Estudos Linguísticos. Artmed, Porto Alegre (2004)
da Rocha, L.V., Edelweiss, N., Iochpe, C.: Geoframe-t: a temporal conceptual framework for data modeling. In: Proceedings of the 9th ACM International Symposium on Advances in Geographic Information Systems. GIS 2001, pp. 124–129. ACM, New York, NY, USA (2001)
Rodden, T., Cheverst, K., Davies, K., Dix, A.: Exploiting context in HCI design for mobile systems. In: Workshop on HCI with Mobile Devices (1998)
Schembri, A., Fenlon, J., Rentelis, R., Reynolds, S., Cormier, K.: Building the British Sign Language corpus. Lang. Documentation Conserv. 7, 136–154 (2013)
Shehabuddeen, N., Probert, D., Phaal, R., Platts, K.: Representing and approaching complex management issues: part 1 - role and definition. Centre for Technology Management (CTM) (1999)
Stokoe, W.C., Casterline, D., Croneberg, C.: The Dictionary of American Sign Language on Linguistic Principles. Gallaudet College Press, USA (1965)
Tijiboy, A.V., Maçada, D.L., Santarosa, L.M.C., Fagundes, L.d.C.: Aprendizagem cooperativa em ambientes telemáticos. Informática na Educação: Teoria & Prática 1(2), 19–28 (1999)
Trindade, D.F.G.: InCoP: Um Framework Conceitual para o Design de Ambientes Colaborativos Inclusivos para Surdos e Não-Surdos de Cultivo a Comunidades de Prática. Ph.D. thesis, UFPR, Informatics Program, Curitiba, PR (2013)