
Adaptation and Evaluation of a Virtual Classroom for Blind Users

Marlene Karlapp and Wiebke Köhlmann
From the journal i-com

Abstract

Digital media offers multiple possibilities for the inclusion of people with disabilities. Nevertheless, it also poses barriers, especially for blind learners, as it can hinder active participation in synchronous learning settings. Virtual classrooms are a particularly good example of a complex media application, as they combine various media types such as audio, video, graphical content and text, which blind users can only partially perceive.

Our work pursues the goal of enabling equal participation in virtual classrooms for blind users. In order to overcome the barriers mentioned above, alternative user interface concepts for output on a two-dimensional tactile Braille device were conceived. Based on these concepts, the virtual classroom solution BigBlueButton was adapted. A subsequent evaluation simulated a learning session with blind users in order to determine the acceptance and usability of the software adaptation. This user study shows that active participation of blind learners in virtual classroom sessions can be achieved by using multimodal output devices and alternative concepts.

1 Introduction

Virtual classrooms are synchronous conferencing systems which allow the realization of online learning sessions and collaboration. These software solutions combine various media for communication and collaboration in one application, for example chat, audio and video conferencing, and content production and presentation on a digital whiteboard. Due to their location-independent and digital nature, they allow people with perceptual and mobility limitations to participate in learning courses.

Blind users access digital information via assistive technologies such as screen readers and Braille displays. Screen reading software translates the textual information on the screen into synthesized speech and Braille for output on Braille displays. The multimodal content of virtual classrooms cannot be conveyed adequately using this line-based output, which is limited to textual content. Two-dimensional tactile Braille displays allow for the display of multiple lines of text as well as simple graphical representations and, thus, of user interfaces. The BrailleDis[1] provides a display area of 120×60 pins; input can be made using gestures on its touch-sensitive surface. The BrailleDis is used in this approach as an exemplary assistive tactile device.
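To make the device model concrete, the following minimal C# sketch represents such a display as a binary pin matrix with the BrailleDis dimensions named above; the type and its members are illustrative assumptions, not the vendor's API.

```csharp
// A tactile frame for a 120x60 pin device: each cell is one raisable pin.
// Dimensions follow the BrailleDis description above; the API itself is
// an illustrative assumption.
class TactileFrame
{
    public const int Columns = 120;
    public const int Rows = 60;
    private readonly bool[,] pins = new bool[Rows, Columns];

    // Raise or lower a single pin; a renderer would fill whole regions.
    public void SetPin(int row, int column, bool raised) => pins[row, column] = raised;
    public bool GetPin(int row, int column) => pins[row, column];
}
```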

2 Related Work

There are various studies concentrating on the accessibility of collaborative applications (see e.g. [16, 17]), but only a few of them concentrate on the accessibility of virtual classrooms for blind users. In the following, two solutions are presented:

Freire et al. [6] introduce a human translator for a conventional virtual classroom who describes the whiteboard content in real time. This approach allows blind users to perceive the presented content, but it also produces higher costs, increases the preparation effort and requires an additional human resource besides the teacher.

Figure 1
Layout of the BigBlueButton flash client (version 0.9) with adaptations (see [10]).

Santarosa et al. [20] choose to develop a new accessible virtual classroom. Taking accessibility guidelines into account, they develop a virtual classroom which supports learners with disabilities according to their needs. For example, it offers screen reader support for the chat, support for different input devices such as mouse and keyboard, change notifications, an activity protocol and the possibility to add alternative descriptions for graphical content. This solution allows the perception of content, but does not enable blind learners to create such content independently and thus to collaborate.

In addition to these research projects, vendors of virtual classroom software increasingly offer basic accessibility support following guidelines like the Web Content Accessibility Guidelines (WCAG) 2.0 [24] or the IMS Guidelines for Accessible Learning Applications [7]. This support includes e.g. keyboard operation, auditory notifications for events, alternative descriptions for graphical content, subtitles, activity protocols as well as personalized settings (see [1, 8, 23]). Nevertheless, research shows that these features do not provide sufficient accessibility (see [15, 20]).

An evaluation of open source and commercial virtual classrooms concerning their guideline conformance [13, 15] and the availability of their information via application programming interfaces (APIs) [21] shows serious impediments for visually impaired users. Notably, these barriers concern the perception of graphical content and of changes as well as the operation of virtual classrooms. The assistive features of existing virtual classrooms and of the research projects described above enable a blind user to follow a course, but do not allow interaction beyond posting a chat message.

Based on this research, Blackboard Collaborate[2] proves to be the most accessible commercial virtual classroom and BigBlueButton[3] to be the most accessible open source solution [11].

3 Adaptation of a Virtual Classroom

In the following, we describe the adaptation and extension of a virtual classroom based on the insights gained from the research approaches, existing solutions and guidelines described above. The goal of our work is to enable participation for blind users that is as equal as possible, using innovative concepts on the BrailleDis. Due to the adaptability of open source software, BigBlueButton was chosen [11].

3.1 BigBlueButton

The virtual classroom BigBlueButton has a modular structure and can thus be extended easily. To improve accessibility, keyboard support, auditory feedback, an activity protocol, alternative descriptions and accessible recordings were added [11]; these features were not part of the official open source project (see Figure 1).

Figure 2
Tactile user interface of the virtual classroom. The description module is active (cf. [10]).

Depending on the assigned role, a user has access to the following module functionalities of the virtual classroom:

  1. Participant’s list: Overview of all participants including their roles and available media.

  2. Web cams: Display of all shared video streams.

  3. Whiteboard: Display of presentations and creation of drawings and annotations.

  4. Chat: Possibility for public and private communication.

  5. Descriptions: Descriptions for presentation and whiteboard elements.[4]

  6. Activity protocol: Chronological list of events, which can be refined by a filter.

Further features include the display of the session title, various setup options and recording functionality.

3.2 Alternative User Interface and Interaction Concept

The following alternative user interface concept was developed to facilitate the adaptation of the complex modules of the virtual classroom to the two-dimensional tactile representation of the BrailleDis [12, 14]. Using the implemented prototype, a blind user can perceive the content of the following modules: activity protocol, whiteboard, chat, participant's list and description module (cf. [10]). The design concept presents one module at a time in the center of the display, called the body region (cf. Figure 2, number (1)). Furthermore, each view displays the title of the virtual classroom and the name of the active module (2), a navigation bar (3), a status bar (4) and a detail region (5). The detail region can be compared to the output on a single-line Braille display: it displays the currently processed content in Braille letters in parallel to the audio output.
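As an illustration of this partition, the following sketch fixes the five regions for a 120×60 pin display. The exact pin heights are assumptions; the concept only prescribes the arrangement.

```csharp
// Partition of the 120x60 display into the five regions described above.
// Region heights are illustrative assumptions, not the prototype's values.
record Region(string Name, int Top, int Height);

static class TactileLayout
{
    public const int Width = 120, Height = 60;

    public static readonly Region Title     = new("title/module name", 0, 5);   // (2)
    public static readonly Region NavBar    = new("navigation bar", 5, 5);      // (3)
    public static readonly Region Body      = new("body region", 10, 40);       // (1)
    public static readonly Region StatusBar = new("status bar", 50, 5);         // (4)
    public static readonly Region Detail    = new("detail region", 55, 5);      // (5)
}
```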

In the navigation bar, the active module's tab is emphasized by removing the separating vertical line on its right (see the tab "description" in Figure 2). In contrast, tactons (tactile icons) in the status bar gain an upper border to indicate an active state.

Figure 3 shows all modules that can be displayed in the body region. It maps the screen content of Figure 1 to a tactile representation on the BrailleDis.

The layouts of the activity protocol (Figure 3a) and the chat (b) are similar: an input field is placed at the bottom. In the chat module, the user can enter text messages there; in the input field of the activity protocol, the user can enter filters and requests similar to a command line. The content of the activity protocol or the chat history is displayed above in Braille. An entry of the activity protocol consists of the name of the source module, a time stamp and the respective description or content; a chat entry shows a time stamp, the author and the message itself.
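A possible encoding of these two entry formats, with assumed field names, might look as follows:

```csharp
using System;

// Entry formats as described above (field names are assumptions).
record ProtocolEntry(string SourceModule, DateTime Timestamp, string Content);
record ChatEntry(DateTime Timestamp, string Author, string Message);

static class EntryFormatter
{
    // Render one entry as a single Braille line for the history area.
    public static string ToLine(ProtocolEntry e) =>
        $"{e.SourceModule} {e.Timestamp:HH:mm} {e.Content}";

    public static string ToLine(ChatEntry e) =>
        $"{e.Timestamp:HH:mm} {e.Author}: {e.Message}";
}
```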

The whiteboard module (c) shows the whiteboard area of the virtual classroom. The whiteboard content is transferred to a bitonal representation displaying black pixels as set pins. The learner can use the zoom function and change the contrast in order to obtain the clearest representation for an optimized tactile perception.
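The bitonal transfer can be illustrated by a simple thresholding step. In this sketch, the threshold stands in for the user's contrast setting and an integer zoom factor repeats pixels; both parameters are simplifying assumptions, not the prototype's actual algorithm.

```csharp
// Minimal sketch of a bitonal transfer: a grayscale whiteboard image is
// thresholded onto the pin matrix. Dark pixels become raised pins.
static bool[,] ToPins(byte[,] gray, int threshold = 128, int zoom = 1)
{
    int rows = gray.GetLength(0) * zoom;
    int cols = gray.GetLength(1) * zoom;
    var pins = new bool[rows, cols];

    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            // Pixels below the threshold (dark) are rendered as set pins;
            // lowering the threshold corresponds to increasing the contrast.
            pins[r, c] = gray[r / zoom, c / zoom] < threshold;

    return pins;
}
```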

The description module (d) holds descriptions of the whiteboard elements. A list of all slide elements is available on the top left. A minimap (see [18]) on the right displays a minimized representation of the whiteboard. If an alternative description is available, it can be displayed in the field underneath. Additionally, this input field allows the user to add or alter descriptions for whiteboard elements.

Figure 3
Tactile representation of the views in the body region: (a) activity protocol, (b) chat, (c) whiteboard, (d) description module, (e) participant's list.

The participant’s list (e) is divided into a moderator’s list (top) and a list with participants (bottom). The current status of the microphone, video transmission and presentation rights is displayed next to each participant’s name. To keep this view compact and clear, the status information is abbreviated.

Users can choose between interaction via the hardware keys of the BrailleDis and gesture input. Text input is performed using the keyboard. The concept includes comprehensive keyboard control, which is not yet fully implemented.

In order to avoid accidental gesture input during reading (see the Midas Touch effect [9]), a hardware key has to be pressed to initiate an exploration gesture (Figure 4, (EG)) or a selection gesture (SG) [2, 22]. In addition to these keys, the following commands can be executed via the hardware keys (a dispatch sketch follows the list):

  1. switch between the modules (1–5),

  2. stop the current speech output (F1),

  3. turn the microphone on and off (F2),

  4. raise or lower the hand (F4),

  5. zoom within graphical representations (Z),

  6. change the contrast of images (C) and

  7. scroll the content using a navigation bar (NB).
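A minimal sketch of such a hardware-key dispatcher is shown below; the key identifiers and handler names are assumptions for illustration and do not reproduce the actual BrailleDis key codes or the BrailleIO API.

```csharp
using System;

// Sketch of a dispatcher for the hardware-key commands listed above.
class HardwareKeyDispatcher
{
    public void OnKey(string key)
    {
        switch (key)
        {
            case "1": case "2": case "3": case "4": case "5":
                SwitchModule(int.Parse(key));                 // (1) module tabs
                break;
            case "F1": StopSpeech(); break;                   // (2) stop speech output
            case "F2": ToggleMicrophone(); break;             // (3) microphone on/off
            case "F4": ToggleRaisedHand(); break;             // (4) raise/lower hand
            case "Z-up":   Zoom(+1); break;                   // (5) enlarge
            case "Z-down": Zoom(-1); break;                   //     downsize
            case "C-up":   Contrast(+1); break;               // (6) increase contrast
            case "C-down": Contrast(-1); break;               //     decrease contrast
            case "NB-up":    Scroll(0, -1); break;            // (7) navigation bar
            case "NB-down":  Scroll(0, +1); break;
            case "NB-left":  Scroll(-1, 0); break;
            case "NB-right": Scroll(+1, 0); break;
        }
    }

    void SwitchModule(int module) { /* show the module in the body region */ }
    void StopSpeech() { }
    void ToggleMicrophone() { }
    void ToggleRaisedHand() { }
    void Zoom(int direction) { }
    void Contrast(int direction) { }
    void Scroll(int dx, int dy) { }
}
```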

Figure 4
Allocation of hardware keys: (1) activity protocol; (2) whiteboard; (3) chat; (4) participants; (5) descriptions; (F1) stop speech output; (F2) microphone on/off; (F4) raise/lower hand; (EG) exploration gesture; (SG) selection gesture; (Z) zoom, (Z – up) enlarge, (Z – down) downsize; (C) contrast, (C – up) increase, (C – down) decrease; (S) slides, (S – left) previous slide, (S – right) next slide; (NB) navigation bar ([10, 14], graphic according to [4]).

All these hardware commands can also be performed using gestures on the touch-sensitive Braille display by pressing the selection gesture key while performing a one-finger tap gesture in the desired area. By this means, a blind user can focus input fields and press buttons. Additionally, zooming using a (half-)circle gesture is more precise when performed via gesture input than via the hardware keys [18].


In order to obtain additional information without performing an action, the user presses the exploration key while tapping on the desired element. For example, a message in the activity protocol can thus be read aloud by the application. To maintain the user's orientation, no focus change is performed when using exploration gestures.
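This guard against accidental input can be sketched as follows; the handler names are illustrative, not the actual BrailleIO API.

```csharp
// Sketch of the modifier-key guard against the Midas Touch effect: a tap
// only takes effect while the exploration (EG) or selection (SG) hardware
// key is held.
class GestureGuard
{
    public void OnTap(int row, int column, bool egHeld, bool sgHeld)
    {
        if (egHeld)
            SpeakElementAt(row, column);     // read aloud; focus is NOT moved
        else if (sgHeld)
            ActivateElementAt(row, column);  // focus an input field or press a button
        // With neither key held, the touch is treated as reading and ignored.
    }

    void SpeakElementAt(int row, int column) { /* auditory output only */ }
    void ActivateElementAt(int row, int column) { /* performs the focus change */ }
}
```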

Thanks to these interaction modes, a blind learner has access to the same basic functionalities and interaction possibilities as a sighted user in the role of a participant.

3.3 Filtering of the Application Data

Figure 5 shows the architecture of the prototypical adaptation of the virtual classroom BigBlueButton, called TactileBBB. It was implemented in C#. TactileBBB uses the Microsoft accessibility framework UI Automation[6] (UIA) to access the application data of the virtual classroom, defines the presentation layout on the BrailleDis and provides the logic for interpreting user interactions. For communication between the application and the BrailleDis, the BrailleIO framework[7] [3] is used.
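The following sketch illustrates data access via the .NET System.Windows.Automation API: it finds the classroom window and lists the names of its text elements. The window title and element structure of the BigBlueButton flash client are assumptions for illustration.

```csharp
using System;
using System.Windows.Automation;   // Microsoft UI Automation (UIA)

// Minimal sketch of data filtering via UIA, under assumed element names.
class UiaFilterSketch
{
    static void Main()
    {
        AutomationElement root = AutomationElement.RootElement;
        AutomationElement classroom = root.FindFirst(TreeScope.Children,
            new PropertyCondition(AutomationElement.NameProperty, "BigBlueButton"));
        if (classroom == null) return;   // classroom window not found

        // Collect every text element below the classroom window.
        AutomationElementCollection texts = classroom.FindAll(TreeScope.Descendants,
            new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Text));

        foreach (AutomationElement text in texts)
            Console.WriteLine(text.Current.Name);   // raw data for the tactile view
    }
}
```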

Figure 5
Architecture of the prototypical implementation of BigBlueButton [10].

If a user wants to switch the view to the activity protocol, a hardware key on the BrailleDis can be used. The BrailleIO framework recognizes the key press and passes the event to the TactileBBB application. TactileBBB receives the event with the request to display the activity protocol view and switches to that module in BigBlueButton. TactileBBB then transfers the data for this view to the BrailleDis using BrailleIO, and the user receives auditory feedback that the view has changed. As the filtering process can take some seconds, previously saved protocol information is displayed first in order to support the user in perceiving the changes. Subsequently, the current data of the activity protocol is filtered using UIA and compared with the saved data. In case of changes, the new information is passed to the BrailleDis – and thus to the user – via the BrailleIO framework.
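This update cycle can be summarized in a small sketch; all names are illustrative, and the UIA filtering and BrailleDis output are abstracted as delegates.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the update cycle described above: show the cached protocol
// first, then filter the current state and forward only the new entries.
class ActivityProtocolUpdater
{
    private List<string> cached = new List<string>();

    public void Refresh(Func<List<string>> filterViaUia, Action<string> sendToBrailleDis)
    {
        // 1. Immediate feedback: display the previously saved entries
        //    while the (slow) UIA filtering runs.
        foreach (string entry in cached)
            sendToBrailleDis(entry);

        // 2. Filter the current application data via UIA.
        List<string> current = filterViaUia();

        // 3. Compare with the cache and pass only changes to the display.
        foreach (string entry in current.Except(cached))
            sendToBrailleDis(entry);

        cached = current;
    }
}
```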

4 Evaluation

In order to evaluate the usability of the conceived alternative concepts and the adaptation of the virtual classroom on a multimodal tactile Braille display in comparison to existing solutions, a user study was conducted. The main focus was to determine which barriers could be overcome and in which ways active participation could be facilitated. A learning course with a teacher and two learners was simulated in the virtual classroom. The subjects assumed the role of one of the participants and took part in the course by listening and also by interacting. The BrailleDis and the keyboard served as input and output devices [14].

4.1 Sample

The sample consisted of ten blind subjects who were 17 to 53 years old, including four women and six men. Four subjects were experienced in working with the BrailleDis due to previous evaluations and will thus be referred to as experts. Six subjects from senior classes in secondary school were familiar with tactile representations but inexperienced with two-dimensional tactile displays and will subsequently be referred to as users. Five of the ten subjects were familiar with e-learning applications (e.g. learning management systems and chat applications), but no subject had used virtual classrooms or conference software before [14].

4.2 Methodology and Procedure

The study was structured as follows: a preliminary questionnaire sent via e-mail, an introduction to the BrailleDis, the user test itself and a concluding questionnaire directly after the user test. The evaluation was executed with the functionality described above. The qualitative study was conducted with one subject at a time and a test supervisor on-site as well as a second test supervisor in a remote location. The second test supervisor played the double role of an additional participant and of the teacher (see Figure 6).

Figure 6
Experimental setup of the user test [14].

The initial introduction to the basic operation of virtual classrooms, the tactile user interface and the interaction techniques was conducted using the evaluation method cognitive walkthrough (cf. [19]) in order to gain an impression of how intuitive the user interface was. Due to the prototypical state of the application and the limited gesture recognition of the BrailleDis, functional restrictions were compensated by interventions of the test supervisor using the Wizard-of-Oz method[8] [5]. The evaluation was documented using audio recordings, videos of the subjects' hands on the BrailleDis, the recording function of the virtual classroom as well as protocols of the test supervisors. In the following, the two parts of the user test are described:

4.2.1 Part 1: Comparison of a Textual and Semi-graphical Output

A comparative test simulating similar course scenarios was performed to demonstrate the accessibility improvement achieved by the alternative concepts. The display of the textual output of the activity protocol on the BrailleDis was compared to the semi-graphical representation of the complete user interface of the TactileBBB application. The main difference between the two representations lay in the removal of the navigation bar, the title area and the status bar in the textual display (see Figure 2). The textual representation closely matches the customary working techniques of blind users, without requiring further assistive devices.

Figure 7
Comparison of the screenshots of textual and semi-graphical representation of the filtered virtual classroom showing activity protocol on the left and TactileBBB application on the right [14].

In the context of a course setting, the teacher explained a topic using speech, drew a hierarchical diagram on the whiteboard and posed questions in the chat. The subject was asked to follow the course as well as he or she could and to answer comprehension questions afterwards. At first, the textual representation of the activity protocol was used (see Figure 7, left), followed by the same course setting with a slightly adapted topic using the semi-graphical application (Figure 7, right).

4.2.2 Part 2: Evaluation of Further Features

Following the comparative evaluation, further use cases were performed by the subjects. They were assessed using questions and observations by the test supervisors. The use cases included posting a chat message, raising the virtual hand as well as perceiving and composing alternative descriptions for whiteboard elements (see Figure 3d).

The evaluation with the users was limited to the duration of two lessons.[9] As the users needed a longer introduction to the basic operation of the BrailleDis than the experts, it was not possible to address the same number of use cases. However, as the experts' feedback was more detailed due to their experience, the available time spans proved adequate.

4.3 Results

The results of both parts of the evaluation showed a high acceptance of the user interface and the proposed interaction concepts. The following sections describe the results in detail.

4.3.1 Results of the Comparison Test

The comprehension of the learning objectives was evaluated by questions regarding the content (labeled A–F, as referenced below):

  A. Which participant shares his/her video?

  B. Which elements were drawn on the whiteboard?

  C. At what time did the moderator pose a question?

  D. To which whiteboard element does the question in the chat refer?

  E. Please describe the drawn representation.

  F. How many hierarchical levels does the representation have?

The experts were able to answer more questions correctly than the users due to their experience with the BrailleDis and two-dimensional screen content (see Table 1). Overall, the content was easier to perceive with the semi-graphical than with the textual representation. The subjects had difficulties drawing conclusions about the content structure from the element descriptions in the textual representation. Questions A and C, concerning the camera status and the timestamp of a chat message, could be answered correctly by all experts using the textual as well as the semi-graphical representation. The structure of the drawing on the whiteboard (questions B, E, F) was difficult to perceive using the textual representation, even though geometrical elements and their descriptions were listed in chronological order. Nine of ten subjects were able to grasp the drawing structure using the semi-graphical representation of TactileBBB. However, only few subjects could establish a connection between the chat and the whiteboard (question D); merely two subjects were able to deduce an answer to this question using TactileBBB [14].

Table 1

Comparison test: Comprehension for each subject in the textual (t) and semi-graphical (sg) application; ✓ correct answer; × wrong answer; – question not posed (according to [14]).

Questions:        (A)      (B)      (C)      (D)      (E)      (F)
Test:             t   sg   t   sg   t   sg   t   sg   t   sg   t   sg

Users:    P1      × × × ×
          P2      × ×
          P3      × × × × ×
          P4      × × × × × ×
          P5      × × × ×
          P6      ×
          sum     –   –    3   5    –   –    2   3    2   5    1   5

Experts:  P7      ✓   ✓    ×   ✓    ✓   ✓    ×   ×    ×   ✓    ×   ✓
          P8      ✓   ✓    ×   ✓    ✓   ✓    ×   ×    ×   ✓    ×   ✓
          P9      ✓   ✓    ✓   ✓    ✓   ✓    ×   ×    ✓   ✓    ✓   ✓
          P10     ✓   ✓    ✓   ✓    ✓   ✓    ✓   ✓    ✓   ✓    ✓   ✓
          sum     4   4    2   4    4   4    1   1    2   4    2   4

Overall           4   4    5   9    4   4    3   4    4   9    3   9

The comparison test showed that blind users can participate actively in a virtual classroom session thanks to the alternative user interface concept on a tactile two-dimensional Braille display. Eight of the ten subjects stated that the semi-graphical display facilitated the perception of the session. Only one subject preferred the textual representation.

4.3.2 Results Concerning Interaction

The evaluation of the interaction comprised three use cases:

  1. Raise the hand: Selection gesture on the tacton [h] in the status bar (Figure 2) or press of the hardware key [F4] (Figure 4).

  2. Post a chat message: Switch to the chat module – selection gesture on the input field – composition of a message using the keyboard – post message using the enter key (Figure 3b).

  3. Edit descriptions: Switch to the description module – selection gesture on a whiteboard element in the list – selection gesture on the input field – composition of a description using the keyboard – saving the description using the enter key (Figure 3d).

All three use cases were evaluated by the experts; the users each evaluated some of the use cases depending on the available time.[10]

The operating steps for composing a chat message and for raising the hand were rated positively. The description view is the most complex view of the user interface and was thus judged hard to use. Nevertheless, the operating steps for adding or editing a description were rated positively. The prototypical implementation of the description view comprised a list of whiteboard objects, a downsized representation of the whiteboard (minimap [18]), an input field and three buttons (Figure 3d). Subjects wished for a highlighting of the selected list element in the minimap. This linking is part of the concept but not yet implemented (cf. [12]); the subjects' request thus affirms the concept.

Figure 8
Weighted representation of preferred views according to their perceived importance (1 most important, 3 rather unimportant) [14].

Figure 9
Responses to the question: "How did you like the control elements?" [14].

4.3.3 Results Concerning the User Interface

Overall, the user interface was rated positively. The display of graphical content was the greatest improvement in comparison to the subjects' current working techniques. The preferences concerning the amount of graphical information, the use of the different views (activity protocol or whiteboard) and the interaction modalities varied greatly. This leads to the conclusion that support of various interaction techniques as well as high adaptability of the interface and the output are necessary to respect different needs and preferences. As an improvement of the whiteboard view, the subjects asked for further explanations and descriptive labels. Furthermore, the necessity of view changes should be reduced to a minimum. One approach could be to provide auditory feedback for all events, for example the reading aloud of new chat messages while the whiteboard view is active.

The subjects rated their preferred views for a possible real-life use case on three levels from 1 (most important) to 3 (rather unimportant) (Figure 8). Nine subjects regarded the whiteboard as their most important view. Other important views include the chat and the description module, followed by the participants and the activity protocol module.

4.3.4 Results Concerning the Control Elements

In summary, the control elements were rated positively (Figure 9); only larger spacing between elements would have improved readability. The concept of tabs to change views was broadly accepted. Nevertheless, the subjects were confused that an active state is indicated by removing a line from a tab but by adding a border to tactons in the status bar. The origin of these concepts lies in the inconsistent behavior of tabs and buttons in visual user interfaces: browser tabs, for example, are usually open towards the content below, while buttons and icons are highlighted by a border or color change when active. As this convention is not present in non-visual working techniques, a redesign of the concept could lead to simplified operation [14].

4.4 Discussion

Even the reading speed of the experts proved to be slower than the presentation speed. It is to be expected that increasing familiarity with the operation of the user interface and the elimination of usability and technical impediments will lead to a nearly synchronous perception of the content.

It was easier for blind users to follow the virtual classroom session using the semi-graphical than the textual representation. However, the activity protocol conveyed more detailed information and chronology than the whiteboard view alone. Therefore, the views are equally important and should be used alone or in combination according to user preferences and course type. The focus of the evaluation lay in the verification of the virtual classroom's usability and accessibility; a follow-up study of collaborative activities would complete the findings. Furthermore, it could not yet be shown whether the alternative concepts allow for an improved understanding of the semantic coherences between the different functions of the virtual classroom (cf. Table 1, question D). The as yet unimplemented concept of linking related elements could improve this understanding.

5 Conclusions

With the exemplary adaptation of the virtual classroom BigBlueButton for a tactile two-dimensional Braille display, we were able to show improved accessibility for blind learners. The tactile representation allowed for the perception of all content, direct interaction as well as the manipulation of graphical elements. In order to achieve equal participation of blind users, further concepts, for example for creating and editing whiteboard objects, need to be added and evaluated [14]. For this extension, the implementation should consider using BigBlueButton's new HTML5 client, which was not yet available at the time of this prototype's development. With HTML5, a higher availability of application data is to be expected than with the Flash version used in the implementation described here.

As the BrailleDis used in this evaluation is very expensive, most users cannot afford it. Nevertheless, the exemplary adaptation of the open source virtual classroom BigBlueButton and the output on the BrailleDis were necessary to prove the potential of the alternative concepts. The use of the BrailleIO framework allows easy adaptation to future, less expensive tactile displays in terms of user interface layout, interaction, resolution and functionality. Thus, this work is an important basis for future developments in the area of accessible collaborative e-learning tools as well as Web 2.0.

About the authors

Marlene Karlapp

Marlene Karlapp (M.Sc.) is a research assistant at the Chair for Complex Multimedia Application Architectures at the University of Potsdam. She is a developer of a framework which allows creating a screen reader for two-dimensional Braille displays without any programming knowledge.

Wiebke Köhlmann

Dr. Wiebke Köhlmann worked as a research assistant at the University of Potsdam from 2008 to 2015. In her dissertation she analyzed the accessibility of virtual classrooms for blind users. Since 2016 she has been specifying human-machine interfaces for the automotive industry.

References

[1] Blackboard Collaborate Web Conferencing: Voluntary Product Accessibility Template (VPAT), https://en-us.help.blackboard.com/@api/deki/files/19616/Collaborate_web_conferencing_VPAT_-_2013.pdf, visited: 1.2.2017.

[2] Bornschein, J.; Prescher, D.; Schmidt, M.; Weber, G.: Nicht-visuelle Interaktion auf berührempfindlichen Displays. In (Schlegel, T., ed.): Multi-touch, Xpert.press, p. 319–338. Springer, Berlin, Heidelberg, 2013.

[3] Bornschein, J.: BrailleIO – A Tactile Display Abstraction Framework. In (Zeng, L.; Weber, G., eds.): The Proceedings of Workshop Tactile/Haptic User Interfaces for Tabletops and Tablets (TacTT 2014), p. 36–41, 2014.

[4] Bornschein, J.; Prescher, D.: Collaborative Tactile Graphic Workstation for Touch-Sensitive Pin-Matrix Devices. In (Zeng, L.; Weber, G., eds.): The Proceedings of Workshop Tactile/Haptic User Interfaces for Tabletops and Tablets (TacTT 2014), p. 42–47, 2014.

[5] Dahlbäck, N.; Jönsson, A.; Ahrenberg, L.: Wizard of Oz Studies – Why and How. Knowledge-Based Systems, 6(4):258–266, 1993.

[6] Freire, A. P.; Linhalis, F.; Bianchini, S. L.; Fortes, R. P. M.; da G. C. Pimentel, M.: Revealing the Whiteboard to Blind Students: An Inclusive Approach to Provide Mediation in Synchronous E-Learning Activities. Computers & Education, 54:866–876, 2010.

[7] IMS Guidelines for Developing Accessible Learning Applications, Version 1.0 white paper, http://www.imsglobal.org/accessibility/accessiblevers/, visited: 1.2.2017.

[8] Improving Accessibility for Disabled Users in Connect Pro Meetings, https://www.connectusers.com/tutorials/2008/11/meeting_accessibility/index.php, visited: 1.2.2017.

[9] Jacob, R. J. K.: What You Look at is What You Get: Eye Movement-based Interaction Techniques. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI '90, ACM, New York, p. 11–18, 1990.

[10] Karlapp, M.: Taktile Umsetzung eines virtuellen Klassenzimmers für Blinde (Masterarbeit). Universität Potsdam, Institut für Informatik und Computational Science, 2015.

[11] Köhlmann, W.; Dressel, N.; Wegner, D.: Erweiterung eines virtuellen Klassenzimmers zur Verbesserung der Zugänglichkeit für Blinde. In: Tagungsband der 45. Jahrestagung der Gesellschaft für Informatik – Informatik 2015, p. 1325–1339, 2015.

[12] Köhlmann, W.; Lucke, U.: Alternative Concepts for Accessible Virtual Classrooms for Blind Users. In: 2015 IEEE 15th International Conference on Advanced Learning Technologies (ICALT), p. 413–417, 2015.

[13] Köhlmann, W.: Richtlinien zur barrierefreien Gestaltung virtueller Klassenzimmer für Blinde. i-com, 13(3):3–11, 2014.

[14] Köhlmann, W.: Zugänglichkeit virtueller Klassenzimmer für Blinde. Logos Verlag, 2016.

[15] Köhlmann, W.; Smisniak, U.: Accessibility of Synchronous Collaborative Platforms – an Analysis. In (Breiter, A. et al., eds.): Proceedings der Pre-Conference Workshops der 11. e-Learning Fachtagung Informatik (DeLFI 2013). Logos Verlag, Berlin, p. 117–122, 2013.

[16] Mori, G.; Buzzi, M. C.; Buzzi, M.; Leporini, B.; Penichet, V. M. R.: Making Google Docs User Interface More Accessible for Blind People. In: Proceedings of the First International Conference on Advances in New Technologies, Interactive Interfaces, and Communicability (ADNTIIC 2010). Springer, Berlin, Heidelberg, p. 20–29, 2011.

[17] Pölzer, S.; Miesenberger, K.: 2D Presentation Techniques of Mind-maps for Blind Meeting Participants. In (Sikne Lanyi, C. et al., eds.): Proceedings of the 13th AAATE Conference. IOS Press, Amsterdam, 2015.

[18] Prescher, D.; Weber, G.; Spindler, M.: A Tactile Windowing System for Blind Users. In: Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2010). ACM, New York, p. 91–98, 2010.

[19] Sarodnick, F.; Brau, H.: Methoden der Usability Evaluation: Wissenschaftliche Grundlagen und praktische Anwendung. Wirtschaftspsychologie in Anwendung. Verlag Hans Huber, Bern, 2010.

[20] Santarosa, L.; Conforto, D.; Machado, R. P.: Whiteboard: Synchronism, Accessibility, Protagonism and Collective Authorship for Human Diversity on Web 2.0. Computers in Human Behavior, 31:591–601, 2014.

[21] Schäkel, C.; Köhlmann, W.: Programmatic Availability of Virtual Classrooms for Assistive Technologies. In: Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments. PETRA '15, ACM, New York, p. 57:1–57:8, 2015.

[22] Spindler, M.; Kraus, M.; Weber, G.: A Graphical Tactile Screen-Explorer. In (Miesenberger, K. et al., eds.): Computers Helping People with Special Needs, vol. 6180. Springer-Verlag, Berlin, Heidelberg, p. 474–481, 2010.

[23] Talking Communities V7.42 Voluntary Product Accessibility Template, http://talkingcommunities.com/vat/, visited: 1.2.2017.

[24] Web Content Accessibility Guidelines (WCAG) 2.0, http://www.w3.org/TR/2008/REC-WCAG20-20081211/, visited: 1.2.2017.

Published Online: 2017-04-05
Published in Print: 2017-04-01

© 2017 Walter de Gruyter GmbH, Berlin/Boston
