
1 Introduction

Part of Computer Science's (CS) social responsibility lies in applying its innovative potential to support Education, especially the effort towards Literacy. The need for social inclusion of disabled communities gives CS, more precisely Human-Computer Interaction, a complementary role. We have occupied this space based on three principles: an inter- and transdisciplinary approach, participatory design and Action-Research (AR) practices. A PhD thesis developed within our research group and published in 2014 [1] identified, based on the Literacy through the Direct Way Methodology (LDWM) [25] and by means of AR, a set of requirements for a computer application to support the teaching-learning processes of reading and writing written Portuguese to/by deaf children. The LDWM is based, first, on the appropriation of children's Literature (followed by several other genres), on the need for critical reading, on collaborative classroom activities and, mainly, on treating text as sequences of concrete, meaningful objects. While traditional orality-based literacy aims at the acquisition of coding and decoding abilities, depends on oral capacity and does not guarantee effective critical reading of the written language, this approach defines Literacy as an individual state in which the person makes full use of reading and writing in social contexts. For its authors, to be a reader implies having high degrees of autonomy and criticism over a text. This methodology, built and widely used for more than two decades by a French civil organization (www.lecture.org), has proved to be effective in virtually all cases: for either a first or a second language; for children, teenagers and adults; and for learners with no literacy success through Orality, of whom deaf children are a special case.

An interview with a blind teacher, a reference figure from the Instituto Benjamin Constant (IBC), the pioneering Latin American institution for the education of visually impaired people [6], alerted us to the urgency of supporting the rescue of literacy practices in the teaching and learning of the Portuguese language through the Braille system. She referred to the "desbraillização" ("dis-braille-zation") process, hypothetically caused by the accommodation of blind people to screen readers. The cause-and-effect question is controversial, but the fact, previously reported in the USA [7], is that a regression of Braille literacy practice is under way, despite experts in blind children's literacy having shown that the Braille system is a sine qua non, the passport to the full social inclusion of the blind [8].

All this took us naturally to the research question: "Is it possible – and, if so, which would be the associated activities and tools – to map the set of requirements elicited for deaf children's literacy to the context of blind children?". The main objective then became to investigate what a tangible interaction for blind children's literacy should be like. This goal was pursued by mapping the set of requirements elicited for deaf children to the context of blind ones, through conceptual readings, a working process continuously close to a blind teacher (also a national reference in Braille literacy, who therefore played a dual role in the process) and the search for proper interface elements and interaction techniques for blind students in the process of acquiring reading and writing.

Among the challenges faced, we highlight the need to attend to literacy practices with children with congenital visual impairment [21] and to identify concrete representations of Braille that allow for language manipulation both individually and collaboratively in classroom situations. Knowledge of real literacy practices, together with these concrete representations of texts in Braille, would give us the blind children's requirements corresponding to the specific ones identified by Bueno [1] for deaf children's literacy. We also looked for ways of recording the children's productions and associating different oral representations with them (the child's own, the teacher's and the screen reader's), in order to make a (metalinguistic) comparison, together with several additional classroom activities, possible; the latter are not covered in the present paper, since our main focus here is on the tangible aspects of the interface and of the associated interaction.

2 Related Work

We visited related work, comprising recent results focused first on input/output devices and interfaces for visually impaired people [9–11].

We also reviewed applications meant to support Braille "literacy" limited to motor coordination abilities and code learning [12–14]. An interesting work developed at Carnegie Mellon University consisted of the participatory and iterative design of an intelligent Braille tutor [14]. The authors' main challenges included avoiding the obligation of right-to-left writing, due to the inaccessibility of Braille typewriters (Perkins machines) in developing countries, together with producing feedback for each typed symbol within satisfactory time intervals (available neither in traditional writing nor in the writing addressed within the populations in context, which are limited to the puncher). Equally motivated by the need to support blind children's literacy with low-cost solutions, researchers from the University of Wisconsin-Madison and from the Birla Institute of Technological Sciences developed a robotic device called "Write Tutor" [12]. These authors state that the Write Tutor can teach children and people of any age group the art of writing. Including a speech recognition system within its architecture, the tutor guides users to write what they say, holding their wrist with its robotic arm. Their system has built-in modules that "allow for teaching reading and writing simultaneously" [12]. Another related work, published in 2015, is Mudra [13], a multimodal interface integrated with a speech recognition module implemented in Android for Braille teaching. The authors state that their work shows how multimodal interfaces could be used "to teach Braille in a quicker and more efficient way", as well as "without effort" [13]. The paper does not present evidence for those statements. Nevertheless, even supposing such evidence exists, it is still possible to claim that these authors' concern was, once more, limited to encoding-decoding abilities, and did not address the teaching-learning binomial of critical reading and writing as our approach does.

Last but not least, a few works shared our approach of a full literacy concept [15–18]. The research reported in [15] refers to the need for "nurturing" Braille. These authors' investigation aimed at maximizing access and motivation for Braille learning and use. The paper presents the results of five interviews with blind persons, both teachers and students. The authors created scenarios and designed software and hardware solutions intended to motivate the discovery and strengthen the resilience of Braille literacy [15]. We verified that both the principles and the objectives are aligned with ours, and we registered some issues that could potentially contribute to our solution. The authors reinforce the special advantages of Braille over other writing systems for blind populations, especially its great flexibility to represent the writing systems of different knowledge areas, such as Mathematics and Music, and the reduced number of significant elements needed to define a cell, which makes a seven-key keyboard possible and eliminates the need for larger areas.

A one-day workshop at CHI 2014, in Toronto [16], brought together designers, researchers and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions based on spoken language, and to look at the ways in which recent advances in speech processing could reinforce the acceptance of speech and natural language interaction. The workshop abstract states that humans' most natural forms of communication, speech and language, are also the most difficult modalities for machines to handle, given the breadth of these channels. In the view of the workshop participants, speech and natural language have been neglected as interaction modalities, mainly because of the challenges imposed by their high error rates and computational complexity. Among the issues raised at this workshop, we highlight the wide range of tasks in which speech can be useful, clearly not limited to direct interactions [16]. One example of the great potential of speech lies in access to multimedia repositories. As the authors state, the rate at which video resources are uploaded to YouTube (72 h per minute) [16, 24] makes it increasingly difficult to search for information and to navigate large and usually multilingual collections. Our research also faces this problem, having identified the advantages of using bases of noises, sounds, music and songs to take advantage of the most direct and fastest communication channels for blind people, together with their associated background.

We were also curious about what else could be extracted from speech, besides speech segments (phonemes and longer units). Additionally, the authors of [16] asked themselves (as we do) how speech could be combined with other modalities to improve interfaces' usability and soundness. Another related and complementary idea is what else could be added to speech to make it more expressive and natural. This is also our concern and will be the object of future research.

A work-in-progress paper presented at CHI 2014 [17] dealt with our principal interest: supporting the teaching and learning processes of blind children's literacy, understood in its full multi-dimensional complexity. The authors were developing technologies to support teachers and parents in creating replicable tactile graphics. They refer to "emergent literacy" [17, 25] as the process by which a child constructs concepts about the functions of symbols and print, based on experiences and meaningful language facilitated by interaction with adults. Co-reading experiences between parent and child can build emotional bonds, join them in discovering surrounding environments, objects and relationships, and extend creativity and vocabulary acquisition, together with continuous semiosis [17].

The authors of [17] also claim that tactile picture books aid in the development of the child's tactile acuity and mobility, as well as in their feeling for their environment and the confidence to explore and construct new associations through touch. They also claim that the available tools for creating tactile books are difficult to learn and require significant time to use and to adapt, since they usually have confusing guidelines or focus on scientific graphics. These authors [17] refer to a technical report [26] to state the difficulty of publishing credible literacy research results in the space of visual impairment due, in their research experience, to the "low incidence nature of visual disability", which limits both the methods and the conclusions that can be drawn. Based on that premise, the authors of [17] sought to get to know this population in as many different ways as possible, visiting sites, attending workshops and distributing questionnaires. We believe we are protected in this respect not only by theoretical studies (which proved, once again, not to be enough to draw conclusions or even working hypotheses) but mainly by our permanent contact with Maria da Gloria Almeida, whose life-long experience, in the overlapping roles of a blind child who learned to read and write in Braille and of a Braille literacy expert, has continuously tempered our confidence and brought us back to the challenge of creating proper and potentially useful solutions.

Among the initial findings of the authors of [17], we were especially interested in the emerging insight that shows agencies, even those really committed to creating affordable products to be socially used by the intended users, struggling to make products that meet all the specific requirements, despite teachers of visually impaired children and other related professionals being eager to test early products. This finding is coherent with our premise that a solution's adherence to its context matters more than the creation and use of high technologies, especially where the latter would not be financially accessible, even if developed in time. The authors of [17] state that there are efforts to make 3D printing an easy and efficient way of creating matrices for tactile graphics. Since this facility was not available to us, we designed an open solution based on LEGO® 3D illustrations, which was deconstructed at the very first test by our genuine representative of the final users' profile. As research opportunities, the authors of [17] include the need to explore methods for exchanging information regarding the child's learning experiences at home and at school, to find techniques for transcribing images the way experienced artists do, and to apply established guidelines practiced and refined by teachers of visually impaired children. These ideas appeared natural to our research and are part of our future work too, since we had experienced difficulties in finding related literature and had reached, even by a different path, awareness of the need to further explore image description within our working space.

A visionary work was presented in [18]. Its authors claim that the tangible reading experience of interactive books for children in general is inaccessible for blind children. Their paper presents an innovative set of 3D-printable models designed as building blocks for creating movable tactile pictures that can be touched, moved and understood by children with visual impairments. Examples of the models they propose are canvases, connectors, hinges, spinners, sliders, lifts, walls and cutouts. They map a range of spatial concepts such as input/output, up/down and high/low. They based their proposed models on a three-step methodology comprising a survey of popular interactive books, two workshops on the processes of creating movable pictures by hand (physically, using LEGO® and Play-Doh) and the creation of wood-based prototypes with informal testing with sighted preschoolers. Their system creates a 3D-printable model from a given specification. Supported by [28], these authors state that books that include tangible interaction encourage children's interest in the content, stimulating perceptual-motor skills that evolve into linguistic ones [18]. They also claim, based on [29], that holding children's attention has been shown to be key to developing emergent literacy in association with books [18]. This statement is completely aligned with the principles of the LDWM [25] and with the hypotheses of our PhD thesis [1]. These authors' methodology, in the step of physical construction of 3D scenarios by children, shares our observation of children's building actions with LEGO® and the LEGO® Education Story Starter, a commercially available set that includes Playmobil-like objects such as socially common characters (a policeman, a stereotyped girl, a boy, …) and building elements that help compose other roles, such as a pointed hat and a broom to compose a witch.

The design requirements stated by these authors were: (i) easy to move and touch; (ii) easy to print; (iii) easy to assemble; (iv) easy to customize; (v) easy to reuse; (vi) hard to break. Except for ease of printing, inherent to their technological innovation, we share all these requirements. These authors report that many premade, ready-to-print objects are now available on sharing sites such as Thingiverse. These tools are candidates for inclusion in the future visionary architecture to support the LDWM for blind children that we intend to build after arriving at a rather comprehensive list of requirements and, mainly, after conducting tests with real end users, which can only be done during lesson time, which in Brazil effectively starts in March. The authors of [18] refer to the importance, for visually impaired children, of physical analogies to the real world in toys such as cars and trains. These analogies are in consonance with [19, 20], since they keep the significant features that characterize the object concept, which is the case with cars and trains, provided the teacher explains the difference in size, one of the features that cannot be reproduced in classroom environments [19, 20].

Our further research on requirements for materials for blind children also took us to [30]. Among the justifications for the special relevance that courseware assumes for visually impaired people are the difficulty blind persons have in being in contact with the physical environment, the risk of the child being led to "verbalism" (an expression denoting the mere verbal repetition of utterances with no meaning) and the dependency of concept construction on the child's contact with real-world objects. In this, the authors are aligned with [19, 20]. In [30] the authors remark on the relevance of building courseware from simple and easily available materials, such as matches, string and cardboard. They claim that this practice makes it easier for all intended users to reproduce the courseware. They also state other requirements: (i) abundance, so that several children can be attended to simultaneously; (ii) variability, to instigate the child's interest and experimentation; (iii) significance, in the sense of the potential to be perceived by the tactile sense (as also stated in [19, 20]); (iv) size, explaining that very small elements do not allow for detail identification (a requirement also stated in [19, 20]) and that very big objects do not allow the whole to be apprehended. This was also remarked in [19, 20], together with the explanation of the need to tell the children, verbally, about any feature of the real object that could not be correctly represented. In [30], the authors refer to the need to use contrasts, such as smooth/rough and thin/thick, in order to facilitate distinctions. They further claim that the material must not cause any kind of irritation or unpleasant feeling when handled. The faithfulness of the representations to the real objects, also required by these authors, reinforces what is said in [19, 20]. The authors of [30] also refer to ease of manipulation and to the need for robustness, two features already placed in our conceptual working space.

The need for tangible interfaces carried us naturally to the LEGO® concept. Looking to combine LEGO® with Braille, including pictures, led us to think about a solution in which the Braille cells would be represented by LEGO® 3 × 2 blocks, complemented by a 3D picture representation, which seemed natural for blind children, whose main communication sense is touch. This led us to the commercially available LEGO® Education Story Starter, which offers the possibility of building story scenarios with LEGO® blocks on sequential panel bases, photographing them and inserting the corresponding snapshots into texts. Judging by the capabilities perceived and as far as the documentation is concerned, this application does not consider blind children. This, together with the ideas found in the reviewed papers, motivated our first solution attempt. The option for LEGO® building bases was justified by what we consider the main capability of this solution, namely the possibility of integrating, within the same production space, the text proper with alternative representations, among which we underline the 3D scenarios. This was meant to work as a metaphor for the illustrations of "ink texts", allowing the child to perceive, recognize and understand the role of text illustrations, while additionally stimulating creativity in text production (a feature inherent to the Story Starter product, as written in its official description) and making the concrete building of non-textual representations, especially story illustrations, possible for the blind child. It is worth reinforcing that the sighted child also needs physical experimentation [23].

The appropriation of LEGO® blocks to represent Braille cells was adopted from the analogy with the need, determined by the LDWM, for the availability and treatment to/by the child of different graphic representations of letters, including their form (handwriting in the general case) and diverse fonts and sizes, among others. Our solution intends to help blind children in current literacy processes of language acquisition by representing possible variants of letter references. We expect that, from the concrete representation of the cell, in the form of LEGO® blocks or any other form that conserves its main characteristics, the blind child will be helped to grasp the significant, determinant features of the Braille system. In the context of Portuguese language teaching and acquisition (as occurs in natural languages in general), this need is associated with the concept of "neutralizing", the process by which the child establishes commutative pairs, making comparisons and substitutions, with the intention of identifying the language's signs. Our expectation is that the blind child will associate the Braille cell representations with real Braille cells in a way similar to that used by the sighted child to identify the available variants of a word's oral referents with its phonetic matrix [22].
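To make the neutralizing idea concrete on the digital side, the short Python sketch below (our own illustration, not part of the artifact described in this paper) models a Braille cell as the set of its raised dots, numbered 1 to 3 in the left column and 4 to 6 in the right column, and exposes the minimal contrast between two letters, the kind of commutative-pair comparison the child performs when swapping concrete blocks in the same position. The letter-to-dot assignments follow standard Braille; everything else is hypothetical.

from typing import FrozenSet

# Dot patterns (standard Braille) for a few letters used in Portuguese.
CELLS = {
    "a": frozenset({1}),
    "b": frozenset({1, 2}),
    "c": frozenset({1, 4}),
    "l": frozenset({1, 2, 3}),
    "p": frozenset({1, 2, 3, 4}),
}

def contrast(cell_a: FrozenSet[int], cell_b: FrozenSet[int]) -> FrozenSet[int]:
    """Dots that distinguish two cells: the minimal, feelable difference."""
    return cell_a ^ cell_b  # symmetric difference of the raised-dot sets

if __name__ == "__main__":
    # "l" and "p" differ only by dot 4: a single concrete contrast the
    # child can perceive when the two blocks are exchanged in a word.
    print(sorted(contrast(CELLS["l"], CELLS["p"])))  # -> [4]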

An academic poster published at SUI'14 [27] described a research process searching for ways for sighted children to model tactile books with LEGO® pieces and for those models to be converted into digital models that could be printed. These authors focused on image recognition, mainly of 3D pictures. Their methodological steps were: (i) having sighted children choose a book to model; (ii) using LEGO® pieces to construct a physical 3D interpretation of the book's contents; (iii) scanning the models with a 3D scanner in three views: top, front and side; (iv) extracting key features from the projections; (v) matching these features to retrieve visually similar 3D models from large repositories [27]. Though interesting for our own purposes, the solution described in [27] can only be thought of as part of a future visionary architecture, mainly because it seems to be rather content-dependent, together with the interaction elements proposed by the innovative work in [18].

3 A Concrete Attempt for an Artifact to Support Braille Literacy Teaching and Learning by the Direct Way Literacy Methodology

As a first solution, we created a Tangible User Interface (TUI) made of LEGO® blocks, together with LEGO® blocks adapted for the Braille alphabet, which could easily be connected to a computer through inner components. Our first attempt was justified by the discovery of shared elements and principles with Maria da Gloria Almeida's thesis, which proved the special relevance of good children's literature for blind children's literacy, and by her assertion of the need for concrete scenario manipulation for this same audience. The proposal had, as one of its innovative characteristics, the integration of text in Braille (by means of adapted LEGO® blocks) with 3D illustrations, made possible by the LEGO® Education Story Starter, which allows the creation of some well-known social characters (a policeman, a fireman, a boy, a witch, among others). A hypothetical usage scenario of this proposal is shown in Fig. 1. The scene consisted of a Braille text-like area, respecting what could be words and the spaces between them, and a 3D illustration composed of a witch carrying a broom and pursuing a running boy carrying a cat, presumably the witch's.

Fig. 1. The first TUI proposal built, tested and rejected by the blind teacher

This solution proposal was presented to the blind teacher as a test for the intended audience (blind children at the beginning of the literacy process, 6 to 7 years old) during a working meeting at her office. We gave her the built scenario (Fig. 1) and asked her to manipulate it and tell us what she could perceive of it as a Braille text representation for blind children of the hypothesized ages.

After having manipulated the platform and broken the "human" characters, she made decisive remarks that showed the inappropriateness of our proposal. First of all, she pointed out the need for the objects to be manipulated to be tough, even more so considering their intended audience. Also, the distance between the different scene objects must be enough to allow each object to be manipulated from every side. These observations caused the immediate failure of the 3D illustration proposal. Additionally, she pointed out that the LEGO® blocks could not be easily manipulated by the target children, since they were fastened too tightly to the base. It is worth noting that LEGO®'s intended tool for block removal demands manual and motor abilities that those children do not usually have, because of the lack of previous opportunities to manipulate concrete objects [19]. This report completely buried the tested proposal.

4 Towards an Adequate Solution

From the review of the related literature, added to our background in the Direct Way Literacy Methodology, we composed a set of guidelines. It is worth remarking that some of these guidelines are well known in the Human-Computer Interaction community, but we decided to include them because we thought they were relevant enough in their specific (context-dependent) form.

The design process and the tangible interface solution able to support blind children in taking advantage of the LDWM in the context of Braille must attend to the following four maxims:

  1. Research methodology – The joint work of designers and real teachers of blind children is a proven, sound research methodology;

  2. Literacy activities planning – The activities to be planned must be meaningful for the blind children and include opportunities for cooperative, collaborative and interactive work;

  3. The proper tangible interface – The device providing the human-computer interaction must be as natural as possible, attending to the requirements related to the blind children's real motor and cognitive situation that can grant accessibility, and including mobile linguistic components;

  4. The interaction – The interaction process must take advantage of multimodal and multimedia resources, in order to enhance communication, language acquisition and text production.

Based on these maxims, we developed a second solution built from well-known and widely used materials: a "bags" and "pockets" system with contact fixation of the three levels of elements (character symbols, words and text lines) on the working space. The scheme for a text row representation can be seen in Fig. 2.

Fig. 2. A row of the second solution

The solution includes:

  1. A white rectangular piece of cotton fabric, with its lateral borders folded up along the length to allow for the easy and precise placement of the Braille cells made from a thin rubber sheet;

  2. Tough vertical fixation seams to prevent the cells from slipping horizontally after insertion;

  3. A contact fixation strip on the top margin, allowing for the placement of three different marking ribbons, tactile (for the children's perception) and visual (for the system's recognition), corresponding to the different linguistic aspects currently in focus in the real classroom activity (for instance, nouns, adjectives and verbs);

  4. The same fixation system behind the word structure, allowing for the fixation of the words on the text row structure, which is, in turn, also mobile;

  5. Braille cells made of a thin rubber sheet, with the dots marked precisely in black plastic paste, with top and bottom edges to be inserted into the cell boxes, and with an arrow on the top edge to indicate the up direction for reading purposes;

  6. A panel to accommodate the whole text built by a child or by a group in a collaborative activity, with trails marked by contact-fixing material to guide the construction of the text lines, which can be fixed on the "blackboard", tied to a table or even fixed somehow on the floor.

The fabric, the thread and the sewing must be strong enough for the blind children to manipulate for at least one reading project period (about two months).
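As an illustration only (a speculative data model, not the implemented system), the Python sketch below shows one plausible way for the software side to represent a recognized row built from the components above: cells grouped into words, each word optionally tagged with the linguistic marking ribbon attached to it. The category labels are hypothetical.

from dataclasses import dataclass, field
from typing import FrozenSet, List, Optional

@dataclass
class BrailleCell:
    dots: FrozenSet[int]          # raised dots, numbered 1-6 as usual in Braille

@dataclass
class Word:
    cells: List[BrailleCell]
    ribbon: Optional[str] = None  # e.g. "noun", "adjective", "verb"

@dataclass
class TextRow:
    words: List[Word] = field(default_factory=list)

# Example: the word "bola" (ball) tagged with the "noun" ribbon.
bola = Word(
    cells=[BrailleCell(frozenset(d)) for d in ({1, 2}, {1, 3, 5}, {1, 2, 3}, {1})],
    ribbon="noun",
)
row = TextRow(words=[bola])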

The physical solution to be used for the input of the children's productions was taken from an idea proposed in a work on free play with music elements [31]: a TUI based on computer vision that can be downloaded, printed and used by anyone with a personal computer and a webcam. Using cheap, common materials, the solution can be adapted to our domain, since the camera can stay behind the panel, while the children are informed of its presence and concentrate on text reading and production. The software solution to be adopted is a program written in Python, using the computer vision library OpenCV [32] to process the visual rules of our TUI. The solution will also include a set of physical rules to satisfy the blind children's needs.
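As a rough illustration of what this OpenCV processing could look like (a sketch under our own assumptions, not the program itself; the color thresholds and category names are placeholders to be calibrated), the fragment below grabs a frame from the webcam behind the panel and locates the colored marking ribbons by simple color segmentation.

import cv2

# Hypothetical HSV ranges for the three ribbon colors (to be calibrated).
RIBBON_RANGES = {
    "noun":      ((100, 120, 70), (130, 255, 255)),  # blue-ish ribbon
    "adjective": ((40, 80, 70), (80, 255, 255)),     # green-ish ribbon
    "verb":      ((0, 120, 70), (10, 255, 255)),     # red-ish ribbon
}

def detect_ribbons(frame):
    """Return bounding boxes of ribbon-like regions, grouped by category."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    found = {}
    for label, (lo, hi) in RIBBON_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        # OpenCV 4.x: findContours returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > 500]  # ignore small noise
        if boxes:
            found[label] = boxes
    return found

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # webcam placed behind the panel
    ok, frame = cap.read()
    if ok:
        print(detect_ribbons(frame))
    cap.release()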

5 Conclusions and Future Work

The solution was built with the continuous support of the teacher from IBC, eliminating, by construction, the possibility of producing elegant technological solutions of no use in real blind children's classrooms. After discovering the concept of literacy we share with our consultant teacher, our first proposal integrated a few features collected from the literature into a physical platform. When she rejected that solution, we managed to propose the main elements of a tangible interaction platform made from known, cheap and easily available materials, which can act as a concrete support for blind children's literacy in Braille and can be used both in a stand-alone way and as the concrete part of the TUI.

In Brazilian public schools with few resources, the translation of concrete concepts is done with educational toys built from Ethylene Vinyl Acetate (EVA), as in maps or simple classical games. In this context, we can state that the solution built and presented here has, as one of its characteristics, its potential use within inclusive educational environments. This feature is socially relevant and, from a technological point of view, defines one more axis of innovation in our solution for blind children's literacy in Braille, since it can be appropriated both in special literacy time-period activities (which, for us, must complement genuine inclusive education because of the need for different conditions to grant equal opportunities) and in the context of inclusive education classrooms.

The solution under construction will allow recording, maintaining and recovering different versions of the children's textual productions. The physical part of the solution is currently being tested with blind children in the addressed scenarios, with initial indicators of success. Our future work will concentrate on integrating all the interface elements presented here with real interaction, in order to obtain a self-contained, working tool.