Abstract
A significant problem for individuals who are blind or visually impaired is the lack of access to graphical information. In this paper, we describe our work on components of a system to make this access available in real-time, on demand, and through effective means. We start by discussing our current work on converting visual diagrams and images into a representation that can be more effectively interpreted by individuals who are blind or visually impaired. We then describe previous and ongoing work in our laboratory on computer I/O devices we are developing to provide the given representation to the user immediately and interactively. Finally, we describe dynamic methods that we have developed to help manage the information presented more effectively given the constraints of the tactile system.
1 Introduction
Graphical information is increasingly used as the sole method for conveying information, whether because of the ease of capturing/creating, storing, and transmitting digital pictures and other graphics, or because of the growing acknowledgement that, for the majority of the population (i.e., sighted individuals), visual graphics communicate information more effectively than words. In fact, it is estimated that even in current textbooks over 70% of the information is relayed solely in graphical form (Hasty 2007). This has created an enormous obstacle for the more than 25 million individuals who are blind or visually impaired (American Foundation for the Blind website 2011), as there is no effective means of independent access to graphical information (as screen readers provide for text). This in turn limits these individuals’ advancement, or even placement, in their education and careers (only 38% of the more than 18.7 million adults of working age are employed; AFB website 2011), as well as their independence and quality of life in everyday living. Providing individuals who are blind or visually impaired with access to the content of these visual graphics would increase their independence and empower them at work, in school, at home, and at play.
Tactile diagrams are the most common alternate representation of visual graphics. Currently, most of these diagrams are made manually, whether by hand or by using a drawing program, and involve a complex method of development in order to be effectively interpreted by touch. Access to electronic tactile diagrams typically uses specialized microcapsule paper and a heater or a thermoforming machine to “print” the diagram. Some progress has been made in developing computer I/O devices for touch, analogous to those for vision and audition, but there are currently no affordable commercial devices. Advances in describing information that is normally presented graphically in written text or speech form can be very useful and are currently more accessible than the original graphics themselves. However, the ability to relay novel spatial forms, spatial patterns, and spatial relationships is typically lost when replacing the graphics with words.
Unfortunately, spatial information is usually very important when providing instructions for commercially available devices and machinery (whether for work, school, home, or play) or devices that the user may be developing themselves, whether alone or as part of a team. Photographs are commonly taken in a variety of scientific fields from which measurements, spatial relationships, and descriptions may need to be derived. However, the process of turning these photographs and diagrams into precise word descriptions is typically considered an important skill of the job as opposed to being incidental to it. In addition, diagrams are often used to describe biological organisms, processes, weather patterns, maps, and potentially unfamiliar content that is very difficult to put into words. Furthermore, basic graphs are used extensively in mathematics and science.
Pictures and diagrams are also important in the development of young children. For them, it is difficult to replace pictures with words as their vocabulary is not yet fully developed. In fact, young children acquire a basic vocabulary, as well as basic relational concepts, such as above and below, by looking at pictures. In contrast to the staggering resources for sighted children, there is a shortage of accessible material for children who are blind and visually impaired. This is critical, as serious limitations in the variety of information to which a child is exposed can negatively impact a child’s “cognitive, emotional, neurological and physical development” (U.S. Dept. of Health and Human Services 2005).
2 Overview of Approach
An approach to the presentation of graphical information to individuals who are blind or visually impaired needs to be significantly different from that of multi-modal interfaces that include vision. This is not only due to the dominance of vision in multi-modal interfaces but also the extraordinary differences in the information processing capacities, strengths, and weaknesses between the senses. Our laboratory’s approach for presenting graphical information primarily focuses on using haptics, taking into account its strengths and weaknesses, as well as the design recommendations and methods used by teachers of the visually impaired (TVIs) for creating tactile graphics for students. In addition, we use cognitive load theory to motivate the use of audio-haptic displays.
As compared to vision, touch has two primary weaknesses that we have taken into account in our approach. First, the spatial resolution of touch is significantly less than that of vision (Loomis et al. 2012). This suggests that spacing between elements should be larger than for vision and that details should be left out of any initial representation. It also suggests that software to provide zooming may help overcome this problem by increasing the size of local areas of the picture. Second, studies that have considered raised line drawings (2D geometric information) have found that the field of view for vision is considerably larger than that of touch. Most work examining this issue has found that the tactile field of view does not extend beyond a single finger (e.g., Loomis et al. 1991). This means that these diagrams are interpreted sequentially, one finger at a time, which is cognitively demanding and has limited access to top-down information processing.
However, the strength of the haptic system is its ability to simultaneously process material properties across multiple fingers (Lederman and Klatzky 1997). Work in our laboratory (Fig. 1 and Sect. 5) also found this to be true for interpreting tactile diagrams, where information needs to be integrated to understand what is in the diagram (Burch and Pawluk 2011). In this study, performance in an object identification task did not improve when multiple fingers were used (compared to a single finger) for raised line drawings (solely geometric information). However, performance did increase when texture was utilized for the representation, especially with multiple fingers (Fig. 1c).
From the field of making tactile graphics, the most effective method of conveying information through tactile diagrams (Edman 1992), and the one mostly used by TVIs, is to create collages using different types of materials (string, fabric, etc.) to represent different items in the diagram. As with the psychological evidence, this suggests the importance of using texture (material properties) for interpreting tactile diagrams. The resulting tactile diagram also needs to be very different than the original visual diagram, or even an outline drawing, if it is to be used effectively. In addition to simplifying a diagram into objects and object parts, diagram makers are advised (Braille Authority of North America 2010) to: (a) eliminate unnecessary parts, (b) separate a graphic with too many components into sections or layers, (c) determine if objects or shapes need to be exactly reproduced or can be replaced by simpler symbols, (d) enlarge the diagram to fit the page, and (e) reduce clutter (where clutter is defined as when components of the graphics are too close together or not needed for the purpose of the task) (Fig. 2).
Perhaps the most significant advantage of using both audio and haptic feedback for presenting graphical information is that using both sensory systems is expected to improve performance by reducing cognitive load. Each sensory system is posited to have its own working memory, which can work simultaneously, mostly in parallel with the others (Samman and Stanney 2006). Although working memory is not doubled, the amount does significantly increase. Work in our laboratory (described in Sect. 5) has used this concept to improve performance on tasks involving maps that contain relational information, such as geography, weather, agricultural industry, etc. Another potential advantage is that using redundant dimensions in different modalities that are integral could improve retention. This could be beneficial in two cases: when cues are given near threshold and when there are noise sources. However, under typical conditions where neither of these is true, we have found that blind and visually impaired participants disliked this method because they found the feedback to be too much sensory stimulation (Adams et al. 2015).
To provide effective independent access to tactile graphics, one must consider the automation of the whole process and the provision of access tools (Fig. 3). If we imagine a diagram appearing on a page of a document or on a web page, it is desirable that a user who is blind or visually impaired can access this information instantaneously. This starts with the need to rapidly convert the visual diagram into an appropriate tactile form. Rather than physically printing a tactile diagram, which is expensive and time consuming, computer I/O devices that can instantly and effectively provide access to a virtual tactile (or audio-tactile) diagram are desirable. One advantage of using a computer I/O device to display the information is that we can then provide tools for the user to dynamically interact with the diagram. This is necessary if we want to avoid overwhelming the user with information at a single point in time (which quickly becomes unmanageable through touch), while still providing access to all the information (in contrast to current techniques, where, to reduce the number of diagrams to make, TVIs remove detail that a teacher, for instance, says is not needed). In the next few sections, we describe the work in our lab addressing these different aspects of the needed system.
Dynamic, immediate access to diagrams must start with the conversion of the electronic visual diagram into a form that can be effectively interpreted by individuals who are blind or visually impaired. This could either be a virtual tactile, audio or audio-tactile diagram. This process is much more than simply taking the outline of objects if it is to be effectively used. If one is to provide immediate, refreshable access, effective interfaces need to be developed for the computer. Audio interfaces are already very good, while tactile/haptic interfaces are still in need of improvement. Although not needed for immediate access to diagrams, software algorithms can be beneficial to manage the data being explored.
3 Visual to Tactile Diagram Conversion
Visual to tactile diagram translation is a key first step: if this cannot be done effectively and quickly, then the development of tools for dynamic access becomes a moot point. Several research groups (e.g., Ladner et al. 2005) have developed software algorithms to automatically convert visual graphs, such as line graphs and bar charts, into tactile form. There is both commercial and research software (e.g., the Firebird Graphics Editor) to assist in developing graphics more quickly. Way and Barner (1997) examined the use of basic segmentation techniques (blurring, edge detection, adaptive filtering using K-means segmentation, median filtering, and their combinations) to automatically convert visual greyscale photographs to tactile diagrams.
We are developing a rapid visual to tactile diagram conversion technique which includes more modern segmentation techniques and the additional key step of automatically simplifying the diagram. To this end, the intent is to mimic the resultant diagrams produced by experienced TVIs. For reference, we asked two experienced TVIs to translate approximately 30 photographs and 30 diagrams from their visual representation to a tactile representation that they would use with their students. Photos and diagrams were taken from those commonly encountered in everyday life and at school. The TVIs developed their diagrams using Adobe Photoshop with a validated texture set (Ferro and Pawluk 2015). After completion of the diagrams, the TVIs discussed their representations to produce a single standard for each diagram (Fig. 3b). Diagrams were then produced on microcapsule paper.
To date, we have examined the application of segmentation algorithms for use in visual to tactile diagram conversion and as a first stage before we apply simplification techniques. The objective was to find a method that came closest to producing closed regions representing objects and object parts. The segmentation classes considered were: edge detection, clustering, graphing, contour detection, and differential equations. Edge detection was eliminated as a method to explore due to its susceptibility to noise and its lack of closed contours for region boundaries, which are required to apply texture (which we found key to tactile diagram interpretation, Sect. 2). Graphing methods were also quickly eliminated as they are computationally expensive and cannot be completed in real time. K-means (Luo et al. 2003) was used to represent clustering methods, as Mean shift (Comaniciu and Meer 2002) was found to be computationally too expensive.
We examined the use of K-means by adapting the feature space to extend beyond color to include texture. We also explored nontraditional feature space distance measures such as the Mahalanobis distance. Unfortunately, K-means lacks components relevant to image space, such as proximity. Level-set (e.g., Brox and Weickert 2004) was used to represent differential equation methods as it builds on the other algorithms and has better speed and accuracy. Previous versions of this method considered pixel intensity, proximity to an intensity gradient, and contour stiffness as features in the algorithm. We added the use of texture gradients and examined different initialization structures, feature space and image space distance measures (Euclidean, dihedral, normalized Euclidean, Mahalanobis), feature weights, gradient extractors (central distance, monogenic signal, probability of boundary), a variety of filter sizes and configurations, and new models we developed for merging/splitting segments for better convergence. For both K-means and Level-set, we extensively explored the parameter settings, using the best-performing sets of parameters in our algorithm comparisons. For contour detection, we used the global probability of a boundary with an oriented watershed transform (gPb-owt) developed at Berkeley (Arbeláez et al. 2011) with their optimized parameter set because of its recent success over many other segmentation algorithms in visual segmentation (Fig. 4).
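As an illustration of the kind of extended-feature clustering described above, the following minimal Python sketch clusters pixels on a combined color and local-texture feature space. It assumes scikit-image and scikit-learn are available; the feature set, texture estimate, and weighting shown are illustrative placeholders rather than the exact configuration we explored.

```python
import numpy as np
from skimage import io, color, filters
from sklearn.cluster import KMeans

def segment_color_texture(image_path, n_segments=6):
    """Cluster pixels on a combined color + local-texture feature space.

    A simplified stand-in for the extended K-means described in the text:
    L*a*b* color plus a crude texture estimate (local standard deviation
    of intensity) form the feature vector for each pixel.
    """
    rgb = io.imread(image_path)[..., :3]          # drop any alpha channel
    lab = color.rgb2lab(rgb)                      # perceptually uniform color
    gray = color.rgb2gray(rgb)

    # Local standard deviation of intensity as a simple texture feature.
    mean = filters.gaussian(gray, sigma=3)
    sq_mean = filters.gaussian(gray ** 2, sigma=3)
    texture = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))

    h, w = gray.shape
    features = np.column_stack([
        lab.reshape(-1, 3),
        texture.reshape(-1, 1) * 100.0,           # placeholder texture weight
    ])

    labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(features)
    return labels.reshape(h, w)
```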
The segmentation techniques described above produce images that display the outlines of the segmented parts. However, as described earlier (Sect. 2), texturing the regions significantly improves performance over the use of raised line drawings. Segments were therefore textured according to the size of the area covered: the largest area, assumed to be the background, received no texture, and the next six largest areas received preselected textures. The algorithm used the same texture set as that used by the TVIs.
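The area-based texture assignment can be sketched as follows, assuming a label image such as the one produced by the clustering sketch above. The texture IDs here are placeholders for the validated texture set; the actual assignment rules we used may differ in detail.

```python
import numpy as np

def assign_textures(label_image, n_textures=6):
    """Map segment labels to texture indices by region area.

    Largest region -> 0 (background, no texture); the next n_textures
    largest regions -> texture IDs 1..n_textures; anything smaller -> 0.
    The IDs stand in for entries of a preselected texture palette.
    """
    labels, counts = np.unique(label_image, return_counts=True)
    order = labels[np.argsort(-counts)]            # largest region first

    texture_of = {int(order[0]): 0}                # background: no texture
    for rank, seg in enumerate(order[1:n_textures + 1], start=1):
        texture_of[int(seg)] = rank
    for seg in order[n_textures + 1:]:
        texture_of[int(seg)] = 0                   # leftover regions untextured

    lut = np.vectorize(texture_of.get)
    return lut(label_image)
```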
The algorithms were compared in performance using both scoring metrics developed to assess segmentation performance and participant testing with individuals who are visually impaired. Some of the scoring metrics were those commonly used for visual segmentation: the F-measure and the probabilistic rand index (PRI; Everingham and Winn 2012). Regression on PRI was also used to predict how individual parameters affected PRI while accounting for noise and outliers, rather than using the single best value, which may be due to chance. Time was also included because of the crucial need to approximate real time in converting diagrams.
One difficulty with these metrics is that they do not take into account the differences between the visual and tactile systems. Previously, Loomis (1990) developed a method that accounts for the difference in spatial resolution between vision and touch. This method involves applying a low-pass filter to the visual image that corresponds to the difference in resolution between vision and touch; a dissimilarity-type distance is then measured between the filtered versions of the algorithm's result and the standard reference. We will refer to this as the Loomis Distance. However, this still does not take into account the more limited field of view of touch or the cognitive processing of the image. Hence, we also experimentally tested the ability of individuals who are blind or visually impaired to identify the object(s) in images produced by the different algorithms.
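The Loomis Distance can be approximated as in the sketch below: both images are low-pass filtered to mimic the coarser spatial resolution of touch, and a simple dissimilarity is computed on the filtered results. The filter width and the RMS measure are illustrative assumptions, not the exact values from Loomis (1990) or our implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def loomis_distance(result, reference, sigma_px=8.0):
    """Compare two binary line drawings after blurring to 'tactile' resolution.

    sigma_px stands in for the vision-to-touch resolution difference and
    would need to be derived from the physical size of the printed diagram.
    Returns a normalized RMS difference between the blurred images.
    """
    a = gaussian_filter(result.astype(float), sigma=sigma_px)
    b = gaussian_filter(reference.astype(float), sigma=sigma_px)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```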
For experimental assessment, in order to decrease testing time to three 3-hour sessions, only the best parameter sets for gPb-owt and Level Set were selected to be tested. These were compared to performance with the standard reference set. For each participant, each algorithm condition received a block of tactile diagrams (based on 12 visual photos and 12 visual diagrams) drawn from a pool corresponding to 36 different visual photos and 36 different visual diagrams. To avoid learning effects, the diagrams were drawn randomly (without replacement) from triplets of photos or diagrams on the same topic and with approximately the same difficulty of identification in the standard reference representation (e.g., the food triplet consisted of bananas, apples and ice cream). The order of presentation of the algorithms was counterbalanced across participants.
The task was to identify the shape of the object(s) in the diagram, their name, and their category. The only hint participants were given was whether the diagram represented a single object, multiple objects, or a scene. Although descriptive words are often provided with a diagram, none were included here; by this means, we chose the most difficult condition for exploring a diagram, namely, understanding an unknown object. Participants were all blind or visually impaired, and were blindfolded so that they relied solely on tactile information. Seven participants completed the experiment.
The comparison between the different segmentation methods using the scoring metrics mentioned above is given in Table 1. The results for the best-performing parameter set for each metric are given. Based on these results, all computational algorithms performed relatively similarly, although notably differently from the TVIs on the probabilistic rand index and Loomis distance. To determine which metric should be used for selecting the parameter set for further study, we had a laboratory intern judge the viability of each metric for tactile information processing based on their observation of similarity to the reference image, clarity, distinguishability, continuity, and amount of noise. The intern rated each picture in the set corresponding to the given metric on these items on a scale of 1 (best) to 5 (worst). Both the metrics and human inspection suggest that further work is needed to produce tactile diagrams closer to those of the Reference.
However, we wanted to examine how well individuals who are blind or visually impaired can understand diagrams after only the segmentation techniques are applied. We were particularly interested in understanding which of the problems identified visually impeded tactile identification and which were unimportant given the differences between the visual and tactile sensory systems. The parameter sets based on the regression analysis were chosen because of that approach's more robust treatment of the PRI metric. Due to the time-consuming nature of human testing, only two of the algorithms were used in our experiment: the Level Set algorithm and the gPb-owt-ucm algorithm. This was because, when examining the internal consistency of each algorithm, the K-means algorithm was the most inconsistent, whereas the Level Set algorithm was more consistent and the gPb-owt-ucm algorithm was the most consistent.
Perhaps the most surprising result from the human testing is that participants had less of a problem with the Level Set algorithm, which typically shows a fair amount of noise (Fig. 3c), than with gPb-owt. In fact, users did not find the noise particularly distracting. In contrast, the gPb-owt algorithm had a tendency to underrepresent the needed lines, which made it more difficult for participants to interpret. For both algorithms, “stray regions”, which were separated from the object to which they belonged, caused the most difficulty in interpretation and will be further considered in the simplification process. Another surprising result is that most participants felt there was too much use of texture and that the textures presented should not be as strong as the edge lines. This suggests that, although textures improve performance (see Sect. 2), edge lines are still a critical component for diagram interpretation.
4 Tactile Hardware
Visual and auditory feedback displays have long benefited from commercial devices developed for other purposes (such as entertainment). Tactile and haptic displays have been much more limited in their development. To a large degree this has to do with the difficulty in developing feedback for a sense that requires physical interaction with the interface. A review of currently existing tactile and haptic displays that have been used to present graphical information is given in (Pawluk et al. 2015) and (Vidal-Verdu and Hafez 2007). Here we will describe three different displays that have been developed in our lab: fingertip vibratory displays, a moving pin display and a more complex display merging multiple approaches together.
The fingertip vibratory displays (Burch and Pawluk 2011) were designed to be low-cost vibratory feedback displays that could each be attached to a pad of a finger, together providing separate feedback to different fingers, or even to different segments of a finger. Each feedback device senses the color underneath it using a color sensor and renders it as a texture (vibration) on the tip of the finger (Fig. 1). The different colors on the diagram in the visual display were used to format the object so that different textures distinguished different parts of an object as well as their orientation, the two primary difficulties with raised line drawings; this was similar to parallel work by Thompson and her colleagues (2003), who developed their representation for paper diagrams constructed by hand. As described in Sect. 2, the use of texture with multiple fingers greatly improved performance. In fact, it was surprising how well these simple, low-cost devices performed on probably the most difficult task that can be performed (i.e., an object identification task without cuing).
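Conceptually, each fingertip device maps the sensed color directly to a vibration pattern. The sketch below shows one plausible color-to-vibration mapping; the palette, thresholds, and the hardware interface itself are hypothetical assumptions, not the exact scheme used in our devices.

```python
import colorsys

# Hypothetical palette: a handful of hues, each bound to a distinct
# vibration "texture" (frequency in Hz, normalized amplitude).
TEXTURES = [
    (40.0, 0.6),    # coarse, slow texture
    (80.0, 0.8),
    (150.0, 0.7),
    (250.0, 1.0),   # fine, buzzy texture
]

def texture_for_color(r, g, b):
    """Pick a vibration texture from the RGB value sensed under the finger.

    Near-white (background) maps to no vibration; otherwise the hue
    selects one of the preset textures. A placeholder mapping only.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.15 and v > 0.85:          # near-white: background, no texture
        return (0.0, 0.0)
    index = int(h * len(TEXTURES)) % len(TEXTURES)
    return TEXTURES[index]
```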
The moving pin display is a “mouse-like” display (Fig. 5) that can directly control vibratory feedback to individual pins in the display (e.g., Headley et al. 2011). It is similar to other displays in presenting a moving matrix of pins. However, the frequency response range of our device is quite large in order to ensure that textures can be displayed. We also examined the use of four amplitude levels; however, these were not easily distinguished, and so only binary amplitude is used. The device consists of a single electronic Braille cell that is mounted on a hollow case. The cost of the device is less than $400 to prototype, which is a much more viable option compared to $50,000 commercial multi-pin graphics displays. These latter displays also cannot provide vibratory feedback, as they work through the use of shift registers to turn pins on and off, to conserve costs, rather than direct drive.
The most important difference between this moving pin display and the previously mentioned fingertip display is that this display can provide distributed spatial information across the fingertip, which is expected to improve performance over a single contact point. Most of that improvement is expected to come from changing the feedback from one point of contact to four (Weisenberger 2007). Besides the use of vibration to provide texture-like feedback to represent textured diagrams, the most important concept introduced by this display, compared to those previous, is the use of absolute positioning at the location of the pins rather than relative positioning at the location of the palm. We found these improvements significantly improved performance in understanding diagrams by improving the accuracy of the spatial representation (Rastogi et al. 2009).
The current incarnation of our display design aims to combine the benefits of texture feedback over multiple fingers, as in our fingertip displays, with multi-pin feedback at each finger, as in our moving pin display. Based on our work with multi-finger feedback, we believe feedback to two fingers of each hand (one “mouse” per hand) will produce the largest performance benefit while limiting the increase in complexity. However, one difficulty with both of these feedback displays is that edges are not easily tracked. Although we found that using textured areas in addition to indicating the edges significantly improved task performance, edge information is still important. With the above-mentioned devices, users found the need to move the devices side to side across edges to obtain spatial information about them. This greatly increased the exploration time compared to that when performing tasks with real physical diagrams. One possibility to improve smooth tracking and still allow exploration of textured areas is to use soft haptic fixtures. Currently we are developing a mobile, force feedback mouse using an omni-drive system (Lazea and Pawluk 2016) to provide these soft fixtures. Two Braille cells will be mounted within the mouse casing to provide textured tactile feedback to two fingers.
5 Dynamic Interaction
Direct interaction with virtual tactile diagrams on a computer is advantageous when considering access speed and long-term costs. However, another advantage of this interaction is that a user can manage the limited information processing capacity of touch while still having access to all the information in a diagram. This can be achieved by providing tools for the user to manipulate the diagram and decrease the amount of information observed at a time. Currently, TVIs who develop diagrams for students remove material from the diagram not related to the lesson plan for simplification and provide magnified versions of some components, again based on the lesson plan. However, this is problematic if the teacher changes their plan during class, and it also limits incidental learning. For dynamic interaction on a computer, there is no reason not to store all the information of a diagram in memory while allowing the user to limit how much they explore at a time. We have been developing tools that allow users to zoom more effectively on a diagram and to simplify it in different manners through a point and click.
Most previous research on zooming has considered the application of visual methods (both smooth and step zooming) to tactile diagrams (Magnuson and Rassmus-Grohn 2003; Walker and Salisbury 2003; Ziat et al. 2007). However, unlike vision, it is not possible to take a quick glance at a tactile diagram and decide whether the level of zoom is appropriate: instead, it is a time-consuming process. Our objective was to develop a zooming method that would only produce zoom levels significantly different in content from the previous level. Schmitz and Ertl (2010) focused on using the density of streets in a city map to select zoom levels; density works well for street maps.
However, we have focused more on pictures and images, for which we wish to avoid clipping of objects or object parts, because otherwise the displayed area becomes exceedingly difficult to interpret tactually. For these cases, we considered a diagram hierarchy based initially on objects and object groups (e.g., a house with flowers beside it), which are then broken down into object components (e.g., first the house and each flower, then the door, windows, etc. of the house and the petals, stem, etc. of the flowers). We then allowed users to zoom between the different levels of this picture/image hierarchy. Experimental testing with users who are visually impaired found that our image hierarchy method significantly increased performance and usability compared to step and smooth zooming methods (Rastogi and Pawluk 2013a).
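The hierarchy-based zooming can be thought of as stepping through a tree of objects and object parts, as in the illustrative sketch below; the node structure, names, and bounding boxes are assumptions for illustration rather than our implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One object or object part, with its bounding box in diagram coordinates."""
    name: str
    bbox: tuple                      # (x0, y0, x1, y1)
    children: List["Node"] = field(default_factory=list)

class HierarchyZoom:
    """Step zooming between levels of an object hierarchy (illustrative)."""

    def __init__(self, root: Node):
        self.path = [root]           # path from the whole diagram to the focus node

    def zoom_in(self, child_name: str):
        """Focus on one child of the current node; its bbox becomes the view."""
        for child in self.path[-1].children:
            if child.name == child_name:
                self.path.append(child)
                return child.bbox
        return self.path[-1].bbox    # no such part: view unchanged

    def zoom_out(self):
        """Return to the parent level of the hierarchy."""
        if len(self.path) > 1:
            self.path.pop()
        return self.path[-1].bbox

# Example: a house with a door and a window, plus a flower beside it.
house = Node("house", (10, 10, 60, 80),
             [Node("door", (30, 50, 40, 80)), Node("window", (15, 20, 25, 30))])
scene = Node("scene", (0, 0, 100, 100), [house, Node("flower", (70, 40, 85, 80))])
view = HierarchyZoom(scene)
print(view.zoom_in("house"))         # -> (10, 10, 60, 80)
```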
We have developed and examined the use of two types of tools for simplification: boundary simplification and content simplification. Boundary simplification was motivated by the fact that TVIs do simplify shape boundaries, if possible, to help with exploration. We found this to be particularly relevant for our developed tactile I/O devices, as users had previously commented that it was much easier to track straight lines than more squiggly ones with these devices (Fig. 6). We considered content simplification in terms of a geographical map that may have cities, roads, weather patterns, crops, and industries. Such a map, as is, has too much clutter to be easily interpreted. However, a user often does not need to look at more than a couple of aspects of the map at a time (e.g., weather patterns and crops). Allowing the user to remove the other information to answer the question, but still have that information available for other questions, seems desirable.
Fig. 6. Top left: original geographical drawing of a country with three states; top right: boundary simplification to produce straight lines; bottom left: original geographical diagram indicating cities, roads, resources, and topology; bottom right: simplified diagram showing only road and topological boundaries.
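Boundary simplification of the kind shown in Fig. 6 can be approximated with a standard polyline-simplification step; the self-contained Ramer-Douglas-Peucker sketch below is one such approximation, not the specific method we used, and the tolerance would need to be tuned to the resolution of the tactile display.

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify_boundary(points, tolerance):
    """Ramer-Douglas-Peucker simplification of a polyline (list of (x, y))."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]             # flatten this stretch
    left = simplify_boundary(points[:index + 1], tolerance)
    right = simplify_boundary(points[index:], tolerance)
    return left[:-1] + right                       # join without duplicating the split point
```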
We found both simplification methods helpful: boundary simplification for answering general questions about borders on a geographical map, and content simplification for answering relational questions (Rastogi and Pawluk 2013b). The latter was also found to improve when feature sets (e.g., weather patterns and crops) were divided between the auditory and tactile modalities, presumably due to the improved ability to handle the cognitive load by using two senses.
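Content simplification can be viewed as storing the map as named layers and rendering only those the user has switched on, as in the minimal sketch below; the layer names and features are examples only, not our implementation.

```python
class LayeredMap:
    """Store map features in named layers and render only the enabled ones."""

    def __init__(self, layers):
        # layers: dict mapping layer name -> list of features (any representation)
        self.layers = layers
        self.enabled = set(layers)            # everything visible by default

    def show_only(self, *names):
        """Keep only the requested layers visible (e.g. 'weather' and 'crops')."""
        self.enabled = {n for n in names if n in self.layers}

    def visible_features(self):
        """Features that should currently be rendered on the tactile display."""
        return [f for name in sorted(self.enabled) for f in self.layers[name]]

# Example: answer a weather-vs-crops question without the other clutter.
state_map = LayeredMap({
    "borders": ["state outline"],
    "cities":  ["capital", "port city"],
    "roads":   ["highway 1"],
    "weather": ["rain belt"],
    "crops":   ["wheat region"],
})
state_map.show_only("borders", "weather", "crops")
print(state_map.visible_features())
```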
6 Conclusions
Being able to have immediate access to any and all diagrams is expected to improve the accessibility of information for individuals who are blind or visually impaired. This is expected to allow these individuals more independence whether at work, school or home. Although the algorithms being developed for the automatic conversion from visual to tactile diagrams described here used microcapsule paper, the produced virtual diagrams can be used for computer I/O devices as well. For the devices that we have described, one would simply use vibratory feedback to produce pseudo-textures rather than the physical textures for the paper. However, more work is needed on the automatic conversion methods before the components are integrated into a complete system.
References
Adams, R.J., Pawluk, D.T.V., Fields, M.A., Clingman, R.: Multimodal Application for the Perception of Spaces (MAPS). In: ACM ASSSETS 2015, Lisbon, Portugal, 26–28 October (2015)
Arbeláez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33(5), 898–916 (2011)
Brox, T., Weickert, J.: Level set based image segmentation with multiple regions. In: Rasmussen, C.E., Bülthoff, Heinrich H., Schölkopf, B., Giese, Martin A. (eds.) DAGM 2004. LNCS, vol. 3175, pp. 415–423. Springer, Heidelberg (2004). doi:10.1007/978-3-540-28649-3_51
Burch, D., Pawluk, D.: Using multiple contacts with texture-enhanced graphics. In: World Haptics 2011 Conference Proceedings, Istanbul Turkey, 21–24 June (2011)
Comaniciu, D., Meer, P.: Mean shift: a Robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603–619 (2002)
Everingham, M., Winn, J.: The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Development Kit, pp. 1–32 (2012)
Ferro, T., Pawluk, D.: Developing tactile diagrams with electronic drawing programs using a validated texture palette. In: AER Annual Conference on Becoming Agents of Change (2015)
Hasty, L.: Personal Communication (2007)
Headley, P.C., Hribar, V.E., Pawluk, D.T.V.: Displaying braille and graphics on a mouse-like tactile display. In: ACM ASSETS 2011, Dundee, Scotland, 24–26 October (2011)
Ladner, R.E., Slabosky, B., Martin, A., Lacenski, A., Olsen, S., Groce, D., Ivory, M.Y., Rao, R., Burgstahler, S., Comden, D., Hahn, S., Renzelmann, M., Krisnandi, S., Ramasamy, M.: Automating tactile graphics translation. In: Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility - Assets 2005, vol. 150 (2005)
Lazea, A., Pawluk, D.: Design and testing of a haptic feedback active mouse for accessing virtual tactile diagrams. In: RESNA 2016, Arlington, VA, 12–14 July (2016)
Lederman, S.J., Klatzky, R.L.: Relative availability of surface and object properties during early haptic processing. J. Exp. Psychol. Hum. Percept. Perform. 23(6), 1680–1707 (1997)
Loomis, J.M.: A model of character recognition and legibility. J. Exp. Psychol. Hum. Percept. Perform. 16(1), 106–120 (1990)
Loomis, J.M., Klatzky, R.L., Lederman, S.J.: Similarity of tactual and visual picture recognition with limited field of view. Perception 20, 167–177 (1991)
Loomis, J.M., Klatzky, R.L., Giudice, N.A.: Sensory substitution of vision: importance of perceptual and cognitive processing. In: Assistive Technology for Blindness and Low Vision. CRC Press, Boca Raton (2012)
Luo, M., Ma, Y.-F., Zhang, H.-J.: A spatial constrained k-means approach to image segmentation. In: Proceedings of the 2003 Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, 2003 and Fourth Pacific Rim Conference on Multimedia, vol. 2, pp. 738–742 (2003)
Magnuson, C., Rassmus-Grohn, K.: Non-visual zoom and scrolling operations in a virtual haptic environment. In: Eurohaptics 2003, Dublin, Ireland, 6–9 July (2003)
Pawluk, D.T.V., Adams, R.J., Kitada, R.: Designing haptic assistive technology for individuals who are blind or vision impaired. IEEE Trans. Haptics 8(3), 258–278 (2015)
Rastogi, R., Pawluk, D., Ketchum, J.M.: Issues of using tactile mice by individuals who are blind and visually impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 18(3), 311–318 (2009)
Rastogi, R., Pawluk, D.: Intuitive tactile zooming for graphics accessed by individuals who are blind and visually impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 21(4), 655–663 (2013a)
Rastogi, R., Pawluk, D.: Development of an intuitive haptic zooming algorithm for graphical information accessed by individuals who are blind and visually impaired. Assistive Technol. 25(1), 9–15 (2013b)
Samman, S.N., Stanney, K.M.: Multimodal interaction. In: Karwowski, W. (ed.) International Encyclopedia of Ergonomics and Human Factors, 2nd edn., vol. 2. Taylor and Francis, Boca Raton (2006)
Thompson, L.J., Chronicle, E.P., Collins, A.F.: The role of pictorial convention in haptic picture perception. Perception 32(7), 887–893 (2003)
Vidal-Verdu, F., Hafez, M.: Graphical tactile displays for visually-impaired people. IEEE Trans. Neural Syst. Rehabil. Eng. 15(1), 119–130 (2007)
Walker, S., Salisbury, J.K.: Large haptic topographic maps: marsview and the proxy graph algorithm. In: ACM Siggraph 2003, pp. 83–92 (2003)
Way, P., Barner, K.E.: Automatic visual to tactile translation–Part I: human factors, access methods, and image manipulation. IEEE Trans. Rehabil. Eng. 5(1), 81–94 (1997)
Weisenberger, J.M.: Changing the haptic field of view: tradeoffs of kinesthetic and mechanoreceptive spatial information. In: World Haptics Conference 2007, Japan (2007)
Ziat, M., Gapenne, O., Stewart, J., Lenay, C., Bausse, J.: Design of a haptic zoom: levels and steps. In: World Haptics Conference 2007, pp. 102–108 (2007)
Acknowledgements
The work presented here was funded by NSF IIS Grants #1218310 and #0712936, as well as NSF CBET Grant #0754629. We would also like to thank Megan Lavery, Janice Johnson and Kit Burnett for their assistance with the work on automatic visual to tactile diagram conversion.