SENSORY ENHANCEMENT AND SUBSTITUTION (2024)

Blind people must depend on nonvisual senses for information to help them locate and identify objects and persons, to guide the development of their personal relations, to regulate their motor behavior in space, and to provide an overall conceptual organization of their spatial environments. Determining the role, or roles, of other senses in compensating for the loss of visual information should become a major focus of scientific inquiry. There is an urgent need to determine how and to what extent sensory substitution or enhancement techniques can best be used to prevent or compensate for deficiencies in spatial motor behaviors in persons with visual impairments.

Mobility involves more than just a resolution of the problem of getting from here to there. Many kinds of sensory information have to be dealt with in both the immediate or near environment and the more remote environment, and, in both environments, there is information that requires high resolution. In the immediate environment, within touch by the body or with a cane, not much resolution is required to determine if a person is present or if a chair is occupied; it requires more resolution, however, to read the numbers on a hotel room door or to know a person's identity. The same is true of the more remote environment. It does not require much resolution to know that a building is present or that one is near a street with traffic present, but to know the name of a building or a street, and whether it is safe to cross the street, requires higher resolution of the sensory information. Both environments require appropriate orientation to the stimuli present in them and certain mobility skills in order to move about.

The visual system is capable of resolving information in both the immediate and remote environments, but this is not always true for the other sensory systems, such as the auditory, somatosensory, and kinesthetic systems. Use of these other modalities requires that a person pay more careful attention to the spatial stimuli present in the environment and the use of sensory aids, especially for information requiring high resolution. Although learning is necessary to use spatial information acquired through any sensory modality, there are special problems in the absence of visual information or with the use of sensory aids. The information needed to locate something in the environment may be quite different from the information needed to identify it. The problems posed by sensory enhancement are different from the problems posed by sensory substitution, as are the problems posed by age and by type of visual impairment.

THE VISUAL SYSTEM

What are the visual requirements of mobility? We consider separately visual information gathered from immediate and more remote environments.

The Immediate Environment

By immediate environment, we mean information within a stride or two (perhaps the space defined by the extension of the long cane). Of primary concern within the immediate environment is obstacle avoidance. Experimental data exist concerning the visual requirements of obstacle avoidance. Pelli and Serio (1984), at the Institute for Sensory Research, Syracuse University, have studied how visual restrictions of several types limit the ability of subjects to traverse a maze constructed of three-dimensional obstacles. They have found that the visual requirements of this task are very low, that is, substantial restrictions can be tolerated before performance is affected—fields down to 10 degrees, contrast reduction by more than a factor of 10, and spatial-frequency bandwidth of less than 1 cycle per degree. These findings indicate that very little spatial information is required for obstacle avoidance and that most people with low vision will be able to perform this task. Marron and Bailey (1982) have shown that visual field and contrast sensitivity are better predictors of performance on orientation and mobility tasks than acuity.
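The quantitative limits reported by Pelli and Serio can be summarized as a simple screening check. The sketch below is purely illustrative; the function name and pass/fail framing are ours, and the three thresholds are the approximate values quoted above.

```python
# Illustrative sketch only: the thresholds are the approximate limits
# quoted from Pelli and Serio (1984); the function itself is hypothetical.

def meets_obstacle_avoidance_minimums(field_deg, contrast_fraction, bandwidth_cpd):
    """Return True if a visual profile exceeds the approximate minimums
    below which maze (obstacle-avoidance) performance began to degrade.

    field_deg         -- visual field diameter, degrees
    contrast_fraction -- residual contrast relative to normal (1.0 = full)
    bandwidth_cpd     -- spatial-frequency bandwidth, cycles per degree
    """
    return field_deg >= 10 and contrast_fraction >= 0.1 and bandwidth_cpd >= 1.0

print(meets_obstacle_avoidance_minimums(10, 0.1, 1.0))  # True: at the reported limits
print(meets_obstacle_avoidance_minimums(5, 0.1, 1.0))   # False: field too restricted
```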

Some mobility-related tasks may require acquisition of more detailed information from the immediate environment than is needed just for obstacle avoidance. Examples include identifying objects (distinguishing between a person and a mailbox or a tree and a lamp post or recognizing an acquaintance in the hallway). In the extreme, the immediate environment may contain high-resolution information, such as signs or the labels on soup cans. Some laboratory data exist on the visual requirements of these tasks—field, spatial-frequency and magnification requirements for reading (Legge et al., 1985) and face recognition (Iniui and Kensaku, 1984). However, such data have not been collected in the context of mobility, in which coordination of low- and high-resolution tasks may impose added burdens, as described below.

Remote Environments

Vision is used to gather information about objects and spatial arrangement at distances remote from the individual, that is, beyond the reach of the long cane. Work by Gibson (1979) and more recently by Marr (1982) and colleagues suggests that information of this sort relies on a variety of visual cues, such as motion parallax, stereopsis, shading, texture gradients, and optic flow patterns. Psychology has not yet revealed in detail the circumstances in which these cues are used. Moreover, we have only an elemental understanding of the physiological mechanisms used for processing such information. We do not know how visual impairments of different types and degrees affect the ability to gather such information. Experimental psychologists could contribute to the understanding of mobility by defining the involvement of these cues in mobility tasks and by showing how their use is affected by various visual impairments.

Vision of the remote environment may also be discussed in terms of more global abilities, such as the ability to follow a path from instructions, the ability to reverse a path traveled once, the ability to locate oneself with respect to objects, and the ability to achieve a memorial representation of the environment in which the travel task is performed. We might ask, for example, how severity of field loss or reduction in acuity affects the time needed to master a complex spatial environment.

Information gathered from remote environments may also be discussed in terms of low- and high-resolution tasks. An example of the former is the use of buildings or other large structures as landmarks. An example of a high-resolution demand of urban mobility is the need to read signs at a distance. High-resolution tasks are a major problem in low-vision mobility.

This brings us to the matter of coordinating low- and high-resolution tasks. When individuals with normal vision arrive at unfamiliar street corners, they rapidly locate street signs and traffic lights by first locating them in peripheral vision and then employing saccadic eye movements to bring them to the fovea. Individuals with restricted fields and low acuity have a more difficult job. Although there may be sufficient residual vision to follow the crosswalk across the street, finding and reading the sign is another matter. Typically, pedestrians who have low vision are obliged to use telescopes with small fields. Even when it succeeds, the scanning required to find the sign is time-consuming and inconvenient, and, although magnification may be adequate for reading, the demand for scanning and search substantially impedes mobility. More accessible street signs and traffic signals would help. In this connection, talking signs have been explored. Other high-technology navigational aids might also be explored.

It is important to recall that low vision cannot be characterized simply as a loss in acuity. For example, in the case of reading, we know that the presence or absence of central vision is a better predictor of maximum reading speed than is acuity (Legge et al., 1985). With regard to the different component tasks of mobility, researchers will have to determine the significance of subject variables like acuity, field, and contrast sensitivity.

Sensory Enhancement

Sensory enhancement refers to artificial manipulation of patterns of sensory stimulation to make them more useful. The classic form of sensory enhancement in low vision is magnification, which acts to replace small retinal images with larger ones. This reduces the resolving demands of the task. There are other examples of the manipulation of visual stimulation to enhance its value for people with low vision. For example, reversed telescopes can compress a large visual field into a smaller one (minification). In some instances of severe field restriction, this may have benefits for mobility. Careful analysis of the value of such devices for mobility is in order. Other examples of sensory enhancement within the visual domain include the use of image intensifiers for people with night blindness or contrast enhancement hardware in closed-circuit TV magnifiers. The use of Corning CPF color filters for mobility is an intriguing but not yet well-understood form of sensory enhancement. In these examples, some properties of patterns of visual stimulation are transformed, while other properties are invariant. It would be valuable to explore other forms of sensory transformation.
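The geometry of minification can be stated numerically. The sketch below assumes an ideal reversed telescope of angular power M, which compresses visual angles by the factor 1/M; the function and figures are illustrative only.

```python
# Idealized arithmetic: a reversed telescope of angular power M minifies
# by 1/M, so a scene spanning scene_deg of visual angle is imaged into
# scene_deg / M of the wearer's residual field.

def minified_extent(scene_deg, telescope_power):
    return scene_deg / telescope_power

# A wearer with a 10-degree residual field, viewing through a reversed
# 4x telescope, can take in a 40-degree scene at once:
print(minified_extent(40, 4))  # 10.0 degrees
```

The same arithmetic makes the trade-off explicit: the field gain comes at the cost of a proportional loss in resolution.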

In recent years, engineers and computer scientists have devoted substantial effort to the development of digital techniques for image enhancement and image restoration. Most of these techniques have not been explicitly motivated by consideration of the sensory capacities of the observer. However, it seems likely that image enhancement methods could be tailored to optimize the residual capacity of observers with low vision. Some preliminary work in this direction has been undertaken by Eli Pelli and colleagues at the Retina Foundation in Boston (Pelli and Pelli, 1984). It is an open question as to how such techniques could be implemented to aid mobility.
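As a concrete (and deliberately generic) illustration of the kind of digital enhancement at issue, the sketch below stretches a low-contrast image so that its usable luminance range fills the display. This is a standard percentile contrast stretch, not the specific algorithm developed at the Retina Foundation.

```python
import numpy as np

def contrast_stretch(image, low_pct=5, high_pct=95):
    """Remap intensities so the low/high percentiles span [0, 1]."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = (image - lo) / max(hi - lo, 1e-9)
    return np.clip(stretched, 0.0, 1.0)

img = np.linspace(0.4, 0.6, 100).reshape(10, 10)  # low-contrast test image
out = contrast_stretch(img)
print(out.min(), out.max())  # the full [0, 1] range is now used
```

Tailoring such a remapping to a particular observer's residual contrast sensitivity is exactly the open question posed above.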

Finally, characterization of visual capacity is dominated by three variables—field, spatial resolution (acuity), and contrast sensitivity. It would be helpful to give an initial characterization of vision substitution systems using the same variables. For example, we might ask about the field in degrees, spatial resolution in cycles per degree, and modulation sensitivity of a prospective tactile imaging array. The use of common measurement metrics would help in the comparative evaluation of vision substitution systems.
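The common-metrics proposal can be made concrete with a small record type. The class and the example numbers below are illustrative placeholders, not measured values for any real device.

```python
from dataclasses import dataclass

@dataclass
class ChannelSpec:
    field_deg: float           # field of view, degrees
    resolution_cpd: float      # spatial resolution, cycles per degree
    peak_contrast_sens: float  # peak modulation (contrast) sensitivity

# Hypothetical figures for a normal visual channel and a tactile array:
vision = ChannelSpec(field_deg=180, resolution_cpd=30, peak_contrast_sens=500)
tactile = ChannelSpec(field_deg=20, resolution_cpd=0.5, peak_contrast_sens=10)

# With common metrics, comparison reduces to ratios of like quantities:
print(vision.resolution_cpd / tactile.resolution_cpd)  # 60.0
```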

THE AUDITORY SYSTEM

With regard to the auditory system's ability to resolve signals containing distance information, it has an enormous dynamic range (100 trillion to one), and distance information is coded primarily through intensity. Accordingly, the distinction between immediate space and remote space is of relatively little importance. The near field flows smoothly into open space with a surprising continuity, except that high-frequency components are more susceptible to air transmission loss, and this susceptibility provides an additional timbre cue for distance estimation.

Given constant loudness of a sound source, sensory resolution in terms of localization and identification remains relatively constant. Localization of sound sources in the horizontal plane is achieved through the use of binaural time and intensity differences. The human auditory system is capable of detecting binaural time-of-arrival differences as short as 30 microseconds and binaural intensity differences in the single-decibel range. The pinna also has an effect on the localization of sound sources in space. The accuracy of localization is determined by the spectral composition of the sound source, with pure tones near 3,000 Hz being the most difficult to localize and noise bursts with rapid onsets being the most effective.
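The 30-microsecond figure can be put in spatial terms with a spherical-head model. Woodworth's classic approximation, the head radius, and the speed of sound below are our assumptions for illustration, not values from the text.

```python
import math

# Woodworth's spherical-head approximation (our addition, not cited in
# the text): ITD ~ (r / c) * (theta + sin(theta)) for azimuth theta.
# Head radius r = 8.75 cm and speed of sound c = 343 m/s are assumed.

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# Under these assumptions, the 30-microsecond detection threshold quoted
# above is reached at an azimuth of only a few degrees:
for az in (1, 2, 3, 4):
    print(az, "deg ->", round(itd_seconds(az) * 1e6, 1), "microseconds")
```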

Because most objects do not emit sounds, two other acoustic mechanisms come into play. The first involves passive interaction with sound stimuli emitted from another independent source. These ambient sounds are attenuated, in part because of a reduction in the high frequency components of the complex wave due to the inability of the sound to bend around the object, and in part because of resonances caused by the positioning of the object and the recipient's ear. The second mechanism involves active emission of acoustic signals by the observer and the subsequent detection of the echo reflected by the object. The time delay between the production of the sound and the detection of the echo reveals information primarily about the distance of the object, but also about its position and its textural characteristics.

In both of these cases, the auditory system must be regarded as functioning in a low-resolution mode. The median plane localization of an active sound source is rarely more accurate than 5 degrees. The passive mechanisms are even cruder, providing the observer with little more than presence and minimal position and distance indices.
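For distance, the active echolocation mechanism described above reduces to round-trip timing. A minimal sketch, assuming a speed of sound of 343 m/s (dry air, roughly 20 degrees C):

```python
# Round-trip timing: the emitted sound travels to the object and back,
# so object distance = c * t / 2. The speed-of-sound constant is an
# assumed value for dry air at about 20 degrees C.

def echo_distance_m(delay_s, c=343.0):
    return c * delay_s / 2.0

# A 10 ms delay between emission and echo places the object about
# 1.7 m away; a 1 ms delay places it about 17 cm away.
print(echo_distance_m(0.010))
print(echo_distance_m(0.001))
```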

The ability of the auditory system to identify and interpret sounds, however, is the result of a very high capacity for resolving acoustic signals. Variations in pitch, intensity, and timbre, and temporal variation in all three, can be discriminated to a fine degree. All of these parameters give natural sounds distinctive signatures, and, because they can easily be manipulated in an electro-acoustic system, they provide tempting options for environmental coding. The intrinsic utility of the uncompromised auditory system for gaining information about the environment is a very important consideration.

THE SOMATOSENSORY SYSTEM

The somatosensory system is a complex system that provides a wide range of information about the near environment and mediates the perception of motion. In the natural mode (without artificial stimulation), the sensory information may be categorized as a series of progressively more complex functions classified roughly as thermal sensitivity, sensitivity to pressure and vibration, perception of texture and form, and stereognosis. Appropriate sensory prostheses might use any one or a combination of these somatosensory capacities.

Each of these categories has been studied fairly extensively in relation to the hand and fingers, but much less is known about these sensory capacities in other areas of the body.

Pressure

In the hand, the sense of pressure depends on activity in slowly adapting (Merkel) afferents (Mountcastle et al., 1966). The relationship between skin indentation and subjective magnitude is linear, as is the relationship between indentation and impulse rate in the slowly adapting afferents. Subjectively, one is more sensitive to edges than to flat surfaces. Coincidentally, the slowly adapting afferents are much more sensitive to edges than to flat surfaces (Phillips and Johnson, 1981).

Vibration

Cutaneous vibration evokes a sensation that depends on neural activity in the QA (cutaneous quickly adapting or Meissner) afferents and/or the PC (Pacinian) afferents. At low frequencies, less than 40–50 Hz, the QAs dominate. At higher frequencies, the PCs dominate. Subjective intensity is a monotonic function of vibratory intensity, as are the impulse rates in the QA and PC populations (Mountcastle et al., 1972).

Tactile Form Perception

Tactile form perception is most highly developed at the fingertips, where the limit of resolution is approximately 0.8 mm (Johnson, 1983). This value is based on studies using gap detection, mechanical gratings, and embossed letters as stimuli. This resolution appears to be based on neural activity transmitted via slowly adapting (SA) afferents, and, for complex patterns like letters, resolution is not greatly different whether the spatial stimuli are applied to the finger in a stationary manner or the fingers are swept across the stimuli. It is also an interesting fact that the relationship between resolution and the spacing of the primary afferents is the same in the skin and in the foveal region of the retina.

Spatiotemporal integration involving the fingertips occurs as the fingers are moved across a stimulus or, conversely, a stimulus is moved across the fingers. This integration compensates in some ways for the limited spatial field of the fingertips.

All the proven methods of high information transfer through the skin (Braille, the Optacon, and the Tadoma method) employ the finger pads. That is not to say that other skin areas could not serve equally well, but there is no concrete evidence that they can.

Texture

Tactual perception of texture, which provides information about a wide range of surfaces, is not understood at either a psychophysical or neurophysiological level (Johnson, 1983). The sense of surface roughness, which is one facet of texture perception, is understood to some extent. Subjective roughness magnitude is a nonmonotonic function of surface spatial frequency. At the neural level it appears to be mediated by variation in the impulse rate of SA afferents (Fasman et al., 1985).

Stereognosis

Stereognosis is the appreciation of a three-dimensional form through manual exploration. Familiar examples include recognition of a door handle or of a larger object, such as a chair. Stereognosis is a complex function of joint angle sensation, which appears to be mediated largely by muscle spindle afferents, tactile form perception, and vibration. Beyond these simple observations, relatively little is known about the physiological mechanisms of stereognosis. However, stereognosis is extremely important and lies at the heart of the success of the cane.

Implications for Sensory Aids

How might the somesthetic sense serve the purposes of a sensory aid? We consider this question within the near field/far field, low resolution/high resolution framework that we have adopted.

All the available evidence suggests that the appropriate sites for the delivery of high-resolution information about the environment are on the hands and fingers. The ideal device would be something like the Optacon; that is, a portable device with a dynamic, two-dimensional display. The user might carry it like a shoulder bag, slipping a hand into the device when he or she wants high-resolution information. Such a device might be used for orientation with a wide-angle view and then zoomed to a specific object, e.g., a sign. When the user is seated, it might be used to examine detailed materials of various kinds, although the most important use may not be text reading; within the near future, that function will probably be accomplished most effectively by character recognition and speech synthesis hardware. The dynamic display may serve most effectively to find and bring the appropriate text into the view of these pattern recognition mechanisms.

The main problem in this area is the lack of appropriate instruments for displaying spatiotemporal information to the skin. It seems clear that the appropriate mode of presentation is the projection of isomorphic spatial patterns onto the skin. It is also clear from everyday experience that the tactile sense of the hands and fingers provides rich imagery concerning texture, form, remote vibration, etc. The ability to read Braille (Foulke, 1982a) and recognize speech by the Tadoma method (Snyder et al., 1982) are two examples that provide some quantitative indication of this capability. However, we cannot specify with any certainty the form that the dynamic display should take. What is needed is a research instrument that meets or exceeds the capacities of the tactile system. The only instrument currently available for studying the efficacy of dynamic, two-dimensional displays is the Optacon, which falls far short of the physiological capacity of the system. Its pin spacing is 3 times greater than the resolution limit of the fingers, and its stimulus mode is poorly matched to the underlying receptor mechanism. For example, at 230 Hz, the frequency at which the individual pins vibrate, the receptor system with the highest spatial resolution, the SA population, is not even activated. Subjects report that the sensations evoked are unnatural and poorly differentiated, lacking the robustness and differentiation of normal tactile sensation, and most find the device very difficult to use.

What is needed is a device that spans the full intensive, temporal and spatial range of the tactile system. Investigations with such a device would provide a basis for determining the specifications of working devices and display methods.

It seems unlikely that skin areas other than the hand and fingers will be suitable for acquiring information of the sort that demands high resolution. However, these skin areas might provide an effective portal for low-resolution information.

EXISTING ENVIRONMENTAL SENSORS

A variety of sensory aids for visually impaired people have been developed as technology has advanced, which are discussed in Chapter 6. Here we concentrate on aids that have been termed “environmental sensors.” An environmental sensor should be more than an obstacle detector: it should convey the full dynamic character of visual information. We concentrate on this class of aids because, to the extent that they are designed to incorporate imminent advances in electronics and optics, they are more likely to take advantage of the capacities of nonvisual systems.

Sonar Substitution

The first kind of environmental sensor to be considered is the sonar sensory aid. In aids of this type, a small transmitter irradiates the field of view with acoustic waves of very high frequency that encounter objects in space. The reflected waves that are returned to the observer (echoes) are detected and made audible by suitable transducers. This display contains information about the directions, distances, and surface textures of objects in the field of view. The latest advance in this technology has been developed by Leslie Kay of New Zealand (Kay, 1982; Easton and Jackson, 1983). Kay's Trisensor is a modified version of his earlier Binaural Sensory Aid (BSA). As in the BSA, the Trisensor is fitted with widely angled receivers that sense reflected energy. Energy reflected from an object to either side of the midline results in a transduced signal with an interaural difference in amplitude that serves as a cue for the location of the object. In addition, the Trisensor is fitted with a transmitter that emits a very narrow beam of ultrasonic energy that is returned by reflection and results in a signal that, because it does not exhibit an interaural difference in amplitude, is a monaural signal. The signal provides the user with more precise information regarding the size and location of objects directly ahead. In addition to indicating object direction, both the BSA and the Trisensor code object distance in terms of the pitch of the acoustic display, while surface texture is signaled by the timbre of the auditory signal. These aids also have adjustable range controls that create “windows” of optimal localization from about 0.3 m up to 5 m.
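The coding scheme just described (pitch for distance, interaural amplitude for direction) can be sketched as a mapping from one echo to an acoustic display. All constants and the linear mappings below are illustrative assumptions; the actual calibration of the BSA and Trisensor is set by their sonar hardware.

```python
def echo_display(distance_m, azimuth_deg, max_range_m=5.0):
    """Map one echo to (pitch_hz, left_gain, right_gain).

    Pitch rises with distance inside the selected range window;
    an interaural amplitude difference codes direction (+ = right).
    All numeric constants are illustrative, not device calibration.
    """
    pitch_hz = 200.0 + 800.0 * min(distance_m, max_range_m) / max_range_m
    pan = max(-1.0, min(1.0, azimuth_deg / 45.0))
    left_gain = (1.0 - pan) / 2.0
    right_gain = (1.0 + pan) / 2.0
    return pitch_hz, left_gain, right_gain

print(echo_display(2.5, 0.0))   # mid-range object dead ahead: equal gains
print(echo_display(5.0, 45.0))  # object at maximum range, fully to the right
```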

Recent research has centered increasingly on the psychophysical characteristics and the usability by humans of both the BSA and the Trisensor (Warren and Strelow, 1984a; Easton, 1985). When blind children or adults working under blindfold use the aids to locate small cylindrical objects in near space (within arm's reach), the error of estimation in judging the direction to a detected object is about 5 degrees, as opposed to approximately 1 degree in judging the direction to the source of a natural sound. However, the error of estimation in judging the distance to a detected object is about 5 cm, which is substantially smaller than the error of estimation in judging the distance to the source of a natural sound. This level of performance is typically achieved after 3–4 hours of training. When the space observed by means of the Trisensor is expanded enough to allow sensed objects to be as far as 5 meters from the observer, the error of estimation in judging the direction of an object is approximately 6 degrees, and the error of estimation in judging the distance to an object is approximately 0.3 meters. One effect of using the Trisensor appears to be a reduction over previous devices in the magnitude of directional errors, and it is reasonable to attribute such an effect to an increase in angular resolving power brought about by the incorporation of a center channel. However, so far there has been no direct psychophysical comparison of the Trisensor and the Binaural Sensory Aid.

The next logical step in evaluating these aids is a comparison of their usefulness in forming memorial representations of the spatial layout of objects. Both the ability of blind pedestrians to move through object-filled spaces and their ability to update their changing spatial positions and relationships to objects in space need further study.

Tactile Substitution

Bach-y-Rita and colleagues (Bach-y-Rita, 1972; Bach-y-Rita et al., 1969) have developed a Tactile-Vision Sensory Substitution System (TVSS). Early research in the field entailed presenting television camera images directly on the skin, on a point-by-point basis using tactile arrays worn on the abdomen. The results obtained with the TVSS suggested that such systems can have educational value if the shapes of objects are kept simple and the background against which they are observed is carefully controlled. When the subjects were moving while they observed complex, dynamic scenes, they found the TVSS display very difficult to interpret. In addition, the original TVSS proved to be a relatively cumbersome, uncomfortable system to wear and use while moving about.

Recently, Bach-y-Rita and Hughes (1985) have developed a more reliable and portable TVSS by modifying the Optacon, a device originally intended for use as a reading aid. The modified Optacon presents vibrotactile stimulation to the user's fingertip via the transduction of optical images of distal objects picked up by an array of photosensitive elements in a camera (an optical scanner fitted with a lens of suitable focal length) that is mounted on a headband and thus under the direct motor control of the user. The main advantage of the modified Optacon is that it uses reliable, engineered hardware (the Optacon), which is readily available to most schools and institutes for blind persons.
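The essential image-to-tactile mapping such a system performs can be sketched as downsampling a camera frame to the pin grid and thresholding. The 24 x 6 grid matches the Optacon's array; the camera resolution and the threshold are assumed for illustration.

```python
import numpy as np

ROWS, COLS = 24, 6  # the Optacon's vibrotactile pin grid

def image_to_pins(image, threshold=0.5):
    """Block-average a gray-level image to ROWS x COLS, threshold to 0/1."""
    h, w = image.shape
    rh, cw = h // ROWS, w // COLS
    blocks = image[:rh * ROWS, :cw * COLS].reshape(ROWS, rh, COLS, cw)
    return (blocks.mean(axis=(1, 3)) > threshold).astype(int)

camera = np.zeros((240, 60))  # hypothetical camera frame
camera[:, 30:] = 1.0          # bright object fills the right half of view
pins = image_to_pins(camera)
print(pins.shape, int(pins.sum()))  # (24, 6) 72 -- right half of pins active
```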

The feasibility of this approach rests on the as yet unproven assumption that the skin is functionally similar to the retina in its capacity to mediate information. Like the retina, the skin can sense variation in two spatial dimensions and is capable of temporal integration. Thus there is generally no need for complex topological transformation or for temporal coding, although temporal display factors are being explored with the goal of transmitting spatial information across the skin more quickly than is possible with present systems.

Psychophysical experiments are presently being conducted in order to determine the sensory capacity of the skin with respect to the variables represented in the Optacon display.

RECOMMENDATIONS

On the basis of our findings, we make the following recommendations regarding physiological considerations in sensory enhancement and substitution.

The Somatosensory System

If sensory aids are to take full advantage of the capacities of the somatosensory system, they must effectively engage the mechanisms responsible for tactile perception and stereognosis. Natural examples of tactile perception are Braille reading and texture perception.

The necessary condition for tactile perception is cutaneous deformation. There is, in principle, no reason why a device cannot simulate the deformation patterns encountered in normal tactile experience and recreate any tactile sensation of which the system is capable. However, no currently available device comes close to achieving this objective. Such a device should have a dense array of probes with a spacing of 0.8 mm or less. Individual probes should have a dynamic range of at least 2 mm and a frequency range of 0 to 300 Hz. Another form of the device might consist of a modifiable bas-relief display with protrusions of nonvibratory pins proportional to the gray level. The image received by the fingertips would consist of a TV frame, updated at will by the blind user, who would scan the image with the fingertips of one hand. Once such a device is developed, the question of appropriate methods of stimulation can be pursued.
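The bas-relief variant amounts to a direct gray-level-to-height mapping. The 0.8 mm spacing and 2 mm excursion come from the specification above; the grid and gray values in the example are illustrative.

```python
import numpy as np

PIN_SPACING_MM = 0.8      # probe spacing at the tactile resolution limit
MAX_PROTRUSION_MM = 2.0   # per-probe dynamic range from the text

def gray_to_heights(gray_frame):
    """Map gray levels in [0, 1] to static pin protrusions in mm."""
    return np.clip(gray_frame, 0.0, 1.0) * MAX_PROTRUSION_MM

frame = np.array([[0.0, 0.5, 1.0]])   # one row of a TV frame
print(gray_to_heights(frame))         # [[0. 1. 2.]]
```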

RECOMMENDATION: To speed up progress in mobility research, high priority should be placed on the design and development of a device that can simulate the tactile perception that results from cutaneous deformation.

Since the devices currently available for stimulating skin are so much more limited than the sensory system they address, research tends to illustrate the limitations of the device rather than the sensory system. To take a specific example, using the Optacon as a research instrument is analogous to doing auditory research over the telephone. The device has limited utility as a research instrument.

Stereognosis depends on the patterns of neural impulses that are generated when, as a consequence of the movement of parts of the body, the receptors in muscles, tendons, and joints and the receptors in the skin are excited. It is stereognosis that largely accounts for the success of the long cane as a mobility aid. The cane works because it is effectively coupled to the stereognostic system. A hypothetical example of the kind of device that we have in mind is an electronic cane operating on some reflectance principle (e.g., sonar or radar) with an adjustable range. The electronic cane, which might be held in the hand like a flashlight, could offer its user a menu of functions. In one mode, it might simulate a rigid cane of fixed length and present to the hand of its user the same pattern of stimulation that would be presented by a real cane, like the jolting sensation that occurs when a cane comes in contact with an object. In another mode, it might function as a directional range finder and be used to detect obstacles or openings such as doorways.

RECOMMENDATION: High priority should be given to the development of mobility aids that engage stereognosis.

The Auditory System

As in the case of the TVSS, the performance enabled by existing devices that substitute auditory stimulation for visual stimulation should be assessed more thoroughly and carefully than it has been to date. Closer interaction between engineers and psychologists with specialization in human factors engineering would facilitate this endeavor. A more thorough assessment of existing sensory aids is needed, but there are more fundamental issues that must be addressed.

RECOMMENDATION: We recommend that basic research be fostered concerning the cues on which the auditory perception of distance depends as well as research to develop transduction schemes that yield acoustic displays whose cues to the perception of space are analogous to the cues in natural acoustic displays.

Mobility and Low Vision

We need research that will make possible a better definition of the relative importance of the several cues available for the perception of depth, and how their use is limited by different forms of visual impairment. The effect of visual impairment on spatial learning also merits study. When the natural optical display from which visual observers acquire spatial information is transduced to create an acoustical display, extensive recoding takes place. Only one cue, the directional cue provided by interaural differences in amplitude, is the same in the transduced display and the natural acoustic display from which auditory observers acquire information about space. The cue to distance provided by differences in pitch, and the cue to surface texture provided by differences in timbre, are arbitrary. What is important to recognize is that a transduced display more closely analogous to the natural acoustic display should be interpretable with significantly greater speed and accuracy than the transduced displays of existing devices. Of course, the auditory perception of space, in many respects, is not well understood. For example, the cues for auditory ranging are apparently provided by complex variation in the amplitudes of several simultaneously occurring signals. The cues provided by variations in the amplitude of a single signal would be ambiguous. However, basic research concerning the cues on which auditory ranging depends will be needed before such information can be taken into account in the development of a transduced acoustical display that is analogous to the natural acoustical display.

RECOMMENDATION: A careful experimental analysis is required to identify the components of the mobility task. Once this is done, further research should be carried out to clarify the demands on vision made by each of these components.

Performing tasks that require changing from one level of resolution to another, such as first finding a street sign and then reading it, poses a serious problem for pedestrians who depend on mobility aids for the information they need to travel, and current or imminent technology may offer a solution to this problem. Some possibilities for visual enhancement depend on techniques and instruments currently available or that could be made available with little additional development. For instance, the restricted visual field associated with tunnel vision might be aided by reversed telescopes or Corning CPF filters, and digital image enhancement could be used to create displays that compensate for various visual deficiencies. Finally, it is generally agreed that visual capacity is adequately defined by the measured values of three variables: spatial resolution, contrast sensitivity, and extent of the visual field. The visual capacity enabled by a mobility aid could be defined in the same way.

RECOMMENDATION: Research should be conducted to determine the feasibility of defining the sensory capacity enabled by the use of a sensory substitution system, such as a TVSS, in terms of the three variables that define the capacity of the visual system. For example, at the appropriate point in the development of a new TVSS designed to display a tactile analogue of a visual image, its field of view, spatial resolution, and contrast sensitivity could be measured. The measured values should give a fairly accurate indication of expected performance. This approach would be useful not only for evaluating individual substitution systems but also for comparing different substitution systems.
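
A minimal sketch of how the three-variable characterization could support such comparisons. The field names, units, and example values below are hypothetical, not measurements of any real device:

```python
from dataclasses import dataclass

@dataclass
class SensoryCapacity:
    field_of_view_deg: float        # extent of the displayed field, degrees
    spatial_resolution_cpd: float   # resolvable detail, cycles per degree
    contrast_sensitivity: float     # reciprocal of threshold contrast

    def dominates(self, other: "SensoryCapacity") -> bool:
        """True if this system is at least as capable on all three variables."""
        return (self.field_of_view_deg >= other.field_of_view_deg
                and self.spatial_resolution_cpd >= other.spatial_resolution_cpd
                and self.contrast_sensitivity >= other.contrast_sensitivity)

# Two hypothetical tactile displays measured on the same three variables:
aid_a = SensoryCapacity(field_of_view_deg=30, spatial_resolution_cpd=0.5,
                        contrast_sensitivity=3.0)
aid_b = SensoryCapacity(field_of_view_deg=20, spatial_resolution_cpd=0.4,
                        contrast_sensitivity=2.0)
print(aid_a.dominates(aid_b))  # True: aid_a matches or exceeds on all three
```

When neither system dominates the other, the comparison identifies the specific trade-off (for example, wider field versus finer resolution) rather than reducing everything to a single score.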

Detrimental Effects of Mobility Aids

The designer of a sensory substitution system should be mindful of the natural abilities of the sensory system being addressed and of the interference that substitute signals may cause. For instance, the auditory system has a useful ability to acquire spatial information from the natural acoustic display, and the benefits of using a sensory substitution system must be weighed against the costs of compromising that natural ability.

A sensory aid with the intended function of enhancement may also interfere with reception of the natural acoustical display. For instance, in an effort to improve the signal-to-noise ratio in the region of the audible spectrum in which speech signals occur, a hearing aid may be designed to filter out both the low frequencies, on which the detection of resonances depends, and the high frequencies, on which the detection of sound shadows and distance cues depends. Automatic volume level controls, commonly used in modern hearing aids, eliminate distance cues provided by differences in loudness. A hearing aid with these features might seriously interfere with the ability of a pedestrian who is both visually and hearing impaired to acquire spatial information.
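
The trade-off described above can be illustrated with a toy model. The pass band and cue frequencies below are illustrative assumptions, not the specifications of any real hearing aid:

```python
# Assumed speech pass band for this sketch; real hearing aids differ.
SPEECH_BAND = (300.0, 3400.0)   # Hz

def passes_filter(freq_hz, band=SPEECH_BAND):
    """True if a component of the acoustic display survives the speech filter."""
    lo, hi = band
    return lo <= freq_hz <= hi

# Components of the natural acoustic display and where they fall (illustrative):
cues = {
    "low-frequency resonances": 120.0,   # lost: below the pass band
    "speech formants": 1000.0,           # kept
    "high-frequency sound shadows": 6000.0,  # lost: above the pass band
}
for name, freq in cues.items():
    status = "kept" if passes_filter(freq) else "filtered out"
    print(f"{name}: {status}")
```

The sketch makes the report's point concrete: a filter chosen solely to favor the speech band discards exactly the spectral regions on which the listed spatial cues depend.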

RECOMMENDATION: An effort should be made to develop a hearing aid that is effective for the reception of speech without sacrificing effectiveness in the reception of spatial information.

Animal Models

Sensory substitution experiments with animals, such as primates, may be the only practical way to acquire an understanding of the potential benefits and limitations of the devices currently available or in the planning stage. Such experiments also provide the only available means of evaluating possible negative effects on neurophysiological development from early and long-term use of sensory aids, and they should be conducted before the effects of long-term use by blind human infants are assessed.

Observations of behavioral changes that correlate with changes in functional activity patterns of the central nervous system (Bach-y-Rita, 1972) should prove useful in identifying characteristics of device input that prove to be detrimental to the individual user, so that such characteristics can be altered or eliminated. Beneficial characteristics could also be identified and enhanced. Sensory substitution devices may have to be tailored to meet the needs of the individual, much as other prostheses are, and just as prescribed medicines are.

At present, we can only speculate about the physiological basis of effective sensory substitution, because the available data allow nothing more. However, we can formulate and test some rudimentary hypotheses, most of which involve plastic properties of the central nervous system. In essence, when we ask questions about the basis of sensory substitution, we are also asking questions about the nature of processes that depend on the plasticity and compensatory properties of the central nervous system. The possibility of controlling these modifications or organizational processes in order to enhance compensation in the event of a sensory deficit presents a significant challenge to science and technology.

RECOMMENDATION: Research should be encouraged on the use of animal models to study the effects of device use on sensory development and functioning and to assess the relative contributions of the various kinds of information provided by sensory aids to the development of spatial knowledge and spatial ability.

