How Thought-Controlled Wheelchairs Work

NASA scientist Chuck Jorgensen tests a subvocal speech-recognition device.
NASA Ames Research Center, Dominic Hart

Complete tetraplegia: In many ways, it is the worst possible medical diagnosis short of imminent death. Total physical paralysis from the neck down can result from spinal cord injuries or from diseases such as amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig's disease). Sufferers become totally dependent on others, and many feel isolated as well because they have lost the ability to talk. Most of us take for granted the ability to walk from one room to another, but for the severely disabled, even this everyday action requires someone else's help.

Imagine, then, that a completely paralyzed person could control a motorized wheelchair simply by thinking about it. By bypassing damaged nerves, such a device could open many doors to independence for disabled people. In this article, we'll examine a company that is working to make that "what if" into reality. We'll also find out how the same technology could restore speech to people unable to talk.

Whenever you perform a physical action, neurons in your brain generate minute electrical signals. These signals travel from the brain along axons and dendrites through your nervous system. When they reach the right area of the body, motor neurons activate the muscles needed to complete the action.

Almost every signal passes through the bundle of nerves inside the spinal cord before moving on to other parts of the body. When the spinal cord is severely damaged or cut, the break in the nervous system prevents the signals from getting where they need to be. In the case of neuromuscular disease, the motor neurons stop functioning -- the signals are still being sent, but there's no way for the body to translate them into actual muscle action.

How can we solve the problem of a faulty nervous system? One way is to intercept signals from the brain before they are interrupted by a break in the spinal cord or degenerated neurons. This is the solution that the thought-controlled wheelchair will put to use.


Ambient Audeo System

Scientist Chuck Jorgensen uses a subvocal speech-recognition device to move a simulated Mars rover.
NASA Ames Research Center, Dominic Hart

Michael Callahan and Thomas Coleman founded Ambient, the company that develops and markets the Audeo system. Audeo was initially envisioned as a way for severely disabled people to communicate, but Ambient expanded the control systems to include the ability to control a wheelchair or interact with a computer.

The Audeo is based on the idea that neurological signals sent from the brain to the throat area to initiate speech still get there even if the spinal cord is damaged or the motor neurons and muscles in the throat no longer work properly. Thus, even if you can't form understandable words, neurological signals that represent the intended speech exist. This is known as subvocal speech. Everyone performs subvocal speech -- if you think a word or sentence without saying it out loud, your brain still sends the signals to your mouth and throat.

A lightweight receiver on the subject's neck (a small array of sensors attached near the Adam's apple) intercepts these signals. It functions much like an electroencephalogram, a device that can receive neurological signals when placed on a subject's scalp. Because the Audeo sits directly on the neck and throat, it picks up specifically speech-related signals. The sensors in the receiver detect the tiny electric potentials that represent neurological activity. The receiver then encrypts those signals before sending them wirelessly to a computer, which processes them and interprets what the user intended to say or do. Finally, the computer sends command signals to the wheelchair or to a voice processor.
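The chain described above -- detect, encrypt, transmit, decode, command -- can be sketched as a short pipeline. Everything here is invented for illustration: the sample values, the toy XOR "encryption" (a real device would use proper cryptography), and the decision rule are stand-ins, not Ambient's actual protocol.

```python
# Hypothetical sketch of the Audeo signal path: sensors -> encrypt -> computer -> command.
# All values and the XOR "cipher" are illustrative stand-ins, not the real system.

KEY = 0x5A  # toy symmetric key; a real device would use genuine encryption

def read_sensors() -> list[int]:
    """Pretend to sample tiny electric potentials (microvolts) from the neck sensors."""
    return [12, -7, 33, 5]  # fixed fake samples for the sketch

def encrypt(samples: list[int]) -> bytes:
    """Obscure the samples before wireless transmission (XOR as a placeholder)."""
    return bytes((s & 0xFF) ^ KEY for s in samples)

def decrypt(payload: bytes) -> list[int]:
    """Undo the XOR and restore the sign of each 8-bit sample."""
    return [((b ^ KEY) ^ 0x80) - 0x80 for b in payload]

def interpret(samples: list[int]) -> str:
    """Map the decoded signal to an intended action (a trivial rule for illustration)."""
    return "FORWARD" if sum(samples) > 0 else "STOP"

# The wearer "thinks" a command; the computer decodes it and drives the chair.
payload = encrypt(read_sensors())
command = interpret(decrypt(payload))
print(command)  # FORWARD
```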

Here is an example of the Audeo system in action: You want to say, "Hello, how are you?" and say it silently in your mind. Your brain sends signals to the motor neurons in your mouth and throat -- the same signals it would send if you had actually said the words aloud. The Audeo receiver on your throat registers the signals and sends them to the computer. The computer knows the signals for different words and phonemes (the small units of spoken sound), so it interprets the signals and assembles them into a sentence, working much like voice-recognition software. The computer finishes the process by sending an electronic signal to a set of speakers, which then "say" the phrase.
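The matching step can be pictured as comparing each incoming signal to a library of stored examples and picking the closest one. The feature vectors below are invented for the sketch; a real recognizer would use far richer signal features and statistical models.

```python
import math

# Illustrative nearest-template matcher: each incoming "signal" (a small
# feature vector) is compared against stored templates of known words, and
# the closest template wins. All numbers are invented for the sketch.

TEMPLATES = {
    "hello": [0.9, 0.1, 0.3],
    "how":   [0.2, 0.8, 0.4],
    "are":   [0.1, 0.3, 0.9],
    "you":   [0.7, 0.7, 0.1],
}

def recognize(signal: list[float]) -> str:
    """Return the stored word whose template is closest to the signal."""
    return min(TEMPLATES, key=lambda w: math.dist(TEMPLATES[w], signal))

# Four subvocal "signals" arrive in sequence; each is matched to a word,
# and the words are joined into the sentence handed to the speech synthesizer.
incoming = [[0.88, 0.12, 0.31], [0.19, 0.79, 0.42],
            [0.12, 0.28, 0.91], [0.69, 0.72, 0.08]]
sentence = " ".join(recognize(s) for s in incoming)
print(sentence)  # hello how are you
```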

If you want to control a wheelchair, the process is similar, except you learn certain subvocal phrases that the computer interprets as control commands rather than spoken words. The user thinks, "forward," and the Audeo processes that signal as a command to move the wheelchair forward.
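For wheelchair control, the recognized phrase is simply looked up in a table of motor commands rather than being spoken. A minimal sketch, with phrase names and velocity values invented for illustration:

```python
# Hypothetical command table mapping recognized subvocal phrases to
# wheelchair motor settings. Phrases and velocity values are invented.

COMMANDS = {
    "forward": (1.0, 0.0),   # (linear speed, turn rate)
    "back":    (-0.5, 0.0),
    "left":    (0.0, -1.0),
    "right":   (0.0, 1.0),
    "stop":    (0.0, 0.0),
}

def drive(phrase: str) -> tuple[float, float]:
    """Translate a recognized phrase into motor velocities.

    Anything unrecognized fails safe by stopping the chair.
    """
    return COMMANDS.get(phrase, COMMANDS["stop"])

print(drive("forward"))  # (1.0, 0.0)
print(drive("mumble"))   # (0.0, 0.0) -- unknown input stops the chair
```

Failing safe on unrecognized input is an obvious design choice here: a misread signal should halt the chair, never move it.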

The Audeo uses a National Instruments CompactRIO controller to collect the data coming from the sensors. Embedded software built with NI's LabVIEW environment then crunches the numbers and converts the signals into control functions, such as synthesized words or wheelchair commands. Ambient has developed the communication side of the Audeo to the point that users can produce continuous speech rather than speaking one word at a time [source: Ambient].

NASA's Subvocal Speech Research

NASA is developing subvocal control for potential use by astronauts. Astronauts on spacewalks or in the International Space Station work in noisy environments doing jobs that often don't leave their hands free to control computer systems. Voice-recognition programs don't work well in these situations because all the background noise makes voice commands difficult to interpret. NASA hopes the use of subvocal signals will circumvent this problem.

While NASA's system could also be extremely beneficial for disabled people, the agency has other applications in mind, including the ability to speak silently on a cell phone and uses in military or security operations where speaking out loud would be disruptive.

NASA's subvocal system requires two sensors attached to the user's neck, and the system has to be trained to recognize a particular user's subvocal speech patterns. It takes about an hour of work to train six to 10 words, and the system as of 2006 was limited to 25 words and 38 phonemes [source: TFOT].

In an early experiment, NASA's system achieved higher than 90 percent accuracy after "training" the software. The system controlled a Web browser and did a Google search for the term "NASA" [source: NASA].


When Will They Be Available?

You won't find thought-controlled wheelchairs or other devices at your local electronics store -- yet. Ambient has a way for potential users to contact the company, but no pricing or availability information was forthcoming (Ambient didn't respond to requests for information).

In an interview with the Web site "The Future of Things," Dr. Chuck Jorgensen, chief scientist for neuroengineering at NASA Ames Research Center, claimed that commercial applications of subvocal control technology were two to four years in the future [source: TFOT].

To learn more about thought-controlled wheelchairs and subvocal speech, check out the links on the next page.


Sources

  • Ambient. "Technology." http://www.theaudeo.com/tech.html
  • Gennuth, Iddo. "Speaking Without Saying a Word." The Future of Things, Oct. 12, 2006. http://www.tfot.info/articles/28/speaking-without-saying-a-word.html
  • Hawking, Stephen. "The Computer." http://www.hawking.org.uk/disable/computer.html
  • NASA. "Subvocal Speech Demo." http://www.nasa.gov/centers/ames/news/releases/2004/subvocal/subvocal.html
  • National Instruments. "CompactRIO." http://sine.ni.com/nips/cds/view/p/lang/en/nid/14155
  • National Instruments. "LabVIEW." http://www.ni.com/labview/
  • Tobii Technologies. "Eye control for accessibility." http://www.tobii.com/application_areas/eye_control_for_accessibility