
Imagined speech

Imagined speech (also called silent speech or covert speech) is thinking in the form of sound – “hearing” one’s own voice silently to oneself, without the intentional movement of any extremities such as the lips, tongue, or hands. Logically, imagined speech has been possible since the emergence of language; however, the phenomenon is most associated with signal processing and detection within electroencephalograph (EEG) data, as well as data obtained using other non-invasive brain–computer interface (BCI) devices.

In 2008, the US Defense Advanced Research Projects Agency (DARPA) provided a $4 million grant to the University of California, Irvine, with the intent of providing a foundation for synthetic telepathy. According to DARPA, the project “will allow user-to-user communication on the battlefield without the use of vocalized speech through neural signals analysis. The brain generates word-specific signals prior to sending electrical impulses to the vocal cords. These imagined speech signals would be analyzed and translated into distinct words allowing covert person-to-person communication.” DARPA's program outline has three major goals.

The process for analyzing subjects' silent speech consists of recording the subjects’ brain waves and then using a computer to process the data and determine the content of their covert speech. Subjects' neural patterns (brain waves) can be recorded using BCI devices; currently, non-invasive devices, specifically the EEG, are of greater interest to researchers than invasive and partially invasive types, because non-invasive types pose the least risk to subject health. EEGs have attracted the greatest interest because they offer the most user-friendly approach and require far less complex instrumentation than functional magnetic resonance imaging (fMRI), another commonly used non-invasive BCI.

The first step in processing non-invasive data is to remove artifacts such as eye movement and blinking, as well as other electromyographic activity. After artifact removal, a series of algorithms is used to translate the raw data into the imagined speech content. Processing is also intended to occur in real time: the data are processed as they are recorded, which allows near-simultaneous viewing of the content as the subject imagines it.

Presumably, “thinking in the form of sound” recruits auditory and language areas whose activation profiles may be extracted from the EEG, given adequate processing. The goal is to relate these signals to a template that represents what the person is thinking about. This template could, for instance, be the acoustic envelope (energy) time series corresponding to the sound if it were physically uttered. Such a mapping from EEG to stimulus is an example of a neural decoding technique.
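The text does not name the specific tools used for artifact removal, but a common approach for non-invasive EEG is to separate ocular and muscular activity from brain activity with independent component analysis (ICA). The sketch below is one illustrative way to do this with MNE-Python; the recording filename and the “EOG” reference channel name are assumptions, not details from the source.

```python
# Illustrative artifact-removal sketch (not the pipeline of the DARPA-funded work).
import mne
from mne.preprocessing import ICA

# Load a hypothetical raw EEG recording of an imagined-speech session.
raw = mne.io.read_raw_edf("subject01_imagined_speech.edf", preload=True)

# Band-pass filter: drop slow drifts and high-frequency muscle noise.
raw.filter(l_freq=1.0, h_freq=40.0)

# Decompose the signal into independent components.
ica = ICA(n_components=20, random_state=97)
ica.fit(raw)

# Flag components correlated with the EOG channel (eye movements, blinks).
eog_indices, eog_scores = ica.find_bads_eog(raw, ch_name="EOG")  # "EOG" channel name is assumed
ica.exclude = eog_indices

# Reconstruct the EEG with the ocular components removed.
raw_clean = ica.apply(raw.copy())
```

The article likewise leaves open which algorithms relate the cleaned EEG to a stimulus template such as the acoustic envelope of the would-be utterance. A minimal sketch of one common choice, a regularised linear “backward” model (ridge regression over time-lagged EEG features), is given below; the lag window, the assumed 128 Hz sampling rate, and the synthetic placeholder data are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data standing in for a real recording: `eeg` would be the
# artifact-cleaned multi-channel EEG, `envelope` the acoustic envelope the
# decoder tries to reconstruct.
n_samples, n_channels = 10_000, 32          # roughly 78 s at an assumed 128 Hz
eeg = rng.standard_normal((n_samples, n_channels))
envelope = rng.standard_normal(n_samples)

# Time-lagged design matrix so the decoder can integrate EEG over a short
# window (0-250 ms at the assumed 128 Hz, i.e. 32 lags per channel).
n_lags = 32
X = np.column_stack([np.roll(eeg, lag, axis=0) for lag in range(n_lags)])
X, y = X[n_lags:], envelope[n_lags:]        # discard rows corrupted by the roll wrap-around

# Chronological split: train on the earlier part, evaluate on the later part.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

decoder = Ridge(alpha=1.0)                  # regularised linear backward model
decoder.fit(X_train, y_train)

# Reconstruct the envelope from held-out EEG and score it by correlation.
reconstructed = decoder.predict(X_test)
r = np.corrcoef(reconstructed, y_test)[0, 1]
print(f"reconstruction correlation r = {r:.3f}")
```

The correlation between the reconstructed and true envelope on a held-out, chronologically later segment is a common way to quantify how much of the imagined-speech template such a decoder recovers; with the random placeholder data above it will of course be near zero.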

[ "Brain–computer interface" ]
Parent Topic
Child Topic
    No Parent Topic