"Brain-connecting technology for healthy people"
by Sergei Shishkin, PhD
Head of the Department for Neurocognitive Technologies, NRC Kurchatov Institute
Address: Volgogradsky Prosp., 46B
For many years, experts considered brain-computer interfaces (BCIs) mainly as an assistive and rehabilitation technology. Recently, the passive BCI approach (Zander & Kothe, 2011, J. Neural Eng. 8:025005) has become the basis for promising new BCI applications that may prove useful for healthy people.
Operating a traditional BCI requires that the user perform certain mental tasks or attend to specific external stimuli. The BCI detects correlates of these activities in the user's brain signals and translates them into commands or messages. A passive BCI, in contrast, analyzes brain signals during the user's ordinary interaction with a machine, without requiring any additional task. The information about the user's current brain state is then used to improve the interaction or for other purposes.
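The passive BCI loop described above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the sampling rate, the use of the alpha/beta band-power ratio as a crude disengagement marker, and the function names are all hypothetical, not part of any published system.

```python
import numpy as np

FS = 250                       # sampling rate in Hz (assumed for this sketch)

def band_power(eeg, lo, hi, fs=FS):
    """Mean spectral power of a 1-D EEG segment in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def adapt_interface(eeg_segment):
    """Toy passive-BCI step: if alpha power dominates beta power (a very
    crude relaxation/disengagement marker), the interface slows its pace.
    No explicit command is issued by the user at any point."""
    alpha = band_power(eeg_segment, 8, 12)
    beta = band_power(eeg_segment, 13, 30)
    return "slow_down" if alpha > beta else "keep_pace"

# one second of synthetic EEG dominated by a 10 Hz (alpha) rhythm
t = np.arange(FS)
relaxed = np.sin(2 * np.pi * 10 * t / FS)
action = adapt_interface(relaxed)
```

The point of the sketch is only the control flow: the brain-state estimate is consumed by the interface itself, not turned into a message by the user.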
Like other noninvasive BCIs, passive BCIs usually employ variations in the spectral components of the electroencephalogram (EEG) or in the components of event-related potentials (ERPs), especially the P300 wave. The range of passive BCI solutions, however, is now evolving especially quickly (Blankertz et al., 2016, Front. Neurosci. 10:530). ERP-based passive BCIs have already been applied to the disambiguation of search queries (e.g., a Google image search for the keyword "chain" returns images related to its different meanings, but only the relevant images elicit a strong P300), to visual search enhancement (finding a relevant face in a crowd also elicits a strong P300), to guiding and teaching robots (when an observer notices that a robot is doing something incorrectly, his or her brain produces an error-related potential), to stopping a car in emergency situations, to fatigue monitoring, and even to studying Libet's "point of no return". Gaze fixations are often used to enhance such BCIs by indicating the locations and time points where the target brain potentials may start. Some applications benefit from a multiuser approach (e.g., a BCI may use data from several observers who simultaneously watch the same robot), which effectively compensates for the low single-user accuracy of EEG/ERP marker detection.
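The multiuser idea can be sketched in a few lines. Here I assume, hypothetically, that each observer's single-user classifier outputs a probability that the current event is a target (e.g., a robot error); a simple fusion rule then averages these probabilities across observers before thresholding. The numbers and the fusion rule are illustrative only.

```python
import numpy as np

def fuse_observers(p_target, threshold=0.5):
    """Average per-observer target probabilities (rows = observers,
    columns = events) and threshold the mean: a minimal fusion rule."""
    return np.mean(p_target, axis=0) >= threshold

# toy example: three noisy observers watching the same robot; each one
# alone is barely better than chance on some events
p = np.array([
    [0.6, 0.4, 0.7],   # observer 1
    [0.4, 0.3, 0.8],   # observer 2
    [0.7, 0.2, 0.6],   # observer 3
])
decisions = fuse_observers(p)  # events 1 and 3 are flagged as targets
```

Averaging reduces the variance of the individual estimates, which is why pooling several observers can compensate for the weak single-user detection of EEG/ERP markers.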
We recently developed a hybrid passive-BCI-based system for a task typical of active BCIs, namely "clicking" objects on a computer screen (Shishkin et al., 2016, Front. Neurosci. 10:528). In our system, active object selection is based on gaze dwells, while intentional dwells are separated from spontaneous ones by a new passive BCI. This BCI detects an ERP component that appears when the user expects feedback from the interface (intentions are translated into actions through expectation!). Our "Eye-Brain-Computer Interface" (EBCI) is probably the fastest hybrid BCI + gaze near-real-time system to date, using only 300 ms fixation-related EEG segments for classification.
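The core classification step, separating intentional from spontaneous dwells using short fixation-locked EEG segments, can be sketched on synthetic data. This is not the published EBCI pipeline: the channel count, the simulated slow negative deflection standing in for the expectation-related ERP, and the use of a shrinkage-regularized LDA (a common single-trial ERP classifier) are all assumptions made for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

FS = 500                     # sampling rate in Hz (assumed)
WIN = int(0.3 * FS)          # 300 ms fixation-locked window, as in the EBCI
N_CH = 8                     # hypothetical EEG channel count

def make_epochs(n, intentional):
    """Synthetic fixation-locked EEG epochs: intentional dwells carry a slow
    negative deflection (a stand-in for the expectation-related ERP)."""
    x = rng.normal(0.0, 1.0, size=(n, N_CH, WIN))
    if intentional:
        x += -0.5 * np.linspace(0, 1, WIN)   # broadcasts over channels
    return x

X = np.concatenate([make_epochs(100, True), make_epochs(100, False)])
y = np.array([1] * 100 + [0] * 100)

# flatten channels x time into one feature vector per epoch; shrinkage LDA
# copes with the many-features / few-trials regime typical of ERP data
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X[::2].reshape(100, -1), y[::2])            # even epochs: train
acc = clf.score(X[1::2].reshape(100, -1), y[1::2])  # odd epochs: test
```

On this idealized synthetic data the classifier separates the two dwell types well above chance; real fixation-related EEG is far noisier, which is what makes the 300 ms constraint hard.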
Why is the development of effective non-invasive BCIs not an easy task? What kinds of neuro- and psychophysiological, mathematical, engineering, and programming efforts are needed? Can we expect that the new technology of "passively" connecting brains with machines – and, possibly, brains with brains?! – will enable us to create human-machine systems with new emergent properties that are not only practically useful but also interesting from scientific and philosophical points of view? We will try to find answers to these questions.