Stephanie Martin

Imagined speech can be decoded from low- and cross-frequency intracranial EEG features

Abstract:

Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
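For intuition only, here is a minimal Python sketch of the kinds of features the abstract refers to: low-frequency band power, high-frequency activity, and local cross-frequency (phase-amplitude) coupling for a single ECoG channel. All band limits, the sampling rate, and the function names are illustrative assumptions, not the authors' pipeline.

    # Minimal sketch (assumptions, not the authors' pipeline): band power
    # and local phase-amplitude coupling features for one ECoG channel.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    FS = 1000.0  # sampling rate in Hz (assumed)

    def bandpass(x, lo, hi, fs=FS, order=4):
        """Zero-phase Butterworth bandpass filter."""
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def band_power(x, lo, hi):
        """Mean analytic-amplitude power in the [lo, hi] Hz band."""
        env = np.abs(hilbert(bandpass(x, lo, hi)))
        return np.mean(env ** 2)

    def pac_mvl(x, phase_band=(4, 8), amp_band=(70, 150)):
        """Local cross-frequency coupling: mean-vector-length coupling
        between low-frequency phase and high-frequency amplitude."""
        phase = np.angle(hilbert(bandpass(x, *phase_band)))
        amp = np.abs(hilbert(bandpass(x, *amp_band)))
        return np.abs(np.mean(amp * np.exp(1j * phase)))

    # Example: features for one simulated 2-second trial.
    rng = np.random.default_rng(0)
    trial = rng.standard_normal(int(2 * FS))
    features = {
        "theta_power": band_power(trial, 4, 8),
        "beta_power": band_power(trial, 13, 30),
        "high_gamma_power": band_power(trial, 70, 150),
        "theta_gamma_pac": pac_mvl(trial),
    }
    print(features)

In a decoder, per-trial features like these would be concatenated across electrodes and passed to a classifier that discriminates speech items; the paper's actual feature set and classifiers are described in its Methods.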

Authors:

  • Timothée Proix

  • Jaime Delgado Saa

  • Andy Christen

  • Stephanie Martin

  • Brian N. Pasley

  • Robert T. Knight

  • Xing Tian

  • David Poeppel

  • Werner K. Doyle

  • Orrin Devinsky

  • Luc H. Arnal

  • Pierre Mégevand

  • Anne-Lise Giraud

Date: 2021

DOI: https://doi.org/10.1038/s41467-021-27725-3

View PDF

Gender bias in academia: A lifetime problem that needs solutions

Summary:

Despite increased awareness of the lack of gender equity in academia and a growing number of initiatives to address issues of diversity, change is slow, and inequalities remain. A major source of inequity is gender bias, which has a substantial negative impact on the careers, work-life balance, and mental health of underrepresented groups in science. Here, we argue that gender bias is not a single problem but manifests as a collection of distinct issues that impact researchers’ lives. We disentangle these facets and propose concrete solutions that can be adopted by individuals, academic institutions, and society.

Authors:

  • Anaïs Llorens

  • Athina Tzovara

  • Ludovic Bellier

  • Ilina Bhaya-Grossman

  • Aurélie Bidet-Caulet

  • William K Chang

  • Zachariah R Cross

  • Rosa Dominguez-Faus

  • Adeen Flinker

  • Yvonne Fonken

  • Mark A Gorenstein

  • Chris Holdgraf

  • Colin W Hoy

  • Maria V Ivanova

  • Richard T Jimenez

  • Soyeon Jun

  • Julia WY Kam

  • Celeste Kidd

  • Enitan Marcelle

  • Deborah Marciano

  • Stephanie Martin

  • Nicholas E Myers

  • Karita Ojala

  • Anat Perry

  • Pedro Pinheiro-Chagas

  • Stephanie K Riès

  • Ignacio Saez

  • Ivan Skelin

  • Katarina Slama

  • Brooke Staveland

  • Danielle S Bassett

  • Elizabeth A Buffalo

  • Adrienne L Fairhall

  • Nancy J Kopell

  • Laura J Kray

  • Jack J Lin

  • Anna C Nobre

  • Dylan Riley

  • Anne-Kristin Solbakk

  • Joni D Wallis

  • Xiao-Jing Wang

  • Shlomit Yuval-Greenberg

  • Sabine Kastner

  • Robert T Knight

  • Nina F Dronkers

Date: 2021

DOI: https://doi.org/10.1016/j.neuron.2021.06.002

View PDF


Using Coherence-based spectro-spatial filters for stimulus features prediction from electro-corticographic recordings

Abstract:

The traditional approach in neuroscience relies on encoding models, where brain responses are related to different stimuli in order to establish dependencies. In decoding tasks, by contrast, brain responses are used to predict the stimuli, and the signals are traditionally assumed to be stationary within trials, which is rarely the case for natural stimuli. We hypothesize that a decoding model treating each experimental trial as a realization of a random process reflects the statistical properties of the underlying process better than the assumption of stationarity. Here, we propose a coherence-based spectro-spatial filter that allows stimulus features to be reconstructed from features of the brain signals. The proposed method extracts common patterns between features of the brain signals and the stimuli that produced them. These patterns, originating from different recording electrodes, are combined into a spatial filter that produces a unified prediction of the presented stimulus. This approach takes into account the frequency, phase, and spatial distribution of brain features, avoiding the need to manually predefine frequency bands of interest or phase relationships between stimulus and brain responses. Furthermore, the model requires no hyper-parameter tuning, significantly reducing its computational load. Using three different cognitive tasks (motor movements, speech perception, and speech production), we show that the proposed method consistently improves stimulus feature predictions in terms of correlation (group averages of 0.74 for motor movements, 0.84 for speech perception, and 0.74 for speech production) compared with methods based on regularized multivariate regression, probabilistic graphical models, and artificial neural networks. Furthermore, the model parameters reveal the anatomical regions and spectral components that were discriminant in the different cognitive tasks. This novel method not only provides a useful tool for addressing fundamental neuroscience questions but could also be applied to neuroprosthetics.
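As a conceptual illustration only, not the published algorithm: the Python sketch below builds a Wiener-style frequency-domain filter for each electrode from the cross-spectrum between that electrode's features and a stimulus feature, then averages the per-electrode predictions into a unified estimate. All names, the feature sampling rate, and the interpolation step are my assumptions.

    # Conceptual sketch (assumptions, not the published method): per-electrode
    # frequency-domain filters from neural/stimulus cross-spectra, combined
    # across electrodes into a single stimulus prediction.
    import numpy as np
    from scipy.signal import csd, welch

    FS = 100.0  # feature sampling rate in Hz (assumed)

    def fit_filters(X, y, fs=FS, nperseg=256):
        """X: (n_electrodes, n_samples) neural features; y: (n_samples,)
        stimulus feature. Returns one complex frequency response per
        electrode, H(f) = Sxy(f) / Sxx(f) (Wiener-style)."""
        filters = []
        for x in X:
            _, sxy = csd(x, y, fs=fs, nperseg=nperseg)
            _, sxx = welch(x, fs=fs, nperseg=nperseg)
            filters.append(sxy / (sxx + 1e-12))
        return np.array(filters)

    def predict(X, filters):
        """Apply each electrode's filter in the frequency domain and
        average the per-electrode stimulus predictions."""
        n = X.shape[1]
        preds = []
        for x, h in zip(X, filters):
            Xf = np.fft.rfft(x)
            # Interpolate the short Welch-grid filter onto the full FFT grid.
            src = np.linspace(0.0, 1.0, len(h))
            dst = np.linspace(0.0, 1.0, len(Xf))
            h_full = np.interp(dst, src, h.real) + 1j * np.interp(dst, src, h.imag)
            preds.append(np.fft.irfft(Xf * h_full, n=n))
        return np.mean(preds, axis=0)

    # Toy usage: 8 electrodes carrying noisy copies of the stimulus.
    rng = np.random.default_rng(1)
    y = rng.standard_normal(2048)
    X = y + 0.5 * rng.standard_normal((8, 2048))
    y_hat = predict(X, fit_filters(X, y))
    print("correlation:", np.corrcoef(y, y_hat)[0, 1])

The published method combines electrodes and frequencies in a coherence-driven, data-dependent way; the uniform average here is only a stand-in for that combination step.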

Authors:

  • Jaime Delgado Saa

  • Andy Christen

  • Stephanie Martin

  • Brian N Pasley

  • Robert T Knight

  • Anne-Lise Giraud

Date: 2020

DOI: https://doi.org/10.1038/s41598-020-63303-1

View PDF


The use of intracranial recordings to decode human language: Challenges and opportunities

Abstract:

Decoding speech from intracranial recordings serves two main purposes: understanding the neural correlates of speech processing and decoding speech features for targeting speech neuroprosthetic devices. Intracranial recordings have high spatial and temporal resolution, and thus offer a unique opportunity to investigate and decode the electrophysiological dynamics underlying speech processing. In this review article, we describe current approaches to decoding different features of speech perception and production – such as spectrotemporal, phonetic, phonotactic, semantic, and articulatory components – using intracranial recordings. A specific section is devoted to the decoding of imagined speech, and potential applications to speech prosthetic devices. We outline the challenges in decoding human language, as well as the opportunities in scientific and neuroengineering applications.

Authors:

  • Stephanie Martin

  • José del R. Millán

  • Robert T. Knight

  • Brian N. Pasley

Date: 2019

DOI: https://doi.org/10.1016/j.bandl.2016.06.003

View PDF


Decoding Inner Speech Using Electrocorticography: Progress and Challenges Toward a Speech Prosthesis

Abstract:

Certain brain disorders resulting from brainstem infarcts, traumatic brain injury, cerebral palsy, stroke, and amyotrophic lateral sclerosis limit verbal communication despite the patient being fully aware. People who cannot communicate due to neurological disorders would benefit from a system that can infer internal speech directly from brain signals. In this review article, we describe the state of the art in decoding inner speech, ranging from early acoustic sound features to higher-order speech units. We focus on intracranial recordings, as this technique allows brain activity to be monitored with high spatial, temporal, and spectral resolution, and is therefore a good candidate for investigating inner speech. Despite intense efforts, understanding how the human cortex encodes inner speech remains an elusive challenge, owing to the lack of behavioral and observable measures. We emphasize the challenges commonly encountered when investigating inner speech decoding and propose potential solutions for getting closer to a natural speech assistive device.

Authors:

  • Stephanie Martin

  • Iñaki Iturrate

  • José del R. Millán

  • Robert T. Knight

  • Brian N. Pasley

Date: 2018

DOI: https://doi.org/10.3389/fnins.2018.00422

View PDF