Peter Brunner

Anatomical registration of intracranial electrodes. Robust model-based localization and deformable smooth brain-shift compensation methods

Abstract:

Background: Intracranial electrodes are typically localized from post-implantation CT artifacts. Automatic algorithms that can localize low signal-to-noise ratio artifacts and high-density electrode arrays are missing. Additionally, implantation of grids/strips introduces brain deformations, resulting in registration errors when fusing post-implantation CT and pre-implantation MR images. Brain-shift compensation methods project electrode coordinates to the cortex, but either fail to produce smooth solutions or do not account for brain deformations.

New methods: We first introduce GridFit, a model-based fitting approach that simultaneously localizes the CT artifacts of all electrodes in grids, strips, or depth arrays. Second, we present CEPA, a brain-shift compensation algorithm combining orthogonal-based projections, spring-mesh models, and spatial regularization constraints.

Results: We tested GridFit on ~6000 simulated scenarios. The localization of CT artifacts showed robust performance under difficult scenarios, such as noise, overlaps, and high-density implants (<1 mm errors). Validation with data from 20 challenging patients showed 99% accurate localization of the electrodes (3160/3192). We tested CEPA brain-shift compensation with data from 15 patients. Projections accounted for simple mechanical deformation principles with <0.4 mm errors. Inter-electrode distances changed smoothly across neighboring electrodes, while changes in inter-electrode distances increased linearly with projection distance.

Comparison with existing methods: GridFit succeeded in difficult scenarios that challenged available methods and outperformed visual localization by preserving the inter-electrode distance. CEPA registration errors were smaller than those obtained with well-established alternatives. Additionally, modeling resting-state high-frequency activity in five patients further supported CEPA.
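
The core of any brain-shift compensation method is projecting each electrode back onto the cortical surface. As a minimal illustration (not the CEPA implementation, which additionally couples electrodes with a spring-mesh model and spatial regularization), a nearest-point projection can be sketched in Python, with a toy flat surface standing in for a cortical mesh:

```python
import numpy as np

def project_to_surface(electrodes, surface_vertices):
    """Project each electrode to its nearest surface vertex.

    A minimal stand-in for the projection step of brain-shift
    compensation; the returned distance is the estimated shift.
    """
    # Pairwise distances, shape (n_electrodes, n_vertices)
    d = np.linalg.norm(electrodes[:, None, :] - surface_vertices[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    return surface_vertices[nearest], d.min(axis=1)

# Toy example: a flat "cortical surface" at z = 0 and electrodes
# displaced inward by a simulated brain shift.
surface = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
electrodes = np.array([[1.0, 1.0, -2.0], [2.0, 3.0, -1.5]])

projected, shift = project_to_surface(electrodes, surface)
print(projected)   # electrodes snapped onto the surface
print(shift)       # projection distances (the estimated shift)
```

A real pipeline would replace the toy vertex grid with a triangulated pial surface from the pre-implantation MR, and constrain neighboring electrodes so inter-electrode distances change smoothly, as the abstract describes.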

Authors:

  • Alejandro Omar Blenkmann

  • Sabine Liliana Leske

  • Anaïs Llorens

  • Jack J. Lin

  • Edward F. Chang

  • Peter Brunner

  • Gerwin Schalk

  • Jugoslav Ivanovic

  • Pål Gunnar Larsson

  • Robert Thomas Knight

  • Tor Endestad

  • Anne-Kristin Solbakk

Date: 2024

DOI: https://doi.org/10.1016/j.jneumeth.2024.110056

View PDF

Music can be reconstructed from human auditory cortex activity using nonlinear decoding models

Abstract:

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), identified a new STG subregion tuned to musical rhythm, and defined an anterior–posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications.

Authors:

  • Ludovic Bellier

  • Anaïs Llorens

  • Déborah Marciano

  • Aysegul Gunduz

  • Gerwin Schalk

  • Peter Brunner

  • Robert T. Knight

Date: 2023

DOI: https://doi.org/10.1371/journal.pbio.3002176

View PDF

Unexpected sound omissions are signaled in human posterior superior temporal gyrus: an intracranial study

Abstract:

Context modulates sensory neural activations, enhancing perceptual and behavioral performance and reducing prediction errors. However, when and where these high-level expectations act on sensory processing remains unclear. Here, we isolate the effect of expectation absent of any auditory evoked activity by assessing the response to omitted expected sounds. Electrocorticographic signals were recorded directly from subdural electrode grids placed over the superior temporal gyrus (STG). Subjects listened to a predictable sequence of syllables, with some infrequently omitted. We found high-frequency band activity (HFA, 70–170 Hz) in response to omissions, which overlapped with a posterior subset of auditory-active electrodes in STG. Heard syllables could be reliably distinguished from STG activity, but not the identity of the omitted stimulus. Both omission- and target-detection responses were also observed in the prefrontal cortex. We propose that the posterior STG is central for implementing predictions in the auditory environment. HFA omission responses in this region appear to index mismatch-signaling or salience detection processes.

Authors:

  • Hohyun Cho

  • Yvonne M Fonken

  • Markus Adamek

  • Richard Jimenez

  • Jack J Lin

  • Gerwin Schalk

  • Robert T Knight

  • Peter Brunner

Date: 2023

DOI: https://doi.org/10.1093/cercor/bhad155

View PDF

Encoding and decoding analysis of music perception using intracranial EEG

Abstract:

Music perception engages multiple brain regions; however, the neural dynamics of this core human experience remain elusive. We applied predictive models to intracranial EEG data from 29 patients listening to a Pink Floyd song. We investigated the relationship between the song spectrogram and the elicited high-frequency activity (70–150 Hz), a marker of local neural activity. Encoding models characterized the spectrotemporal receptive fields (STRFs) of each electrode, and decoding models estimated the population-level song representation. Both methods confirmed a crucial role of the right superior temporal gyrus (STG) in music perception. A component analysis on STRF coefficients highlighted overlapping neural populations tuned to specific musical elements (vocals, lead guitar, rhythm). An ablation analysis on decoding models revealed the presence of unique musical information concentrated in the right STG and more spatially distributed in the left hemisphere. Lastly, we provided the first song reconstruction decoded from human neural activity.
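
An encoding model of this kind fits a linear map from a lagged stimulus spectrogram to each electrode's high-frequency activity; the fitted weights form the electrode's STRF. A hedged sketch using ridge regression on simulated data (dimensions, names, and the regularization value are illustrative, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a song "spectrogram" (time x frequency) and one electrode's
# high-frequency activity, simulated as a lagged mixture of bands.
n_t, n_f, n_lags = 500, 8, 5
spec = rng.standard_normal((n_t, n_f))
true_strf = rng.standard_normal((n_lags, n_f)) * [[1], [2], [1], [0], [0]]

# Lagged design matrix: each row holds the current and past spectrogram frames.
X = np.zeros((n_t, n_lags * n_f))
for lag in range(n_lags):
    X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
hfa = X @ true_strf.ravel() + 0.1 * rng.standard_normal(n_t)

# Ridge regression: w = (X'X + aI)^-1 X'y
a = 1.0
w = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ hfa)
strf = w.reshape(n_lags, n_f)        # estimated spectrotemporal receptive field
pred = X @ w
r = np.corrcoef(pred, hfa)[0, 1]     # encoding accuracy (prediction correlation)
print(round(r, 3))
```

Decoding runs the same machinery in reverse: neural activity across electrodes becomes the design matrix, and each spectrogram band is the regression target.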

Authors:

  • Ludovic Bellier

  • Anaïs Llorens

  • Déborah Marciano

  • Gerwin Schalk

  • Peter Brunner

  • Robert T. Knight

  • Brian N. Pasley

Date: 2022

DOI: https://doi.org/10.1101/2022.01.27.478085

View PDF

Word pair classification during imagined speech using direct brain recordings

Abstract:

People who cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70–150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation of imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof-of-concept study for basic decoding of speech imagery, and delineate a number of key challenges to using speech imagery neural representations for clinical applications.
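
The idea of a time-aligned SVM kernel can be sketched with a dynamic-time-warping (DTW) distance turned into a precomputed kernel. This is a hedged illustration, not the paper's exact kernel: the Gaussian-of-DTW form used here is a common practical choice (and not guaranteed positive definite), and the simulated "high-gamma traces" are invented:

```python
import numpy as np
from sklearn.svm import SVC

def dtw(a, b):
    """Dynamic time warping distance between two 1-D feature traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(1)

def make_trial(word, n=40):
    # Hypothetical traces: two "words" with distinct envelopes,
    # jittered in time to mimic irregular speech production.
    t = np.linspace(0, 1, n)
    shape = np.sin(2 * np.pi * t) if word == 0 else np.sin(4 * np.pi * t)
    return np.roll(shape, rng.integers(-4, 5)) + 0.2 * rng.standard_normal(n)

X = [make_trial(w) for w in [0, 1] * 20]
y = np.array([0, 1] * 20)

def kernel(A, B, gamma=0.1):
    # Gaussian of the DTW distance, used as a precomputed SVM kernel.
    return np.exp(-gamma * np.array([[dtw(a, b) for b in B] for a in A]))

train, test = slice(0, 30), slice(30, 40)
clf = SVC(kernel="precomputed").fit(kernel(X[train], X[train]), y[train])
acc = (clf.predict(kernel(X[test], X[train])) == y[test]).mean()
print(acc)
```

Because the kernel absorbs the temporal misalignment, the classifier compares trial shapes rather than fixed time points, which is the motivation the abstract gives for the non-linear alignment.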

Authors:

  • Stéphanie Martin

  • Peter Brunner

  • Iñaki Iturrate

  • José del R. Millán

  • Gerwin Schalk

  • Robert T. Knight

  • Brian Pasley

Date: 2016

DOI: https://doi.org/10.1038/srep25803

View PDF


Decoding spectrotemporal features of overt and covert speech from the human cortex

Authors:

  • Stéphanie Martin

  • Peter Brunner

  • Chris Holdgraf

  • Hans-Jochen Heinze

  • Nathan E. Crone

  • Jochem W. Rieger

  • Gerwin Schalk

  • Robert T. Knight

  • Brian Pasley

Date: 2014

DOI: https://doi.org/10.3389/fneng.2014.00014

PubMed: 4034498

View PDF

Abstract:

Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used electrocorticographic (intracranial) recordings from epileptic patients performing an out-loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and post-central gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
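
The evaluation step described here, realigning a reconstruction with dynamic time warping before correlating it with the reference, can be sketched on toy 1-D traces (the signals and warp are invented for illustration; the study's features were spectrotemporal, not single traces):

```python
import numpy as np

def dtw_path(a, b):
    """Return index pairs aligning trace a to trace b (classic DTW backtrack)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack from (n, m) to (1, 1)
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

# Toy traces: a reference "overt" feature and a reconstruction that
# drifts in time, as covert speech has no audio to align against.
t = np.linspace(0, 1, 100)
overt = np.sin(2 * np.pi * 3 * t)
covert_recon = np.sin(2 * np.pi * 3 * t ** 1.3)   # time-warped copy

raw_r = np.corrcoef(overt, covert_recon)[0, 1]
idx = np.array(dtw_path(covert_recon, overt))
aligned_r = np.corrcoef(covert_recon[idx[:, 0]], overt[idx[:, 1]])[0, 1]
print(raw_r < aligned_r)   # alignment should raise the correlation
```

Without realignment, a good reconstruction that is merely shifted in time would score poorly; the warp separates timing error from content error.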

Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing

Authors:

  • Cristhian Potes

  • Peter Brunner

  • Aysegul Gunduz

  • Robert T. Knight

  • Gerwin Schalk

Date: 2014


View PDF

Abstract:

Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8–12 Hz) and high gamma (70–110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 10⁻⁸). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within the superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing.
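
The directional claim here rests on Granger causality: past high-gamma activity improves prediction of alpha, but not the reverse. A minimal numpy sketch of the bivariate idea on simulated signals (the coupling strength, lag, and variable names are invented for illustration, not the study's fitted models):

```python
import numpy as np

def granger_gain(x, y, order=2):
    """Variance-reduction ratio when past y is added to an AR model of x.

    Fit x(t) from its own lags, then from its own lags plus lags of y,
    and compare residual sums of squares; values well above 1 mean
    past y helps predict x (y "Granger-causes" x).
    """
    n = len(x)
    own = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    both = np.column_stack([own] + [y[order - k:n - k] for k in range(1, order + 1)])
    target = x[order:]
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return r @ r
    return rss(own) / rss(both)

rng = np.random.default_rng(2)
n = 2000
high_gamma = rng.standard_normal(n)
alpha = np.zeros(n)
for t in range(1, n):
    # Simulated coupling: alpha driven by earlier high-gamma activity.
    alpha[t] = 0.5 * alpha[t - 1] + 0.8 * high_gamma[t - 1] + 0.1 * rng.standard_normal()

gain_hg_to_a = granger_gain(alpha, high_gamma)   # past HG predicts alpha
gain_a_to_hg = granger_gain(high_gamma, alpha)   # past alpha predicts HG
print(gain_hg_to_a > gain_a_to_hg)               # expect True: HG -> alpha
```

A full analysis would turn the ratio into an F-statistic and choose the model order by an information criterion; the asymmetry of the two gains is what licenses the directional interpretation.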

Proceedings of the Third International Workshop on Advances in Electrocorticography

Authors:

  • Anthony Ritaccio

  • Michael Beauchamp

  • Conrado Bosman

  • Peter Brunner

  • Edward F. Chang

  • Nathan E. Crone

  • Aysegul Gunduz

  • Disha Gupta

  • Robert T. Knight

  • Eric Leuthardt

  • Brian Litt

  • Daniel Moran

  • Jeffrey G. Ojemann

  • Josef Parvizi

  • Nick F. Ramsey

  • Jochem W. Rieger

  • Jonathan Viventi

  • Bradley Voytek

  • Justin Williams

  • Gerwin Schalk

Date: 2012

DOI: 10.1016/j.yebeh.2012.09.016

PubMed: 23160096

View PDF

Abstract:

The Third International Workshop on Advances in Electrocorticography (ECoG) was convened in Washington, DC, on November 10–11, 2011. As in prior meetings, a true multidisciplinary fusion of clinicians, scientists, and engineers from many disciplines gathered to summarize contemporary experiences in brain surface recordings. The proceedings of this meeting serve as evidence of a very robust and transformative field but will yet again require revision to incorporate the advances that the following year will surely bring.