Jochem W. Rieger

Grasp-specific high-frequency broadband mirror neuron activity during reach-and-grasp movements in humans

Abstract:

Broadly congruent mirror neurons, responding to any grasp movement, and strictly congruent mirror neurons, responding only to specific grasp movements, have been reported in single-cell studies with primates. Delineating grasp properties in humans is essential to understand the human mirror neuron system with implications for behavior and social cognition. We analyzed electrocorticography data from a natural reach-and-grasp movement observation and delayed imitation task with 3 different natural grasp types of everyday objects. We focused on the classification of grasp types from high-frequency broadband mirror activation patterns found in classic mirror system areas, including sensorimotor, supplementary motor, inferior frontal, and parietal cortices. Classification of grasp types was successful during movement observation and execution intervals but not during movement retention. Our grasp type classification from combined and single mirror electrodes provides evidence for grasp-congruent activity in the human mirror neuron system potentially arising from strictly congruent mirror neurons.
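The grasp-type decoding described in this abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' pipeline: it uses synthetic high-frequency broadband (HFB) power patterns, a simple nearest-centroid classifier, and leave-one-out cross-validation; all sizes and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 60 trials x 20 electrodes of HFB power,
# 3 grasp types (20 trials each). Purely synthetic data.
n_trials, n_elec, n_classes = 60, 20, 3
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
# Each grasp type gets a distinct mean HFB pattern plus trial noise.
patterns = rng.normal(0, 1, (n_classes, n_elec))
X = patterns[labels] + rng.normal(0, 0.5, (n_trials, n_elec))

def loo_nearest_centroid(X, y):
    """Leave-one-out CV with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        centroids = np.stack([X[mask & (y == c)].mean(axis=0)
                              for c in np.unique(y)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(X, labels)
print(f"decoding accuracy: {acc:.2f} (chance ~ {1 / n_classes:.2f})")
```

Above-chance accuracy in such a scheme indicates class-specific spatial patterns across electrodes, which is the logic behind the grasp-congruence claim.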

Authors:

  • Alexander M. Dreyer

  • Leo Michalke

  • Anat Perry

  • Edward F. Chang

  • Jack J. Lin

  • Robert T. Knight

  • Jochem W. Rieger

Date: 2022

DOI: 10.1093/cercor/bhac504


Rapid tuning shifts in human auditory cortex enhance speech intelligibility

Abstract:

Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed an STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
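STRF mapping of the kind mentioned in this abstract is commonly estimated by regularized linear regression of a neural response onto time-lagged spectrogram features. A minimal sketch under that assumption, with synthetic data and hypothetical dimensions (not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic STRF estimation: neural response = lagged spectrogram * filter.
n_t, n_freq, n_lags = 2000, 8, 10
spec = rng.normal(0, 1, (n_t, n_freq))           # stimulus spectrogram
true_strf = rng.normal(0, 1, (n_lags, n_freq))   # ground-truth filter

# Design matrix: each row holds the spectrogram over the last n_lags frames.
X = np.zeros((n_t, n_lags * n_freq))
for lag in range(n_lags):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

y = X @ true_strf.ravel() + rng.normal(0, 1, n_t)  # noisy response

# Ridge solution: w = (X^T X + a I)^{-1} X^T y
a = 10.0
w = np.linalg.solve(X.T @ X + a * np.eye(X.shape[1]), X.T @ y)
strf = w.reshape(n_lags, n_freq)

r = np.corrcoef(strf.ravel(), true_strf.ravel())[0, 1]
print(f"correlation with true STRF: {r:.2f}")
```

Plasticity analyses compare STRFs fit on different stimulus contexts; here only the basic estimation step is shown.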



Authors:

  • Chris Holdgraf

  • Wendy de Heer

  • Brian Pasley

  • Jochem W. Rieger

  • Nathan E. Crone

  • Jack J. Lin

  • Robert T. Knight

  • Frédéric E. Theunissen

Date: 2016

DOI: 10.1038/ncomms13654



Frontal and motor cortex contributions to response inhibition: evidence from electrocorticography

Abstract:

Changes in the environment require rapid modification or inhibition of ongoing behavior. We used the stop-signal paradigm and intracranial recordings to investigate response preparation, inhibition, and monitoring of task-relevant information. Electrocorticographic data were recorded in eight patients with electrodes covering frontal, temporal, and parietal cortex, and time-frequency analysis was used to examine power differences in the beta (13–30 Hz) and high-gamma bands (60–180 Hz). Over motor cortex, beta power decreased, and high-gamma power increased during motor preparation for both go trials (Go) and unsuccessful stops (US). For successful stops (SS), beta increased, and high-gamma was reduced, indexing the cancellation of the prepared response. In the middle frontal gyrus (MFG), stop signals elicited a transient high-gamma increase. The MFG response occurred before the estimated stop-signal reaction time but did not distinguish between SS and US trials, likely signaling attention to the salient stop stimulus. A postresponse high-gamma increase in MFG was stronger for US compared with SS and absent in Go, supporting a role in behavior monitoring. These results provide evidence for differential contributions of frontal subregions to response inhibition, including motor preparation and inhibitory control in motor cortex and cognitive control and action evaluation in lateral prefrontal cortex.
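The band-limited power measures in this abstract (beta, high-gamma) are typically obtained by band-pass filtering and taking the squared envelope of the analytic signal. A minimal numpy-only sketch with an FFT brick-wall filter and a synthetic channel; the signal, sampling rate, and band edges are illustrative, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def band_power(x, fs, lo, hi):
    """Instantaneous power in [lo, hi] Hz via an FFT brick-wall filter
    and the analytic signal (Hilbert transform)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, 1 / fs)
    spec = np.fft.fft(x)
    spec[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0.0
    filtered = np.real(np.fft.ifft(spec))
    # Analytic signal: keep DC/Nyquist, double positive frequencies.
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1
    analytic = np.fft.ifft(np.fft.fft(filtered) * h)
    return np.abs(analytic) ** 2

fs = 1000
t = np.arange(fs) / fs
# Synthetic channel: 20 Hz beta plus a 100 Hz high-gamma burst in the
# second half of the epoch, plus noise.
x = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.2, fs)
x[fs // 2:] += 2 * np.sin(2 * np.pi * 100 * t[fs // 2:])

hg = band_power(x, fs, 60, 180)
print(f"high-gamma power, first vs second half: "
      f"{hg[:fs // 2].mean():.2f} vs {hg[fs // 2:].mean():.2f}")
```

Condition contrasts (Go vs SS vs US) then compare such power time courses across trial types.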

Authors:

  • Y.M. Fonken

  • Jochem W. Rieger

  • Elinor Tzvi

  • Nathan E. Crone

  • Edward F. Chang

  • Josef Parvizi

  • Robert T. Knight

  • Ulrike M. Krämer

Date: 2016

DOI: 10.1038/srep25803



Support vector machine and hidden Markov model based decoding of finger movements using electrocorticography

Authors:

  • Tobias Wissel

  • Tim Pfeiffer

  • Robert Frysch

  • Robert T. Knight

  • Edward F. Chang

  • Hermann Hinrichs

  • Jochem W. Rieger

  • Georg Rose

Date: 2013

DOI: 10.1088/1741-2560/10/5/056020


Abstract:

Objective. Support vector machines (SVM) have developed into a gold standard for accurate classification in brain–computer interfaces (BCI). The choice of the most appropriate classifier for a particular application depends on several characteristics in addition to decoding accuracy. Here we investigate the implementation of hidden Markov models (HMM) for online BCIs and discuss strategies to improve their performance. Approach. We compare the SVM, serving as a reference, and HMMs for classifying discrete finger movements obtained from electrocorticograms of four subjects performing a finger tapping experiment. The classifier decisions are based on a subset of low-frequency time domain and high gamma oscillation features. Main results. We show that decoding optimization between the two approaches is due to the way features are extracted and selected and less dependent on the classifier. An additional gain in HMM performance of up to 6% was obtained by introducing model constraints. Comparable accuracies of up to 90% were achieved with both SVM and HMM with the high gamma cortical response providing the most important decoding information for both techniques. Significance. We discuss technical HMM characteristics and adaptations in the context of the presented data as well as for general BCI applications. Our findings suggest that HMMs and their characteristics are promising for efficient online BCIs.
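The HMM side of the comparison in this abstract can be sketched in a few lines: one Gaussian-emission, left-to-right HMM per movement class, with a trial assigned to the model that yields the higher forward-algorithm log-likelihood. This is not the paper's implementation; parameters are fixed by hand and the 1-D feature sequences are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_loglik(obs, means, trans, start, var=1.0):
    """Log-likelihood of a 1-D sequence under a Gaussian-emission HMM
    (forward algorithm in log space)."""
    log_b = -0.5 * ((obs[:, None] - means) ** 2 / var
                    + np.log(2 * np.pi * var))
    log_T = np.log(trans + 1e-12)          # avoid log(0)
    log_alpha = np.log(start + 1e-12) + log_b[0]
    for t in range(1, len(obs)):
        log_alpha = log_b[t] + np.logaddexp.reduce(
            log_alpha[:, None] + log_T, axis=0)
    return np.logaddexp.reduce(log_alpha)

# Two hypothetical classes with different state-mean trajectories
# (rest -> burst -> rest), sharing left-to-right transitions.
means_a = np.array([0.0, 2.0, 0.0])
means_b = np.array([0.0, -2.0, 0.0])
trans = np.array([[0.8, 0.2, 0.0],
                  [0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0]])
start = np.array([1.0, 0.0, 0.0])

# A trial generated from class A's trajectory.
obs = np.concatenate([rng.normal(0, 1, 5),
                      rng.normal(2, 1, 5),
                      rng.normal(0, 1, 5)])
ll_a = forward_loglik(obs, means_a, trans, start)
ll_b = forward_loglik(obs, means_b, trans, start)
print("classified as:", "A" if ll_a > ll_b else "B")
```

The model constraints mentioned in the abstract correspond to restricting the transition matrix (here, the left-to-right zero pattern), which reduces the number of paths the model must consider.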

Decoding spectrotemporal features of overt and covert speech from the human cortex

Authors:

  • Stéphanie Martin

  • Peter Brunner

  • Chris Holdgraf

  • Hans-Jochen Heinze

  • Nathan E. Crone

  • Jochem W. Rieger

  • Gerwin Schalk

  • Robert T. Knight

  • Brian Pasley

Date: 2014

DOI: 10.3389/fneng.2014.00014

PMCID: PMC4034498


Abstract:

Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used electrocorticography intracranial recordings from epileptic patients performing an out loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus, pre- and post-central gyrus provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
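The dynamic-time-warping realignment step described in this abstract can be sketched with the classic O(nm) DTW recursion followed by a correlation over the aligned pairs. The sequences below are synthetic sinusoids standing in for reconstructed and original speech features; nothing here reproduces the paper's data or parameters.

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D sequences;
    returns the optimal alignment path as (i, j) index pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the corner to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Hypothetical example: a reconstruction that lags the original signal.
t = np.linspace(0, 1, 50)
original = np.sin(2 * np.pi * 3 * t)
shifted = np.sin(2 * np.pi * 3 * (t - 0.08))

path = dtw_path(shifted, original)
ai, bi = zip(*path)
r = np.corrcoef(shifted[list(ai)], original[list(bi)])[0, 1]
print(f"correlation after DTW realignment: {r:.2f}")
```

Without realignment the lag deflates the correlation; after warping, the paired samples track each other closely, which is why the paper realigns before scoring covert reconstructions.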

Proceedings of the Third International Workshop on Advances in Electrocorticography


Authors:

  • Anthony Ritaccio

  • Michael Beauchamp

  • Conrado Bosman

  • Peter Brunner

  • Edward F. Chang

  • Nathan E. Crone

  • Aysegul Gunduz

  • Disha Gupta

  • Robert T. Knight

  • Eric Leuthardt

  • Brian Litt

  • Daniel Moran

  • Jeffrey G. Ojemann

  • Josef Parvizi

  • Nick F. Ramsey

  • Jochem W. Rieger

  • Jonathan Viventi

  • Bradley Voytek

  • Justin Williams

  • Gerwin Schalk

Date: 2012

DOI: 10.1016/j.yebeh.2012.09.016

PubMed: 23160096


Abstract:

The Third International Workshop on Advances in Electrocorticography (ECoG) was convened in Washington, DC, on November 10–11, 2011. As in prior meetings, a true multidisciplinary fusion of clinicians, scientists, and engineers from many disciplines gathered to summarize contemporary experiences in brain surface recordings. The proceedings of this meeting serve as evidence of a very robust and transformative field but will yet again require revision to incorporate the advances that the following year will surely bring.

Single trial discrimination of individual finger movements on one hand: A combined MEG and EEG study

Authors:

  • Fanny Quandt

  • Christoph Reichert

  • Hermann Hinrichs

  • Hans-Jochen Heinze

  • Robert T. Knight

  • Jochem W. Rieger

Date: 2011

DOI: 10.1016/j.neuroimage.2011.11.053

PubMed: 22155040


Abstract:

It is crucial to understand what brain signals can be decoded from single trials with different recording techniques for the development of Brain-Machine Interfaces. A specific challenge for non-invasive recording methods is activations confined to small spatial areas on the cortex such as the finger representation of one hand. Here we study the information content of single trial brain activity in non-invasive MEG and EEG recordings elicited by finger movements of one hand. We investigate the feasibility of decoding which of four fingers of one hand performed a slight button press. With MEG we demonstrate reliable discrimination of single button presses performed with the thumb, the index, the middle or the little finger (average over all subjects and fingers 57%, best subject 70%, empirical guessing level: 25.1%). EEG decoding performance was less robust (average over all subjects and fingers 43%, best subject 54%, empirical guessing level 25.1%). Spatiotemporal patterns of amplitude variations in the time series provided best information for discriminating finger movements. Non-phase-locked changes of mu and beta oscillations were less predictive. Movement related high gamma oscillations were observed in average induced oscillation amplitudes in the MEG but did not provide sufficient information about the finger's identity in single trials. Importantly, pre-movement neuronal activity provided information about the preparation of the movement of a specific finger. Our study demonstrates the potential of non-invasive MEG to provide informative features for individual finger control in a Brain-Machine Interface neuroprosthesis.
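The "empirical guessing level" quoted in this abstract (~25.1% rather than the nominal 25% for four classes) is typically estimated by re-running the decoder with shuffled labels. A hedged sketch of that idea with a toy nearest-centroid decoder on synthetic features; the decoder, data, and sizes are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)

def decode_accuracy(X, y):
    """Split-half train/test accuracy of a nearest-centroid classifier."""
    half = len(y) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in np.unique(y)])
    preds = np.argmin(((Xte[:, None] - cents) ** 2).sum(-1), axis=1)
    return (preds == yte).mean()

# Synthetic 4-class data: one class-informative feature per class.
n_trials, n_feat, n_classes = 200, 10, 4
y = rng.integers(0, n_classes, n_trials)
X = rng.normal(0, 1, (n_trials, n_feat))
X[np.arange(n_trials), y] += 1.5

true_acc = decode_accuracy(X, y)
# Null distribution: decode with randomly permuted labels.
null = [decode_accuracy(X, rng.permutation(y)) for _ in range(200)]
chance = float(np.mean(null))
print(f"accuracy {true_acc:.2f} vs empirical guessing level {chance:.2f}")
```

The permutation mean deviates slightly from the nominal 1/4 because of finite trial counts and classifier bias, which is exactly why an empirical rather than theoretical chance level is reported.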

Categorical speech representation in human superior temporal gyrus

Authors:

  • Edward F. Chang

  • Jochem W. Rieger

  • Keith Johnson

  • Mitchel S. Berger

  • Nicholas M. Barbaro

  • Robert T. Knight

Date: 2010

DOI: 10.1038/nn.2641

PubMed: 20890293


Abstract:

Speech perception requires the rapid and effortless extraction of meaningful phonetic information from a highly variable acoustic signal. A powerful example of this phenomenon is categorical speech perception, in which a continuum of acoustically varying sounds is transformed into perceptually distinct phoneme categories. We found that the neural representation of speech sounds is categorically organized in the human posterior superior temporal gyrus. Using intracranial high-density cortical surface arrays, we found that listening to synthesized speech stimuli varying in small and acoustically equal steps evoked distinct and invariant cortical population response patterns that were organized by their sensitivities to critical acoustic features. Phonetic category boundaries were similar between neurometric and psychometric functions. Although speech-sound responses were distributed, spatially discrete cortical loci were found to underlie specific phonetic discrimination. Our results provide direct evidence for acoustic to higher-order phonetic-level encoding of speech sounds in human language receptive cortex.
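Neurometric and psychometric functions of the kind compared in this abstract are commonly summarized by fitting a logistic curve to category-choice proportions along the stimulus continuum and comparing the fitted boundaries. A minimal sketch with synthetic choice data and a simple grid-search fit; the continuum length, trial counts, and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic continuum: 14 acoustically equal steps, 40 trials per step.
steps = np.arange(14)
true_boundary, true_slope = 6.5, 1.2
p = 1 / (1 + np.exp(-(steps - true_boundary) / true_slope))
choices = rng.binomial(40, p) / 40  # proportion of "category B" responses

def fit_logistic(x, y):
    """Grid-search fit of boundary and slope minimizing squared error."""
    best = (np.inf, None, None)
    for b in np.linspace(x.min(), x.max(), 200):
        for s in np.linspace(0.2, 5, 100):
            pred = 1 / (1 + np.exp(-(x - b) / s))
            err = ((pred - y) ** 2).sum()
            if err < best[0]:
                best = (err, b, s)
    return best[1], best[2]

boundary, slope = fit_logistic(steps, choices)
print(f"estimated category boundary: step {boundary:.1f}")
```

Fitting the same curve to behavioral choices and to classifier outputs from neural responses, then comparing the two boundary estimates, is the standard way to test whether neural and perceptual category boundaries coincide.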
