Reconstructing Speech from Human Auditory Cortex

Authors:

  • Brian Pasley

  • Stephen V. David

  • Nima Mesgarani

  • Adeen Flinker

  • Shihab A. Shamma

  • Nathan E. Crone

  • Robert T. Knight

  • Edward F. Chang

Date: 2012

DOI: 10.1371/journal.pbio.1001251

PubMed: 22303281

Abstract:

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
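To make the decoding approach concrete, below is a minimal sketch of linear stimulus reconstruction in the spirit described by the abstract: ridge regression from time-lagged multi-electrode neural activity (e.g., high-gamma power) to the channels of an auditory spectrogram. This is not the authors' implementation; the electrode count, lag window, regularization strength, and all function names are illustrative assumptions.

```python
"""
Sketch of linear stimulus reconstruction (spectrogram decoding) from
population neural activity. All sizes and parameters are placeholders.
"""
import numpy as np


def build_lagged_features(neural, n_lags):
    """Stack time-lagged copies of each electrode's activity.

    neural : (n_times, n_electrodes) array
    Returns a (n_times, n_electrodes * n_lags) design matrix.
    """
    n_times, n_elec = neural.shape
    lagged = np.zeros((n_times, n_elec * n_lags))
    for lag in range(n_lags):
        shifted = np.roll(neural, lag, axis=0)
        shifted[:lag] = 0.0  # zero-pad the samples wrapped in by roll
        lagged[:, lag * n_elec:(lag + 1) * n_elec] = shifted
    return lagged


def fit_linear_decoder(neural, spectrogram, n_lags=10, alpha=1.0):
    """Ridge regression from lagged neural activity to spectrogram channels."""
    X = build_lagged_features(neural, n_lags)
    Y = spectrogram  # (n_times, n_freq_channels)
    # Closed-form ridge solution: W = (X'X + alpha*I)^-1 X'Y
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ Y)


def reconstruct(neural, W, n_lags=10):
    """Apply fitted weights to held-out neural data to predict the spectrogram."""
    return build_lagged_features(neural, n_lags) @ W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neural = rng.standard_normal((2000, 64))  # 64 electrodes, toy data
    spec = rng.standard_normal((2000, 32))    # 32 spectrogram channels, toy data
    W = fit_linear_decoder(neural[:1500], spec[:1500])
    spec_hat = reconstruct(neural[1500:], W)
    # Score: per-channel correlation between reconstructed and actual spectrogram
    r = [np.corrcoef(spec_hat[:, k], spec[1500:, k])[0, 1] for k in range(32)]
    print(f"mean reconstruction r = {np.mean(r):.3f}")
```

Word identification of the kind mentioned in the abstract could then proceed, for example, by correlating a reconstructed spectrogram against candidate word spectrograms and picking the best match; that step is likewise only an interpretation of the described readout, not the paper's procedure.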