Scientific Reports

Human hippocampal pre-activation predicts behavior

ABSTRACT

The response to an upcoming salient event is accelerated when the event is expected given the preceding events – i.e., a temporal context effect. For example, naming a picture that follows a strongly constraining temporal context is faster than naming a picture that follows a weakly constraining one. We used sentences as naturalistic stimuli to manipulate expectations about upcoming pictures without prior training. Here, using intracranial recordings from the human hippocampus, we found more power in the high-frequency band prior to strongly expected pictures than prior to weakly expected ones. We applied pattern similarity analysis to the temporal pattern of high-frequency band activity in single hippocampal contacts. Greater similarity between the pre-picture interval and the expected-picture interval in the high-frequency band of the hippocampal field potentials predicted picture-naming latencies. Additional pattern similarity analysis indicated that the hippocampal representations follow a semantic map. The results suggest that hippocampal pre-activation of expected stimuli is a facilitating mechanism underlying the powerful contextual behavioral effect.
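The pattern similarity analysis described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the synthetic data, the interval lengths, and the use of Pearson correlation as the similarity measure are all assumptions for the sake of example.

```python
import numpy as np

def pattern_similarity(interval_a, interval_b):
    """Pearson correlation between two temporal patterns of
    high-frequency band power from a single contact (z-score
    each pattern, then average the pointwise products)."""
    a = np.asarray(interval_a, dtype=float)
    b = np.asarray(interval_b, dtype=float)
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Illustrative synthetic data: 200 time samples of band power.
rng = np.random.default_rng(0)
template = rng.standard_normal(200)            # "picture interval" pattern
pre_activation = template + 0.5 * rng.standard_normal(200)  # pre-interval resembling it
unrelated = rng.standard_normal(200)           # pre-interval with no resemblance

# Pre-activation of the expected pattern yields higher similarity.
print(pattern_similarity(pre_activation, template) >
      pattern_similarity(unrelated, template))
```

The hypothesized link to behavior is that trials with higher pre-picture-to-picture similarity would show faster naming latencies.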

AUTHORS

  • Robert T. Knight

  • Anna Jafarpour

  • Vitoria Piai

  • Jack J. Lin

Date: 2017

DOI: 10.1038/s41598-017-06477-5



Word pair classification during imagined speech using direct brain recordings

ABSTRACT

People who cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70–150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word pairs was significant across five subjects (mean = 58%; p < 0.05). We also compared classification accuracy between imagined speech, overt speech and listening. As predicted, higher classification accuracy was obtained in the listening and overt speech conditions (mean = 89% and 86%, respectively; p < 0.0001), where speech stimuli were directly presented. The results provide evidence for a neural representation of imagined words in the temporal lobe, frontal lobe and sensorimotor cortex, consistent with previous findings in speech perception and production. These data represent a proof-of-concept study for basic decoding of speech imagery, and delineate a number of key challenges to the use of speech imagery neural representations for clinical applications.
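The idea of embedding a non-linear time alignment into an SVM kernel can be sketched as below. The DTW recursion, the Gaussian kernel over DTW distances, the toy two-class signals, and all parameter values are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.svm import SVC

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D feature
    series, absorbing temporal irregularities across trials."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_kernel(X, Y, gamma=0.01):
    """Gaussian kernel over pairwise DTW distances (one common
    way to build an alignment-aware SVM kernel)."""
    K = np.zeros((len(X), len(Y)))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-gamma * dtw_distance(x, y) ** 2)
    return K

# Toy two-class data: time-shifted, noisy versions of two "word" templates.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
def trial(word, shift):
    freq = 3 if word else 5
    return np.sin(2 * np.pi * freq * (t - shift)) + 0.1 * rng.standard_normal(t.size)

X_train = [trial(w, s) for w in (0, 1) for s in (0.0, 0.05, 0.1)]
y_train = [0, 0, 0, 1, 1, 1]
X_test = [trial(0, 0.07), trial(1, 0.02)]

clf = SVC(kernel="precomputed")
clf.fit(dtw_kernel(X_train, X_train), y_train)
pred = clf.predict(dtw_kernel(X_test, X_train))
print(pred)
```

One caveat worth noting: a Gaussian of DTW distances is not guaranteed to be a positive semi-definite kernel, so in practice such kernels are used with care or replaced by provably valid alignment kernels.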

AUTHORS

  • Stéphanie Martin

  • Peter Brunner

  • Iñaki Iturrate

  • José del R. Millán

  • Gerwin Schalk

  • Robert T. Knight

  • Brian Pasley

Date: 2016

DOI: 10.1038/srep25803
