Chris Holdgraf

Gender bias in academia: A lifetime problem that needs solutions

Summary:

Despite increased awareness of the lack of gender equity in academia and a growing number of initiatives to address issues of diversity, change is slow, and inequalities remain. A major source of inequity is gender bias, which has a substantial negative impact on the careers, work-life balance, and mental health of underrepresented groups in science. Here, we argue that gender bias is not a single problem but manifests as a collection of distinct issues that impact researchers’ lives. We disentangle these facets and propose concrete solutions that can be adopted by individuals, academic institutions, and society.

Authors:

  • Anaïs Llorens

  • Athina Tzovara

  • Ludovic Bellier

  • Ilina Bhaya-Grossman

  • Aurélie Bidet-Caulet

  • William K Chang

  • Zachariah R Cross

  • Rosa Dominguez-Faus

  • Adeen Flinker

  • Yvonne Fonken

  • Mark A Gorenstein

  • Chris Holdgraf

  • Colin W Hoy

  • Maria V Ivanova

  • Richard T Jimenez

  • Soyeon Jun

  • Julia WY Kam

  • Celeste Kidd

  • Enitan Marcelle

  • Deborah Marciano

  • Stephanie Martin

  • Nicholas E Myers

  • Karita Ojala

  • Anat Perry

  • Pedro Pinheiro-Chagas

  • Stephanie K Riès

  • Ignacio Saez

  • Ivan Skelin

  • Katarina Slama

  • Brooke Staveland

  • Danielle S Bassett

  • Elizabeth A Buffalo

  • Adrienne L Fairhall

  • Nancy J Kopell

  • Laura J Kray

  • Jack J Lin

  • Anna C Nobre

  • Dylan Riley

  • Anne-Kristin Solbakk

  • Joni D Wallis

  • Xiao-Jing Wang

  • Shlomit Yuval-Greenberg

  • Sabine Kastner

  • Robert T Knight

  • Nina F Dronkers

Date: 2021

DOI: https://doi.org/10.1016/j.neuron.2021.06.002



Rapid tuning shifts in human auditory cortex enhance speech intelligibility

Abstract:

Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed an STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement’ in understanding speech.
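For readers unfamiliar with STRF mapping, the sketch below illustrates the basic idea in simplified form: the neural response is modeled as a linear function of a time-lagged stimulus spectrogram, and the fitted weights, reshaped into a lag-by-frequency matrix, constitute the STRF. This is a minimal illustration with made-up array names (`spectrogram`, `high_gamma`) and arbitrary hyperparameters, not the analysis pipeline used in the paper.

```python
# Minimal STRF-estimation sketch: ridge regression of a neural response
# onto a time-lagged stimulus spectrogram. Illustrative only; array names,
# lag range, and regularization are placeholders, not the paper's settings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_times, n_freqs, n_lags = 5000, 32, 40   # e.g. 40 lags of 10 ms ~ 400 ms window

# Placeholder data: stimulus spectrogram (time x frequency) and one
# electrode's high-gamma response (time,). Replace with real data.
spectrogram = rng.standard_normal((n_times, n_freqs))
high_gamma = rng.standard_normal(n_times)

# Build the lagged design matrix: each row holds the spectrogram over the
# preceding n_lags samples, flattened to (n_freqs * n_lags) features.
X = np.zeros((n_times, n_freqs * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]

# Ridge regression keeps the high-dimensional fit stable; alpha is arbitrary here.
model = Ridge(alpha=1.0).fit(X, high_gamma)

# Reshape the weights into a (lags x frequencies) matrix: the estimated STRF.
strf = model.coef_.reshape(n_lags, n_freqs)
print(strf.shape)  # (40, 32)
```

In this framing, comparing STRFs fitted on responses to the degraded stimulus before and after exposure to relevant acoustic and linguistic context is one way to quantify the kind of rapid tuning shift described in the abstract.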



Authors:

  • Chris Holdgraf

  • Wendy de Heer

  • Brian Pasley

  • Jochem W. Rieger

  • Nathan E. Crone

  • Jack J. Lin

  • Robert T. Knight

  • Frédéric E. Theunissen

Date: 2016

DOI: 10.1038/ncomms13654



Decoding spectrotemporal features of overt and covert speech from the human cortex

Authors:

  • Stéphanie Martin

  • Peter Brunner

  • Chris Holdgraf

  • Hans-Jochen Heinze

  • Nathan E. Crone

  • Jochem W. Rieger

  • Gerwin Schalk

  • Robert T. Knight

  • Brian Pasley

Date: 2014

DOI: 10.3389/fneng.2014.00014

PMCID: PMC4034498


Abstract:

Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticographic (ECoG) recordings from epileptic patients performing out-loud and silent reading tasks. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10⁻⁵; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and postcentral gyri provided the highest reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy. These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate.
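As a rough illustration of the evaluation logic described above (fit a decoding model on overt speech, then score reconstructions by correlating predicted and original speech features), here is a minimal sketch using ridge regression. All array names, dimensions, and hyperparameters are placeholders, the dynamic-time-warping realignment used for the covert condition is omitted, and this is not the authors' code.

```python
# Minimal decoding-model sketch: ridge regression from high-gamma ECoG
# features to spectrogram features, scored by the correlation between
# original and reconstructed features. Placeholder data and parameters;
# the covert-condition DTW realignment step is omitted for brevity.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_elecs, n_freqs = 4000, 1000, 64, 32

# Placeholder data: time-by-electrode high-gamma power and the target
# spectrogram of the spoken (overt) speech, split into train/test sets.
hg_train = rng.standard_normal((n_train, n_elecs))
hg_test = rng.standard_normal((n_test, n_elecs))
spec_train = rng.standard_normal((n_train, n_freqs))
spec_test = rng.standard_normal((n_test, n_freqs))

# Fit one ridge decoder predicting all spectrogram features from neural data.
decoder = Ridge(alpha=1.0).fit(hg_train, spec_train)
spec_pred = decoder.predict(hg_test)

# Reconstruction accuracy: correlation between original and predicted
# values for each spectrogram feature, averaged across features.
r_per_feature = [pearsonr(spec_test[:, f], spec_pred[:, f])[0] for f in range(n_freqs)]
print(f"mean reconstruction r = {np.mean(r_per_feature):.3f}")
```

For the covert condition, the approach described in the abstract would insert a dynamic time warping step to realign the covert reconstruction with the corresponding overt speech features before computing the same correlation-based score.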