Chapter 10 Haemodynamics

Written by Vitória Piai and Stéphanie Riès

Neuronal activity demands oxygen and glucose, which are supplied by blood flowing to the brain (a relationship termed “neurovascular coupling”). By measuring the haemodynamic response, one therefore indirectly measures brain activity. The brain’s haemodynamic signal is predominantly measured with functional magnetic resonance imaging (fMRI), but also with positron emission tomography (PET) and near-infrared spectroscopy (NIRS). Temporal resolution is poor because the haemodynamic response unfolds over seconds to minutes, but spatial resolution is excellent for fMRI and relatively good for PET and NIRS.
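To see why temporal resolution is limited, consider the minimal sketch below (in Python). It convolves a brief burst of neural activity with a double-gamma response function, a common modelling convention whose parameters are used here purely for illustration and are not tied to any particular study. The measured signal peaks only seconds after the neural event.

```python
import numpy as np
from scipy.stats import gamma

# A commonly assumed double-gamma haemodynamic response function
# (peak around 5-6 s, small later undershoot); illustrative parameters.
def hrf(t, peak_delay=6, under_delay=16, undershoot_ratio=6):
    return gamma.pdf(t, peak_delay) - gamma.pdf(t, under_delay) / undershoot_ratio

# A brief "neural" event at t = 0 s, sampled every 0.1 s for 30 s.
dt = 0.1
t = np.arange(0, 30, dt)
neural = np.zeros_like(t)
neural[0] = 1.0  # one short burst of neural activity

# The measured signal is (approximately) the event convolved with the response function.
bold = np.convolve(neural, hrf(t))[: len(t)] * dt

print(f"Signal peaks {t[np.argmax(bold)]:.1f} s after the neural event")
# The response peaks roughly 5 s later, hence the poor temporal resolution.
```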

10.1 Conceptually driven word production

The first PET and fMRI studies investigating word production had participants generate verbs in response to nouns (e.g., saying “eat” to the stimulus apple) versus simply repeating the word apple (Petersen et al. 1988; McCarthy et al. 1993). Activity in left inferior frontal regions was increased for word generation relative to word repetition.
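The logic of this comparison is cognitive subtraction: any voxel more active for generation than for repetition is attributed to the processes that generation requires over and above repetition. The sketch below illustrates this with simulated data; the numbers, the number of “frontal” voxels, and the uncorrected threshold are purely illustrative assumptions, not values from the studies above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-subject activation estimates (e.g., GLM betas) for 20 subjects
# in 1000 voxels, for the two conditions of the same session.
n_subjects, n_voxels = 20, 1000
repeat = rng.normal(0.0, 1.0, (n_subjects, n_voxels))
generate = repeat.copy()
generate[:, :100] += 1.0                                   # extra activity in 100 "frontal" voxels
generate += rng.normal(0.0, 0.5, (n_subjects, n_voxels))   # measurement noise

# Cognitive subtraction: paired t-test of generation minus repetition in every voxel.
t_vals, p_vals = stats.ttest_rel(generate, repeat, axis=0)

# Voxels where generation reliably exceeds repetition (uncorrected threshold,
# for illustration only; real analyses correct for multiple comparisons).
active = np.where((p_vals < 0.001) & (t_vals > 0))[0]
print(f"{active.size} voxels show more activity for generation than repetition")
```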

One can also examine activity that is common to both picture naming and word generation, as both tasks require all the core stages of word production we saw in Part I: They differ only in their task-specific processes, for example those tied to stimulus modality (visual vs auditory word). Such an exercise, sketched below, reveals the involvement of regions in the left frontal and left temporal lobes. More specifically, the posterior IFG, ventral precentral gyrus, supplementary motor area, mid and posterior STG and MTG, posterior fusiform gyrus, and anterior insula have been found to be commonly activated by picture naming and verb generation (Indefrey and Levelt 2004). The exact function of each of these regions is, however, still debated. Careful experimentation and comparisons across tasks are needed to elucidate which of the processes we saw in Part I they underpin. Nevertheless, the current evidence indicates that the mid-to-posterior parts of the left MTG are involved in lexical selection, and that the left posterior MTG and the supramarginal gyrus are involved in phonological retrieval.
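The search for common activity can be sketched as a conjunction over two task contrasts. The minimum-statistic conjunction below is one common choice, shown here with simulated, purely illustrative t-maps and threshold; it is not necessarily the procedure used by Indefrey and Levelt (2004).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxelwise t-maps for two contrasts:
# picture naming > baseline, and verb generation > baseline.
n_voxels = 1000
t_naming = rng.normal(0, 1, n_voxels)
t_generation = rng.normal(0, 1, n_voxels)
shared = rng.choice(n_voxels, 50, replace=False)  # voxels truly active in both tasks
t_naming[shared] += 4
t_generation[shared] += 4

# Minimum-statistic conjunction: a voxel counts as "common" only if it
# exceeds the threshold in BOTH contrasts.
threshold = 3.1
common = np.where(np.minimum(t_naming, t_generation) > threshold)[0]
print(f"{common.size} voxels are active in both picture naming and verb generation")
```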

10.2 Overt production

When we produce words out loud, regardless of whether production starts from a concept (e.g., picture naming, verb generation) or from converting letters to sounds (as in reading aloud), the phonological code we saw in Section 2 is transformed into a motor plan to move the articulators, that is, the process of phonetic encoding (Section 3). The lowest (“ventral”) part of the precentral gyrus, together with primary and secondary motor areas, is associated with phonetic encoding and the movement of the articulators (e.g., jaw, lips, tongue).

When speaking aloud, you also hear yourself. With respect to speech monitoring (see Section 4), one way in which neuroscientists have sought to identify the brain regions involved is by manipulating auditory feedback during speech production: for example, by delaying when participants hear themselves relative to when they speak, by distorting their voice as they hear it, or by masking the sound of their voice so that they cannot hear themselves (McGuire, Silbersweig, and Frith 1996; Hashimoto and Sakai 2003; Fu et al. 2005; Tourville, Reilly, and Guenther 2008). These studies have shown that the left posterior STG is responsive to such manipulations: there is generally more activity in this region under abnormal auditory feedback than under normal auditory feedback during speech production. These findings have led researchers to propose that this brain region is important for the outer loop of speech monitoring. When people hear themselves speak normally, though, this region is not more active; in fact, it tends to be less active than when they are not speaking (Flinker et al. 2010).

Other brain regions have been associated with the inner loop of speech monitoring, including regions in the medial prefrontal cortex, the insula, the left inferior frontal cortex, and the cerebellum (Gauvin et al. 2016; Runnqvist et al. 2021; van de Ven, Esposito, and Christoffels 2009; Christoffels, Formisano, and Schiller 2007). These regions have been found to be more active when different responses compete for selection than when the response is easy to select. Importantly, electroencephalography (see Section 9) has revealed that the medial frontal cortex is more active just before an error is made than before a correct response (Riès et al. 2011). This differential activity starts before speech onset, meaning that it cannot reflect the outer loop and more likely reflects the inner loop of speech monitoring.
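As an illustration of one such feedback manipulation, the sketch below delays a (synthetic) speech signal before it is played back to the speaker, as in delayed-auditory-feedback paradigms. The sampling rate, the 200 ms delay, and the sine tone standing in for speech are illustrative assumptions, not the parameters of the studies cited above.

```python
import numpy as np

def delayed_feedback(mic_signal, fs, delay_ms):
    """Return the microphone signal shifted later by delay_ms, padded with silence."""
    n_delay = int(round(fs * delay_ms / 1000))
    return np.concatenate([np.zeros(n_delay), mic_signal])[: len(mic_signal)]

# One second of a 150 Hz tone standing in for the participant's voice, at 44.1 kHz.
fs = 44100
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 150 * t)

# What the participant hears over headphones: their own voice, 200 ms late.
heard = delayed_feedback(voice, fs, delay_ms=200)
```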

Take-home messages

  • Haemodynamic methods have good to excellent spatial resolution but poor temporal resolution
  • There is a general division of labour within the left hemisphere: Temporal lobe regions are mainly associated with the conceptual, lexical, and phonological stages, whereas frontal regions are mostly associated with phonetic encoding and articulation
  • The outer loop of speech monitoring has been associated with left posterior STG whereas the inner loop has been associated with the medial prefrontal cortex, the insula, the left inferior frontal cortex, and cerebellum
  • The different aspects of conceptually driven word production have not (yet) been uniquely associated with specific brain regions

Suggestions for further reading
The interested reader is referred to additional literature for reviews on language production using haemodynamic methods (Price 2012; de Zubicaray 2022; Kemmerer 2019; Indefrey and Levelt 2004; Indefrey 2011).

References

Christoffels, Ingrid K., Elia Formisano, and Niels O. Schiller. 2007. “Neural Correlates of Verbal Feedback Processing: An fMRI Study Employing Overt Speech.” Human Brain Mapping 28 (9): 868–79. https://doi.org/10.1002/hbm.20315.
de Zubicaray, Greig I. 2022. “The Neural Organisation of Language Production: Evidence from Neuroimaging and Neuromodulation.” In Cognitive Processes of Language Production, edited by K. Strijkers and R. Hartsuiker. Psychology Press/Routledge.
Flinker, A., E. F. Chang, H. E. Kirsch, N. M. Barbaro, N. E. Crone, and R. T. Knight. 2010. “Single-Trial Speech Suppression of Auditory Cortex Activity in Humans.” Journal of Neuroscience 30 (49): 16643–50. https://doi.org/10.1523/jneurosci.1809-10.2010.
Fu, Cynthia H. Y., Goparlen N. Vythelingum, Michael J. Brammer, Steve C. R. Williams, Edson Amaro, Chris M. Andrew, Lidia Yágüez, Neeltje E. M. van Haren, Kazunori Matsumoto, and Philip K. McGuire. 2005. “An fMRI Study of Verbal Self-Monitoring: Neural Correlates of Auditory Verbal Feedback.” Cerebral Cortex 16 (7): 969–77. https://doi.org/10.1093/cercor/bhj039.
Gauvin, Hanna S., Wouter De Baene, Marcel Brass, and Robert J Hartsuiker. 2016. “Conflict Monitoring in Speech Processing: An fMRI Study of Error Detection in Speech Production and Perception.” NeuroImage 126 (February): 96–105. https://doi.org/10.1016/j.neuroimage.2015.11.037.
Hashimoto, Yasuki, and Kuniyoshi L. Sakai. 2003. “Brain Activations During Conscious Self-Monitoring of Speech Production with Delayed Auditory Feedback: An fMRI Study.” Human Brain Mapping 20 (1): 22–28. https://doi.org/10.1002/hbm.10119.
Indefrey, Peter. 2011. “The Spatial and Temporal Signatures of Word Production Components: A Critical Update.” Frontiers in Psychology 2. https://doi.org/10.3389/fpsyg.2011.00255.
Indefrey, Peter, and W. J. M. Levelt. 2004. “The Spatial and Temporal Signatures of Word Production Components.” Cognition 92 (1-2): 101–44. https://doi.org/10.1016/j.cognition.2002.06.001.
Kemmerer, David. 2019. “From Blueprints to Brain Maps: The Status of the Lemma Model in Cognitive Neuroscience.” Language, Cognition and Neuroscience 34 (9): 1085–1116. https://doi.org/10.1080/23273798.2018.1537498.
McCarthy, G., A. M. Blamire, D. L. Rothman, R. Gruetter, and R. G. Shulman. 1993. “Echo-Planar Magnetic Resonance Imaging Studies of Frontal Cortex Activation During Word Generation in Humans.” Proceedings of the National Academy of Sciences 90 (11): 4952–56. https://doi.org/10.1073/pnas.90.11.4952.
McGuire, P. K., D. A. Silbersweig, and C. D. Frith. 1996. “Functional Neuroanatomy of Verbal Self-Monitoring.” Brain 119 (3): 907–17. https://doi.org/10.1093/brain/119.3.907.
Petersen, S. E., P. T. Fox, M. I. Posner, M. Mintun, and M. E. Raichle. 1988. “Positron Emission Tomographic Studies of the Cortical Anatomy of Single-Word Processing.” Nature 331 (6157): 585–89. https://doi.org/10.1038/331585a0.
Price, Cathy J. 2012. “A Review and Synthesis of the First 20 Years of PET and fMRI Studies of Heard Speech, Spoken Language and Reading.” NeuroImage 62 (2): 816–47. https://doi.org/10.1016/j.neuroimage.2012.04.062.
Riès, Stéphanie, Niels Janssen, Stéphane Dufau, F.-Xavier Alario, and Borís Burle. 2011. “General-Purpose Monitoring During Speech Production.” Journal of Cognitive Neuroscience 23 (6): 1419–36. https://doi.org/10.1162/jocn.2010.21467.
Runnqvist, Elin, Valérie Chanoine, Kristof Strijkers, Chotiga Pattamadilok, Mireille Bonnard, Bruno Nazarian, Julien Sein, et al. 2021. “Cerebellar and Cortical Correlates of Internal and External Speech Error Monitoring.” Cerebral Cortex Communications 2 (2). https://doi.org/10.1093/texcom/tgab038.
Tourville, Jason A., Kevin J. Reilly, and Frank H. Guenther. 2008. “Neural Mechanisms Underlying Auditory Feedback Control of Speech.” NeuroImage 39 (3): 1429–43. https://doi.org/10.1016/j.neuroimage.2007.09.054.
van de Ven, V., Fabrizio Esposito, and Ingrid K. Christoffels. 2009. “Neural Network of Speech Monitoring Overlaps with Overt Speech Production and Comprehension Networks: A Sequential Spatial and Temporal ICA Study.” NeuroImage 47 (4): 1982–91. https://doi.org/10.1016/j.neuroimage.2009.05.057.