Key findings
The work of this thesis has provided several novel findings and theoretical contributions, extending the body of knowledge in the domains of conceptual processing and embodied cognition.
First, Chapter 2 (Study 1) presented the study of S.-C. Chen et al. (2018), to which I contributed. This study revisited the object orientation effect, which has been influential in the study of sensorimotor simulation. The design comprised the classic sentence–picture verification task, with objects matching or mismatching the orientation implied by the preceding sentence on each trial. Furthermore, the study tracked the effect across 18 languages, and in Chapter 2 I offered some suggestions for future crosslinguistic studies in conceptual processing. The study also included an individual-differences measure of participants’ ability to mentally rotate objects, as previous research had suggested that this ability could act as a confound. The results presented neither a main effect of orientation match nor an interaction between orientation match and language or mental rotation ability. Taken together, and considered in light of previous non-replications, the present study supports the absence of the object orientation effect, even though, arguably, the jury always remains out in psychological science. In my view, both original results and replications are subject to questioning, and it is only through the accumulation of consistent findings that we can increase our certainty. Future research should examine why certain effects provide stronger support for the theory of embodied cognition than others. I suggest that one reason may be the nature of the independent variables: specifically, categorical variables such as those used in the present study may offer less statistical power than continuous variables (Cohen, 1983; Petilli et al., 2021). The present thesis allows a comparison of two such operationalisations of the embodied cognition theory. Study 1 used the object orientation effect, which implements a factorial design: the main independent variable is made up of categorical levels.
Interestingly, the action-sentence compatibility effect, which could not be replicated recently (Morey et al., 2022), also involves a factorial design. In contrast, Study 2 used continuous variables capturing the degree of visual information associated with words (among other variables). Thus, one question for future research is whether the nature of the independent variables (e.g., categorical versus continuous) could account for differences in replication success. In addition, future studies would benefit from power analyses to estimate adequate sample sizes, from a background in language typology to guide crosslinguistic comparisons, and from balanced sample sizes across all languages examined in a study, which allows the interpretation of any crosslinguistic differences. Three of the topics discussed in Chapter 2 were also addressed in Chapter 3: the role of sensorimotor simulation in conceptual processing, the role of individual differences, and the importance of statistical power. In addition to these topics, Study 2 incorporated the study of language-based information. Whereas sensorimotor simulation is characterised by detailed representations that tend to be linked to physical experience, language is characterised by abstract associations across networks of words. Research has suggested that language and simulation are compatible and complementary (Banks et al., 2021; Kiela & Bottou, 2014; Lam et al., 2015; Louwerse et al., 2015; Pecher et al., 1998; Petilli et al., 2021). Furthermore, Study 2 investigated what sample sizes are necessary to reliably examine several effects of interest in conceptual processing. The findings on all these topics are addressed below.
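Cohen’s (1983) point about categorical operationalisations can be illustrated with a minimal simulation (a hypothetical sketch, not an analysis from this thesis): median-splitting a continuous predictor, as a factorial design implicitly does, attenuates the observed effect and thereby lowers statistical power.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a continuous predictor (e.g., a word rating) and an outcome
# with a true linear effect of that predictor.
n = 10_000
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)

# Effect estimated from the continuous predictor
r_continuous = np.corrcoef(x, y)[0, 1]

# Dichotomise the predictor at the median, as a two-level factor would
x_split = (x > np.median(x)).astype(float)
r_dichotomised = np.corrcoef(x_split, y)[0, 1]

print(r_continuous, r_dichotomised)
```

In large samples, the correlation obtained with the median-split predictor shrinks to roughly 80% of that obtained with the continuous predictor (Cohen, 1983), so detecting the same underlying effect requires a larger sample.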
Second, in Study 2, we investigated the effects of language-based and vision-based information at the levels of individuals, words and tasks. The findings suggested that both language-based information and perceptual simulation contribute to the comprehension of words, consistent with a hybrid theory of conceptual processing centred on the interplay between language and embodiment (Barsalou et al., 2008; Connell & Lynott, 2014a; Louwerse, 2011). Importantly, language was far more influential than vision overall, consistent with previous research. The analyses implemented conservative models containing a comprehensive array of fixed effects, including covariates that competed against our variables of interest. Furthermore, the models included a maximal random-effects structure, consisting of random intercepts and slopes that accounted for far more variance than the fixed effects.
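As an illustration of the kind of model structure described above, the following sketch fits a mixed-effects model with a by-participant random intercept and random slope to simulated data. The variable names (rt, language_score, vision_score) and effect sizes are hypothetical, and the single random slope is a reduced analogue of the maximal random-effects structures used in the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subj, n_items = 40, 50
n_obs = n_subj * n_items
subj = np.repeat(np.arange(n_subj), n_items)

# Hypothetical predictors: language-based and vision-based information
language_score = rng.normal(size=n_obs)
vision_score = rng.normal(size=n_obs)

# By-participant random intercept and random slope for language_score
subj_intercept = rng.normal(0, 0.5, n_subj)[subj]
subj_slope = rng.normal(0, 0.2, n_subj)[subj]
rt = (subj_intercept + (0.4 + subj_slope) * language_score
      + 0.1 * vision_score + rng.normal(0, 1, n_obs))

data = pd.DataFrame(dict(rt=rt, subj=subj,
                         language_score=language_score,
                         vision_score=vision_score))

# Fixed effects for both predictors; random intercept and slope by subject
model = smf.mixedlm("rt ~ language_score + vision_score", data,
                    groups="subj", re_formula="~language_score")
fit = model.fit()
print(fit.params["language_score"])
```

The fitted fixed effect of language_score should recover the simulated value (0.4) while the random-effects structure absorbs the between-participant variance, mirroring the observation above that random intercepts and slopes can account for more variance than the fixed effects.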
Third, a ‘task-relevance advantage’ was identified in higher-vocabulary participants. Specifically, in lexical decision (Studies 2.1 and 2.3), higher-vocabulary participants were more sensitive to language-based information than lower-vocabulary participants. In contrast, in semantic decision (Study 2.2), higher-vocabulary participants were more sensitive to word concreteness. Crucially, to control for the confounding influence of general cognitive abilities other than vocabulary, the analyses in Studies 2.1 (semantic priming) and 2.2 (semantic decision) included general cognition covariates, namely attentional control and information uptake. In summary, the present findings suggest that greater linguistic experience may be associated with greater task adaptability during conceptual processing (Lim et al., 2020; Pexman & Yap, 2018).
Fourth, the semantic priming paradigm analysed in Study 2.1 revealed that both language and vision were more influential with the short SOA (200 ms) than with the long SOA (1,200 ms). This finding replicates part of the previous literature (Petilli et al., 2021) while highlighting the importance of the time course and the level of semantic processing. That is, although the finding seems at odds with the theory that perceptual simulation peaks after language-based associations (Barsalou et al., 2008; Louwerse & Connell, 2011), the long SOA may have been too long for perceptual simulation to be maintained in the semantically shallow lexical decision task performed by participants (Petilli et al., 2021). A follow-up on this issue is outlined in the last section below.
Fifth, a human-based measure of visual information, created using ratings (Lynott et al., 2020), was found to be superior to a computational measure created using neural networks (Petilli et al., 2021). This finding is consistent with a body of literature suggesting that human-based measures outperform computational ones (De Deyne et al., 2016, 2019; Gagné et al., 2016; Schmidtke et al., 2018; cf. Michaelov et al., 2022; Snefjella & Blank, 2020). Furthermore, the difference between these two measures resonates with the finding that simpler variables sometimes outperform more complex ones (Wingfield & Connell, 2022b). Last, future experimental or theoretical work could investigate whether using human-based measures to predict human behaviour poses a major problem of circularity that could (in part) invalidate the conclusions of such research (Petilli et al., 2021).
Sixth, we discussed the influence of analytical parameters such as the operationalisation of variables (reviewed above) and the degree of complexity of statistical models. ‘Accidents of history’ in a research field, that is, arbitrary circumstances, may influence important research conclusions. Consider a field in which years of work have been devoted to improving the precision of certain variables. This has been the case in computational psycholinguistics, with the creation of text-based variables such as Latent Semantic Analysis, Hyperspace Analogue to Language and ‘word2vec’ (De Deyne et al., 2013, 2016; Günther et al., 2016a, 2016b; M. N. Jones et al., 2006; Lund & Burgess, 1996; Mandera et al., 2017; Mikolov et al., 2013; Wingfield & Connell, 2022b). All else being equal, a competition between one of these heavily refined ‘supervariables’ and any non-engineered variable would have a likely winner: the supervariable. Nowadays, in conceptual processing, it is necessary to compare the role of text-based semantic variables, such as those above, to embodiment variables measuring the perceptual, motor, emotional or social information in words. Whereas text-based variables boast a history of steady, incremental improvement, embodiment variables are more recent and have not undergone such a process. While we do not think that this accident of history fully explains the superiority of language-based information in the present and many previous analyses, we think it would be valuable to continue reflecting on the confounding role of measurement instruments, and indeed to consider whether some engineering work, as it were, should be applied to embodiment variables too.
Seventh, we delved into the issue of statistical power, and reviewed recent findings suggesting that the sample sizes required for adequate analyses in topics of cognitive psychology and neuroscience far exceed the sample sizes we are used to. Furthermore, we calculated the sample size required to reliably approach several effects of interest in conceptual processing. For this purpose, we performed Monte Carlo simulations using the models from our main analyses. The results suggested that 300 participants were sufficient to examine the effect of language-based information contained in words, whereas more than 1,000 participants were necessary for the effect of vision-based information and for the interactions of both variables with vocabulary size, gender and presentation speed (i.e., SOA). These analyses of sample size requirements have ramifications for future research in conceptual processing. While the general pattern is that sample sizes should be increased, the findings also highlight the importance of considering which specific main effects or interactions are of greatest theoretical interest. For example, while it would likely take thousands of participants to examine the interaction between gender and language or vision, a much smaller sample would suffice to examine the interaction between vocabulary size and language or vision. Importantly, this power analysis was validated by the varying results obtained for the various effects: whereas language-based information required feasible sample sizes, the other effects required far larger samples. This puts the results into perspective, refuting the possibility that the analysis was overly conservative across the board. Furthermore, the large figures required for some effects are not entirely unprecedented. In a power analysis recently conducted in neuroscience, Marek et al. (2022) found that, to investigate the mapping between structural and functional individual differences, the appropriate sample size was around 10,000 participants, rather than the average of 25 participants. The results of our current power analysis call for further power analyses addressing a range of effects in conceptual processing. If large sample sizes reappear in those results, the onus will be on scientists to decide whether we can and should invest the funding required to achieve a sufficient sample size, or whether we must accept limited statistical power, with the reduced reliability it entails (Vasishth & Gelman, 2021). In this regard, the pioneering efforts invested in the design of citizen science studies are to be commended. A prime example is the Small World of Words project (De Deyne et al., 2019), which involves the (ongoing) collection of word association data. There are many challenges associated with citizen science designs (that is, experiments open to wide audiences that are not directly recruited or registered), such as the limited control over who participates and how. On the other hand, the strength of very large numbers might compensate for those challenges. Another difficulty of this approach is the scarcity of precedents. Most cognitive psychologists would currently require considerable external support to set up a study such as the Small World of Words, especially due to the computational server(s) required to run the experiment and store the data, as well as the large-scale hardware required to analyse such a large amount of data (namely, a high-performance computing cluster).
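The Monte Carlo logic behind such a power analysis can be sketched as follows: simulate many datasets at a given sample size, refit the model each time, and record how often the effect of interest reaches significance. The simple regression model and effect size below are illustrative placeholders, not the mixed-effects models or estimates from Study 2.

```python
import numpy as np
from scipy import stats

def power_for_n(n_participants, effect=0.1, n_sims=500, alpha=0.05, seed=0):
    """Estimate power by simulation: the proportion of simulated
    datasets in which the effect reaches significance."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # Hypothetical predictor (e.g., language-based information)
        x = rng.normal(size=n_participants)
        # Outcome with a small true effect
        y = effect * x + rng.normal(size=n_participants)
        result = stats.linregress(x, y)
        hits += result.pvalue < alpha
    return hits / n_sims

# Power grows with sample size for a fixed small effect
print(power_for_n(100), power_for_n(300), power_for_n(1000))
```

Running the sketch across a grid of sample sizes yields a power curve, from which the smallest sample reaching a target power (conventionally 80%) can be read off, analogous to the per-effect requirements reported above.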
References
Thesis: https://doi.org/10.17635/lancaster/thesis/1795.