HearingHands | How Hands Help Us Hear

Summary
Human communication in face-to-face conversation is inherently multimodal, combining spoken language with a wealth of other cues, including hand gestures. Although most of our understanding of human language comes from unimodal research, the multimodal literature suggests that hand gestures are produced in close synchrony with speech prosody, aligning, for instance, with stressed syllables in free-stress languages like English. Furthermore, prosody plays a vital role in spoken word recognition in many languages, influencing core cognitive processes in speech perception such as lexical activation, segmentation, and recognition. Viewing gestural timing as an audiovisual prosody cue therefore raises the possibility that the temporal alignment of hand gestures with speech directly influences what we hear (e.g., distinguishing OBject from obJECT). Research to date, however, has largely overlooked this functional contribution of gestural timing to human communication. HearingHands therefore aims to uncover how gesture-speech coupling contributes to audiovisual communication in human interaction. Its objectives are:

- [WP1] to chart the PREVALENCE of gesture-speech coupling as a multimodal prominence cue in production and perception across a typologically diverse set of languages;
- [WP2] to capture the VARIABILITY in the production and perception of gesture-speech coupling in both neurotypical and atypical populations;
- [WP3] to determine the CONSTRAINTS that govern gestural timing effects in more naturalistic communicative settings.

These objectives will be achieved through cross-linguistic comparisons of gesture-speech production and perception, neuroimaging of multimodal integration in autistic and neurotypical individuals, and psychoacoustic tests of gestural timing effects using eye-tracking and virtual reality. HearingHands thus has the potential to revolutionize models of multimodal human communication, delineating how hands help us hear.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101040276
Start date: 01-09-2022
End date: 31-08-2027
Total budget: €1,499,988.00; public funding: €1,499,988.00
CORDIS data

Status: SIGNED
Call topic: ERC-2021-STG
Update date: 09-02-2023
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.1 European Research Council (ERC)
      HORIZON.1.1.0 Cross-cutting call topics
        ERC-2021-STG ERC STARTING GRANTS
      HORIZON.1.1.1 Frontier science
        ERC-2021-STG ERC STARTING GRANTS