Summary
With the imminent introduction of artificial agents into our digital future, robots will become teammates, companions, and counterparts. However, it remains unclear whether people will interact with humanoid robots as social partners or use them as mere tools. NeuroMarkerHRI seeks to identify neural markers of the activation of social cognitive mechanisms during interactions with robots, based on hemodynamic responses analyzed with supervised machine learning (ML) models. This will determine whether the human brain interprets the behavior of a humanoid robot using social cognitive mechanisms (e.g., the mirror neuron system, theory of mind) or domain-general ones (e.g., attention, cognitive control). Following a hypothesis-driven, stepwise approach, the project will use functional near-infrared spectroscopy (fNIRS) to measure hemodynamic responses in brain regions associated with social cognition, theory of mind, and the perception of mental states during collaborative interactions with humanoid robots. Features extracted from the hemodynamic signal will be fed into the ML models to identify predictors of social versus domain-general cognitive processing. The project's interdisciplinary approach integrates well-established methods from cognitive neuroscience, advances in robotics, and state-of-the-art machine learning techniques to evaluate thoroughly the factors that modulate the activation of the social brain when exposed to humanoid robots. The outcomes will set standards for future research in human-robot interaction, offer designers a reliable tool to measure the effect of robot behavior on users, and advance the development of efficient human-robot collaboration. The project contributes to a human-centered development of technology and supports the digital transition in Europe, helping to unlock the potential of social, industrial, and commercial human-robot collaboration.
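As an illustration of the analysis pipeline described above, the sketch below shows how summary features of fNIRS hemodynamic responses (here, per-channel mean amplitude, peak amplitude, and slope, chosen as plausible examples rather than the project's actual feature set) could be fed to a supervised classifier separating trials from a social condition versus a domain-general condition. The data, feature choices, and classifier are assumptions for illustration only and are not taken from the project description.

# Minimal, illustrative sketch only: it assumes epoched fNIRS HbO data of shape
# (n_trials, n_channels, n_samples) and binary trial labels
# (0 = domain-general condition, 1 = social condition). Feature choices and the
# classifier are assumptions, not the project's specification.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 80, 16, 100
hbo = rng.normal(size=(n_trials, n_channels, n_samples))   # placeholder epochs
labels = rng.integers(0, 2, size=n_trials)                 # placeholder labels

def extract_features(epochs):
    # Per-channel summaries of the hemodynamic response: mean amplitude,
    # peak amplitude, and the slope of a least-squares line over the epoch.
    mean_amp = epochs.mean(axis=2)
    peak_amp = epochs.max(axis=2)
    t = np.arange(epochs.shape[2])
    slope = (epochs * (t - t.mean())).sum(axis=2) / ((t - t.mean()) ** 2).sum()
    return np.concatenate([mean_amp, peak_amp, slope], axis=1)

X = extract_features(hbo)

# Cross-validated accuracy of a simple linear classifier on the features;
# above-chance accuracy on real data would indicate that the hemodynamic
# signal carries information about the engaged cognitive mechanism.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("Mean CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())

On real data, the features would be computed from preprocessed fNIRS epochs (e.g., after motion-artifact correction and band-pass filtering) rather than from random placeholders.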
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/101063504 |
Start date: | 01-06-2023 |
End date: | 31-05-2025 |
Total budget - Public funding: | - 173 847,00 Euro |
Cordis data
Status: | SIGNED |
Call topic: | HORIZON-MSCA-2021-PF-01-01 |
Update Date: | 09-02-2023 |