Summary
PROBLEM: Despite decades of research on human-computer interaction (HCI), predicting human performance with a given user interface (UI) remains an unsolved problem. Existing computational models are limited in scope or require large datasets or expert input; in practice, designers fall back on empirical methods that are costly and error-prone.
OBJECTIVE: This project establishes the foundations of simulation intelligence in HCI through computational rationality. Given a design and a task environment, a simulator autonomously generates human-like moment-to-moment behavior, from which key metrics can be computed, e.g. for learning, performance, and ergonomics. "Artificial users" can be commanded using natural language, without modeling expertise.
APPROACH: We seek a breakthrough via the theory of computational rationality, which would dramatically expand the scope and actionability of these models. The theory posits that interactive behavior is an emergent consequence of a control policy adapted to internal bounds (cognition) and rewards. While previous work has demonstrated progress, its scope has been limited to simple sensorimotor tasks, and it has required reward engineering. This project will study the principles of computationally rational agents that learn skills like humans do, operate autonomously, and can nonetheless be commanded via natural language. We design a workflow for building dramatically larger models via self-supervised pretraining.
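To make the core idea concrete, below is a minimal, hypothetical sketch (our illustration, not the project's implementation) of a computationally rational agent: tabular Q-learning adapts a control policy to an internal bound (noisy perception of a target) and a reward (a per-step time cost) in a 1-D pointing task. All names, parameters, and the task itself are assumptions chosen for illustration.

```python
# Hypothetical sketch of a computationally rational agent (illustration only):
# behavior emerges from a control policy optimized under an internal bound
# (noisy perception) and a reward (reach the target, minimize time).
import numpy as np

rng = np.random.default_rng(0)
N = 41                              # 1-D cursor positions 0..40
PERCEPT_NOISE = 2.0                 # std. dev. of perceived offset (internal bound)
ACTIONS = [-8, -2, -1, +1, +2, +8]  # ballistic and corrective movements
OFFSETS = np.arange(-N + 1, N)      # possible perceived offsets, -40..40

Q = np.zeros((len(OFFSETS), len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def perceive(pos, target):
    """Noisy observation of the target offset: the agent never sees the truth."""
    noisy = round(target - pos + rng.normal(0, PERCEPT_NOISE))
    return int(np.clip(noisy, -N + 1, N - 1)) + N - 1  # index into OFFSETS

for episode in range(20000):
    pos, target = int(rng.integers(N)), int(rng.integers(N))
    for step in range(60):
        s = perceive(pos, target)
        # epsilon-greedy policy over noisy observations (a crude POMDP approximation)
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        pos = int(np.clip(pos + ACTIONS[a], 0, N - 1))
        done = pos == target
        r = 10.0 if done else -1.0  # reward: reach the target, pay a time cost per step
        s2 = perceive(pos, target)
        Q[s, a] += alpha * (r + (0.0 if done else gamma * Q[s2].max()) - Q[s, a])
        if done:
            break
```

Trained this way, the policy tends to take large ballistic jumps when the perceived offset is large and small corrective steps near the target, so speed-accuracy patterns reminiscent of human pointing can emerge from the bound-plus-reward specification rather than being hand-coded.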
IMPACT: We develop a strong complement to existing evaluation methods in HCI. The project pushes generative modeling in HCI forward by combining theory-driven causal assumptions about people with modern ML, and deploys the resulting models directly in high-fidelity simulators. This allows novel task environments to be handled with higher accuracy, and it will be a leap forward for applications in design and engineering (rapid evaluation) and in ML (realistic synthetic data) for HCI.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101141916
Start date: 01-10-2024
End date: 30-09-2029
Total budget / public funding: EUR 2 499 208,00 / EUR 2 499 208,00
Cordis data
Status: SIGNED
Call topic: ERC-2023-ADG
Update date: 26-11-2024