HAMLU | How does human agency shape machine learning understanding?

Summary
Human agency over machine learning (ML) models refers to individuals' ability to monitor and act on the behavior of a trained ML model. In practice, ML experts exercise strong agency over ML systems, from choosing training data to monitoring performance metrics; beyond them, few other stakeholders are granted the capacity to act on the system. HAMLU investigates how human agency over ML systems shapes our understanding of ML models. Do users understand an ML model better when they can actively explore its predictions and shape its training data, rather than passively review data or explanations? Addressing this question is crucial, as overlooked faulty behavior in deployed ML models can harm European citizens in decision-making processes (e.g., recruitment, justice) and high-stakes applications (e.g., self-driving vehicles, anomaly detection).

At the intersection of cognitive psychology, human-computer interaction (HCI), and ML, I will conduct behavioral experiments in which human participants interact with ML systems under varying levels of agency over the model and its underlying training data. The experiments will systematically measure participants' understanding of the model under consideration. I will critically scrutinize factors likely to mediate the agency-understanding relationship, namely users' prior ML knowledge, task complexity, and data modality. In addition, I will explore how user agency and understanding translate into a sense of responsibility for model deployment, a pivotal question in the AI ethics discourse.

With two years of training at the Cognition Value Behavior (CVBE) lab at Ludwig Maximilian University (LMU) of Munich, under the supervision of Prof. Deroy, who possesses extensive and complementary experience, I will be able to consider crucial cognitive processes and test fine-grained hypotheses in cognitive psychology, boosting my prospects of becoming an HCI research group leader in Europe.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101153987
Start date: 01-01-2025
End date: 31-12-2026
Total budget (public funding): 173 847,00 EUR

Status: SIGNED
Call topic: HORIZON-MSCA-2023-PF-01-01
Update date: 22-11-2024
Structured mapping
Horizon Europe
  HORIZON.1 Excellent Science
    HORIZON.1.2 Marie Skłodowska-Curie Actions (MSCA)
      HORIZON.1.2.0 Cross-cutting call topics
        HORIZON-MSCA-2023-PF-01
          HORIZON-MSCA-2023-PF-01-01 MSCA Postdoctoral Fellowships 2023