Summary
Today, most social media networks use automated tools to recommend content or products and to rank, curate and moderate posts. Recommender systems (RSs), and in particular group recommender systems (GRSs), a type of RS that recommends items to a group of users rather than to a single individual, are likely to become even more ubiquitous, with the market forecast to reach USD 16.13 billion by 2026.
These automated content governance tools are attracting growing scrutiny because neither the algorithms nor the decision-making processes behind the platforms are sufficiently transparent, with a negative impact on domains such as fair job opportunities, fair e-commerce and news exposure.
Fairness and explainability are two key requirements for building and maintaining users’ trust in AI systems while guaranteeing transparency. Yet, aside from some previous attempts to address both aspects in traditional (individual) RSs, they have hardly been explored in GRSs.
FIDELITY addresses this challenge by developing novel algorithms and computational tools for GRSs that strengthen explanation, fairness and the synergy between them, through a disruptive multidisciplinary research approach that: 1) brings SHAP and LIME, state-of-the-art post-hoc explanation approaches in AI, into RS and GRS contexts; 2) bridges explanation and fairness in RSs and GRSs, shifting the explanation paradigm from “why are the recommendations generated?” to “how fair are the generated recommendations?”; and 3) transversally evaluates the new methods through real-world GRSs and user studies. The ultimate goal is to increase user trust and to make RS output independent of users’ sociodemographic characteristics. The training programme, designed to fill the gaps between computer science, social research and business development, will provide the candidate with a multidisciplinary background that will boost his innovation potential and career prospects.
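The sketch below is a hypothetical illustration, not part of the project description: it shows how a simple GRS might aggregate the predicted ratings of individual members into a group recommendation, and how one could begin to probe the question “how fair are the generated recommendations?” with respect to a sociodemographic attribute. The data, the attribute labels and the averaging strategy are assumptions chosen for brevity; in FIDELITY the underlying recommendation model would additionally be explained with post-hoc tools such as SHAP or LIME.

```python
import numpy as np

# Hypothetical predicted ratings of 4 group members (rows) for 6 items (columns).
# In a real GRS these would come from an individual recommender model.
predicted_ratings = np.array([
    [4.5, 2.0, 3.5, 5.0, 1.5, 4.0],
    [3.0, 4.5, 2.5, 4.0, 2.0, 3.5],
    [5.0, 1.5, 4.0, 4.5, 3.0, 2.5],
    [2.5, 3.5, 3.0, 4.0, 4.5, 3.0],
])

# A common GRS aggregation strategy: average the members' predicted ratings per item.
group_scores = predicted_ratings.mean(axis=0)
top_k = np.argsort(group_scores)[::-1][:3]   # recommend the 3 highest-scoring items
print("Recommended items:", top_k.tolist())

# A very simple fairness probe: compare each member's satisfaction (their own rating of
# the recommended items) across an assumed binary sociodemographic attribute.
# A large gap suggests the group recommendation favours one subgroup over the other.
member_group = np.array([0, 0, 1, 1])        # illustrative attribute labels only
satisfaction = predicted_ratings[:, top_k].mean(axis=1)
gap = abs(satisfaction[member_group == 0].mean() - satisfaction[member_group == 1].mean())
print("Per-member satisfaction:", satisfaction.round(2).tolist())
print("Satisfaction gap between subgroups:", round(gap, 2))
```

A satisfaction gap close to zero suggests the group recommendation does not systematically favour one subgroup; a larger gap flags a case where the fairness-aware explanations the project targets would be most informative.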
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101106164
Start date: 01-05-2024
End date: 30-04-2026
Total budget - Public funding: 181 152,00 Euro
Cordis data
Status: SIGNED
Call topic: HORIZON-MSCA-2022-PF-01-01
Update Date: 31-07-2023