HUMANads | Human Ads: Towards Fair Advertising in Content Monetization on Social Media

Summary
Human ads are Internet influencers who earn revenue by creating and monetizing authentic, relatable advertising content for their armies of followers, relying on business models such as influencer and affiliate marketing. This content often results not only in commercial but also in political (hidden) ads, which look the same, are posted by the same persons, are displayed in the same digital spaces to the same audiences, and raise the same transparency issues. In this environment, consumers and citizens can no longer distinguish between ads and non-ads, or between commercial and political communications. They face a double transparency problem: (i) human ads have incentives to hide their commercial interests, and (ii) platforms have incentives to algorithmically amplify engagement with human ads in opaque ways. This reflects a general problem of good faith and fair dealing: the social media economy is increasingly based on deceit, which creates new forms of vulnerability for consumers and citizens on digital markets. Given its complexity and relative novelty, this phenomenon has not yet been the object of sustained academic or regulatory inquiry.

HUMANads tackles this research gap by exploring why a general European legal regime on fair advertising by human ads on social media platforms is necessary, and what it would entail. First, it articulates a new theory of fair advertising in EU consumer law, in the context of content monetization by human ads across commercial and political speech. Second, it gathers evidence on business models, advertising prevalence and legal uncertainty through innovative interdisciplinary methods, including digital ethnography, comparative law and natural language processing (NLP). Third, it proposes criteria for assessing the resulting consumer harms and translates them into a new normative governance model that mandates more stringent transparency obligations for social media platforms.
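As a purely illustrative sketch of the kind of NLP-based measurement of advertising prevalence mentioned above, the snippet below estimates how often post captions carry an explicit ad-disclosure marker. The hashtag list, the sample captions and the approach itself are assumptions made for demonstration; they are not the project's documented methodology.

```python
# Illustrative sketch only: a very simple way to estimate how often influencer
# posts carry an explicit ad disclosure. The marker list and sample captions
# are hypothetical and do not come from the HUMANads project.
import re
from typing import Iterable

# Hypothetical disclosure markers commonly seen on social media.
DISCLOSURE_PATTERNS = [
    r"#ad\b", r"#sponsored\b", r"#advert(isement)?\b",
    r"\bpaid partnership\b", r"#gifted\b",
]
DISCLOSURE_RE = re.compile("|".join(DISCLOSURE_PATTERNS), re.IGNORECASE)

def disclosure_rate(captions: Iterable[str]) -> float:
    """Return the share of captions containing at least one disclosure marker."""
    captions = list(captions)
    if not captions:
        return 0.0
    disclosed = sum(1 for c in captions if DISCLOSURE_RE.search(c))
    return disclosed / len(captions)

if __name__ == "__main__":
    # Made-up example captions, not real data.
    sample = [
        "Loving my new skincare routine! #ad @brand",
        "Morning run with the crew, no filter needed",
        "Paid partnership with @coffeehouse, use my code for 10% off",
    ]
    print(f"Estimated disclosure rate: {disclosure_rate(sample):.0%}")
```

A real study would of course need far more than keyword matching (e.g. handling undisclosed ads, multiple languages and platform-specific disclosure labels), which is presumably where the project's interdisciplinary methods come in.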
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101041824
Start date: 01-07-2022
End date: 30-06-2027
Total budget: EUR 1 500 000,00 (public funding: EUR 1 500 000,00)
Cordis data

Status: SIGNED
Call topic: ERC-2021-STG
Update date: 09-02-2023