Summary
Interpreting text in the context of other texts is very hard: it requires understanding fine-grained semantic relationships between documents, known as intertextual relationships. Such understanding is critical in many areas of human activity, including research, business, and journalism. However, finding and interpreting intertextual relationships and tracing information across heterogeneous sources remains a tedious manual task. Natural language processing (NLP) fails to support it adequately: mainstream NLP treats texts as static, isolated entities, and existing approaches to cross-document understanding focus on narrow use cases and lack a common theoretical foundation. Data is scarce and difficult to create, and the field lacks a principled framework for modelling intertextuality.
InterText breaks new ground by proposing the first general framework for studying intertextuality in NLP. We instantiate the framework in three intertextuality types: inline commentary, implicit linking, and semantic versioning, and produce new datasets and generalizable models for each of them. Rather than treating text as a flat sequence of words, we introduce a new data model that naturally reflects document structure and cross-document relationships, and use it to create novel, intertextuality-aware neural representations of text. Whereas prior work ignores the similarities between different types of intertextuality, we target their synergies, offering solutions that scale to a wide range of tasks and across domains. To enable modular and efficient transfer learning, we propose new document-level adapter-based architectures. We investigate the integrative properties of our framework in two case studies: academic peer review and conspiracy theory debunking. InterText creates a solid research platform for intertextuality-aware NLP, crucial for managing today's dynamic, interconnected digital discourse.
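The abstract names a structure-aware data model and adapter-based architectures without detailing them. Below is a minimal, hypothetical sketch of what such a data model could look like: typed intertextual links anchored to structural units of documents. All class and field names (Node, Document, IntertextEdge) are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class IntertextType(Enum):
    # The three intertextuality types instantiated by the project.
    INLINE_COMMENTARY = "inline_commentary"
    IMPLICIT_LINK = "implicit_link"
    SEMANTIC_VERSION = "semantic_version"


@dataclass
class Node:
    # A structural unit of a document (e.g. section, paragraph, sentence).
    node_id: str
    text: str
    children: list["Node"] = field(default_factory=list)


@dataclass
class Document:
    doc_id: str
    root: Node


@dataclass
class IntertextEdge:
    # A typed, fine-grained link between units of two documents.
    source_doc: str
    source_node: str
    target_doc: str
    target_node: str
    kind: IntertextType


# Example: a reviewer comment anchored to a paragraph of a paper
# (the peer-review case study mentioned in the abstract).
paper = Document("paper-1", Node("p1", "We propose a new method ..."))
review = Document("review-1", Node("r1", "The claim in Section 1 is unsupported."))
comment_link = IntertextEdge("review-1", "r1", "paper-1", "p1",
                             IntertextType.INLINE_COMMENTARY)
```

Likewise, the "document-level adapter-based architectures" are not specified in the abstract; the sketch below shows only the generic bottleneck-adapter pattern (down-project, nonlinearity, up-project, residual) that such architectures typically build on, written in PyTorch. It is an assumption, not the project's design.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    # Generic adapter block: only these few parameters are trained,
    # while the backbone transformer stays frozen.
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the backbone's representation intact.
        return x + self.up(self.act(self.down(x)))


adapter = BottleneckAdapter(hidden_size=768)
out = adapter(torch.randn(1, 128, 768))  # (batch, tokens, hidden)
```

Training one small adapter per intertextuality type, rather than one monolithic model, would make transfer across tasks modular and cheap, which appears to be the property the abstract highlights.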
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101054961
Start date: 01-04-2023
End date: 31-03-2028
Total budget: 2 499 721,00 Euro
Public funding: 2 499 721,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2021-ADG
Update date: 09-02-2023