INTERACT | Interactive Machine Learning for Compositional Models of Natural Language

Summary
INTERACT will develop new Interactive Learning Algorithms (ILAs), motivated by applications in Natural Language Understanding (NLU). The main assumptions behind supervised approaches are unrealistic: large collections of annotated data are necessary to achieve good performance, yet most NLU applications have unique information needs that existing annotated collections do not cover. INTERACT follows a collaborative machine learning paradigm that breaks the distinction between annotation and training. We focus on compositional latent-state models (CLSMs) because natural language is rich, complex and compositional. To reduce the amount of human feedback necessary for learning CLSMs we must eliminate annotation redundancy. We argue that, in the context of CLSMs, this requires combining (1) an optimal human feedback strategy with (2) the induction of a latent structure of parts in the compositional domain. Annotation effort will be minimized because the method will only request representative feedback from each latent class. INTERACT marries representation learning (i.e. of parts) with active learning for CLSMs.
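To make the redundancy-elimination idea concrete, here is a minimal Python sketch under assumed inputs: off-the-shelf k-means stands in for whatever latent-class induction the project would actually use, and the teacher is queried only at one representative item per class. The function name `representatives` and the feature matrix are illustrative, not from the project.

```python
# A minimal sketch (assumed setup, not the project's algorithm) of requesting
# one representative label per latent class: cluster unlabeled items and query
# the teacher only at items nearest each cluster centroid, avoiding redundant
# annotations of near-duplicate items.
import numpy as np
from sklearn.cluster import KMeans

def representatives(X: np.ndarray, n_classes: int, seed: int = 0) -> np.ndarray:
    """Return the index of the item nearest each cluster centroid."""
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit(X)
    idx = []
    for k in range(n_classes):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[k], axis=1)
        idx.append(members[np.argmin(dists)])
    return np.array(idx)

# Toy usage: 100 feature vectors, ask the teacher for only 5 labels.
X = np.random.default_rng(0).normal(size=(100, 16))
print(representatives(X, n_classes=5))
```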

Our approach goes beyond classical active learning, where the ILA asks for labels for samples chosen from a pool of unlabeled data. We empower the ILA with the ability to ask for the label of any complete or partial structure in the domain; that is, the ILA will be able to generate its own samples.
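The contrast can be illustrated with a small Python sketch; the interfaces (`uncertainty`, `teacher`) are hypothetical stand-ins for a model's confidence score and a human annotator. Pool-based selection can only rank existing data, while membership-query learning lets the learner synthesize any string, including partial structures (prefixes), and ask for its label.

```python
# A minimal sketch (assumed interfaces, not the project's actual API)
# contrasting pool-based active learning with membership-query learning,
# where the learner generates its own query strings.
import itertools
from typing import Callable, Iterable

Alphabet = tuple[str, ...]

def pool_based_queries(pool: Iterable[str],
                       uncertainty: Callable[[str], float],
                       budget: int) -> list[str]:
    """Classical active learning: rank a fixed unlabeled pool by model
    uncertainty and ask the teacher to label the top `budget` samples."""
    return sorted(pool, key=uncertainty, reverse=True)[:budget]

def membership_queries(alphabet: Alphabet,
                       teacher: Callable[[str], bool],
                       max_len: int) -> dict[str, bool]:
    """Query learning: the learner *generates* every string up to `max_len`
    (including partial structures, i.e. prefixes) and queries the teacher
    directly; no pre-existing pool is needed."""
    labels: dict[str, bool] = {}
    for n in range(max_len + 1):
        for symbols in itertools.product(alphabet, repeat=n):
            s = "".join(symbols)
            labels[s] = teacher(s)  # membership query to the human/oracle
    return labels

# Toy usage: the "teacher" accepts strings with an even number of 'a's.
teacher = lambda s: s.count("a") % 2 == 0
print(membership_queries(("a", "b"), teacher, max_len=2))
```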

We work within the framework of spectral learning of weighted automata and grammars, and we use ideas from query learning. A key idea is to reduce interactive learning of CLSMs to a form of interactive low-rank matrix completion. Our concrete goals are to: (1) develop ILAs for CLSMs based on spectral learning techniques; and (2) investigate optimal strategies for leveraging human feedback, taking into account both what is optimal for the ILA and what is easy for the teacher.
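As background for the spectral framework, the following sketch applies the standard Hankel-matrix recipe for spectral learning of weighted finite automata (a truncated SVD of the Hankel block, then recovery of the operators A_sigma and the boundary vectors alpha and beta) to a toy rank-2 function. This is textbook spectral learning, not the project's interactive algorithm; in the interactive setting, the entries of H would be filled in by teacher queries, which is where low-rank matrix completion enters.

```python
# A minimal sketch (toy data, standard textbook recipe) of spectral learning
# of a weighted finite automaton (WFA) from a Hankel matrix. The target
# f(x) = 1 iff x has an even number of 'a's; its Hankel matrix has rank 2,
# so a rank-2 factorization recovers a 2-state WFA.
import numpy as np

def f(x: str) -> float:
    return float(x.count("a") % 2 == 0)

prefixes = ["", "a"]          # rows of the Hankel block
suffixes = ["", "a"]          # columns of the Hankel block
alphabet = ["a", "b"]

# Hankel matrix H[i, j] = f(prefix_i + suffix_j), plus one shifted block per symbol.
H = np.array([[f(p + s) for s in suffixes] for p in prefixes])
H_sigma = {c: np.array([[f(p + c + s) for s in suffixes] for p in prefixes])
           for c in alphabet}

# Rank factorization via truncated SVD: H = (U S) V^T.
rank = np.linalg.matrix_rank(H)
U, s, Vt = np.linalg.svd(H)
U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
pinv = np.diag(1.0 / s) @ U.T            # pseudo-inverse of (U S)

# Recovered WFA parameters: f(x1..xk) = alpha^T A_{x1} ... A_{xk} beta.
A = {c: pinv @ H_sigma[c] @ Vt.T for c in alphabet}
alpha = Vt @ np.array([f(s_) for s_ in suffixes])   # empty-prefix row of H
beta = pinv @ np.array([f(p) for p in prefixes])    # empty-suffix column of H

def wfa_eval(x: str) -> float:
    v = alpha.copy()
    for c in x:
        v = v @ A[c]          # accumulate alpha^T A_{x1} ... A_{xi} as a row vector
    return float(v @ beta)

for x in ["", "a", "aa", "ab", "aba"]:
    print(x or "eps", round(wfa_eval(x), 3), f(x))
```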

We will experiment with NLU tasks of increasing complexity, from sequence and tree classification to parsing problems where the outputs are trees.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/853459
Start date: 01-03-2020
End date: 28-02-2026
Total budget: 1 499 375,00 Euro (public funding: 1 499 375,00 Euro)
Cordis data


Status

SIGNED

Call topic

ERC-2019-STG

Update Date

27-04-2024
Structured mapping
Horizon 2020
  H2020-EU.1. EXCELLENT SCIENCE
    H2020-EU.1.1. EXCELLENT SCIENCE - European Research Council (ERC)
      ERC-2019
        ERC-2019-STG