Summary
This project aims to overcome the major hurdles that prevent current state-of-the-art models for natural language generation (NLG) from real-world deployment.
While deep learning and neural networks have brought considerable progress in many areas of natural language processing, neural approaches to NLG remain confined to experimental use, while production NLG systems are handcrafted. The reason for this is that despite the very natural and fluent outputs of recent neural systems, neural NLG still has major drawbacks: (1) the systems' behavior is not transparent and is hard to control (the internal representation is implicit), which leads to incorrect or even harmful outputs; (2) the models require a lot of training data and processing power, do not generalize well, and are mostly English-only. On the other hand, handcrafted models are safe, transparent and fast, but produce less fluent outputs and are expensive to adapt to new languages and domains (topics). As a result, the usefulness of NLG models in general is limited. In addition, current methods for automatic evaluation of NLG outputs are unreliable, hampering system development.
The main aims of this project, directly addressing the above drawbacks, are:
1) Develop new approaches to NLG that combine neural models with explicit symbolic semantic representations, thus allowing greater control over the outputs and explicit logical inferences over the data.
2) Introduce approaches to model compression and adaptation to make models easily portable across domains and languages.
3) Develop reliable neural-symbolic approaches for evaluation of NLG systems.
We will test our approaches on multiple NLG applications—data-to-text generation (e.g., weather or sports reports), summarization, and dialogue response generation. For example, our approach will make it possible to deploy a new data reporting system for a given domain based on a few dozen example input-output pairs, compared to thousands needed by current methods.
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/101039303
Start date: 01-04-2022
End date: 31-03-2027
Total budget / Public funding: 1 420 375,00 Euro / 1 420 375,00 Euro
Cordis data
Status: SIGNED
Call topic: ERC-2021-STG
Update Date: 09-02-2023