Summary
Why is machine translation between English and Portuguese significantly better than machine translation between Dutch and Spanish? Why do speech recognizers work better in German than in Finnish? In both cases, the main problem is the insufficient amount of labelled training data. Although the world is multimodal and highly multilingual, speech and language technology is not keeping up with demand across all languages. We need better learning methods that exploit the advances of a few modalities and languages for the benefit of others. This proposal addresses the low-resource problem and the cost of the current approach to multilingual machine translation, which requires a separate system for every translation pair.
LUNAR proposes to jointly learn a multilingual and multimodal model built upon a lifelong universal language representation. This model will compensate for the lack of supervised data and significantly increase the system's capacity to generalize from training data, given the unconventionally varied resources employed. It will also reduce the number of required translation systems from quadratic to linear: for N languages, pairwise translation needs on the order of N(N-1) systems, whereas translating through a shared representation needs only one encoder and one decoder per language, roughly 2N components. Finally, it will allow incremental adaptation to new languages and data.
The high-risk/high-gain aspect lies in automatically training a universal language representation with specifically designed deep learning algorithms. LUNAR will employ an encoder-decoder architecture. The encoder produces an abstraction of the input by reducing its dimensionality; this abstraction becomes the proposed universal language representation, from which the decoder generates the output. The internal encoder-decoder architecture will be designed for learning the universal language representation, which will be explicitly integrated as an objective of the architecture.
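To make the idea concrete, here is a minimal, hypothetical PyTorch sketch, not the project's actual design: a single encoder shared across languages compresses any input sentence into one fixed-size vector (standing in for the universal language representation), per-language decoders generate from it, and an auxiliary loss over parallel sentences trains the representation space directly, so the representation appears as an explicit term in the objective. All module names, layer sizes, the 0.1 loss weight, and the random toy data are assumptions for illustration.

```python
# Sketch only: shared encoder -> universal vector -> per-language decoders,
# with a representation-alignment term added to the training objective.
import torch
import torch.nn as nn

VOCAB, EMB, HID, UNIV = 1000, 64, 128, 256  # toy sizes, not real values

class Encoder(nn.Module):
    """Shared across languages: token ids -> one universal vector."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.to_univ = nn.Linear(HID, UNIV)  # dimensionality bottleneck

    def forward(self, tokens):                    # tokens: (B, T)
        _, h = self.rnn(self.emb(tokens))         # h: (1, B, HID)
        return self.to_univ(h.squeeze(0))         # (B, UNIV)

class Decoder(nn.Module):
    """One per target language: universal vector -> token logits."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, UNIV, batch_first=True)
        self.out = nn.Linear(UNIV, VOCAB)

    def forward(self, univ, tgt_in):              # teacher forcing
        h0 = univ.unsqueeze(0)                    # (1, B, UNIV)
        o, _ = self.rnn(self.emb(tgt_in), h0)
        return self.out(o)                        # (B, T, VOCAB)

encoder = Encoder()                                # shared component
decoders = nn.ModuleDict({"en": Decoder(), "pt": Decoder()})
params = list(encoder.parameters()) + list(decoders.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
xent = nn.CrossEntropyLoss()

# One training step on a toy parallel pair (random ids stand in for data).
src_pt = torch.randint(0, VOCAB, (8, 12))          # Portuguese source
src_en = torch.randint(0, VOCAB, (8, 10))          # English reference
tgt_in, tgt_out = src_en[:, :-1], src_en[:, 1:]

u_src = encoder(src_pt)
u_ref = encoder(src_en)                            # same encoder, other language
logits = decoders["en"](u_src, tgt_in)

translation_loss = xent(logits.reshape(-1, VOCAB), tgt_out.reshape(-1))
# Representation objective: parallel sentences should map to nearby points.
alignment_loss = nn.functional.mse_loss(u_src, u_ref)
loss = translation_loss + 0.1 * alignment_loss     # 0.1 is an assumed weight

opt.zero_grad()
loss.backward()
opt.step()
```

Under this layout, supporting a new target language means adding one decoder and training it against the shared representation, rather than building a separate system against every existing language, which is one way to realize the quadratic-to-linear saving described above.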
LUNAR will impact multidisciplinary communities of specialists in computer science, mathematics, engineering and linguistics who work on natural language understanding and speech processing applications.
More information & hyperlinks
Web resources: | https://cordis.europa.eu/project/id/947657 |
Start date: | 01-12-2020 |
End date: | 30-11-2025 |
Total budget - Public funding: | 1 498 723,00 Euro - 1 498 723,00 Euro |
Cordis data
Status: | TERMINATED |
Call topic: | ERC-2020-STG |
Update Date: | 27-04-2024 |