DYMO | Dynamic dialogue modelling

Summary
With the prevalence of information technology in our daily lives, our ability to interact with machines in increasingly simplified and more human-like ways has become paramount. Information is becoming ever more abundant, but our access to it is limited, not least by technological constraints. Spoken dialogue systems address this issue by providing an intelligent speech interface that facilitates swift, human-like acquisition of information.

The advantages of speech interfaces are already evident from the rise of personal assistants such as Siri, Google Assistant, Cortana or Amazon Alexa. In these systems, however, the user is limited to a simple query, and the systems attempt to provide an answer within one or two turns of dialogue. To date, significant parts of these systems are rule-based and do not readily scale to changes in the domain of operation. Furthermore, rule-based systems can be brittle when speech recognition errors occur.
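
To make the brittleness point concrete: where a rule-based system commits to the single top speech-recognition hypothesis, a statistical dialogue system can instead maintain a belief distribution over user goals and update it from the full ASR N-best list, so a single misrecognition does not derail the dialogue. The Python sketch below is purely illustrative; the goal names, confidence scores and confusion probability are invented for the example and are not code or data from the project.

```python
# Illustrative sketch: belief tracking over ASR N-best hypotheses.
# A rule-based system acts on the top hypothesis alone; a statistical
# tracker keeps a distribution over user goals, so one misrecognition
# does not immediately derail the dialogue.

def update_belief(belief, nbest, confusion_prob=0.1):
    """Bayesian update of P(goal) given an N-best list of
    (hypothesised_goal, asr_confidence) pairs."""
    new_belief = {}
    for goal, prior in belief.items():
        # Likelihood of the observed N-best list under this goal:
        # matching hypotheses contribute their confidence, others
        # only a small confusion probability.
        likelihood = sum(
            conf if hyp == goal else confusion_prob * conf
            for hyp, conf in nbest
        )
        new_belief[goal] = prior * likelihood
    total = sum(new_belief.values()) or 1.0
    return {g: p / total for g, p in new_belief.items()}

belief = {"book_flight": 0.5, "book_hotel": 0.5}
# Noisy ASR turn: the wrong goal tops the list, but the right one survives.
belief = update_belief(belief, [("book_hotel", 0.6), ("book_flight", 0.4)])
# A second, clearer turn recovers the correct goal.
belief = update_belief(belief, [("book_flight", 0.9)])
print(belief)  # probability mass shifts back to "book_flight"
```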

The vision of this project is to develop novel dialogue models that provide natural human-computer interaction beyond simple information-seeking dialogues and that continuously evolve as they are used, exploiting both dialogue and non-dialogue data. Building such robust and intelligent spoken dialogue systems poses serious challenges in artificial intelligence and machine learning. The project will tackle four bottleneck areas that require fundamental research: automated knowledge acquisition, optimisation of complex behaviour, realistic user models and sentiment awareness. Taken together, the proposed solutions have the potential to transform the way we access information in areas as diverse as e-commerce, government, healthcare and education.
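
Of these four areas, "optimisation of complex behaviour" is commonly cast in the dialogue-systems literature as reinforcement learning: a policy maps the current belief state to a system action and is improved from a dialogue-level reward such as task success minus dialogue length. The sketch below shows a generic tabular Q-learning update in Python; the states, actions and reward are hypothetical, standing in for the general idea rather than the project's actual models.

```python
# Illustrative sketch (not the project's method): dialogue policy
# optimisation via tabular Q-learning over summarised belief states.

import random
from collections import defaultdict

q = defaultdict(float)                      # Q(state, action) estimates
actions = ["request_slot", "confirm", "inform", "bye"]
alpha, gamma, epsilon = 0.1, 0.99, 0.1      # learning rate, discount, exploration

def choose_action(state):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def q_update(state, action, reward, next_state, done):
    """One-step Q-learning update after observing a dialogue turn."""
    target = reward if done else reward + gamma * max(
        q[(next_state, a)] for a in actions
    )
    q[(state, action)] += alpha * (target - q[(state, action)])

# One illustrative turn: in state "unfilled_slots", the chosen action
# led to state "slots_filled" at the cost of a small turn penalty.
a = choose_action("unfilled_slots")
q_update("unfilled_slots", a, reward=-1.0,
         next_state="slots_filled", done=False)
```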
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/804636
Start date: 01-09-2019
End date: 31-01-2026
Total budget: 1 499 956,00 Euro (public funding: 1 499 956,00 Euro)
Cordis data

Status: SIGNED
Call topic: ERC-2018-STG
Update date: 27-04-2024
Geographical location(s)