Summary
Deep neural networks have brought lasting change to the fields of natural language processing and computer vision. More recently, much effort has been directed towards devising machine learning models that bridge the gap between vision and language (V&L). In IMAGINE, I propose to take this one step further and integrate world knowledge into natural language generation models for V&L. Such knowledge is easily taken for granted, yet it is necessary for even simple human-like reasoning tasks. For example, to properly answer the question “What are the children doing?” about an image showing parents and children playing in a park, a model should be able to (a) tell the children from the parents (e.g. children are considerably shorter), and (b) infer that, because they are in a park, laughing, and with other children, they are very likely playing.
Much of this knowledge is already available in large-scale, machine-friendly multi-modal knowledge bases (KBs), and I will leverage these to improve multiple natural language generation (NLG) tasks that require human-like reasoning abilities. I will investigate (i) methods to learn representations for KBs that incorporate both text and images, and (ii) methods to incorporate these KB representations into multiple NLG tasks that reason over V&L. In (i), I will research how to train a model that learns KB representations (e.g. learning that children are young people and likely do not work) jointly with the component that understands the image content (e.g. identifies the people, animals, objects and events in an image). In (ii), I will investigate how to jointly train NLG models for multiple tasks together with KB entity linking, so that these models benefit from one another by sharing parameters (e.g. a model that answers questions about an image benefits from the training data of a model that describes the contents of an image), and also benefit from the world-knowledge representations in the KB (a minimal sketch of this idea follows).
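To make the joint-training idea in (ii) concrete, here is a minimal PyTorch-style sketch. It is not the project's actual architecture; all module names, dimensions, and the toy training loop are illustrative assumptions. It shows two generation heads (question answering and captioning) sharing an image encoder and a table of KB entity embeddings, so that the training signal from either task updates the same world-knowledge parameters.

```python
# Hypothetical sketch of multi-task V&L training with shared KB entity embeddings.
import torch
import torch.nn as nn


class SharedVLBackbone(nn.Module):
    """Shared image encoder + KB entity embeddings used by every task head."""

    def __init__(self, num_entities: int, dim: int = 256):
        super().__init__()
        # Toy stand-in for a CNN/ViT image encoder: maps image features to `dim`.
        self.image_encoder = nn.Sequential(nn.Linear(2048, dim), nn.ReLU())
        # World-knowledge side: one embedding per KB entity (e.g. "child", "park").
        self.entity_embeddings = nn.Embedding(num_entities, dim)

    def forward(self, image_feats: torch.Tensor, entity_ids: torch.Tensor):
        img = self.image_encoder(image_feats)                 # (B, dim)
        kb = self.entity_embeddings(entity_ids).mean(dim=1)   # (B, dim), pooled linked entities
        return torch.cat([img, kb], dim=-1)                   # fused V&L + KB representation


class TaskHead(nn.Module):
    """One lightweight head per NLG task (e.g. VQA answering, captioning)."""

    def __init__(self, dim: int, vocab_size: int):
        super().__init__()
        self.out = nn.Linear(2 * dim, vocab_size)

    def forward(self, fused: torch.Tensor):
        return self.out(fused)  # logits over the task's output vocabulary


# Joint training loop: losses from both tasks update the shared backbone,
# so captioning data also improves the representations used for VQA.
backbone = SharedVLBackbone(num_entities=1000)
vqa_head, caption_head = TaskHead(256, 500), TaskHead(256, 500)
params = list(backbone.parameters()) + list(vqa_head.parameters()) + list(caption_head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for _ in range(3):  # dummy batches standing in for real VQA / captioning data
    image_feats = torch.randn(4, 2048)
    entity_ids = torch.randint(0, 1000, (4, 5))
    vqa_target = torch.randint(0, 500, (4,))
    cap_target = torch.randint(0, 500, (4,))

    fused = backbone(image_feats, entity_ids)
    loss = loss_fn(vqa_head(fused), vqa_target) + loss_fn(caption_head(fused), cap_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```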
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/838188
Start date: 14-06-2019
End date: 23-04-2022
Total budget - Public funding: 232 393,92 Euro - 232 393,00 Euro
Cordis data
Status: CLOSED
Call topic: MSCA-IF-2018
Update date: 28-04-2024
Geographical location(s)