Summary
"While language is a natural way to interact with artificial characters
or agents in video games, communication with agents currently tends to
be limited to menu systems. To achieve smooth linguistic
communication, utterances need to be grounded in the situation in
which they occur. That is, the meanings of utterances must be learned
from observing their use in some naturally occurring perceptual
context. Recent years have seen much progress in the development of
visually- or auditorily-grounded language understanding using novel
machine learning techniques such as deep learning. At the same time,
companies like Google DeepMind have introduced deep learning models
that can learn to play games at super-human levels. We propose to take
this research to the next step by grounding natural language in video
games.
Grounding natural language in video games yields two main
benefits. The first benefit is commercial in nature: with the global
market for video games expected to reach $100 billion by 2017, there
is clearly a large demand for more sophisticated interaction with
in-game agents. The second benefit is scientific: video games are a
natural way to explore artificial intelligence techniques in a
"simulated" world that is easier to understand computationally than the
extremely complicated "real" world.
The current project will explore natural language grounding in a small
number of appropriate games. Once we are capable of grounding natural
language in these games, we can translate utterances into
straightforward actions for artificial agents. An example might be
telling your team members to follow you, to take the left flank, or to
duck when they are being shot at. Given the recent developments in
machine learning and grounded language understanding, we believe that
now is the perfect moment to explore these possibilities further.
"
More information & hyperlinks
Web resources: https://cordis.europa.eu/project/id/693579
Start date: 01-10-2016
End date: 31-03-2018
Total budget - Public funding: 149 867,00 Euro - 149 867,00 Euro
Cordis data
Status: CLOSED
Call topic: ERC-PoC-2015
Update Date: 27-04-2024
Geographical location(s)