Skills bootstrapping toolchain

Summary
We will deliver tools that simplify the development of sophisticated skills by analyzing and learning from sample interactions between humans in the application domain. Creating expressive conversational content for a robot requires orchestrating speech, eye and head movements, facial gestures, and other parameters to communicate complex, believable behaviors. Because this is difficult to achieve through explicit programming, we will implement tooling that learns from human interaction.

A first tool will analyze head pose, eye gaze, word choice, and other parameters, as well as the dynamics of the interactions, to automatically construct the basic skeleton of a skill for a particular robot. The developer can then extend and tune this skill skeleton as needed.

A second tool will simplify gesture creation through real-time capture of a human face; the captured data can be used directly in our content creation software, where it can be further tuned and then applied in skill development.

A third tool will speed up and improve the quality of dialog flow definition, based on observations and learnings from sets of real dialogs drawn from the skill's sample domain.
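
To illustrate the idea behind the first tool, the sketch below derives default behavior parameters (a minimal "skill skeleton") from annotated interaction samples. The annotation schema, field names, and the two output parameters are all hypothetical, chosen only to show the shape of the analysis; a real implementation would consume tracked head-pose and gaze data and emit a richer, editable skill definition.

```python
from collections import Counter

def build_skill_skeleton(samples):
    """Derive default behavior parameters from annotated interaction samples.

    `samples` is a list of per-frame annotations from recorded human
    conversations: whether the tracked person was speaking, where they were
    gazing, and any head gesture. (Schema is illustrative, not a real format.)
    """
    listening = [s for s in samples if not s["speaking"]]
    gaze_counts = Counter(s["gaze"] for s in listening)
    nods = sum(1 for s in listening if s["gesture"] == "nod")
    total = max(len(listening), 1)
    return {
        "listening": {
            # fraction of listening time spent looking at the partner
            "gaze_at_partner": gaze_counts["partner"] / total,
            # how often an acknowledging nod was observed
            "nod_rate": nods / total,
        },
    }

# Tiny hand-made sample of annotated frames.
samples = [
    {"speaking": False, "gaze": "partner", "gesture": "nod"},
    {"speaking": False, "gaze": "partner", "gesture": "none"},
    {"speaking": False, "gaze": "away", "gesture": "none"},
    {"speaking": True, "gaze": "away", "gesture": "none"},
]
skeleton = build_skill_skeleton(samples)
```

The resulting skeleton is deliberately plain data, so a developer can inspect, extend, and tune it before it is turned into runnable robot behavior.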