Computing Reviews

Learning from human-robot interactions in modeled scenes
Murnane M., Breitmeyer M., Ferraro F., Matuszek C., Engel D. SIGGRAPH 2019 (ACM SIGGRAPH 2019 Posters, Los Angeles, CA, Jul 28, 2019), 1-2, 2019. Type: Proceedings
Date Reviewed: 09/02/20

Robots are usually checked and tested in virtual environments before being deployed in the real world. In this poster, however, the robot itself is virtual, and the users interacting with it are also rendered in virtual reality.

The authors hope to improve human-robot interaction through learning, and so the virtual game is a way to generate and collect large quantities of sensor data and sequences of human actions. This data could then be fed to learning algorithms, which cannot work in the absence of data, and such data is scarce for most human-robot interaction scenarios. Moreover, adding a microphone and speech recognition would aid in collecting training data for grounded natural language systems.
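To make the data-collection idea concrete, a minimal sketch follows of how each simulated timestep might pair the virtual robot's sensor readings with the observed human action. The poster does not specify any format, so every name, field, and file path here is purely hypothetical illustration:

    import json
    import time
    from dataclasses import dataclass, asdict

    # Hypothetical record pairing a simulated robot's sensor snapshot
    # with the human action observed at the same timestep; all names
    # are illustrative, not taken from the poster.
    @dataclass
    class InteractionFrame:
        timestamp: float
        robot_pose: tuple        # simulated robot position (x, y, z)
        depth_image_path: str    # path to a rendered depth frame
        human_action: str        # e.g., "point_at_object", "utterance"
        transcript: str          # speech-recognition output, if any

    def log_frame(frame: InteractionFrame, out):
        # Append one frame as a JSON line; a stream of these forms the
        # kind of dataset a learning algorithm could later consume.
        out.write(json.dumps(asdict(frame)) + "\n")

    if __name__ == "__main__":
        with open("session_001.jsonl", "w") as out:
            log_frame(InteractionFrame(
                timestamp=time.time(),
                robot_pose=(0.0, 0.0, 1.2),
                depth_image_path="frames/000001_depth.png",
                human_action="utterance",
                transcript="pick up the red block",
            ), out)

A stream of such frames, recorded over many user sessions, is the sort of paired perception-and-action corpus the poster proposes to generate at scale.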

The interesting point is that the described system could enable learning in new domains. However, no dataset is yet available, and the learning itself has not yet been demonstrated.

Robotics researchers can build on similar ideas to create datasets, and cognitive scientists could evaluate whether the simulated environment elicits the same behaviors as real life.

Reviewer: G. Gini. Review #: CR147051 (2012-0298)
