Multimodal control for human-robot cooperation (IROS’13)
For intuitive human-robot collaboration, the robot must quickly adapt to human behavior. To this end, we propose a multimodal sensor-based control framework that enables a robot to recognize human intention and adapt its control strategy accordingly. Our approach is marker-less, relies on a Kinect and an on-board camera, and is based on a unified task formalism. We validate it in a mock-up industrial scenario in which a human and a robot collaborate to insert screws in a flank.
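The intention-driven strategy switching described in the abstract could be sketched, in a highly simplified form, as follows. The sensor cues, thresholds, mode names, and function signatures are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# Hypothetical sketch of intention-based control-mode switching.
# All cue names and mode labels are illustrative, not from the paper.

def recognize_intention(kinect_cues, camera_cues):
    """Classify the operator's intention from multimodal input
    (e.g. Kinect skeleton cues and on-board camera features)."""
    if kinect_cues.get("hand_near_flank", False):
        return "handover"
    if camera_cues.get("screw_visible", False):
        return "insertion"
    return "idle"

def select_controller(intention):
    """Map the recognized intention to a control strategy."""
    return {
        "handover": "compliant_follow",   # yield to the human's motion
        "insertion": "visual_servoing",   # track the screw/flank visually
        "idle": "hold_position",          # wait safely
    }[intention]

mode = select_controller(
    recognize_intention({"hand_near_flank": True}, {"screw_visible": False})
)
# mode == "compliant_follow"
```

The key idea this sketch illustrates is that perception (intention recognition) and action (controller selection) are decoupled, so the control strategy can change as soon as the recognized human behavior changes.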
- A. Cherubini, R. Passama, A. Meline, and P. Fraisse. Multimodal control for human-robot cooperation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2013.
[Bibtex]
@INPROCEEDINGS{CPM2013,
  author    = {Cherubini, A. and Passama, R. and Meline, A. and Fraisse, P.},
  booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  title     = {Multimodal control for human-robot cooperation},
  year      = {2013},
  month     = {nov},
  keywords  = {iros, Human-Robot Interaction}
}