{"id":499,"date":"2013-08-20T15:52:10","date_gmt":"2013-08-20T13:52:10","guid":{"rendered":"http:\/\/janela2.lirmm.fr\/~fraisse\/?p=499"},"modified":"2014-05-03T15:44:53","modified_gmt":"2014-05-03T13:44:53","slug":"iros-2013-2","status":"publish","type":"post","link":"https:\/\/www.lirmm.fr\/~fraisse\/archives\/499","title":{"rendered":"IROS 2013"},"content":{"rendered":"<p style=\"text-align: justify;\"><strong><span style=\"color: #333333; font-family: Arial; font-size: 20px; line-height: 16px;\">Multimodal control for human-robot cooperation <\/span><\/strong>(IROS&#8217;13)<\/p>\n<p><span style=\"text-align: justify;\"><span style=\"font-family: Arial; font-size: 16px; text-align: justify;\"> For intuitive human-robot collaboration, the robot\u00a0must quickly adapt to the human behavior. To this end, we propose a multimodal sensor-based control framework, enabling a robot to recognize human intention, and consequently adapt its control strategy. Our approach is marker-less, relies on a Kinect and on an on-board camera, and is based on a unified task formalism. Moreover, we validate it in a mock-up industrial scenario, where human and robot must collaborate to insert screws in a flank. <\/p>\n<ul class=\"papercite_bibliography\">\n<li>                      A. Cherubini, R. Passama, A. Meline, and P. Fraisse. Multimodal control for human-robot cooperation. In <em>IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)<\/em>, nov 2013. <br \/>    <a href=\"javascript:void(0)\" id=\"papercite_0\" class=\"papercite_toggle\">[Bibtex]<\/a>\n<div class=\"papercite_bibtex\" id=\"papercite_0_block\">\n<pre><code class=\"tex bibtex\">@INPROCEEDINGS{CPM2013,\nauthor={Cherubini, A. and Passama, R. and Meline, A. and Fraisse, P.},\nbooktitle={IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS)},\ntitle={Multimodal control for human-robot cooperation},\nyear={2013},\nmonth={nov},\nvolume={},\nnumber={},\npages={},\nkeywords={iros, Human-Robot Interaction},\ndoi={},\nISSN={}}<\/code><\/pre>\n<\/div>\n<\/li>\n<\/ul>\n<p><\/span><iframe loading=\"lazy\" src=\"http:\/\/www.youtube.com\/embed\/1Ei8uS9hgnQ?feature=player_detailpage\" width=\"600\" height=\"400\" frameborder=\"0\"><\/iframe><br \/>\n<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Multimodal control for human-robot cooperation (IROS&#8217;13) For intuitive human-robot collaboration, the robot\u00a0must quickly adapt to the human behavior. 
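To give a feel for the mode-switching idea described in the abstract, here is a minimal Python sketch, not the authors' implementation: the intention labels, thresholds, sensor fields, and control-mode names below are all hypothetical placeholders for the perception and control stages of such a framework.

# Minimal sketch (assumed names and thresholds, not the paper's code):
# recognize a human intention from multimodal observations, then pick the
# corresponding control mode for the collaborative screw-insertion task.
from dataclasses import dataclass
from enum import Enum, auto


class Intention(Enum):
    HAND_OVER = auto()  # human presents a screw to the robot (assumed label)
    APPROACH = auto()   # human reaches toward the shared workspace (assumed label)
    RETREAT = auto()    # human moves away from the workpiece (assumed label)


@dataclass
class Observation:
    hand_distance_m: float  # hand-to-flank distance from the Kinect (assumed field)
    object_in_view: bool    # screw detected by the on-board camera (assumed field)


def recognize_intention(obs: Observation) -> Intention:
    """Toy stand-in for the marker-less intention-recognition stage."""
    if obs.hand_distance_m < 0.3 and obs.object_in_view:
        return Intention.HAND_OVER
    if obs.hand_distance_m < 0.6:
        return Intention.APPROACH
    return Intention.RETREAT


def select_control_mode(intention: Intention) -> str:
    """Map the recognized intention to a control strategy (illustrative names)."""
    return {
        Intention.HAND_OVER: "visual_servoing_on_board_camera",
        Intention.APPROACH: "compliant_wait_pose",
        Intention.RETREAT: "autonomous_screw_insertion",
    }[intention]


if __name__ == "__main__":
    obs = Observation(hand_distance_m=0.25, object_in_view=True)
    print(select_control_mode(recognize_intention(obs)))
    # -> visual_servoing_on_board_camera

In the actual framework the switching would of course be driven by the unified task formalism and real Kinect/camera processing; the sketch only illustrates the recognize-then-adapt control loop.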