See and Touch: 1st Workshop on Multimodal Sensor-Based Robot Control for HRI and Soft Manipulation

 

Organizers: A. Cherubini, Y. Mezouar, D. Navarro-Alarcon, M. Prats, J. A. Corrales Ramon

Objectives

Recent technological developments in bio-inspired sensors have made them affordable and lightweight, easing their use on robots, in particular on anthropomorphic ones (e.g., humanoids and dexterous hands). Such sensors include RGB-D cameras, tactile skins, force/torque transducers, and capacitive proximity sensors.

Historically, heterogeneous sensor data were fed to fusion algorithms (e.g., Kalman or Bayesian methods) to provide state estimates for modeling the environment. However, since these sensors generally measure different physical phenomena, it is often preferable to use them directly in the low-level servo controller, rather than to apply multi-sensory fusion or to design complex state machines. This idea, originally proposed in the hybrid position-force control paradigm, brings new challenges to controller design when extended to feedback from multiple sensors, e.g., challenges related to the sensors' characteristics (synchronization, hybrid control, task compatibility, etc.) or to the task representation.
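To make the idea concrete, here is a minimal sketch of a hybrid position-force control law in the spirit of this paradigm (as introduced by Raibert and Craig): a diagonal selection matrix routes each Cartesian axis to the feedback signal that constrains it. This is purely our illustration, not a reference implementation; the selection matrix, gains, and signal dimensions are assumed values.

    import numpy as np

    # Illustrative hybrid position/force controller for a 3-DOF Cartesian task.
    # S selects the position-controlled axes (1); the rest are force-controlled (0).
    S = np.diag([1.0, 1.0, 0.0])   # x, y: position control; z: force control
    Kp, Kf = 1.5, 0.02             # proportional gains (assumed values)

    def hybrid_control(x, x_des, f, f_des):
        """Blend two feedback loops into one Cartesian velocity command."""
        v_pos = Kp * (x_des - x)   # position error -> velocity
        v_frc = Kf * (f_des - f)   # force error -> velocity (compliant behavior)
        return S @ v_pos + (np.eye(3) - S) @ v_frc

    # Example: track a point in x-y while regulating a 5 N contact force along z.
    v_cmd = hybrid_control(x=np.array([0.0, 0.1, 0.3]),
                           x_des=np.array([0.2, 0.1, 0.3]),
                           f=np.array([0.0, 0.0, 2.0]),
                           f_des=np.array([0.0, 0.0, 5.0]))

Extending such a law with further modalities (e.g., a proximity-driven term) is precisely where the synchronization and task-compatibility issues mentioned above arise.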

However, this direct perception-to-action approach best reflects many of our cognitive processes (which directly link perception and action), and it is fundamental in many innovative robotic applications, such as soft material manipulation and human-robot interaction. Whole-body control is another field of research that would greatly profit from the discussed methods: multiple tasks (manipulation, self-collision avoidance, etc.) can be realized simultaneously by exploiting the diverse sensing capabilities of the robot body.

The purpose of this workshop is to bring together researchers with common interests in the area of multimodal servo control based on a variety of feedback signals, including vision (2D and 3D), touch (haptics), position, force, and proximity (from capacitive measurements).

Content

Multimodal robot control, based on the concurrent use of various sensors directly at the control level, is crucial in many applications. For instance, human-robot interaction for collaborative tasks often relies on force/tactile feedback to transmit the user's intention to the robot. However, the robot should also be capable of recognizing this intention without direct contact between the two. A possible solution comes from visual data, which should then be combined with haptics to obtain the best result. This is of particular interest for whole-body control of humanoid robots, since their actuators and sensors are generally bio-inspired to facilitate interaction with humans.

The automatic manipulation of soft materials (e.g., in the food industry) represents a second important case study. The natural evolution of recent work on vision-based servoing of soft objects is the integration of haptic and force feedback.

For all these reasons, we believe that adaptive sensor-based methods directly linking perception to action, as in the above-mentioned approaches, can provide better solutions in unpredictable scenarios than traditional planning and model-based techniques, which require a priori models of the environment.

We propose a half-day workshop to foster active collaboration and discuss formal methods for sensor-based control. The invited speakers will share their experience and give insight into the evolution and current status of multimodal control. The workshop will also be open to paper submissions, and the final schedule will be adapted depending on the quantity and quality of the submissions. We will organize a poster session for the submitted papers, to ease interaction and discussion between participants.

Submission information

Prospective participants are required to submit an extended abstract (maximum 2 pages in length); accompanying videos are also welcome!
All submissions will be reviewed using a single-blind review process.
Accepted contributions will be presented during the workshop as posters.
Submissions must be sent as a PDF, following the IEEE two-column conference style, to:

cherubini_AT_lirmm_DOT_fr

indicating [IROS 2015 Workshop] in the e-mail subject.
After the workshop, the organising committee will consider publishing extended versions of the best papers in a journal special issue or an edited book.

Submission Deadline: August 27
Notification of acceptance: September 2
Camera-ready deadline: September 4
Workshop day: September 28

Program Committee

João Bimbo, King's College London, United Kingdom
Jeannette Bohg, Max-Planck Institute for Intelligent Systems, Tübingen, Germany
Gianni Borghesan, University of Leuven, Belgium
Giorgio Cannata, Università degli Studi di Genova, Italy
Eris Chinellato, University of Leeds, United Kingdom
Juan Antonio Corrales Ramón, Institut Pascal, Clermont-Ferrand, France
Joris De Schutter, University of Leuven, Belgium
Robert Haschke, Universität Bielefeld, Germany
Björn Hein, Karlsruhe Institute of Technology, Germany
Norman Hendrich, TAMS, Universität Hamburg, Germany
Hesheng Wang, Shanghai Jiao Tong University, China
Olivier Kermorgant, Université de Strasbourg, France
Jun Kinugawa, Tohoku University, Japan
James Kuffner, Google Research, USA
Zheng Li, The Chinese University of Hong Kong, China
Philip Edward Long, IRT Jules Verne, France
Philipp Mittendorfer, TUM, Germany
Benjamin Navarro, Université d'Orléans / Université de Montpellier, France
Stefan Escaida Navarro, Karlsruhe Institute of Technology, Germany
Véronique Perdereau, ISIR, Paris, France
Mohamed Sorour, PSA / Université de Montpellier, France
 

Program

14:00 - 14:10 Opening
14:10 - 14:40 Joris De Schutter - A constraint-based approach for multi-modal servo-control
First, a short overview is given of 40 years of (multi-)sensor-based robot control at KU Leuven. Starting from the hybrid force/position control approach, applications were extended to include model information and multi-modal sensor feedback, such as distance control, collision avoidance, and visual servoing, demonstrated on various robotic platforms within a constraint-based task specification and control framework. In the second part, a generic approach is presented to model the underlying geometric sensor environment, derive the appropriate constraint specification, and estimate the uncertainties in this model from the sensor measurements. A low-level control approach then complies with all the multi-modal constraints simultaneously, in an optimal way. Conflicting constraints are handled by a mid-level state controller, using priorities and/or constraint weighting.
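To give a flavor of how weighted multi-constraint resolution can look, here is a minimal damped least-squares sketch. It is our own illustration under assumed dimensions, not the KU Leuven framework itself: each modality contributes a task Jacobian and a desired error rate, and scalar weights arbitrate when constraints conflict.

    import numpy as np

    def weighted_constraint_control(tasks, damping=1e-3):
        """Solve min_qd  sum_i w_i ||J_i qd - e_i||^2 + damping ||qd||^2.
        tasks: list of (J, e, w): task Jacobian, desired error rate, weight."""
        J = np.vstack([np.sqrt(w) * Ji for Ji, _, w in tasks])
        e = np.concatenate([np.sqrt(w) * ei for _, ei, w in tasks])
        n = J.shape[1]
        # Normal equations of the weighted, damped least-squares problem.
        return np.linalg.solve(J.T @ J + damping * np.eye(n), J.T @ e)

    # Example: a 2D visual task and a 1D force task on a 4-joint arm; the force
    # constraint is weighted five times higher, so it dominates on conflict.
    rng = np.random.default_rng(0)
    J_vis, e_vis = rng.standard_normal((2, 4)), np.array([0.05, -0.02])
    J_frc, e_frc = rng.standard_normal((1, 4)), np.array([0.10])
    qd = weighted_constraint_control([(J_vis, e_vis, 1.0), (J_frc, e_frc, 5.0)])

Strict priorities, as opposed to soft weights, would instead project lower-priority tasks into the null space of higher-priority ones.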

14:40 - 15:10 Anh-Van Ho - Multimodal Sense of Touch: Complexity and Feasibility in HMI
Haptics, or the sense of touch, is the sensation that humans most want to experience when interacting with the outside world, and specifically with a robot. Human-robot interaction has been researched for years, and recent developments in vision and audition techniques have given robots efficient means to communicate with humans. Nonetheless, humans still do not feel comfortable interacting with robots through touch, owing to the ongoing challenge of integrating the haptic modality into a robot's sensory system. Many haptic sensing systems have been proposed to address this issue, relying on delicate, complicated hardware integration to provide robots with a multimodal sense of touch. However, there remains a trade-off between the complexity of the design and the feasibility of its application in robotics.
In this talk, we will review some basic principles of the human sense of touch and their reflection in the development of multimodal robotic skins. We will also discuss the relevance of each modality as an efficient afferent means of conveying the corresponding percept to the robot's control system. In addition, an example of endowing a flexible fabric sensor with multimodal sensing through signal-processing techniques will be presented, to illustrate the trade-off between complexity and feasibility in the development of multimodal sensing systems. Finally, a perspective on combining multimodal sensing with the morphology of soft bodies as an efficient HMI will be introduced for further discussion.

15:10 - 15:40 Jaeheung Park - Active sensing strategies for contact using constraints between the robot and environment
When robots operate in complex human environments, they often need to deal with contact. Contact between the robot and the environment inevitably introduces uncertainties, because contact sensing technology is relatively imprecise and the environment model contains errors. How to perform tasks in contact situations under such uncertainties is therefore an important issue. In this talk, the basic concept of active sensing is first explained through a simple example. Then, we demonstrate the use of the "see and touch" concept in peg-in-hole and box-packing tasks using a dual-arm robot. In the peg-in-hole task, the peg and hole are located using a vision sensor, but the object positions are no longer precise enough once the objects are grasped by the robot's hands. We therefore use active motions to locate the contact position or to settle the robot into a desired task state. Finally, the active sensing strategy is applied to locate the contact position of an unknown object on the ground during walking, which is especially effective when the lower body occludes the vision sensors. The experimental results demonstrate its performance and its potential for other applications.
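A guarded motion is perhaps the simplest instance of this kind of active sensing: move slowly toward the expected contact until the force reading confirms it, then use the stopping pose to refine the vision-based estimate. The sketch below simulates the idea with a linear spring contact model; the threshold, step size, and stiffness are hypothetical values, and the scenario is ours, not the speaker's setup.

    # Minimal guarded-descent simulation: step toward a surface until the
    # (simulated) contact force crosses a threshold, refining the height estimate.
    def guarded_descent(z0, z_surface, stiffness=1000.0, f_thresh=2.0, step=0.001):
        z = z0
        while stiffness * max(0.0, z_surface - z) < f_thresh:
            z -= step                  # small exploratory motion toward contact
        return z                       # pose at contact: refined surface estimate

    # Vision says the surface is near z = 0.30 m; the true surface is at 0.283 m.
    z_contact = guarded_descent(z0=0.30, z_surface=0.283)
    print(round(z_contact, 3))         # ~0.281: within one step of true contact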

15:40 - 15:50 Poster teasers
15:50 - 16:30 Poster session / Coffee break with refreshments
16:30 - 17:00 Eris Chinellato - Multimodal Integration in Nature: Lessons for Robotics Research
The availability of multiple sensory modalities allows animals, humans among them, to detect features and events in their environment more quickly and more reliably, even when only weak and noisy signals are available.
Despite the growing interest in multimodal processing in robotics, there is a lack of research on the general rules governing how various modalities can be integrated, and on the possible advantages that different types of integration bring to the interaction of an autonomous agent with its environment.
By studying the solutions found in natural systems, we can understand how a continuous, seamless integration of various sensory modalities can be performed for different systems and in different contexts, and how to take the most advantage of such integration.
In this talk, we will provide a survey of the general principles of multimodal integration in nature, covering multiple species and sensing modalities. We will discuss the relevance of these principles for robotics, and propose strategies for achieving novel sensory and sensorimotor skills through bio-inspired multimodal integration, including under-threshold feature detection and multimodal active sensing.

17:00 - 17:30 Stefan Escaida Navarro - Multi-Modal Robot Skins: Proximity Servoing and further Applications
The main focus of this talk will be what we call proximity servoing: in general terms, using proximity sensors to perform closed-loop control with position or velocity controllers. On the one hand, the literature shows several examples of proximity servoing for applications such as collision avoidance and grasping. On the other hand, through our own work, we try to contribute to the field by systematically considering proximity sensors arranged as arrays. Here, ties are established to computer vision methods, which have already proven useful in the processing of tactile sensing data (imprints).
In parallel, we are interested in the development of multi-modal modular robot skins that include proximity and tactile sensing. Since no touch event can occur without a preceding proximity event, these modalities complement each other well, and here lies the potential to improve the performance of established methods as well as to develop new approaches. For instance, the predictions of collision-prediction schemes in proximity mode can be corroborated by the tactile modality. A gripper can, after contactless preshaping, supervise the contact forces, achieving complete monitoring of the grasping procedure. Novel schemes for haptic exploration can alternate proximity- and tactile-based exploration steps, and novel methods for safe HRI, such as intuitive control of robots, become possible, among others.
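In its most basic form, proximity servoing can be sketched as follows (our illustration, not the speaker's implementation): every cell of a proximity array that detects an obstacle inside a safety margin contributes a retreat velocity opposite to its outward normal. Cell geometry, gain, and safety distance are assumed values.

    import numpy as np

    def proximity_servo(distances, normals, d_safe=0.10, gain=0.5):
        """Map proximity-array readings to a Cartesian retreat velocity.
        distances: (N,) measured distances in metres, one per sensor cell.
        normals:   (N, 3) outward unit normals of the cells, common frame."""
        v = np.zeros(3)
        for d, n in zip(distances, normals):
            if d < d_safe:                       # obstacle inside the margin
                v -= gain * (d_safe - d) * n     # retreat away from the obstacle
        return v

    # Example: two cells; an object 4 cm in front of the first triggers a retreat.
    v_cmd = proximity_servo(np.array([0.04, 0.25]),
                            np.array([[1.0, 0.0, 0.0],
                                      [0.0, 1.0, 0.0]]))
    print(v_cmd)   # -> [-0.03  0.  0.]: back away along -x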

Support of IEEE RAS Technical Committees

This workshop is supported by the IEEE RAS Technical Committees on:
- Robotic Hands, Grasping and Manipulation,
- Human-Robot Interaction and Coordination,
- Whole-Body Control,
- Computer & Robot Vision,
- Humanoid Robotics,
- Haptics.

Last update on 01/07/2016