Abstract: Human-robot collaboration is a challenging task that requires developing artificial perceptual skills to interpret human behaviors, as well as exquisite timing to react seamlessly to those behaviors. One of the main goals of human-robot collaboration is to achieve a shared understanding of the task and the environment, together with a mutual adaptation of behaviors and goals. In this talk, I will present two aspects of human-robot collaboration that can be broadly understood as the “social” and the “physical” domains of interaction. All experiments are carried out on the iCub humanoid robot, a platform that offers rich perceptual and motor capabilities. In the social domain, I describe experiments that aim to understand how human interactants interpret the behavior of the iCub in well-defined and controllable scenarios. In the physical domain, on the other hand, we develop direct sensing of human movements in order to integrate them into the iCub's controllers.
Abstract: In recent years, robotic technologies have been providing definite advances in assisting people in need of physical help, including rehabilitation and prosthetics. Working in fields where humans are placed right at the center of the technology is, in turn, helping refocus our robotics research itself. In prosthetics, the goal is to have an artificial limb move naturally and intelligently enough to perform the task that users intend, without requiring their attention. By abstracting this idea, a robot of the future can be thought of as a physical ``prosthesis'' of its user, with enough sensors, actuators, and intelligence to interpret and execute the user's intention, translating it into a sensible action of which the user remains the owner. In the talk I will present examples of human-robot integration, as in prosthetics and rehabilitation, augmentation with exoskeletons and supernumerary limbs, and shared-autonomy robotic avatars, with the robot executing the human's intended actions and the human perceiving the context of his/her actions and their consequences.
Abstract: Observing living beings means observing complex systems in complex, dynamic environments. Obtaining similar behaviours has long challenged roboticists. What we can learn from nature is how to simplify perception-action loops without simplifying the systems, the environment, or the task, but rather by taking advantage of their interplay. According to this embodied intelligence paradigm, control is simplified and efficiency is increased, as we see in a few examples of octopus-inspired soft robots. Octopus locomotion uses soft legs that shorten and elongate to walk underwater, avoiding the need to move rigid limbs against water drag. Octopus arm movements also reduce water drag by avoiding rigid translations and instead leveraging the arm's softness and extreme deformability to unfold in water. Among other simplifying principles, these strategies can help roboticists develop effective and efficient robots for marine operations and for many more application fields.
Abstract: It is a widespread opinion that Robotics still needs much more robustness, safety, lower manufacturing costs, and reduced control complexity and effort, while at the same time it aims at more and more complex and adaptive behaviors in open-ended environments. Many see ‘Embodied AI’ and ‘Soft Robotics’ as powerful approaches, if not ‘silver bullets’, for achieving those objectives. However, controlling soft robots is hard, and adding compliance to robots does not by itself solve all the problems. A foundational approach to Soft Robotics is often referred to as ‘Morphological Computation’, i.e., the outsourcing of computation from the controller to the body-environment interactions of the system. The concept is used to describe rather different phenomena in the literature. While there seems to be a consensus about the importance of embodiment, there is still no clear definition of how the embodiment of an agent, and typically of an intelligent robot, should actually be defined. Many of the conceptual definitions of embodiment proposed so far do not provide much more than the common-sense understanding of embodiment, which is why some researchers believe that an operational and quantitative approach, i.e., ‘Morphological Computation’, is important. There are already different approaches to quantifying embodiment; examples from the field of information theory have been developed by the author and by a handful of other researchers. The keynote will review the existing concepts and quantifications of embodiment and show how they overlap and differ, thereby leading to a better understanding and a clearer picture of what is actually meant by ‘embodiment’ and ‘Morphological Computation’. A clearer understanding of the basic quantitative physical aspects of embodiment may pave the way to radically new and significantly more effective approaches to the modeling and control of intelligent robots (with rigid and soft body parts) perceiving and acting stochastically in unstructured and partially known open-ended environments. Moreover, the research methodology needs to be improved if we want to ground paradigm and modeling choices in experimental evidence. This is particularly interesting in the context of the bold yet failed FET-Flagship proposal on Next Generation Robotics, in which a large part of the Robotics and AI communities were involved. To what extent, under which conditions, and critically in which timeframe, will it be possible to achieve the advancements needed to exploit robots to dramatically increase productivity and to support elder care?
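As one illustration of what an information-theoretic quantification of morphological computation can look like (a minimal sketch added for concreteness, not a formula taken from the talk), consider the discrete sensorimotor loop studied by Zahedi and Ay (Entropy, 2013), with world state $W_t$, action $A_t$, and next world state $W_{t+1}$. One candidate measure in that line of work attributes morphological computation to how strongly the next world state depends on the current world state once the controller's action is accounted for:

\[ \mathrm{MC}_W \,=\, I(W_{t+1};\, W_t \mid A_t) \,=\, \sum_{w',\, w,\, a} p(w', w, a)\, \log \frac{p(w' \mid w, a)}{p(w' \mid a)} . \]

Informally, a value near zero means the action alone essentially determines the next world state (the controller does the work), whereas a large value indicates that body-environment dynamics carry much of the computation.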