Category: Neurorobotics

Functional components for control and behavioural models

Gaze stabilization experiment

In this work, we focused on reflexes used by humans for gaze stabilization. A model of gaze stabilization, based on the coordination of the vestibulo-collic reflex (VCR) and the vestibulo-ocular reflex (VOR), has been designed and implemented on humanoid robots. The model, inspired by neuroscientific theories of the cerebellum, is provided with learning and adaptation capabilities based on internal models.

In a first phase, we designed experiments to assess the model's response to disturbances, validating it both in the NRP and on a real humanoid robot (SABIAN). For this purpose, we mounted the SABIAN head on an oscillating platform (shown below) that rotates about the pitch axis to produce the disturbance.

The oscillating platform. (a) The SABIAN head mounted on the platform, with its inertial reference frame. (b) The transmission of motion from the DC motor to the oscillating platform.

In a second phase, we carried out experiments to test the gaze stabilization capability of the model during a locomotion task. We gathered human data on torso displacement during walking and running, and used these data to animate a virtual iCub while the gaze stabilization model was active.
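To illustrate the stabilization principle in its simplest form, the sketch below counter-rotates the eye against the measured head velocity and adapts its gain from the residual retinal slip, loosely mimicking cerebellar learning. This is a hand-written toy, not the actual VCR/VOR model; the class, its parameters, and the learning rule are assumptions made for illustration.

import numpy as np

class AdaptiveVOR:
    # Minimal VOR-like stabilizer: the eye velocity command opposes the head
    # velocity, and the gain adapts online to reduce retinal slip (residual
    # image motion). Illustrative sketch only.
    def __init__(self, gain=0.8, learning_rate=0.05):
        self.gain = gain
        self.learning_rate = learning_rate

    def step(self, head_velocity, retinal_slip):
        eye_velocity_cmd = -self.gain * head_velocity
        # Increase the gain while compensation is incomplete (slip has the
        # same sign as the head motion), decrease it if it overshoots.
        self.gain += self.learning_rate * retinal_slip * np.sign(head_velocity)
        return eye_velocity_cmd

# Example: a sinusoidal pitch disturbance, as produced by the oscillating platform.
vor = AdaptiveVOR()
for t in np.arange(0.0, 2.0, 0.01):
    head_vel = np.cos(2 * np.pi * t)
    slip = (1.0 - vor.gain) * head_vel   # residual image motion after compensation
    eye_cmd = vor.step(head_vel, slip)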

Balancing experiment

Using the same principles as the gaze stabilization experiment, we carried out a balancing experiment with a simulated iCub. In this experiment, the simulated iCub holds up a red tray with a green ball on top. The goal of the experiment is to control the roll and pitch joints of the robot's wrist in order to keep the ball at the center of the tray. The control model for the wrist joints is provided with learning and adaptation capabilities based on internal models.
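The control objective can be sketched with a plain PD law that tilts the wrist against the measured ball offset; in the actual experiment this role is played by the adaptive internal-model controller, so the gains, joint conventions, and class below are purely illustrative assumptions.

class TrayBalancer:
    # Drive the ball back to the tray centre by tilting the wrist.
    # Ball position is expressed in the tray frame, with (0, 0) at the centre.
    def __init__(self, kp=2.0, kd=0.5):
        self.kp, self.kd = kp, kd
        self.prev_error = (0.0, 0.0)

    def step(self, ball_x, ball_y, dt):
        ex, ey = -ball_x, -ball_y                       # positional error
        dex = (ex - self.prev_error[0]) / dt            # error derivative
        dey = (ey - self.prev_error[1]) / dt
        self.prev_error = (ex, ey)
        wrist_pitch_cmd = self.kp * ex + self.kd * dex  # tilt along x
        wrist_roll_cmd = self.kp * ey + self.kd * dey   # tilt along y
        return wrist_roll_cmd, wrist_pitch_cmd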

Visual segmentation experiment

A cortical model for visual segmentation (Laminart) has been built with the aim of integrating it into the Neurorobotics Platform. The goal is to see how the model behaves in a realistic visual environment. A second goal is to connect it to a model of the retina.
The model consists of a biologically plausible network containing hundreds of thousands of neurons and several million connections, embedded in about 50 cortical layers. It is built functionally to link objects that are likely to group together with illusory contours, and to segment distinct perceptual groups into separate segmentation layers.
So far, the Laminart model has been successfully integrated into the NRP, and the first experiments are being built to check the behaviour of the model and to discover what must be added so that it can coherently segment objects from each other in a realistic environment. In addition, the connection of the Laminart model to the retina model is nearly complete.
In the future, the model will be connected on the NRP to other models for saliency detection, learning, predictive coding, and decision making, to create a closed-loop experiment. It will also take into account experimental data on texture segmentation and contour integration.
segmentation

Visual perception experiment

In this work, we evaluated the construction of neural models for visual perception. The validation scenario chosen for the models is an end-to-end controller capable of lane following for a self-driving vehicle. We developed a visual encoder from camera images to spikes, inspired by the silicon retina (i.e., the Dynamic Vision Sensor, DVS). The vehicle controller embeds a wheel decoder based on a virtual agonist-antagonist muscle model.
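The idea behind the agonist-antagonist wheel decoder can be sketched as two motor populations pulling the steering angle in opposite directions through a muscle-like second-order dynamic. The class below is a simplified stand-in; the constants and names are assumptions, not the actual controller.

class AgonistAntagonistSteering:
    # Converts the firing-rate difference of two motor populations into a
    # steering angle via a spring-damper (muscle-like) dynamic.
    def __init__(self, gain=0.002, stiffness=5.0, damping=2.0, dt=0.01):
        self.gain, self.stiffness, self.damping, self.dt = gain, stiffness, damping, dt
        self.angle = 0.0
        self.velocity = 0.0

    def step(self, rate_left, rate_right):
        torque = self.gain * (rate_left - rate_right)   # rate difference (Hz) -> torque
        accel = torque - self.stiffness * self.angle - self.damping * self.velocity
        self.velocity += accel * self.dt
        self.angle += self.velocity * self.dt
        return self.angle

# Example: the left population firing faster steers the wheel to the left.
steering = AgonistAntagonistSteering()
for _ in range(100):
    angle = steering.step(rate_left=60.0, rate_right=20.0)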

visual-perception-jacques-1

Grasping experiment

During the first 12 months of SGA1, we investigated methods for representing and executing grasping motions with spiking neural networks that can be simulated in the NEST simulator and therefore in the Neurorobotics Platform. For grasping in particular, humans can remember motions and modify them during execution based on an object's shape and the interaction with it. We developed a spiking neural network with a biologically inspired architecture that performs different grasping motions: it first learns with plasticity from human demonstration in simulation and is then used to control a humanoid robotic hand. The network is composed of two types of associative networks trained independently: one represents a single finger and learns joint synergies as motion primitives; the other represents the hand and coordinates multiple finger networks to execute a specific grasp. Both receive the joint states as proprioceptive input using population encoding, and the finger networks also receive tactile feedback that inhibits the output neurons and stops the motion when contact with an object is detected.
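As an illustration of the population encoding used for proprioception, a joint angle can be mapped onto a group of neurons with overlapping Gaussian tuning curves, each firing most strongly near its preferred angle. The numbers below (population size, angle range, rates, tuning width) are assumptions for the sketch, not the parameters of the actual network.

import numpy as np

def encode_joint_angle(angle, n_neurons=20, angle_min=0.0, angle_max=1.6,
                       max_rate=100.0, width=0.1):
    # Population-encode a joint angle (rad) into per-neuron firing rates (Hz).
    # Each neuron has a Gaussian tuning curve centred on its preferred angle.
    preferred = np.linspace(angle_min, angle_max, n_neurons)
    return max_rate * np.exp(-0.5 * ((angle - preferred) / width) ** 2)

# Example: a half-closed finger joint mostly activates neurons tuned near 0.8 rad.
rates = encode_joint_angle(0.8)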

grasping-camilo-1

grasping-camilo-4

Multimodal sensory representation for invariant object recognition

This functional component integrates multisensory information, namely tactile, visual and auditory, to form an object representation. Although we initially target the invariant object recognition problem using only visual information, the component is capable of combining other sensory modalities. The model is based on the computational phases of Hierarchical Temporal Memory (HTM), which is inspired by the operating principles of the mammalian neocortex. The model was adapted and modified to extract a multimodal sensory representation of an object. This representation can be interpreted as a cortical representation of the perceived inputs. To test the model, we performed object recognition on the COIL-20 and COIL-100 datasets, which consist of 20 and 100 different objects, respectively (see Figure 1). Each object was rotated in 5-degree steps on a turntable, and an image was captured at each step (see Figure 2). In addition to the image acquisition steps, a number of post-processing procedures such as background elimination and size normalization were performed on the images.


Figure 1 Selected images from different categories.


Figure 2 A duck object under various rotational transformations.

To obtain object representations, standard image-processing algorithms were applied to binarize and downsize the images in the datasets. The model was then fed with the processed image data to generate sparse distributed representations of the perceived images. A sample processed image and the cortical representation of the same visual pattern are illustrated in Figure 3 and Figure 4, respectively. Note that the representation of an object from different sensory inputs can be obtained with the same procedure by concatenating the representations computed for each modality, as sketched below.

Figure 3 A processed visual pattern.
Figure 4 Cortical representation of a visual pattern.
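A minimal sketch of the preprocessing and of the modality concatenation mentioned above is given below; the target resolution, threshold, and function names are assumptions, and the HTM step that turns the processed image into a sparse distributed representation is deliberately left out.

import numpy as np

def preprocess(image, size=(32, 32), threshold=128):
    # Binarize and downsize a 2-D grayscale image (uint8) before feeding the model.
    h, w = image.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    small = image[np.ix_(ys, xs)]              # nearest-neighbour downsizing
    return (small > threshold).astype(np.uint8)

def multimodal_representation(sdrs_per_modality):
    # Concatenate the sparse representations obtained for each modality
    # (e.g. visual, tactile, auditory) into a single multimodal representation.
    return np.concatenate([sdr.ravel() for sdr in sdrs_per_modality])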

After obtaining representations for all images, we performed recognition by splitting each dataset into two groups: memory representations (the training set) and unseen object patterns (the test set). The representation similarity metric is defined as the number of shared active cortical columns (the same active bits at the same locations) between a stored pattern and an unseen pattern; a code sketch of this metric is given after the table. The recognition accuracies shown in the table below were obtained by varying the training portion from 10% to 90% of the dataset, in increments of 10%.

Training percent    COIL-20 accuracy (%)    COIL-100 accuracy (%)
10                  90.4                    89.0
20                  94.3                    91.2
30                  96.9                    94.9
40                  97.2                    95.6
50                  98.3                    96.5
60                  98.2                    97.0
70                  98.4                    97.3
80                  98.6                    97.0
90                  98.7                    96.8
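A minimal sketch of the overlap-based similarity metric described above, together with the resulting nearest-neighbour recognition step, is given below; the function names are illustrative, not those of the actual implementation.

import numpy as np

def overlap(sdr_a, sdr_b):
    # Similarity = number of active bits shared at the same positions
    # (i.e. the same active cortical columns).
    return int(np.sum((sdr_a > 0) & (sdr_b > 0)))

def recognize(test_sdr, memory_sdrs, memory_labels):
    # Assign the label of the stored representation with the largest overlap.
    scores = [overlap(test_sdr, m) for m in memory_sdrs]
    return memory_labels[int(np.argmax(scores))]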

The obtained results indicate that the model performs well with a single modality. Our ongoing studies focus on integrating additional sensory information (e.g., tactile) into a multimodal representation in order to achieve a grasping task.


Integrating Nengo into the NRP?

On 11th March, we had the honor of welcoming Terrence Stewart from the University of Waterloo (http://compneuro.uwaterloo.ca/people/terrence-c-stewart.html) to the Technical University of Munich. During his two-day visit, he first gave a fascinating presentation on Nengo and neural engineering in general.
This was followed by extensive discussions with our developers, after the NRP had been installed on his laptop, to investigate a possible integration of Nengo into our platform. To this end, we discussed which overlaps already exist and identified the missing parts needed to make this integration happen.
This would give the NRP the opportunity to offer additional spiking neural network simulators besides NEST.
The collaboration would be beneficial for both sides, with us offering a platform to interface Nengo with Roboy or other muscle-based simulations.
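For readers unfamiliar with Nengo, the snippet below shows what a minimal Nengo model looks like; it is a generic example of the public Nengo API rather than any NRP integration code, and simply illustrates the kind of model the platform could additionally run.

import numpy as np
import nengo

# A tiny Nengo model: one ensemble represents a sine-wave input,
# a second ensemble computes its square.
model = nengo.Network(label="minimal example")
with model:
    stimulus = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stimulus, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# sim.data[probe] holds the decoded output over time.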


Gazebo DVS plugin – towards a sensor library

On the NRP, we already support any sensor included in Gazebo. These are mostly classical robotic sensors such as laser scanners and cameras.

However, Gazebo includes neither recent biologically inspired sensors nor neuroscientific models of organic sensors. These types of sensors are important for the NRP. To keep the workflow identical for classical robotic sensors and newly developed ones, we decided to implement the latter as Gazebo plugins. Essentially, our sensor library will consist of a collection of Gazebo plugins simulating various biologically inspired sensors.

So far, we have implemented a simulation of the Dynamic Vision Sensor (DVS), which is open source and available on our SP10 GitHub. In the coming months, we will also adapt our implementation of COREM, a retina simulation framework [1,2], and wrap it in a Gazebo plugin.

DVS_generic_image_viewer
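The principle behind the DVS simulation can be sketched per frame: each pixel emits an ON or OFF event whenever its log intensity has changed by more than a contrast threshold since the last event at that pixel. The class below is a simplified illustration of this principle, not the Gazebo plugin code; the threshold value and names are assumptions.

import numpy as np

class SimpleDVS:
    # Per-frame approximation of a Dynamic Vision Sensor: emits
    # (x, y, polarity) events where the log intensity changed by more
    # than 'threshold' since the last event at that pixel.
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.reference = None   # log intensity at the last event, per pixel

    def update(self, frame):
        log_i = np.log(frame.astype(np.float64) + 1.0)
        if self.reference is None:
            self.reference = log_i
            return []
        diff = log_i - self.reference
        ys, xs = np.where(np.abs(diff) > self.threshold)
        events = [(int(x), int(y), 1 if diff[y, x] > 0 else -1)
                  for y, x in zip(ys, xs)]
        self.reference[ys, xs] = log_i[ys, xs]   # reset reference where events fired
        return events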

[1] Martínez-Cañada, P., Morillas, C., Pino, B., Ros, E., & Pelayo, F. (2016). A Computational Framework for Realistic Retina Modeling. International Journal of Neural Systems, 26(07), 1650030.

[2] Ambrosano A. et al. (2016). Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform. Biomimetic and Biohybrid Systems. Living Machines 2016. 

Publication in a Supplement to Science on Brain-Inspired Intelligent Robotics

The article “Neurorobotics: A strategic pillar of the Human Brain Project” was released in a Science Supplement on “Brain-inspired intelligent robotics: The intersection of robotics and neuroscience”, explaining the importance of our subproject and its research.

science.jpg

To give you an overview, you can find the first section below:

“Neurorobotics is an emerging science that studies the interaction of brain, body, and environment in closed perception–action loops where a robot’s actions affect its future sensory input. At the core of this field are robots controlled by simulated nervous systems that model the structure and function of biological brains at varying levels of detail (1). In a typical neurorobotics experiment, a robot or agent will perceive its current environment through a set of sensors that will transmit their signals to a simulated brain. The brain model may then produce signals that will cause the robot to move, thereby changing the agent’s perception of the environment. Observing how the robot then interacts with its environment and how the robot’s actions influence its future sensory input allows scientists to study how brain and body have to work together to produce the appropriate response to a given stimulus. Thus, neurorobotics links robotics and neuroscience, enabling a seamless exchange of knowledge between these two disciplines. Here, we provide an introduction to neurorobotics and report on the current state of development of the European Union–funded Human Brain Project’s (HBP’s) Neurorobotics Platform (2, 3). HBP is Europe’s biggest project in information communication technologies (ICT) to date (www.humanbrainproject.eu) and is one of two large-scale, long-term flagship research initiatives selected by the European Commission to promote disruptive scientific advance in future key technologies. It will have a duration of 10 years and deliver six open ICT platforms for future research in neuroscience, medicine, and computing, aimed at unifying the understanding of the human brain and translating this knowledge into commercial products.”

Read the entire paper here on page 25:
http://www.sciencemag.org/sites/default/files/custom-publishing/documents/Brain-inspired-robotics-supplement_final.pdf?_ga=1.158217660.785230381.1481986150

(image source: http://www.sciencemag.org/sites/all/themes/science/images/facebook-share.jpg)