Category: Neurorobotics

Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all 17 participants for making this workshop a great success.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. Over three days, we welcomed 17 attendees from various HBP sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users became comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. On the first day, we provided a tutorial experiment, developed exclusively for the event, which walked users through the many features of the NRP. The tutorial is inspired by the baby-playing-ping-pong video, here simulated with an iCub robot. It will soon be released with the official build of the platform.


On the second and third days, the users were given more freedom to implement their own experiments. We held short hands-on sessions on the Robot Designer as well as the Virtual Coach, which is used for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the Miro robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPGs), a revival of the Lauron experiment, …
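For readers who want to try this themselves, here is a minimal sketch of what a scripted Virtual Coach session looks like. Module, class and argument names follow the 2017-era Virtual Coach Python API as we recall it and should be treated as assumptions; the experiment ID is a placeholder.

```python
# Minimal Virtual Coach session (sketch, not the exact workshop script).
# 'tutorial_baseball_exp' is a placeholder experiment ID.
from hbp_nrp_virtual_coach.virtual_coach import VirtualCoach

vc = VirtualCoach(environment='local')                # connect to a local NRP installation
sim = vc.launch_experiment('tutorial_baseball_exp')   # launch an installed experiment
sim.start()                                           # run the closed-loop simulation
# ... inspect recorded data here for offline analysis ...
sim.pause()
sim.stop()
```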


We received great feedback from the users and look forward to organizing the next NRP User Workshop!


Paper accepted: Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands

We got a paper about grasping with spiking neural networks accepted for ICANN 2017!

The complete architecture is shown in the figure. The hand network (left) receives the proprioception of all fingers and a grasp type signal to generate fingertip targets. Each finger network (middle) receives its own proprioception and fingertip target to generate motor commands.


Abstract:

Representation and execution of movement in biology is an active field of research relevant to neurorobotics. Humans can remember grasp motions and modify them during execution based on the shape and the intended interaction with objects. We present a hierarchical spiking neural network with a biologically inspired architecture for representing different grasp motions. We demonstrate the ability of our network to learn from human demonstration using synaptic plasticity on two different exemplary grasp types (pinch and cylinder). We evaluate the performance of the network in simulation and on a real anthropomorphic robotic hand. The network exposes the ability of learning finger coordination and synergies between joints that can be used for grasping.

Keywords:

grasp motion representation, spiking networks, neurorobotics, motor primitives.

[1] J. C. Vasquez Tieck, H. Donat, J. Kaiser, I. Peric, S. Ulbrich, A. Roennau, M. Zöllner, and R. Dillmann, “Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands,” ICANN, 2017.

Mid-Term Vision for HBP

This vision is for a seven-year time horizon: it is to be achieved by the end of the regular funding period of the HBP, i.e., by the time the HBP enters the status of a European Research Infrastructure. By 2023, we expect our current research in “Future Computing and Robotics” to have produced a number of unique, tangible results in the form of “products” and a number of groundbreaking “base technologies and methods” that will significantly facilitate and accelerate future research in the European Infrastructure across a diverse range of fields.

In conjunction with future computing, HBP’s robotics research plays multiple, significant roles in the HBP:

  • (Closed Loop Studies): it links the real world with the “virtual world of simulation” by connecting physical sensors (e.g., cameras) in the real world to a simulated brain. This brain controls a body which, in turn, can impact and alter the real-world environment. Robotics, therefore, provides the potential to perform realistic “closed-loop studies”: perception – cognition – action. This will establish a whole new field of robot design: virtual prototyping of robots that can then be readily built as real machines and function like the simulated ones. This will not only speed up robot development by orders of magnitude, but will also dramatically improve the testing and verification of robot behaviour under a wide variety of circumstances.
  • (Brain-Derived Products): it links brain research to information technology by taking scientific results (e.g., data and models of behaviour) obtained in brain research and refining them to a readiness level where they can be easily taken up by commercial companies and rapidly turned into new categories of products, e.g., using the specialized neuromorphic hardware also currently being developed within the HBP. This will allow novel control technologies that achieve robustness and adaptivity far beyond today's algorithmic controls, and ones that actually rival biological systems.
  • (Virtualised Brain Research): it links information technology to brain research by designing new tools for brain researchers, with which they can design experiments and then carry them out in simulation. For example, one can study a completely simulated animal’s navigation or sensorimotor skills as it operates in a completely simulated environment (e.g., a maze or a straight or sinusoidal vertical path), and the signals of the simulated brain will be recorded in real-time for immediate analysis. These same principles can be applied to humans and humanoid avatars, allowing bold and fruitful research on degenerative brain diseases, for example.

We envision that the unique integration of the above three paths will lead to widespread, mutually beneficial cross-fertilization and research acceleration through the two-way inspiration of the disciplines involved. The vehicle for this bi-directional translation (brain science ↔ robotics) is the HBP's Neurorobotics Platform.

At this point, we can see the following vision taking shape: we have taken the first steps towards the design of a virtual mouse. This animal, which only exists in a computer, has eyes, whiskers, skin, a brain, and a body with bones and muscles designed to function like those of its natural counterpart. Clearly, all of these elements are still far from perfect, i.e., from exhibiting behaviour and function corresponding to the original creature. However, the more brain scientists learn about these functions and the more data become available, the more we can integrate these results into the virtual mouse, and the faster we can improve the “mouse fidelity”. In parallel, we will apply the same principles to the simulation of human embodiment. The possibilities are endless.

Using the virtual mouse (or humans, or any other animals) in the future, brain scientists can not only reproduce traditional experimental designs in the computer and study the results immediately, they can also modify the mouse in any way they want, e.g., introduce lesions into the brain or cut muscles and study the impact. Moreover, they can place as many electrodes or other sensors in the body as they want. But perhaps the most astounding benefit of these new possibilities is that scientists can perform experiments that are very complex, if not impossible, to perform in the real world. This includes very long-term studies with permanent recordings (which can run 10,000 times faster than real time!), animal swarms with parallel recordings, and plasticity and learning effects over many years.

On the technology side, we can envision a number of brain-derived base technologies resulting from our work. One straightforward example is robot-based prostheses with myo-electric interfaces, which can not only be developed in simulation but also be tailor-made or personalized to the properties of one specific person, because every single aspect can be simulated. This is a rather simple example; the disruptive products will most likely involve a complex artificial brain running on neuromorphic hardware and capable of super-fast learning, which, for the first time, would make possible highly intelligent household robots that can adapt their behaviour to various tasks.

Substantial progress towards both a comprehensive understanding of the brain and technologies derived from the brain's working principles can only be made by advancing theory and methodology at the system level. While the fields of artificial intelligence and machine learning in particular have recently gained unprecedented momentum, driven primarily by the success of big data and deep neural networks, the resulting tools, models, and methods are still highly domain-specific. With the ubiquitous availability of cheap storage, massive processing power, and large-scale datasets, the actual challenge no longer lies in designing a system that performs a specific task, but in integrating the wealth of narrow-scoped models from machine learning and neuroscience into a coherent cognitive model. The platform infrastructure of the HBP enables the design and implementation of such a model by integrating different tools, methods and theories in a common research environment. For the very first time, different brain theories, neural network architectures and learning algorithms can be directly compared both to each other and to experimental ground truth. In this context, neurorobotics serves as a central “workbench” for the study of integrated cognitive models in real-world tasks and as a prototyping tool that enables the seamless transfer of these models into new products and services.

To achieve these goals, we need to reinforce the “input side”, i.e., brain scientists need to talk to roboticists much more intensively than they have done up to now. Only then can truly new concepts emerge. One particularly attractive concept could be the automatic generation of models from data: data-driven model generation. This would make it possible to use every new data collection to improve the virtual models with a minimum of human intervention and hence keep the virtual robot permanently and synergistically coupled to developments in brain science. Of central importance is the permanent adjustment and calibration of these data models against the corresponding cognitive brain system, which is in itself a complex and long-term endeavour. This goal can only be achieved through very close interaction between theorists, data/computer scientists and engineers, and as such could be a perfect example of the kind of synergistic transdisciplinary cooperation that can only be performed in a European Research Infrastructure.


Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. This model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).
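For illustration, the prediction pipeline can be sketched as follows: address events are binned into 2D event frames, driven through a fixed random recurrent “liquid”, and a linear readout is trained to predict the event frame several steps ahead. This is a rate-based echo-state analogue written in plain NumPy, not the spiking implementation used in the paper; all sizes, the prediction horizon and the random stand-in data are placeholders.

```python
import numpy as np

rng = np.random.RandomState(0)
n_in, n_res, horizon = 16 * 16, 500, 5                    # flattened 16x16 frames, 5-step prediction

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep the recurrent dynamics stable

def run_liquid(frames):
    """Collect liquid states for a sequence of flattened event frames."""
    x, states = np.zeros(n_res), []
    for u in frames:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

frames = rng.binomial(1, 0.05, (1000, n_in)).astype(float)  # stand-in for binned DVS events
S = run_liquid(frames)
X, Y = S[:-horizon], frames[horizon:]                       # predict frames `horizon` steps ahead
W_out = np.linalg.lstsq(X, Y, rcond=None)[0]                # train the linear readout
prediction = X @ W_out                                      # predicted future event frames
```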


We validated this model in various experiments with both simulated and real DVS input. The results were accepted for publication in [1]. We are now working on using these short-term visual predictions to control robots.

[1] J. Kaiser, R. Stal, A. Subramoney, et al., “Scaling up liquid state machines to predict over address events from dynamic vision sensors,” Bioinspiration & Biomimetics (special issue), 2017.

Functional components for control and behavioural models

Gaze stabilization experiment

In this work, we focused on the reflexes used by humans for gaze stabilization. A model of gaze stabilization, based on the coordination of the vestibulo-collic reflex (VCR) and the vestibulo-ocular reflex (VOR), has been designed and implemented on humanoid robots. The model, inspired by neuroscientific cerebellar theories, is provided with learning and adaptation capabilities based on internal models.
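The core stabilization idea can be sketched in a few lines: the eye counter-rotates against the head velocity measured by the inertial sensor, and an adaptive gain is corrected from the residual retinal slip, loosely mimicking cerebellar learning. The gains, learning rate and disturbance below are illustrative only and do not correspond to the internal-model-based controller actually used.

```python
import numpy as np

dt, lr = 0.01, 0.1            # control period [s], adaptation rate (illustrative)
gain = 0.6                    # initial, deliberately imperfect VOR gain

for t in np.arange(0.0, 10.0, dt):
    head_vel = np.cos(2 * np.pi * 0.5 * t)   # pitch disturbance, e.g. from the oscillating platform
    eye_cmd = -gain * head_vel                # counter-rotating eye velocity command
    slip = head_vel + eye_cmd                 # residual retinal slip (zero for perfect stabilization)
    gain += lr * slip * head_vel * dt         # slip-driven gain adaptation ("cerebellar" correction)
```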

In a first phase, we designed experiments to assess the model's response to disturbances, validating it both in the NRP and on a real humanoid robot (SABIAN). In this phase, we mounted the SABIAN head on an oscillating platform (shown below) able to rotate about the pitch axis in order to produce a disturbance.

The oscillating platform. In (a), the SABIAN head mounted on the platform is shown together with its inertial reference frame. The transmission of motion from the DC motor to the oscillating platform is depicted in (b).

In a second phase, we carried out experiments to test the gaze stabilization capability of the model during a locomotion task. We gathered human data of torso displacement while walking and running. These data were used to animate a virtual iCub while the gaze stabilization model was active.

Balancing experiment

Using the same principles as the gaze stabilization experiment, we carried out a balancing experiment with a simulated iCub. In this experiment, the simulated iCub holds a red tray with a green ball on top. The goal is to control the roll and pitch joints of the robot's wrist in order to keep the ball in the center of the tray. The control model for the wrist joints is provided with learning and adaptation capabilities based on internal models.
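As a rough illustration of the control problem (not of the adaptive, internal-model-based controller itself), a plain PD scheme that tilts the wrist against the ball's displacement could look like the sketch below; the gains and the source of the ball position are placeholders.

```python
import numpy as np

kp, kd = 2.0, 0.5                    # illustrative PD gains
prev_err = np.zeros(2)

def wrist_command(ball_xy, dt=0.02):
    """Map the ball's (x, y) offset on the tray to wrist (roll, pitch) targets."""
    global prev_err
    err = -np.asarray(ball_xy, dtype=float)   # tilt against the ball's displacement
    d_err = (err - prev_err) / dt
    prev_err = err
    roll, pitch = kp * err + kd * d_err
    return roll, pitch

# e.g. roll, pitch = wrist_command((0.03, -0.01))
```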

Visual segmentation experiment

A cortical model for visual segmentation (Laminart) has been built with the aim of integrating it into the Neurorobotics Platform. The goal is to see how the model behaves in a realistic visual environment; a second goal is to connect it to a retina model.
The model consists of a biologically plausible network containing hundreds of thousands of neurons and several million connections, embedded in about 50 cortical layers. It is built functionally to link objects that are likely to group together with illusory contours, and to segment distinct perceptual groups into separate segmentation layers.
So far, the Laminart model has been successfully integrated into the NRP, and the first experiments are being built to check the behaviour of the model and to discover what has to be added so that it can coherently segment objects from each other in a realistic environment. In addition, the connection of the Laminart model to the retina model is nearly complete.
In the future, the model will be connected to other models (saliency detection, learning, predictive coding, decision making) on the NRP to create a closed-loop experiment. It will also take into account experimental data on texture segmentation and contour integration.

Visual perception experiment

In this work, we evaluated the construction of neural models for visual perception. The validation scenario chosen for the models is an end-to-end controller capable of lane following for a self-driving vehicle. We developed a visual encoder from camera images to spikes, inspired by the silicon retina (the Dynamic Vision Sensor, DVS). The vehicle controller embeds a wheel decoder based on a virtual agonist-antagonist muscle model.
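The idea behind the visual encoder can be sketched as follows: compare the log intensity of consecutive camera frames and emit ON/OFF address events where the change exceeds a threshold. This is a simplified stand-in for the actual NRP transfer function, and the threshold value is illustrative.

```python
import numpy as np

def frame_to_events(prev_frame, frame, threshold=0.15):
    """Return (y, x, polarity) address events from two grayscale frames in [0, 1]."""
    diff = np.log(frame + 1e-3) - np.log(prev_frame + 1e-3)   # DVS-like log-intensity change
    on = np.argwhere(diff > threshold)                         # brightness increases
    off = np.argwhere(diff < -threshold)                       # brightness decreases
    return [(y, x, +1) for y, x in on] + [(y, x, -1) for y, x in off]
```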


Grasping experiment

During the first 12 months of SGA1, we investigated methods for representing and executing grasping motions with spiking neural networks that can be simulated in the NEST simulator and, therefore, in the Neurorobotics Platform. For grasping in particular, humans can remember motions and modify them during execution based on the shape of objects and the interaction with them. We developed a spiking neural network with a biologically inspired architecture to perform different grasping motions; it first learns from human demonstration in simulation using plasticity and is then used to control a humanoid robotic hand. The network is composed of two types of associative networks that are trained independently: one represents a single finger and learns joint synergies as motion primitives; the other represents the hand and coordinates multiple finger networks to execute a specific grasp. Both receive the joint states as proprioception using population encoding, and the finger networks also receive tactile feedback that inhibits the output neurons and stops the motion if contact with an object is detected.
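As an illustration of the population encoding used for proprioception, a joint angle can be mapped to the firing rates of a small population of neurons with Gaussian tuning curves, which can then drive Poisson generators in NEST. Population size, tuning width and rate scaling below are placeholders, not the values used in the experiment.

```python
import numpy as np

def encode_joint_angle(angle, n_neurons=20, angle_min=0.0, angle_max=1.6,
                       sigma=0.1, max_rate=100.0):
    """Return per-neuron firing rates [Hz] encoding one joint angle [rad]."""
    centres = np.linspace(angle_min, angle_max, n_neurons)    # preferred angles of the population
    return max_rate * np.exp(-0.5 * ((angle - centres) / sigma) ** 2)

rates = encode_joint_angle(0.8)
# each rate can then parameterize one Poisson spike generator feeding the network
```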


Multimodal sensory representation for invariant object recognition

This functional component integrates multisensory information, namely tactile, visual, and auditory, to form an object representation. Although we first target the invariant object recognition problem using only visual information, the component is capable of combining other sensory modalities. The model is based on the computational phases of Hierarchical Temporal Memory (HTM), which is inspired by the operating principles of the mammalian neocortex. The model was adapted and modified to extract a multimodal sensory representation of an object, which can be interpreted as a cortical representation of the perceived inputs. To test the model, we perform object recognition on the COIL-20 and COIL-100 datasets, which consist of 20 and 100 different objects, respectively (see Figure 1). Each object was rotated in 5-degree steps on a turntable and an image was captured by the camera at each step (see Figure 2). In addition to the image acquisition, a number of post-processing procedures such as background elimination and size normalization were performed on the images.


Figure 1 Selected images from different categories.


Figure 2 A duck object under various rotational transformations.

To obtain the object representations, standard image processing algorithms were applied to binarize and downsize the images in the datasets. The model was then fed with the processed image data to generate a sparse distributed representation of each perceived image. A sample processed image and the cortical representation of the same visual pattern are illustrated in Figure 3 and Figure 4, respectively. Note that the representation of an object from different sensory inputs can be obtained by the same procedure, concatenating the representations obtained for each modality.

Figure 3 A processed visual pattern.
Figure 4 Cortical representation of a visual pattern.

After obtaining representations for all images, we perform recognition by splitting each dataset into two groups: memory representations (the training set) and unseen object patterns (the test set). The representation similarity metric is defined as the number of shared active cortical columns (the same active bits in the same locations) between stored and unseen patterns. The recognition accuracies are shown in the table below; they were obtained by varying the training split from 10% to 90% of the dataset in increments of 10%.

Training percent   COIL-20   COIL-100
10                 90.4      89.0
20                 94.3      91.2
30                 96.9      94.9
40                 97.2      95.6
50                 98.3      96.5
60                 98.2      97.0
70                 98.4      97.3
80                 98.6      97.0
90                 98.7      96.8
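For reference, the overlap-based similarity described above and the resulting nearest-neighbour matching can be sketched as follows; the SDR handling is illustrative and the data structures are placeholders.

```python
import numpy as np

def overlap(sdr_a, sdr_b):
    """Number of cortical columns active in both binary SDRs."""
    return int(np.sum(np.logical_and(sdr_a, sdr_b)))

def classify(unseen_sdr, memory_sdrs, memory_labels):
    """Assign the label of the stored SDR with the highest overlap."""
    scores = [overlap(unseen_sdr, m) for m in memory_sdrs]
    return memory_labels[int(np.argmax(scores))]
```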

The obtained results indicate that the model performs well with a single modality. Our ongoing studies focus on integrating additional sensory modalities (e.g., tactile) to form a multimodal representation for a grasping task.

Integrating Nengo into the NRP?

On 11th March, we had the honor of welcoming Terrence Stewart from the University of Waterloo (http://compneuro.uwaterloo.ca/people/terrence-c-stewart.html) at the Technical University of Munich. Over these two days, he first gave a fascinating presentation on Nengo and neural engineering in general.
This was followed by extensive discussions with our developers to investigate a possible integration of Nengo into our platform, after the NRP had been installed on his laptop. To this end, we discussed the overlaps that already exist and identified the missing pieces needed to make this integration happen.
This opens up the opportunity for the NRP to offer additional spiking neural network simulators besides NEST.
Such a collaboration would be beneficial for both sides, with us offering a platform to interface Nengo with Roboy or other muscle-based simulations.
