Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all 17 participants for making this workshop a great experience.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. We welcomed 17 attendees over three days, coming both from various HBP sub-projects (such as Martin Pearson, SP3) and from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users became comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. On the first day, we provided a tutorial experiment, developed exclusively for the event, which walked users through the many features of the NRP. This tutorial experiment is inspired by the baby-playing-ping-pong video, simulated here with an iCub robot. It will soon be released with the official build of the platform.


On the second and third days, users were given more freedom to implement their own experiments. We held short hands-on sessions on the Robot Designer as well as the Virtual Coach, for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the MiRo robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPGs), a revival of the Lauron experiment, …


We received great feedback from the users and are looking forward to organizing the next NRP User Workshop!


Paper accepted: Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands

We got a paper about grasping with spiking neural networks accepted for ICANN 2017!

The complete architecture is shown in the figure. The hand network (left) receives the proprioception of all fingers and a grasp-type signal to generate fingertip targets. Each finger network (middle) receives its own proprioception and fingertip target to generate motor commands.

[Figure: complete architecture of the hierarchical hand/finger network]
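The dataflow of this two-level hierarchy can be sketched in plain Python. This is purely illustrative: the actual system uses spiking neural networks, and all mappings and values below are made-up stand-ins, not the paper's implementation.

```python
# Illustrative sketch of the hierarchical dataflow: a hand-level module
# maps grasp type (plus proprioception) to fingertip targets, and one
# module per finger maps its target to a motor command. All numbers and
# rules here are hypothetical stand-ins for the spiking networks.

def hand_network(proprioception, grasp_type):
    """Map finger proprioception + grasp type to fingertip targets."""
    # Toy rule (proprioception unused here): a 'pinch' closes thumb and
    # index, a 'cylinder' grasp closes all fingers around the object.
    closure = {"pinch": [1.0, 1.0, 0.0, 0.0, 0.0],
               "cylinder": [0.8, 0.8, 0.8, 0.8, 0.8]}
    return list(closure[grasp_type])

def finger_network(proprioception, fingertip_target, gain=0.5):
    """Map one finger's proprioception + target to a motor command."""
    return gain * (fingertip_target - proprioception)

# One control step for all five fingers.
proprio = [0.2, 0.1, 0.0, 0.0, 0.0]           # current finger positions
targets = hand_network(proprio, "pinch")      # top level: grasp -> targets
commands = [finger_network(p, t) for p, t in zip(proprio, targets)]
print(commands)
```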


Representation and execution of movement in biology is an active field of research relevant to neurorobotics. Humans can remember grasp motions and modify them during execution based on the shape and the intended interaction with objects. We present a hierarchical spiking neural network with a biologically inspired architecture for representing different grasp motions. We demonstrate the ability of our network to learn from human demonstration using synaptic plasticity on two different exemplary grasp types (pinch and cylinder). We evaluate the performance of the network in simulation and on a real anthropomorphic robotic hand. The network demonstrates the ability to learn finger coordination and synergies between joints that can be used for grasping.


Keywords: grasp motion representation, spiking networks, neurorobotics, motor primitives.

[1] J. C. Vasquez Tieck, H. Donat, J. Kaiser, I. Peric, S. Ulbrich, A. Roennau, M. Zöllner, and R. Dillmann, “Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands,” ICANN, 2017.

Mid-Term Vision for HBP

This vision is for a seven-year time horizon: it is to be achieved by the end of the regular funding period of the HBP, i.e., by the time the HBP enters the status of a European Research Infrastructure. So, by 2023, we expect our current research in “Future Computing and Robotics” to have produced a number of unique, tangible results in the form of “products” and a number of groundbreaking “base technologies and methods” that will significantly facilitate and accelerate future research in the European Infrastructure across a diverse range of fields.

In conjunction with future computing, HBP’s robotics research plays multiple, significant roles in the HBP:

  • (Closed-Loop Studies): it links the real world with the “virtual world of simulation” by connecting physical sensors (e.g., cameras) in the real world to a simulated brain. This brain controls a body which, in turn, can impact and alter the real-world environment. Robotics, therefore, provides the potential to perform realistic “closed-loop studies”: perception – cognition – action. This will establish a whole new field of robot design: virtual prototyping of robots that can then be readily built as real machines and function like the simulated ones. This will not only speed up robot development by orders of magnitude, it will also dramatically improve the testing and verification of robot behaviour under a wide variety of circumstances.
  • (Brain-Derived Products): it links brain research to information technology by taking scientific results (e.g., data and models of behaviour) obtained in brain research and refining them to a readiness level where they can be used by commercial companies, easily taken up, and rapidly turned into new categories of products, e.g., using specialized neuromorphic hardware, also currently being developed by HBP. This will allow novel control technologies that achieve robustness and adaptivity far beyond today's algorithmic controls, and that actually rival biological systems.
  • (Virtualised Brain Research): it links information technology to brain research by designing new tools for brain researchers, with which they can design experiments and then carry them out in simulation. For example, one can study a completely simulated animal’s navigation or sensorimotor skills as it operates in a completely simulated environment (e.g., a maze or a straight or sinusoidal vertical path), and the signals of the simulated brain will be recorded in real-time for immediate analysis. These same principles can be applied to humans and humanoid avatars, allowing bold and fruitful research on degenerative brain diseases, for example.
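The first of these roles, the closed perception–cognition–action loop, can be sketched as a minimal skeleton. Here a one-dimensional "world" and a proportional controller stand in for the physical environment and the simulated brain; every name and value is an illustrative assumption, not NRP code:

```python
# Minimal perception -> cognition -> action loop. A 1-D scalar "world",
# a trivial sensor, and a proportional controller (standing in for a
# simulated brain) are illustrative stand-ins only.

def perceive(world_state):
    """Sensor: observe the environment (here, the state itself)."""
    return world_state

def brain(observation, target):
    """Cognition: proportional controller standing in for a brain model."""
    return 0.5 * (target - observation)

def act(world_state, command):
    """Action: the body alters the environment."""
    return world_state + command

state, target = 0.0, 1.0
for _ in range(20):          # the closed loop, iterated over time steps
    obs = perceive(state)
    cmd = brain(obs, target)
    state = act(state, cmd)
print(state)                 # the state converges toward the target
```

The point of closing the loop is that the "brain" only ever sees the world through its sensors and only ever changes it through its actions, exactly as a physical robot would.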

We envision that the unique integration of the above three paths will lead to widespread, mutually beneficial cross-fertilization and research acceleration through the two-way inspiration of the disciplines involved. The vehicle for this bi-directional translation (brain science ↔ robotics) is the HBP's Neurorobotics Platform.

At this point, we can see the following vision taking shape: we have taken the first steps towards the design of a virtual mouse. This animal, which only exists in a computer, has eyes, whiskers, skin, a brain, and a body with bones and muscles that function exactly like its natural counterpart. Clearly, all of these elements are still far from being perfect, i.e., from exhibiting behaviour and function corresponding to the original creature. However, the more brain scientists learn about these functions and the more data become available, the more we can integrate these results into the virtual mouse, and the faster we can improve the “mouse fidelity”. In parallel, we will apply the same principles to the simulation of human embodiment. The possibilities are endless.

Using the virtual mouse (or humans, or any other animals) in the future, brain scientists can not only transfer traditionally designed experiments into the computer and study the results immediately, they can also modify the mouse any way they want, e.g., introduce lesions into the brain or cut muscles and study the resulting impact. Moreover, they can place as many electrodes or other sensors in the body as they want. But perhaps the most astounding benefit of these new possibilities is that scientists can perform experiments that are very complex, if not impossible, to perform in the real world. This includes very long-term studies with permanent recordings (and these can be run 10,000 times faster than real-time!), animal swarms with parallel recordings, and plasticity and learning effects over many years.

On the technology side, we can envision a number of brain-derived base technologies that result from our work. One straightforward example is robot-based prostheses that have myo-electric interfaces and which can not only be developed in simulation, but which can be tailor-made or personalized to the properties of one specific person – because every single aspect can be simulated. This is a rather simple example; the disruptive products will most likely involve a complex artificial brain running on neuromorphic hardware and capable of super-fast learning, which, for the first time, would make highly intelligent household robots possible that can adapt their behaviour to various tasks.

Substantial progress towards both a comprehensive understanding of the brain and technologies that are derived from the brain's working principles can only be made by advancing theory and methodology at the system level. While the fields of artificial intelligence and machine learning in particular have recently gained unprecedented momentum, primarily driven by the success of big data and deep neural networks, the resulting tools, models, and methods are still highly domain-specific. With the ubiquitous availability of cheap storage, massive processing power, and large-scale datasets, the actual challenge no longer lies in the design of a system that performs a specific task, but in integrating the wealth of different narrow-scoped models from machine learning and neuroscience into a coherent cognitive model. The platform infrastructure of HBP enables the design and implementation of such a model by integrating different tools, methods, and theories in a common research environment. For the very first time, different brain theories, neural network architectures, and learning algorithms can be directly compared both to each other and to experimental ground truth. In this context, neurorobotics serves as a central “workbench” for the study of integrated cognitive models in real-world tasks and as a prototyping tool that enables the seamless transfer of these models into new products and services.

To achieve these goals, we need to reinforce the “input side”, i.e., brain scientists need to talk to roboticists much more intensively than they have done up to now. Only then can genuinely new concepts emerge. One particularly attractive concept is the automatic generation of models from data: data-driven model generation. This would make it possible to use every new data collection to improve the virtual models with a minimum of human intervention and hence keep the virtual robot permanently and synergistically coupled to developments in brain science. Of central importance is the permanent adjustment and calibration of these data models against the corresponding cognitive brain system, which in itself is a complex and long-term endeavour. This goal can only be achieved on the basis of a very close interaction between theorists, data/computer scientists, and engineers, and as such could be a perfect example of the synergistic transdisciplinary cooperation that can only be performed in a European Research Infrastructure.





Collaboration between scientists and developers towards integration work in the NRP

Visual-motor coordination is a key research field for understanding our brain and for developing new brain-like technologies.

To address the development and evaluation of bio-inspired control architectures based on cerebellar features, SP10 scientists and developers are collaborating in the implementation of several experiments in the Neurorobotics Platform.

Ismael Baira Ojeda from the Technical University of Denmark (DTU) visited the Scuola Superiore Sant’Anna (Pisa, Italy) to integrate the Adaptive Feedback Error Learning (AFEL) architecture [1] into the Neurorobotics Platform using the iCub humanoid robot. This control architecture combines machine learning techniques with cerebellar-like microcircuits to provide an optimized input space [2], fast learning, and accurate motor control of robots. In the experiment, the iCub was commanded to balance a ball towards the center of a board, which the iCub held in its hand.
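The core principle behind feedback error learning, on which AFEL builds, fits in a few lines: a fixed feedback controller stabilizes the system, and its output serves as the teaching signal for a learned feedforward (cerebellar-like) model, which gradually takes over. The code below is a generic numerical sketch of that principle on a toy one-dimensional plant, not the actual AFEL implementation (all gains and the plant are assumptions):

```python
import numpy as np

# Feedback-error-learning sketch: learn a feedforward inverse model of
# a toy 1-D plant (u -> x = 2*u), so the true inverse gain is 0.5.
# The feedback command u_fb is used as the error signal to train the
# feedforward weight w. Plant, gains, and rates are illustrative.

rng = np.random.default_rng(0)
w = 0.0                          # feedforward weight (should approach 0.5)
lr, kp = 0.2, 0.5                # learning rate, feedback gain

for _ in range(500):
    x_des = rng.uniform(-1, 1)   # desired plant output for this trial
    u_ff = w * x_des             # feedforward command (learned inverse)
    x = 2.0 * u_ff               # plant response to the feedforward alone
    u_fb = kp * (x_des - x)      # feedback controller corrects the residual
    # Feedback command = teaching signal for the feedforward model:
    w += lr * u_fb * x_des

print(w)                         # approaches the true inverse gain 0.5
```

As the feedforward model improves, the feedback term shrinks towards zero, which is exactly the division of labour attributed to cerebellar motor learning.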

The experiment was later refined and finished during the Install Party hosted by Fortiss (April 2017).

As a next step, the AFEL architecture could be scaled up and combined with vision and motor-control breakthroughs from the different SPs.

Thanks to all the scientists and developers for your support, especially Lorenzo Vannucci, Alessandro Ambrosano and Kenny Sharma!

iCub ball balancing
The prototype experiment running on the Neurorobotics Platform.


[1] Tolu, S., Vanegas, M., Luque, N. R., Garrido, J. A., & Ros, E. (2012). Bio-inspired adaptive feedback error learning architecture for motor control. Biological Cybernetics, 1-16.

[2] Vijayakumar, S., D’souza, A., & Schaal, S. (2005). Incremental online learning in high dimensions. Neural Computation, 17(12), 2602-2634.

Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. This model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).
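The principle of reservoir computing underlying this model can be illustrated with a non-spiking analogue: a fixed random recurrent network (echo-state style) whose states feed a linear readout trained to predict the input one step ahead. The real model uses spiking liquids driven by DVS address events; the sizes, signals, and scaling below are illustrative assumptions:

```python
import numpy as np

# Reservoir-computing sketch (echo-state analogue of a liquid state
# machine): a fixed random recurrent network is driven by a toy input
# stream, and only a linear readout is trained, here to predict the
# input one step ahead. All sizes and signals are illustrative.

rng = np.random.default_rng(1)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1

def run_reservoir(inputs):
    """Drive the fixed reservoir and collect its state trajectory."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 20 * np.pi, 1000))   # toy stimulus stream
X = run_reservoir(u)
# Train the readout to map state[t] -> input[t + 1] (one-step prediction).
W_out = np.linalg.lstsq(X[:-1], u[1:], rcond=None)[0]
pred = X[:-1] @ W_out
mse = float(np.mean((pred - u[1:]) ** 2))
print(mse)                                     # small prediction error
```

Only the readout weights are trained; the recurrent "liquid" stays fixed, which is what makes the approach cheap to train and easy to scale.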


We validated this model in various experiments with both simulated and real DVS input. The results were accepted for publication in [1]. We are now working on using these short-term visual predictions to control robots.

[1] J. Kaiser, R. Stal, A. Subramoney, et al., “Scaling up liquid state machines to predict over address events from dynamic vision sensors,” Bioinspiration & Biomimetics (special issue), 2017.

Sensory models for the simulated mouse in the NRP

A biologically inspired translation model for proprioceptive sensory information was developed. The translation is achieved by implementing a computational model of the neural activity of type Ia and type II sensory fibers connected to muscle spindles. The model also includes the activity of both static and dynamic gamma-motoneurons, which provide fusimotor activation capable of regulating the sensitivity of the proprioceptive feedback through the contraction of specific intrafusal fibers (Proske, 1997) [1].

Figure 1 Intrafusal fibers

The proposed model is an extension of a state-of-the-art computational model of muscle spindle activity (Mileusnic, 2006) [2]. The model developed by Mileusnic and colleagues, albeit complete and validated against neuroscientific data, is entirely rate-based; it was therefore modified for integration into a spiking neural network simulation. In particular, a spike-integration technique was employed to compute the fusimotor activation, and the generated rate was used to produce spike trains.
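Both conversions can be illustrated with generic building blocks: leaky integration of incoming spikes to recover a smooth fusimotor activation, and inhomogeneous Poisson spike generation to turn an afferent firing rate back into spike trains. This is a sketch of the generic technique, not the actual NEST/SpiNNaker implementation, and all rates and constants are made up:

```python
import numpy as np

# Sketch of the two conversions: (1) leaky integration of gamma-
# motoneuron spikes into a smooth activation (a rate estimate), and
# (2) inhomogeneous Poisson generation of afferent spikes from a rate.
# All constants and the toy Ia rate model are illustrative.

dt, tau = 0.001, 0.05            # time step (s), integration constant (s)
rng = np.random.default_rng(2)

def integrate_spikes(spike_train, tau=tau, dt=dt):
    """Leaky integration of a 0/1 spike train into a rate estimate (Hz)."""
    act, trace = 0.0, []
    for s in spike_train:
        act += dt * (-act / tau) + s / tau   # decay + jump on each spike
        trace.append(act)
    return np.array(trace)

def rate_to_spikes(rate_hz, dt=dt, rng=rng):
    """Inhomogeneous Poisson spike generation from a rate signal (Hz)."""
    return (rng.random(len(rate_hz)) < rate_hz * dt).astype(int)

gamma_in = rate_to_spikes(np.full(2000, 80.0))   # 80 Hz gamma drive, 2 s
activation = integrate_spikes(gamma_in)          # smooth fusimotor estimate
ia_rate = 20.0 + activation                      # toy Ia model: baseline + drive
ia_spikes = rate_to_spikes(ia_rate)
print(ia_spikes.sum())                           # afferent spikes over 2 s
```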

The proprioceptive model is implemented in NEST, to allow easy integration into the NRP, and on SpiNNaker, to support real-time robotic applications. The proposed component can be coupled both to biomechanical models, such as musculo-skeletal systems, and to common robotic platforms (via suitable conversion from encoder values to simulated muscle lengths). In particular, this model will be used, as part of CDP1, to provide sensory feedback from the virtual mouse body.
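For the robotic-platform case, one common simplification of the encoder-to-muscle-length conversion is an affine moment-arm mapping for an antagonist muscle pair. This is an assumption for illustration (the actual conversion used in the model is not specified here), with made-up rest length and moment arm:

```python
import math

# Hypothetical affine moment-arm conversion from a joint encoder value
# (angle in radians) to antagonist muscle lengths: l = l0 -/+ r * theta,
# with rest length l0 and a constant moment arm r (made-up values).

def encoder_to_muscle_lengths(theta_rad, l0=0.10, r=0.02):
    """Return (flexor, extensor) lengths in metres for a joint angle."""
    flexor = l0 - r * theta_rad      # flexor shortens as the joint flexes
    extensor = l0 + r * theta_rad    # extensor lengthens correspondingly
    return flexor, extensor

flex, ext = encoder_to_muscle_lengths(math.radians(30))
print(flex, ext)
```

The resulting lengths can then be fed to the spindle model in place of the biomechanical simulation's muscle states.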

[1] Proske, U. (1997). The mammalian muscle spindle. Physiology, 12(1), 37-42.

[2] Mileusnic, M. P., Brown, I. E., Lan, N., & Loeb, G. E. (2006). Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology, 96(4), 1772-1788.