CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back at the beginning of 2017, we held a great NRP Hackathon at FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 – visuo-motor integration. During this presentation, we also showed how to integrate any deep learning model into the Neurorobotics Platform, as previously presented by Kenny Sharma at the Young Researchers Event.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.



Mouse modeling for robotics and neuroscience…

… or why we are building a zoo of artificial mice.

Neurorobotics is about connecting simulated brains to virtual and physical robot bodies. Unlike other approaches in robotics or machine learning, the focus is on high biological plausibility, i.e. a neurorobotic system is designed to capture and predict the quantitative behavior of its biological counterpart as closely as possible. What exactly is meant by “close”, however, depends on the granularity of the brain model. Clearly, simple neural networks with only a few neurons can be studied on an equally simple robot. In the case of the Braitenberg vehicle experiment on the Neurorobotics Platform, a mobile robot platform with four wheels and a camera is perfectly sufficient. By contrast, brain simulations comprising millions of neurons require realistic body models to simulate and reproduce data from neuroscience as accurately as possible. In this context, standard robots are no longer a viable choice. Neurorobotics is therefore not only about connecting a robot body to a brain, but also about the design, simulation, and construction of that body.
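The Braitenberg vehicle mentioned above is simple enough to sketch in a few lines. The snippet below is a hypothetical, minimal controller (not the platform's actual transfer function): crossed excitatory wiring between two brightness sensors and two wheels, as in Braitenberg's vehicle 2b, makes the robot turn towards a light source.

```python
def braitenberg_2b(left_brightness, right_brightness, gain=1.0):
    """Crossed excitatory wiring: each wheel is driven by the
    opposite-side sensor, so the vehicle steers towards the stimulus."""
    left_wheel = gain * right_brightness
    right_wheel = gain * left_brightness
    return left_wheel, right_wheel

# A light source on the left (brighter left sensor) speeds up the
# right wheel, turning the vehicle towards the light.
print(braitenberg_2b(0.8, 0.2))  # -> (0.2, 0.8)
```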

The brain models developed in the Human Brain Project are among the most complex and realistic ones ever built, so it is only logical that they require the most realistic body models ever built. But what does the perfect body model look like? The answer is both simple and tricky: since most of the data in neuroscience is obtained from rodents, particularly mice, the perfect choice is to simulate a mouse body. The tricky part is to determine the level of detail necessary to provide meaningful embodiment for the brain models. We are therefore currently designing and building a zoo of different mouse models, each of which serves a specific purpose.

The maximum level of biological detail can only be achieved in simulation. For this reason, we are developing a virtual mouse body that not only looks like a real mouse but that also has the same biomechanical properties. Every bone of the skeleton was modeled individually based on bones of real mice. Combined with the musculoskeletal simulation that will soon be available in the Neurorobotics Platform, the skeleton will enable realistic biomechanical simulations.

Rendering of the completed mouse skeleton

The latest version of the virtual mouse got a soft skin that is fitted to the skeleton. Together with the recently added simulation of the fur, our mouse is almost indistinguishable from its biological colleagues!

Rendering of the mouse model with skin and fur

Unlike simulation, the real world imposes many constraints on the types of robots that can be built. However, having a physical counterpart to our virtual mouse is beneficial for many reasons. It not only enables direct interaction with the robot, but is also a first step towards applying results from neurorobotics research in real-world applications. Our first prototype of the mouse robot was built with a focus on small size and a biomimetic leg design for robust locomotion. Upcoming releases will not only feature improved mechanics but also include more sensors. Follow our blog to see how our mouse is slowly growing up!

Completed initial prototype of the mouse robot

Many thanks to Matthias Clostermann, Eva Siehmann, and Peer Lucas for their contributions!

Florian Walter, Technical University of Munich
October 13, 2017

Customized design of musculoskeletal robots with the Robot Designer

In a recent blog post we introduced the integration of muscle simulations in the Neurorobotics Platform, technically realized by integrating the musculoskeletal simulator OpenSim into the robotics simulator Gazebo. This will enable researchers to conduct experiments with biologically validated muscle actuation on a variety of body models, whether highly biomimetic or of a more technical nature. Such studies become even more important in light of the concept of embodiment: a brain is always embedded in a body, and the morphology therefore plays a crucial role in any behavior-learning task. For neurorobotics researchers, investigating this direct coupling of the brain to a morphology, in terms of skeleton structure, body shape, joint assembly and muscle attachment points, will open up numerous experimental opportunities.

To foster morphological experiments in the Neurorobotics Platform, a fast and user-friendly way of adapting the skeleton and muscles is required. We have therefore enhanced our Blender Robot Designer plugin with interactive muscle definition. After creating a robot by defining its kinematic structure and geometric characteristics, one can now define muscle attachments and paths graphically. As demonstrated in Figure 1 with the skeleton of our CDP1 mouse model, you can select an arbitrary number of path points on the robot model itself. As path points are listed in the user interface, you can delete them, change their order, or refine the location of every point at any time. Muscle characteristics such as maximum force and fiber length can then be adapted, and you can choose between the different muscle types provided by OpenSim, derived from biological measurements. Defined muscles are exported together with the robot model as an additional file in the .osim format and are thereby ready for use in the Neurorobotics Platform.



Figure 1: Graphical definition of muscles on a validated mouse skeleton in the Robot Designer
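For readers curious what such an exported muscle description contains, the sketch below builds a minimal muscle element programmatically. The element names loosely follow OpenSim's .osim schema, but the snippet is purely illustrative: it is not part of the Robot Designer and does not produce a complete, validated model.

```python
import xml.etree.ElementTree as ET

def make_muscle(name, path_points, max_force=100.0, fiber_length=0.01):
    """Build a minimal muscle element loosely following the .osim
    schema (element names are illustrative, not a complete model)."""
    muscle = ET.Element("Thelen2003Muscle", name=name)
    ET.SubElement(muscle, "max_isometric_force").text = str(max_force)
    ET.SubElement(muscle, "optimal_fiber_length").text = str(fiber_length)
    objects = ET.SubElement(
        ET.SubElement(ET.SubElement(muscle, "GeometryPath"), "PathPointSet"),
        "objects")
    # Each path point is attached to a body and given a 3D location.
    for i, (body, xyz) in enumerate(path_points):
        pp = ET.SubElement(objects, "PathPoint", name=f"{name}-P{i + 1}")
        ET.SubElement(pp, "location").text = " ".join(map(str, xyz))
        ET.SubElement(pp, "body").text = body
    return muscle

biceps = make_muscle("biceps", [("humerus", (0.0, 0.01, 0.0)),
                                ("radius", (0.0, -0.02, 0.0))])
xml_str = ET.tostring(biceps, encoding="unicode")
```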


With the introduced muscle definition tool, we hope to help researchers tackle questions from both a morphological and an embodiment perspective: What is the effect of varying muscle paths and characteristics on the agent's behavior? How can a brain learn to act with a complex musculoskeletal body model, and how does the musculoskeletal structure enhance the learning of body motions?

For a quick start with the Robot Designer have a look at our documentation.


Benedikt Feldotto

Technical University of Munich






Optimising compliant robot locomotion using the HBP Neurorobotics platform

If we want robots to become a part of our everyday life, future robot platforms will have to be safe and much cheaper than most useful robots are now. Safety can be obtained by making robots compliant using passive elements (springs, soft elastic materials). Unfortunately, accurate mechanical (dynamic/kinematic) models of such robots are not available and in addition, especially when cheaper materials are used, their dynamical properties drift over time because of wear.

Therefore, cheap robots with passive compliance need adaptive control that is as robust as possible to mechanical and morphological variations. Adaptation training on each physical robot will still be necessary, but this should converge as quickly as possible.

The Tigrillo quadruped robot will be used to investigate neural closed loop motor control for locomotion to address these issues. In particular, we want to investigate how the NRP simulation framework can be used to develop such robust neural control.

As a first step, we implemented a parameterised Tigrillo simulation model generator. Using a simple script, a Gazebo simulation model with given body dimensions, mass distributions and spring constants can be generated and simulated in the NRP. We then implemented evolutionary optimisation (CMA-ES) in the NRP's Virtual Coach to find efficient motor control patterns, which were then generated with spiking population networks using a reservoir computing approach. Finally, these control patterns were transferred to the physical robot's SpiNNaker board and the resulting gaits were compared to the simulation results.
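The structure of such an optimisation loop can be sketched as follows. This is a deliberately simplified stand-in: the fitness function below does not launch any simulation (in the real setup the Virtual Coach would run an NRP trial and score the resulting gait), and the strategy is a plain (mu, lambda) evolution strategy rather than full CMA-ES, which additionally adapts a covariance matrix.

```python
import random

def gait_fitness(params):
    """Stand-in for a Virtual Coach rollout: in the real setup this
    would run an NRP experiment with the given control parameters and
    return e.g. the distance walked. Here we simply score closeness
    to a hypothetical optimum."""
    target = [1.0, 0.5, -0.3]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(fitness, dim=3, popsize=16, sigma=0.5, generations=40, seed=0):
    """Minimal (mu, lambda) evolution strategy: sample around the
    current mean, keep the best quarter, recentre, shrink the step."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(generations):
        pop = [[m + sigma * rng.gauss(0, 1) for m in mean]
               for _ in range(popsize)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:popsize // 4]
        mean = [sum(ind[i] for ind in elite) / len(elite)
                for i in range(dim)]
        sigma *= 0.95  # simple step-size decay
    return mean

best = evolve(gait_fitness)
```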

These steps are illustrated in the video below.

Next steps are:

  • to tune the parameter ranges of the Tigrillo generator to those that are realistic for the real robot;
  • to implement sensors on the physical robot and calibrate equivalent simulated sensors;
  • to use our setup to obtain the desired robust closed loop control and validate both qualitatively and quantitatively on the physical robot.

Many thanks to Gabriel Urbain, Alexander Vandesompele, Brecht Willems and prof. Francis wyffels for their input.


How do we simplify your neurons?

Reproducing complex behaviors of a musculoskeletal model, such as rodent locomotion, requires a controller able to process a high bandwidth of sensory input and compute the corresponding motor response.
This usually entails creating large-scale neural networks, which in turn result in high computational costs. To address this issue, mathematical simplification methods are needed that capture the essential properties of these networks.

One of the most crucial steps in mouse brain reconstruction is the reduction of detailed neuronal morphologies to point neurons. This is not trivial: the morphologies are needed not only to determine the connectivity between neurons by providing contact points, but also to compute how currents propagate through the cell.
The latter requires computing the potential of every dendritic and axonal sub-section.

A new model is thus needed that is computationally lighter yet generic enough to capture all the dynamics observed in detailed models.
Recent work by Pozzorini et al. [1] addressed this issue with a Generalized Integrate-and-Fire (GIF) point neuron model, whose parameters are fitted to recorded spiking activity and input currents.
The GIF model captures more dynamics of biological neurons than the classical Integrate-and-Fire (IaF) model, such as stochastic spiking and spike-triggered currents. However, it still cannot reproduce all the dendritic dynamics observed in detailed models.
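To illustrate one ingredient of such models, the sketch below simulates a leaky integrate-and-fire neuron extended with a spike-triggered adaptation current. All parameters are illustrative; the full GIF model additionally includes a stochastic spiking mechanism and a spike-triggered moving threshold.

```python
def simulate_adaptive_lif(current, dt=0.1, tau_m=20.0, R=1.0,
                          v_rest=-70.0, v_thresh=-50.0, v_reset=-70.0,
                          tau_eta=100.0, eta_jump=2.0):
    """Leaky integrate-and-fire with a spike-triggered adaptation
    current eta, one ingredient of GIF-type models (illustrative
    parameters; the full GIF also has a stochastic moving threshold)."""
    v, eta, spikes = v_rest, 0.0, []
    for step, i_ext in enumerate(current):
        # Membrane integrates the input minus the adaptation current.
        v += dt / tau_m * (-(v - v_rest) + R * (i_ext - eta))
        # Adaptation current decays exponentially between spikes.
        eta += dt / tau_eta * (-eta)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
            eta += eta_jump  # each spike strengthens adaptation
    return spikes

# A constant input yields a spike train that slows down over time
# (spike-frequency adaptation) as eta accumulates.
spikes = simulate_adaptive_lif([30.0] * 5000)
```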


As a result, Rössert et al. [2] created an algorithm that reduces the synaptic and dendritic processes by forming clusters of receptors. Each cluster receives multiple currents and treats them with linear filtering. The resulting point neuron model is therefore not only among the most biologically accurate available, but also much faster than its detailed counterpart, which is crucial for large-scale simulations.


Simplifying neuron models is a way to extract the core dynamics of neurons and simulate only what is needed. It is also an important indicator of the information that gets lost in the process. It will therefore be a required step towards simulating the whole mouse brain, and we will use these models in our closed-loop simulation with the rodent body.

[1] Pozzorini, C., Mensi, S., Hagens, O., Naud, R., Koch, C., & Gerstner, W. (2015). Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models. PLOS Computational Biology, 11(6).

[2] Rössert, C., Pozzorini, C., Chindemi, G., Davison, A. P., Eroe, C., King, J., … Muller, E. (2016). Automated point-neuron simplification of data-driven microcircuit models.

OpenSim support in the Neurorobotics platform

A key area of research on the Neurorobotics Platform (NRP) is the in silico study of sensorimotor skills and locomotion of biological systems. To simulate the physical environment and system embodiments, the NRP uses the Gazebo robotics simulator.

To perform biologically meaningful experiments, however, Gazebo has until now lacked an important feature: the ability to model and simulate musculoskeletal kinematics.

Therefore, researchers had to rely on ad-hoc implementations calculating effective joint torques for the system at hand, which is time-consuming, error-prone and cumbersome.

The physics plugin we implemented provides OpenSim as an additional physics engine alongside those already supported by Gazebo (ODE, Bullet, SimBody and DART). OpenSim uses SimBody as its underlying framework and thus features a stable and accurate mechanical simulation. The OpenSim plugin supports many of SimBody's kinematic constraint types and implements collision detection for sphere, plane and triangle mesh shapes, along with the corresponding contact forces (as exposed by OpenSim's API).

However, first and foremost, it treats physiological muscle models as first-class citizens alongside rigid bodies and kinematic joints. OpenSim ships with a number of predefined muscle-tendon actuators. Currently, users of our plugin can use OpenSim's native XML configuration format (.osim) to specify the structure and properties of muscle-tendon systems, which are created on top of Gazebo models specified in Gazebo's own file format (SDF).

A ROS-based messaging interface provides accessors for excitations and other biophysical parameters, allowing musculoskeletal systems to be controlled from external applications such as the Neurorobotics Platform.

As a demonstration of the capabilities of our physics plugin, we augmented a simple four-legged walker with a set of eight muscles (one synergist-antagonist pair per leg).

The problem we address in this demo is the reinforcement learning task of deriving a controller that excites the muscles in a pattern that drives the walker forward. Our setup consists of a Python application (remote-controlling Gazebo via the ROS-based messaging interface of the OpenSim plugin) that performs the high-level optimization procedure and runs a neural network (NN) controller.

We employ a simple genetic optimization procedure based on Python’s DEAP package to find parameters of the NN that maximize the score the walker obtains in individual trial runs.

The walker is rewarded for moving forward and penalized for unwanted motion behaviour (e. g. ground contacts of the walker’s body, moving off-center).

During a trial run, the physics simulation is stepped in small time increments, and at each iteration the NN is fed with various state variables. The NN's output comprises the excitation levels for the muscles. For simplicity, we stuck to well-known artificial neural networks, implemented via the TensorFlow package.
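The overall optimization loop can be sketched as follows. This stdlib-only sketch replaces DEAP and TensorFlow with minimal hand-rolled equivalents (a tiny fixed-topology network and a bare genetic loop using truncation selection with elitism and Gaussian mutation), and a toy fitness function stands in for a simulated trial run.

```python
import math
import random

def nn_forward(weights, state, n_hidden=4, n_out=8):
    """Tiny fixed-topology feedforward net mapping the state vector to
    8 muscle excitations in [0, 1]; the genome is the flat weight
    vector. (The real controller is a TensorFlow network.)"""
    n_in = len(state)
    w1 = weights[:n_in * n_hidden]
    w2 = weights[n_in * n_hidden:]
    hidden = [math.tanh(sum(state[i] * w1[h * n_in + i]
                            for i in range(n_in)))
              for h in range(n_hidden)]
    return [1 / (1 + math.exp(-sum(hidden[h] * w2[o * n_hidden + h]
                                   for h in range(n_hidden))))
            for o in range(n_out)]

def evolve_controller(fitness, genome_len, pop=30, gens=20,
                      mut_sigma=0.3, seed=1):
    """Bare-bones genetic loop (the real setup uses DEAP): keep the
    better half each generation, refill with mutated copies."""
    rng = random.Random(seed)
    population = [[rng.gauss(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:pop // 2]
        population = parents + [
            [g + rng.gauss(0, mut_sigma) for g in rng.choice(parents)]
            for _ in range(pop - len(parents))]
    return max(population, key=fitness)

# Toy fitness (stand-in for a simulated trial run): reward a strongly
# excited first muscle and a relaxed second one for a fixed state.
state = [0.5, -0.2, 0.1]
def toy_fitness(w):
    out = nn_forward(w, state)
    return out[0] - out[1]

best = evolve_controller(toy_fitness, genome_len=3 * 4 + 8 * 4)
```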

We also experimented with fully dynamic grasping simulation using SimBody's collision detection system and contact force implementations. Although the simulation setup for the grasping tests comprised only a simple two-jaw gripper and a cube (represented as a triangle mesh), the SimBody engine as used in our plugin was able to maintain a stable grasp using fully dynamic contact forces, tackling a problem that is notoriously difficult to solve with other physics engines.

Another application using the OpenSim plugin for Gazebo features a simplified muscle model of a mouse's foreleg actuated by a neuronal controller modelled according to the spinal cord of a real mouse. The details of this experimental setup will be covered in a separate blog post.

The OpenSim plugin does not support all of the features implemented with other engines in Gazebo. For instance, some joint types are not implemented yet. Also, some features unique to OpenSim (like inverse dynamics simulation) are not yet available in the current implementation.

To simplify the design of kinematic models with muscle systems and custom actuator models, we plan to provide researchers and users of the NRP with a consistent, simple way to specify muscles via a graphical interface in the NRP's Robot Designer application.

A one-day workshop during the last Performance Show in Ghent

Last week, we had the chance to organize the first edition of an SP10 Performance Show in the city of Ghent, Belgium. This two-day meeting between all the partners involved in the HBP Neurorobotics subproject (SP10) was an opportunity to discuss the latest progress of each research group and to ensure a convergence of views and efforts for upcoming events, research and development.


SP10 Performance Show September 2017
A discussion during the SP10 Performance Show


On the second day, we split our work into two tracks. Whereas the Main Track dealt with administrative and research activities, the Secondary Track was organized as a workshop on the theme Thinking the NRP of the Future. It was formatted as a short one-day hackathon, in which everyone started by summarizing one or several iconic research advances made in their field during the last year, which helped us form four work teams:

  • Reinforcement Learning with the NRP
  • Integrating worm brains and soft bodies in the NRP
  • Real-time interaction between real and simulated robots in the NRP
  • Supporting research on visuomotor learning in children using simulations in the NRP


On Tuesday, a work group is brainstorming about integrating worms in the NRP


Each of these teams brainstormed to imagine and design an experiment that could help research move forward, together with a list of the developments required to achieve it. After lunch, the results of this brainstorm were presented to everyone for feedback and comments before we started designing a first prototype in the NRP and coding some useful models that we will need in further work. To be continued…