Real Robot Control with the Neurorobotics Platform


Thanks to its architecture, the NRP is well suited to directly controlling a real robotic platform with spiking neural networks: both the closed-loop mechanism of the NRP software and the use of ROS as middleware enable developments in this direction.

A first motivation for such a project is to offload the heavy computational load of simulating spiking neural networks from embedded hardware to a fixed server, which can itself interface with neuromorphic hardware such as SpiNNaker if required. This helps to reach real-time performance on small, low-energy robotic platforms where neuronal computation would otherwise have been impossible. A second motivation is the possibility of partially training the neural network in the NRP, to avoid mechanical and electrical wear of the physical robot. This, however, requires that the neural control remain transferable from simulation to the real robot after training; this challenging field is better known as transfer learning and requires a minimum level of accuracy in the simulation models and equations.


Our work focuses on real-time locomotion of a compliant quadruped robot using CPGs. To outsource the controller to the NRP as discussed above, we have designed both a robot and its 3D clone in simulation. In this setup, both have four actuators (one for each “body-to-leg” joint) and four sensors (one for each unactuated “knee” joint). The motor position follows a simple open-loop CPG signal with the same amplitude and phase for each leg, such that the robot periodically alternates between standing up and sitting down for fifty seconds. During this experiment, the sensor values are merely recorded, not used to regulate the control signal. Given the structure of the kinematic chain, with springs and dampers in the knee joints, the system can be explicitly described by a Lagrangian equation. The latter is a function of the physical parameters of the robot, which can only be evaluated with a large range of uncertainty since we work with laser-cut or 3D-printed parts assembled with a non-quantified amount of slack. However, the parameters of the simulation model can be set roughly and then optimized to maximize the similarity between the sensor signals output by the hardware and by the physics engine. We use CMA-ES for this optimization.
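As a minimal sketch (the amplitude, period, and cost function below are illustrative assumptions, not the actual experimental values), the open-loop CPG command and the similarity objective that CMA-ES minimizes can be written as:

```python
import math

def cpg_position(t, amplitude=0.4, period=2.0, offset=0.0):
    """Open-loop CPG target position, identical amplitude and phase
    for all four legs (parameter values are illustrative)."""
    return offset + amplitude * math.sin(2.0 * math.pi * t / period)

def similarity_cost(real_trace, sim_trace):
    """Mean squared error between the hardware and simulated knee-sensor
    traces; CMA-ES would minimize this over the model parameters."""
    assert len(real_trace) == len(sim_trace)
    return sum((r - s) ** 2 for r, s in zip(real_trace, sim_trace)) / len(real_trace)
```

In practice, each CMA-ES candidate sets the simulated spring, damper, and mass parameters, replays the same CPG command, and is scored with such a cost over the recorded sensor traces.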



After optimization, we can proceed to a qualitative visual validation using different controllers. To this end, the NRP is installed locally on a machine connected to the same network as the robot. ROS is configured on the robot and in the NRP to enable streaming of actuation and sensing topics between the different machines. Proper calibration of the robot's sensors and actuators is also needed to obtain meaningful results. An NRP experiment embedding the newly optimized robot model is created and used to pilot both the robot and the simulated physics. So far, the process has given encouraging results regarding the accuracy and reliability of the simulation, but further tuning is still necessary. An illustration is presented in the following animated figure. The small delay observed between the image on the screen and the robot may be due to the NRP visual rendering in the browser or to the motor PID values.


The aim of pre-training is to exclude actuation patterns that lead to instability of the physical robot (stumbling, falling) and to tune the controller into a good regime. Interestingly, our first experiments indicate a good correlation between failures in the simulator and failures observed on the real robot.


Structuring your local NRP experiment – some tips


Within the context of CDP4, we created an NRP experiment showcasing some functional models from SP1/4:

  • A trained deep network to compute bottom-up saliency
  • A saccade generation model

Since these models are generic, we want to package them so that they can easily be reused in other experiments, such as the WP10.2 strategic experiment. In this post, we briefly explain the structure of the CDP4 experiment and how modularity is achieved.

We decided to implement the functional modules from SP1/SP4 as ROS packages. Therefore, these modules can be used within the NRP (in the GazeboRosPackages folder), but also independently without the NRP, in any other catkin workspace. This has the advantage that the saliency model can be fed webcam images, and easily mounted on a real robot.

The main difference compared to implementing them as transfer functions is synchronicity. When the user runs the saliency model on a CPU, processing a single camera image takes around 3 seconds. If the saliency model were implemented as a transfer function, the simulation would pause until the saliency output is ready. This causes the experiment to run slower but preserves reproducibility. Implemented as a ROS node, on the other hand, the simulation does not wait for the saliency network to process an image, so the simulation runs faster.
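The non-blocking behaviour can be illustrated with a self-contained sketch, using plain Python threads to stand in for ROS publish/subscribe (the class and method names are our own, not the NRP or rospy API):

```python
import threading
import queue

class AsyncSaliency:
    """ROS-node-style asynchronous model: the caller never blocks on the
    (slow) saliency computation; it just reads the most recent output."""
    def __init__(self, compute):
        self._compute = compute           # slow model, e.g. ~3 s per image
        self._in = queue.Queue(maxsize=1) # at most one pending frame
        self.latest = None                # last saliency map produced
        self._t = threading.Thread(target=self._loop, daemon=True)
        self._t.start()

    def _loop(self):
        while True:
            img = self._in.get()
            if img is None:               # sentinel: shut down
                break
            self.latest = self._compute(img)

    def submit(self, img):
        # Drop the frame if the model is still busy (non-blocking).
        try:
            self._in.put_nowait(img)
        except queue.Full:
            pass

    def stop(self):
        self._in.put(None)
        self._t.join()
```

The simulation keeps stepping and simply reads `latest` whenever a transfer function needs a saliency map; frames arriving while the model is busy are dropped, much as old messages are superseded on a ROS topic.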

The saliency model is a pre-trained deep network running on TensorFlow. The weights and topology of the network are saved in data files that are loaded during execution. Since these files are large and not worth version-controlling, we uploaded them to our OwnCloud, from where they are automatically downloaded by the saliency model if not present locally. This also makes it simple for our collaborators in SP1/4 to provide us with new pre-trained weights and topologies.
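The download-if-missing logic follows a common pattern, sketched below with Python's standard library (the path and URL are placeholders, not our actual OwnCloud location):

```python
import os
import urllib.request

def ensure_model_files(path, url):
    """Fetch the pre-trained weights/topology only when they are not
    already present locally; otherwise return the cached copy."""
    if os.path.exists(path):
        return path
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    urllib.request.urlretrieve(url, path)
    return path
```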

The CDP4 experiment itself has its own repository and is very lean, as it relies on these reusable modules. Additionally, an install script is provided to download the required modules into the GazeboRosPackages folder.

The topic of installing TensorFlow or other Python libraries required by the CDP4 experiment, so that they do not collide with other experiment-specific libraries, will be covered in another blog post.


Implementing cerebellar learning rules for NEST simulator

The cerebellum is a relatively small center in the nervous system, yet it accounts for around half of the existing neurons. As we previously documented, researchers from the University of Granada are taking advantage of the Neurorobotics Platform (NRP) to show how cerebellar plasticity may contribute to vestibulo-ocular reflex (VOR) adaptation.

Implementing neurorobotic experiments often requires multidisciplinary efforts, such as:

  1. Establishing a neuroscience-relevant working hypothesis.
  2. Implementing an avatar or robot simulator to perform the task.
  3. Developing the brain model with the indicated level of detail.
  4. Transforming brain activity (spikes) into signals that can be used by the robot, and vice versa.

The NRP provides useful tools to facilitate most of these steps. However, the definition of complex brain models might require the implementation of new neuron and synapse models for the brain simulation platform (NEST, in our particular case). The cerebellar models that we are including involve plasticity at two different synaptic sites: the parallel fibers (PF) and the mossy fibers (MF, targeting the vestibular nuclei neurons).


Without going deeper into the equations (see the reference below for further details), each parallel fiber synapse will be depressed (LTD) when a presynaptic spike occurs close in time to a complex spike of the target Purkinje cell (PC, see figure). Similarly, the plasticity at the mossy fiber/vestibular nuclei (VN) synapses will be driven by the inhibitory activity coming from the Purkinje neurons.
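The intuition behind the PF-PC rule can be sketched as follows (the exponential kernel and the constants are illustrative assumptions, not the published rule; see [1] for the actual equations):

```python
import math

def pf_ltd(weight, t_pre, t_complex_spike, tau=0.050, a_ltd=0.01, w_min=0.0):
    """Parallel fiber LTD sketch: the closer in time a presynaptic PF
    spike is to the Purkinje complex spike, the stronger the depression.
    Times in seconds; the weight is clipped at w_min."""
    dt = abs(t_pre - t_complex_spike)
    return max(w_min, weight - a_ltd * math.exp(-dt / tau))
```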

These learning rules were previously implemented for the EDLUT simulator and used for complex manipulation tasks in [1]. The neuron and synapse models have been released on GitHub and also as part of the NRP source code. This work, carried out in the framework of the HBP, will allow researchers to demonstrate the role that plasticity at the parallel fibers and mossy fibers plays in vestibulo-ocular reflex movements.

[1] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in computational neuroscience, 10.

Using the NRP, a saliency computation model drives visual segmentation in the Laminart model

Recently, a cortical model for visual grouping and segmentation (the Laminart model) has been integrated into the NRP. From there, the goal was to build a whole visual system on the NRP, connecting many models for different functions of vision (retinal processing, saliency computation, saccade generation, predictive coding, …) in a single virtual experiment, with the Laminart as a model for early visual processing. While this process is still ongoing (see here), some scientifically relevant progress has already arisen from the first stages of this implementation, which we explain below.

The Laminart is one of the only models able to satisfactorily explain how crowding occurs in the visual system. Crowding is a visual phenomenon in which the perception of a target deteriorates in the presence of nearby elements. Crowding occurs in real life (for example when driving in the street, see fig. 1a) and is widely studied in many psychophysical experiments (see fig. 1b). It happens ubiquitously in the visual system and must thus be accounted for by any complete model of vision.

While crowding was long believed to be driven by local interactions in the brain (e.g. decremental feed-forward pooling of different receptive fields along the hierarchy of the visual system, jumbling the target’s visual features with those of nearby elements), it recently appeared that adding remote contextual elements can still modulate crowding (see fig. 1c). The entire visual configuration can determine what happens at the very tiny scale of the target!

Fig. 1: a) Crowding in real life. If you look at the bull’s eye, the kid on the right is easily identifiable. However, the one on the left is harder to identify, because the nearby elements have similar features (yellow color, human shape). b) Crowding in psychophysical experiments. Top: the goal is to identify the letter in the center while looking at the fixation cross. Neighbouring letters make the task more difficult, especially if they are very close to the target. Center and bottom: the goal here is to identify the offset of the target (small tilted lines). Again, the neighbouring lines make the task more difficult. c) The task is the same as before (the visual stimuli on the x-axis are presented in the periphery of the visual field and observers must report the offset of the target), but this time flanking squares are added. The y-axis shows the target offset at which observers give 75% correct answers (low values indicate good performance). When the target is alone (dashed line), performance is very good. When only one square flanks the target, performance decreases dramatically. However, as more squares are added, the task becomes easier and easier.

To account for this exciting phenomenon (named uncrowding), Francis et al. (2017) proposed a model that parses the visual stimulus into several groups, using low-level cortical dynamics (arising from a biologically plausible, laminarly structured network of spiking neurons with fixed connectivity). Crucially, the Laminart is a 2-stage model in which the input image is segmented into different groups before any decremental interaction can happen between the target and nearby elements. In other words: how elements are grouped in the visual field determines how crowding occurs, making the latter a simple and behaviourally measurable phenomenon that unambiguously describes a central feature of human vision (grouping). In fig. 1c (right), the 7 squares form a group that frames the target instead of interfering with it, hence enhancing performance. In the Laminart model, the 7 squares are grouped together by illusory contours and segmented out, leaving privileged access to the target left alone. However, in order to work, the Laminart model needs to start the segmentation spreading process somewhere (see fig. 2).

Fig. 2: Dynamics of layer 2/3 of area V2 of the Laminart model, for two different stimuli. The red/green lines correspond to the activity of the neurons that detect a vertical/horizontal contrast contour. The three columns for each stimulus correspond to three segmentation layers, where the visual stimulus is parsed into different groups. The blue circles are spreading signals that start the segmentation process (one for each segmentation layer that is not SL0). Left: the flanker is close to the target, so it is hard for the spreading signals to segment the flanking square from the target. Right: the flankers extend further and are linked by illusory contours, so it is easier for the signals to segment them from the target. Thus, this condition produces less crowding than the other.

Up to now, the model relied on ad-hoc top-down signals, lacking an explicit process to generate them. Here, using the NRP, we could easily connect it to a model for saliency computation that had just been integrated into the platform. Feeding the Laminart, the saliency computation model delivers its output as a bottom-up influence on where segmentation signals should arise. On the NRP, we created the stimuli appearing in the experimental results shown in fig. 1c and presented them to the iCub robot. In this experiment, each time a segmentation signal is sent, its location is sampled from the saliency map, linking both models in an elegant manner. Notably, when only 1 square flanks the target, the saliency map is merely a big blob around the target, whereas when 7 squares flank the target, the saliency map is peakier around the outer squares (see fig. 3). Consequently, the more squares there are, the more probable it is that the segmentation signals succeed in creating 2 groups from the flankers and the target, releasing the target from crowding. This fits very well with the results of figure 1c. The next step for this project is to reproduce the results quantitatively on the NRP.
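The sampling step linking the two models can be sketched like this (an illustrative helper, not the actual NRP transfer function code):

```python
import random

def sample_signal_location(saliency_map, rng=random):
    """Draw the (row, col) at which a segmentation signal starts, with
    probability proportional to the saliency value at that location."""
    cells = [(r, c) for r in range(len(saliency_map))
                    for c in range(len(saliency_map[r]))]
    weights = [saliency_map[r][c] for r, c in cells]
    return rng.choices(cells, weights=weights, k=1)[0]
```

With a flat saliency map the signals land anywhere; with a map peaked on the outer squares, the signals preferentially start there, which is what produces uncrowding.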


Fig. 3: Coexistence of the Laminart network and the saliency network. Top: crowded condition. Bottom: uncrowded condition. In both situations, the saliency computation model drives the location of the segmentation signals in the Laminart model and explains very well how crowding and uncrowding can occur. The windows on the left display the saliency model. The ones on the right display the output of the Laminart model (top: V2 contrast border activity; bottom: V4 surface activity).

To sum up, by building a visual system on the NRP, we could easily connect a saliency computation model to our Laminart model. This connection greatly enhances the latter, giving it the opportunity to explain very well how uncrowding occurs in human vision, as well as the low-level mechanisms of visual grouping. In the future, we will run psychophysical experiments in our lab, where it is possible to disentangle top-down from bottom-up influences on uncrowding, to see whether a strong influence of saliency computation on visual grouping makes sense.

Fable robot simulator

Fable is a 2 DoF modular robot arm that is being used by the group at DTU to work on the task of “Self-Adaptation in Modular Robotics”.

Thanks to the modularity provided by Fable, it is feasible to combine several modules to create different robotic configurations of increasing complexity. In this way, one can work on manipulation tasks as well as on locomotion tasks simply by plugging a few modules together to form an arm, a worm, a spider, …

To make the Fable robot as accessible as possible to the community, here at DTU we have been working on the implementation of the Fable v2.0 simulator.

We have created 3 different configurations:

A simple robotic arm, 2 DoF (1 Fable module)


A worm-like robot, 4 DoF (2 Fable modules)


A quadruped-like robot, 8 DoF (4 Fable modules)


This robot model has not been included in the NRP yet, but it will soon be available to users. We will keep you updated.




Sensory driven hind-limb mouse locomotion model

In a paper on hind-limb locomotion of the cat in simulation [1], the authors studied the importance of two main sensory feedback pathways for swing-stance phase switching, and which of these feedbacks matters more for stable locomotion. In this preliminary work, we set up similar rules to produce locomotion in the mouse model developed in the Neurorobotics Platform (NRP). This work will be used to study the role of sensory feedback in locomotion and its integration with feed-forward components such as Central Pattern Generators (CPGs).

Bio-mechanical model :
We use the Neurorobotics Platform (NRP) to develop the simulation model and its environment. The rigid body model of the mouse available in the NRP was obtained from a high-resolution 3D scan of a real mouse. Relationships between the segments are established via joints. For the purpose of this experiment, only the hind limbs are actuated; the current model thus has eight actuated joints in total, four in each hind limb. Muscles are modeled as Hill-type muscles with passive and active dynamics. Muscle morphometry and related parameters were obtained from [2]. Each actuated joint is driven by at least one pair of antagonistic muscles, and some joints also have bi-articular muscles. In total, the model consists of sixteen muscles. Proprioceptive feedback from the muscles and rigid bodies, together with tactile information, closes the loop between the different components of locomotion.
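A minimal Hill-type muscle sketch, with a Gaussian active force-length curve and an exponential passive element (the curve shapes and constants are illustrative, not the parameters taken from [2]):

```python
import math

def hill_active_force(activation, l_norm, f_max=1.0, width=0.45):
    """Active force: activation-scaled Gaussian force-length curve,
    peaking at the optimal (normalized) fiber length l_norm = 1."""
    return activation * f_max * math.exp(-((l_norm - 1.0) / width) ** 2)

def hill_passive_force(l_norm, k=0.1):
    """Passive force: engages exponentially beyond the optimal length."""
    return k * (math.exp(5.0 * (l_norm - 1.0)) - 1.0) if l_norm > 1.0 else 0.0
```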


Reflex controller :
The idea here is to break hind-limb locomotion into four phases, namely (i) swing, (ii) touch-down, (iii) stance, and (iv) lift-off. Proprioceptive feedback and joint angles dictate the reflex conditions under which the controller transitions from one phase to another. The figure shows the four phases and their sequence of transitions. For the hind limbs to change from one phase to another, we optimize the muscle activation patterns as a function of proprioceptive feedback and joint angle. This ensures a smooth transition from one phase to another when the necessary condition is met.
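The resulting phase machine can be sketched as follows (the thresholds and feedback variable names are placeholders, not the optimized values from the experiment):

```python
# Sketch of the four-phase reflex cycle; thresholds and feedback names
# are illustrative placeholders.
PHASES = ("swing", "touch-down", "stance", "lift-off")

def next_phase(phase, hip_angle, ground_contact, load):
    """Transition the hind-limb controller between phases when a
    reflex condition is met, otherwise stay in the current phase."""
    if phase == "swing" and hip_angle > 0.3:        # leg protracted enough
        return "touch-down"
    if phase == "touch-down" and ground_contact:    # foot hits the ground
        return "stance"
    if phase == "stance" and load < 0.1:            # leg unloaded
        return "lift-off"
    if phase == "lift-off" and not ground_contact:  # foot left the ground
        return "swing"
    return phase
```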

Discussions :
With the bio-mechanical model of the mouse in the NRP and the reflex control law, we are able to reproduce stable hind-limb gait patterns that are purely sensory-driven. The next steps to be taken in the experiment are:

  1. Convert reflex laws into neuron-based reflex loops
  2. Extend the reflex model for quadruped locomotion
  3. Add a CPG layer to interface with the reflex loops

References :

  1. O. Ekeberg and K. Pearson, “Computer simulation of stepping in the hind legs of the cat: an examination of mechanisms regulating the stance-to-swing transition.” Journal of neurophysiology, vol. 94, no. 6, pp. 4256–68, dec 2005.
  2. J. P. Charles, O. Cappellari, A. J. Spence, J. R. Hutchinson, and D. J. Wells, “Musculoskeletal geometry, muscle architecture and functional specialisations of the mouse hindlimb,” PLoS ONE, vol. 11, no. 4, pp. 1–21, 2016.

Neurorobotics Platform (NRP) User Workshop

The workshop introducing the Neurorobotics Platform (NRP) was held at SSSA with the participation of M.Sc. and Ph.D. students. During the workshop, two instructors from the development and research teams provided introductory information on the Human Brain Project and, specifically, on the SP-10 Neurorobotics Platform, including the open source technologies used in the NRP (e.g., ROS and Gazebo), development cycles, and the graphical user interface for first-time users. After the introduction, the users installed the NRP for a hands-on session, either by following the instructions from the HBP Neurorobotics repository or via bootable flash disks.

The users followed the instructions from tutorial_baseball_exercise to create an experiment as a first demo and to become familiar with NRP concepts such as transfer functions, the Brain-Body interface, and the closed-loop engine, to mention a few. This session ended with the participants successfully solving the tutorial requirements with the assistance of the instructors. In the last part of the workshop, the participants discussed integrating their own ongoing projects into the NRP. One of the participants expressed his ideas on integrating a cerebellar model into the NRP:

My objective is to study the computational characteristics of the cerebellum, responsible for precise motor control in biological agents. Currently, a rate-based model of the cerebellum has been implemented to produce accurate saccades in a primate-type oculomotor system. My plan is to convert this model into a fully spike-based cerebellar model in the NEST simulator and to apply this control model to the iCub in Gazebo. The NRP is definitely poised to provide me with this functionality.

Another participant expressed his plan to integrate a continuum robot, I-SUPPORT, into the NRP:

My ongoing work with the NRP is to create an I-SUPPORT robot model using an OpenSim muscle model to simulate the behavior of the McKibben actuators present in the robotic arm.

The last project idea:

Experiments on invariant object recognition and multi-modal object representation, integrating Hierarchical Temporal Memory, many-layered (deep) networks, and spiking neural networks into the NRP.

The workshop closed with the evaluation of each session and discussions on the requirements for the proposed projects.

Posted by: Murat Kirtay (SSSA)