Category: bio-inspired

Real Robot Control with the Neurorobotics Platform


Thanks to its architecture, the NRP is well suited to directly controlling a real robotic platform with spiking neural networks. Both the closed-loop mechanism of the NRP software and its use of ROS as middleware enable developments in this direction.

A first motivation for such a project is to outsource the heavy computational load of simulating spiking neural networks from embedded hardware to a fixed server, which can itself interface with neuromorphic hardware like SpiNNaker if required. This makes it possible to reach real-time performance on small, low-energy robotic platforms where neuronal computation would otherwise be impossible. A second motivation is the possibility to partially train the neural network in the NRP, avoiding mechanical and electrical wear of the physical robot. This, however, requires that the neural control transfer from simulation to the real robot after training; this challenging field is better known as transfer learning and requires a minimum level of accuracy in the simulation models and equations.


Our work focuses on real-time locomotion of a compliant quadruped robot using CPGs. To outsource the controller to the NRP as discussed above, we have designed both a robot and its 3D clone in simulation. In this setup, both have four actuators (one for each “body-to-leg” joint) and four sensors (one for each unactuated “knee” joint). The motor position follows a simple open-loop CPG signal with the same amplitude and phase for each leg, so that the robot periodically alternates between standing up and sitting down for fifty seconds. During this experiment, the sensor values are merely recorded, not used to regulate the control signal. Given the structure of the kinematic chain, with springs and dampers in the knee joints, the system can be explicitly described with a Lagrangian equation. The latter is a function of the physical parameters of the robot, which can only be evaluated with a large range of uncertainty since we work with laser-cut or 3D-printed parts assembled with a non-quantified amount of slack. However, the parameters of the simulation model can be set roughly and then optimized to maximize the similarity between the sensor signals output by the hardware and by the physics engine. We use CMA-ES for this optimization.
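The open-loop command and the similarity cost driving the optimization can be sketched as follows. This is a minimal illustration, not the actual Tigrillo code: the signal parameters and sampling rate are assumptions, and the simulated trace here is a stand-in for the physics-engine output. In practice, a cost of this shape is handed to a CMA-ES implementation such as the `cma` Python package.

```python
import numpy as np

def cpg_command(t, amplitude, frequency, offset):
    """Open-loop CPG position command, identical in amplitude and
    phase for all four body-to-leg joints."""
    return offset + amplitude * np.sin(2.0 * np.pi * frequency * t)

def similarity_cost(params, t, knee_hw):
    """Mean squared error between the knee-sensor trace recorded on the
    hardware and the trace produced with the candidate parameters.
    Here the candidate trace is generated analytically as a stand-in;
    in the real setup it comes from the physics engine."""
    amplitude, frequency, offset = params
    knee_sim = cpg_command(t, amplitude, frequency, offset)
    return float(np.mean((knee_sim - knee_hw) ** 2))

# 50 s experiment, sampled at 100 Hz (illustrative values)
t = np.linspace(0.0, 50.0, 5001)
recorded = cpg_command(t, amplitude=0.4, frequency=0.5, offset=0.1)

print(similarity_cost([0.4, 0.5, 0.1], t, recorded))  # 0 for a perfect match
print(similarity_cost([0.3, 0.5, 0.1], t, recorded))  # > 0 for a mismatch
```

CMA-ES then iteratively samples candidate parameter vectors, evaluates this cost for each, and shifts its sampling distribution toward the best candidates.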



After optimization, we can proceed to a qualitative visual validation using different controllers. To this end, the NRP is installed locally on a machine connected to the same network as the robot. ROS is configured on the robot and in the NRP to enable streaming of actuation and sensing topics between the different machines. A proper calibration of the robot sensors and actuators is also needed to provide meaningful results. An NRP experiment embedding the newly optimized robot model is created and used to pilot both the robot and the simulated physics. So far, the process has given encouraging results regarding the accuracy and reliability of the simulation, but further tuning is still necessary. An illustration is presented in the following animated figure. The small delay observed between the image on the screen and the robot may be due to the NRP visual rendering in the browser or to the motor PID values.
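The sensor calibration step can be illustrated with a minimal two-point linear mapping from raw readings to joint angles. This is a sketch only: the ADC readings and reference angles below are hypothetical, not the actual Tigrillo calibration values.

```python
def make_calibration(raw_a, angle_a, raw_b, angle_b):
    """Return a function mapping a raw sensor reading to a joint angle
    (in radians), built from two reference poses measured by hand
    during calibration."""
    scale = (angle_b - angle_a) / (raw_b - raw_a)
    return lambda raw: angle_a + scale * (raw - raw_a)

# Hypothetical reference poses: ADC reading 512 -> 0.0 rad, 812 -> 1.2 rad
knee_angle = make_calibration(512, 0.0, 812, 1.2)

print(knee_angle(512))  # 0.0 (first reference pose)
print(knee_angle(662))  # 0.6 (mid-range reading)
```

The same mapping, inverted, is used on the actuation side so that position commands expressed in radians can be sent to the servos.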


The aim of pre-training is to exclude actuation patterns that lead to instability of the physical robot (stumbling, falling) and to tune the controller into a good regime. Interestingly, our first experiments indicate a good correlation between failures in the simulator and failures observed on the real robot.



CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

In early 2017, we had a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration in CDP4 – visuo-motor integration. During this presentation, we also showed how to integrate arbitrary deep learning models in the Neurorobotics Platform, as previously presented at the Young Researcher Event by Kenny Sharma.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.


A quadruped robot with traditional computation hardware as a step for a SpiNNaker version

In this post, we describe the components and the architecture of the Tigrillo robot, a compliant quadruped platform controlled with a Raspberry Pi, designed to support early research on CPGs and transfer learning. To situate the technical description that follows in a scientific context, it may be useful to explain the research methodology that is used:

  1. Optimisation of a parametric CPG controller using the NRP and the VirtualCoach
  2. Transfer and validation on the Tigrillo quadruped platform
  3. Collection and analysis of sensor feedback on the robot and in the NRP to design and improve a robust closed-loop system
  4. Implementation of the CPGs using NEST on the NRP
  5. Transfer and validation on our quadruped robot embedding SpiNNaker hardware
  6. Comparison between the simulations and the real platforms, and extraction of knowledge to iterate on step 1.

The Tigrillo robot enables step 2 by providing a robot to validate the accuracy and general behavior of the NRP simulations.

Mechanical details:

The design process of the Tigrillo platform has been guided by three main features: compliance, low cost, and versatility. Compliance is a key element in this research, as it is believed to add efficiency and robustness to locomotion, as observed in biology. However, it also challenges classical control techniques, since the dynamics of the robot are now governed by equations of higher complexity. On the current platform, compliance is mainly ensured by using springs in the knee joints instead of actuating them.
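The passive knee can be thought of as a spring-damper joint, which is also how such a joint is typically parameterized in the simulation model. The sketch below uses illustrative gains, not the identified Tigrillo parameters.

```python
def knee_torque(theta, theta_dot, k=0.8, d=0.02, theta_rest=0.0):
    """Passive torque of an unactuated knee joint: a linear spring
    pulling the joint back to its rest angle, plus viscous damping.
    k (N*m/rad), d (N*m*s/rad) and theta_rest are illustrative values."""
    return -k * (theta - theta_rest) - d * theta_dot

print(knee_torque(0.5, 0.0))  # spring term only: -0.4
print(knee_torque(0.0, 1.0))  # damping term only: -0.02
```

The stiffness and damping of this model are precisely the kind of uncertain physical parameters that the CMA-ES optimization described above is meant to identify.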

Electrical and Software architecture:

  • Sensors and actuators: 4 Dynamixel RX-24F servomotors, an IMU (Inertial Measurement Unit), and various force and flexion sensors in the feet and legs
  • Power supply: a DC step-up voltage converter connected to a 3-cell LiPo battery to supply the boards and motors with a regulated voltage and a stall current that can rise to 10 A when the legs push together and the motors have to deliver a high torque
  • Control board: an OpenCM board (based on an Atmel ARM Cortex-M3 microprocessor) that reads the analog sensor values at a constant frequency and sends the position or velocity commands to the servomotors using the protocol standard defined by Dynamixel
  • Computation board: a Raspberry Pi running Ubuntu MATE 16.04 that implements a CPG controller in the same Python software stack as the one used in the NRP, making it easy to switch from simulation to trials and validation in the real world
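A controller shared between the NRP and the Raspberry Pi can be as simple as a phase oscillator stepped at the control frequency. The class below is a minimal sketch under that assumption; the class name, rates and gait parameters are illustrative, not the actual Tigrillo code.

```python
import math

class OpenLoopCPG:
    """Minimal open-loop CPG: a single phase oscillator whose output is
    sent as a position command to the four body-to-leg servos. The same
    class can be stepped by an NRP transfer function in simulation or by
    the control loop on the Raspberry Pi."""

    def __init__(self, frequency=0.5, amplitude=0.4, offset=0.1, dt=0.02):
        self.frequency = frequency  # gait frequency in Hz
        self.amplitude = amplitude  # command amplitude in rad
        self.offset = offset        # command offset in rad
        self.dt = dt                # 50 Hz control loop
        self.phase = 0.0

    def step(self):
        """Advance the oscillator by one control period and return the
        position command for the four legs (same amplitude and phase)."""
        self.phase = (self.phase
                      + 2.0 * math.pi * self.frequency * self.dt) % (2.0 * math.pi)
        command = self.offset + self.amplitude * math.sin(self.phase)
        return [command] * 4

cpg = OpenLoopCPG()
print(cpg.step())  # one position command per body-to-leg servo
```

On the robot, each returned command would then be forwarded by the OpenCM board to the servos over the Dynamixel protocol.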


The software repository also includes board documentation on top of the Python code used for control and simulation.

Simulating tendon driven robots

According to the concept of embodiment, a brain needs to be connected to a body interacting with the world for biological learning to happen; developing biomimetic robots is therefore crucial to fully understanding human intelligence. Here, a tendon-driven approach can model muscle behavior in terms of flexibility, compliance and contraction force.

While this concept is clearly beneficial for research, it is very difficult to model accurately in simulation. In contrast to classical robots with motors applying torques in the joints, the simulation needs to apply forces along wrapped ropes mimicking tendons and muscles. The artificial muscles developed in the Myorobotics [1] project include mechanical parts for flexibility and force as well as electrical control in different operating modes, as seen in Figure 1. To close the reality gap, all physical properties need to be considered in the model.


Figure 1: Myorobotics muscle unit (from [2])

We implemented a plugin for Gazebo that finally allows us to simulate the Myorobotics muscle setup. The plugin models tendon kinematics as well as the mechanical and electrical properties of the technical actuator. The forces derived from control commands can now be applied directly to a robot simulated in Gazebo. This brings the setup one step closer to being integrated into the NRP, allowing us to equip arbitrary robot morphologies with muscle units.
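The core of such tendon modelling can be sketched as follows: the tendon length is accumulated along its via-points, and a series-elastic force is applied along the rope, clamped at zero because a tendon can only pull. This is a Python sketch of the idea, not the plugin's actual C++ implementation, and the stiffness and damping gains are assumptions rather than identified Myorobotics values.

```python
import math

def tendon_length(points):
    """Total length of a tendon routed through a list of 3D via-points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def tendon_force(length, length_rest, velocity, stiffness=200.0, damping=5.0):
    """Scalar force along the tendon: series-elastic spring plus damping.
    A tendon can only pull, so the force is clamped at zero when slack."""
    force = stiffness * (length - length_rest) + damping * velocity
    return max(0.0, force)

# Hypothetical routing: anchor, via-point on a wrapping surface, insertion
route = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.1, 0.1, 0.0)]
l = tendon_length(route)

print(l)                           # 0.2 m of rope
print(tendon_force(l, 0.19, 0.0))  # stretched by 1 cm -> pulling force
print(tendon_force(l, 0.25, 0.0))  # slack tendon -> 0.0 N
```

In the simulator, the resulting scalar force is then applied at each via-point along the local rope direction, which is what couples the muscle model to the rigid-body dynamics.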

Ultimately, this will enable us to compare biological muscles simulated by OpenSim to the technical Myorobotics muscle modelled with this plugin. Eventually, this will help to build better biomimetic muscle units behaving just like their biological counterparts.



[2] C. Richter, S. Jentzsch, R. Hostettler, J. A. Garrido, E. Ros, A. Knoll, F. Röhrbein, P. van der Smagt, and J. Conradt, “Scalability in neural control”, IEEE Robotics & Automation Magazine, 2016.



Alexander Kuhn, Benedikt Feldotto (TU München)