Category: Publication

Paper accepted: Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands

We got a paper about grasping with spiking neural networks accepted for ICANN 2017!

The complete architecture is shown in the figure. The hand network (left) receives the proprioception of all fingers and a grasp-type signal and generates fingertip targets. Each finger network (middle) receives its own proprioception and fingertip target and generates motor commands.



Representation and execution of movement in biology is an active field of research relevant to neurorobotics. Humans can remember grasp motions and modify them during execution based on the shape of objects and the intended interaction with them. We present a hierarchical spiking neural network with a biologically inspired architecture for representing different grasp motions. We demonstrate the ability of our network to learn from human demonstration using synaptic plasticity on two exemplary grasp types (pinch and cylinder). We evaluate the performance of the network in simulation and on a real anthropomorphic robotic hand. The network demonstrates the ability to learn finger coordination and synergies between joints that can be used for grasping.


Keywords: grasp motion representation, spiking networks, neurorobotics, motor primitives.

[1] J. C. Vasquez Tieck, H. Donat, J. Kaiser, I. Peric, S. Ulbrich, A. Roennau, M. Zöllner, and R. Dillmann, “Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands,” ICANN, 2017.


Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. This model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).
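The core idea behind this family of models — a fixed recurrent "liquid" whose rich dynamics are tapped by a trained linear readout — can be sketched in a few lines. The sketch below is a simplified rate-based echo-state stand-in, not the spiking model from the paper, and all sizes and signals are illustrative (phase-shifted sinusoids play the role of the event stream):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: a rate-based stand-in for the spiking liquid.
n_in, n_res = 16, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0.0, 1.0, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of input frames."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy input stream standing in for event frames; the readout learns
# one-step-ahead prediction of the next frame.
T = 300
t = np.arange(T)[:, None]
inputs = 0.5 * np.sin(0.1 * t + np.arange(n_in)[None, :])
states = run_reservoir(inputs[:-1])
targets = inputs[1:]

# Ridge-regression readout: the only trained part of the model.
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ targets).T
mse = np.mean((states @ W_out.T - targets) ** 2)
print("one-step prediction MSE:", mse)
```

The reservoir weights are never trained; only the linear readout is fit, which is what makes the approach cheap to scale to many parallel predictions.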


We validated this model in various experiments with both simulated and real DVS. The results were accepted for publication in [1]. We are now working on using these short-term visual predictions to control robots.

[1] J. Kaiser, R. Stal, A. Subramoney, et al., “Scaling up liquid state machines to predict over address events from dynamic vision sensors,” Bioinspiration & Biomimetics (special issue), 2017.

Morphological Properties of Mass-Spring-Damper Networks for Optimal Locomotion Learning


Robotic Embodiment

The combination of brain-inspired AI and robotics is at the core of our work in the Human Brain Project. AI is a broad concept that originated in computer science many decades ago and encompasses all algorithms that mimic some cognitive function of humans. These algorithms are increasingly based on methods that learn automatically from large datasets.

However, applying those methods to robot control is not as straightforward as it might seem. Unlike computer software, robots generally operate in noisy and continuously changing environments; on the other hand, their mechanical complexity can be seen as an asset that simplifies control. This is studied in the fields of embodiment and morphological computation. Extreme examples have shown that mechanical structures can produce very natural behavior with no controller at all.

The Passive Walker experiment by T. McGeer is a powerful demonstration emphasizing the importance of body design versus controller complexity in obtaining robust and natural locomotion gaits.

Towards a Formalization of the Concept

Some recent investigations have tried to formalize the relation between the dynamical complexity of a mechanical system and its ability to get by with simple control. To this end, a simple yet effective tool is to simulate structures composed of masses connected by actuated spring-damper links.

To extend this research, we developed a basic simulator of mass-spring-damper (MSD) networks and optimized a naive locomotion controller to provide them with efficient gaits in terms of traveled distance and dissipated power. Three open-loop experiments were carried out to determine the influence of the size of a structure (estimated through the number of nodes), of its compliance (the inverse of the spring stiffness), and of saturation at high power.
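The basic mechanics of such an MSD network can be sketched with a few lines of semi-implicit Euler integration. The sketch below is illustrative, not our actual simulator: it integrates a 1-D chain of masses whose spring rest lengths are modulated by phase-shifted sinusoids (a naive open-loop actuation), with all parameter values chosen arbitrarily and no ground contact, so the chain deforms in place rather than locomoting:

```python
import numpy as np

# Minimal 1-D chain of masses connected by actuated spring-damper links.
# Open-loop actuation modulates each spring's rest length sinusoidally.
n = 5            # number of nodes
m = 1.0          # mass per node [kg]
k = 50.0         # spring stiffness [N/m] (compliance = 1/k)
c = 2.0          # damping coefficient [N*s/m]
l0 = 1.0         # nominal rest length [m]
dt = 1e-3        # integration step [s]

pos = np.arange(n, dtype=float)   # initial node positions
vel = np.zeros(n)

def step(t, amp=0.2, freq=2.0):
    global pos, vel
    # Actuated rest lengths: phase-shifted sinusoids along the chain.
    rest = l0 + amp * np.sin(2 * np.pi * freq * t + np.arange(n - 1))
    ext = np.diff(pos) - rest                 # spring extensions
    f_link = k * ext + c * np.diff(vel)       # spring + damper force per link
    force = np.zeros(n)
    force[:-1] += f_link                      # each link pulls its two nodes
    force[1:] -= f_link                       # in opposite directions
    vel += force / m * dt                     # semi-implicit Euler
    pos += vel * dt

for i in range(5000):
    step(i * dt)
print("node positions after 5 s:", np.round(pos, 3))
```

Because all forces are internal, the center of mass stays put; turning this deformation into locomotion requires contact forces with the ground, which is exactly what the optimized controllers in the experiments exploit.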

This video presents several simulation renditions. The different locomotion processes displayed are learned through optimization in open-loop control.

In the second part of this work, we demonstrated that closed-loop control can be realized in a very simple way, requiring very few numerical computations.

The principal components of the closed-loop learning pipeline are a readout layer, which is trained at each time step, and a signal mixer, which gradually integrates the feedback into the actuation signal.
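The pipeline's two components can be sketched as follows. This is a hypothetical stand-in, not our implementation: `teacher` plays the role of the optimized open-loop gait, `body_state` fakes a sensor reading as a fixed nonlinear mix of that gait, the readout is trained online with a least-mean-squares update, and the mixer ramps the readout's feedback into the actuation:

```python
import numpy as np

rng = np.random.default_rng(1)

n_state, n_act = 20, 4
W = np.zeros((n_act, n_state))   # readout weights, updated every time step
eta = 0.05                       # LMS learning rate
M = rng.normal(0.0, 1.0, (n_state, n_act)) / np.sqrt(n_act)

def teacher(t):
    """Open-loop actuation found by optimization (illustrative stand-in)."""
    return np.sin(0.1 * t + np.arange(n_act))

def body_state(t):
    """Stand-in sensor reading: a fixed nonlinear mix of the actuation."""
    return np.tanh(M @ teacher(t))

T = 2000
errs = []
for t in range(T):
    s = body_state(t)
    y = W @ s                       # readout prediction of next command
    e = teacher(t + 1) - y
    W += eta * np.outer(e, s)       # LMS update at each time step
    alpha = min(1.0, t / (T / 2))   # mixer: ramp feedback in gradually
    act = (1 - alpha) * teacher(t + 1) + alpha * y  # command sent to motors
    errs.append(float(np.mean(e ** 2)))

print("early MSE:", np.mean(errs[:100]), "late MSE:", np.mean(errs[-100:]))
```

Once the mixer has fully ramped (`alpha == 1`), the teacher can be switched off and the body drives itself from its own sensed state, which is the sense in which the closed loop is computationally cheap: one matrix-vector product and one outer-product update per step.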

Our Contribution

A full discussion of the results is available directly in this article, under a Creative Commons license.

This work was carried out at Ghent University together with Jonas Degrave, Francis wyffels, Joni Dambre and Benonie Carette. It is mainly academic: it provides a methodology to optimize a controller for locomotion, and indications of the controller complexity required to realize this experiment. In the future, this knowledge will be used to conduct similar experiments on quadruped robots, both in the real world and in simulation using the Neurorobotics Platform (NRP) developed in the HBP.

Gazebo DVS plugin – towards a sensor library

On the NRP, we already support any sensor included in Gazebo. Mostly, these are classical robotic sensors such as laser scanners and cameras.

However, Gazebo includes neither recent biologically inspired sensors nor neuroscientific models of organic sensors. These types of sensors are important for the NRP. To keep the workflow identical for classical robotic sensors and newly developed ones, we decided to implement the latter as Gazebo plugins. Essentially, our sensor library will consist of a list of Gazebo plugins simulating various biologically inspired sensors.

So far, we have implemented a simulation of the Dynamic Vision Sensor (DVS), which is open-source and available on our SP10 GitHub. In the coming months, we will also adapt our implementation of COREM, a retina simulation framework [1,2], and wrap it in a Gazebo plugin.
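The principle the DVS plugin simulates — emit an event at a pixel whenever its log intensity changes by more than a threshold since the last event — can be sketched in a few lines. This is a simplified per-frame model with made-up names, not the plugin itself (which processes Gazebo camera frames in C++): it has no timestamps, no noise, and at most one event per pixel per frame:

```python
import numpy as np

def dvs_events(prev_log, frame, threshold=0.2):
    """Emit DVS-style (y, x, polarity) events where the log intensity
    changed by more than `threshold` since the last event at that pixel.

    Returns the updated reference log-intensity image and the event list.
    """
    log_i = np.log(frame.astype(float) + 1e-3)  # offset avoids log(0)
    diff = log_i - prev_log
    on = diff > threshold        # brightness increased -> ON event
    off = diff < -threshold      # brightness decreased -> OFF event
    events = [(int(y), int(x), +1) for y, x in zip(*np.nonzero(on))]
    events += [(int(y), int(x), -1) for y, x in zip(*np.nonzero(off))]
    # Reset the reference only where events fired, as a real DVS does.
    prev_log = np.where(on | off, log_i, prev_log)
    return prev_log, events

# Toy usage: a 3x3 bright square moving one pixel to the right.
f0 = np.zeros((8, 8)); f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8)); f1[2:5, 3:6] = 1.0
ref = np.log(f0 + 1e-3)
ref, ev = dvs_events(ref, f1)
on_ev = [e for e in ev if e[2] > 0]
off_ev = [e for e in ev if e[2] < 0]
print(len(on_ev), "ON events,", len(off_ev), "OFF events")
# -> 3 ON events, 3 OFF events (leading and trailing edge of the square)
```

Static parts of the scene generate no events at all, which is why a simulated DVS produces a sparse event stream well suited as input to spiking networks on the NRP.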


[1] Martínez-Cañada, P., Morillas, C., Pino, B., Ros, E., & Pelayo, F. (2016). A Computational Framework for Realistic Retina Modeling. International Journal of Neural Systems, 26(07), 1650030.

[2] Ambrosano A. et al. (2016). Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform. Biomimetic and Biohybrid Systems. Living Machines 2016. 

Publication in a Supplement to Science on Brain-Inspired Intelligent Robotics

The article “Neurorobotics: A strategic pillar of the Human Brain Project” was released in a Science Supplement on “Brain-inspired intelligent robotics: The intersection of robotics and neuroscience”, explaining the importance of our subproject and its research.


To give you an overview, you can find the first section below:

“Neurorobotics is an emerging science that studies the interaction of brain, body, and environment in closed perception–action loops where a robot’s actions affect its future sensory input. At the core of this field are robots controlled by simulated nervous systems that model the structure and function of biological brains at varying levels of detail (1). In a typical neurorobotics experiment, a robot or agent will perceive its current environment through a set of sensors that will transmit their signals to a simulated brain. The brain model may then produce signals that will cause the robot to move, thereby changing the agent’s perception of the environment. Observing how the robot then interacts with its environment and how the robot’s actions influence its future sensory input allows scientists to study how brain and body have to work together to produce the appropriate response to a given stimulus. Thus, neurorobotics links robotics and neuroscience, enabling a seamless exchange of knowledge between these two disciplines. Here, we provide an introduction to neurorobotics and report on the current state of development of the European Union–funded Human Brain Project’s (HBP’s) Neurorobotics Platform (2, 3). HBP is Europe’s biggest project in information communication technologies (ICT) to date and is one of two large-scale, long-term flagship research initiatives selected by the European Commission to promote disruptive scientific advance in future key technologies. It will have a duration of 10 years and deliver six open ICT platforms for future research in neuroscience, medicine, and computing, aimed at unifying the understanding of the human brain and translating this knowledge into commercial products.”

Read the entire paper here on page 25:
