Category: Computational Neuroscience

Build a fully functional visual system on the NRP

A collaboration arises from the joint goals of CDP4 (a co-designed project within the HBP that aims to merge several models of the ventral and dorsal streams of the visual system into a complete model of visuo-motor integration) and WP10.2 (a subpart of the Neurorobotics sub-project, SP10, that integrates many models for early visual processing and motor control on the NRP). The idea is to import everything that was done in CDP4 into an existing NRP experiment that already connects a model for early visual processing (visual segmentation, ventral stream) to a retina model (see here).

By connecting many models for different functions of the dorsal and ventral streams on the NRP, this experiment will lay the basis of a complete functional model of vision that can be used by any virtual NRP experiment requiring a visual system (motor-control tasks, decision making based on visual cues, etc.). The first step of the project is to show that the NRP provides an efficient tool for connecting various models. Indeed, different models evolve in very different frameworks and can be mutually incompatible; the NRP thus provides a unique compatibility framework in which models can be connected easily. The current goal of the experiment is merely a proof of concept, so a very simplified version of a visual system will be built (see image below, and here, if you have access).

WP10-2_CDP4_Experiment

The functions of the visual system will be connected in a modular way, so that different models of a single visual function can be compared once embedded in a full visual system, and so that any neuroscientist whose model is incorporated into the NRP experiment can meaningfully integrate global accounts of visual perception into it. For example, our Laminart model (a spiking model of early visual processing for visual segmentation; Francis 2017 [1]), presented here, needs to send spreading signals locally to initiate the parsing of visual information into several segmentation layers. For now, these signals are sent by hand. To gain generality, the model needs bottom-up (or top-down) influence on where these signals are sent. It would thus be very interesting for us to send these signals according to the output of a saliency computation model. The Laminart model could then, for example, form a non-retinotopic representation of a moving object by constantly sending signals around the saliency peaks computed by the saliency model of CDP4.


  1. Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.
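To illustrate the kind of interface this modularity implies, the sketch below extracts peaks from a saliency map and uses them as seed locations for segmentation signals. It is a minimal, hypothetical sketch: the function name and the 3×3 strict-local-maximum criterion are our own illustration, not the actual API of the CDP4 saliency model or of the Laminart model.

```python
import numpy as np

def saliency_peaks(saliency_map, threshold=0.5):
    """Return (row, col) positions of strict local maxima above threshold.

    Hypothetical stand-in for the output stage of a saliency model:
    a peak is a pixel strictly greater than its 8 neighbours.
    """
    peaks = []
    rows, cols = saliency_map.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            patch = saliency_map[i - 1:i + 2, j - 1:j + 2]
            centre = saliency_map[i, j]
            if (centre >= threshold and centre == patch.max()
                    and np.count_nonzero(patch == centre) == 1):
                peaks.append((i, j))
    return peaks

# Each peak would become the origin of a segmentation (spreading) signal.
salience = np.zeros((5, 5))
salience[1, 1], salience[3, 3] = 1.0, 0.8
seeds = saliency_peaks(salience)  # → [(1, 1), (3, 3)]
```

In the envisioned experiment, a transfer function would feed such seed positions from the saliency module to the Laminart network at every simulation step.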

Cerebellar Adaptation in a Vestibulo-Ocular Reflex Task

Embodiment allows biologically plausible brain models to be tested in realistic environments, where they receive feedback similar to that of real life or behavioural experimental set-ups. By adding dynamic synapses, researchers can observe the effect that behavioural adaptation has on network state evolution, and vice versa. The Neurorobotics Platform (NRP) notably eases the embodiment of brain models in challenging tasks, allowing neuroscientists to skip the technical issues of implementing the simulation of the scene.

One of the nervous centres that has traditionally received the most attention in neuroscience is the cerebellum. It has repeatedly been shown to play a critical role in learning tasks that involve temporally precise movements, and its influence on eye movement control has received frequent experimental support. Although studies of cerebellar patients show that the cerebellum is also involved in complex tasks, such as limb coordination and manipulation, eye movement control involves neural circuitry that is simpler and well characterised. Nevertheless, many open questions remain about how the cerebellum manages to control eye movement with such astonishing accuracy.

Researchers from the University of Granada aim to study the cerebellar role under an “embodied cognition” scenario in which the cerebellum is responsible for solving and facilitating the body's interaction with the environment. To that aim, they have set up a behavioural task (the vestibulo-ocular reflex, VOR), a neural structure facilitating the neural interaction (the cerebellar model), and a front-end human body (the humanoid iCub robot).

In particular, two hypotheses are to be tested with the proposed model: (i) VOR phase adaptation due to plasticity at the parallel fibre synapses (one of the main plastic synapses in the cerebellar cortex) [1], and (ii) learning consolidation and gain adaptation in VOR experiments thanks to synaptic plasticity in the deep cerebellar nuclei [2].

They have modelled the neural basis of VOR control to provide a mechanistic understanding of the cerebellar functionality, which plays a key role in VOR adaptation. On the one hand, this modelling work aims at cross-linking data on the VOR at the behavioural and neural levels. Through the simulation of VOR control impairments, they will examine possible consequences for the processing capabilities of the vestibular system in the VOR model. This approach may provide hints, or novel hypotheses, to better interpret experimental data gathered in VOR testing.
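As a toy illustration of gain adaptation, the sketch below adapts a scalar VOR gain with a simple error-driven rule: retinal slip (the residual image motion left after the eyes counter-rotate) is correlated with head velocity to push the gain toward perfect compensation. This is a textbook gradient-like rule for illustration only; the learning rate and initial gain are assumed values, and the rule is not the spiking cerebellar model of [1, 2].

```python
import numpy as np

rng = np.random.default_rng(0)
gain, lr = 0.3, 0.05          # initial VOR gain and learning rate (assumed)

for _ in range(500):
    head_vel = rng.uniform(-1.0, 1.0)     # vestibular input sample
    eye_vel = -gain * head_vel            # VOR counter-rotation of the eyes
    retinal_slip = head_vel + eye_vel     # residual image motion = error
    gain += lr * retinal_slip * head_vel  # slip correlated with head motion
                                          # drives the gain toward 1.0

# After training, gain is close to 1.0: the eyes fully compensate
# head rotation and retinal slip vanishes.
```

Since the update equals lr · (1 − gain) · head_vel², the gain approaches 1 monotonically, mirroring the behavioural gain adaptation the model is meant to reproduce.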

[1] Clopath, C., Badura, A., De Zeeuw, C. I., & Brunel, N. (2014). A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice. Journal of Neuroscience, 34(21), 7203-7215.

[2] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in Computational Neuroscience, 10.

Jesús A. Garrido, Francisco Naveros, Niceto R. Luque and Eduardo Ros. University of Granada.

CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

In early 2017, we had a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 (visuo-motor integration). During this presentation, we also showed how to integrate any deep learning model into the Neurorobotics Platform, as already presented at the Young Researchers Event by Kenny Sharma.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.


A neuro-biomechanical model that highlights the ability of spinal sensorimotor circuits to generate oscillatory locomotor outputs

The goal of this project is to uncover the functional role of proprioceptive sensorimotor circuits in motor control, and to understand how their recruitment through electrical stimulation can elicit treadmill locomotion in the absence of brain inputs. This understanding is pivotal for the translation of experimental spinal cord stimulation therapies into a viable clinical application.

To this aim, we developed a closed-loop neuromusculoskeletal model that encompasses a spiking neural network of the muscle spindle pathway of two antagonist muscles, a musculoskeletal model of the mouse hindlimb, and a model of epidural electrical stimulation (Figure 1). The network includes alpha motoneurons, Ia inhibitory interneurons, group II excitatory interneurons, and group Ia and group II afferent fibres. The number of cells, the connectivity, and the firing behaviour of the alpha motoneurons were tuned according to experimental values found in the literature. The effect of epidural electrical stimulation was integrated into the neural network by modelling every stimulation pulse as a supra-threshold synaptic input to all the cells recruited by the stimulation. An experimentally validated FEM model of the lumbar rat spinal cord was used to compute the percentage of fibres recruited by the stimulation.

Closed-loop simulations were performed by using the firing rates of the motoneuron populations as signals controlling the muscle activity of the musculoskeletal model, while the muscle length information coming from the musculoskeletal model was used to estimate the firing rates of the afferent fibres of the neural network. In particular, the firing rates of the Ia and II afferent fibres were estimated using an experimentally derived muscle spindle model.
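The paragraph above refers to an experimentally derived muscle spindle model. A common form in the literature (after Prochazka's power-law fits) combines a fractional-power velocity term with a linear stretch term; the sketch below uses that general form with made-up coefficients purely for illustration — the actual coefficients, units, and model variant used in the study may differ.

```python
import numpy as np

def ia_firing_rate(length, velocity, rest_length=1.0,
                   k_v=4.3, k_l=2.0, baseline=10.0):
    """Prochazka-style Ia afferent rate estimate (illustrative coefficients).

    Fractional-power velocity sensitivity plus a linear stretch term;
    the result is clipped at zero because firing rates cannot be negative.
    """
    rate = (k_v * np.sign(velocity) * abs(velocity) ** 0.6
            + k_l * (length - rest_length)
            + baseline)
    return max(float(rate), 0.0)

# Stretching the muscle (greater length, positive velocity) raises the
# Ia rate; fast shortening can silence the afferent entirely.
```

In the closed loop, such a function would map the muscle length and velocity reported by the musculoskeletal model onto Poisson spike rates for the afferent fibre populations.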

The preliminary results show that muscle spindle feedback circuits alone can produce the alternating movements typical of locomotion when biomechanics and gravity are taken into account.

Current work aims to expand the modelled muscle spindle circuitry to control all the main hindlimb muscles together. To this purpose, the developed network will be used as a template for each pair of antagonist muscles, and heteronymous connections across the different joints will be implemented. With this complete model of the hindlimb muscle spindle circuitry, we will be able to assess whether this single sensorimotor pathway, in combination with EES, is sufficient to produce treadmill locomotion, or whether other spinal neural networks are necessarily involved.
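To make the "template per antagonist pair" idea concrete, the sketch below runs a classic Matsuoka half-center: two rate units with mutual inhibition and fatigue (adaptation), the rate-based analogue of the reciprocal Ia inhibition between antagonist motoneuron pools. The parameters are standard textbook values for this oscillator, not those of the spiking network described above.

```python
import numpy as np

# Matsuoka half-center oscillator: mutual inhibition plus adaptation.
tau, T = 0.25, 0.5              # membrane / adaptation time constants (s)
beta, w, drive = 2.5, 2.5, 1.0  # adaptation gain, inhibition weight, tonic drive
dt, steps = 0.005, 4000

u = np.array([0.1, 0.0])  # membrane states (slight asymmetry breaks symmetry)
v = np.zeros(2)           # adaptation ("fatigue") states
flexor, extensor = [], []

for _ in range(steps):
    y = np.maximum(u, 0.0)              # rectified unit outputs
    inhib = y[::-1]                     # each unit inhibits the other
    u += dt / tau * (-u - beta * v - w * inhib + drive)
    v += dt / T * (-v + y)
    flexor.append(y[0])
    extensor.append(y[1])

# flexor and extensor activity alternate in anti-phase, reproducing the
# kind of alternating locomotor output described above.
```

Replicating such a reciprocally inhibited pair for every antagonist muscle couple, and linking the copies through heteronymous connections, is the template structure the complete hindlimb model will follow.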


Figure 1: Closed-loop simulation framework of the spinal cord model and rodent hindlimb to study epidural electrical stimulation

  • Emanuele Formento (PhD, TNE & G-Lab, EPFL)
  • Shravan Tata Ramalingasetty (PhD, BioRob, EPFL)

Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all of the 17 participants for making this workshop a great time.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. We welcomed 17 attendees over three days, coming from various sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users got comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. During the first day, we provided a tutorial experiment, developed exclusively for the event, which walked users through the many features of the NRP. This tutorial experiment is inspired by the baby-playing-ping-pong video, here simulated with an iCub robot. It will soon be released with the official build of the platform.



On the second and third days, the users were given more freedom to implement their own experiments. We had short hands-on sessions on the Robot Designer as well as on the Virtual Coach, for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the Miro robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPGs), a revival of the Lauron experiment, …



We received great feedback from the users, and we are looking forward to organizing the next NRP User Workshop!


Integrating Nengo into the NRP?

On 11th March, we had the honour of welcoming Terrence Stewart from the University of Waterloo at the Technical University of Munich. During his two-day visit, he first gave a fascinating presentation on Nengo and neural engineering in general.
This was followed by extensive discussions with our developers to investigate a possible integration of Nengo into our platform, which had first been installed on his laptop. To this end, we discussed which overlaps already exist and identified the missing parts needed to make this integration happen.
This opens up the opportunity for the NRP to offer additional spiking neural simulators besides NEST.
The collaboration would be beneficial for both sides, with us offering a platform to interface Nengo with Roboy or other muscle-based simulations.