Category: Computational Neuroscience

Implementing cerebellar learning rules for NEST simulator

The cerebellum is a relatively small centre in the nervous system, yet it accounts for around half of all existing neurons. As we previously documented, researchers from the University of Granada are taking advantage of the Neurorobotics Platform (NRP) to prove how cerebellar plasticity may contribute to vestibulo-ocular reflex (VOR) adaptation.

Implementing neurorobotic experiments often requires multidisciplinary efforts such as:

  1. Establishing a neuroscience-relevant working hypothesis.
  2. Implementing an avatar or robot simulator to perform the task.
  3. Developing the brain model with the indicated level of detail.
  4. Transforming brain activity (spikes) into signals that can be used by the robot, and vice versa.

The NRP provides useful tools that facilitate most of these steps. However, the definition of complex brain models might require the implementation of neuron and synapse models for the brain simulation platform (NEST in our particular case). The cerebellar models that we are including involve plasticity at two different synaptic sites: the parallel fibers (PF) and the mossy fibers (MF, targeting the vestibular nuclei neurons).


Although we will not go deeper into the equations here (see the reference below for further details), each parallel fiber synapse will be depressed (LTD) when a presynaptic spike occurs close in time to a complex spike of the target Purkinje cell (PC, see figure). Similarly, plasticity at the mossy fiber/vestibular nuclei (VN) synapses will be driven by the inhibitory activity coming from the Purkinje neurons.
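The gist of such a timing-dependent PF depression rule can be sketched in a few lines of Python. This is a hedged illustration, not the published NEST or EDLUT implementation: the kernel shape, time constants, and amplitudes below are placeholder values chosen for readability.

```python
import numpy as np

# Placeholder parameters (NOT the values of the published model)
LTD_TAU = 0.050       # s, width of the timing window around a complex spike
LTD_AMPLITUDE = 0.005 # maximum weight decrease per PF/complex-spike pairing
LTP_STEP = 0.001      # small fixed potentiation per PF spike (keeps weights bounded)
W_MIN, W_MAX = 0.0, 1.0

def update_pf_weight(w, pf_spike_times, complex_spike_times):
    """Depress a PF-PC synapse for each PF spike that falls close in time to a
    Purkinje-cell complex spike; otherwise a small non-associative LTP dominates."""
    for t_pf in pf_spike_times:
        if complex_spike_times.size:
            # Time to the nearest complex spike sets the amount of LTD
            dt = np.min(np.abs(complex_spike_times - t_pf))
            w -= LTD_AMPLITUDE * np.exp(-(dt / LTD_TAU) ** 2)
        w += LTP_STEP
    return float(np.clip(w, W_MIN, W_MAX))
```

With these placeholder values, a PF spike paired with a complex spike a few milliseconds later depresses the synapse, while an isolated PF spike slightly potentiates it.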

These learning rules were previously implemented for the EDLUT simulator and used for complex manipulation tasks in [1]. The neuron and synapse models have been released on GitHub and also as part of the NRP source code. This work, in the framework of the HBP, will allow researchers to demonstrate the role that plasticity at the parallel fibers and mossy fibers plays in vestibulo-ocular reflex movements.

[1] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in computational neuroscience, 10.


Using the NRP, a saliency computation model drives visual segmentation in the Laminart model

Recently, a cortical model for visual grouping and segmentation (the Laminart model) has been integrated into the NRP. From there, the goal was to build a whole visual system on the NRP, connecting many models for different functions of vision (retina processing, saliency computation, saccade generation, predictive coding, …) in a single virtual experiment, with the Laminart as a model for early visual processing. While this process is ongoing (see here), some scientifically relevant progress has already arisen from this implementation, as explained below.

The Laminart is one of the few models able to satisfactorily explain how crowding occurs in the visual system. Crowding is a visual phenomenon in which the perception of a target deteriorates in the presence of nearby elements. Crowding occurs in real life (for example when driving in the street, see fig. 1a) and is widely studied in many psychophysical experiments (see fig. 1b). Crowding happens ubiquitously in the visual system and must thus be accounted for by any complete model of it.

While crowding was long believed to be driven by local interactions in the brain (e.g. decremental feed-forward pooling of different receptive fields along the hierarchy of the visual system, jumbling the target’s visual features with those of nearby elements), it recently became clear that adding remote contextual elements can still modulate crowding (see fig. 1c). The entire visual configuration can determine what happens at the very tiny scale of the target!

Fig. 1: a) Crowding in real life. If you look at the bull’s eye, the kid on the right will be easily identifiable. However, the one on the left will be harder to identify, because the nearby elements have similar features (yellow color, human shape). b) Crowding in psychophysical experiments. Top: the goal is to identify the letter in the center, while looking at the fixation cross. Neighbouring letters make the task more difficult, especially if they are very close to the target. Center and bottom: the goal here is to identify the offset of the target (small tilted lines). Again, the neighbouring lines make the task more difficult. c) The task is the same as before (visual stimuli on the x-axis are presented in the periphery of the visual field, and the observer must report the offset of the target), but this time flanking squares are added around the target. What is plotted on the y-axis is the target offset at which observers give 75% correct answers (low values indicate good performance). When the target is alone (dashed line), performance is very good. When only one square flanks the target, performance decreases dramatically. However, when more squares are added, the task becomes easier and easier.

To account for this exciting phenomenon (named uncrowding), Francis et al. (2017) proposed a model that parses the visual stimulus into several groups, using low-level cortical dynamics (arising from a biologically plausible, laminarly structured network of spiking neurons with fixed connectivity). Crucially, the Laminart is a two-stage model in which the input image is segmented into different groups before any decremental interaction can happen between the target and nearby elements. In other words: how elements are grouped in the visual field determines how crowding occurs, making the latter a simple and behaviourally measurable phenomenon that unambiguously reflects a central feature of human vision (grouping). In fig. 1c (right), the 7 squares form a group that frames the target, instead of interfering with it, hence enhancing performance. In the Laminart model, the 7 squares are grouped together by illusory contours and are segmented out, leaving privileged access to the target alone. However, in order to work, the Laminart model needs to start the segmentation spreading process somewhere (see fig. 2).

Fig. 2: Dynamics of layer 2/3 of area V2 of the Laminart model, for two different stimuli. The red/green lines correspond to the activity of the neurons that detect a vertical/horizontal contrast contour. The three columns for each stimulus correspond to three segmentation layers, where the visual stimulus is parsed into different groups. The blue circles are spreading signals that start the segmentation process (one for each segmentation layer that is not SL0). Left: the flanker is close to the target. It is thus hard for the spreading signals to segment the flanking square from the target. Right: the flankers extend further and are linked by illusory contours. It is easier for the signals to segment them from the target. Thus, this condition produces less crowding than the other.

Up to now, the model relied on ad-hoc top-down signals, lacking an explicit process to generate them. Here, using the NRP, we could easily connect it to a model for saliency computation that had just been integrated into the platform. Feeding the Laminart, the saliency computation model delivers its output as a bottom-up influence on where segmentation signals should arise. On the NRP, we created the stimuli appearing in the experimental results shown in fig. 1c and presented them to the iCub robot. In this experiment, each time a segmentation signal is sent, its location is sampled from the saliency map, linking both models in an elegant manner. Notably, when only one square flanks the target, the saliency map is merely a big blob around the target, whereas when 7 squares flank the target, the saliency map shows sharper peaks around the outer squares (see fig. 3). Consequently, the more squares there are, the more probable it is that the segmentation signals succeed in creating two groups from the flankers and the target, releasing the target from crowding. This fits very well with the results of fig. 1c. The next step for this project is to reproduce the results quantitatively on the NRP.
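The sampling step can be sketched in a few lines of Python. This is an illustration of the idea, not the actual NRP transfer function: the function name and the use of NumPy here are our own, and the real experiment works on the saliency model's output maps.

```python
import numpy as np

def sample_signal_location(saliency_map, rng=None):
    """Sample a (row, col) location with probability proportional to the
    saliency value at that pixel, as a bottom-up cue for where a
    segmentation signal should start spreading."""
    rng = np.random.default_rng() if rng is None else rng
    p = saliency_map.ravel().astype(float)
    p /= p.sum()  # normalize the map into a probability distribution
    idx = rng.choice(p.size, p=p)
    return np.unravel_index(idx, saliency_map.shape)
```

With a single big blob around the target, most samples fall on the target itself; with sharp peaks around the outer squares, the signals preferentially start on the flankers, which is what allows them to be segmented out as a separate group.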


Fig. 3: Coexistence of the Laminart network and the saliency network. Top: crowded condition. Bottom: uncrowded condition. In both situations, the saliency computation model drives the location of the segmentation signals in the Laminart model and explains very well how crowding and uncrowding can occur. The windows on the left display the saliency model. The ones on the right display the output of the Laminart model (top: V2 contrast border activity; bottom: V4 surface activity).

To sum up, by building a visual system on the NRP, we could easily connect a saliency computation model to our Laminart model. This connection greatly enhances the latter and gives it the opportunity to explain how uncrowding occurs in human vision, as well as the low-level mechanisms of visual grouping. In the future, we will run psychophysical experiments in our lab, where it is possible to disentangle top-down from bottom-up influences on uncrowding, to test whether a strong influence of saliency computation on visual grouping is plausible.

Build a fully functional visual system on the NRP

A collaboration arises from the conjoint goals of CDP4 (a co-designed project within the HBP whose goal is to merge several models of the ventral and dorsal streams of the visual system into a complete model of visuo-motor integration) and WP10.2 (a subpart of the Neurorobotics sub-project – SP10 – that integrates many models for early visual processing and motor control on the NRP). The idea is to import everything that was done in CDP4 into an existing NRP experiment that already connects a model for early visual processing (visual segmentation – ventral stream) to a retina model (see here).

By connecting many models for different functions of the dorsal and ventral streams on the NRP, this experiment will build the basis of a complete functional model of vision that can be used by any virtual NRP experiment requiring a visual system (motor-control tasks, decision making based on visual cues, etc.). The first step of the project is to prove that the NRP provides an efficient tool to connect various models. Indeed, different models evolve in very different frameworks and can potentially be incompatible. The NRP will thus provide a unique compatibility framework to connect models easily. The current goal of the experiment is merely a proof of concept, and thus a very simplified version of a visual system will be built (see image below, and here, if you have access).


The functions of the visual system will be connected in a modular way, so that it is possible to compare the behaviour of different models for a single function of the visual system once embedded in a full visual system, and so that any neuroscientist can meaningfully integrate global accounts of visual perception into his/her model once it is incorporated into the NRP experiment. For example, our Laminart model (a spiking model of early visual processing for visual segmentation – Francis 2017 [1]), presented here, needs to send spreading signals locally to initiate the parsing of visual information into several segmentation layers. For now, these signals are sent by hand. To gain generality, the model needs bottom-up (or top-down) influence on where these signals are sent. It would thus be very interesting for us to send these signals according to the output of a saliency computation model. The Laminart model could then, for example, form a non-retinotopic representation of a moving object by constantly sending signals around saliency peaks computed by the saliency model of CDP4.


  1. Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.

Cerebellar Adaptation in a Vestibulo-Ocular Reflex Task

Embodiment allows biologically plausible brain models to be tested in realistic environments, receiving feedback similar to that of real life or behavioural experimental set-ups. By adding dynamic synapses, researchers can observe the effect that behavioural adaptation has on network state evolution, and vice versa. The Neurorobotics Platform (NRP) notably boosts the embodiment of brain models in challenging tasks, allowing neuroscientists to skip the technical issues of implementing the simulation of the scene.

One of the nervous centres that has traditionally received the most attention in neuroscience is the cerebellum. It has recurrently been shown to play a critical role in learning tasks involving temporally precise movements, and its influence on eye movement control has received frequent experimental support. Although studies of patients with cerebellar damage show that the cerebellum is also involved in complex tasks, such as limb coordination and manipulation, eye movement control involves neural circuitry that is simpler and well understood. However, many open questions remain about how the cerebellum manages to control eye movement with such astonishing accuracy.

Researchers from the University of Granada aim to study the cerebellar role in an “embodied cognition” scenario in which the cerebellum is responsible for solving and facilitating the body’s interaction with the environment. To that aim, they have set up a behavioural task, the vestibulo-ocular reflex (VOR); a neural structure facilitating the neural interaction, the cerebellar model; and a front-end human body, the humanoid iCub robot.

In particular, two hypotheses are to be tested with the proposed model: (i) VOR phase adaptation due to plasticity at the parallel fibres (one of the main plastic synapses in the cerebellar cortex) [1], and (ii) learning consolidation and gain adaptation in VOR experiments thanks to deep cerebellar nuclei synaptic plasticity [2].

They have modelled the neural basis of VOR control to provide a mechanistic understanding of the cerebellar functionality, which plays a key role in VOR adaptation. On the one hand, this modelling work aims at cross-linking data on the VOR at the behavioural and neural levels. Through the simulation of VOR control impairments, the researchers will examine possible consequences for the vestibular processing capabilities of the VOR model. This approach may provide hints, or novel hypotheses, for better interpreting experimental data gathered in VOR testing.
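As a toy illustration of the gain-adaptation hypothesis, the whole loop can be reduced to a single scalar gain driven by retinal slip. This is a deliberately minimal sketch under our own assumptions (arbitrary learning rate, initial gain, and trial structure), standing in for the actual spiking cerebellar network.

```python
import numpy as np

def simulate_vor_adaptation(target_gain=1.0, lr=0.02, n_trials=300,
                            f=1.0, dt=0.001, t_trial=1.0):
    """Error-driven adaptation of the VOR gain: on each trial the eye
    counter-rotates the head by -gain * head_velocity, and the residual
    retinal slip updates the gain (a stand-in for cerebellar learning)."""
    t = np.arange(0.0, t_trial, dt)
    head_vel = np.sin(2 * np.pi * f * t)   # sinusoidal head rotation
    gain = 0.3                             # arbitrary initial VOR gain
    gains = []
    for _ in range(n_trials):
        eye_vel = -gain * head_vel
        slip = target_gain * head_vel + eye_vel   # residual retinal slip
        gain += lr * np.mean(slip * head_vel)     # slip-driven gain update
        gains.append(gain)
    return np.array(gains)
```

Trial by trial, the slip-correlated update drives the gain toward the target value, reproducing the qualitative shape of behavioural VOR gain-adaptation curves.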

[1] Clopath, C., Badura, A., De Zeeuw, C. I., & Brunel, N. (2014). A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice. Journal of Neuroscience, 34(21), 7203-7215.

[2] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in computational neuroscience, 10.

Jesús A. Garrido, Francisco Naveros, Niceto R. Luque and Eduardo Ros. University of Granada.

CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back at the beginning of 2017, we had a great NRP Hackathon at FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 – visuo-motor integration. During this presentation we also showed how to integrate any deep learning model into the Neurorobotics Platform, as previously presented at the Young Researcher Event by Kenny Sharma.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.


A neuro-biomechanical model that highlights the ability of spinal sensorimotor circuits to generate oscillatory locomotor outputs

The goal of this project is to uncover the functional role of proprioceptive sensorimotor circuits in motor control, and to understand how their recruitment through electrical stimulation can elicit treadmill locomotion in the absence of brain inputs. This understanding is pivotal for the translation of experimental spinal cord stimulation therapies into a viable clinical application.

To this aim, we developed a closed-loop neuromusculoskeletal model that encompasses a spiking neural network of the muscle spindle pathway of two antagonist muscles, a musculoskeletal model of the mouse hindlimb, and a model of epidural electrical stimulation (Figure 1). The network includes alpha motoneurons, Ia inhibitory interneurons, group II excitatory interneurons, and group Ia and group II afferent fibers. The number of cells, the connectivity, and the firing behavior of the alpha motoneurons were tuned according to experimental values found in the literature. The effect of epidural electrical stimulation was integrated into the neural network by modelling every stimulation pulse as a supra-threshold synaptic input to all the cells recruited by the stimulation. An experimentally validated FEM model of the lumbar rat spinal cord was used to compute the percentage of fibers recruited by the stimulation.

Closed-loop simulations were performed by using the firing rates of the motoneuron populations as a signal to control the muscle activity of the musculoskeletal model, while using the muscle length information coming from the musculoskeletal model to estimate the firing rates of the network’s afferent fibers. In particular, the firing rates of the group Ia and group II afferent fibers were estimated using an experimentally derived muscle spindle model.
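One step of this loop can be sketched as follows. The coefficients below are illustrative placeholders in the spirit of Prochazka-style spindle models (sub-linear in stretch velocity, linear in stretch), not the experimentally fitted values used in the actual model, and the one-line "muscle" is our own simplification.

```python
import numpy as np

# Illustrative constants (the real model uses experimentally fitted values)
KV, KL, BASELINE = 4.3, 2.0, 10.0  # velocity gain, length gain, background rate

def ia_firing_rate(stretch, stretch_vel):
    """Group Ia rate estimate: sub-linear in stretch velocity, linear in
    stretch, plus a background rate; rectified to stay non-negative."""
    v_term = KV * np.sign(stretch_vel) * np.abs(stretch_vel) ** 0.6
    return max(0.0, BASELINE + KL * stretch + v_term)

def closed_loop_step(length, velocity, mn_rate, dt=0.01, gain=0.05):
    """One step of the loop: the motoneuron population rate shortens the
    muscle, and the resulting length and velocity set the afferent rate
    that is fed back into the spiking network."""
    new_length = length - gain * mn_rate * dt  # activity shortens the muscle
    new_velocity = (new_length - length) / dt
    afferent = ia_firing_rate(new_length, new_velocity)
    return new_length, new_velocity, afferent
```

Iterating this step for two antagonist muscles, each one's shortening stretches the other and raises its afferent drive, which is the kind of reciprocal interaction that lets spindle feedback alone sustain alternation.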

The preliminary results show that muscle spindle feedback circuits alone can produce the alternating movements typical of locomotion when biomechanics and gravity are taken into account.

Current work aims to expand the modeled muscle spindle circuitry to control all the main hindlimb muscles together. To this purpose, the developed network will be used as a template for each pair of antagonist muscles, and heteronymous connections across the different joints will be implemented. With this complete model of the hindlimb muscle spindle circuitry, we will be able to assess whether this single sensorimotor pathway is sufficient to produce treadmill locomotion in combination with EES, or whether other spinal neural networks are necessarily involved.


Figure 1: Closed-loop simulation framework of the spinal cord model and rodent hindlimb to study epidural electrical stimulation.

  • Emanuele Formento (PhD, TNE & G-Lab, EPFL)
  • Shravan Tata Ramalingasetty (PhD, BioRob, EPFL)

Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all of the 17 participants for making this workshop a great time.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI, Karlsruhe. We welcomed 17 attendees over three days, coming from various sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users got comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. During the first day, we provided a tutorial experiment, exclusively developed for the event, which walked users through the many features of the NRP. This tutorial experiment is inspired by the baby-playing-ping-pong video, here simulated with an iCub robot. It will soon be released with the official build of the platform.



On the second and third days, users were given more freedom to implement their own experiments. We had short hands-on sessions on the Robot Designer as well as the Virtual Coach, for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the Miro robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPG), a revival of the Lauron experiment, …



We received great feedback from the users. We are looking forward to the next NRP User Workshop!