Category: Neurorobotics

Handling experiment-specific Python packages in the NRP

In this blog post I share my method for handling experiment-specific Python packages. Some of my experiments require TensorFlow v1.6, while others need Keras, which itself requires an earlier version of TensorFlow. How do you handle all this on your locally installed NRP?

My method relies on the virtualenvwrapper package, which lets you keep all of your Python virtualenvs in a single place.

pip install virtualenvwrapper --user

Additionally, I use a custom configuration that adds a virtualenv to $PYTHONPATH when I activate it. Copy the postactivate and postdeactivate scripts to $WORKON_HOME, the configuration folder of virtualenvwrapper.

Now, let's say you have an NRP experiment with custom Python packages listed in a requirements.txt. Create a virtualenv for this experiment and install the experiment-specific packages:

mkvirtualenv --system-site-packages my_venv
pip install -r requirements.txt

To access your experiment-specific packages from within the NRP, simply start the NRP from the same terminal in which the virtualenv is activated:

workon my_venv
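
Before starting the NRP, you can quickly check that the experiment-specific versions are the ones being picked up. A minimal sketch, assuming TensorFlow and Keras are the packages pinned in your requirements.txt:

# Run in the activated virtualenv; the printed versions should match the ones
# pinned in requirements.txt rather than the system-wide installations.
import tensorflow as tf
import keras
print(tf.__version__)      # e.g. 1.6.0
print(keras.__version__)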

That’s it!

Structuring your local NRP experiment – some tips

Within the context of CDP4, we created an NRP experiment showcasing some functional models from SP1/4:

  • A trained deep network to compute bottom-up saliency
  • A saccade generation model

Since these models are generic, we want to package them so that they can easily be reused in other experiments, such as the WP10.2 strategic experiment. In this post, we briefly explain the structure of the CDP4 experiment and how modularity is achieved.

We decided to implement the functional modules from SP1/SP4 as ROS packages. Therefore, these modules can be used within the NRP (in the GazeboRosPackages folder), but also independently of the NRP in any other catkin workspace. This has the advantage that the saliency model can, for example, be fed webcam images and easily mounted on a real robot.

The main difference compared to implementing them as transfer functions is synchronicity. When the user runs the saliency model on the CPU, processing a single camera image takes around 3 seconds. If the saliency model were implemented as a transfer function, the simulation would pause until the saliency output is ready. This causes the experiment to run more slowly but preserves reproducibility. On the other hand, when the model is implemented as a ROS node, the simulation does not wait for the saliency network to process an image, so it runs faster.
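
To illustrate the asynchronous ROS-node approach, here is a minimal sketch of what such a node could look like. The topic names and the SaliencyModel class are placeholders for the actual implementation, not the real code:

#!/usr/bin/env python
# Sketch of a saliency ROS node (hypothetical topic names and model class).
# The node processes images at its own pace, so the simulation never blocks on it.
import rospy
from sensor_msgs.msg import Image

from saliency_model import SaliencyModel  # placeholder for the actual TensorFlow model


class SaliencyNode(object):
    def __init__(self):
        self.model = SaliencyModel()  # load weights/topology once at start-up
        self.pub = rospy.Publisher('/saliency_map', Image, queue_size=1)
        # queue_size=1 drops stale frames instead of queueing them while the network is busy
        rospy.Subscriber('/icub/left_eye_camera/image_raw', Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        saliency = self.model.compute(msg)  # ~3 s on CPU, independent of the simulation clock
        self.pub.publish(saliency)


if __name__ == '__main__':
    rospy.init_node('saliency')
    SaliencyNode()
    rospy.spin()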

The saliency model is a pre-trained deep network running on TensorFlow. The weights and topology of the network are saved in data files that are loaded during execution. Since these files are large and not worth version-controlling, we uploaded them to our ownCloud, from where they are automatically downloaded by the saliency model if not present locally. This also makes it simple for our collaborators in SP1/4 to provide us with new pre-trained weights/topologies.
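
The download-if-missing logic is simple. Here is a minimal sketch; the URL and file name are placeholders, not the actual ownCloud link:

# Fetch the pre-trained weights from a shared link if they are not present locally.
# WEIGHTS_URL and WEIGHTS_FILE are placeholders for the actual ownCloud location.
import os
import urllib  # Python 2; use urllib.request on Python 3

WEIGHTS_URL = 'https://example.org/owncloud/saliency_weights.npz'  # placeholder
WEIGHTS_FILE = os.path.join(os.path.dirname(__file__), 'saliency_weights.npz')

def ensure_weights():
    if not os.path.exists(WEIGHTS_FILE):
        print('Downloading pre-trained saliency weights...')
        urllib.urlretrieve(WEIGHTS_URL, WEIGHTS_FILE)
    return WEIGHTS_FILE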

The CDP4 experiment itself has its own repo and is very lean, as it relies on these reusable modules. Additionally, an install script is provided to download the required modules into GazeboRosPackages.

How to install TensorFlow and the other Python libraries required by the CDP4 experiment, so that they do not collide with other experiment-specific libraries, will be covered in another blog post.


Implementing cerebellar learning rules for the NEST simulator

The cerebellum is a relatively small center in the nervous system that accounts for around half of the brain's neurons. As we previously documented, researchers from the University of Granada are taking advantage of the Neurorobotics Platform (NRP) in order to demonstrate how cerebellar plasticity may contribute to vestibulo-ocular reflex (VOR) adaptation.

Implementing neurorobotic experiments often requires multidisciplinary efforts such as:

  1. Establishing a neuroscience-relevant working hypothesis.
  2. Implementing an avatar or robot simulator to perform the task.
  3. Developing the brain model with the indicated level of detail.
  4. Transforming brain activity (spikes) into signals that can be used by the robot, and vice versa.

The NRP provides useful tools to facilitate most of these steps. However, the definition of complex brain models might require the implementation of neuron and synapse models for the brain simulation platform (NEST in our particular case). The cerebellar models that we are including involve plasticity at two different synaptic sites: the parallel fibers (PF) and the mossy fibers (MF, targeting the vestibular nuclei neurons).


Without going deeper into the equations here (see [1] for further details), each parallel fiber synapse will be depressed (LTD) when a presynaptic spike occurs close in time to a complex spike of the target Purkinje cell (PC, see figure). Similarly, the plasticity at the mossy fiber/vestibular nuclei (VN) synapses will be driven by the inhibitory activity coming from the Purkinje neurons.

These learning rules were previously implemented for the EDLUT simulator and used for complex manipulation tasks in [1]. The neuron and synapse models have been released on GitHub and also as part of the NRP source code. This work, in the framework of the HBP, will allow researchers to demonstrate the role that plasticity at the parallel fibers and mossy fibers plays in vestibulo-ocular reflex movements.
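
To give an idea of how such plastic connections are set up from PyNEST, here is a minimal sketch (NEST 2.x syntax). The population sizes are arbitrary and the synapse model names are placeholders standing in for the released cerebellar models, not NEST built-ins:

# Minimal PyNEST sketch; 'pf_pc_plastic_synapse' and 'mf_vn_plastic_synapse'
# are placeholder names for the released cerebellar synapse models.
import nest

nest.ResetKernel()
mossy = nest.Create('parrot_neuron', 100)      # mossy fibers relaying vestibular input
granule = nest.Create('iaf_psc_alpha', 1000)   # granule cells, whose axons form the parallel fibers
purkinje = nest.Create('iaf_psc_alpha', 50)    # Purkinje cells
vn = nest.Create('iaf_psc_alpha', 50)          # vestibular nuclei neurons

# PF -> PC connections: depressed (LTD) when a PF spike coincides with a complex spike
nest.Connect(granule, purkinje, conn_spec='all_to_all',
             syn_spec={'model': 'pf_pc_plastic_synapse', 'weight': 0.5, 'delay': 1.0})

# MF -> VN connections: plasticity driven by the inhibitory Purkinje input to the nuclei
nest.Connect(mossy, vn, conn_spec='all_to_all',
             syn_spec={'model': 'mf_vn_plastic_synapse', 'weight': 0.5, 'delay': 1.0})

# Static inhibitory Purkinje -> VN projection
nest.Connect(purkinje, vn, conn_spec='all_to_all',
             syn_spec={'weight': -1.0, 'delay': 1.0})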

[1] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in Computational Neuroscience, 10.

Using the NRP, a saliency computation model drives visual segmentation in the Laminart model

Recently, a cortical model for visual grouping and segmentation (the Laminart model) was integrated into the NRP. From there, the goal has been to build a whole visual system on the NRP, connecting many models for different functions of vision (retina processing, saliency computation, saccade generation, predictive coding, …) in a single virtual experiment, with the Laminart as the model for early visual processing. While this process is ongoing (see here), some scientifically relevant progress has already arisen from the first stages of this implementation. This is what is explained below.

The Laminart is one of the few models able to satisfactorily explain how crowding occurs in the visual system. Crowding is a visual phenomenon in which the perception of a target deteriorates in the presence of nearby elements. Crowding occurs in real life (for example when driving in the street, see fig. 1a) and is widely studied in psychophysical experiments (see fig. 1b). It happens ubiquitously in the visual system and must thus be accounted for by any complete model of vision.

While crowding was for a long time believed to be driven by local interactions in the brain (e.g. decremental feed-forward pooling of different receptive fields along the hierarchy of the visual system, jumbling the target’s visual features with those of nearby elements), it has recently become apparent that adding remote contextual elements can still modulate crowding (see fig. 1c). The entire visual configuration can determine what happens at the very small scale of the target!

Fig. 1: a) Crowding in real life. If you look at the bull’s eye, the kid on the right will be easily identifiable. However, the one on the left will be harder to identify, because the nearby elements have similar features (yellow color, human shape). b) Crowding in psychophysical experiments. Top: the goal is to identify the letter in the center, while looking at the fixation cross. Neighbouring letters make the task more difficult, especially if they are very close to the target. Center and bottom: the goal here is to identify the offset of the target (small tilted lines). Again, the neighbouring lines make the task more difficult. c) The task is the same as before (the visual stimuli on the x-axis are presented in the periphery of the visual field and the observer must report the offset of the target). This time, flanking squares are added to impair performance. What is plotted on the y-axis is the target offset at which observers give 75% correct answers (low values indicate good performance). When the target is alone (dashed line), performance is very good. When only one square flanks the target, performance decreases dramatically. However, when more squares are added, the task becomes easier and easier.

To account for this exciting phenomenon (named uncrowding), Francis et al. (2017) proposed a model that parses the visual stimulus into several groups, using low-level cortical dynamics (arising from a biologically plausible, laminarly structured network of spiking neurons with fixed connectivity). Crucially, the Laminart is a two-stage model in which the input image is segmented into different groups before any decremental interaction can happen between the target and nearby elements. In other words: how elements are grouped in the visual field determines how crowding occurs, making the latter a simple and behaviourally measurable phenomenon that unambiguously describes a central feature of human vision (grouping). In fig. 1c (right), the 7 squares form a group that frames the target instead of interfering with it, hence enhancing performance. In the Laminart model, the 7 squares are grouped together by illusory contours and are segmented out, leaving privileged access to the target alone. However, in order to work, the Laminart model needs to start the segmentation spreading process somewhere (see fig. 2).

Fig. 2: Dynamics of layer 2/3 of area V2 of the Laminart model, for two different stimuli. The red/green lines correspond to the activity of the neurons that detect a vertical/horizontal contrast contour. The three columns for each stimulus correspond to three segmentation layers, where the visual stimulus is parsed into different groups. The blue circles are spreading signals that start the segmentation process (one for each segmentation layer other than SL0). Left: the flanker is close to the target. It is thus hard for the spreading signals to segment the flanking square from the target. Right: the flankers extend further and are linked by illusory contours. It is easier for the signals to segment them from the target. Thus, this condition produces less crowding than the other.

Until now, the model sent ad-hoc top-down signals, lacking an explicit process to generate them. Here, using the NRP, we could easily connect it to a model for saliency computation that had just been integrated into the platform. Feeding the Laminart, the saliency computation model delivers its output as a bottom-up influence on where segmentation signals should arise. On the NRP, we created the stimuli used in the experimental results shown in fig. 1c and presented them to the iCub robot. In this experiment, each time a segmentation signal is sent, its location is sampled from the saliency map, linking both models in an elegant manner. Notably, when only 1 square flanks the target, the saliency map is merely a big blob around the target, whereas when 7 squares flank the target, the saliency map is more peaked around the outer squares (see fig. 3). Consequently, the more squares there are, the more probable it is that the segmentation signals succeed in separating the flankers and the target into two groups, releasing the target from crowding. This fits very well with the results of fig. 1c. The next step for this project is to reproduce these results quantitatively on the NRP.
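
In practice, sampling a signal location amounts to treating the (normalised) saliency map as a probability distribution over pixels. A minimal sketch with numpy; the function and variable names are ours, not the actual experiment code:

import numpy as np

def sample_signal_location(saliency_map):
    # Draw a (row, col) location with probability proportional to saliency.
    prob = saliency_map.astype(float).ravel()
    prob /= prob.sum()                           # normalise to a probability distribution
    index = np.random.choice(prob.size, p=prob)  # sample one pixel index
    return np.unravel_index(index, saliency_map.shape)

# A peaked map makes locations near the outer squares far more likely to be drawn.
saliency = np.random.rand(240, 320)
row, col = sample_signal_location(saliency)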


Fig. 3: Coexistence of the Laminart network and the saliency network. Top: crowded condition. Bottom: uncrowded condition. In both situations, the saliency computation model drives the location of the segmentation signals in the Laminart model and explains very well how crowding and uncrowding can occur. The windows on the left display the output of the saliency model. The ones on the right display the output of the Laminart model (top: V2 contrast border activity; bottom: V4 surface activity).

To sum up, by building a visual system on the NRP, we could easily make the connection between a saliency computation model and our Laminart model. This connection greatly enhances the latter, giving it the opportunity to explain very well how uncrowding occurs in human vision, along with the low-level mechanisms of visual grouping. In the future, we will run psychophysical experiments in our lab in which it is possible to disentangle top-down from bottom-up influences on uncrowding, to see whether a strong influence of saliency computation on visual grouping is plausible.

Neurorobotics Platform (NRP) User Workshop

The workshop introducing the Neurorobotics Platform (NRP) was held at SSSA with the participation of M.Sc. and Ph.D. students. During the workshop, two instructors from the development and research teams provided introductory information on the Human Brain Project and, specifically, on the features of the SP10 Neurorobotics Platform, including the open-source technologies used in the NRP (e.g., ROS and Gazebo), the development cycles, and the graphical user interface for first-time users. After the introduction, the users installed the NRP for a hands-on session, either by following the instructions from the HBP Neurorobotics repository or via bootable flash disks.

The users followed the instructions from tutorial_baseball_exercise to create an experiment as a first demo and to become familiar with NRP concepts such as transfer functions, the brain-body interface, and the closed-loop engine, to mention a few. This session ended with the tutorial requirements successfully solved with the assistance of the instructors. In the last part of the workshop, the participants discussed integrating their own ongoing projects into the NRP. One of the participants expressed his ideas on integrating a cerebellar model into the NRP:

My objective is to study the computational characteristics of the cerebellum, responsible for precise motor control in biological agents. Currently, a rate-based model of the cerebellum has been implemented to produce accurate saccades in the primate-type oculomotor system. My plan is to convert this model into a fully spike-based cerebellar model in the NEST simulator and apply this control model on the iCub in Gazebo. The NRP is definitely poised to provide me with this functionality.

Another participant expressed his plan to integrate a continuum robot, I-SUPPORT, into the NRP:

My ongoing work with the NRP is to create an I-SUPPORT robot model using an OpenSim muscle model to simulate the behavior of the McKibben actuators present in the robotic arm.

The last project idea:

Experiments on invariant object recognition and multi-modal object representation, integrating Hierarchical Temporal Memory, many-layered (deep) networks, and spiking neural networks into the NRP.

The workshop closed with the evaluation of each session and discussions on the requirements for the proposed projects.

Posted by: Murat Kirtay (SSSA)

Build a fully functional visual system on the NRP

A collaboration has arisen from the joint goals of CDP4 (a co-designed project within the HBP whose goal is to merge several models of the ventral and dorsal streams of the visual system into a complete model of visuo-motor integration) and WP10.2 (a subpart of the Neurorobotics sub-project, SP10, that integrates many models for early visual processing and motor control on the NRP). The idea is to import everything that was done in CDP4 into an existing NRP experiment that already connects a model for early visual processing (visual segmentation, ventral stream) to a retina model (see here).

By connecting many models for different functions of the dorsal and ventral streams on the NRP, this experiment will build the basis of a complete functional model of vision that can be used by any virtual NRP experiment requiring a visual system (motor-control tasks, decision making based on visual cues, etc.). The first step of the project is to show that the NRP provides an efficient tool for connecting various models. Indeed, different models evolve in very different frameworks and can potentially be quite incompatible. The NRP will thus provide a common compatibility framework to connect models easily. The current goal of the experiment is merely to provide a proof of concept, so a very simplified version of a visual system will be built (see image below, and here, if you have access).

(Image: overview of the WP10.2 / CDP4 experiment)

The functions of the visual system will be connected in a modular way, so that it is possible to compare the behaviour of different models of a single visual function once embedded in a full visual system, and so that any neuroscientist can meaningfully integrate global accounts of visual perception into his/her model once it is incorporated into the NRP experiment. For example, our Laminart model (a spiking model of early visual processing for visual segmentation, Francis et al. 2017 [1]), presented here, needs to send spreading signals locally to initiate the parsing of visual information into several segmentation layers. For now, these signals are sent by hand. To gain generality, the model needs a bottom-up (or top-down) influence on where these signals are sent. It would thus be very interesting for us to send these signals according to the output of a saliency computation model. The Laminart model could then, for example, form a non-retinotopic representation of a moving object by constantly sending signals around saliency peaks computed by the saliency model of CDP4.


[1] Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.

Cerebellar Adaptation in a Vestibulo-Ocular Reflex Task

Embodiment allows biologically plausible brain models to be tested in realistic environments, receiving feedback similar to what occurs in real life or in behavioural experimental set-ups. By adding dynamic synapses, researchers can observe the effect that behavioural adaptation has on network state evolution and vice versa. The Neurorobotics Platform (NRP) notably eases the embodiment of brain models in challenging tasks, allowing neuroscientists to skip the technical issues of implementing the simulation of the scene.

One of the nervous centres that has traditionally received the most attention in neuroscience is the cerebellum. It has repeatedly been shown to play a critical role in learning tasks involving temporally precise movements, and its influence on eye movement control has received frequent experimental support. Although studies of patients with cerebellar impairments show that the cerebellum is also involved in complex tasks, such as limb coordination and manipulation, eye movement control involves a neural circuitry that is simpler and well understood. However, many open questions remain as to how the cerebellum manages to control eye movements with such astonishing accuracy.

Researchers from the University of Granada aim to study the cerebellar role in an “embodied cognition” scenario in which the cerebellum is responsible for solving and facilitating the body’s interaction with the environment. To that end, they have set up a behavioural task (the vestibulo-ocular reflex, VOR), a neural structure facilitating the interaction (the cerebellar model), and a front-end human body (the humanoid iCub robot).

In particular, two hypotheses are to be tested with the proposed model: (i) VOR phase adaptation due to plasticity at the parallel fibres (one of the main plastic synapses in the cerebellar cortex) [1], and (ii) learning consolidation and gain adaptation in VOR experiments thanks to deep cerebellar nuclei synaptic plasticity [2].

They have modelled the neural basis of VOR control to provide a mechanistic understanding of the cerebellar functionality, which plays a key role in VOR adaptation. This modelling work aims at cross-linking data on the VOR at the behavioural and neural levels. Through the simulation of VOR control impairments, we will examine possible consequences for the vestibular processing capabilities of the VOR model. This approach may provide hints, or novel hypotheses, for better interpreting experimental data gathered in VOR testing.

[1] Clopath, C., Badura, A., De Zeeuw, C. I., & Brunel, N. (2014). A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice. Journal of Neuroscience, 34(21), 7203-7215.

[2] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in Computational Neuroscience, 10.

Jesús A. Garrido, Francisco Naveros, Niceto R. Luque and Eduardo Ros. University of Granada.