Preliminary neural recordings with the M-Platform

(FIG 1) The new robotic platform providing access to the cortex for recording neural signals.

The M-Platform, a robotic device for motor rehabilitation after stroke in mice, has been upgraded to allow recording of neural activity during the pulling task (FIG 1). The platform now offers the unique possibility of integrating kinetic and kinematic data with electrophysiological recordings in awake mice during a voluntary forelimb retraction task.

(FIG 2) The interface of the OmniPlex D System (Plexon, USA), the system used to perform acute electrophysiological recordings.

The new device was tested on four healthy mice: a 16-channel linear probe (ATLAS, USA) was inserted into the Rostral Forelimb Area (RFA) at a depth of 850 µm. Signals were recorded with the OmniPlex D System (Plexon, USA) at a sampling frequency of 40 kHz (FIG 2). Data analysis was performed offline. We obtained promising results both for the low-frequency activity, i.e. the Local Field Potential (LFP), and for the high-frequency activity, i.e. Multi-Unit Activity (MUA) and spike sorting. In particular, a correspondence between the LFP and the force peak is evident in FIG 3; however, we plan to increase the number of recorded animals to generalize these results.
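A typical offline pipeline of this kind separates the wideband 40 kHz trace into its LFP and MUA components and then averages the LFP around behavioural events such as force-peak onsets. Below is a minimal sketch with SciPy; the cut-off frequencies, filter order and window lengths are illustrative assumptions, not the exact values used in our analysis:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 40_000  # sampling rate (Hz), as used by the OmniPlex system

def split_bands(raw, fs=FS, lfp_cut=300.0, mua_band=(300.0, 5000.0)):
    """Separate a wideband trace into LFP (low-pass) and MUA (band-pass).

    Cut-offs are common choices in the literature, assumed here.
    Zero-phase filtering (filtfilt) avoids shifting event latencies.
    """
    b_lfp, a_lfp = butter(4, lfp_cut / (fs / 2), btype="low")
    b_mua, a_mua = butter(4, [mua_band[0] / (fs / 2),
                              mua_band[1] / (fs / 2)], btype="band")
    return filtfilt(b_lfp, a_lfp, raw), filtfilt(b_mua, a_mua, raw)

def event_triggered_average(sig, onsets, fs=FS, pre=0.2, post=0.5):
    """Average signal snippets aligned on event onsets (sample indices),
    e.g. force-peak onsets, as in FIG 3."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    snips = [sig[i - n_pre:i + n_post] for i in onsets
             if i - n_pre >= 0 and i + n_post <= len(sig)]
    return np.mean(snips, axis=0)
```

The same event-triggered averaging can be applied per channel to reproduce the layout of FIG 3 (mean LFP on top, mean force profile at the bottom).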

(FIG 3) On the top, the mean of the LFP recordings in different channels, aligned on the onset; at the bottom, the mean of the corresponding force peaks.

This success paves the way for the investigation of neuroplastic events after cortical damage, e.g. stroke. Moreover, the possibility of recording spiking activity in the Caudal Forelimb Area (CFA) during the task in healthy animals makes it possible to study the firing rate across channels and to find patterns correlating neural activity with forelimb movement.


SP10 + SP6 + CerebNEST: a new collaboration

Last month, during the HBP Summit, SP10 started working on potential new collaborations with other subprojects and partnering projects in order to keep focus on the main goals of the Neurorobotics Platform and the Human Brain Project, not only for the current phase of the project (SGA1) but also for the coming years of research.

We are really happy to announce that a few days ago the DTU Neurorobotics team came to an agreement with SP6 (University of Pavia) and the HBP Partnering Project CerebNEST (Politecnico di Milano) to port their cerebellum model (Antonietti et al., 2016, IEEE TBME), already implemented in NEST, to SpiNNaker.



Having a cerebellar model running in real time on a neuromorphic platform will make it possible to analyze the performance of the model with different physical robotic platforms, such as the modular robot Fable.

We will keep you updated along the process!

Build a fully functional visual system on the NRP

A collaboration arises from the joint goals of CDP4 (a co-designed project within HBP that aims to merge several models of the ventral and dorsal streams of the visual system into a complete model of visuo-motor integration) and WP10.2 (a subpart of the Neurorobotics sub-project – SP10 – that integrates many models of early visual processing and motor control on the NRP). The idea is to import everything developed in CDP4 into an existing NRP experiment that already connects a model of early visual processing (visual segmentation – ventral stream) to a retina model (see here).

By connecting many models of different functions of the dorsal and ventral streams on the NRP, this experiment will build the basis of a complete functional model of vision that can be used by any virtual NRP experiment requiring a visual system (motor-control tasks, decision making based on visual cues, etc.). The first step of the project is to prove that the NRP provides an efficient tool to connect various models. Indeed, different models evolve in very different frameworks and can be mutually incompatible. The NRP will thus provide a unique compatibility framework for connecting models easily. The current goal of the experiment is merely a proof of concept, and thus a very simplified version of a visual system will be built (see image below, and here, if you have access).


The functions of the visual system will be connected in a modular way, so that different models of a single visual function can be compared once embedded in a full visual system, and so that any neuroscientist can meaningfully integrate global accounts of visual perception into his/her model once it is incorporated into the NRP experiment. For example, our Laminart model (a spiking model of early visual processing for visual segmentation – Francis et al., 2017 [1]), presented here, needs to send spreading signals locally to initiate the parsing of visual information into several segmentation layers. For now, these signals are sent by hand. To gain generality, the model needs bottom-up (or top-down) influence on where these signals are sent. It would thus be very interesting for us to send these signals according to the output of a saliency computation model. The Laminart model could then, for example, form a non-retinotopic representation of a moving object by constantly sending signals around the saliency peaks computed by the saliency model of CDP4.
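As a toy illustration of that last idea, the strongest peaks of a saliency map could be extracted and used as seed locations for the segmentation signals that are currently placed by hand. This is only a hedged sketch: the function name and the greedy peak-picking scheme are our own illustrative choices, not part of either the Laminart or the CDP4 saliency model.

```python
import numpy as np

def saliency_peaks(sal_map, n_peaks=2, suppress_radius=5):
    """Pick the n strongest, mutually distant maxima of a 2-D saliency map.

    The returned (row, col) locations could serve as seed points for
    segmentation signals, replacing the hand-placed signals.
    """
    sal = sal_map.astype(float).copy()
    peaks = []
    for _ in range(n_peaks):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        peaks.append((int(y), int(x)))
        # suppress a neighbourhood so the next peak lies elsewhere
        y0, y1 = max(0, y - suppress_radius), y + suppress_radius + 1
        x0, x1 = max(0, x - suppress_radius), x + suppress_radius + 1
        sal[y0:y1, x0:x1] = -np.inf
    return peaks
```

In the envisioned experiment, the same selection would simply be repeated on every new saliency map, so the seed points follow a moving object across frames.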


  1. Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.

Cerebellar Adaptation in a Vestibulo-Ocular Reflex Task

Embodiment allows biologically plausible brain models to be tested in realistic environments, where they receive feedback similar to that of real life or of behavioural experimental set-ups. By adding dynamic synapses, researchers can observe the effect that behavioural adaptation has on the evolution of the network state, and vice versa. The Neurorobotics Platform (NRP) notably boosts the embodiment of brain models in challenging tasks, allowing neuroscientists to skip the technical issues of implementing the simulation of the scene.

One of the nervous centres that has traditionally received most attention in neuroscience is the cerebellum. It has repeatedly been shown to play a critical role in learning tasks that involve temporally precise movements, and its influence on eye-movement control has received frequent experimental support. Although studies of patients with cerebellar damage show that the cerebellum is also involved in complex tasks, such as limb coordination and manipulation, eye-movement control involves a neural circuitry that is simpler and better characterized. However, many questions remain open about how the cerebellum manages to control eye movements with such astonishing accuracy.

Researchers from the University of Granada aim to study the cerebellar role in an “embodied cognition” scenario in which the cerebellum is responsible for solving and facilitating the body's interaction with the environment. To that aim, they have set up a behavioural task, the vestibulo-ocular reflex (VOR); a neural structure facilitating the neural interaction, the cerebellar model; and a front-end body, the humanoid iCub robot.

In particular, two hypotheses are to be tested with the proposed model: (i) VOR phase adaptation due to parallel-fibre plasticity (one of the main plastic synapses in the cerebellar cortex) [1], and (ii) learning consolidation and gain adaptation in VOR experiments thanks to synaptic plasticity in the deep cerebellar nuclei [2].

They have modelled the neural basis of VOR control to provide a mechanistic understanding of the cerebellar functionality, which plays a key role in VOR adaptation. On the one hand, this modelling work aims at cross-linking behavioural-level and neural-level data on the VOR. On the other hand, by simulating impairments of VOR control, they will examine the possible consequences for the processing capabilities of the vestibular system. This approach may provide hints, or novel hypotheses, for better interpreting experimental data gathered in VOR testing.
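The gain-adaptation hypothesis can be caricatured in a few lines: during head rotation, residual image motion (retinal slip) acts as an error signal that drives the VOR gain toward the value that stabilises the image. This is a purely didactic sketch under our own simplifying assumptions; the actual model uses spike-timing-dependent plasticity distributed across cerebellar sites [2], not a scalar learning rule.

```python
def adapt_vor_gain(head_vel, g0=0.3, lr=0.05, n_trials=200):
    """Toy error-driven adaptation of the VOR gain.

    Each trial: the eye counter-rotates with gain g, the remaining
    retinal slip nudges g toward full image stabilisation (g -> 1).
    Parameters are illustrative, not fitted to any experiment.
    """
    g = g0
    history = []
    for _ in range(n_trials):
        eye_vel = -g * head_vel           # VOR: eye counter-rotates the head
        slip = head_vel + eye_vel         # residual image motion (the error)
        if head_vel != 0.0:
            g += lr * (slip / head_vel)   # reduce the slip on the next trial
        history.append(g)
    return history
```

A consolidation mechanism in the deep cerebellar nuclei, as in hypothesis (ii), would additionally make the learned gain persist once the error signal is removed.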

[1] Clopath, C., Badura, A., De Zeeuw, C. I., & Brunel, N. (2014). A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice. Journal of Neuroscience, 34(21), 7203-7215.

[2] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in Computational Neuroscience, 10.

Jesús A. Garrido, Francisco Naveros, Niceto R. Luque and Eduardo Ros. University of Granada.

Practical lab course on the Neurorobotics Platform @KIT

This semester, for the first time, the Neurorobotics Platform will be used as a teaching tool for students interested in embodied artificial intelligence.

The lab course, offered by FZI in Karlsruhe, started last week for KIT students. Previously, instead of this practical class, we offered a seminar where students did literature research on neurorobotics and learning. The seminars attracted around 10 students per semester, but this year more than 20 students registered for the practical lab course, most of them master's students.



The initial meeting took place last week. The students were split into seven groups of three. Their first task: to familiarize themselves with the NRP and PyNN by solving the tutorial baseball experiment and the provided Python notebook exercises. All groups were given USB sticks with a live boot so they can easily install the NRP, as well as access to an online version. Throughout the semester, students will learn about neurorobotics and the platform by designing challenges and solving them.

Organizers: Camilo Vasquez Tieck, Jacques Kaiser, Martin Schulze, Lea Steffen

Self-Adaptation in Modular Robots at the HBP Summit

During the last few days, at the annual Human Brain Project Summit, we had the chance to show the public some of our experiments.


All these experiments are based on the same concept: a biomimetic control architecture based on the modularity of the cerebellar circuit, integrated by means of machine learning and a spiking cerebellum model that allows the system to adapt to and manage changes in its dynamics.

Shown here is one of the two experiments used in the demo on the first day of the summit. In the “iCub ball balancing” experiment (implemented on the NRP), the iCub robot learns in real time to control the system, fulfilling the task for up to 4 joints. The scalability of the system allows changing the number of actuated joints, demonstrating the modular and robust nature of the control architecture.



In the second experiment, we tested the same control architecture on the real modular robot Fable by Shape Robotics. This time, the spiking cerebellar model was implemented on the neuromorphic platform SpiNNaker.


CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back in the beginning of 2017, we had a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 – visuo-motor integration. During this presentation we also showed how to integrate any deep learning model into the Neurorobotics Platform, as previously presented at the Young Researcher Event by Kenny Sharma.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.