
A first test of the upgraded M-Platform


The M-Platform is a robotic device for mice that mimics a human robotic device for upper-limb stroke rehabilitation (the “arm-guide”) [1]. The platform allows head-fixed mice to carry out intensive and highly repeatable forelimb exercises, specifically repeated sessions of forelimb retraction [2]. The latest upgrade of the M-Platform is a new component that applies a variable level of friction to the slide (FIG 1).

(FIG 1) The new component of the M-Platform used to finely control the static friction acting on the slide movement. It is composed of a felt pad contacting the slide (2), moved by a screw connected to a servo motor (1) and controlled by a microcontroller. The working area of the animal (3) is not obstructed by the new component.

To test the upgraded M-Platform, an experimental protocol was designed. The experimental group consists of mice performing a two-week training with high friction (0.5 N), compared to a control group trained with lower friction (0.3 N). We also measured the isometric force during the pulling task. First results show higher isometric force and better performance for the high-friction animals compared to controls (FIG 2).


(FIG 2) Preliminary results of a protocol to evaluate the effect of friction in the pulling task (trials). The protocol consists of two weeks of training, 10 trials/day for 4 days, for two groups of animals. A slight change in the training condition can modify the strength exerted by healthy animals.

This experiment was deliberately designed to use healthy rather than injured (e.g. stroke) animals. In this way, a translation to the complete NRP model (which also comprises a point-neuron model of the motor cortex and a biomechanical model) should be more feasible.

[1] Reinkensmeyer DJ, Kahn LE, Averbuch M, McKenna-Cole A, Schmit BD, Rymer WZ (2000). Understanding and treating arm movement impairment after chronic brain injury: progress with the ARM guide. J Rehabil Res Dev 37: 653-662.
[2] Spalletti C, Lai S, Mainardi M, Panarese A, Ghionzoli A, Alia C, Gianfranceschi L, Chisari C, Micera S, Caleo M (2014). A robotic system for quantitative assessment and poststroke training of forelimb retraction in mice. Neurorehabil Neural Repair 28: 188-196.

Collaboration between scientists and developers towards integration work in the NRP

Visual-motor coordination is a key research field for understanding our brain and for developing new brain-like technologies.

To address the development and evaluation of bio-inspired control architectures based on cerebellar features, SP10 scientists and developers are collaborating in the implementation of several experiments in the Neurorobotics Platform.

Ismael Baira Ojeda from the Technical University of Denmark (DTU) visited the Scuola Superiore Sant’Anna (Pisa, Italy) to integrate the Adaptive Feedback Error Learning (AFEL) architecture [1] into the Neurorobotics Platform using the iCub humanoid robot. This control architecture combines machine learning techniques with cerebellar-like microcircuits to provide an optimized input space [2], fast learning and accurate motor control of robots. In the experiment, the iCub was commanded to balance a ball toward the center of a board held in its hand.

The experiment was later refined and finished during the Install Party hosted by Fortiss (April 2017).

As a next step, the AFEL architecture could be scaled up and combined with vision and motor control breakthroughs from the different SPs.

Thanks to all the scientists and developers for your support, especially Lorenzo Vannucci, Alessandro Ambrosano and Kenny Sharma!

iCub ball balancing
The prototype experiment running on the Neurorobotics Platform.


[1] Tolu, S., Vanegas, M., Luque, N. R., Garrido, J. A., & Ros, E. (2012). Bio-inspired adaptive feedback error learning architecture for motor control. Biological Cybernetics, 1-16.

[2] Vijayakumar, S., D’souza, A., & Schaal, S. (2005). Incremental online learning in high dimensions. Neural Computation, 17(12), 2602-2634.

Sensory models for the simulated mouse in the NRP

A biologically inspired translation model for proprioceptive sensory information was developed. The translation is achieved by implementing a computational model of the neural activity of type Ia and type II sensory fibers connected to muscle spindles. The model also includes the activity of both static and dynamic gamma-motoneurons, which provide fusimotor activation capable of regulating the sensitivity of the proprioceptive feedback through the contraction of specific intrafusal fibers (Proske, 1997 [1]).

Figure 1 Intrafusal fibers

The proposed model is an extension of a state-of-the-art computational model of muscle spindle activity (Mileusnic, 2006 [2]). The model developed by Mileusnic and colleagues, albeit complete and validated against neuroscientific data, is entirely rate-based, so it was modified to allow integration into a spiking neural network simulation. In particular, a spike integration technique was employed to compute the fusimotor activation, and the generated rate was used to produce spike trains.
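The two conversions involved can be sketched as follows. This is a generic illustration, with illustrative constants, of integrating incoming spikes into a smooth activation value and of turning an output rate back into a spike train; it is not the code of the published spindle model.

```python
import numpy as np

# Generic rate/spike conversions of the kind a rate-based model needs
# when embedded in a spiking simulation. Time constants and rates are
# illustrative assumptions.

dt = 0.001          # simulation step (s)
tau = 0.05          # integration time constant (s)

def integrate_spikes(spike_counts, tau=tau, dt=dt):
    """Leaky integration of per-step spike counts into an activation trace.
    Each spike adds 1/tau, so the steady-state activation equals the input
    rate in Hz."""
    act, trace = 0.0, []
    for s in spike_counts:
        act += dt * (-act / tau) + s / tau
        trace.append(act)
    return np.array(trace)

def rate_to_spikes(rates_hz, dt=dt, rng=np.random.default_rng(0)):
    """Poisson-like spike generation: one Bernoulli draw per step."""
    return (rng.random(len(rates_hz)) < np.asarray(rates_hz) * dt).astype(int)

# Example: a constant 80 Hz afferent rate over one second
spikes = rate_to_spikes(np.full(1000, 80.0))
activation = integrate_spikes(spikes)
```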

The proprioceptive model is implemented on NEST (code available here), to provide easy integration into the NRP, and on SpiNNaker, to support real-time robotic applications. The proposed component can be coupled both to biomechanical models, such as musculoskeletal systems, and to common robotic platforms (via suitable conversions from encoder values to simulated muscle length). In particular, this model will be used, as part of CDP1, to provide sensory feedback from the virtual mouse body.

Results of this work have been published in this article:

Vannucci, Lorenzo, Egidio Falotico, and Cecilia Laschi. “Proprioceptive Feedback through a Neuromorphic Muscle Spindle Model.” Frontiers in Neuroscience 11 (2017): 341.

[1] Proske, U. (1997). The mammalian muscle spindle. Physiology, 12(1), 37-42.

[2] Mileusnic, M. P., Brown, I. E., Lan, N., & Loeb, G. E. (2006). Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology, 96(4), 1772-1788.

The virtual M-Platform

Preclinical animal studies can contribute significantly to our knowledge of brain function and neuroplastic mechanisms (i.e. the structural and functional changes of neurons following internal or external stimuli). For example, an insult such as a cortical infarct (i.e. stroke) produces a similar cascade of neural changes in both human and animal (e.g. monkey, rodent) brains, and further stimuli, such as the input provided during rehabilitative training, can have a comparable impact. Exploiting this neural plasticity by combining treatments with technologically advanced methods (e.g. robot-based therapy) is one goal that the HBP is pursuing.

The Neurorobotics Platform is fully part of this picture, providing an environment that will be an important benchmark for these studies. Two labs from the Scuola Superiore Sant’Anna in Pisa are working closely together to develop a virtual model of an experiment carried out in a real neuroscientific setting. The core of this setup is the M-Platform (Spalletti and Lai et al. 2013), a device that trains mice to perform a retraction (pulling) task with their forelimb (Figure 1A). Over the last months, the device has been characterized and upgraded to improve its repeatability (Figure 1B). Meanwhile, a first version of the virtual M-Platform (Figure 1C) has been developed.

Figure: The real M-Platform (A); the CAD design of the main components of the M-Platform, i.e. actuation and sensing (B); and its virtual model in the NRP (C)

The main components of the M-Platform (i.e. linear actuator, linear slide, handle) have been converted into a format suitable for the Gazebo simulator. Model properties such as link weights, joint limits and friction have been adjusted according to the real characteristics of the slide. The actuator is driven by a PID controller whose parameters have been tuned to reproduce the behavior of the real motor.
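For readers unfamiliar with how such an actuator controller works, the following is a minimal sketch of a position PID loop driving a toy one-dimensional slide. The gains and the simplified dynamics (unit mass, viscous friction) are illustrative assumptions, not the tuned values of the actual model.

```python
# Minimal PID position controller of the kind used to drive a simulated
# linear actuator. Gains and plant parameters are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy 1-D slide: the controller force moves a unit mass with viscous friction.
dt = 0.001
pid = PID(kp=50.0, ki=5.0, kd=10.0, dt=dt)
pos, vel = 0.0, 0.0
for _ in range(5000):                   # 5 s of simulated time
    force = pid.step(0.1, pos)          # target position: 0.1 m
    acc = force - 2.0 * vel             # unit mass, damping coefficient 2
    vel += acc * dt
    pos += vel * dt
```

In practice the gains are tuned against recorded trajectories of the real motor, so that the simulated slide reproduces the measured step responses.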

A simple experiment has thus been designed in the NRP (currently installed on a local machine) to test the behavior of the resulting model. The experiment includes a 100-neuron brain model, divided into two populations of 90 and 10 neurons respectively. In this closed-loop experiment, the first population spikes randomly, and its spike rate is converted to a force value within a predefined range, compatible with the forces a mouse can exert with its forelimb.

The computed force values are continuously applied to the handle and can move the slide back to its starting position. Once there, the second population, wired to suppress the firing of the first population when active, is triggered, so that no force acts on the slide anymore. The motor then pushes the slide out to the maximum extension position and returns to its starting position, letting the loop start again (see video).
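The rate-to-force conversion at the heart of this loop can be sketched in plain Python. The rate ceiling and force bounds below are illustrative placeholders, not the values used in the experiment, and the gating function is a simplification of the actual population wiring.

```python
# Sketch of the closed-loop mapping described above: the spike rate of
# the 90-neuron population is linearly rescaled into a bounded force,
# and the 10-neuron population gates the drive. All bounds are
# illustrative assumptions.

def rate_to_force(rate_hz, rate_max=100.0, f_min=0.0, f_max=0.6):
    """Map a population firing rate (Hz) onto a bounded force (N)."""
    clipped = max(0.0, min(rate_hz, rate_max))
    return f_min + (f_max - f_min) * clipped / rate_max

def effective_force(excitatory_rate, inhibition_active):
    """When the inhibitory population fires, the drive drops to zero."""
    return 0.0 if inhibition_active else rate_to_force(excitatory_rate)
```

In the NRP this mapping would live in a Transfer Function connecting the spike sink of the brain model to the force applied on the handle.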

SP9 Quarterly in-person meeting

We are closely collaborating with SP9 (Neuromorphic hardware) to support big networks in real time. On the 20th and 21st of March 2017, we participated in the SP9 Quarterly in-person meeting to present the Neurorobotics Platform and our integration of SpiNNaker.

During the meeting, we identified MUSIC as a single interface between our platform and both the SP7 supercomputers and SpiNNaker. We also pointed out the features we are missing in MUSIC to keep the Neurorobotics Platform interactive, most importantly dynamic ports and reset.

We also presented some of the complex learning rules we are working on, to help SP9 identify user requirements for the SpiNNaker 2 design. We were surprised to learn that one of the most complicated learning rules we are working on (SPORE, derived by David Kappel in Prof. Maass's group) is also used as a benchmark for SpiNNaker 2 by Prof. Mayr. This reward-based learning rule can be used to train arbitrary recurrent networks of spiking neurons. Confident that it will play an important role in SGA2, we sent our master's student Michael Hoff from FZI Karlsruhe to TU Graz to use this rule in a robotic setup.

Reservoir computing for generic motor signal generation

Cyclic movements, for instance in locomotion, can be driven by cyclic neural activity produced by so-called central pattern generators (CPGs). CPGs have been observed at the spinal cord level, even in neural networks isolated from the brain and from sensorimotor feedback. The speed of CPG-controlled locomotion, including shifts of gait type, can be controlled by simple high-level signals, such as tonic electrical stimulation of the brain stem. At the spinal cord level, sensorimotor feedback is integrated to fine-tune the motor signals to the environment.

To integrate higher-level commands with sensory/body feedback for motor signal generation, we are developing a control system based on reservoir computing (see figure below). The reservoir consists of randomly connected populations of spiking neurons. Inputs to the reservoir are, on the one hand, a generic periodic signal (modeling the high-level command) and, on the other hand, sensory/body feedback from the robotic body to be controlled. The reservoir computing paradigm allows for straightforward extraction of the desired motor signals from the resulting reservoir activity.
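The readout principle can be illustrated with a rate-based toy analogue. The real system uses populations of spiking neurons and adds sensory feedback as a second input; the echo-state sketch below, with illustrative sizes and scalings, only shows how a linear readout extracts a motor-like signal from the activity of a random recurrent network driven by a periodic command.

```python
import numpy as np

# Rate-based echo-state sketch of the reservoir computing scheme:
# a random recurrent network driven by a periodic "high-level" signal,
# with a ridge-regression readout trained to produce a motor-like
# (rectified) target. Sizes and scalings are illustrative assumptions.

rng = np.random.default_rng(1)
N, T = 200, 1200
W = rng.standard_normal((N, N)) * 0.9 / np.sqrt(N)  # spectral radius ~0.9
W_in = rng.standard_normal(N) * 0.5                 # input weights
b = rng.standard_normal(N) * 0.2                    # bias breaks odd symmetry

t = np.arange(T)
u = np.sin(2 * np.pi * t / 100)          # periodic high-level command
target = np.maximum(u, 0.0)              # motor-like rectified signal

x = np.zeros(N)
states = np.empty((T, N))
for k in range(T):                       # run the reservoir
    x = np.tanh(W @ x + W_in * u[k] + b)
    states[k] = x

washout = 200                            # discard the initial transient
S, y = states[washout:], target[washout:]
w_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(N), S.T @ y)  # ridge readout
prediction = S @ w_out
mse = np.mean((prediction - y) ** 2)
```

Only the readout weights are trained; the recurrent weights stay fixed, which is what makes the extraction of new motor signals from the same reservoir straightforward.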


In a future blog post the physical and virtual robotic platform to conduct these experiments will be presented.

Static Validation & Verification for Neurorobotics Experiments: Aims and Scope

Validation and Verification (V&V) techniques have been widely used to ensure that simulation results accurately predict reality. Since the NRP is a platform for simulating a neural network embodied in a robot, validating that simulation results transfer to reality is an intrinsic problem: validating that a given neural network, connected to a robot in a specific way, produces the desired results is the very purpose of the NRP.

However, as of today, these validation tasks are performed entirely dynamically, i.e. by actually simulating the experiments. In this series of blog posts, we investigate how this dynamic validation can be supported by static validation and verification activities.

We see the following advantages:

  • The neuroscientist gets early feedback on their experiment. Because a static validation or verification is independent of a concrete simulation, the analysis can be performed before the experiment is actually simulated. This aids the design of neurorobotics experiments, especially without a running simulation. For the NRP, where the editors are currently only available within a running simulation, this means we could validate, for example, Transfer Functions before they are applied to the simulation.
  • The simulation platform uses resources more efficiently, as no simulation resources are acquired for experiments that cannot run. As of today, this advantage is not significant, since users can only change an experiment from within a simulation (unless they are willing to edit the plain XML models), but in the future it will be an important goal.
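As a toy illustration of such an early static check, the following sketch inspects the source of a Transfer Function with Python's `ast` module and flags names that are read but never defined, before anything is simulated. This is a simplified stand-in written for this post, not the NRP's actual validation code, and the example Transfer Function is hypothetical.

```python
import ast
import builtins

# Toy static check: parse a function's source and report names that are
# read but are neither parameters, local assignments, nor builtins.
# A simplified stand-in for static Transfer Function validation.

def undefined_names(source):
    func = ast.parse(source).body[0]          # assumes one top-level function
    defined = {a.arg for a in func.args.args} | set(dir(builtins))
    for node in ast.walk(func):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            defined.add(node.id)
    return sorted({
        node.id
        for node in ast.walk(func)
        if isinstance(node, ast.Name)
        and isinstance(node.ctx, ast.Load)
        and node.id not in defined
    })

# Hypothetical Transfer Function with a flaw detectable without simulating:
tf_source = """
def transfer_function(spike_rate):
    force = gain * spike_rate   # 'gain' is never defined anywhere
    return force
"""
```

Running `undefined_names(tf_source)` reports `gain` as an error, without ever acquiring simulation resources. Real static checks would go further, e.g. verifying that referenced neuron populations and robot topics actually exist in the experiment models.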

Static V&V techniques argue over all possible execution paths of an experiment. The ability of a neural network to learn and adapt to new situations, as well as the complexity of a robot's interactions with its environment, make it infeasible to reason about single execution paths. Therefore, static V&V techniques are mostly restricted to properties that hold on all execution paths, in particular the detection of erroneous parts.

The aim of static V&V in the context of neurorobotics must therefore be to find neurorobotics experiments that are erroneous on all executions, i.e. experiments that include flaws such that we know the experiment is not going to work, regardless of the exact behavior of the neural network.

For a successful validation and verification, we need to look at the three main artifacts in a neurorobotics simulation:

  • The neural network
  • The robot
  • The Transfer Functions that connect the two.

These parts will be examined in more detail in future blog posts on this subject.