
Collaboration between scientists and developers towards integration work in the NRP

Visual-motor coordination is a key research field for understanding our brain and for developing new brain-like technologies.

To address the development and evaluation of bio-inspired control architectures based on cerebellar features, SP10 scientists and developers are collaborating in the implementation of several experiments in the Neurorobotics Platform.

Ismael Baira Ojeda from the Technical University of Denmark (DTU) visited the Scuola Superiore Sant’Anna (Pisa, Italy) to integrate the Adaptive Feedback Error Learning (AFEL) architecture [1] into the Neurorobotics Platform using the iCub humanoid robot. This control architecture combines machine learning techniques with cerebellar-like microcircuits to provide an optimized input space [2], fast learning and accurate motor control of robots. In the experiment, the iCub was commanded to balance a ball towards the center of a board held in its hand.
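The core idea of feedback error learning, on which AFEL builds, can be sketched in a few lines: a fixed feedback controller guarantees baseline performance, while an adaptive module learns a feedforward command using the feedback controller's output as its teaching signal. The following Python sketch is purely illustrative; the class, gains and learning rate are invented for this example and are not taken from the actual experiment.

```python
import numpy as np

class LinearAdaptiveModule:
    """Toy stand-in for the learned cerebellar/ML module (illustrative only)."""
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return self.w @ x

    def update(self, x, teaching_signal):
        # Feedback error learning rule: the feedback command is the error signal.
        self.w += self.lr * teaching_signal * x

def fel_step(module, x, target, actual, kp=0.5):
    u_fb = kp * (target - actual)   # fixed feedback controller
    u_ff = module.predict(x)        # learned feedforward command
    module.update(x, u_fb)          # feedback output trains the module
    return u_ff + u_fb              # total motor command
```

As learning proceeds, the feedforward module takes over and the feedback term shrinks towards zero, which is the signature of feedback error learning.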

The experiment was later refined and finished during the Install Party hosted by Fortiss (April 2017).

Next, the AFEL architecture could be scaled up and combined with vision and motor control breakthroughs within the different SPs.

Thanks to all the scientists and developers for your support, especially Lorenzo Vannucci, Alessandro Ambrosano and Kenny Sharma!

iCub ball balancing
The prototype experiment running on the Neurorobotics Platform.

References:

[1] Tolu, S., Vanegas, M., Luque, N. R., Garrido, J. A., & Ros, E. (2012). Bio-inspired adaptive feedback error learning architecture for motor control. Biological Cybernetics, 1-16.

[2] Vijayakumar, S., D’souza, A., & Schaal, S. (2005). Incremental online learning in high dimensions. Neural Computation, 17(12), 2602-2634.

Sensory models for the simulated mouse in the NRP

A biologically inspired translation model for proprioceptive sensory information was developed. The translation is achieved by implementing a computational model of the neural activity of type Ia and type II sensory fibers connected to muscle spindles. The model also includes the activity of both static and dynamic gamma-motoneurons, which provide fusimotor activation capable of regulating the sensitivity of the proprioceptive feedback through the contraction of specific intrafusal fibers (Proske, 1997)¹.

spindle
Figure 1 Intrafusal fibers

The proposed model is an extension of a state-of-the-art computational model of muscle spindle activity (Mileusnic, 2006)². The model developed by Mileusnic and colleagues, albeit complete and validated against neuroscientific data, is entirely rate-based, so it was modified for integration into a spiking neural network simulation. In particular, a spike integration technique was employed to compute the fusimotor activation, and the resulting rate was used to generate spike trains.

The proprioceptive model is implemented in NEST, for easy integration into the NRP, and on SpiNNaker, to support real-time robotic applications. The proposed component can be coupled both to biomechanical models, such as musculoskeletal systems, and to common robotic platforms (via suitable conversions from encoder values to simulated muscle length). In particular, this model will be used, as part of CDP1, to provide sensory feedback from the virtual mouse body.
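As a rough illustration of the final stage of such a model, turning a rate-based spindle output into spikes that a spiking network can consume, consider the sketch below. The gain values and the Poisson spike generation are assumptions made for this example, not the actual NEST/SpiNNaker implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def spindle_rate(stretch, stretch_velocity, k_static=50.0, k_dynamic=20.0):
    """Toy Ia-like firing rate (Hz): position plus velocity sensitivity.
    The gains are illustrative placeholders, not fitted parameters."""
    return max(0.0, k_static * stretch + k_dynamic * stretch_velocity)

def rate_to_spikes(rate_hz, duration_s, dt=0.001):
    """Poisson spike generation: in each time step of length dt,
    emit a spike with probability rate * dt."""
    n_steps = int(duration_s / dt)
    return rng.random(n_steps) < rate_hz * dt

# Example: 60 Hz spindle output converted into a 2-second spike train.
spikes = rate_to_spikes(spindle_rate(1.0, 0.5), duration_s=2.0)
```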

¹ Proske, U. (1997). The mammalian muscle spindle. Physiology, 12(1), 37-42.

² Mileusnic, M. P., Brown, I. E., Lan, N., & Loeb, G. E. (2006). Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology, 96(4), 1772-1788.

The virtual M-Platform

Preclinical animal studies can contribute significantly to our knowledge of brain function and neuroplastic mechanisms (i.e. the structural and functional changes of neurons following internal or external stimuli). For example, an external event such as a cortical infarct (i.e. a stroke) produces a similar cascade of neural changes in both human and animal (e.g. monkey, rodent) brains, and further stimuli, such as the input provided during rehabilitative training, can have a comparable impact. Exploiting this neural plasticity by combining treatments with technologically advanced methods (e.g. robot-based therapy) is one goal that the HBP is pursuing.

The Neurorobotics Platform is fully part of this picture, providing an environment that will be an important benchmark for these studies. Two labs from the Scuola Superiore Sant’Anna, in Pisa, are working closely to develop a virtual model of an experiment carried out in a real neuroscientific environment. The core of this setup is the M-Platform (Spalletti and Lai et al. 2013), a device able to train mice to perform a retraction-pulling task with their forelimb (Figure 1A). Over the last months, the device has been characterized and upgraded to improve its repeatability (Figure 1B). Meanwhile, a first example of the virtual M-Platform (Figure 1C) has been developed.

nrp_mplatfom
Figure: The real M-Platform (A); the CAD design of the main components of the M-Platform, i.e. actuation and sensing (B); and its virtual model in the NRP (C)

The main components of the M-Platform (i.e. the linear actuator, linear slide and handle) have been converted into a format suitable for the Gazebo simulator. Properties of the model such as link weights, joint limits and friction have been adjusted according to the real characteristics of the slide. The actuator was connected to a PID controller whose parameters have been tuned to reproduce the behavior of the real motor.
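For illustration, a discrete PID controller of the kind used to drive the simulated actuator can be written in a few lines; the gains below are placeholders for this sketch, not the values tuned against the real motor.

```python
class PID:
    """Minimal discrete PID controller (illustrative gains only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Tuning then amounts to adjusting `kp`, `ki` and `kd` until the simulated actuator's step response matches that of the real motor.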

A simple experiment has thus been designed in the NRP (currently installed on a local machine) to test the behavior of the obtained model. The experiment includes a brain model of 100 neurons, divided into two populations of 90 and 10 neurons respectively. In this closed-loop experiment, the first population spikes randomly, and its spike rate is converted to a force value picked from a predefined range compatible with the forces a mouse can exert with its forelimb.

The computed force values are continuously applied to the handle and move the slide towards the starting position. Once there, the second neural population, wired to suppress the spiking of the first population when active, is triggered, so that no more force acts on the slide. The motor then pushes the slide out to the maximum extension position and brings it back to its starting position, letting the loop start again (see video).
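The closed-loop logic described above can be summarized in a small sketch; the force range, rate scaling and function names are invented for illustration and do not reproduce the actual NRP transfer functions.

```python
import numpy as np

F_MIN, F_MAX = 0.0, 0.5   # illustrative force range (N), not the real bounds
RATE_MAX = 100.0          # firing rate (Hz) mapped to the maximum force

def rate_to_force(pull_rate_hz, suppress_active):
    """Map the first population's firing rate linearly onto a bounded
    force range; activity in the second population gates the force off."""
    if suppress_active:
        return 0.0
    rate = np.clip(pull_rate_hz, 0.0, RATE_MAX)
    return F_MIN + (F_MAX - F_MIN) * rate / RATE_MAX
```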

SP9 Quarterly in-person meeting

We are closely collaborating with SP9 (Neuromorphic hardware) to support big networks in real time. On the 20th and 21st of March 2017, we participated in the SP9 Quarterly in-person meeting to present the Neurorobotics Platform and our integration of SpiNNaker.

SP9
During the meeting, we identified MUSIC as a single interface between our platform and both the supercomputers from SP7 and SpiNNaker. We also pointed out the features we are missing in MUSIC to keep the Neurorobotics Platform interactive, most importantly dynamic ports and reset.

We also presented some complex learning rules we are working on to help SP9 identify user requirements for the SpiNNaker 2 design. We were surprised to learn that one of the most complicated learning rules we are working on – SPORE, derived by David Kappel in Prof. Maass’ group – is also used as a benchmark for SpiNNaker 2 by Prof. Mayr. This reward-based learning rule can be used to train arbitrary recurrent networks of spiking neurons. Confident that it will play an important role in SGA2, we sent our master’s student Michael Hoff from FZI, Karlsruhe to TU Graz to use this rule in a robotic setup.

Reservoir computing for generic motor signal generation

Cyclic movements, for instance in locomotion, can be driven by cyclic neural activity produced by so-called central pattern generators (CPGs). CPGs have been observed at the spinal cord level and even in neural networks isolated from the brain and from sensorimotor feedback. The speed of CPG-controlled locomotion, including the shift of gait type, can be controlled by simple high-level signals, such as tonic electrical stimulation of the brain stem. At the spinal cord level, sensorimotor feedback is integrated to fine-tune the motor signals to the environment.
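The basic behavior of a CPG – rhythmic output whose frequency is set by a single high-level drive signal – can be illustrated with two coupled phase oscillators that lock into antiphase, like the two legs of a walking gait. This toy model is for illustration only and is not a network used in the platform.

```python
import numpy as np

def cpg_step(phases, drive, dt=0.01, coupling=2.0):
    """Advance two phase oscillators by one Euler step.
    `drive` is the high-level command setting the frequency (Hz);
    the coupling term pulls the oscillators towards antiphase."""
    base = 2 * np.pi * drive
    d0 = base + coupling * np.sin(phases[1] - phases[0] - np.pi)
    d1 = base + coupling * np.sin(phases[0] - phases[1] - np.pi)
    return phases + dt * np.array([d0, d1])

# Run the CPG; the motor signals would be e.g. np.sin(phases).
phases = np.array([0.0, 0.1])
for _ in range(5000):
    phases = cpg_step(phases, drive=1.0)
```

Raising or lowering `drive` speeds up or slows down the rhythm without changing the antiphase coordination, mirroring how tonic brain stem stimulation modulates locomotion speed.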

To integrate higher-level commands with sensor/body feedback for motor signal generation, we are developing a control system based on reservoir computing (see figure below). The reservoir consists of randomly connected populations of spiking neurons. Inputs to the reservoir are, on the one hand, a generic periodic signal (modeling the high-level command) and, on the other hand, sensor/body feedback from the robotic body to be controlled. The reservoir computing paradigm allows for straightforward extraction of desired motor signals from the resulting reservoir activity.
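As an illustration of the reservoir computing paradigm, the following sketch uses a rate-based echo state network, a common rate-based analogue of a spiking reservoir: random recurrent weights, a periodic input, and a linear readout trained by ridge regression. All sizes, scalings and the target waveform are illustrative assumptions, not the actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
W_in = rng.uniform(-1, 1, N)

def run_reservoir(inputs, leak=0.3):
    """Drive the random recurrent network and record its states."""
    x = np.zeros(N)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

# Periodic high-level command in, desired motor waveform out.
t = np.arange(0, 20, 0.01)
u = np.sin(2 * np.pi * 0.5 * t)
target = np.sin(2 * np.pi * 0.5 * t + 1.0) ** 3  # arbitrary target signal
X = run_reservoir(u)

# Linear readout trained by ridge regression (after a washout period).
washout = 200
X_fit, y_fit = X[washout:], target[washout:]
W_out = np.linalg.solve(X_fit.T @ X_fit + 1e-6 * np.eye(N), X_fit.T @ y_fit)
prediction = X_fit @ W_out
```

Only the readout weights `W_out` are trained; the recurrent weights stay fixed, which is what makes the extraction of motor signals from reservoir activity so straightforward.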

blogPostFigure

In a future blog post the physical and virtual robotic platform to conduct these experiments will be presented.

Static Validation & Verification for Neurorobotics Experiments: Aims and Scope

Validation and Verification (V&V) techniques have been widely used to make sure that simulation results accurately predict reality. As the NRP is a platform for simulating a neural network embodied in a robot, validating that simulation results can be transferred to reality is an intrinsic concern. In fact, validating that a given neural network, connected to a robot in a specific way, produces the desired results is the very purpose of the NRP.

However, as of today, these validation tasks are performed entirely dynamically, i.e. by actually simulating the experiments. In this series of blog posts, we investigate how this dynamic validation can be supported by static validation and verification activities.

For this, we see the following advantages:

  • Neuroscientists get early feedback on their experiments. Because static validation or verification is independent of a concrete simulation, the analysis can be performed before the code is actually simulated. This aids the design of neurorobotics experiments, especially without a running simulation. For the NRP, where the editors are currently only available within a running simulation, this means we could validate, for example, Transfer Functions before they are actually applied to the simulation.
  • The simulation platform uses resources more efficiently, as no simulation resources are acquired for experiments that cannot run. As of today, this advantage is not significant, since users can only change an experiment within a simulation unless they are willing to edit the plain XML models, but in the future this will be an important goal.

Static V&V techniques argue over all possible execution paths of an experiment. The ability of a neural network to learn and adapt to new situations, as well as the complexity of the interactions between a robot and its environment, make it infeasible to argue about single execution paths. Therefore, static V&V techniques are mostly restricted to statements that hold on all execution paths, and in particular to parts that are erroneous on every path.

Therefore, the aim of static V&V in the context of neurorobotics must be to find experiments that are erroneous for all possible executions, i.e. experiments that include flaws such that we know the experiment is not going to work, regardless of the exact behavior of the neural network.
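As a toy example of what such a static check could look like – invented here for illustration, not the NRP's actual validation tooling – consider verifying, before any simulation runs, that a Transfer Function only references neurons that exist in the declared brain model. Such a flaw makes the experiment fail on every execution, regardless of what the network learns.

```python
def validate_neuron_references(tf_neuron_indices, brain_size):
    """Statically check that every neuron index referenced by a Transfer
    Function exists in the brain model. Returns a list of error messages;
    an empty list means the check passed."""
    errors = []
    for idx in tf_neuron_indices:
        if not 0 <= idx < brain_size:
            errors.append(
                f"Transfer Function references neuron {idx}, "
                f"but the brain model only has neurons 0..{brain_size - 1}"
            )
    return errors
```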

For a successful validation and verification, we need to look at the three main artifacts in a neurorobotics simulation:

  • The neural network
  • The robot
  • The Transfer Functions that connect the two.

These parts will be looked at in more detail in future blog posts on this subject.

Real Saccades for Virtual Robots

Vision is a central theme of research in both robotics and neuroscience. Yet, even though the requirements faced by robots and humans that need to perceive their environments are quite similar (high resolution, low latency, wide field of view etc.), technical vision systems are fundamentally different from the human visual system. One particular reason for these differences is the special properties of the human eye.

A special characteristic of human vision is saccadic eye movements:

“Saccade refers to a rapid jerk-like movement of the eyeball which subserves vision by redirecting the visual axis to a new location.”

From John Findlay and Robin Walker (2012) Human saccadic eye movements. Scholarpedia, 7(7):5095.

Clearly, compared to a camera that is statically mounted next to a robot, human vision strongly relies on active actuation of the eyes. Importantly, the perspective on the scene changes after every eye movement which in turn influences the following saccades. Investigating computational models for saccadic eye movements therefore calls for a neurorobotics approach which directly captures this closed-loop interdependence between changing visual input and saccade generation.
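This closed-loop interdependence can be caricatured in a few lines: each saccade changes the visible part of the scene, which in turn determines the next saccade target. The one-dimensional "scene", the saliency values and the inhibition-of-return factor below are invented for illustration and are unrelated to the CDP4 model.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random(100)   # toy 1-D "world" of saliency values
gaze = 50                 # current gaze position (index into the scene)
FOV = 10                  # half-width of the simulated retina

for _ in range(5):
    # The retinal image depends on where the eye currently points.
    lo, hi = max(0, gaze - FOV), min(len(scene), gaze + FOV + 1)
    retina = scene[lo:hi]
    # Saccade to the most salient visible location.
    target = lo + int(np.argmax(retina))
    gaze = target
    # Crude inhibition of return, so the gaze keeps exploring.
    scene[target] *= 0.1
```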

To address the challenge of developing and evaluating realistic models of saccadic eye movements, SP10 is closely collaborating with other partners from the Human Brain Project in Co-Design Project 4 on Visuo-Motor Integration. In this project, Rainer Goebel and Mario Senden from Maastricht University are currently developing a neural model for saccade generation with NEST.

Last week, Mario visited us in Munich to integrate a first working version of the model into the Neurorobotics Platform together with SP10 colleague Florian Walter. After an introductory talk by Mario on visuo-motor integration, we directly started developing a new experiment for the Neurorobotics Platform that controls the eyes of our virtual iCub robot based on the output of the saccade model. This experiment, for the first time, interfaces a NEST model composed of analog non-spiking neurons with the Neurorobotics Platform. In future releases of the platform, these new neuron types will give both neuroscientists and roboticists even more freedom in defining their brain models.

In the next step, the saccade generation model will be connected to a salience map model to study saccades in complex visual scenes that are simulated on the Neurorobotics Platform.

Many thanks to the prompt support from the SP10 development team, especially Kenny Sharma!

Saccade-Experiment
The prototype experiment running on the Neurorobotics Platform.
Photo 2
Florian Walter (Technical University of Munich) and Mario Senden (Maastricht University) in front of the integrated prototype experiment on saccade generation.