Category: Neurorobotics

Build a fully functional visual system on the NRP

A collaboration arises from the joint goals of CDP4 (a co-designed project within the HBP whose goal is to merge several models of the ventral and dorsal streams of the visual system into a complete model of visuo-motor integration) and WP10.2 (a subpart of the Neurorobotics sub-project, SP10, that integrates many models for early visual processing and motor control on the NRP). The idea is to import everything that was done in CDP4 into an existing NRP experiment that already connects a model of early visual processing (visual segmentation, ventral stream) to a retina model (see here).

By connecting many models for different functions of the dorsal and ventral streams on the NRP, this experiment will build the basis of a complete functional model of vision that can be used by any virtual NRP experiment requiring a visual system (motor-control tasks, decision making based on visual cues, etc.). The first step of the project is to prove that the NRP provides an efficient tool to connect various models. Indeed, different models are built on very different frameworks and can be mutually incompatible. The NRP will thus provide a unique compatibility framework in which models can be connected easily. The current goal of the experiment is merely a proof of concept, so a very simplified version of a visual system will be built (see image below, and here, if you have access).

[Figure: WP10-2_CDP4_Experiment]

The functions of the visual system will be connected in a modular way, so that different models of a single visual function can be compared once embedded in a full visual system, and so that any neuroscientist can meaningfully place his or her model within a global account of visual perception, once it is incorporated into the NRP experiment. For example, our Laminart model (a spiking model of early visual processing for visual segmentation; Francis 2017 [1]), presented here, needs to send spreading signals locally to initiate the parsing of visual information into several segmentation layers. For now, these signals are placed by hand. To gain generality, the model needs bottom-up (or top-down) guidance on where to send them. It would thus be very interesting for us to send these signals according to the output of a saliency computation model. The Laminart model could then, for example, form a non-retinotopic representation of a moving object by constantly sending signals around the saliency peaks computed by the CDP4 saliency model.
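To make this concrete, below is a minimal sketch of what such a link could look like as an NRP transfer function (written as in the NRP transfer-function editor, where nrp, Topic and sensor_msgs are predefined). The topic name, the seg_signals population and the thresholding logic are hypothetical placeholders, not the actual CDP4 interface.

```python
# Hypothetical transfer function: inject segmentation signals into the
# Laminart network whenever the saliency model reports a strong peak.
@nrp.MapRobotSubscriber("saliency", Topic('/cdp4/saliency_map', sensor_msgs.msg.Image))
@nrp.MapSpikeSource("seg_signal", nrp.brain.seg_signals, nrp.dc_source)
@nrp.Robot2Neuron()
def saliency_to_segmentation(t, saliency, seg_signal):
    import numpy as np
    if saliency.value is None:
        return  # no saliency map published yet
    sal = np.frombuffer(saliency.value.data, dtype=np.uint8)
    # Drive the segmentation-signal population only around a clear peak
    seg_signal.amplitude = 1.0 if sal.max() > 128 else 0.0
```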

Citations:

  1. Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.

Cerebellar Adaptation in a Vestibulo-Ocular Reflex Task

Embodiment allows biologically plausible brain models to be tested in realistic environments, where they receive feedback similar to that of real life or behavioural experimental set-ups. By adding dynamic synapses, researchers can observe the effect that behavioural adaptation has on network state evolution, and vice versa. The Neurorobotics Platform (NRP) notably eases the embodiment of brain models in challenging tasks, allowing neuroscientists to skip the technical issues of implementing the scene simulation.

One of the nervous centres that has traditionally received the most attention in neuroscience is the cerebellum. It has repeatedly been shown to play a critical role in learning tasks involving temporally precise movements, and its influence on eye movement control has received frequent experimental support. Although studies of patients with cerebellar damage show that the cerebellum is also involved in complex tasks, such as limb coordination and manipulation, eye movement control involves a neural circuitry that is simpler and well characterised. Nevertheless, many questions remain open about how the cerebellum manages to control eye movements with such astonishing accuracy.

Researchers from the University of Granada aim to study the cerebellar role in an “embodied cognition” scenario in which the cerebellum is responsible for solving and facilitating the body's interaction with the environment. To that aim, they have set up a behavioural task (the vestibulo-ocular reflex, VOR), a neural structure mediating the interaction (the cerebellar model), and a front-end body (the humanoid iCub robot).

[Figure: VOR_Cerebellum_UGR_2]

In particular, two hypotheses are to be tested with the proposed model: (i) VOR phase adaptation due to plasticity at the parallel fibres (one of the main plastic synapses in the cerebellar cortex) [1], and (ii) learning consolidation and gain adaptation in VOR experiments thanks to synaptic plasticity at the deep cerebellar nuclei [2].
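As a rough illustration of these two plasticity sites, here is a simplified rate-based sketch in Python (not the authors' actual spiking implementation; learning rates and variable names are purely illustrative):

```python
import numpy as np

# Simplified rate-based sketch of the two plasticity sites under test.
# Not the actual spiking model; learning rates are illustrative only.

def update_pf_pc(w_pf, pf_activity, cf_error, ltd=1e-3, ltp=1e-4):
    """Hypothesis (i): parallel-fibre -> Purkinje-cell weights are depressed
    when climbing-fibre error signals coincide with parallel-fibre activity,
    and slowly potentiated otherwise (drives VOR phase adaptation)."""
    w_pf = w_pf - ltd * pf_activity * cf_error + ltp * pf_activity * (1.0 - cf_error)
    return np.clip(w_pf, 0.0, 1.0)

def update_mf_dcn(w_dcn, mf_activity, pc_activity, rate=1e-5):
    """Hypothesis (ii): mossy-fibre -> deep-cerebellar-nuclei weights grow
    when Purkinje inhibition is low, slowly consolidating the learned gain."""
    w_dcn = w_dcn + rate * mf_activity * (1.0 - pc_activity)
    return np.clip(w_dcn, 0.0, 1.0)
```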

They have modelled the neural basis of VOR control to provide a mechanistic understanding of the cerebellar functionality, which plays a key role in VOR adaptation. On the one hand, this modelling work aims at cross-linking VOR data at the behavioural and neural levels. By simulating impairments of VOR control, they will examine the possible consequences for the processing capabilities of the vestibular system in the VOR model. This approach may provide hints, or novel hypotheses, for better interpreting experimental data gathered in VOR testing.

[Figure: VOR_Cerebellum_UGR_1]

[1] Clopath, C., Badura, A., De Zeeuw, C. I., & Brunel, N. (2014). A cerebellar learning model of vestibulo-ocular reflex adaptation in wild-type and mutant mice. Journal of Neuroscience, 34(21), 7203-7215.

[2] Luque, N. R., Garrido, J. A., Naveros, F., Carrillo, R. R., D’Angelo, E., & Ros, E. (2016). Distributed cerebellar motor learning: a spike-timing-dependent plasticity model. Frontiers in Computational Neuroscience, 10.

Jesús A. Garrido, Francisco Naveros, Niceto R. Luque and Eduardo Ros. University of Granada.

Self-Adaptation in Modular Robots at the HBP Summit

During the last few days, at the annual Human Brain Project Summit, we had the chance to show some of our experiments to the public.

 

All these experiments are based on the same concept: a biomimetic control architecture that exploits the modularity of the cerebellar circuit. Everything is integrated by means of machine learning and a spiking cerebellum model, which allows the system to adapt to and manage changes in its dynamics.

Shown here is one of the two experiments demoed on the first day of the summit. In the “iCub ball balancing” experiment (implemented on the NRP), the iCub robot learns in real time to control the system, fulfilling the task with up to 4 actuated joints. The scalability of the system allows the number of actuated joints to be changed, demonstrating the modular and robust nature of the control architecture.

[Figure: icub_exp]

 

In the second experiment, we tested the same control architecture on the real modular robot Fable by Shape Robotics. This time, the spiking cerebellar model was implemented on the SpiNNaker neuromorphic platform.

[Figure: modular_spinn]

CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back at the beginning of 2017, we had a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 (visuo-motor integration). During this presentation, we also showed how to integrate any deep learning model into the Neurorobotics Platform, as already presented at the Young Researcher Event by Kenny Sharma.
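The general recipe is simple: load the network once at module scope, then run one inference per camera frame inside a transfer function. Below is a hedged sketch of that pattern (again written as in the NRP transfer-function editor, where nrp, Topic and sensor_msgs are predefined); the checkpoint path, tensor names and topics are assumptions, not the actual CDP4 code.

```python
import numpy as np
import tensorflow as tf
from cv_bridge import CvBridge

# Load the saliency network once, outside the per-frame callback
# (checkpoint path and tensor names are hypothetical).
bridge = CvBridge()
sess = tf.Session()
saver = tf.train.import_meta_graph('saliency_model.meta')
saver.restore(sess, tf.train.latest_checkpoint('.'))
inp = tf.get_default_graph().get_tensor_by_name('input:0')
out = tf.get_default_graph().get_tensor_by_name('saliency:0')

@nrp.MapRobotSubscriber("camera", Topic('/icub/left_eye_camera/image_raw', sensor_msgs.msg.Image))
@nrp.MapRobotPublisher("saliency", Topic('/cdp4/saliency_map', sensor_msgs.msg.Image))
@nrp.Robot2Neuron()
def compute_saliency(t, camera, saliency):
    if camera.value is None:
        return  # no camera frame yet
    frame = bridge.imgmsg_to_cv2(camera.value, 'rgb8').astype(np.float32) / 255.0
    sal = sess.run(out, feed_dict={inp: frame[np.newaxis]}).squeeze()
    saliency.send_message(bridge.cv2_to_imgmsg((sal * 255).astype(np.uint8), 'mono8'))
```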

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.

[Figure: deep-dive-cdp4nrp-saliency]

Optimising compliant robot locomotion using the HBP Neurorobotics Platform

If we want robots to become a part of our everyday life, future robot platforms will have to be safe and much cheaper than most useful robots are now. Safety can be obtained by making robots compliant, using passive elements (springs, soft elastic materials). Unfortunately, accurate mechanical (dynamic/kinematic) models of such robots are not available; in addition, especially when cheaper materials are used, their dynamical properties drift over time because of wear.

Therefore, cheap robots with passive compliance need adaptive control that is as robust as possible to mechanical and morphological variations. Adaptation training on each physical robot will still be necessary, but this should converge as quickly as possible.

To address these issues, the Tigrillo quadruped robot will be used to investigate closed-loop neural motor control for locomotion. In particular, we want to investigate how the NRP simulation framework can be used to develop such robust neural control.

As a first step, we implemented a parameterised Tigrillo simulation model generator. Using a simple script, a Gazebo simulation model with given body dimensions, mass distributions and spring constants can be generated for simulation in the NRP. We then implemented evolutionary optimisation (CMA-ES) in the NRP's Virtual Coach to find efficient motor control patterns, which were then generated with spiking population networks using a reservoir computing approach. Finally, these control patterns were transferred to the physical robot's SpiNNaker board and the resulting gaits were compared to the simulation results.
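For readers interested in the optimisation loop, here is a condensed sketch of how CMA-ES can be driven from the Python Virtual Coach client; the experiment id, the transfer-function name and the helpers make_tf_source() / get_distance_travelled() are hypothetical placeholders, not our actual setup.

```python
import time
import cma
from hbp_nrp_virtual_coach.virtual_coach import VirtualCoach

vc = VirtualCoach(environment='local')

def evaluate(params):
    """Run one simulated trial and return negative distance (CMA-ES minimises)."""
    sim = vc.launch_experiment('tigrillo_locomotion')        # hypothetical experiment id
    sim.edit_transfer_function('cpg_controller',             # hypothetical TF name
                               make_tf_source(params))       # hypothetical helper
    sim.start()
    time.sleep(20)                                           # fixed trial duration
    sim.pause()
    fitness = -get_distance_travelled(sim)                   # hypothetical helper
    sim.stop()
    return fitness

# 8 illustrative gait parameters (e.g. amplitudes, frequencies, phase offsets)
es = cma.CMAEvolutionStrategy(8 * [0.5], 0.2)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [evaluate(p) for p in candidates])
print(es.result.xbest)
```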

These steps are illustrated in the video below.

Next steps are:

  • to tune the parameter ranges of the Tigrillo generator to those that are realistic for the real robot;
  • to implement sensors on the physical robot and calibrate equivalent simulated sensors;
  • to use our setup to obtain the desired robust closed loop control and validate both qualitatively and quantitatively on the physical robot.

Many thanks to Gabriel Urbain, Alexander Vandesompele, Brecht Willems and prof. Francis wyffels for their input.

 

OpenSim support in the Neurorobotics Platform

A key area of research on the Neurorobotics Platform (NRP) is the in-silico study of sensorimotor skills and locomotion of biological systems. To simulate the physical environment and system embodiments, the NRP uses the Gazebo robotics simulator.

To perform biologically meaningful experiments, however, Gazebo has until now lacked an important feature: the ability to model and simulate musculoskeletal kinematics.

Therefore, researchers had to rely on ad-hoc implementations calculating effective joint torques for the system at hand, which is time-consuming, error-prone and cumbersome.

The physics plugin we implemented provides OpenSim as an additional physics engine alongside those already supported by Gazebo (ODE, Bullet, SimBody and DART). OpenSim uses SimBody as its underlying framework and thus features stable and accurate mechanical simulation. The OpenSim plugin supports many of SimBody's kinematic constraint types and implements collision detection for sphere, plane and triangle mesh shapes, along with the corresponding contact forces (as exposed by OpenSim's API).

First and foremost, however, it treats physiological muscle models as first-class citizens alongside rigid bodies and kinematic joints. OpenSim ships with a number of predefined muscle-tendon actuators. Currently, users of our plugin can use OpenSim's native XML configuration file format to specify the structure and properties of muscle-tendon systems, which are created on top of Gazebo models specified in Gazebo's own file format (SDF).

A ROS-based messaging interface provides accessors for excitations and other biophysical parameters, allowing musculoskeletal systems to be controlled from external applications such as the Neurorobotics Platform.
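For illustration, controlling the muscles from an external Python node could look roughly like this; the topic name and message type are assumptions, not the plugin's documented interface.

```python
import rospy
from std_msgs.msg import Float64MultiArray

# Hypothetical external controller: publish one excitation level per muscle.
rospy.init_node('muscle_driver')
pub = rospy.Publisher('/gazebo_muscle_interface/walker/excitations',
                      Float64MultiArray, queue_size=1)

rate = rospy.Rate(100)  # update excitations at 100 Hz
while not rospy.is_shutdown():
    msg = Float64MultiArray()
    msg.data = [0.5] * 8  # excitations in [0, 1], one per muscle
    pub.publish(msg)
    rate.sleep()
```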

As a demonstration of the capabilities of our physics plugin, we augmented a simple four-legged walker with a set of eight muscles (one synergist-antagonist pair per leg).

The problem we address in this demo is the reinforcement learning task of deriving a controller that excites the muscles in a pattern such that the walker is driven forward. Our setup consists of a Python application (remote-controlling Gazebo via the ROS-based messaging interface for the OpenSim plugin) performing the high-level optimization procedure and running a neural network (NN) controller.

We employ a simple genetic optimization procedure based on Python’s DEAP package to find parameters of the NN that maximize the score the walker obtains in individual trial runs.

The walker is rewarded for moving forward and penalized for unwanted motion behaviour (e.g. ground contacts of the walker's body, or moving off-centre).

During a trial run, the physics simulation is stepped in small time increments, and at each iteration the NN is fed with various state variables. The NN's output comprises the excitation levels for the muscles. For simplicity, we stuck to well-known artificial neural networks, implemented via the TensorFlow package.
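A condensed sketch of this setup is shown below; for brevity the controller is a plain NumPy feed-forward net rather than a TensorFlow graph, and run_trial() is a hypothetical stand-in for the stepping loop described above (all sizes and hyperparameters are illustrative):

```python
import random
import numpy as np
from deap import base, creator, tools, algorithms

N_IN, N_HID, N_OUT = 12, 16, 8          # state variables -> 8 muscle excitations
N_W = N_IN * N_HID + N_HID * N_OUT      # flat genome: all network weights

def policy(weights, state):
    """Tiny feed-forward net; excitations squashed into [0, 1]."""
    w1 = np.reshape(weights[:N_IN * N_HID], (N_IN, N_HID))
    w2 = np.reshape(weights[N_IN * N_HID:], (N_HID, N_OUT))
    h = np.tanh(state @ w1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr", random.uniform, -1.0, 1.0)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr, N_W)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
# run_trial() is a hypothetical helper: it steps the simulation, feeds the
# policy with state variables, applies the excitations and returns the score.
toolbox.register("evaluate", lambda ind: (run_trial(lambda s: policy(ind, s)),))
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=0.2, indpb=0.1)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=50)
algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=30, verbose=False)
```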

We also experimented with fully dynamic grasping simulation using SimBody's collision detection system and contact force implementations. Although the simulation setup for the grasping tests comprised only a simple two-jaw gripper and a cube (represented as a triangle mesh), the SimBody engine as used in our plugin was able to maintain a stable grasp using fully dynamic contact forces, tackling a problem that is notoriously difficult to solve with other physics engines.

Another application using the OpenSim plugin for Gazebo features a simplified muscle model of a mouse's foreleg, actuated by a neuronal controller modelled according to the spinal cord of a real mouse. The details of this experimental setup will be covered in a separate blog post.

The OpenSim plugin does not support all of the features implemented with other engines in Gazebo. For instance, some joint types are not implemented yet. Also, some features unique to OpenSim (like inverse dynamics simulation) are not yet available in the current implementation.

To simplify the design of kinematic models with muscle systems and custom actuator models, we plan to provide researchers and users of the NRP with a consistent, simple way to specify muscles via a graphical interface, using the NRP's Robot Designer application.

A one-day workshop during the last Performance Show in Ghent

Last week, we had the chance to organize the first edition of an SP10 Performance Show in the city of Ghent, Belgium. This two-day meeting between all the partners involved in the HBP Neurorobotics subproject (SP10) was an opportunity to discuss the latest progress of each research group and to ensure a convergence of views and efforts for the upcoming events, research and developments.

 

[Figure: A discussion during the SP10 Performance Show, September 2017]

 

On the second day, we divided our work into two tracks. Whereas the Main Track dealt with administrative and research activities, the Secondary Track was organized as a workshop on the theme “Thinking the NRP of the Future”. It was formatted as a short one-day hackathon where everyone started by summarizing one or several iconic research advances made in their field during the last year, which helped us group into four work teams:

  • Reinforcement Learning with the NRP
  • Integrating worm brains and soft bodies in the NRP
  • Real-time interaction between real and simulated robots in the NRP
  • Helping research on visuomotor learning in children using simulations in the NRP

 

[Figure: A work group brainstorming about integrating worms in the NRP, SP10 Performance Show, September 2017]

 

Each of these teams brainstormed to imagine and design an experiment that could help research move forward, together with a list of the developments that would be required to achieve it. After lunch, the results of this brainstorming were presented to everyone for feedback and comments, before we started designing a first prototype in the NRP and coding some useful models that we will need in further work. To be continued…