
Preliminary neural recordings with the M-Platform

(FIG 1) The upgraded robotic platform providing access to the cortex for recording neural signals.

The M-Platform, a robotic device for motor rehabilitation after stroke in mice, has been upgraded to allow the recording of neural activity during the pulling task (FIG 1). The platform now offers the unique possibility of integrating kinetic and kinematic data with electrophysiological recordings in awake mice during a voluntary forelimb retraction task.

(FIG 2) The interface of the OmniPlex D System (Plexon, USA), the system used to perform acute electrophysiological recordings.

The new device was tested on four healthy mice: a 16-channel linear probe (ATLAS, USA) was inserted into the Rostral Forelimb Area (RFA) at a depth of 850 µm. Signals were recorded with the OmniPlex D System (Plexon, USA) at a sampling rate of 40 kHz (FIG 2), and the data were analyzed offline. We obtained promising results both for the low-frequency activity, i.e. the Local Field Potential (LFP), and for the high-frequency activity, i.e. Multi-Unit Activity (MUA) and spike sorting. In particular, FIG 3 shows a clear correspondence between the LFP and the force peak; however, we plan to increase the number of recorded animals to generalize these results.
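For readers curious about what such an offline analysis involves, here is a minimal sketch in Python/SciPy of how a 40 kHz wideband trace can be split into an LFP band and an MUA band and then averaged around force-peak onsets. The cutoff frequencies, filter orders and averaging window are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 40_000  # sampling rate of the wideband recording (Hz)

def split_bands(raw, fs=FS, lfp_cutoff=300.0, mua_band=(300.0, 5000.0)):
    """Split one channel's raw extracellular trace into LFP and MUA.
    Cutoffs are illustrative assumptions."""
    # Low-pass below ~300 Hz for the Local Field Potential
    sos_lfp = butter(4, lfp_cutoff, btype="lowpass", fs=fs, output="sos")
    lfp = sosfiltfilt(sos_lfp, raw)
    # Band-pass for Multi-Unit Activity (spiking band)
    sos_mua = butter(4, mua_band, btype="bandpass", fs=fs, output="sos")
    mua = sosfiltfilt(sos_mua, raw)
    return lfp, mua

def peak_triggered_average(lfp, onsets, fs=FS, window=(-0.5, 0.5)):
    """Average LFP snippets aligned to force-peak onsets (in samples)."""
    pre, post = int(window[0] * fs), int(window[1] * fs)
    snippets = [lfp[t + pre: t + post] for t in onsets
                if t + pre >= 0 and t + post <= len(lfp)]
    return np.mean(snippets, axis=0)
```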

(FIG 3) Top: mean of the LFP recordings across channels, aligned to movement onset; bottom: mean of the corresponding force peaks.

This success paves the way for investigating neuroplastic events after cortical damage, e.g. stroke. Moreover, the possibility of recording spiking activity in the Caudal Forelimb Area (CFA) during the task in healthy animals allows us to study firing rates across channels and to identify patterns that correlate neural activity with forelimb movement.


SP10 + SP6 + CerebNEST: new collaboration

Last month, during the HBP Summit, SP10 started working on potential new collaborations with other subprojects and partnering projects, in order to keep the focus on the main goals of the Neurorobotics Platform and the Human Brain Project, not only for the current phase of the project (SGA1) but also for the coming years of research.

We are very happy to announce that a few days ago the DTU Neurorobotics team reached an agreement with SP6 (University of Pavia) and the HBP Partnering Project CerebNEST (Politecnico di Milano) to port to SpiNNaker their cerebellum model (Antonietti et al., 2016, IEEE TBME), which has already been implemented in NEST.

 



Having the cerebellar model running in real time on a neuromorphic platform will make it possible to analyze the performance of the model with different physical robotic platforms, such as the modular robot Fable.
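Since the model already exists in NEST, one natural route (a sketch only, not necessarily the route the teams will take) is to express the network through the simulator-independent PyNN API, so that the same description can run on NEST or, assuming a working sPyNNaker installation, on SpiNNaker simply by switching the backend import. The populations and parameters below are trivial placeholders, not the actual cerebellar microcircuit:

```python
# Choose the backend at import time; the network code below stays identical.
# import pyNN.spiNNaker as sim   # run on the SpiNNaker neuromorphic hardware
import pyNN.nest as sim          # run on NEST

sim.setup(timestep=1.0)  # ms

# Placeholder populations standing in for the cerebellar microcircuit
# (the real model contains mossy fibers, granule, Golgi, Purkinje and
# deep nuclei populations).
mossy = sim.Population(20, sim.SpikeSourcePoisson(rate=30.0), label="mossy")
granule = sim.Population(100, sim.IF_cond_exp(), label="granule")
granule.record("spikes")

sim.Projection(mossy, granule,
               sim.FixedProbabilityConnector(0.1),
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

sim.run(1000.0)  # ms
spikes = granule.get_data("spikes")
sim.end()
```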


We will keep you updated as the work progresses!

Practical lab course on the Neurorobotics Platform @KIT

This semester, for the first time, the Neurorobotics Platform will be used as a teaching tool for students interested in embodied artificial intelligence.

The lab course, offered by FZI in Karlsruhe, started last week for KIT students. Previously, instead of this practical class, we offered a seminar in which students carried out literature research on neurorobotics and learning. The seminars attracted around 10 registrations per semester, but this year more than 20 students registered for the practical lab course, most of them at the master's level.

The initial meeting took place last week. The students were split into seven groups of three. Their first task is to familiarize themselves with the NRP and PyNN by solving the tutorial baseball experiment and the provided Python notebook exercises. All groups were given USB sticks with a live boot image so they can easily install the NRP, as well as access to an online version. Throughout the semester, the students will learn about neurorobotics and the platform by designing challenges and solving them.
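To give a flavor of what the notebook exercises look like, here is a minimal, purely illustrative PyNN script of the kind a beginner might write: a single leaky integrate-and-fire neuron driven by a step current, with its spikes recorded. The parameter values are arbitrary and the actual exercise sheets may differ.

```python
import pyNN.nest as sim

sim.setup(timestep=0.1)  # ms

# One leaky integrate-and-fire neuron with illustrative parameters
cell = sim.Population(1, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0))

# Inject a 0.5 nA step current between 100 ms and 400 ms
stim = sim.DCSource(amplitude=0.5, start=100.0, stop=400.0)
stim.inject_into(cell)

cell.record(["v", "spikes"])
sim.run(500.0)  # ms

data = cell.get_data().segments[0]
print("number of spikes:", len(data.spiketrains[0]))
sim.end()
```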

Organizers: Camilo Vasquez Tieck, Jacques Kaiser, Martin Schulze, Lea Steffen

Mouse modeling for robotics and neuroscience…

… or why we are building a zoo of artificial mice.

Neurorobotics is about connecting simulated brains to virtual and physical robot bodies. Unlike other approaches in robotics or machine learning, the focus is on high biological plausibility, i.e. a neurorobotic system is designed to capture and predict the quantitative behavior of its biological counterpart as closely as possible. However, what exactly is meant by “close” depends on the granularity of the brain model. Clearly, simple neural networks with only a few neurons can be studied on an equally simple robot. In the case of the Braitenberg vehicle experiment on the Neurorobotics Platform, a mobile robot platform with four wheels and a camera is perfectly sufficient. By contrast, brain simulations comprising millions of neurons require realistic body models to simulate and reproduce data from neuroscience as accurately as possible. In this context, standard robots are no longer a viable choice. Neurorobotics is therefore not only about connecting a robot body to a brain but also about the design, simulation, and construction of that body.

The brain models developed in the Human Brain Project are among the most complex and realistic ones ever built, so it is only logical that they require the most realistic body models ever built. But what does the perfect body model look like? The answer is both simple and tricky: since most of the data in neuroscience is obtained from rodents, particularly mice, the perfect choice for the body model is to simulate a mouse body. The tricky part is to determine the level of detail that is necessary to provide meaningful embodiment for the brain models. We are therefore currently designing and building a zoo of different mouse models, each of which serves a specific purpose.

The maximum level of biological detail can only be achieved in simulation. For this reason, we are developing a virtual mouse body that not only looks like a real mouse but that also has the same biomechanical properties. Every bone of the skeleton was modeled individually based on bones of real mice. Combined with the musculoskeletal simulation that will soon be available in the Neurorobotics Platform, the skeleton will enable realistic biomechanical simulations.

Rendering of the completed mouse skeleton

The latest version of the virtual mouse has a soft skin fitted to the skeleton. Together with the recently added fur simulation, our mouse is almost indistinguishable from its biological colleagues!

Rendering of the mouse model with skin and fur

Unlike simulation, the real world imposes many constraints on the types of robots that can be built. However, having a physical counterpart to our virtual mouse is beneficial for many reasons: it not only enables direct interaction with the robot, but is in particular also a first step towards applying results from neurorobotics research in real-world applications. Our first prototype of the mouse robot was built with a focus on small size and a biomimetic leg design for robust locomotion. Upcoming releases will not only feature improved mechanics but will in particular also include more sensors. Follow our blog to see how our mouse is slowly growing up!

Completed initial prototype of the mouse robot

Many thanks to Matthias Clostermann, Eva Siehmann, and Peer Lucas for their contributions!

Florian Walter, Technical University of Munich
October 13, 2017

Customized design of musculoskeletal robots with the Robot Designer

In a recent blog post we introduced the integration of muscle simulations into the Neurorobotics Platform, technically achieved by integrating the musculoskeletal simulator OpenSim into the robotic simulator Gazebo. This will enable researchers to conduct experiments with biologically validated muscle actuation. A variety of body models can be studied, whether highly biomimetic or of a rather technical nature. Such studies become even more important in light of the concept of embodiment: a brain is always embedded in a body, and the morphology therefore plays a crucial role in any behavior-learning task. For neurorobotics researchers, investigating this direct coupling of the brain to a morphology, in terms of skeleton structure, body shape, joint assembly and muscle attachment points, will open up multiple experimental opportunities.

To foster morphological experiments in the Neurorobotics Platform, a fast and user-friendly way of adapting the skeleton and muscles is required. We have therefore enhanced our Blender Robot Designer plugin with interactive muscle definition. After creating a robot by defining its kinematic structure and geometric characteristics, one can now define muscle attachments and paths graphically. As demonstrated in Figure 1 with the mouse skeleton of our CDP1 mouse model, you can select an arbitrary number of path points on the robot model itself. The path points are listed in the user interface, where you can delete them, change their order or refine their location at any time. Muscle characteristics such as muscle force and fiber length can then be adapted, and you can choose between the different muscle types provided by OpenSim, which are based on biological measurements. The defined muscles are exported directly with the robot model as an additional file in the .osim format and are thereby ready for use in the Neurorobotics Platform.
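The exported .osim file can also be built or inspected programmatically. As a rough illustration of what such a muscle definition amounts to, here is a minimal sketch using the OpenSim 4.x Python bindings; the body names, path-point coordinates and muscle parameters are made-up placeholders, and in practice the Robot Designer generates this file for you.

```python
import opensim as osim

model = osim.Model()
model.setName("mouse_forelimb_sketch")

# Placeholder bone attached to ground by a pin joint (real geometry
# comes from the Robot Designer, not from this sketch).
humerus = osim.Body("humerus", 0.002, osim.Vec3(0), osim.Inertia(1e-6))
model.addBody(humerus)
shoulder = osim.PinJoint("shoulder",
                         model.getGround(), osim.Vec3(0), osim.Vec3(0),
                         humerus, osim.Vec3(0, 0.01, 0), osim.Vec3(0))
model.addJoint(shoulder)

# A Thelen-type muscle with two path points: origin on ground,
# insertion on the bone. All values are illustrative.
muscle = osim.Thelen2003Muscle("biceps_sketch",
                               1.0,    # max isometric force (N)
                               0.01,   # optimal fiber length (m)
                               0.005,  # tendon slack length (m)
                               0.0)    # pennation angle (rad)
muscle.addNewPathPoint("origin", model.updGround(), osim.Vec3(0, 0.02, 0))
muscle.addNewPathPoint("insertion", humerus, osim.Vec3(0, -0.005, 0))
model.addForce(muscle)

model.finalizeConnections()
model.printToXML("mouse_forelimb_sketch.osim")
```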

 


Figure 1: Graphical definition of muscles on a validated mouse skeleton in the Robot Designer

 

With the introduced muscle definition tool we hope to help researchers tackle questions arising from both a morphological and an embodiment perspective: What is the effect of varying muscle paths and characteristics on the agent’s behavior? How can a brain learn to act with a complex musculoskeletal body model, and how does the musculoskeletal structure enhance the learning of body motions?

For a quick start with the Robot Designer have a look at our documentation.

 

Benedikt Feldotto

Technical University of Munich

How do we simplify your neurons?

Reproducing complex behaviors of a musculoskeletal model, such as rodent locomotion, requires a controller able to process a high bandwidth of sensory input and compute the corresponding motor response.
This usually entails creating large-scale neural networks, which in turn results in high computational costs. To address this issue, mathematical simplification methods are needed that capture the essential properties of these networks.

One of the most crucial steps in mouse brain reconstruction is the reduction of detailed neuronal morphologies to point neurons. This is not trivial: the morphologies are needed not only to determine the connectivity between neurons by providing contact points, but also to compute how current propagates through the cell, which in turn requires computing the potential of every dendritic and axonal sub-section.

A new model is thus needed that is computationally lighter but generic enough to capture all the dynamics observed in detailed models.
Recent work by Christian Pozzorini et al. [1] addressed this issue with a Generalized Integrate-and-Fire (GIF) point-neuron model, whose parameters are fitted automatically from recorded spiking activity and input currents.
The GIF model captures more of the dynamics of biological neurons than the classical Integrate-and-Fire (IaF) model, such as spiking stochasticity and spike-triggered currents. However, it still cannot reproduce all the dendritic dynamics observed in detailed models.
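To make the ingredients of such a simplified model concrete, here is a toy sketch of a GIF-style point neuron with a spike-triggered adaptation current and stochastic ("escape-noise") firing. The parameter values are arbitrary, and the actual model of [1] additionally uses a moving threshold and kernels fitted to data; this is only meant to illustrate the idea.

```python
import numpy as np

def simulate_gif(I, dt=0.1, tau_m=20.0, R=100.0, E_L=-70.0,
                 V_T=-50.0, dV=2.0, lam0=0.01, tau_eta=100.0, b_eta=0.1,
                 V_reset=-65.0, rng=None):
    """Toy GIF-style neuron: leaky integration + spike-triggered current
    eta + stochastic spiking with intensity lam0 * exp((V - V_T) / dV).
    I is the input current trace (nA); all values are illustrative."""
    rng = rng or np.random.default_rng()
    n = len(I)
    V = np.full(n, E_L)
    eta = 0.0                       # spike-triggered adaptation current
    spikes = []
    for t in range(1, n):
        # Leaky membrane integration, with eta subtracted from the input
        dVdt = (-(V[t - 1] - E_L) + R * (I[t - 1] - eta)) / tau_m
        V[t] = V[t - 1] + dt * dVdt
        eta += dt * (-eta / tau_eta)            # adaptation current decays
        # Stochastic spike emission (escape noise), rate in spikes per ms
        lam = lam0 * np.exp((V[t] - V_T) / dV)
        if rng.random() < 1.0 - np.exp(-lam * dt):
            spikes.append(t * dt)
            V[t] = V_reset
            eta += b_eta                        # spike-triggered increment
    return V, spikes

# Example: constant 0.3 nA input for 1 s
V, spikes = simulate_gif(np.full(10_000, 0.3))
print(f"{len(spikes)} spikes")
```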

(Figure: model simplification following Pozzorini et al. [1])

Building on this, Rössert et al. [2] created an algorithm that reduces the synaptic and dendritic processes by grouping receptors into clusters; each cluster receives multiple input currents and processes them through linear filtering. The resulting point-neuron model is therefore not only among the most biologically accurate available, but it is also much faster than its detailed counterpart, which is crucial for large-scale simulations.

(Figure: model simplification following Rössert et al. [2])

Simplifying neuron models is a way to extract the core dynamics of the neurons so that only what is needed is simulated. It also indicates how much information is lost in the process. It is therefore a required step toward simulating the whole mouse brain, and we will use these models in our closed-loop simulations with the rodent body.

[1] Pozzorini, C., Mensi, S., Hagens, O., Naud, R., Koch, C., & Gerstner, W. (2015). Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models. PLoS Computational Biology, 11(6).

[2] Rössert, C., Pozzorini, C., Chindemi, G., Davison, A. P., Eroe, C., King, J., … Muller, E. (2016). Automated point-neuron simplification of data-driven microcircuit models.

First validation of the virtual M-Platform

The virtual model of the robotic platform has to accurately reproduce the movements of the slide as a function of the applied force at different friction force levels (FIG 1). The friction levels, which on the M-Platform are modulated with an actuated system, are reproduced in the virtual model by regulating the friction coefficient of the slide. This study has been carried out jointly with Prof. Laschi's group (SSSA, member of SP10).

(FIG 1) M-Platform on the Gazebo Simulator

We tested a pool of animals performing the pulling task on the real M-Platform under different conditions (i.e. increasing friction force levels to be overcome in order to perform the task). The animals applied force through their forelimb, pulling the slide back to a resting position. These real force signals were used as inputs to the simulator to evaluate whether the monitored output variables (i.e. the change in slide position following the application of force) were comparable between the real and simulated environments. Reasonable results were observed for single pulling movements, whereas for multiple movements the synchronicity and trend were preserved but the reproducibility was lower (FIG 2).

We attribute these results to the difficulty of modelling the inertial force of the linear slide on the real M-Platform, which is one to two orders of magnitude lower than the friction force and the force exerted by the animal. Indeed, for high force peaks (resulting in single movements) the animals are able to complete the entire pulling movement (10 mm), and this is properly reproduced in the NRP. However, when the force peaks are lower in amplitude, the inertia in the real experiment allows longer movements than in the simulation, where movement is generated only while the applied force exceeds the friction level. Thus, as soon as the force drops below this threshold the simulated movement stops abruptly, failing to describe the real movement of the slide. Although the change in position differs, the synchronicity and trend of the movements remain the same.
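The behavior just described can be summarized with a very simple threshold model of the simulated slide: motion is produced only while the applied force exceeds the friction level, and no inertial term carries the slide further once the force drops. The following toy sketch (not the actual NRP/Gazebo physics, with made-up mass and time-step values) illustrates this logic:

```python
import numpy as np

def simulated_slide_position(force, dt=0.01, friction=0.4, mass=0.05,
                             travel=0.010):
    """Toy model of the virtual slide: it moves only while the applied
    force exceeds the friction threshold and stops as soon as the force
    drops below it (no inertial coasting, unlike the real M-Platform).
    force: array of pulling forces (N); returns positions (m)."""
    pos, vel = 0.0, 0.0
    positions = np.zeros(len(force))
    for t, f in enumerate(force):
        if f > friction:
            # Net force above threshold accelerates the slide
            vel += dt * (f - friction) / mass
        else:
            vel = 0.0                      # below threshold: motion stops
        pos = min(pos + vel * dt, travel)  # full retraction is 10 mm
        positions[t] = pos
    return positions
```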

(FIG 2) Two examples of the comparison between real and simulated experiments

In Figure 2, on the left, a single force peak (red curve) exceeding the friction value (0.4 N) is recorded during a real pulling task performed by a mouse on the M-Platform. The resulting change in position is shown in the bottom-left panel (red curve). The same real force was then used as the input force acting on the handle joint of the simulated M-Platform. This above-threshold force (computed force, blue curve) generates a simulated movement in the NRP model, shown as the blue line in the bottom-left panel, similar to the real position curve. In the right panels, multiple force peaks (red curve) exceeding the friction value (0.4 N) are recorded during a real pulling task performed by a mouse on the M-Platform, and the same procedure is followed. In this case the trend of the real and simulated positions is similar, and the synchronicity between force peaks and movements is preserved.