
How do we simplify your neurons?

Reproducing complex behaviors of a musculoskeletal model, such as rodent locomotion, requires a controller able to process a high bandwidth of sensory input and compute the corresponding motor response.
This usually entails creating large-scale neural networks, which in turn incur high computational costs. To solve this issue, mathematical simplification methods are needed that capture the essential properties of these networks.

One of the most crucial steps in mouse brain reconstruction is the reduction of detailed neuronal morphologies to point neurons. This is not trivial: the morphologies are needed not only to determine the connectivity between neurons by providing contact points, but also to compute how current propagates through the cell.
The latter requires computing the potential of every dendritic and axonal sub-section.

A new model is thus needed that is computationally lighter, yet generic enough to capture all the dynamics observed in detailed models.
Recent work by Christian Pozzorini et al. [1] addressed this issue by creating a Generalized Integrate-and-Fire (GIF) point neuron model, whose neuronal parameters are optimized from recorded activities and input currents.
The GIF model captures more of the dynamics of biological neurons than the classical Integrate-and-Fire (IaF) model, such as stochastic spiking and spike-triggered currents. However, it still cannot reproduce all the dendritic dynamics observed in detailed models.
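
To make this concrete, here is a minimal sketch of GIF-style dynamics with an exponential escape-rate (stochastic) spiking rule and a spike-triggered adaptation current. All parameter values and the single-exponential adaptation kernel are illustrative assumptions, not the parameters fitted in [1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not the fitted values from [1])
C, g_L, E_L = 200.0, 10.0, -70.0  # capacitance (pF), leak (nS), rest (mV)
V_T, dV = -50.0, 2.0              # threshold (mV), escape-noise sharpness (mV)
tau_eta, a_eta = 100.0, 30.0      # spike-triggered current: decay (ms), jump (pA)
dt = 0.1                          # integration step (ms)

def simulate_gif(I_ext):
    """Simulate a GIF-style neuron for an input current trace (pA)."""
    V, eta, spikes = E_L, 0.0, []
    for k, I in enumerate(I_ext):
        V += dt * (-g_L * (V - E_L) - eta + I) / C   # membrane integration
        eta -= dt * eta / tau_eta                    # adaptation current decays
        rate = np.exp((V - V_T) / dV)                # escape rate (Hz): stochastic spiking
        if rng.random() < 1.0 - np.exp(-rate * dt * 1e-3):
            spikes.append(k * dt)
            V, eta = E_L, eta + a_eta                # reset, spike-triggered jump
    return spikes

print(len(simulate_gif(np.full(20000, 300.0))))      # spikes in 2 s of 300 pA input
```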

(Figure: reduction of a detailed morphology to a GIF point neuron, after Pozzorini et al. [1])

As a result, Rössert et al. [2] created an algorithm that reduces the synaptic and dendritic processes by creating clusters of receptors. Each cluster receives multiple currents and processes them using linear filtering. The resulting point neuron model is therefore not only among the most biologically accurate available, but also much faster than its detailed counterpart, which is crucial for large-scale simulations.
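
The gist of the clustering idea can be sketched as follows: rather than simulating every synapse, inputs are merged into a few clusters whose summed current is passed through a linear filter. The single-exponential filter and the grouping below are illustrative assumptions; the actual algorithm in [2] derives the filters from the detailed morphology.

```python
import numpy as np

def linear_filter(signal, tau=5.0, dt=0.1):
    """First-order low-pass filter standing in for a cluster's linear kernel."""
    decay = np.exp(-dt / tau)
    out, y = np.empty_like(signal), 0.0
    for i, x in enumerate(signal):
        y = decay * y + (1.0 - decay) * x
        out[i] = y
    return out

# 100 synaptic current traces merged into 4 clusters: only 4 filters are
# evaluated per time step instead of 100 individual synapse models.
currents = np.random.rand(100, 1000)
clusters = np.array_split(np.arange(100), 4)  # assumed grouping
cluster_outputs = [linear_filter(currents[idx].sum(axis=0)) for idx in clusters]
```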

(Figure: synapse clustering and dendritic simplification, after Rössert et al. [2])

Simplification of neuron models is a way to extract the core dynamics of neurons so that only what is needed is simulated. It is also an important indicator of the information that gets lost in the process. It will therefore be a required step in our project to simulate the whole mouse brain, and indeed we will use these models in our closed-loop simulation with the rodent body.

[1] Pozzorini, C., Mensi, S., Hagens, O., Naud, R., Koch, C., & Gerstner, W. (2015). Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models. PLoS Computational Biology, 11(6).

[2] Rössert, C., Pozzorini, C., Chindemi, G., Davison, A. P., Erö, C., King, J., … Muller, E. (2016). Automated point-neuron simplification of data-driven microcircuit models.


First validation of the virtual M-Platform

The virtual model of the robotic platform has to accurately reproduce the movements of the slide in response to the applied force at different friction force levels (FIG 1). The friction levels, which in the M-Platform are modulated with an actuated system, are reproduced in the virtual model by regulating the friction coefficient of the slide. This study has been carried out as joint work with Prof. Laschi's group (SSSA, member of SP10).

(FIG 1) The M-Platform in the Gazebo simulator

We tested a pool of animals performing the pulling task on the real M-Platform under different conditions (i.e. increasing friction force levels to be overcome in order to perform the task). The animals applied force with their forelimb to pull a slide back to a resting position. These real force signals were used as inputs to the simulator to evaluate whether the monitored output variables (i.e. the variation of the slide position following the application of the force) were comparable between the real and simulated environments. Results were reasonable for single pulling movements, whereas multiple movements showed the same synchronicity and trend but lower reproducibility (FIG 2).

We attribute these results to the difficulty of modelling the inertial force of the linear slide acting on the real M-Platform, which is one to two orders of magnitude lower than the friction force and the force applied by the animal. Indeed, for high force peaks (resulting in single movements), the animals are able to complete the entire pulling movement (10 mm), and this is properly simulated in the NRP. However, when the force peaks are lower in amplitude, the inertial force in the real experiment allows longer movements than the simulated ones, which are generated by the simulated model only while the applied force exceeds the friction level. Thus, whenever the force drops below this threshold, the simulated movement stops abruptly, failing to describe the real movement of the slide. Although the variation in position differs, the synchronicity of the movements and their trend remain the same.
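
As a rough sketch of this threshold behaviour (not the actual NRP model), the following integrates a force trace against a static friction threshold with no inertial term, so motion stops as soon as the force drops below the threshold; the mass, time step, and input trace are illustrative assumptions.

```python
import numpy as np

def simulate_slide(force, friction=0.4, mass=0.02, dt=0.001):
    """Integrate slide position (m) for a force trace (N). Inertia below the
    friction threshold is ignored, reproducing the simulated behaviour
    described above; mass and dt are assumed values."""
    pos, vel, traj = 0.0, 0.0, []
    for f in force:
        if f > friction:
            vel += dt * (f - friction) / mass  # net force accelerates the slide
            pos += dt * vel
        else:
            vel = 0.0  # no inertia: movement halts the moment force drops
        traj.append(pos)
    return np.array(traj)

# A low-amplitude, multi-peak pull barely crosses the 0.4 N threshold,
# so the simulated displacement underestimates the real one.
t = np.arange(0.0, 2.0, 0.001)
force = 0.45 * (np.sin(2 * np.pi * 2 * t) > 0.8)  # short 0.45 N bursts
print(simulate_slide(force)[-1])
```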

(FIG 2) Two examples of the comparison between real and simulated experiments

In Figure 2, left panels, a single force peak (red curve) exceeding the friction value (0.4 N) is recorded during a real pulling task performed by a mouse on the M-Platform; the resulting variation of position is shown in the bottom left panel (red curve). The same real force was used as the input force acting on the handle joint of the simulated M-Platform. This over-threshold force (computed force, blue curve) generates a simulated movement in the NRP model, shown as the blue line in the bottom left panel, similar to the real position curve. In the right panels, multiple force peaks (red curve) exceeding the friction value (0.4 N) are recorded during a real pulling task, and the same procedure is followed. In this case the trend is similar between real and simulated positions, and the synchronicity between force peaks and movements is still present.


Development of an interface board to connect neuromorphic hardware with real world and simulated robots

Neuromorphic computing systems (e.g. SpiNNaker) allow real-time closed-loop control of both simulated and real-world robots and, thanks to their spiking nature, facilitate the use of neuromorphic sensors such as silicon retinae and silicon cochleae.

To connect such neuromorphic hardware to the NRP, an interface board was developed to allow the communication of spikes between SpiNNaker and neuromorphic sensors and actuators.

The current system can process up to 500,000 events per second on five UART ports simultaneously, significantly faster than the Ethernet interface SpiNNaker provides, which handles about 20,000 events per second.
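
The post does not describe the board's wire format, so purely as an illustration of event-stream handling, the sketch below assumes a hypothetical address-event encoding of one 4-byte word per spike (two bytes of neuron ID, two bytes of timestamp).

```python
import struct

# Hypothetical encoding for illustration only; the board's actual
# protocol is not described in the post.
def decode_events(uart_bytes):
    """Decode a raw UART byte stream into (neuron_id, timestamp) events."""
    events = []
    for off in range(0, len(uart_bytes) - 3, 4):
        neuron_id, timestamp = struct.unpack_from("<HH", uart_bytes, off)
        events.append((neuron_id, timestamp))
    return events

# At 500,000 events/s over five ports, each port must sustain
# 100,000 events/s, i.e. 400 kB/s with this assumed 4-byte encoding.
stream = struct.pack("<HHHH", 42, 10, 43, 12)   # two fake events
print(decode_events(stream))                     # [(42, 10), (43, 12)]
```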

The second iteration of the interface board will integrate communication via UART, SPI, CAN, and high-speed USB, while also increasing the throughput by using a more advanced microprocessor and optimised CPLD programming (Fig. 1).

Fig. 1: A prototype of the second iteration of the interface board currently in development.

Showcases of the interface board include connecting SpiNNaker to a 2-DoF MyoRobotics system (Fig. 2), a modular framework for the development of compliant musculoskeletal robots. The results of this experiment are published in [1], and research is still ongoing.

Fig. 2: Example setup of the interface board with SpiNNaker and a MyoRobotics 2-DoF arm [1].

Future work will include more showcases of real-world robots driven by neuromorphic hardware, as well as the integration of these robots into the NRP, making them accessible to a broader user base and providing an infrastructure that enables researchers to test their control algorithms on real robots.

[1] Richter, C., et al. (2016). Musculoskeletal robots: scalability in neural control. IEEE Robotics & Automation Magazine, 23(4), 128-137.

Connecting the Laminart model to a retina modelling framework on the NRP

After integrating a cortical model for visual segmentation into the NRP, we (Laboratory of Psychophysics, EPFL) connected it to a framework for retina modelling that was already integrated into the NRP. In early August, collaborating with SSSA, we designed a virtual experiment in which the iCub robot performs a visual segmentation task using both the retinal and the cortical model (see next figure).


The iCub robot performs a visual segmentation task using the Laminart model. The goal is to detect the target (small tilted lines), which is only possible if the nearby flankers are segmented by the model. The retina model delivers its output to the Laminart model. This experiment was done to check the compatibility between the two models. The scientific goals of this connection are to use the retina to deliver gain-controlled input to the Laminart model, to gain insight into how color information can be used by the Laminart model to create perceptual groups, and to see how retinal magnification influences grouping. Left windows: output of the Laminart model (top: V2 activity; bottom: V4 activity); the model parses visual information into different perceptual groups, thanks to grouping mechanisms. Center windows: output of the retina model (ON- and OFF-centered ganglion cell activity). Right window: output of the brain visualiser, displaying all the neurons of the Laminart model (here: approximately 500,000 IaF neurons).
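
The ON- and OFF-centered ganglion output mentioned above is commonly modelled as a difference of Gaussians; the sketch below shows that general idea with assumed receptive-field sizes, not the parameters of the retina framework used here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ganglion_responses(image, sigma_center=1.0, sigma_surround=3.0):
    """ON- and OFF-center responses as a difference of Gaussians (sketch)."""
    dog = (gaussian_filter(image.astype(float), sigma_center)
           - gaussian_filter(image.astype(float), sigma_surround))
    on = np.clip(dog, 0.0, None)    # ON cells: center brighter than surround
    off = np.clip(-dog, 0.0, None)  # OFF cells: center darker than surround
    return on, off

on, off = ganglion_responses(np.random.rand(64, 64))  # any grayscale input
```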

The future plan for this virtual experiment is to use the connection between the retinal and the cortical model to extend the predictions of the Laminart model to more general cases. For now, the model is the only one that explains a large body of behavioural data on visual crowding (Francis, 2017 [1]) using grouping mechanisms, and it will be very interesting to see how color information is used by the visual system to group elements together. We will use data about crowding and color (Manassi, 2012 [2]) to validate the connection.

More generally, the NRP can be used to provide a realistic framework for any model. For example, we examined how the Laminart model behaves in realistic conditions by adding feedback that makes the robot move its eyes towards the target once it is detected. The outcome was that the segmentation was not stable when the bottom-up input was shifted by an eye movement. Building on this, we designed a mechanism explaining how vision can generate non-retinotopic representations of objects (see next figure).


The robot moves its eyes towards the target when it is detected. Each eye movement triggers new segmentation signals whose locations adapt to the amplitude and direction of the eye movement (at low neuronal cost). Using this simple mechanism, the model can generate a non-retinotopic representation of the perceptual groups.
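
A toy version of this remapping, with made-up data structures rather than the Laminart implementation: segmentation-signal coordinates are shifted by the known amplitude and direction of each saccade, so the groups stay attached to the object rather than to retinal coordinates.

```python
def remap_segmentation(signal_positions, eye_movement):
    """Shift segmentation signals opposite to the eye movement (toy sketch)."""
    dx, dy = eye_movement
    return [(x - dx, y - dy) for (x, y) in signal_positions]

# After a 30-pixel rightward saccade, the retinal image shifts left by
# 30 px, so stored segmentation signals are shifted accordingly.
signals = [(120, 80), (140, 80)]              # retinal coordinates (px)
print(remap_segmentation(signals, (30, 0)))   # -> [(90, 80), (110, 80)]
```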


Citations:

  1. Francis, G., Manassi, M., & Herzog, M. H. (2017). Neural dynamics of grouping and segmentation explain properties of visual crowding. Psychological Review.
  2. Manassi, M., Sayim, B., & Herzog, M. H. (2012). Grouping, pooling, and when bigger is better in visual crowding. Journal of Vision, 12(10), 13.


A first test of the upgraded M-Platform


The M-Platform is a robotic device for mice that mimics a robotic device used for upper-limb stroke rehabilitation in humans (the “arm-guide”) [1]. The platform allows head-fixed mice to carry out intensive and highly repeatable forelimb exercises, specifically repeated sessions of forelimb retraction [2]. The latest upgrade of the M-Platform is a new component providing a variable level of friction to the slide (FIG 1).

(FIG 1) The new component of the M-Platform used to finely control the static friction acting on the slide movement. It is composed of a felt pad contacting the slide (2), moved by a screw connected to a servo motor (1) controlled by a microcontroller. The working area of the animal (3) is not obstructed by the new component.

To test the upgraded M-Platform, an experimental protocol was designed. The experimental group consists of mice performing two weeks of training with high friction (0.5 N), compared to a control group training with lower friction (0.3 N). We also measured the isometric force during pulling. First results show higher isometric force and better performance for the high-friction-trained animals compared to controls (FIG 2).


(FIG 2) Preliminary results of a protocol to evaluate the effect of friction in the pulling task (trials). The protocol consists of two weeks of training (10 trials/day, 4 days/week) for two groups of animals. A slight change in the training condition can modify the strength exerted by healthy animals.

This experiment was designed to use healthy rather than injured (e.g. stroke) animals. This should make a translation to the complete NRP model (comprising also the point-neuron model simulating the motor cortex and the biomechanical model) more feasible.

Bibliography
[1] Reinkensmeyer DJ, Kahn LE, Averbuch M, McKenna-Cole A, Schmit BD, Rymer WZ (2000). Understanding and treating arm movement impairment after chronic brain injury: progress with the ARM guide. J Rehabil Res Dev 37: 653-662.
[2] Spalletti C, Lai S, Mainardi M, Panarese A, Ghionzoli A, Alia C, Gianfranceschi L, Chisari C, Micera S, Caleo M (2014). A robotic system for quantitative assessment and poststroke training of forelimb retraction in mice. Neurorehabil Neural Repair 28: 188-196.

Collaboration between scientists and developers towards integration work in the NRP

Visual-motor coordination is a key research field for understanding our brain and for developing new brain-like technologies.

To address the development and evaluation of bio-inspired control architectures based on cerebellar features, SP10 scientists and developers are collaborating in the implementation of several experiments in the Neurorobotics Platform.

Ismael Baira Ojeda from the Technical University of Denmark (DTU) visited the Scuola Superiore Sant’Anna (Pisa, Italy) to integrate the Adaptive Feedback Error Learning (AFEL) architecture [1] into the Neurorobotics Platform using the iCub humanoid robot. This control architecture combines machine learning techniques with cerebellar-like microcircuits to provide an optimized input space [2], fast learning, and accurate motor control of robots. In the experiment, the iCub was commanded to balance a ball towards the center of a board held in its hand.
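
A minimal sketch of the feedback-error-learning principle behind this kind of architecture, assuming a proportional feedback controller and a linear adaptive unit; the real AFEL architecture uses LWPR [2] and cerebellar microcircuit models rather than this toy learner.

```python
import numpy as np

class FeedbackErrorLearner:
    """Toy feedback error learning: the feedback command serves as the
    teaching signal for an adaptive feedforward module (here a linear
    unit; AFEL uses LWPR [2] and cerebellar-like microcircuits)."""

    def __init__(self, n_features, lr=0.01, kp=5.0):
        self.w = np.zeros(n_features)  # adaptive feedforward weights
        self.lr, self.kp = lr, kp      # learning rate, feedback gain (assumed)

    def command(self, features, error):
        feedback = self.kp * error                # crude feedback controller
        feedforward = self.w @ features           # learned anticipatory command
        self.w += self.lr * feedback * features   # feedback = teaching error
        return feedforward + feedback

# One control step: features describe the task state (e.g. ball position).
ctrl = FeedbackErrorLearner(n_features=4)
u = ctrl.command(np.array([0.1, 0.0, 0.3, 1.0]), error=0.2)
```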

The experiment was later refined and finished during the Install Party hosted by Fortiss (April 2017).

Next, the AFEL architecture could be scaled up and combined with vision and motor control breakthroughs within the different SPs.

Thanks to all the scientists and developers for your support, especially Lorenzo Vannucci, Alessandro Ambrosano and Kenny Sharma!

The prototype experiment running on the Neurorobotics Platform.

References:

[1] Tolu, S., Vanegas, M., Luque, N. R., Garrido, J. A., & Ros, E. (2012). Bio-inspired adaptive feedback error learning architecture for motor control. Biological Cybernetics, 1-16.

[2] Vijayakumar, S., D’Souza, A., & Schaal, S. (2005). Incremental online learning in high dimensions. Neural Computation, 17(12), 2602-2634.

Sensory models for the simulated mouse in the NRP

A biologically inspired translation model for proprioceptive sensory information was developed. The translation is achieved by implementing a computational model of the neural activity of type Ia and type II sensory fibers connected to muscle spindles. The model also includes the activity of both static and dynamic gamma-motoneurons, which provide fusimotor activation capable of regulating the sensitivity of the proprioceptive feedback through the contraction of specific intrafusal fibers (Proske, 1997 [1]).

Figure 1: Intrafusal fibers

The proposed model is an extension of a state-of-the-art computational model of muscle spindle activity (Mileusnic, 2006 [2]). The model developed by Mileusnic and colleagues, albeit complete and validated against neuroscientific data, was entirely rate-based, so it was modified for integration into a spiking neural network simulation. In particular, a spike integration technique was employed to compute the fusimotor activation, and the generated rate was used to produce spike trains.
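
The rate-to-spikes adaptation might be sketched as follows: a leaky integrator recovers a fusimotor activation level from incoming gamma spikes, and afferent spike trains are drawn from the resulting instantaneous rate. The rate function below is a placeholder, not the Mileusnic equations, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusimotor_activation(gamma_spikes, tau=50.0, dt=1.0):
    """Leaky integration of gamma-motoneuron spikes ('spike integration')."""
    act, out = 0.0, []
    for s in gamma_spikes:        # s = 1 if a spike arrived in this step
        act += dt * (-act / tau) + s
        out.append(act)
    return np.array(out)

def spikes_from_rate(rate_hz, dt=1.0):
    """Draw afferent (Ia/II) spike trains from an instantaneous rate (Hz)."""
    return rng.random(rate_hz.shape) < rate_hz * dt * 1e-3

gamma = (rng.random(1000) < 0.05).astype(float)   # ~50 Hz fusimotor drive
rate = 20.0 + 10.0 * fusimotor_activation(gamma)  # placeholder rate model
ia_spikes = spikes_from_rate(rate)                # spiking proprioceptive output
```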

The proprioceptive model is implemented on NEST (code available here), to allow easy integration into the NRP, and on SpiNNaker, to support real-time robotic applications. The proposed component can be coupled both to biomechanical models, such as musculoskeletal systems, and to common robotic platforms (via suitable conversions from encoder values to simulated muscle lengths). In particular, this model will be used, as part of CDP1, to provide sensory feedback from the virtual mouse body.

Results of this work have been published in this article:

Vannucci, Lorenzo, Egidio Falotico, and Cecilia Laschi. “Proprioceptive Feedback through a Neuromorphic Muscle Spindle Model.” Frontiers in Neuroscience 11 (2017): 341.

[1] Proske, U. (1997). The mammalian muscle spindle. Physiology, 12(1), 37-42.

[2] Mileusnic, M. P., Brown, I. E., Lan, N., & Loeb, G. E. (2006). Mathematical models of proprioceptors. I. Control and transduction in the muscle spindle. Journal of Neurophysiology, 96(4), 1772-1788.