Author: Jacques Kaiser

Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all 17 participants for making this workshop such a great time.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. We welcomed 17 attendees over three days, coming from various HBP sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users could get comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. On the first day, we provided a tutorial experiment, developed exclusively for the event, which walked users through the many features of the NRP. The tutorial is inspired by the video of a baby playing ping pong, which we simulate here with an iCub robot, and it will soon be released with the official build of the platform.


On the second and third days, the users were given more freedom so that they could implement their own experiments. We also held short hands-on sessions on the Robot Designer and the Virtual Coach, which enables offline optimization and analysis. Many new experiments were successfully integrated into the platform: the MiRo robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPGs), a revival of the LAURON experiment, and more.
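
For those who could not attend the hands-on session, the snippet below is a rough sketch of what an offline Virtual Coach session looks like; the import path, the experiment name and the exact calls are indicative only and may differ between NRP releases.

    # Minimal Virtual Coach sketch: launch an experiment without the web
    # frontend, let it run, then stop it. Import path, experiment name and
    # exact calls are assumptions and may differ between NRP releases.
    import time
    from hbp_nrp_virtual_coach.virtual_coach import VirtualCoach

    vc = VirtualCoach(environment='local')           # connect to a locally running NRP
    sim = vc.launch_experiment('tutorial_baseball')  # hypothetical experiment name

    sim.start()     # run the simulation headlessly
    time.sleep(60)  # let it run for one minute
    sim.pause()
    # ... retrieve or inspect recorded data here for offline analysis ...
    sim.stop()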


We received great feedback from the users and are looking forward to organizing the next NRP User Workshop!

 


Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. The model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).
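
Purely to illustrate the underlying idea (and not the published spiking model), the toy sketch below uses an echo-state-style stand-in for the liquid: a fixed random recurrent network is driven by an input stream, and only a linear readout is trained to predict the next input. All sizes and parameters are arbitrary.

    # Toy illustration of the liquid/reservoir principle (not the published
    # spiking model): a fixed random recurrent network is driven by an input
    # stream, and only a linear readout is trained to predict the next input.
    # All sizes and parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res, T = 16, 200, 1000                   # input size, reservoir size, sequence length

    W_in = rng.normal(0.0, 0.5, (n_res, n_in))       # fixed input weights
    W = rng.normal(0.0, 1.0, (n_res, n_res))         # fixed recurrent weights
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

    u = rng.random((T, n_in))                        # stand-in for a stream of event frames
    x = np.zeros(n_res)
    states = np.zeros((T, n_res))
    for t in range(T):
        x = np.tanh(W_in @ u[t] + W @ x)             # reservoir update
        states[t] = x

    # Ridge-regression readout trained to predict the input one step ahead.
    X, Y = states[:-1], u[1:]
    W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_res), X.T @ Y)
    prediction = states @ W_out                      # one-step-ahead prediction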


We validated this model in various experiments, with both simulated and real DVS recordings. The results were accepted for publication [1]. We are currently working on using these short-term visual predictions to control robots.

[1] Jacques Kaiser, Rainer Stal, Anand Subramoney et al., "Scaling up liquid state machines to predict over address events from dynamic vision sensors", special issue of Bioinspiration & Biomimetics, 2017.

SP9 Quarterly in-person meeting

We are closely collaborating with SP9 (Neuromorphic hardware) to support large networks in real time. On the 20th and 21st of March 2017, we participated in the SP9 quarterly in-person meeting to present the Neurorobotics Platform and our integration of SpiNNaker.

During the meeting, we identified MUSIC as a single interface between our platform and both the SP7 supercomputers and SpiNNaker. We also pointed out the features we are missing in MUSIC to keep the Neurorobotics Platform interactive, most importantly dynamic ports and reset.

We also presented some of the complex learning rules we are working on, to help SP9 identify user requirements for the SpiNNaker 2 design. We were surprised to learn that one of the most complicated learning rules we are working on – SPORE, derived by David Kappel in Prof. Maass' group – is also used as a benchmark for SpiNNaker 2 by Prof. Mayr. This reward-based learning rule can be used to train arbitrary recurrent networks of spiking neurons. Confident that it will play an important role in SGA2, we sent our master's student Michael Hoff from FZI, Karlsruhe to TU Graz to use this rule in a robotic setup.

Gazebo DVS plugin – towards a sensor library

On the NRP, we already support any sensor included in Gazebo. These are mostly classical robotic sensors such as laser scanners and cameras.

However, Gazebo includes neither recent biologically inspired sensors nor neuroscientific models of organic sensors. These types of sensors are important for the NRP. To keep the workflow identical for classical robotic sensors and newly developed ones, we decided to implement the latter as Gazebo plugins. Essentially, our sensor library will consist of a collection of Gazebo plugins simulating various biologically inspired sensors.

So far, we have implemented a simulation of the Dynamic Vision Sensor (DVS), which is open source and available on our SP10 GitHub. In the coming months, we will also adapt our implementation of COREM, a retina simulation framework [1,2], and wrap it in a Gazebo plugin.
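
To give an idea of the principle behind the DVS plugin (the actual implementation is a C++ Gazebo plugin), address events can be approximated from ordinary rendered frames by thresholding per-pixel log-intensity changes. The rough Python sketch below, with an arbitrary contrast threshold, emits ON/OFF events wherever a pixel's brightness has changed sufficiently since the last event at that pixel.

    # Rough sketch of frame-based DVS emulation (the actual plugin is written
    # in C++ against Gazebo; this only illustrates the event-generation idea).
    # An event (x, y, polarity) is emitted whenever the log intensity of a
    # pixel has changed by more than a contrast threshold since the last
    # event at that pixel.
    import numpy as np

    THRESHOLD = 0.15  # arbitrary contrast threshold

    def frame_to_events(frame, reference):
        """frame, reference: 2-D grayscale arrays with values in [0, 1].
        Returns the list of events and the updated per-pixel reference."""
        diff = np.log(frame + 1e-3) - np.log(reference + 1e-3)

        on = np.argwhere(diff > THRESHOLD)     # brightness increased
        off = np.argwhere(diff < -THRESHOLD)   # brightness decreased
        events = [(x, y, +1) for y, x in on] + [(x, y, -1) for y, x in off]

        # Pixels that fired update their reference; the others keep the old one.
        fired = np.abs(diff) > THRESHOLD
        new_reference = np.where(fired, frame, reference)
        return events, new_reference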


[1] Martínez-Cañada, P., Morillas, C., Pino, B., Ros, E., & Pelayo, F. (2016). A Computational Framework for Realistic Retina Modeling. International Journal of Neural Systems, 26(07), 1650030.

[2] Ambrosano, A., et al. (2016). Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform. In Biomimetic and Biohybrid Systems: Living Machines 2016.

NRP User Hackathon @ Karlsruhe

At the end of the ramp-up phase, we realized that the Neurorobotics Platform (NRP) lacked users despite its increasing maturity. To resolve this, in SGA1 we split the core team in two: a development team and a research team. The former would continue developing the NRP, while the latter would become driving users. This split became particularly interesting with the rise of new potential users, such as partners joining SP10 as well as CDP collaborators.

To engage these potential users with the NRP, we organized the first NRP User Hackathon at FZI in Karlsruhe, from the 15th to the 17th of February 2017. During those three days, two NRP and robotics experts (Jacques Kaiser, FZI, and Alessandro Ambrosano, SSSA) helped neuroscientists Alban Bornet (EPFL, joining SP10) and Alexander Kroner (Maastricht University, CDP4 partner) integrate their models into the NRP. With participants from various backgrounds, the small-committee pair-programming setup allowed everyone to learn from the others.

For Alban's visual segmentation model, implemented in PyNN+NEST, the NRP integration shed light on interesting performance on real scenes, in which we could interact with the environment by moving cubes around. His model is the most complex neural model ever run within the NRP, consisting of more than 50,000 neurons and 300,000 synapses for a 20×20 input image and simple settings. To speed up the model's run time, we also ported it to SpiNNaker in batch processing mode.


For Alexander, we were able to connect his bottom-up visual attention model, implemented with Keras+Theano (deep learning frameworks), to ROS, and consequently to the NRP. This gave us some insights into how we could implement an upcoming NRP feature: running arbitrary user code. In this instance, we wrapped his model in a ROS node converting input images to saliency maps. One could imagine a spiking network model taking those saliency maps as input and performing saccadic eye movements.
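
As a sketch of what such a wrapper can look like (topic names and the saliency function are placeholders, not Alexander's actual code), the ROS node below subscribes to camera images, runs the model on each frame, and republishes the resulting saliency map.

    # Sketch of a ROS node wrapping an image-to-saliency model (topic names
    # and compute_saliency() are placeholders, not the actual code).
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def compute_saliency(rgb_image):
        # Placeholder for the deep network; must return a 2-D float array in [0, 1].
        raise NotImplementedError

    def on_image(msg):
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        saliency = compute_saliency(frame)
        out = bridge.cv2_to_imgmsg((saliency * 255).astype('uint8'), encoding='mono8')
        pub.publish(out)

    rospy.init_node('saliency_node')
    pub = rospy.Publisher('/saliency_map', Image, queue_size=1)
    rospy.Subscriber('/camera/image_raw', Image, on_image, queue_size=1)
    rospy.spin()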


Both Alban and Alexander adopted the Neurorobotics Platform and will spread the word in their respective labs. After the success of this hackathon, we will likely organize more soon to grow the NRP user base organically.