Author: Jacques Kaiser, FZI, Karlsruhe

Practical lab course on the Neurorobotics Platform @KIT

This semester, for the first time, the Neurorobotics Platform will be used as a teaching tool for students interested in embodied artificial intelligence.

The lab course started last week for KIT students, offered by FZI in Karlsruhe. In previous semesters, instead of this practical class, we offered a seminar where students conducted literature research on neurorobotics and learning. The seminar attracted around 10 students per semester, but this year more than 20 students registered for the practical lab course, most of them at the master's level.

The initial meeting took place last week. The students were split into seven groups of three. Their first task: familiarize themselves with the NRP and PyNN by solving the tutorial baseball experiment and the provided Python notebook exercises. All groups were given USB sticks with a live boot image so they could easily install the NRP, as well as access to an online version. Throughout the semester, students will learn about neurorobotics and the platform by designing challenges and solving them.
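
For readers unfamiliar with PyNN, a minimal warm-up in the spirit of such notebook exercises could look like the sketch below. This is an illustration only, not one of the actual exercises handed to the students:

```python
# Minimal PyNN warm-up: a Poisson spike source driving a small population
# of integrate-and-fire neurons. Illustrative sketch, not a course exercise.
import pyNN.nest as sim  # any PyNN backend works (nest, spinnaker, ...)

sim.setup(timestep=1.0)  # ms

stimulus = sim.Population(1, sim.SpikeSourcePoisson(rate=100.0))
neurons = sim.Population(10, sim.IF_cond_exp())
neurons.record("spikes")

# All-to-all excitatory projection from the stimulus to the neurons.
sim.Projection(stimulus, neurons,
               sim.AllToAllConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))

sim.run(1000.0)  # simulate one second

spikes = neurons.get_data("spikes")
print(spikes.segments[0].spiketrains)
sim.end()
```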

Organizers: Camilo Vasquez Tieck, Jacques Kaiser, Martin Schulze, Lea Steffen

CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back at the beginning of 2017, we held a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Project Summit 2017 in Glasgow as a collaboration within CDP4 (visuo-motor integration). During this presentation, we also showed how to integrate any deep learning model into the Neurorobotics Platform, as already presented at the Young Researcher Event by Kenny Sharma.
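
The integration pattern itself is straightforward: a transfer function subscribes to the robot's camera topic, feeds each frame through the deep network, and publishes the resulting saliency map. The sketch below shows the general shape of such a transfer function; the topic paths and the SaliencyNet wrapper with its predict() method are placeholders, not the actual CDP4 code:

```python
# Rough sketch of wiring a deep saliency model into an NRP transfer function.
# Topic paths, SaliencyNet and predict() are assumed placeholder names.
import hbp_nrp_cle.tf_framework as nrp
from hbp_nrp_cle.robotsim.RobotInterface import Topic
import sensor_msgs.msg


@nrp.MapVariable("model", initial_value=None)
@nrp.MapRobotSubscriber("camera", Topic("/icub/left_eye/image_raw",
                                        sensor_msgs.msg.Image))
@nrp.MapRobotPublisher("saliency", Topic("/saliency_map",
                                         sensor_msgs.msg.Image))
@nrp.Robot2Neuron()
def compute_saliency(t, model, camera, saliency):
    if camera.value is None:
        return
    if model.value is None:
        from saliency_net import SaliencyNet  # hypothetical model wrapper
        model.value = SaliencyNet()
    # Forward the current camera frame through the deep network and
    # publish the saliency map for downstream transfer functions.
    saliency.send_message(model.value.predict(camera.value))
```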

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.

Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all of the 17 participants for making this workshop a great time.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. We welcomed 17 attendees over three days, coming from various sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users got comfortable using the NRP themselves.

Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. During the first day, we provided a tutorial experiment, developed exclusively for the event, which walked users through the many features of the NRP. This tutorial experiment is inspired by the baby playing ping pong video, here simulated with an iCub robot. It will soon be released with the official build of the platform.

On the second and third days, the users were given more freedom to implement their own experiments. We had short hands-on sessions on the Robot Designer as well as the Virtual Coach, for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the Miro robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPG), a revival of the Lauron experiment, …

We received great feedback from the users and are looking forward to organizing the next NRP User Workshop!

Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. The model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).
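
The key property of a liquid state machine is that the recurrent network (the "liquid") stays fixed and only a linear readout is trained. The sketch below illustrates this idea with a rate-based stand-in for the spiking liquid and a toy event stream; it is a conceptual illustration, not the model from the paper:

```python
# Conceptual LSM sketch: a fixed random recurrent network expands a toy
# event stream, and only a linear readout is trained (ridge regression).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_liq, T = 64, 300, 1000

# Fixed, sparse random weights: the liquid itself is never trained.
W_in = rng.normal(size=(n_liq, n_in)) * (rng.random((n_liq, n_in)) < 0.1)
W_rec = rng.normal(size=(n_liq, n_liq)) * (rng.random((n_liq, n_liq)) < 0.1)
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))  # keep dynamics stable

# Toy binary event stream standing in for DVS address events.
events = (rng.random((T, n_in)) < 0.05).astype(float)
target = np.roll(events, -1, axis=0)  # task: predict the next time step

# Leaky rate dynamics as a stand-in for the spiking liquid.
x = np.zeros(n_liq)
states = np.empty((T, n_liq))
for t in range(T):
    x = 0.7 * x + 0.3 * np.tanh(W_in @ events[t] + W_rec @ x)
    states[t] = x

# Train only the linear readout.
lam = 1e-2
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_liq),
                        states.T @ target)
prediction = states @ W_out  # (T, n_in) predicted event activity
```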

We validated this model in various experiments with both a simulated and a real DVS. The results were accepted for publication in [1]. We are now working on using these short-term visual predictions to control robots.

[1] Jacques Kaiser, Rainer Stal, Anand Subramoney et al. (2017). Scaling up liquid state machines to predict over address events from dynamic vision sensors. Bioinspiration & Biomimetics (special issue).

SP9 Quarterly in-person meeting

We are closely collaborating with SP9 (Neuromorphic hardware) to support large networks in real time. On the 20th and 21st of March 2017, we participated in the SP9 quarterly in-person meeting to present the Neurorobotics Platform and our integration of SpiNNaker.

During the meeting, we identified MUSIC as a single interface between our platform and both the SP7 supercomputers and SpiNNaker. We also pointed out the features we are missing in MUSIC to keep the Neurorobotics Platform interactive, most importantly dynamic ports and reset functionality.

We also presented some complex learning rules we are working on, to help SP9 identify user requirements for the SpiNNaker 2 design. We were surprised to learn that one of the most complicated learning rules we are working on – SPORE, derived by David Kappel in Prof. Maass' group – is also used as a benchmark for SpiNNaker 2 by Prof. Mayr. This reward-based learning rule can be used to train arbitrary recurrent networks of spiking neurons. Confident that it will play an important role in SGA2, we sent our master's student Michael Hoff from FZI, Karlsruhe to TU Graz to apply this rule in a robotic setup.

Gazebo DVS plugin – towards a sensor library

In the NRP, we already support any sensor included in Gazebo. These are mostly classical robotic sensors, such as laser scanners and cameras.

However, Gazebo includes neither recent biologically inspired sensors nor neuroscientific models of organic sensors. These types of sensors are important for the NRP. To keep the workflow identical for classical robotic sensors and newly developed ones, we decided to implement the latter as Gazebo plugins. Essentially, our sensor library will consist of a list of Gazebo plugins simulating various biologically inspired sensors.

So far, we have implemented a simulation of the Dynamic Vision Sensor (DVS), which is open source and available on our SP10 GitHub. In the coming months, we will also adapt our implementation of COREM, a retina simulation framework [1,2], and wrap it in a Gazebo plugin.
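
The event-generation rule behind a DVS simulation is compact: a pixel emits an ON (or OFF) event whenever its log intensity has increased (or decreased) by more than a threshold since the last event at that pixel. A Python sketch of this per-frame logic follows; the actual plugin is C++ code against Gazebo's camera API, and the threshold value here is an assumption:

```python
# Per-frame DVS event generation: threshold the per-pixel change in log
# intensity against a reference frame. Illustrative sketch only.
import numpy as np

THRESHOLD = 0.15  # log-intensity contrast threshold (assumed value)

def dvs_events(frame, reference, threshold=THRESHOLD):
    """frame: (H, W) grayscale image in [0, 1]; reference: per-pixel log
    intensity at each pixel's last event (updated in place)."""
    log_i = np.log(frame + 1e-4)           # avoid log(0)
    diff = log_i - reference
    on = np.argwhere(diff > threshold)     # brightness increased -> ON events
    off = np.argwhere(diff < -threshold)   # brightness decreased -> OFF events
    fired = np.abs(diff) > threshold
    reference[fired] = log_i[fired]        # reset reference where events fired
    return on, off
```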

[1] Martínez-Cañada, P., Morillas, C., Pino, B., Ros, E., & Pelayo, F. (2016). A Computational Framework for Realistic Retina Modeling. International Journal of Neural Systems, 26(07), 1650030.

[2] Ambrosano A. et al. (2016). Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform. Biomimetic and Biohybrid Systems. Living Machines 2016.