Author: Jacques Kaiser, FZI, Karlsruhe

Handling experiment-specific python packages in the NRP

In this blog post I share my method for handling experiment-specific Python packages. Some of my experiments require TensorFlow v1.6, others need Keras – which itself requires an earlier version of TensorFlow. How do you handle all this on your locally installed NRP?

My method relies on the virtualenvwrapper package, which lets you keep all your Python virtualenvs in a single place.

pip install virtualenvwrapper --user

Additionally, I use a custom configuration which adds the virtualenv to $PYTHONPATH when I activate it. Copy the postactivate and postdeactivate scripts to $WORKON_HOME – the configuration folder of virtualenvwrapper.
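For reference, here is a minimal sketch of what these two hooks could look like, assuming the goal is to put the active virtualenv's site-packages on $PYTHONPATH (variable names are illustrative, not my exact scripts):

# $WORKON_HOME/postactivate: prepend the virtualenv's site-packages to $PYTHONPATH
export _PRE_VENV_PYTHONPATH="$PYTHONPATH"
venv_site=$("$VIRTUAL_ENV/bin/python" -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")
export PYTHONPATH="$venv_site:$PYTHONPATH"

# $WORKON_HOME/postdeactivate: restore the previous $PYTHONPATH
export PYTHONPATH="$_PRE_VENV_PYTHONPATH"
unset _PRE_VENV_PYTHONPATH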

Now, let’s say you have an NRP experiment with custom Python packages listed in a requirements.txt. Create a virtualenv for this experiment and install the experiment-specific packages:

mkvirtualenv --system-site-packages my_venv
pip install -r requirements.txt

To access your experiment-specific packages from within the NRP, simply start the NRP from the same terminal where the virtualenv is activated:

workon my_venv
cle-start

That’s it!

Structuring your local NRP experiment – some tips

Within the context of CDP4, we created an NRP experiment showcasing some functional models from SP1/SP4:

  • A trained deep network to compute bottom-up saliency
  • A saccade generation model

Since these models are generic, we want to package them so that they can easily be reused in other experiments, such as the WP10.2 strategic experiment. In this post, we briefly explain the structure of the CDP4 experiment and how modularity is achieved.

We decided to implement the functional modules from SP1/SP4 as ROS packages. These modules can therefore be used within the NRP (in the GazeboRosPackages folder), but also independently of the NRP, in any other catkin workspace. This has the advantage that the saliency model can be fed webcam images and easily mounted on a real robot.
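For example, running the saliency model standalone on webcam images could look roughly like this (repository URL, package and node names are hypothetical):

cd ~/catkin_ws/src
git clone <saliency-model-repo>   # the same module that lives in GazeboRosPackages
cd ~/catkin_ws && catkin_make && source devel/setup.bash
rosrun usb_cam usb_cam_node &     # publish real webcam images
rosrun saliency saliency_node     # hypothetical package/node name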

The main difference compared to implementing them as transfer functions is synchronicity. When the user runs the saliency model on a CPU, processing a single camera image takes around 3 seconds. If the saliency model were implemented as a transfer function, the simulation would pause until the saliency output is ready; the experiment runs slower, but reproducibility is preserved. Implemented as a ROS node, on the other hand, the model does not block the simulation: the simulation does not wait for the saliency network to process an image, so it runs faster.
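This asynchronous behaviour is easy to observe from a terminal: the camera topic keeps publishing at the simulation rate, while the saliency topic updates at its own, much slower pace (topic names are illustrative):

rostopic hz /icub_model/left_eye_camera/image_raw   # camera rate, unaffected by the model
rostopic hz /saliency_map                           # roughly one message every 3 s on a CPU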

The saliency model is a pre-trained deep network running on TensorFlow. The weights and topology of the network are saved in data files that are loaded at execution time. Since these files are heavy and not worth version-controlling, we uploaded them to our OwnCloud, from where the saliency model automatically downloads them if they are not present locally. This also makes it simple for our collaborators in SP1/SP4 to provide us with new pre-trained weights/topologies.
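At load time, the model essentially performs a check-then-fetch. As a shell sketch, with the path and share link as placeholders:

WEIGHTS="$(rospack find saliency)/data/model_weights.h5"   # hypothetical package and file name
mkdir -p "$(dirname "$WEIGHTS")"
[ -f "$WEIGHTS" ] || wget -O "$WEIGHTS" "https://<owncloud-share-link>/download"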

The CDP4 experiment itself has its own repository and is very lean, as it relies on these reusable modules. Additionally, an install script is provided to download the required modules into the GazeboRosPackages folder.
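Such an install script boils down to cloning each module into the workspace and rebuilding it. A sketch, assuming the usual $HBP variable of a local NRP install (module names and URLs are placeholders):

cd "$HBP/GazeboRosPackages/src"
for repo in saliency saccade_generation; do
    [ -d "$repo" ] || git clone "https://<git-host>/cdp4/$repo.git"
done
cd .. && catkin_make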

How to install TensorFlow and the other Python libraries required by the CDP4 experiment, without colliding with other experiment-specific libraries, is covered in another blog post.


Practical lab course on the Neurorobotics Platform @KIT

This semester, for the first time, the Neurorobotics Platform will be used as a teaching tool for students interested in embodied artificial intelligence.

The lab course, offered by FZI in Karlsruhe, started last week for KIT students. Previously, instead of this practical class, we offered a seminar where students would do literature research on neurorobotics and learning. The seminar attracted around 10 students per semester, but this year more than 20 students registered for the practical lab course, most of them master's students.


The initial meeting took place last week. The students were split into seven groups of three. Their first task: familiarize themselves with the NRP and PyNN by solving the tutorial baseball experiment and the provided Python notebook exercises. All groups were given USB sticks with a live boot image so they could easily install the NRP, as well as access to an online version of the platform. Throughout the semester, the students will learn about neurorobotics and the platform by designing challenges and solving them.

Organizers: Camilo Vasquez Tieck, Jacques Kaiser, Martin Schulze, Lea Steffen

CDP4 at the HBP Summit: integrating deep models for visual saliency in the NRP

Back in the beginning of 2017, we had a great NRP Hackathon @FZI in Karlsruhe, where Alexander Kroner (SP4) presented his deep learning model for computing visual saliency.

We have now presented this integration at the Human Brain Summit 2017 in Glasgow as a collaboration within CDP4 – visuo-motor integration. During this presentation we also showed how to integrate arbitrary deep learning models into the Neurorobotics Platform, as previously presented at the Young Researcher Event by Kenny Sharma.

We will continue this collaboration with SP4 by connecting the saliency model to eye movements and memory modules.


Successful NRP User Workshop

Date: 24.07.2017
Venue: FZI, Karlsruhe, Germany

Thanks to all 17 participants for making this workshop a great experience.

Last week, we held a successful Neurorobotics Platform (NRP) User Workshop at FZI in Karlsruhe. We welcomed 17 attendees over three days, coming from various sub-projects (such as Martin Pearson, SP3) as well as from outside the HBP (Carmen Peláez-Moreno and Francisco José Valverde Albacete). We focused on hands-on sessions so that users got comfortable using the NRP themselves.


Thanks to our live boot image with the NRP pre-installed, even users who had not followed the local installation steps beforehand could run the platform locally in no time. During the first day, we provided a tutorial experiment, developed exclusively for the event, which walked the users through the many features of the NRP. The tutorial is inspired by the baby-playing-ping-pong video, here simulated with an iCub robot, and will soon be released with the official build of the platform.


On the second and third days, the users were given more freedom to implement their own experiments. We had short hands-on sessions on the Robot Designer, as well as on the Virtual Coach for offline optimization and analysis. Many new experiments were successfully integrated into the platform: the Miro robot from Consequential Robotics, a snake-like robot moving with Central Pattern Generators (CPGs), a revival of the Lauron experiment, …


We received great feedback from the users and are looking forward to organizing the next NRP User Workshop!


Short-term visual prediction – published

Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently.

In collaboration with Prof. Maass' group (IGI, TU Graz, SP9), we proposed a biologically inspired functional model. This model is based on liquid state machines and can learn to predict visual stimuli from address events provided by a Dynamic Vision Sensor (DVS).


We validated this model in various experiments, with both simulated and real DVS data. The results were accepted for publication [1]. We are now working on using these short-term visual predictions to control robots.

[1] “Scaling up liquid state machines to predict over address events from dynamic vision sensors”, Jacques Kaiser, Rainer Stal, Anand Subramoney et al., Bioinspiration & Biomimetics (special issue), 2017.