
Videnskab.dk came to interview DTU Center for Playware!

by Ismael Baira Ojeda | Research assistant at DTU Center for Playware.

The research of the DTU Center for Playware and the Human Brain Project does not go unnoticed in Denmark.

[Image: robot_henrik_2_1]

Professor Henrik Hautop Lund is in charge of Denmark’s contribution to a major EU project to map the human brain. His group of researchers at the DTU Center for Playware, including Silvia Tolu and Ismael Baira Ojeda, is developing cerebellar-like models that, together with machine learning algorithms, control modular robots and teach them how to move (Photo: Henrik Hautop Lund, DTU Electrical Engineering).

Below is a translation of the article:

Artificial brains to provide innovative brain-like technologies.

Approximately 100 research groups collaborate within the Human Brain Project, working on a range of topics in neuroscience and robotics research.

“Our role involves robotics research, that is, creating models of the brain to be put into a simulation of a physical body. We must not only create a complete artificial brain but also implement the interaction between ‘nerve signals’ and movement,” explains Henrik Hautop Lund, head of the Danish contribution to the project.

DTU researchers implement cerebellar-like models on the neuromorphic SpiNNaker platform. These models are linked via radio to the robot modules, achieving motor control and learning of the desired trajectory.

How is it done?

The artificial brain is implemented in simulation or on neuromorphic hardware. The brain-like model sends signals to a radio transmitter, which relays them to the robot. When the robot receives the radio signals, it reads them and traces out the movement defined by the code. Source: Ismael Baira Ojeda.

Click here to watch our short demo! – Video edited by Videnskab.dk

This interaction between brain models and robot actuators may in the future enable the development of more flexible prostheses with more human-like movement, explains Henrik Hautop Lund.

“We may eventually create robots that are more compliant and that can adapt better to new or uncertain environments while achieving smooth movements,” comments Ismael Baira Ojeda.

At the same time, Henrik Hautop Lund thinks that in the future we will be able to enjoy household robots that can better adapt to different households and needs.

“It is not good for a robot to have stiff, rigidly precise movements that could injure a person if it is to be part of a household or collaborate with humans,” says Ismael Baira Ojeda.

Click here if you feel like reading the original Videnskab.dk article in Danish.


NRP User Hackathon @ Karlsruhe

At the end of the ramp-up phase, we realized that the Neurorobotics Platform (NRP) lacked users despite its increasing maturity. To resolve this, in SGA-1 we split the core team in two: a development team and a research team. The former would continue developing the NRP, while the latter would become driving users. This split became particularly interesting with the arrival of new potential users, such as newly joining SP10 partners and CDP collaborators.

To engage those potential users with the NRP, we organized the first NRP User Hackathon at FZI in Karlsruhe from the 15th to the 17th of February 2017. During those three days, two NRP and robotics experts (Jacques Kaiser, FZI, and Alessandro Ambrosano, SSSA) helped neuroscientists Alban Bornet (EPFL, joining SP10) and Alexander Kroner (Maastricht University, CDP4 partner) integrate their models into the NRP. With the participants’ varied backgrounds, the small-group pair-programming setup allowed everyone to learn from the others.

For Alban’s visual segmentation model, implemented in PyNN+NEST, the NRP integration shed light on interesting performance in realistic scenes, where we could interact with the environment by moving cubes around. His model was the most complex neural model ever run within the NRP, consisting of more than 50,000 neurons and 300,000 synapses for a 20×20 input image and simple settings. To speed up the model’s run time, we also ported it to SpiNNaker in batch processing mode, as sketched below.
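Such a port leans on PyNN’s simulator-independent API: the same network description can run on NEST or on SpiNNaker by swapping the backend import. The toy network below is a minimal sketch of that pattern, not Alban’s actual model; sizes and parameters are purely illustrative.

import pyNN.nest as sim          # NEST backend
# import pyNN.spiNNaker as sim   # SpiNNaker backend: the rest stays unchanged

sim.setup(timestep=1.0)

# Illustrative sizes only; the real model used more than 50,000 neurons
retina = sim.Population(20 * 20, sim.SpikeSourcePoisson(rate=10.0))
layer = sim.Population(400, sim.IF_cond_exp())
sim.Projection(retina, layer,
               sim.FixedProbabilityConnector(0.05),
               sim.StaticSynapse(weight=0.002, delay=1.0))

layer.record('spikes')
sim.run(1000.0)                  # simulated milliseconds
spikes = layer.get_data('spikes')
sim.end()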

[Image: hackathon-team]

For Alexander, we were able to connect his bottom-up visual attention model, implemented with Keras+Theano (deep learning frameworks), to ROS, and consequently to the NRP. This gave us some insights into how we might implement an upcoming NRP feature: running arbitrary user code. In this instance, we wrapped his model in a ROS node converting input images to saliency-map images, along the lines of the sketch below. One could imagine a spiking network model taking those saliency maps as input and performing saccadic eye movements.
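As a rough sketch of what such a wrapper can look like (the topic names, encodings, model file, and output shape are hypothetical placeholders, not the actual CDP4 code):

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from keras.models import load_model


class SaliencyNode(object):
    def __init__(self):
        self.model = load_model('saliency.h5')  # hypothetical model file
        self.bridge = CvBridge()
        self.pub = rospy.Publisher('/saliency_map', Image, queue_size=1)
        rospy.Subscriber('/camera/image_raw', Image, self.on_image, queue_size=1)

    def on_image(self, msg):
        # ROS image -> numpy array -> saliency map -> ROS image
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        # Assumes the model maps a batch of RGB frames to 2D float saliency maps
        saliency = self.model.predict(frame[None].astype('float32'))[0]
        self.pub.publish(self.bridge.cv2_to_imgmsg(saliency, encoding='32FC1'))


if __name__ == '__main__':
    rospy.init_node('saliency_node')
    SaliencyNode()
    rospy.spin()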

[Image: saliency-combined]

Both Alban and Alexander adopted the Neurorobotics Platform and will spread the word in their respective labs. After the success of this hackathon, we will likely organize more soon to grow the NRP user base organically.

The Closed Loop Engine Architecture Explained

Simulating arbitrary neurorobotics experiments, in which a neural network is coupled to a simulated robot as its physical counterpart, requires abstracting away the technical details.

Specification in Python

For the specification of a closed loop in the Neurorobotics Platform, we chose the Python language, as Python is very popular among neuroscientists and generally easy to learn. The specification of a closed loop is divided into Transfer Functions, which can be specified in a Python-internal DSL called PyTF. PyTF essentially defines a set of decorators that specify how the parameters of a regular Python function should be mapped to the neural network or to robot sensor or control channels. A Transfer Function in PyTF looks as follows:

import hbp_nrp_cle.tf_framework as nrp
from geometry_msgs.msg import Vector3, Twist

# Map two leaky-integrator devices onto actor neurons of the brain model
@nrp.MapSpikeSink("left_wheel_neuron", nrp.brain.actors[0], nrp.leaky_integrator_alpha)
@nrp.MapSpikeSink("right_wheel_neuron", nrp.brain.actors[1], nrp.leaky_integrator_alpha)
# Publish the return value on the robot's velocity command topic
@nrp.Neuron2Robot(nrp.Topic('/husky/cmd_vel', Twist))
def wheel_transmit(t, left_wheel_neuron, right_wheel_neuron):
    # Drive forward according to the smaller of the two membrane voltages ...
    linear = Vector3(20 * min(left_wheel_neuron.voltage,
                              right_wheel_neuron.voltage), 0, 0)
    # ... and turn according to the voltage difference
    angular = Vector3(0, 0, 100 *
                      (right_wheel_neuron.voltage - left_wheel_neuron.voltage))
    return Twist(linear=linear, angular=angular)

Here, the decorators describe how the parameters of the underlying Python function are mapped to robot control channels and neural network information. Each decorator specifies which parameter is mapped and how. The first parameter must be named t and must not be mapped; instead, it is automatically filled with the simulation time.

Runtime Architecture

From this specification, the CLE deduces a runtime architecture for the Transfer Function. For the Transfer Function above, this creates the Transfer Function component WheelTransmit in the diagram below. The CLE then creates the necessary components to connect each required interface of the Transfer Function component with a respective implementation.

[Diagram: runtimearchitecture]

Not shown in the diagram, the CLE also deduces a specification of which neurons the leaky integrator components should be connected to. Furthermore, the choice of the concrete component type is deferred to the static architecture described below, as it depends on the chosen neural and world simulators.

A Transfer Function generally implements only an open loop; it either forwards information from the neural network to the robot or the other way round. To establish a closed loop, further Transfer Functions in the opposite direction are required, as depicted in the lower part of the diagram above and sketched below.
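Following the PyTF pattern above, a Transfer Function in the opposite direction could look like the sketch below. The topic name, brain indices, and rate encoding are illustrative assumptions, not an actual experiment.

import hbp_nrp_cle.tf_framework as nrp
from sensor_msgs.msg import Image

# Subscribe to the robot's camera topic
@nrp.MapRobotSubscriber("camera", nrp.Topic('/husky/camera', Image))
# Map a Poisson spike generator onto sensor neurons of the brain model
@nrp.MapSpikeSource("sensor_neurons", nrp.brain.sensors[0:2], nrp.poisson)
@nrp.Robot2Neuron()
def camera_transmit(t, camera, sensor_neurons):
    image = camera.value  # latest message received on the topic, or None
    if image is not None:
        # Encode the mean image brightness as a firing rate (Hz); the
        # actual encoding would be experiment-specific.
        brightness = sum(bytearray(image.data)) / float(len(image.data))
        sensor_neurons.rate = brightness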

Static Architecture

To assemble the runtime architecture from a given specification, the CLE uses a static architecture to dispatch which components should be created for a given interface based on the chosen neural and world simulator. A diagram of this architecture is shown below.

[Diagram: staticclearchitecture]

For each world and neural simulator, the CLE distinguishes between components managing the control flow and components managing the data flow. The concrete implementations are encapsulated behind one of four interfaces, making the simulators used by the CLE easy to exchange. This separation also makes it possible to reuse data flow implementations across multiple simulators. For example, multiple world simulators use ROS to communicate with the robot. A simplified sketch of the four interfaces follows below.
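The sketch below illustrates the control/communication split; the interface and method names are modeled on the CLE's adapters but abbreviated here, and the real interfaces in hbp_nrp_cle carry more operations.

class IBrainControlAdapter(object):
    """Controls the neural simulator: initialization, stepping, shutdown."""
    def run_step(self, dt):
        """Advance the neural simulation by dt milliseconds."""
        raise NotImplementedError

class IBrainCommunicationAdapter(object):
    """Manages data flow to and from the neural simulator."""
    def register_spike_sink(self, neurons, sink_type):
        """Create a device (e.g. a leaky integrator) reading from neurons."""
        raise NotImplementedError

class IRobotControlAdapter(object):
    """Controls the world simulator: initialization, stepping, shutdown."""
    def run_step(self, dt):
        """Advance the world simulation by dt milliseconds."""
        raise NotImplementedError

class IRobotCommunicationAdapter(object):
    """Manages data flow to and from the robot, e.g. over ROS topics."""
    def register_publish_topic(self, topic):
        """Create a publisher for a robot control channel."""
        raise NotImplementedError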


New Frontiers Article Explains the Technology Powering the HBP Neurorobotics Platform

[Image: frontiers]

After our recent Science Supplement article on neurorobotics in the Human Brain Project, our paper “Connecting artificial brains to robots in a comprehensive simulation framework: the Neurorobotics Platform” has now been accepted for publication by Frontiers in Neurorobotics. Which of the papers should you read? Definitely both! The Science Supplement article states the key concepts of neurorobotics and outlines how they are reflected in the HBP Neurorobotics Workflow. The new Frontiers paper explains how this workflow is actually implemented in the Neurorobotics Platform:

“Combined efforts in the fields of neuroscience, computer science and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in-silico experimentation.

In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments.”

Can’t wait to start reading the paper? A preprint is available for free open-access download from Frontiers.