Posts by Collection

notes

Inverse LQR


Given a trajectory of states and actions that is a solution of a Linear Quadratic Regulator (LQR) optimization, the problem is to find cost parameters that generate the trajectory.
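As a toy illustration of the problem (not the method used in the note), here is a minimal sketch for the scalar discrete-time case. Since LQR gains depend only on the ratio of state cost to control cost, we fix the control cost r = 1 and recover the state cost q by matching the observed feedback gain. All names and values below are illustrative.

```python
import numpy as np

def lqr_gain(a, b, q, r, iters=500):
    """Solve the scalar discrete-time Riccati equation by fixed-point
    iteration and return the optimal feedback gain k (u = -k x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k
    return (b * p * a) / (r + b * p * b)

def inverse_lqr(a, b, k_obs, qs):
    """Grid-search the state cost q (with r = 1 fixed) that best
    reproduces the observed feedback gain k_obs."""
    errs = [abs(lqr_gain(a, b, q, 1.0) - k_obs) for q in qs]
    return qs[int(np.argmin(errs))]

# Generate a gain from a "true" cost, then recover it.
a, b, q_true = 1.1, 0.5, 2.0
k_obs = lqr_gain(a, b, q_true, 1.0)
q_hat = inverse_lqr(a, b, k_obs, np.linspace(0.1, 5.0, 491))
```

Real inverse-LQR formulations treat matrix-valued costs and solve the stationarity conditions directly rather than grid-searching, but the sketch shows the core identifiability issue: only cost ratios are recoverable from the trajectory.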

posters

IAF_Dynamics

This work uses Inverse Autoregressive Flows to model each time step of a sequential data point, and compares this architecture with the (then) state of the art on the OpenAI Gym pendulum dataset, showing that the proposed method is only slightly worse while requiring fewer samples.

Iterative Closest Point Analysis

Iterative Closest Point (ICP) is an algorithm used to minimize the difference between two point clouds given an initial estimate of their relative pose. It is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots for path planning, and to register medical scans. ICP consists of several steps, each of which can be implemented in various ways, giving rise to a multitude of ICP variants. This project implements and analyzes several variants of ICP, comparing them on execution speed and quality of the result.

Importance Weighted Autoencoders

This work re-implements Importance Weighted Autoencoders and Variational Autoencoders. The results and comparisons between these two models are laid out in a poster below.

Synthesis of Depth images from RGB images

Depth images are a rich source of information about their subjects and have proven particularly useful for tasks such as 3D reconstruction. This work aims to learn a supervised pixel-to-pixel mapping from an RGB image to its corresponding depth image. The architecture is based on Li, Jun, Reinhard Klein, and Angela Yao, "Learning fine-scaled depth maps from single RGB images," arXiv preprint (2016).

For more details, see the poster below.

projects

Dynamics with Inverse Autoregressive Flows

Through this work, we propose incorporating Inverse Autoregressive Flows for determining the state space (latents) in a dynamical-system model. This reduces the number of samples needed to approximate the posterior distribution (and thus the underlying states/latents for a set of observations and controls) from one per time step to one per sequence of observations. Our experiments with Pendulum-v0, an environment from OpenAI Gym, confirm that the accuracy with which the observations are generated is close to the state of the art for sequence models.
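A single IAF step can be sketched as below, using the numerically stable gated update from Kingma et al. (2016) with a hypothetical masked linear network standing in for the autoregressive nets; this is an illustration of the transform, not the architecture used in the project. Because the shift m and gate s are autoregressive in the *input* z, the forward pass is a single parallel computation and the log-determinant is a simple sum.

```python
import numpy as np

def iaf_step(z, W_m, W_s, b_m, b_s):
    """One Inverse Autoregressive Flow step. m and s depend on the input z
    through strictly lower-triangular (autoregressive) weights, so
    z' = sigmoid(s) * z + (1 - sigmoid(s)) * m is computed in parallel."""
    L = np.tril(np.ones_like(W_m), k=-1)  # enforce autoregressive masking
    m = z @ (W_m * L).T + b_m
    s = z @ (W_s * L).T + b_s
    gate = 1.0 / (1.0 + np.exp(-s))       # sigmoid gate
    z_new = gate * z + (1.0 - gate) * m
    # Jacobian is triangular with diagonal = gate, so:
    log_det = np.log(gate).sum(axis=-1)
    return z_new, log_det

# Toy forward pass with random (illustrative) parameters.
rng = np.random.default_rng(0)
d = 4
z = rng.standard_normal((3, d))
W_m = rng.standard_normal((d, d)) * 0.1
W_s = rng.standard_normal((d, d)) * 0.1
b_m, b_s = np.zeros(d), np.ones(d)
z_new, log_det = iaf_step(z, W_m, W_s, b_m, b_s)
```

Stacking several such steps (with permutations in between) yields a flexible posterior whose density is still cheap to evaluate via the accumulated log-determinants.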

Iterative Closest Point Analysis

Iterative Closest Point (ICP) is an algorithm used to minimize the difference between two point clouds given an initial estimate of their relative pose. It is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots for path planning, and to register medical scans. ICP consists of several steps, each of which can be implemented in various ways, giving rise to a multitude of ICP variants. This project implements and analyzes several variants of ICP, comparing them on execution speed and quality of the result.
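The matching and alignment steps can be sketched as a minimal point-to-point ICP in NumPy, with brute-force nearest neighbours and the SVD (Kabsch) closed-form alignment; this is an illustrative baseline, not the project's implementation, and the toy data below are made up for the example.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (point-to-point, Kabsch/SVD solution)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Basic point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        # Matching step: closest point in dst for every point in cur.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        # Alignment step: best rigid transform onto the matches.
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    # Recover the accumulated transform in one shot.
    return best_rigid_transform(src, cur)

# Toy check: recover a small known rotation + translation in 2D.
rng = np.random.default_rng(0)
pts = rng.standard_normal((40, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.02])
dst = pts @ R_true.T + t_true
R_est, t_est = icp(pts, dst)
```

The ICP variants analyzed in the project differ precisely in these steps: how candidate pairs are matched and weighted, which error metric (point-to-point vs. point-to-plane) the alignment minimizes, and how outlier pairs are rejected.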

Reimplementation of Importance Weighted Autoencoders

Importance Weighted Autoencoders are a variant of Variational Autoencoders with a more powerful posterior. The Variational Autoencoder is a generative modelling technique that assumes an underlying latent structure to the data x. To successfully generate samples from the same distribution x was generated from, one must have a model of the data that maximizes p(x). However, p(x) is the integral of p(x|z)p(z) over z, which is intractable. Therefore, we instead maximize a lower bound by assuming a variational posterior (since we cannot tractably compute the true one or sample from it).

Unfortunately, the VAE assumes a factorized variational posterior, which restricts the capacity of the model to distributions whose true posterior is factorized. The IWAE overcomes this limitation by sampling multiple times from the approximate posterior to obtain a tighter lower bound on log p(x). The implied variational posterior provably approaches the true posterior in the limit of infinitely many samples.
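The multi-sample bound can be illustrated on a toy linear-Gaussian model where log p(x) is available in closed form; the model and the deliberately crude proposal below are hypothetical stand-ins for the networks used in the project. The k-sample IWAE bound is the expected log of the *mean* importance weight, which for k = 1 reduces to the standard ELBO.

```python
import numpy as np

def log_gauss(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def iwae_bound(x, k, n_batch=200_000, seed=0):
    """Monte-Carlo estimate of the k-sample IWAE bound
    L_k = E[log(1/k sum_i p(x, z_i) / q(z_i | x))] for a toy model:
    z ~ N(0, 1), x | z ~ N(z, 1), with a crude proposal q(z|x) = N(x/2, 1)
    (the true posterior is N(x/2, 1/2), so q is over-dispersed)."""
    rng = np.random.default_rng(seed)
    mu_q, var_q = x / 2, 1.0
    z = mu_q + np.sqrt(var_q) * rng.standard_normal((n_batch, k))
    log_w = (log_gauss(z, 0.0, 1.0)        # prior      log p(z)
             + log_gauss(x, z, 1.0)        # likelihood log p(x|z)
             - log_gauss(z, mu_q, var_q))  # proposal   log q(z|x)
    # log(1/k sum exp(log_w)) with the usual max-shift for stability.
    m = log_w.max(axis=1, keepdims=True)
    log_mean_w = m.squeeze(1) + np.log(np.exp(log_w - m).mean(axis=1))
    return log_mean_w.mean()

x = 1.5
log_px = log_gauss(x, 0.0, 2.0)   # exact log p(x): x ~ N(0, 1 + 1)
l1 = iwae_bound(x, k=1)           # k = 1 recovers the standard ELBO
l8 = iwae_bound(x, k=8)           # tighter bound, still below log p(x)
```

Increasing k provably tightens the bound toward log p(x), which is exactly the effect the poster's comparisons measure.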

This work re-implements IWAE and VAE. The results and comparisons between these two models are laid out in a poster that can be downloaded by following the link below.

Semantic Segmentation of human depth images

This project grew out of a course requirement during my master's. Its goal is to segment human body parts in depth images. Results from several U-Net-based models are compared in this work, with particular focus on high speed and accuracy.

Provided in this project are functions to obtain and preprocess the training data, train the segmentation graph, and freeze the graph to a protobuf file for later inference in C++.

Further development of this work (not included in this repository) focuses on combining the segmented depth maps via KinectFusion into a segmented 3D point cloud/mesh. This segmented 3D model of the human body was combined with other techniques to provide a view into a patient's body.

A detailed report on this work can be found in the link below.

Synthesis of Depth images from RGB images

Depth images are a rich source of information about their subjects and have proven particularly useful for tasks such as 3D reconstruction. This work aims to learn a supervised pixel-to-pixel mapping from an RGB image to its corresponding depth image. The architecture is based on Li, Jun, Reinhard Klein, and Angela Yao, "Learning fine-scaled depth maps from single RGB images," arXiv preprint (2016).

For more details, see the code and the poster in the links below.

publications

Beta-DVBF

Published in arXiv, 2019

In this work, we extend a popular architecture for learning a dynamical system - Deep Variational Bayes Filter - to incorporate high-dimensional image data.

Cite with: N. Das, M. Karl, P. Becker-Ehmck and P. van der Smagt, "Beta DVBF: Learning State-Space Models for Control from High Dimensional Observations." 2019 arXiv preprint arXiv:1911.00756. https://arxiv.org/abs/1911.00756

Learning State-Dependent Losses for Inverse Dynamics Learning

Published in IROS, 2020

In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. This allows us to quickly adapt the model to changes in dynamics.

Cite with: K. Morse, N. Das, Y. Lin, A. S. Wang, A. Rai and F. Meier, "Learning State-Dependent Losses for Inverse Dynamics Learning," 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) https://ieeexplore.ieee.org/document/9341701

Model-Based Inverse Reinforcement Learning from Visual Demonstrations

Published in CoRL, 2021

In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control.

Cite with: N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai and F. Meier, "Model-Based Inverse Reinforcement Learning from Visual Demonstrations," 2020 Conference on Robot Learning (CoRL) https://corlconf.github.io/paper_432

Learning Extended Body Schemas from Visual Keypoints for Object Manipulation.

Published in arXiv, 2021

Humans have impressive generalization capabilities when it comes to manipulating objects and tools in completely novel environments. These capabilities are, at least partially, a result of humans having internal models of their bodies and any grasped object. How to learn such body schemas for robots remains an open problem. In this work, we develop a self-supervised approach that can extend a robot's kinematic model when grasping an object, using visual latent representations.

Cite with: S. Bechtle, N. Das and F. Meier, "Learning Extended Body Schemas from Visual Keypoints for Object Manipulation." 2020 arXiv preprint arXiv:2011.03882. https://arxiv.org/abs/2011.03882