About me
This is a page not in the main menu.
Given a trajectory of states and actions that is a solution to a Linear Quadratic Regulator (LQR) optimization problem, the task is to find the cost parameters that generate that trajectory.
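For a bit more precision, one standard formulation of the problem, assuming known linear dynamics $x_{t+1} = A x_t + B u_t$, is: given a trajectory $\{x_t, u_t\}_{t=0}^{T}$, find cost matrices $Q \succeq 0$ and $R \succ 0$ such that the trajectory is optimal for

$$ J = \sum_{t=0}^{T} \left( x_t^{\top} Q x_t + u_t^{\top} R u_t \right), $$

i.e. such that the observed controls satisfy $u_t = -K x_t$ with $K = (R + B^{\top} P B)^{-1} B^{\top} P A$, where $P$ solves the corresponding discrete-time algebraic Riccati equation.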
This work uses Inverse Autoregressive Flows to model each time step of a sequential data point, and compares this architecture with the (then) state of the art on the OpenAI Gym pendulum dataset, showing that the method is only slightly worse while requiring fewer sampling steps.
Iterative Closest Point (ICP) is an algorithm employed to minimize the difference between two point clouds given an initial estimate of their relative pose. It is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and plan optimal paths, and to register medical scans. ICP consists of several steps, and each step may be implemented in various ways, giving rise to a multitude of ICP variants. This project implements and analyzes several of these variants, comparing them on the basis of execution speed and quality of the result.
This work re-implements Importance Weighted Autoencoders and Variational Autoencoders. The results and comparisons between these two models are laid out in a poster below.
Depth images are a rich source of information about their subjects and have been found to be particularly useful for tasks such as 3D reconstruction. This work aims to learn a supervised pixel-to-pixel mapping from an RGB image to its corresponding depth image. The architecture in this work is based on Li, Jun, Reinhard Klein, and Angela Yao, “Learning fine-scaled depth maps from single RGB images,” arXiv preprint (2016).
For more details, see the poster below.
Through this work, we propose incorporating Inverse Autoregressive Flows for determining the state space (latents) in a dynamical-system model. This reduces the number of samples that need to be drawn to approximate the posterior distribution (and thus the underlying states/latents for a set of observations and controls) from one per time step to one per sequence of observations. Our experiments with Pendulum-v0, an environment from OpenAI Gym, confirmed that the accuracy with which the observations are generated is close to the state of the art for sequence models.
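As a brief reminder (following Kingma et al., 2016, and not specific to our exact parameterization), an Inverse Autoregressive Flow starts from $z^{(0)} = \mu^{(0)} + \sigma^{(0)} \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$ and repeatedly applies

$$ z^{(t)} = \mu^{(t)} + \sigma^{(t)} \odot z^{(t-1)}, $$

where $\mu^{(t)}$ and $\sigma^{(t)}$ are produced by an autoregressive network from $z^{(t-1)}$ (and a context vector), so that the density of the final sample,

$$ \log q\big(z^{(T)} \mid x\big) = -\sum_{i} \Big( \tfrac{1}{2}\epsilon_i^{2} + \tfrac{1}{2}\log 2\pi + \sum_{t=0}^{T} \log \sigma_i^{(t)} \Big), $$

remains cheap to evaluate while being far more flexible than a factorized Gaussian.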
Iterative Closest Point (ICP) is an algorithm employed to minimize the difference between two point clouds given an initial estimate of their relative pose. It is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and plan optimal paths, and to register medical scans. ICP consists of several steps, and each step may be implemented in various ways, giving rise to a multitude of ICP variants. This project implements and analyzes several of these variants, comparing them on the basis of execution speed and quality of the result.
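For illustration, here is a minimal point-to-point ICP sketch in NumPy/SciPy, i.e. the baseline against which the project's variants can be understood (function and variable names are just for this example):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iters=50, tol=1e-6):
    """Point-to-point ICP: aligns `source` (N,3) onto `target` (M,3).

    Returns rotation R (3x3) and translation t (3,) such that
    R @ source[i] + t approximately matches its closest target point.
    """
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(n_iters):
        # 1. Correspondences: nearest neighbour in the target cloud.
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform for these correspondences (Kabsch / SVD).
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the increment, accumulate the total transform, check convergence.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

The variants studied in the project differ mainly in how correspondences are selected/weighted and which error metric is minimized; this sketch only shows the simplest choice of each.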
Importance Weighted Autoencoders are a variant of Variational Autoencoders with a more powerful posterior. The Variational Autoencoder is a generative modelling technique that assumes an underlying latent structure to the data x. To successfully generate samples from the same distribution x was generated from, one must have a model of the data that maximizes p(x). However, p(x) is the integral of p(x|z)p(z) over z, which is intractable. Therefore, we instead maximize a lower bound, assuming a variational posterior (since we cannot tractably compute the true posterior or sample from it).
Unfortunately, the VAE assumes the variational posterior to be factorial, which restricts the capacity of the model to distributions with a factorial true posterior. IWAE overcomes this restriction by sampling multiple times from the approximate posterior to obtain a tighter lower bound on p(x). The implied variational posterior provably approaches the true posterior in the limit of infinitely many samples.
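Concretely, with $k$ importance samples $z_1, \dots, z_k \sim q(z \mid x)$, IWAE maximizes the bound of Burda et al. (2015),

$$ \mathcal{L}_k(x) = \mathbb{E}_{z_1,\dots,z_k \sim q(z \mid x)}\left[ \log \frac{1}{k} \sum_{i=1}^{k} \frac{p(x, z_i)}{q(z_i \mid x)} \right] \le \log p(x), $$

which recovers the standard VAE ELBO for $k = 1$ and approaches $\log p(x)$ as $k \to \infty$.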
This work re-implements IWAE and VAE. The results and comparisons between these two models are laid out in a poster that can be downloaded by following the link below.
This project grew out of an internal course requirement during my master's. The goal of the project, on its own, is to segment human body parts from depth images. Results from several different U-Net-based models are compared in this work, with a particular focus on speed and accuracy.
Provided in this project are functions to obtain and preprocess the training data, train the segmentation graph, and freeze the graph to a protobuf file for later inference in C++.
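For reference, freezing a trained graph to a protobuf roughly looks like the sketch below, assuming the TensorFlow 1.x toolchain implied by the graph/protobuf workflow; the checkpoint paths and output node name are placeholders, not the ones used in this repository:

```python
import tensorflow as tf  # TensorFlow 1.x style API

# Hypothetical output node; the real name depends on the trained U-Net graph.
OUTPUT_NODE = "segmentation/logits"

with tf.Session() as sess:
    # Restore the trained weights into the graph.
    saver = tf.train.import_meta_graph("unet_model.ckpt.meta")
    saver.restore(sess, "unet_model.ckpt")
    # Bake the variables into constants and serialize the frozen graph.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), [OUTPUT_NODE])
    with tf.gfile.GFile("unet_frozen.pb", "wb") as f:
        f.write(frozen_graph_def.SerializeToString())
```

The resulting .pb file can then be loaded from C++ for inference without the Python training code.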
Further development of this work (not included in this repository) focuses on combining the segmented depth maps using KinectFusion to produce a segmented 3D point cloud/mesh. This segmented 3D model of the human body was used, in combination with other techniques, to provide a view into a patient’s body.
A detailed report on this work can be found in the link below.
Depth images are a rich source of information about their subjects and have been found to be particularly useful for tasks such as 3D reconstruction. This work aims to learn a supervised pixel-to-pixel mapping from an RGB image to its corresponding depth image. The architecture in this work is based on Li, Jun, Reinhard Klein, and Angela Yao, “Learning fine-scaled depth maps from single RGB images,” arXiv preprint (2016).
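To give a flavour of what a pixel-to-pixel depth predictor looks like, here is a deliberately tiny encoder-decoder sketch in PyTorch; the actual model follows the multi-scale architecture of Li et al. rather than this toy network:

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy encoder-decoder mapping a 3-channel RGB image to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

# One pixel-wise supervised training step against ground-truth depth (dummy data).
model, loss_fn = DepthNet(), nn.L1Loss()
rgb = torch.rand(4, 3, 128, 128)       # batch of RGB images
depth_gt = torch.rand(4, 1, 128, 128)  # corresponding ground-truth depth maps
loss = loss_fn(model(rgb), depth_gt)
loss.backward()
```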
For more details, see the code and the poster in the links below.
Published in github.com, 2018
This work explores the literature around deep learning sequence models, especially in the context of NLP.
Cite with: Das, Neha. (2018). Seminar: Deep Learning Sequence Modelling (Natural Language Processing). http://neha191091.github.io/files/seminar_nlp.pdf
Published in github.com, 2018
This work presents a novel method for semantically segmenting a 3D point cloud of a human
Cite with: Das, Neha. (2018). Development of a system that allows for the semantic segmentation of a 3D model of a human body into its constituent parts. http://neha191091.github.io/files/Semantic_Segmentation_IDP_Report.pdf
Published in arXiv, 2019
Learning a model of dynamics from high-dimensional images can be a core ingredient for success in many applications across different domains, especially in sequential decision making. However, currently prevailing methods based on latent-variable models are limited to working with low resolution images only. In this work, we show that some of the issues with using high-dimensional observations arise from the discrepancy between the dimensionality of the latent and observable space, and propose solutions to overcome them.
Cite with: N. Das, M. Karl, P. Becker-Ehmck and P. van der Smagt, "Beta DVBF: Learning State-Space Models for Control from High Dimensional Observations." 2019 arXiv preprint arXiv:1911.00756. https://arxiv.org/abs/1911.00756
Published in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
Being able to quickly adapt to changes in dynamics is paramount in model-based control for object manipulation tasks. In order to influence fast adaptation of the inverse dynamics model’s parameters, data efficiency is crucial. Given observed data, a key element to how an optimizer updates model parameters is the loss function. In this work, we propose to apply meta-learning to learn structured, state-dependent loss functions during a meta-training phase. We then replace standard losses with our learned losses during online adaptation tasks. We evaluate our proposed approach on inverse dynamics learning tasks, both in simulation and on real hardware data. In both settings, the structured and state-dependent learned losses improve online adaptation speed, when compared to standard, state-independent loss functions.
Cite with: K. Morse, N. Das, Y. Lin, A. S. Wang, A. Rai and F. Meier, "Learning State-Dependent Losses for Inverse Dynamics Learning," 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) https://ieeexplore.ieee.org/document/9341701
Published in Conference on Robot Learning (CoRL 2020), 2021
Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
Cite with: N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai and F. Meier, "Model-Based Inverse Reinforcement Learning from Visual Demonstrations," 2020 Conference on Robot Learning (CoRL) https://proceedings.mlr.press/v155/das21a/das21a.pdf (also: https://corlconf.github.io/paper_432)
Published in arXiv, 2021
Humans have impressive generalization capabilities when it comes to manipulating objects and tools in completely novel environments. These capabilities are, at least partially, a result of humans having internal models of their bodies and any grasped object. How to learn such body schemas for robots remains an open problem. In this work, we develop a self-supervised approach that can extend a robot’s kinematic model when grasping an object from visual latent representations.
Cite with: S. Bechtle, N. Das and F. Meier, "Learning Extended Body Schemas from Visual Keypoints for Object Manipulation." 2020 arXiv preprint arXiv:2011.03882. https://arxiv.org/abs/2011.03882
Published in 22nd IFAC World Congress: Yokohama, Japan, July 9-14, 2023, 2023
Data-driven control in unknown environments requires a clear understanding of the involved uncertainties for ensuring safety and efficient exploration. While aleatoric uncertainty that arises from measurement noise can often be explicitly modeled given a parametric description, it can be harder to model epistemic uncertainty, which describes the presence or absence of training data. The latter can be particularly useful for implementing exploratory control strategies when system dynamics are unknown. We propose a novel method for detecting the absence of training data using deep learning, which gives a continuous valued scalar output between 0 (indicating low uncertainty) and 1 (indicating high uncertainty). We utilize this detector as a proxy for epistemic uncertainty and show its advantages over existing approaches on synthetic and real-world datasets. Our approach can be directly combined with aleatoric uncertainty estimates and allows for uncertainty estimation in real-time as the inference is sample-free unlike existing approaches for uncertainty modeling. We further demonstrate the practicality of this uncertainty estimate in deploying online data-efficient control on a simulated quadcopter acted upon by an unknown disturbance model.
Cite with: Das, N., Umlauft, J., Lederer, A., Capone, A., Beckers, T., & Hirche, S., "Deep Learning based Uncertainty Decomposition for Real-time Control," 2023 IFAC-PapersOnLine, 56(2), 847-853 https://www.sciencedirect.com/science/article/pii/S2405896323020803
Published in Frontiers in Neurorobotics, 2023
Stroke survivors often compensate for the loss of motor function in their distal joints by altered use of more proximal joints and body segments. Since this can be detrimental to the rehabilitation process in the long-term, it is imperative that such movements are indicated to the patients and their caregiver. This is a difficult task since compensation strategies are varied and multi-faceted. Recent works that have focused on supervised machine learning methods for compensation detection often require a large training dataset of motions with compensation location annotations for each time-step of the recorded motion. In contrast, this study proposed a novel approach that learned a linear classifier from energy-based features to discriminate between healthy and compensatory movements and identify the compensating joints without the need for dense and explicit annotations.
Cite with: Das, N., Endo, S., Patel, S., Krewer, C., & Hirche, S., "Online detection of compensatory strategies in human movement with supervised classification: a pilot study," 2023 Frontiers in Neurorobotics, 17, 1155826 https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2023.1155826/full
Published in 2024 10th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), 2024
Stroke survivors and individuals with neuromuscular disorders often experience motor function impairments, particularly during hand movements crucial for activities of daily living (ADL). Functional Electrical Stimulation (FES) has emerged as a potential assistive and rehabilitative technique to address these limitations. However, accurately determining user intent during FES poses a significant challenge. This work proposes a framework for rapidly learning a model of the user’s hand intent from surface electromyography (sEMG) signals, specifically for continuous FES-based control of the ipsilateral hand. The framework systematically collects data from expected volitional and FES-evoked hand motions, followed by training a logistic regression model for intent classification. The study demonstrates that the proposed model can learn from limited data and compares favorably to deep neural nets trained on the same dataset. This model is able to recognize user intent with high accuracy even during concurrent FES stimulation.
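As a rough, hypothetical illustration of the final classification step only (the paper's feature extraction and data-collection protocol are not reproduced here), fitting a logistic-regression intent classifier on windowed sEMG features might look like:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: windowed sEMG recordings with one intent label per window.
# X_windows: (n_windows, n_channels, window_len), y: (n_windows,) intent class ids.
rng = np.random.default_rng(0)
X_windows = rng.standard_normal((200, 8, 256))
y = rng.integers(0, 3, size=200)

# Simple per-channel features (illustrative; the paper's features may differ):
# mean absolute value and root-mean-square of each channel within a window.
mav = np.abs(X_windows).mean(axis=2)
rms = np.sqrt((X_windows ** 2).mean(axis=2))
X = np.concatenate([mav, rms], axis=1)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)
intent_probs = clf.predict_proba(X[:1])  # continuous class probabilities for control
```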
Cite with: N. Das, S. Endo, H. Kavianirad and S. Hirche, "Framework for Learning a Hand Intent Recognition Model from sEMG for FES-Based control," 2024 10th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Heidelberg, Germany, 2024, pp. 1320-1327, doi: 10.1109/BioRob60516.2024.10719910 https://ieeexplore.ieee.org/abstract/document/10719910
Published in TechRxiv (2025), 2025
Occurrence of compensatory motor behaviors is common after stroke and other movement disorders, and can hinder patient rehabilitation. Detection of such behaviors is essential but traditionally relies on subjective, labor-intensive manual assessment. While supervised learning has been explored for automation, its effectiveness is constrained by inter-individual variability in compensatory strategies and the difficulty of obtaining detailed labels. To address these challenges, we present an unsupervised anomaly detection framework that models healthy motion distributions using Probabilistic Movement Primitives and quantifies deviations to detect potential compensations. The transparent structure of the proposed framework enables localization of compensation sources and differentiation between compensating and inhibited degrees of freedom, thereby enhancing its explainability and providing deeper insight into its outcomes. The framework was validated on a reaching-motion dataset, and achieved high accuracy in detecting compensation among stroke patients (F1 = 0.95). Trials from the evaluation dataset were further annotated with one or more compensation sources by multiple raters, yielding a rich dataset with quantifiable label uncertainties. Evaluation indicated that the framework achieved high performance in isolating compensation sources, particularly when annotator agreement is high.
Cite with: N. Das, S. Endo, S. Patel, M. Rossini, A. De Crignis, E. Guanziroli, G. Palumbo, A. Specchia, C. Krewer and S. Hirche, "Explainable Unsupervised Anomaly Detection for Identifying Compensatory Motor Behavior in Stroke Rehabilitation," 2025 TechRxiv preprint https://www.techrxiv.org/users/993864/articles/1355353-explainable-unsupervised-anomaly-detection-for-identifying-compensatory-motor-behavior-in-stroke-rehabilitation