I'm a Master's student in the Electrical and Computer Engineering department at Carnegie Mellon University. I work with Prof. Guanya Shi at the LeCAR Lab on Learning and Control of legged robots, with a current focus on improving the transferability of simulation-trained policies and enabling reliable adaptation during deployment. I also collaborate with the Search-Based Planning Lab on Multi-Agent Quadrupeds. Additionally, I work with the Dynamic Robotics and Control Laboratory under the guidance of Prof. Quan Nguyen on Prior-guided Reinforcement Learning for Agile Legged Robot Control.
Previously, I received my Bachelor's in Electrical Engineering from IIT Madras, during which I interned at DiFACTO Robotics and Automation, working on navigation and recovery for their in-house AMR. I also worked as an undergraduate research assistant at the Control Engineering Laboratory with Dr. Bharath Bhikkaji on Optimal Strategies for 1vN Pursuit-Evasion Games and Multi-agent Quadrotor Tracking.
Apart from this, I was part of Team Anveshak, the student-run Mars rover team of IIT Madras that competes in the University Rover Challenge at the Mars Desert Research Station, Utah. I worked there as an Embedded Systems and Control Engineer and was later promoted to Team Lead for 2022-2023.
I'm currently looking for PhD positions for Fall 2026. My research statement, covering my prior research and future interests, can be found here.
My research goal is to develop physics-aware learning and control frameworks that enable robots to acquire scalable, contact-rich skills and execute them robustly in the real world. I am particularly interested in Learning-based Control (Agile Locomotion, Loco-Manipulation, and Human Motion Tracking), Real2Sim2Real (System Identification, Dynamics-Aware Sim-to-Real Adaptation, Active Exploration, and reality-gap-aware benchmarks), and Multi-Robot Coordination spanning algorithms and training pipelines.
News
August 2025: SPI-Active accepted to CoRL 2025 (Oral)🎉.
July 2025: MAPF for Quadrupeds accepted to ICAPS 2025 (Demo Track)🎉.
May 2025: Preferential OGMP accepted to IROS 2025🎉.
April 2025: ASAP accepted to RSS 2025🎉.
August 2024: Started my Masters in ECE at Carnegie Mellon University.
Publications
Sampling-Based System Identification with Active Exploration for Legged Robot Sim2Real Learning
CoRL 2025 (Oral) (All Strong Accepts)
Nikhil Sobanbabu, Guanqi He, Tairan He, Yuxiang Yang, Guanya Shi
HDMI: Learning Interactive Humanoid Whole-Body Control from Human Videos
Haoyang Weng, Yitang Li, Nikhil Sobanbabu, Zihan Wang, Zhengyi Luo, Tairan He, Deva Ramanan, Guanya Shi
Augmenting Learned Centroidal Controller with Adaptive Force Control
OCRL Project (Spring 2025)
Nikhil Sobanbabu, Kailash Jagadeesh, Tony Tao, Bharath Sateeshkumar
Improving the payload adaptability of the CaJun controller by augmenting it with ℒ1 adaptive control.
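Below is a minimal scalar sketch of the augmentation idea, assuming a first-order error model, gradient-type adaptation, and illustrative gains; it is not the project's implementation, only the standard predictor / adaptation / low-pass-filter structure of an ℒ1 augmentation around a baseline command.

```python
import numpy as np

# Assumed model: x_dot = a_m * x + b * (u + sigma), where sigma lumps the unknown
# payload effect. The learned centroidal controller supplies u_base; the adaptive
# term estimates sigma and cancels its low-frequency content through a filter.
dt = 0.002       # control period [s] (assumed)
a_m = -10.0      # desired stable error dynamics (assumed)
b = 1.0          # control effectiveness (assumed)
gamma = 500.0    # adaptation gain (assumed)
omega_c = 20.0   # low-pass cutoff of the adaptive command [rad/s] (assumed)

x_hat = 0.0      # predictor state
sigma_hat = 0.0  # uncertainty estimate
u_ad = 0.0       # filtered adaptive correction


def l1_augmented_command(x, u_base):
    """One control step: baseline command plus filtered adaptive correction."""
    global x_hat, sigma_hat, u_ad
    x_tilde = x_hat - x                                            # prediction error
    # State predictor driven by the applied input and current uncertainty estimate
    x_hat += dt * (a_m * x_hat + b * (u_base + u_ad + sigma_hat))
    # Gradient-type adaptation (a simplification of the piecewise-constant L1 law)
    sigma_hat += dt * (-gamma * b * x_tilde)
    # Low-pass filter the cancellation signal (the defining L1 structure)
    u_ad += dt * omega_c * (-sigma_hat - u_ad)
    return u_base + u_ad


# Example call with hypothetical numbers: measured error state x, baseline command.
print(l1_augmented_command(x=0.05, u_base=1.0))
```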
Multi-agent trajectory tracking for Crazyflie Quadrotors
Coordinated autonomous control of multi-agent quadrotors in a regular polygon formation with varied orientations, using the onboard Mellinger controller.
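As a small illustration of the setup (not the project code), the sketch below generates regular-polygon position and yaw setpoints for the agents; tracking each setpoint is left to the Crazyflie's onboard Mellinger controller, and all parameters are assumed.

```python
import numpy as np

def polygon_formation(n_agents, radius, center=(0.0, 0.0, 1.0), yaw_offset=0.0):
    """Return (position, yaw) setpoints placing n_agents on a regular polygon.

    Each agent gets a different heading (here: facing the formation center),
    which is what "varied orientations" refers to. Values are illustrative.
    """
    cx, cy, cz = center
    setpoints = []
    for i in range(n_agents):
        theta = 2.0 * np.pi * i / n_agents
        x = cx + radius * np.cos(theta)
        y = cy + radius * np.sin(theta)
        yaw = theta + np.pi + yaw_offset    # face inward, plus a common offset
        setpoints.append(((x, y, cz), yaw))
    return setpoints

# Example: 5 Crazyflies on a 1 m circle at 1 m altitude; each (pos, yaw) pair
# would be streamed as a setpoint tracked by that drone's Mellinger controller.
for pos, yaw in polygon_formation(n_agents=5, radius=1.0):
    print(pos, round(yaw, 2))
```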
Receding-Horizon mode planner against perturbations
Nikhil Sobanbabu, Lokesh Krishna
Planner behaviours are encoded as latent modes and chosen via Monte Carlo rollouts, making the planner robust against perturbations and generating emergent transitions between modes/behaviours.
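The sketch below illustrates the structure of that loop with stand-in `rollout` and `score` functions (both hypothetical placeholders for the project's simulator and cost): every horizon, each latent mode is evaluated with perturbed Monte Carlo rollouts and the best-scoring mode is executed, so mode transitions emerge from the scores.

```python
import random

# Illustrative stand-ins for the project's simulator and cost: `rollout` would run
# the closed-loop system for one horizon under a latent mode with a sampled
# perturbation, and `score` would rate the resulting trajectory.
def rollout(mode, state, perturbation):
    return [state + mode + perturbation]      # placeholder one-step "trajectory"

def score(trajectory):
    return -abs(trajectory[-1])               # placeholder objective: stay near zero

def select_mode(state, modes, n_rollouts=32):
    """Pick the latent mode whose perturbed rollouts score best on average."""
    best_mode, best_value = None, float("-inf")
    for mode in modes:
        values = [score(rollout(mode, state, random.gauss(0.0, 0.3)))
                  for _ in range(n_rollouts)]
        mean_value = sum(values) / len(values)
        if mean_value > best_value:
            best_mode, best_value = mode, mean_value
    return best_mode

# Receding-horizon loop: the mode is re-chosen every horizon, so switches between
# behaviours emerge from the rollout scores rather than from a fixed schedule.
state, modes = 0.7, [-0.5, 0.0, 0.5]
for _ in range(5):
    mode = select_mode(state, modes)
    state = rollout(mode, state, 0.0)[-1]     # "execute" the chosen mode
    print(f"state={state:+.2f}  chosen mode={mode:+.2f}")
```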
Multi-agent Game-Theoretic Framework for the Target-Attacker-Defender game
Nikhil Sobanbabu, Shivendra Verma
Simulation environment for the single-attacker, single-target, multiple-defender pursuit-evasion differential game.
Course Projects
Swing-up and Stabilisation of an Inverted Pendulum
Course EE6415 Non-Linear System Analysis
Swing-up is performed using a control law derived from an energy-based Lyapunov function. Once the pendulum reaches an appropriate angle, pole-placement-based stabilisation takes over.
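A condensed sketch of that hybrid logic, assuming a torque-actuated pendulum with illustrative parameters and an assumed pole-placement gain (the actual course setup and gains may differ):

```python
import numpy as np

# Illustrative pendulum parameters; theta is measured from the upright position.
m, l, g = 0.2, 0.5, 9.81
k_energy = 1.0                      # energy-shaping gain (assumed)
K = np.array([25.0, 5.0])           # pole-placement state-feedback gain (assumed)
switch_angle = np.deg2rad(20.0)     # hand-off angle to the linear stabiliser

def control(theta, theta_dot):
    """Energy-based swing-up far from upright, pole placement near it."""
    # Energy relative to the upright equilibrium at rest (E = 0 there).
    E = 0.5 * m * l**2 * theta_dot**2 + m * g * l * (np.cos(theta) - 1.0)
    if abs(theta) > switch_angle:
        # From the Lyapunov function V = 0.5 * E**2: since E_dot = u * theta_dot,
        # u = -k * E * theta_dot gives V_dot = -k * E**2 * theta_dot**2 <= 0,
        # pumping the energy toward its upright value.
        return -k_energy * E * theta_dot
    # Near upright, switch to linear state feedback u = -K x (pole placement).
    return -float(K @ np.array([theta, theta_dot]))
```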
Motion planning for a KUKA mobile manipulator
Course ED5215 Intro to Motion Planning
Nikhil Sobanbabu, Balaji R, Kanishkan M S
Problem statement: Optimal pick-and-place of multiple objects to a given destination under payload constraints for the mobile manipulator.
Solution: Dijkstra + a modified TSP for navigation; RRT* for manipulation.
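A compact sketch of the navigation half of that pipeline: Dijkstra on an occupancy grid provides travel costs, and a greedy nearest-neighbour ordering stands in for the modified TSP (payload constraints and the RRT* manipulation planning are omitted). The grid and locations are illustrative.

```python
import heapq

def dijkstra(grid, start):
    """Shortest-path costs from `start` over a 4-connected occupancy grid (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def greedy_pick_order(grid, base, objects):
    """Visit order for pick locations: repeatedly go to the cheapest reachable one.
    A stand-in for the modified TSP; payload constraints would prune this ordering."""
    order, current, remaining = [], base, list(objects)
    while remaining:
        dist = dijkstra(grid, current)
        nxt = min(remaining, key=lambda o: dist.get(o, float("inf")))
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order

# Tiny example: 5x5 grid with a wall, base at (0, 0), three object cells.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(greedy_pick_order(grid, (0, 0), [(4, 4), (2, 2), (0, 4)]))
```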