Welcome to my Research page! I’m currently a first-year Ph.D. student in the LIS Group at MIT CSAIL. I’m officially advised by Leslie Kaelbling and Tomás Lozano-Pérez, though I frequently collaborate with Dylan Hadfield-Menell, Josh Tenenbaum, and many other wonderful people within CSAIL’s Embodied Intelligence Initiative. I’m extremely grateful for support from the NSF Graduate Research Fellowship.

Resume | CV | Google Scholar | Bio | GitHub | Twitter

Research Areas

I’m broadly interested in enabling robots to operate robustly in long-horizon, multi-task settings so that they can accomplish tasks like multi-object manipulation, cooking, or even performing household chores. To this end, I’m interested in combining classical AI planning and reasoning approaches with modern machine learning techniques. My research draws on ideas from reinforcement learning, task and motion planning (TAMP), continual learning, and neurosymbolic AI.

Publications

Conference Papers

Just Label What You Need: Fine-Grained Active Selection for Perception and Prediction through Partially Labeled Scenes

Nishanth Kumar*, Sean Segal*, Sergio Casas, Mengye Ren, Jingkang Wang, Raquel Urtasun
Conference on Robot Learning (CoRL) poster, 2021.
OpenReview / arXiv / poster

Introduces a fine-grained active learning method that selects only the most informative parts of partially labeled scenes for annotation, reducing labeling cost for perception and prediction models.
[* denotes equal contribution. Work was done while at Uber ATG]
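
To make the selection idea concrete, here is a toy, NumPy-only sketch of fine-grained active selection: each unlabeled sub-region of a scene is scored by the model’s predictive uncertainty, and only the top-scoring sub-regions are sent to annotators. The entropy-based scoring and all names here are illustrative assumptions, not the paper’s actual method.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy (in nats) of a categorical distribution."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def select_subregions_to_label(predictions, budget):
    """Pick the `budget` sub-regions whose current model predictions are most
    uncertain; only these get sent to annotators, leaving the rest of each
    scene unlabeled (hence partially labeled scenes)."""
    scores = {region_id: entropy(p) for region_id, p in predictions.items()}
    return sorted(scores, key=scores.get, reverse=True)[:budget]

# Toy usage: three sub-regions with softmax outputs from some detector.
preds = {
    "scene0/region3": np.array([0.5, 0.5]),    # maximally uncertain
    "scene0/region7": np.array([0.95, 0.05]),  # confident
    "scene1/region1": np.array([0.7, 0.3]),
}
print(select_subregions_to_label(preds, budget=2))  # the two most uncertain regions
```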

Building Plannable Representations with Mixed Reality

Eric Rosen, Nishanth Kumar, Nakul Gopalan, Daniel Ullman, George Konidaris, Stefanie Tellex
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
paper / video

Introduces Action-Oriented Semantic Maps (AOSMs) and a mixed-reality system for specifying them, which robots can then use to perform a wide variety of household tasks.

Multi-Object Search using Object-Oriented POMDPs

Arthur Wandzel, Yoonseon Oh, Michael Fishman, Nishanth Kumar, Lawson L.S. Wong, Stefanie Tellex
IEEE International Conference on Robotics and Automation (ICRA), 2019.
paper / video

Introduces the Object-Oriented Partially Observable Monte-Carlo Planning (OO-POMCP) algorithm for efficiently solving Object-Oriented Partially Observable Markov Decision Processes (OO-POMDPs), and shows how it enables a robot to efficiently find multiple objects in a home environment.
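
The object-oriented belief factorization at the heart of OO-POMDPs can be illustrated with a small sketch: the belief over each object’s location is maintained as its own particle set, so the joint belief factors by object and stays tractable as more objects are added. This is a purely illustrative toy (with made-up rooms and a crude observation update), not OO-POMCP itself.

```python
import random
from collections import Counter

ROOMS = ["kitchen", "bedroom", "office"]

def init_belief(objects, n_particles=100):
    """One independent particle set per target object: the joint belief over
    all object locations factors by object."""
    return {obj: [random.choice(ROOMS) for _ in range(n_particles)] for obj in objects}

def update_object_belief(particles, searched_room, found):
    """Crude filtering update: if we searched a room and did not find the
    object, drop particles that placed it there; if we found it, collapse."""
    if found:
        return [searched_room] * len(particles)
    survivors = [p for p in particles if p != searched_room] or particles
    return [random.choice(survivors) for _ in range(len(particles))]  # resample

random.seed(0)
belief = init_belief(["mug", "keys"])
belief["mug"] = update_object_belief(belief["mug"], "kitchen", found=False)
print({obj: Counter(parts).most_common(1)[0] for obj, parts in belief.items()})
```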

Workshop Papers and Extended Abstracts

Inventing relational state and action abstractions for effective and efficient bilevel planning

Tom Silver*, Rohan Chitnis*, Nishanth Kumar, Willie McClinton, Tomás Lozano-Pérez, Leslie Kaelbling, Joshua Tenenbaum
RLDM, 2022. (Spotlight Talk)
arXiv / code

Introduces a new, program-synthesis-inspired approach for learning neuro-symbolic, relational state and action abstractions from demonstrations. The abstractions are explicitly optimized for effective and efficient bilevel planning.
[* denotes equal contribution]
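
For readers unfamiliar with the term, "bilevel planning" here refers to an outer search over abstract (symbolic) plans interleaved with an inner search that refines each abstract action into continuous parameters, backtracking when refinement fails. The loop below is only a schematic sketch under that reading, with made-up names; it is not the approach from the paper, which learns the abstractions such a planner uses.

```python
import random

def bilevel_plan(candidate_abstract_plans, refine, max_tries=20):
    """Toy bilevel planning loop: try each abstract plan in turn, sampling
    continuous parameters for every abstract action; fall back to the next
    abstract plan if any action cannot be refined."""
    for abstract_plan in candidate_abstract_plans:
        refined_plan, ok = [], True
        for abstract_action in abstract_plan:
            for _ in range(max_tries):
                params = refine(abstract_action)
                if params is not None:
                    refined_plan.append((abstract_action, params))
                    break
            else:
                ok = False  # this abstract action could not be refined
                break
        if ok:
            return refined_plan
    return None

# Toy refinement: "Place" only succeeds for samples in a narrow range.
random.seed(0)

def refine(action):
    x = random.uniform(0, 1)
    return x if (action != "Place" or x > 0.8) else None

print(bilevel_plan([["Pick", "Place"]], refine))
```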

Task Scoping for Efficient Planning in Open Worlds

Nishanth Kumar*, Michael Fishman*, Natasha Danas, Michael Littman, Stefanie Tellex, George Konidaris
AAAI Conference on Artificial Intelligence, Student Workshop, 2020.
paper

Introduces high-level ideas for how large Markov Decision Processes (MDPs) might be efficiently pruned to include only the states and actions relevant to a particular reward function. This paper is subsumed by our arXiv preprint on task scoping.
[* denotes equal contribution]

Knowledge Acquisition for Robots through Mixed Reality Head-Mounted Displays

Nishanth Kumar*, Eric Rosen*, Stefanie Tellex
The Second International Workshop on Virtual, Augmented and Mixed Reality for Human Robot Interaction, 2019.
paper

Sketches high-level ideas for how a mixed reality system might let users convey task information to a robot so it can perform pick-and-place and other household tasks. This work is subsumed by our AOSM work.
[* denotes equal contribution]

Preprints

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Nishanth Kumar*, Willie McClinton*, Tom Silver*, Rohan Chitnis*, Tomás Lozano-Pérez, Leslie Kaelbling, Joshua Tenenbaum
arXiv, 2022.
paper / video / code

Introduces "operators with ignore effects": a generalization of the standard STRIPS operators that enables operators to decline to model particular effects and thus better serve as a high-level action abstraction that is incorrect but useful for efficient bilevel planning. Also introduces an algorithm to learn such operators from a handful of demonstrations.
[* denotes equal contribution]
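
A minimal sketch of what such an operator might look like as a data structure, assuming abstract states are sets of ground atoms encoded as strings; the class, predicate names, and the way ignored atoms are handled are illustrative assumptions, not the paper’s implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OperatorWithIgnoreEffects:
    """A STRIPS-style operator that may decline to model some predicates."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset
    ignore_effects: frozenset = field(default_factory=frozenset)  # predicate names left unmodeled

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        # Atoms whose predicate is in ignore_effects are simply dropped:
        # the operator makes no prediction about them.
        kept = {atom for atom in state if atom.split("(")[0] not in self.ignore_effects}
        return frozenset((kept - self.delete_effects) | self.add_effects)

stack = OperatorWithIgnoreEffects(
    name="Stack",
    preconditions=frozenset({"Holding(b1)", "Clear(b2)"}),
    add_effects=frozenset({"On(b1, b2)", "HandEmpty()"}),
    delete_effects=frozenset({"Holding(b1)", "Clear(b2)"}),
    ignore_effects=frozenset({"NextTo"}),  # e.g., don't predict robot-adjacency atoms
)
state = frozenset({"Holding(b1)", "Clear(b2)", "NextTo(robot, table)"})
print(stack.apply(state) if stack.applicable(state) else "not applicable")
```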

PGMax: Factor Graphs for Discrete Probabilistic Graphical Models and Loopy Belief Propagation in JAX

Guangyao Zhou*, Nishanth Kumar*, Miguel Lázaro-Gredilla, Shrinu Kushagra, Dileep George
arXiv, 2022.
arXiv / blog post / code

Introduces a new JAX-based framework that aims to make it easy to build and run inference on probabilistic graphical models (PGMs).
[* denotes equal contribution]
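
Rather than guess at PGMax’s actual API, here is a generic, NumPy-only illustration of what loopy belief propagation computes, on the smallest graph where it is genuinely "loopy": three binary variables connected in a cycle by pairwise factors, with synchronous sum-product message updates and approximate marginals at the end. All potentials are random and purely illustrative.

```python
import numpy as np

np.random.seed(0)
n_vars, n_states = 3, 2
edges = [(0, 1), (1, 2), (2, 0)]                       # one pairwise factor per edge
factors = {e: np.random.rand(n_states, n_states) + 0.1 for e in edges}
unaries = np.random.rand(n_vars, n_states) + 0.1

# msgs[(edge, v)] = current message from that edge's factor to variable v.
msgs = {(e, v): np.ones(n_states) for e in edges for v in e}

for _ in range(50):  # synchronous sum-product updates
    new_msgs = {}
    for (i, j), table in factors.items():
        # Product of the unary and all incoming messages except the one from this factor.
        in_i = unaries[i] * np.prod([msgs[(e, i)] for e in edges if i in e and e != (i, j)], axis=0)
        in_j = unaries[j] * np.prod([msgs[(e, j)] for e in edges if j in e and e != (i, j)], axis=0)
        new_msgs[((i, j), j)] = table.T @ in_i          # marginalize out x_i
        new_msgs[((i, j), i)] = table @ in_j            # marginalize out x_j
    msgs = {k: m / m.sum() for k, m in new_msgs.items()}  # normalize for stability

beliefs = np.stack([unaries[v] * np.prod([msgs[(e, v)] for e in edges if v in e], axis=0)
                    for v in range(n_vars)])
print(beliefs / beliefs.sum(axis=1, keepdims=True))     # approximate marginals
```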

Task Scoping: Building Goal-Specific Abstractions for Planning in Complex Domains

Nishanth Kumar*, Michael Fishman*, Natasha Danas, Michael Littman, Stefanie Tellex, George Konidaris
arXiv, 2020.
arXiv

Introduces a method that efficiently prunes large classical planning problems to exclude states and actions irrelevant to a particular goal, so that agents can solve very large, 'open-scope' domains capable of supporting multiple goals.
[* denotes equal contribution]
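
The general flavor of goal-specific pruning can be illustrated with a simplified backward relevance analysis over STRIPS-style operators (a rough sketch with a made-up domain, not the paper’s actual algorithm): start from the goal atoms and keep only operators that can transitively contribute to them; everything else is pruned before planning.

```python
def scope_operators(operators, goal_atoms):
    """Keep operators whose add effects (transitively) support the goal,
    folding their preconditions back into the set of relevant atoms."""
    relevant_atoms = set(goal_atoms)
    kept = set()
    changed = True
    while changed:
        changed = False
        for name, (preconds, adds, deletes) in operators.items():
            if name not in kept and adds & relevant_atoms:
                kept.add(name)
                relevant_atoms |= preconds
                changed = True
    return kept

# Toy domain: making coffee; the TV operator is irrelevant to the goal and gets pruned.
operators = {
    "boil_water":  ({"have_kettle"}, {"hot_water"}, set()),
    "brew_coffee": ({"hot_water", "have_grounds"}, {"coffee_ready"}, {"hot_water"}),
    "turn_on_tv":  ({"have_remote"}, {"tv_on"}, set()),
}
print(scope_operators(operators, goal_atoms={"coffee_ready"}))  # keeps boil_water, brew_coffee
```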

Learning Deep Parameterized Skills from Demonstration for Re-targetable Visuomotor Control

Nishanth Kumar*, Jonathan Chang*, Sean Hastings, Aaron Gokaslan, Diego Romeres, Devesh Jha, Daniel Nikovski, George Konidaris, Stefanie Tellex
arXiv, 2020.
arXiv

Shows how the generalization capabilities of Behavior Cloning (BC) can be improved by learning a policy conditioned on a parameter that distinguishes different goals (e.g., different buttons to press in a grid). Includes extensive experiments in simulation and on two different robots.
[* denotes equal contribution. Work was done in collaboration with Mitsubishi Electric Research Laboratories]
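
The core idea can be sketched in a few lines of NumPy, under the assumption that the goal parameter is a one-hot goal index (e.g., which button to press) simply concatenated to the observation before a small policy network; the architecture, dimensions, and names below are illustrative, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(in_dim, hidden, out_dim):
    """Randomly initialized two-layer policy network."""
    return {"W1": rng.normal(0, 0.1, (in_dim, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0, 0.1, (hidden, out_dim)), "b2": np.zeros(out_dim)}

def policy(params, obs, goal_param):
    """One policy for many goals: the goal parameter is concatenated to the
    observation, so the same weights produce goal-specific behavior."""
    x = np.concatenate([obs, goal_param])
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]  # predicted action

obs_dim, n_goals, act_dim = 8, 3, 2
params = init_mlp(obs_dim + n_goals, hidden=32, out_dim=act_dim)
obs = rng.normal(size=obs_dim)
goal = np.eye(n_goals)[1]                   # e.g., "press button 1"
print(policy(params, obs, goal))
# Behavior cloning would fit `params` by regressing predicted actions onto
# demonstrated actions across (observation, goal) pairs.
```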

Theses and Misc. Publications

You Only Need What’s in Scope: Generating Task-Specific Abstractions for Efficient AI Planning

Nishanth Kumar
Undergraduate Honors Thesis, Brown University, 2021.
paper

Presents a detailed, thesis-style description of my work on Task Scoping. This work is largely subsumed by our 'Task Scoping' preprint.

Talks

Awards

Industry Experience and Research Collaborations

Teaching

Selected Press Coverage