ICRA Workshop on

Language and Semantics of Task and Motion Planning

Key Facts

TAMP Example

Workshop Date
May 23 @ ICRA 2025
Workshop Location
Atlanta, GA, USA
Abstract Submission
OpenReview
Submission Deadline
April 11, 2025 (AoE)
Acceptance Notification
April 18, 2025
Organizers
Neil T. Dantam, Wil Thomason, Sihui Li, Lydia E. Kavraki
Contents
  1. Description
  2. Invited Speakers
  3. Schedule
  4. Call for Posters


Description

Generating complex robot behavior requires reasoning simultaneously about both discrete, task-level logical decisions and the feasibility of continuous motions. This problem is often referred to as Task and Motion Planning (TAMP). Research into TAMP approaches has proceeded for well over a decade, yielding key results and algorithms. Yet a prerequisite for such approaches is specification of the planning problem; beyond classic issues of knowledge engineering, varying assumptions in TAMP formulations require different aspects of the specification, e.g., grasp information, placements, and manipulation strategies. Addressing such concerns of specification and semantics will enhance fair comparison, progress, and application in TAMP.

Audience

This workshop is intended for researchers from academia, industry and government working in AI planning, motion planning, and controls who are interested in improving the autonomy of robots for complex, real-world tasks such as mobile manipulation.

The two main target audiences for the workshop are: (1) researchers actively investigating new methods, future trends, and open questions in task and motion planning; and (2) people interested in learning about the current state of the art in order to incorporate these methods into their own projects. We strongly encourage the participation of graduate students.

This workshop follows the previous workshops on varying aspects of Task and Motion Planning in 2016-2020 and 2023 from the same organizers. Past workshops received excellent participation with approximately 50 attendees, 10 presented posters, and engaging group discussions. During the previous workshops, numerous questions were raised regarding handling uncertainty, real-world environments, semantics, and learning. This workshop aims to address these needs with specific focus on language and semantics of TAMP.

Prior Workshops

This workshop continues the series of TAMP workshops held by the same organizers in 2016-2020 and 2023.



Invited Speakers

Jeannette Bohg (Stanford University, USA)


(Tentative) Title: Unpacking Failure Modes of Learned Policies and their Impact on Long-Horizon Plans

Bio: Jeannette Bohg is a Professor of Robotics at Stanford University and directs the Interactive Perception and Robot Learning Lab. Her research generally explores two questions: What are the underlying principles of robust sensorimotor coordination in humans, and how can we implement these principles on robots? Research on this topic lies at the intersection of robotics, machine learning, and computer vision, and her lab focuses specifically on robotic grasping and manipulation.

Sylvain Calinon (Idiap Research Institute, Switzerland)


(Tentative) Title: Human-guided learning of robot manipulation skills with discrete and continuous variables

Bio: Sylvain Calinon is a Senior Research Scientist at the Idiap Research Institute, with research interests covering robot learning, optimal control, geometrical approaches, and human-robot collaboration. He is also a lecturer at the Ecole Polytechnique Fédérale de Lausanne (EPFL).

Sylvain's work focuses on human-centered robotics applications in which robots can acquire new skills from only a few demonstrations and interactions. This requires the development of models that can efficiently exploit the structure and geometry of the acquired data, optimal control techniques that can exploit the learned task variations and coordination patterns, and intuitive interfaces to acquire meaningful demonstrations.

Sonia Chernova (Georgia Tech, USA)


Bio: Sonia Chernova is an Associate Professor in the School of Interactive Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning (RAIL) lab, which works on developing robots that can effectively operate in human environments. Her research interests span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI.

Animesh Garg (Georgia Tech, USA)


Bio: Animesh Garg is the Stephen Fleming Early Career Professor in Computer Science at Georgia Tech, where he leads the People, AI, and Robotics (PAIR) research group and is core faculty in the Robotics and Machine Learning programs. Animesh is also a Senior Researcher at Nvidia Research. He is on leave from the Department of Computer Science at the University of Toronto and a CIFAR Chair position at the Vector Institute.

Garg earned his M.S. in Computer Science and Ph.D. in Operations Research from UC Berkeley, where he worked with Ken Goldberg at Berkeley AI Research (BAIR) and collaborated closely with Pieter Abbeel, Alper Atamturk, and UCSF Radiation Oncology. He was later a postdoc at the Stanford AI Lab with Fei-Fei Li and Silvio Savarese.

Garg's research vision is to build the Algorithmic Foundations for Generalizable Autonomy, enabling robots to acquire skills at both cognitive and dexterous levels and to seamlessly interact and collaborate with humans in novel environments. His group focuses on understanding structured inductive biases and causality in a quest for general-purpose embodied intelligence that learns from imprecise information and achieves the flexibility and efficiency of human reasoning.

Edward Johns (Imperial College London, UK)


(Tentative) Title: A Computer Vision Perspective on Task and Motion Planning

Bio: Dr Edward Johns is the Director of the Robot Learning Lab at Imperial College London, and a Senior Lecturer (Associate Professor). His work lies at the intersection of robotics, computer vision, and machine learning, with a particular focus on efficient learning of vision-based robot manipulation skills.

He received a BA and MEng in Electrical and Information Engineering from Cambridge University, and a PhD in visual place recognition from Imperial College. Following his PhD, he was a post-doc at UCL, before returning to Imperial College as a founding member of the Dyson Robotics Lab with Prof Andrew Davison, where he led the robot manipulation team.

Beomjoon Kim (KAIST, Republic of Korea)


(Tentative) Title: Leveraging Prior Knowledge And Experience In Task And Motion Planning

Bio: Beomjoon Kim is an Assistant Professor in the Graduate School of AI at KAIST. He directs the Intelligent Mobile-Manipulation (IM^2) Lab. He is interested in creating general-purpose mobile manipulation robots that can efficiently make decisions in complex environments.

Previously, Beomjoon obtained his Ph.D. in computer science from MIT CSAIL, his M.Sc. in computer science from McGill University, and his BMath in computer science and statistics from the University of Waterloo.

Zak Kingston (Purdue University, USA)


(Tentative) Title: How Hard Could It Be? Understanding Computational Effort and Constraints for Efficient TAMP

Bio: Zak Kingston is an Assistant Professor in the Department of Computer Science at Purdue University, leading the Computational Motion, Manipulation, and Autonomy (CoMMA) Lab.

Previously, Zak was a postdoctoral research associate and lab manager for the Kavraki Lab at Rice University under the direction of Dr. Lydia Kavraki. During his Ph.D., he was funded by a NASA Space Technology Research Fellowship and worked with the Robonaut 2 team at NASA JSC. His research interests are in robot motion planning and long-horizon robot autonomy, with a focus on manipulation planning, planning with constraints, and hardware and software for planning.

Nick Roy (MIT, USA)


Bio: Nick Roy is a Professor of Aeronautics and Astronautics at MIT. His research and teaching interests are in robotics, machine learning, autonomous systems, planning and reasoning, human-computer interaction, and micro air vehicles. He directs the Robust Robotics Group at MIT, is the Director of Engineering for MIT Quest for Intelligence, and founder of Project Wing at Google X.

Marc Toussaint (TU Berlin, Germany)


Bio: Marc Toussaint is a professor in the area of AI & Robotics at TU Berlin, lead of the Learning & Intelligent Systems Lab at the EECS Faculty, and member of the Science Of Intelligence cluster of excellence. His research interests are in the intersection of AI and robotics, namely in using machine learning, optimization, and AI reasoning to tackle fundamental problems in robotics. The integration of learning and reasoning, of data-based and model-based decision making is of particular interest to him. Concrete research topics he works on are models and algorithms for physical reasoning, task-and-motion planning (logic-geometric programming), learning heuristics, the planning-as-inference paradigm, algorithms and methods for robotic building construction, and learning to transfer model-based strategies to reactive and adaptive real-world behavior. To this end, he builds on methodologies from optimization, reinforcement learning, machine learning, search, planning, and probabilistic inference. Some of his earlier work was on evolutionary algorithms (esp. evolving genetic representations and compression), relational reinforcement learning, and active learning. His physics diploma research was on gravity theory as a gauge theory.

Danfei Xu (Georgia Tech, USA)


(Tentative) Title: Generative Task and Motion Planning

Bio: Danfei Xu is an Assistant Professor at the School of Interactive Computing at Georgia Tech and a (part-time) Research Scientist at NVIDIA AI. He works at the intersection of Robotics and Machine Learning.

Danfei received his Ph.D. in CS from Stanford University advised by Fei-Fei Li and Silvio Savarese (2015-2021) and B.S. from Columbia University (SEAS'15). He has spent time at DeepMind UK (2019), ZOOX (2017), Autodesk Research (2016), CMU RI (2014), and Columbia Robotics Lab (2013-2015).



Tentative Schedule

Time Event
09:00-09:10 Workshop Introduction
09:10-09:35 Nick Roy
09:35-10:00 Zak Kingston: How Hard Could It Be? Understanding Computational Effort and Constraints for Efficient TAMP
10:00-10:25 Sylvain Calinon: Human-guided learning of robot manipulation skills with discrete and continuous variables
10:25-10:40 Coffee Break
10:40-11:05 Jeannette Bohg: Unpacking Failure Modes of Learned Policies and their Impact on Long-Horizon Plans
11:05-11:30 Beomjoon Kim: Leveraging Prior Knowledge And Experience In Task And Motion Planning
11:30-11:55 Sonia Chernova
11:55-12:10 Poster Lightning Talks
12:10-13:30 Lunch
13:30-14:30 Poster Session
14:30-14:55 Marc Toussaint
14:55-15:20 Danfei Xu: Generative Task and Motion Planning
15:20-15:35 Coffee break
15:35-16:00 Animesh Garg
16:00-16:25 Edward Johns: A Computer Vision Perspective on Task and Motion Planning
16:25-16:30 Best Poster Award
16:30-17:30 Panel Discussion and Wrap-Up



Call for Posters

We invite researchers to present their latest results and ongoing or recently submitted work during our poster session. We particularly encourage submissions on topics related to the language and semantics of TAMP.

Accepted abstracts will be posted on this website but are non-archival.

Best Poster Award and Travel Support

Courtesy of a gift from Symbotic, we are pleased to offer a best student poster award of $500 and a limited number of travel grants to student poster presenters.

Submission Format

Please submit an abstract in the ICRA format on OpenReview by April 11, 2025 (AoE). The length limit is 2 pages, not including references. Accepted contributions will be notified by April 18, 2025.

Presentation Format

Authors of accepted abstracts will give a short (1-3 minute) "lightning talk" at the workshop and present their work during the poster session.


Acknowledgments

This workshop is supported with a gift from Symbotic.
