Generating complex robot behavior requires reasoning simultaneously about discrete, task-level logical decisions and the feasibility of continuous motions. This problem is often referred to as Task and Motion Planning (TAMP). Research into TAMP approaches has proceeded for well over a decade, yielding key results and algorithms. Yet a prerequisite for such approaches is the specification of the planning problem; beyond classic issues of knowledge engineering, differing assumptions across TAMP formulations demand different aspects of the specification, e.g., grasp information, placements, and manipulation strategies. Addressing such concerns of specification and semantics will enable fair comparison, progress, and application in TAMP.
This workshop is intended for researchers from academia, industry, and government working in AI planning, motion planning, and controls who are interested in improving the autonomy of robots for complex, real-world tasks such as mobile manipulation.
The two main target audiences for the workshop are: (1) researchers actively investigating new methods, future trends, and open questions in task and motion planning, and (2) those interested in learning about the current state of the art in order to incorporate these methods into their own projects. We strongly encourage the participation of graduate students.
This workshop follows the previous workshops on varying aspects of Task and Motion Planning held in 2016-2020 and 2023 by the same organizers. Past editions drew excellent participation, with approximately 50 attendees, 10 presented posters, and engaging group discussions. During those workshops, numerous questions were raised regarding handling uncertainty, real-world environments, semantics, and learning. This edition aims to address these needs with a specific focus on the language and semantics of TAMP.
Title: Unpacking Failure Modes of Learned Policies and Their Impact on Long-Horizon Plans
Bio: Jeannette Bohg is a Professor of Robotics at Stanford University, where she directs the Interactive Perception and Robot Learning Lab. Her research generally explores two questions: What are the underlying principles of robust sensorimotor coordination in humans, and how can we implement these principles on robots? Research on this topic lies at the intersection of Robotics, Machine Learning, and Computer Vision, and her lab focuses specifically on robotic grasping and manipulation.
Title: Human-Guided Learning of Robot Manipulation Skills with Discrete and Continuous Variables
Bio: Sylvain Calinon is a Senior Research Scientist at the Idiap Research Institute, with research interests covering robot learning, optimal control, geometrical approaches, and human-robot collaboration. He is also a lecturer at the École Polytechnique Fédérale de Lausanne (EPFL).
Sylvain's work focuses on human-centered robotics applications in which robots can acquire new skills from only a few demonstrations and interactions. This requires models that can efficiently exploit the structure and geometry of the acquired data, optimal control techniques that can exploit the learned task variations and coordination patterns, and intuitive interfaces for acquiring meaningful demonstrations.
Bio: Sonia Chernova is an Associate Professor in the School of Interactive Computing at Georgia Tech. She directs the Robot Autonomy and Interactive Learning (RAIL) lab, which works on developing robots that can operate effectively in human environments. Her research interests span robotics and artificial intelligence, including semantic reasoning, adaptive autonomy, human-robot interaction, and explainable AI.
Bio: Animesh Garg is the Stephen Fleming Early Career Professor in Computer Science at Georgia Tech, where he leads the People, AI, and Robotics (PAIR) research group and is core faculty in the Robotics and Machine Learning programs. He is also a Senior Researcher at NVIDIA Research, and is on leave from the Department of Computer Science at the University of Toronto and a CIFAR Chair position at the Vector Institute.
Garg earned his M.S. in Computer Science and Ph.D. in Operations Research from UC Berkeley, where he worked with Ken Goldberg at Berkeley AI Research (BAIR) and collaborated closely with Pieter Abbeel, Alper Atamturk, and UCSF Radiation Oncology. He was later a postdoc at the Stanford AI Lab with Fei-Fei Li and Silvio Savarese.
Garg's research vision is to build the Algorithmic Foundations for Generalizable Autonomy, enabling robots to acquire skills at both the cognitive and dexterous levels and to seamlessly interact and collaborate with humans in novel environments. His group focuses on understanding structured inductive biases and causality, on a quest for general-purpose embodied intelligence that learns from imprecise information and achieves the flexibility and efficiency of human reasoning.
Title: A Computer Vision Perspective on Task and Motion Planning
Bio: Dr Edward Johns is the Director of the Robot Learning Lab at Imperial College London and a Senior Lecturer (Associate Professor). His work lies at the intersection of robotics, computer vision, and machine learning, with a particular focus on efficient learning of vision-based robot manipulation skills.
He received a BA and MEng in Electrical and Information Engineering from Cambridge University, and a PhD in visual place recognition from Imperial College. Following his PhD, he was a postdoc at UCL before returning to Imperial College as a founding member of the Dyson Robotics Lab with Prof Andrew Davison, where he led the robot manipulation team.
Title: Object Representation for Manipulation
Bio: Beomjoon Kim is an Assistant Professor in the Graduate School of AI at KAIST, where he directs the Intelligent Mobile-Manipulation (IM^2) Lab. He is interested in creating general-purpose mobile manipulation robots that can efficiently make decisions in complex environments.
Previously, Beomjoon obtained his Ph.D. in computer science from MIT CSAIL, an MSc in computer science from McGill University, and a BMath in computer science and statistics from the University of Waterloo.
Title: Speaking Pl{ai/a}nly: Language to Constraints and Back Again
Bio: Zak Kingston is an Assistant Professor in the Department of Computer Science at Purdue University, leading the Computational Motion, Manipulation, and Autonomy (CoMMA) Lab.
Previously, Zak was a postdoctoral research associate and lab manager for the Kavraki Lab at Rice University under the direction of Dr. Lydia Kavraki. During his Ph.D., he was funded by a NASA Space Technology Research Fellowship and worked with the Robonaut 2 team at NASA JSC. His research interests are in robot motion planning and long-horizon robot autonomy, with a focus on manipulation planning, planning with constraints, and hardware and software for planning.
Bio: Nick Roy is a Professor of Aeronautics and Astronautics at MIT. His research and teaching interests are in robotics, machine learning, autonomous systems, planning and reasoning, human-computer interaction, and micro air vehicles. He directs the Robust Robotics Group at MIT, is the Director of Engineering for the MIT Quest for Intelligence, and was the founder of Project Wing at Google X.
| Time | Event |
|---|---|
| 09:00-09:10 | Workshop Introduction |
| 09:10-09:35 | Beomjoon Kim: Object Representation for Manipulation |
| 09:35-10:00 | Jeannette Bohg: Unpacking Failure Modes of Learned Policies and Their Impact on Long-Horizon Plans |
| 10:00-10:25 | Sylvain Calinon: Human-Guided Learning of Robot Manipulation Skills with Discrete and Continuous Variables |
| 10:25-10:40 | Coffee Break |
| 10:40-11:05 | Danfei Xu: Generative Task and Motion Planning |
| 11:05-11:30 | Nick Roy |
| 11:30-11:55 | Sonia Chernova |
| 11:55-12:20 | Poster Lightning Talks |
| 12:20-14:00 | Lunch |
| 14:00-14:25 | Animesh Garg |
| 14:25-14:50 | Zak Kingston: Speaking Pl{ai/a}nly: Language to Constraints and Back Again |
| 14:50-16:00 | Poster Session and Coffee Break |
| 16:00-16:25 | Edward Johns: A Computer Vision Perspective on Task and Motion Planning |
| 16:25-16:30 | Best Poster Award |
| 16:30-17:30 | Panel Discussion and Wrap-Up |
Courtesy of a gift from Symbotic, we are pleased to offer a $500 best student poster award and a limited number of travel grants to student poster presenters.
Authors of accepted abstracts will give a short (90-second) "lightning talk" at the workshop and present their work during the poster session.
This workshop is supported with a gift from Symbotic.