Task and motion planning operates in a combined discrete and continuous space to find a sequence of high-level, discrete actions and corresponding low-level, continuous paths to go from an initial state to a goal state.
@inbook{dantam2020taskmotion, author = {Neil T. Dantam}, title = {Task and Motion Planning}, booktitle = {Encyclopedia of Robotics}, editor = {Ang, Marcelo H. and Khatib, Oussama and Siciliano, Bruno}, publisher = {Springer Berlin Heidelberg}, year = {2020} }
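The interleaving of discrete action search and continuous path computation described above can be sketched as a planning loop that feeds motion-level failures back to the task level. This is an illustrative sketch, not the API of any specific planner; all names (`task_planner`, `motion_planner`, `blocked`) are hypothetical.

```python
def task_motion_plan(task_planner, motion_planner, start, goal, max_rounds=100):
    """Interleave discrete task planning with continuous motion planning.

    task_planner(start, goal, blocked) -> candidate action sequence or None
    motion_planner(action)             -> continuous path or None
    Names and signatures here are illustrative, not a specific planner's API.
    """
    blocked = set()  # action sequences found to be motion-infeasible
    for _ in range(max_rounds):
        actions = task_planner(start, goal, blocked)
        if actions is None:
            return None  # no task plan consistent with feasibility information
        paths = []
        for action in actions:
            path = motion_planner(action)
            if path is None:
                blocked.add(tuple(actions))  # feed failure back to task level
                break
            paths.append(path)
        else:
            return list(zip(actions, paths))  # fully realized task-motion plan
    return None
```

The loop returns a plan only when every discrete action has a corresponding continuous path, which is the coupling that distinguishes task and motion planning from running the two planners in isolation.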
Multiple robots can effectively acquire information from an environment and transmit it to a static base station. However, communication may not be possible everywhere in the environment, so robots must coordinate when and where to rendezvous with disconnected teammates. Such coordination presents challenges since knowing how long it will take to explore requires information about the environment, which is typically what the robots are acquiring. We propose a method to estimate disconnected robots' state and use observed lack of communication to refine our state estimation, allowing robots to plan for rendezvous or parallel exploration by predicting where teammates are likely to be. We show up to 87% improved performance for exploration tasks against a baseline approach that performs no such predictions.
@article{schack2024silence, author = {Matthew A. Schack and John G. Rogers and Neil T. Dantam}, title = {The Sound of Silence: Exploiting Information from the Lack of Communication}, journal = {IEEE Robotics and Automation Letters}, year = {2024} }
Achieving a complete motion planner that guarantees a plan or infeasibility proof in finite time is challenging, especially in high-dimensional spaces. Previous efforts have introduced asymptotically complete motion planners capable of providing a plan or infeasibility proof given sufficient time. Such a planner trains a manifold on configuration space samples and triangulates the manifold to verify that it lies in the obstacle region of the configuration space. In this paper, we extend the construction of infeasibility proofs to higher dimensions by adapting Coxeter triangulation’s manifold tracing and cell construction procedures to concurrently triangulate the configuration space codimension-one manifold, and we apply a local elastic update to fix the triangulation when part of it is in the free space. We perform experiments on 4-DOF and 5-DOF serial manipulators. Infeasibility proofs in 4D are two orders of magnitude faster than previous results. Infeasibility proofs in 5D complete within minutes.
@article{li2023scaling, author = {Sihui Li and Neil T. Dantam}, title = {Scaling Infeasibility Proofs via Concurrent, Codimension-one, Locally-updated Coxeter Triangulation}, number = {12}, journal = {IEEE Robotics and Automation Letters}, volume = {8}, pages = {8303--8310}, year = {2023} }
We present a learning-based approach to prove infeasibility of kinematic motion planning problems. Sampling-based motion planners are effective in high-dimensional spaces but are only probabilistically complete. Consequently, these planners cannot provide a definite answer if no plan exists, which is important for high-level scenarios, such as task-motion planning. We apply data generated during multi-directional sampling-based planning (such as PRM) to a machine learning approach to construct an infeasibility proof. An infeasibility proof is a closed manifold in the obstacle region of the configuration space that separates the start and goal into disconnected components of the free configuration space. We train the manifold using common machine learning techniques and then triangulate the manifold into a polytope to prove containment in the obstacle region. Under assumptions about the hyper-parameters and robustness of configuration space optimization, the output is either an infeasibility proof or a motion plan in the limit. We demonstrate proof construction for up to 4-DOF configuration spaces. A large part of the algorithm is parallelizable, which offers potential to address higher dimensional configuration spaces.
@article{li2022sampling, author = {Sihui Li and Neil T. Dantam}, title = {A Sampling and Learning Framework to Prove Motion Planning Infeasibility}, number = {10}, journal = {The International Journal of Robotics Research}, volume = {42}, pages = {938--956}, year = {2023} }
In many exploration scenarios, it is important for robots to efficiently explore new areas and constantly communicate results. Mobile robots inherently couple motion and network topology due to the effects of position on wireless propagation, e.g., distance or obstacles between network nodes. Information gain is a useful measure of exploration. However, finding paths that maximize information gain while preserving communication is challenging due to the non-Markovian nature of information gain, discontinuities in network topology, and zero-reward local optima. We address these challenges through an optimization and sampling-based algorithm. Our algorithm scales to 50% more robots and obtains 2-5 times more information relative to path cost compared to baseline planning approaches.
@article{schack2021optimizing, author = {Matthew A. Schack and John G. Rogers and Qi Han and Neil T. Dantam}, title = {Optimizing Non-Markovian Information Gain under Physics-based Communication Constraints}, number = {3}, journal = {IEEE Robotics and Automation Letters}, volume = {6}, pages = {4813--4819}, year = {2021} }
Modern approaches for robot kinematics employ the product of exponentials formulation, represented using homogeneous transformation matrices. Quaternions over dual numbers are an established alternative representation; however, their use presents certain challenges: the dual quaternion exponential and logarithm contain a zero-angle singularity, and many common operations are less efficient using dual quaternions than with matrices. We present a new derivation of the dual quaternion exponential and logarithm that removes the singularity, we show an implicit representation of dual quaternions offers analytical and empirical efficiency advantages compared with both matrices and explicit dual quaternions, and we derive efficient dual quaternion forms of differential and inverse position kinematics. Analytically, implicit dual quaternions are more compact and require fewer arithmetic instructions for common operations, including chaining and exponentials. Empirically, we demonstrate a 30--40% speedup on forward kinematics and a 300--500% speedup on inverse position kinematics. This work relates dual quaternions with modern exponential coordinates and demonstrates that dual quaternions are a robust and efficient representation for robot kinematics.
@article{dantam2020robust, author = {Neil T. Dantam}, title = {Robust and efficient forward, differential, and inverse kinematics using dual quaternions}, number = {10-11}, journal = {The International Journal of Robotics Research}, volume = {40}, pages = {1087-1105}, year = {2021} }
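The dual quaternion operations discussed in this abstract reduce to a handful of Hamilton products. The sketch below illustrates chaining transformations and recovering the translation, using the common convention that the dual part is half the translation quaternion times the rotation quaternion; it is a minimal illustration of the representation, not the paper's optimized implementation.

```python
import math

def qmul(a, b):
    # Hamilton product; quaternions stored as (x, y, z, w)
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def dq_from_rt(r, t):
    # Dual quaternion (real, dual) from rotation quaternion r and translation t,
    # with dual part d = (1/2) t * r, t treated as a pure quaternion.
    tx, ty, tz = t
    d = qmul((0.5*tx, 0.5*ty, 0.5*tz, 0.0), r)
    return (r, d)

def dq_mul(A, B):
    # Chaining: (Ar, Ad)(Br, Bd) = (Ar*Br, Ar*Bd + Ad*Br)
    (ar, ad), (br, bd) = A, B
    real = qmul(ar, br)
    dual = tuple(x + y for x, y in zip(qmul(ar, bd), qmul(ad, br)))
    return (real, dual)

def dq_translation(D):
    # Recover translation: t = 2 d * conj(r)
    (rx, ry, rz, rw), d = D
    x, y, z, _ = qmul(d, (-rx, -ry, -rz, rw))
    return (2*x, 2*y, 2*z)
```

Chaining two translations sums them, and chaining a rotation with a translation rotates the translation into the first frame, matching the behavior of homogeneous transformation matrices without storing a 4x4 array.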
Task and motion planning (TMP) combines discrete search and continuous motion planning. Earlier work has shown that to efficiently find a task-motion plan, the discrete search can leverage information about the continuous geometry. However, incorporating continuous elements into discrete planners presents challenges. We improve the scalability of TMP algorithms in tabletop scenarios with a fixed robot by introducing geometric knowledge into a constraint-based task planner in a robust way. The key idea is to learn a classifier for feasible motions and to use this classifier as a heuristic to order the search for a task-motion plan. The learned heuristic guides the search towards feasible motions and thus reduces the total number of motion planning attempts. A critical property of our approach is allowing robust planning in diverse scenes. We train the classifier on minimal exemplar scenes and then use principled approximations to apply the classifier to complex scenarios in a way that minimizes the effect of errors. By combining learning with planning, our heuristic yields order-of-magnitude run time improvements in diverse tabletop scenarios. Even when classification errors are present, properly biasing our heuristic ensures we will have little computational penalty.
@article{wells2019learning, author = {Andrew Wells and Neil T. Dantam and Anshumali Shrivastava and Lydia E. Kavraki}, title = {Learning Feasibility for Task and Motion Planning in Tabletop Environments}, number = {2}, journal = {IEEE Robotics and Automation Letters}, volume = {4}, pages = {1255--1262}, year = {2019} }
We present the first platform-independent evaluation method for Task and Motion Planning (TAMP). Previously, various problems have been used to test individual planners for specific aspects of TAMP. However, no common set of metrics, formats, and problems has been accepted by the community. We propose a set of benchmark problems covering the challenging aspects of TAMP and a planner-independent specification format for these problems. Our objective is to better evaluate and compare TAMP planners, foster communication and progress within the field, and lay a foundation to better understand this class of planning problems.
@article{lagriffoul2018benchmark, author = {Fabien Lagriffoul and Neil T. Dantam and Caelan Garrett and Aliakbar Akbari and Siddharth Srivastava and Lydia E. Kavraki}, title = {Platform-Independent Benchmarks for Task and Motion Planning}, number = {4}, journal = {IEEE Robotics and Automation Letters}, volume = {3}, pages = {3765--3772}, year = {2018} }
We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-the-art, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future.
@article{dantam2018tmp, author = {Neil T. Dantam and Zachary K. Kingston and Swarat Chaudhuri and Lydia E. Kavraki}, title = {An Incremental Constraint-Based Framework for Task and Motion Planning}, number = {10}, journal = {The International Journal of Robotics Research}, volume = {37}, pages = {1134--1151}, year = {2018} }
Robots require novel reasoning systems to achieve complex objectives in new environments. Everyday activities in the physical world couple discrete and continuous reasoning. For example, to set the table in Fig. 1, the robot must make discrete decisions about which objects to pick and the order in which to do so, and execute these decisions by computing continuous motions to reach objects or desired locations. Robotics has traditionally treated these issues in isolation. Reasoning about discrete events is referred to as task planning while reasoning about and computing continuous motions is the realm of motion planning. However, several recent works have shown that separating task planning from motion planning---that is, first finding a series of actions that will later be executed through continuous motion---is problematic; for example, the next discrete action may specify picking an object, but there may be no continuous motion for the robot to bring its hand to a configuration that can actually grasp the object to pick it up. Instead, Task--Motion Planning (TMP) tightly couples task planning and motion planning, producing a sequence of steps that can actually be executed by a real robot to bring the world from an initial to a final state. This article provides an introduction to TMP and discusses the implementation and use of an open-source TMP framework that is adaptable to new robots, scenarios, and algorithms.
@article{dantam2018tmkit, author = {Neil T. Dantam and Swarat Chaudhuri and Lydia E. Kavraki}, title = {The Task Motion Kit}, number = {3}, journal = {Robotics and Automation Magazine}, volume = {25}, pages = {61--70}, publisher = {IEEE}, year = {2018} }
Robot software combines the challenges of general purpose and real-time software, requiring complex logic and bounded resource use. Physical safety, particularly for dynamic systems such as humanoid robots, depends on correct software. General purpose computation has converged on unix-like operating systems -- standardized as POSIX, the Portable Operating System Interface -- for devices from cellular phones to supercomputers. The modular, multi-process design typical of POSIX applications is effective for building complex and reliable software. Absent from POSIX, however, is an interprocess communication mechanism that prioritizes newer data as typically desired for control of physical systems. We address this need in the Ach communication library which provides suitable semantics and performance for real-time robot control. Although initially designed for humanoid robots, Ach has broader applicability to complex mechatronic devices -- humanoid and otherwise -- that require real-time coupling of sensors, control, planning, and actuation. The initial user space implementation of Ach was limited in the ability to receive data from multiple sources. We remove this limitation by implementing Ach as a Linux kernel module, enabling Ach's high-performance and latest-message-favored semantics within conventional POSIX communication pipelines. We discuss how these POSIX interfaces and design principles apply to robot software, and we present a case study using the Ach kernel module for communication on the Baxter robot.
@article{dantam2016unix, author = {Neil T. Dantam and Kim Bøndergaard and Mattias A. Johansson and Tobias Furuholm and Lydia E. Kavraki}, title = {Unix Philosophy and the Real World: Control Software for Humanoid Robots}, journal = {Frontiers in Robotics and Artificial Intelligence, Research Topic on Software Architectures for Humanoid Robotics}, volume = {3}, year = {2016} }
Correct real-time software is vital for robots in safety-critical roles such as service and disaster response. These systems depend on software for locomotion, navigation, manipulation, and even seemingly innocuous tasks such as safely regulating battery voltage. A multi-process software design increases robustness by isolating errors to a single process, allowing the rest of the system to continue operating. This approach also assists with modularity and concurrency. For real-time tasks such as dynamic balance and force control of manipulators, it is critical to communicate the latest data sample with minimum latency. There are many communication approaches intended for both general purpose and real-time needs. Typical methods focus on reliable communication or network-transparency and accept a trade-off of increased message latency or the potential to discard newer data. By focusing instead on the specific case of real-time communication on a single host, we reduce communication latency and guarantee access to the latest sample. We present a new Interprocess Communication (IPC) library, Ach, which addresses this need, and discuss its application for real-time, multiprocess control on three humanoid robots.
@article{dantam2015multiprocess, author = {Neil T. Dantam and Daniel M. Lofaro and Ayonga Hereid and Paul Oh and Aaron Ames and Mike Stilman}, title = {The Ach IPC Library}, number = {1}, journal = {Robotics and Automation Magazine}, volume = {22}, pages = {76--85}, publisher = {IEEE}, year = {2015} }
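The latest-message-favored semantics described above can be illustrated with a single-slot channel in which writers overwrite and readers always see the newest sample. This is a simplified single-process sketch of the idea, not Ach's actual interprocess implementation or API.

```python
import threading

class LatestChannel:
    """Single-slot channel: writers overwrite, readers always get the
    newest sample. Illustrates latest-message-favored semantics in the
    spirit of Ach; this is NOT the Ach API, just a sketch."""

    def __init__(self):
        self._lock = threading.Condition()
        self._seq = 0          # sequence number of the newest sample
        self._data = None

    def put(self, data):
        with self._lock:
            self._data = data
            self._seq += 1     # older unread samples are simply superseded
            self._lock.notify_all()

    def get(self, last_seq=0, timeout=None):
        """Return (seq, data) for the newest sample with seq > last_seq,
        waiting up to `timeout` seconds for a fresh one."""
        with self._lock:
            if self._seq <= last_seq:
                self._lock.wait(timeout)
            if self._seq <= last_seq:
                return None    # no newer sample arrived in time
            return self._seq, self._data
```

Contrast this with a FIFO queue: a slow reader of a queue falls behind and receives stale samples, while here a slow reader pays no backlog penalty and always controls against the latest state.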
We present the Motion Grammar: an approach to represent and verify robot control policies based on Context-Free Grammars. The production rules of the grammar represent a top-down task decomposition of robot behavior. The terminal symbols of this language represent sensor readings that are parsed in real-time. Efficient algorithms for context-free parsing guarantee that online parsing is computationally tractable. We analyze verification properties and language constraints of this linguistic modeling approach, show a linguistic basis that unifies several existing methods, and demonstrate effectiveness through experiments on a 14-DOF manipulator interacting with 32 objects (chess pieces) and an unpredictable human adversary. We provide many of the algorithms discussed as Open Source, permissively licensed software.
@article{dantam2013motion, author = {Neil T. Dantam and Mike Stilman}, title = {The Motion Grammar: Analysis of a Linguistic Method for Robot Control}, number = {3}, journal = {Transactions on Robotics}, volume = {29}, pages = {704--718}, publisher = {IEEE}, year = {2013} }
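The idea of parsing sensor readings as terminal symbols of a context-free grammar can be illustrated with a toy recursive-descent parser. The grammar and token names below are invented for illustration (a grasp behavior with a retry rule) and do not reproduce the paper's chess demonstration.

```python
# A toy context-free "motion grammar": nonterminals expand to sequences of
# tokens or nonterminals; terminals are discretized sensor readings.
GRAMMAR = {
    "task":  [["approach", "grasp", "lift"]],
    "grasp": [["contact", "close"],
              ["miss", "reopen", "grasp"]],  # recursive retry rule
}

def parse(symbol, tokens, i=0):
    """Try to parse `symbol` starting at tokens[i]; return the new index
    on success or None on failure."""
    if symbol not in GRAMMAR:               # terminal: match a sensor token
        return i + 1 if i < len(tokens) and tokens[i] == symbol else None
    for production in GRAMMAR[symbol]:      # try each production in order
        j = i
        for sym in production:
            j = parse(sym, tokens, j)
            if j is None:
                break
        else:
            return j
    return None

def accepts(tokens):
    # The token stream is a valid behavior iff the whole stream parses.
    return parse("task", tokens) == len(tokens)
```

A production like the retry rule encodes a recovery behavior directly in the grammar, which is the kind of structure the Motion Grammar verifies and parses online.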
Robot teams are an effective tool for communicating data in unstructured environments (e.g., data collection). Since robot teams can rearrange themselves, they can ferry data through physical motion rather than only wireless transmissions, potentially increasing network throughput. We construct a Robot Network Graph that represents throughput due to both physical motion and wireless communication, and use the Robot Network Graph to find a polynomial time upper bound on network throughput. We combine the upper bound with an optimization formulation for maximizing network throughput, producing a bounded-optimal solution. We analyze the performance of our bounded-optimal solution and show our bound is ~3x tighter than a bound derived from the maximum flow across a graph.
@inproceedings{schack2024throughput, author = {Matthew A. Schack and John G. Rogers and Neil T. Dantam}, title = {Sneakernet and Station Wagons to Robots: Bounds for Robotic Network Throughput}, booktitle = {Algorithmic Foundations of Robotics XVI}, year = {2024} }
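For a fixed (motion-free) network, a throughput upper bound can be computed as a maximum flow over the capacity graph. The sketch below uses Edmonds-Karp to compute that simpler max-flow bound, the kind of bound the abstract reports tightening by roughly 3x with the Robot Network Graph; it is not the paper's algorithm itself, and the graph encoding is hypothetical.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow. `capacity` is a dict-of-dicts of edge
    capacities, used here as a generic throughput upper bound on a static
    network graph (no robot motion modeled)."""
    # Build residual capacities, including zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path: flow is maximal
        # Walk back from the sink, find the bottleneck, update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

Because this bound ignores data ferrying by physical motion, it can substantially over- or under-estimate what a mobile team achieves, which motivates the paper's motion-aware Robot Network Graph bound.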
Many robot tasks may involve achieving visibility (such as to observe areas of interest) or maintaining occlusion (such as to avoid disturbing other agents). We formulate such sequential visibility tasks generally for 3D worlds, termed the Park Rangers' Problem, and we develop an approach to solve such tasks that offers completeness under certain requirements. Our approach constructs an abstraction based on an exact test for visibility between areas and multiple tests and relaxations for the nonconvex problem of determining occlusions between areas. We apply a constraint-based planning approach and iteratively refine the abstraction. Finally, we evaluate the approach on simulated visibility scenarios.
@inproceedings{phillips2024park, author = {John Phillips and Sihui Li and Neil T. Dantam}, title = {Park Rangers' Problem: Motion Planning for Sequential Visibility Requirements}, booktitle = {Algorithmic Foundations of Robotics XVI}, year = {2024} }
Many robot tasks impose constraints on the workspace. For example, a robot may need to move a container without spilling its contents or open a door following the doorknob’s arc. Such constraints may induce narrow volumes in the configuration space, traditionally a challenge for sampling-based methods, and further cause infeasibility. We extend sample-driven connectivity learning (SDCL), a robust approach for planning with narrow passages, to develop a sampling ensemble for workspace constraints. In particular, the ensemble combines SDCL, projection via dual quaternion optimization, and random sampling. These complementary sampling approaches support efficient and robust planning under workspace constraints. Further, this framework offers the ability to determine infeasibility under workspace constraints, which is unaddressed by previous constrained planning methods.
@inproceedings{li2024sampling, author = {Sihui Li and Matthew A. Schack and Aakriti Upadhyay and Neil T. Dantam}, title = {A Sampling Ensemble for Asymptotically Complete Motion Planning with Volume-Reducing Workspace Constraints}, booktitle = {International Conference on Intelligent Robots and Systems}, publisher = {IEEE/RSJ}, year = {2024} }
While maximizing line-of-sight coverage of specific regions or agents in the environment is a well-explored path planning objective, the converse problem of minimizing exposure to the entire environment during navigation is especially interesting in the context of minimizing detection risk. This work demonstrates that minimizing line-of-sight exposure to the environment is non-Markovian and cannot be solved efficiently and optimally with traditional path planning. The optimality gap of the graph-search algorithm A* and the trade-offs in optimality vs. computation time of several approximating heuristics are explored. Finally, the concept of equal-exposure corridors, which afford polynomial-time determination of all paths that do not increase exposure, is presented.
@inproceedings{hamzezadeh2024exposure, author = {Eugene Hamzezadeh and John G. Rogers and Neil T. Dantam and Andrew Petruska}, title = {Exposure Conscious Path Planning for Equal Exposure Corridors}, booktitle = {Conference on Automation Science and Engineering}, year = {2024} }
Cyber-physical systems (CPS) such as robots and self-driving cars pose strict physical requirements to avoid failure. The scheduling choices impact these requirements. This presents a challenge: How do we find efficient schedules for CPS with heterogeneous processing units, such that the schedules are resource-bounded to meet the physical requirements? For example, tasks that require significant computation time in a self-driving car can delay reaction, decreasing available braking time. Heterogeneous computing systems — containing CPUs, GPUs, and other types of domain-specific accelerators — offer effective capabilities to reduce computation time or energy consumption and expand such operating conditions. However, doing so under physical requirements presents several challenges that existing scheduling solutions fail to address. We propose the creation of a structured system, the Constrained Autonomous Workload Scheduler (CAuWS). This structured and system-agnostic approach determines scheduling decisions with direct relations to the environment and differs from current ad hoc approaches which either lack heterogeneity, system generality, or this consideration of the physical world. By using a representation language (AuWL), timed Petri nets, and mixed-integer linear programming, CAuWS offers novel capabilities to represent and schedule many types of CPS workloads, real-world constraints, and optimization criteria, creating optimal assignment of heterogeneous processing units to tasks. We demonstrate the utility of CAuWS with a drone simulation under multiple physical constraints. The autonomous computation for the drone is made up of commonly used workloads (i.e., SLAM, and vision networks) and is run on a popular heterogeneous system-on-chip, NVIDIA Xavier AGX.
@inproceedings{mcgowen2024scheduling, author = {Justin McGowen and Ismet Dagli and Neil T. Dantam and Mehmet E. Belviranli}, title = {Scheduling for Cyber-Physical Systems with Heterogeneous Processing Units under Real-World Constraints}, booktitle = {International Conference on Supercomputing}, pages = {298--311}, publisher = {ACM}, year = {2024} }
Cyber-physical systems (CPS) such as robots and self-driving cars pose strict physical requirements to avoid failure. Scheduling choices impact these requirements. This presents a challenge: how do we find efficient schedules for CPS with heterogeneous processing units, such that the schedules are resource-bounded to meet the physical requirements? We propose the creation of a structured system, the Constrained Autonomous Workload Scheduler, which determines scheduling decisions with direct relations to the environment. By using a representation language (AuWL), Timed Petri nets, and mixed-integer linear programming, our scheme offers novel capabilities to represent and schedule many types of CPS workloads, real world constraints, and optimization criteria.
@inproceedings{mcgowen2024constraint, author = {Justin McGowen and Ismet Dagli and Neil T. Dantam and Mehmet E. Belviranli}, title = {Constraint-Aware Resource Management for Cyber-Physical Systems}, booktitle = {2024 Design, Automation and Test in Europe Conference and Exhibition}, pages = {1--2}, year = {2024} }
Using robots to collect data is an effective way to obtain information from the environment and communicate it to a static base station. Furthermore, robots have the capability to communicate with one another, potentially decreasing the time for data to reach the base station. We present a Mixed Integer Linear Program that reasons about discrete routing choices, continuous robot paths, and their effect on the latency of the data collection task. We analyze our formulation, discuss optimization challenges inherent to the data collection problem, and propose a factored formulation that finds optimal answers more efficiently. We show our work is able to find paths that reduce latency by up to 101% compared to treating all robots independently in our tested scenarios.
@inproceedings{schack2023communication, author = {Matthew A. Schack and John G. Rogers and Qi Han and Neil T. Dantam}, title = {Robot Team Data Collection with Anywhere Communication}, booktitle = {International Conference on Intelligent Robots and Systems}, publisher = {IEEE/RSJ}, year = {2023} }
Robot networks are necessary for a variety of monitoring and surveillance applications where robots must communicate observations. This work considers robot networks that must operate in an area affected by potential communication jamming and excessive radio frequency noise. Specifically, we look at deploying a reinforcement robot to strengthen an existing network of robots that are already performing a monitoring task. The robot's objective is to find a location that maximizes the network's signal-to-noise ratio. We present a Gaussian process approach to predict areas of excessive radio frequency noise, enabling the robot to avoid these areas and relay more data. We evaluate our algorithm's performance in physical robot experiments using a Clearpath Jackal robot and a USRP software-defined radio to broadcast a jamming signal. Our results show major improvements in networking metrics over a baseline and demonstrate the need for jamming-aware robot planning.
@inproceedings{diller2023communication, author = {Jonathan Diller and Neil T. Dantam and John G. Rogers and Qi Han}, title = {Communication Jamming-Aware Robot Path Adaptation}, booktitle = {International Conference on Distributed Computing in Smart Systems and the Internet of Things}, pages = {768--773}, publisher = {IEEE}, year = {2023} }
In this paper, we explore how robots can properly explain failures during navigation tasks with privacy concerns. We present an integrated robotics approach to generate visual failure explanations, by combining a language-capable cognitive architecture (for recognizing intent behind commands), an object- and location-based context recognition system (for identifying the locations of people and classifying the context in which those people are situated) and an infeasibility proof-based motion planner (for explaining planning failures on the basis of contextually mediated privacy concerns). The behavior of this integrated system is validated using a series of experiments in a simulated medical environment.
@inproceedings{li2023failure, author = {Sihui Li and Sriram Siva and Terran Mott and Tom Williams and Hao Zhang and Neil T. Dantam}, title = {Failure Explanation in Privacy-Sensitive Contexts: An Integrated Systems Approach}, booktitle = {International Conference on Robot and Human Interactive Communication}, publisher = {IEEE}, year = {2023} }
Sampling-based motion planning works well in many cases but is less effective if the configuration space has narrow passages. In this paper, we propose a learning-based strategy to sample in these narrow passages, which improves overall planning time. Our algorithm first learns from the configuration space planning graphs and then uses the learned information to effectively generate narrow passage samples. We perform experiments in various 6-DOF and 7-DOF scenes, and our algorithm offers order of magnitude performance improvements compared to baseline planners in some of these scenarios.
@inproceedings{li2023SDCL, author = {Sihui Li and Neil T. Dantam}, title = {Sample-Driven Connectivity Learning for Motion Planning in Narrow Passages}, booktitle = {International Conference on Robotics and Automation}, pages = {5681--5687}, publisher = {IEEE}, year = {2023} }
Proving motion planning infeasibility is an important part of a complete motion planner. Common approaches for high-dimensional motion planning are only probabilistically complete. Previously, we presented an algorithm to construct infeasibility proofs by applying machine learning to sampled configurations from a bidirectional sampling-based planner. In this work, we prove that the learned manifold converges to an infeasibility proof exponentially. Combining prior approaches for sampling-based planning and our converging infeasibility proofs, we propose the term asymptotic completeness to describe the property of returning a plan or infeasibility proof in the limit. We compare the empirical convergence of different sampling strategies to validate our analysis.
@inproceedings{li2022exponential, author = {Sihui Li and Neil T. Dantam}, title = {Exponential Convergence of Infeasibility Proofs for Kinematic Motion Planning}, booktitle = {Algorithmic Foundations of Robotics XV}, pages = {294--311}, publisher = {Springer International Publishing}, year = {2023} }
Robots that use natural language in collaborative tasks must refer to objects in their environment. Recent work has shown the utility of the linguistic theory of the Givenness Hierarchy (GH) in generating appropriate referring forms. But before referring expression generation (REG), collaborative robots must determine the content and structure of a sequence of utterances, a task known as document planning in the natural language generation (NLG) community. This problem presents additional challenges for robots in situated contexts, where described objects change both physically and in the minds of their interlocutors. In this work, we consider how robots can "think ahead" about the objects they must refer to and how to refer to them, sequencing object references to form a coherent, easy to follow chain. Specifically, we leverage GH to enable robots to plan their utterances in a way that keeps objects at a high cognitive status, which enables use of concise, anaphoric referring forms. We encode these linguistic insights as a mixed integer program (MIP) within a planning context, formulating constraints to concisely and efficiently capture GH-theoretic cognitive properties. We demonstrate that this GH-informed planner generates sequences of utterances with high inter-sentential coherence, which we argue should enable substantially more efficient and natural human-robot dialogue.
@inproceedings{spevak2022givenness, author = {Kevin Spevak and Zhao Han and Tom Williams and Neil T. Dantam}, title = {Givenness Hierarchy Informed Optimal Sentence Planning for Situated Human-Robot Interaction}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {6109--6115}, publisher = {IEEE/RSJ}, year = {2022} }
Acceptance of social robots in human-robot collaborative environments depends on the robots' sensitivity to human moral and social norms. Robot behavior that violates norms may decrease trust and lead human interactants to blame the robot and view it negatively. Hence, for long-term acceptance, social robots need to detect possible norm violations in their action plans and refuse to perform such plans. This paper integrates the Distributed, Integrated, Affect, Reflection, Cognition (DIARC) robot architecture (implemented in the Agent Development Environment (ADE)) with a novel place recognition module and a norm-aware task planner to achieve context-sensitive moral reasoning. This will allow the robot to reject inappropriate commands and comply with context-sensitive norms. In a validation scenario, our results show that the robot would not comply with a human command to violate a privacy norm in a private context.
@inproceedings{jackson2021integrated, author = {Ryan Blake Jackson and Sihui Li and Santosh Balajee Banisetty and Sriram Siva and Hao Zhang and Neil T. Dantam and Tom Williams}, title = {An Integrated Approach to Context-Sensitive Moral Cognition in Robot Cognitive Architectures}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {1911--1918}, publisher = {IEEE/RSJ}, year = {2021} }
Exploring robots may fail due to environmental hazards. Thus, robots need to account for the possibility of failure to plan the best exploration paths. Optimizing expected utility enables robots to find plans that balance achievable reward with the inherent risks of exploration. Moreover, when robots rendezvous and communicate to exchange observations, they increase the probability that at least one robot is able to return with the map. Optimal exploration is NP-hard, so we apply a constraint-based approach to enable highly-engineered solution techniques. We model exploration under the possibility of robot failure and communication constraints as an integer linear program and a generalization of the Vehicle Routing Problem. Empirically, we show that for several scenarios, this formulation produces paths within 50% of a theoretical optimum and achieves twice as much reward as a baseline greedy approach.
@inproceedings{schack2021disconnection, author = {Matthew A. Schack and John G. Rogers and Qi Han and Neil T. Dantam}, title = {Optimization-Based Robot Team Exploration Considering Attrition and Communication Constraints}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {5864--5871}, publisher = {IEEE/RSJ}, year = {2021} }
We present a learning-based approach to prove infeasibility of kinematic motion planning problems. Sampling-based motion planners are effective in high-dimensional spaces but are only probabilistically complete. Consequently, these planners cannot provide a definite answer if no plan exists, which is important for high-level scenarios, such as task-motion planning. We propose a combination of bidirectional sampling-based planning (such as RRT-connect) and machine learning to construct an infeasibility proof alongside the two search trees. An infeasibility proof is a closed manifold in the obstacle region of the configuration space that separates the start and goal into disconnected components of the free configuration space. We train the manifold using common machine learning techniques and then triangulate the manifold into a polytope to prove containment in the obstacle region. Under assumptions about learning hyper-parameters and robustness of configuration space optimization, the output is either an infeasibility proof or a motion plan. We demonstrate proof construction for 3-DOF and 4-DOF manipulators and show improvement over previous algorithms.
@inproceedings{li2021learning, author = {Sihui Li and Neil T. Dantam}, title = {Learning Proofs of Motion Planning Infeasibility}, booktitle = {Robotics: Science and Systems}, year = {2021} }
We present a general approach for constructing proofs of motion planning infeasibility. Effective high-dimensional motion planners, such as sampling-based methods, are limited to probabilistic completeness, so when no plan exists, these planners either do not terminate or can only run until a timeout. We address this completeness challenge by augmenting a sampling-based planner with a method to create an infeasibility proof in conjunction with building the search tree. An infeasibility proof is a closed polytope that separates the start and goal into disconnected components of the free configuration space. We identify possible facets of the polytope via a nonlinear optimization procedure using sampled points in the non-free configuration space. We identify the set of facets forming the separating polytope via a linear constraint satisfaction problem. This proof construction is valid for general (i.e., non-Cartesian) configuration spaces. We demonstrate this approach on the low-dimensional Jaco manipulator and discuss engineering approaches to scale to higher dimensional spaces.
@inproceedings{li2020towards, author = {Sihui Li and Neil T. Dantam}, title = {Towards General Infeasibility Proofs in Motion Planning}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {6704--6710}, publisher = {IEEE/RSJ}, year = {2020} }
Modern approaches for robot kinematics employ the product of exponentials formulation, represented using homogeneous transformation matrices. Quaternions over dual numbers are an established alternative representation; however, their use presents certain challenges: the dual quaternion exponential and logarithm contain a zero-angle singularity, and many common operations are less efficient using dual quaternions than with matrices. We present a new derivation of the dual quaternion exponential and logarithm that removes the singularity, and we show an implicit representation of dual quaternions offers analytical and empirical efficiency advantages compared to both matrices and explicit dual quaternions. Analytically, implicit dual quaternions are more compact and require fewer arithmetic instructions for common operations, including chaining and exponentials. Empirically, we demonstrate a 25%-40% speedup to compute the forward kinematics of multiple robots. This work offers a practical connection between dual quaternions and modern exponential coordinates, demonstrating that a quaternion-based approach provides a more efficient alternative to matrices for robot kinematics.
@inproceedings{dantam2018practical, author = {Neil T. Dantam}, title = {Practical Exponential Coordinates using Implicit Dual Quaternions}, booktitle = {Algorithmic Foundations of Robotics XIII}, pages = {639--655}, publisher = {Springer International Publishing}, year = {2020} }
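The zero-angle singularity mentioned in the abstract arises because the quaternion exponential divides by the rotation angle. A minimal Python sketch of the standard remedy, evaluating sinc by its Taylor series near zero (our own illustration, not the paper's implementation or its implicit representation):

```python
import numpy as np

def quat_exp(v):
    # Exponential of a pure quaternion v = (x, y, z) -> unit quaternion (w, x, y, z).
    phi = np.linalg.norm(v)
    # Near phi = 0, sin(phi)/phi is evaluated via its Taylor series,
    # avoiding the division-by-zero at the identity rotation.
    if phi < 1e-6:
        s = 1.0 - phi**2 / 6.0 + phi**4 / 120.0   # sinc(phi)
    else:
        s = np.sin(phi) / phi
    return np.concatenate(([np.cos(phi)], s * v))
```

The same series-expansion trick extends to the dual part of the dual quaternion exponential, which is where the paper's derivation removes the singularity.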
When humans interact with each other, they often make use of deictic gestures such as pointing to help pick out targets of interest to their conversation. In the field of Human-Robot Interaction, research has repeatedly demonstrated the utility of enabling robots to use such gestures as well. Recent work in augmented, mixed, and virtual reality stands to enable enormous advances in robot deixis, both by allowing robots to gesture in ways that were not previously feasible, and by enabling gesture on robotic platforms and environmental contexts in which gesture was not previously feasible. In this paper, we summarize our own recent work on using augmented, mixed, and virtual-reality techniques to advance the state-of-the-art of robot-generated deixis.
@inproceedings{williams2018augmented, author = {Tom Williams and Nhan Tran and Josh Rands and Neil T. Dantam}, title = {Augmented, Mixed, and Virtual Reality Enabling of Robot Deixis}, booktitle = {Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation}, editor = {Chen, Jessie Y.C. and Fragomeni, Gino}, pages = {257--275}, year = {2018} }
We present a new algorithm for task and motion planning (TMP) and discuss the requirements and abstractions necessary to obtain robust solutions for TMP in general. Our Iteratively Deepened Task and Motion Planning (IDTMP) method is probabilistically-complete and offers improved performance and generality compared to a similar, state-of-the-art, probabilistically-complete planner. The key idea of IDTMP is to leverage incremental constraint solving to efficiently add and remove constraints on motion feasibility at the task level. We validate IDTMP on a physical manipulator and evaluate scalability on scenarios with many objects and long plans, showing order-of-magnitude gains compared to the benchmark planner and a four-times self-comparison speedup from our extensions. Finally, in addition to describing a new method for TMP and its implementation on a physical robot, we also put forward requirements and abstractions for the development of similar planners in the future.
@inproceedings{dantam2016tmp, author = {Neil T. Dantam and Zachary K. Kingston and Swarat Chaudhuri and Lydia E. Kavraki}, title = {Incremental Task and Motion Planning: A Constraint-Based Approach}, booktitle = {Robotics: Science and Systems}, year = {2016} }
We present a novel and scalable policy synthesis approach for robots. Rather than producing single-path plans for a static environment, we consider changing environments with uncontrollable agents, where the robot needs a policy to respond correctly over the infinite-horizon interaction with the environment. Our approach operates on task and motion domains, and combines actions over discrete states with continuous, collision-free paths. We synthesize a task and motion policy by iteratively generating a candidate policy and verifying its correctness. For efficient policy generation, we use grammars for potential policies to limit the search space and apply domain-specific heuristics to generalize verification failures, providing stricter constraints on policy candidates. For efficient policy verification, we construct compact, symbolic constraints for valid policies and employ a Satisfiability Modulo Theories (SMT) solver to check the validity of these constraints. Furthermore, the SMT solver enables quantitative specifications such as energy limits. The results show that our approach offers better scalability compared to a state-of-the-art policy synthesis tool in the tested benchmarks and demonstrate an order-of-magnitude speedup from our heuristics for the tested mobile manipulation domain.
@inproceedings{wang2016task, author = {Yue Wang and Neil T. Dantam and Swarat Chaudhuri and Lydia E. Kavraki}, title = {Task and Motion Policy Synthesis as Liveness Games}, booktitle = {International Conference on Automated Planning and Scheduling}, publisher = {AAAI}, year = {2016} }
We present a method for Cartesian workspace control of a robot manipulator that enforces joint-level acceleration, velocity, and position constraints using linear optimization. This method is robust to kinematic singularities. On redundant manipulators, we avoid poor configurations near joint limits by including a maximum permissible velocity term to center each joint within its limits. Compared to the baseline Jacobian damped least-squares method of workspace control, this new approach honors kinematic limits, ensuring physically realizable control inputs and providing smoother motion of the robot. We demonstrate our method on simulated redundant and non-redundant manipulators and implement it on the physical 7-degree-of-freedom Baxter manipulator. We provide our control software under a permissive license.
@inproceedings{kingston2015lc3, author = {Zachary K. Kingston and Neil T. Dantam and Lydia E. Kavraki}, title = {Kinematically Constrained Workspace Control via Linear Optimization}, booktitle = {International Conference on Humanoid Robots}, pages = {758--764}, publisher = {IEEE}, year = {2015} }
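For reference, the Jacobian damped least-squares baseline that the abstract compares against can be sketched in a few lines. This is the classic method with a naive velocity clamp added for illustration, not the paper's linear-optimization controller, which enforces acceleration, velocity, and position limits jointly:

```python
import numpy as np

def dls_step(J, x_err, damping=0.05, dq_max=1.0):
    # Damped least-squares workspace control step:
    #   dq = J^T (J J^T + k^2 I)^{-1} x_err
    # The damping term keeps the solve well-conditioned near
    # kinematic singularities.
    JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
    dq = J.T @ np.linalg.solve(JJt, x_err)
    # Naive per-joint clamp; this is exactly what the paper's
    # linear program improves upon, since clamping distorts the
    # workspace direction of motion.
    return np.clip(dq, -dq_max, dq_max)
```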
We demonstrate that millimeter-level bimanual manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register multiple cameras online, converging in seconds, by visually tracking features on the robot hands and filtering the result. Then, we compute and track continuous-velocity relative workspace trajectories for the end-effector. We demonstrate the approach using Schunk LWA4 and SDH manipulators and Logitech C920 cameras, showing accurate relative positioning for pen-capping and object hand-off tasks. Our filtering software is available under a permissive license.
@inproceedings{dantam2014multi, author = {Neil T. Dantam and Heni Ben Amor and Henrik Christensen and Mike Stilman}, title = {Online Multi-Camera Registration for Bimanual Workspace Trajectories}, booktitle = {International Conference on Humanoid Robots}, pages = {588--593}, publisher = {IEEE}, year = {2014} }
We present a new approach to generate workspace trajectories for multiple waypoints. To satisfy workspace constraints with constant-axis rotation, this method splines a given sequence of orientations, maintaining a constant rotational axis within each segment. This improves on other approaches which are point-to-point or take indirect paths. We derive this approach by blending subsequent spherical linear interpolation phases, computing interpolation parameters so that rotational velocity is continuous. We show this method first on a simulated manipulator and then perform a physical screwing task on a Schunk LWA4 robot arm. Finally, we provide permissively licensed software which implements this trajectory generation and tracking.
@inproceedings{dantam2014spherical, author = {Neil T. Dantam and Mike Stilman}, title = {Spherical Parabolic Blends for Robot Workspace Trajectories}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {3624--3629}, publisher = {IEEE}, year = {2014} }
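The constant-axis segment primitive that the parabolic blends join is spherical linear interpolation (slerp). A minimal sketch of slerp over unit quaternions, for orientation only (our illustration; the paper's contribution is the blending between successive slerp phases, not shown here):

```python
import numpy as np

def slerp(q0, q1, t):
    # Spherical linear interpolation between unit quaternions q0 and q1
    # at parameter t in [0, 1]; rotation proceeds about a constant axis.
    d = np.dot(q0, q1)
    if d < 0.0:               # take the shorter arc
        q1, d = -q1, -d
    if d > 1.0 - 1e-9:        # nearly parallel: normalized lerp is stable
        q = (1.0 - t) * q0 + t * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(d)      # angle between the quaternions
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)
```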
We demonstrate that millimeter-level manipulation accuracy can be achieved without the static camera registration typically required for visual servoing. We register the camera online, converging in seconds, by visually tracking features on the robot and filtering the result. This online registration handles cases such as perturbed camera positions, wear and tear on camera mounts, and even a camera held by a human. We implement the approach on a Schunk LWA4 manipulator and Logitech C920 camera, servoing to target and pre-grasp configurations. Our filtering software is available under a permissive license.
@inproceedings{dantam2014online, author = {Neil T. Dantam and Heni Ben Amor and Henrik Christensen and Mike Stilman}, title = {Online Camera Registration for Robot Manipulation}, booktitle = {International Symposium on Experimental Robotics}, pages = {179--194}, publisher = {Springer}, year = {2014} }
We present a software synthesis method for speed-controlled robot walking based on supervisory control of a context-free Motion Grammar. First, we use Human-Inspired control to identify parameters for stable fixed speed walking and for transitions between fixed speeds. Next, we build a Motion Grammar representing the discrete-time control for this set of speeds. Then, we synthesize C code from this grammar and generate supervisors online to achieve desired walking speeds, ensuring correctness of discrete behavior. Finally, we demonstrate this approach on the Aldebaran NAO, showing stable walking transitions with dynamically selected speeds.
@inproceedings{dantam2013correct, author = {Neil T. Dantam and Ayonga Hereid and Aaron Ames and Mike Stilman}, title = {Correct Software Synthesis for Stable Speed-Controlled Robotic Walking}, booktitle = {Robotics: Science and Systems}, year = {2013} }
Humanoid robots require greater software reliability than traditional mechatronic systems if they are to perform useful tasks in typical human-oriented environments. This paper covers a software architecture which distributes the load of computation and control tasks over multiple processes, enabling fail-safes within the software. These fail-safes ensure that unexpected crashes or latency do not produce damaging behavior in the robot. The distribution also offers benefits for future software development by making the architecture modular and extensible. Utilizing a low-latency inter-process communication protocol (Ach), processes are able to communicate with high control frequencies. The key motivation of this software architecture is to provide a practical framework for safe and reliable humanoid robot software development. The authors test and verify this framework on a HUBO2 Plus humanoid robot.
@inproceedings{grey2013architecture, author = {M.X. Grey and Neil T. Dantam and Daniel M. Lofaro and Paul Oh and Aaron Bobick and Magnus Egerstedt and Mike Stilman}, title = {Multi-Process Control Software for Humanoid Robots}, booktitle = {IEEE International Conference on Technologies for Practical Robot Applications}, pages = {190--195}, year = {2013} }
We present a new Interprocess Communication (IPC) mechanism and library. Ach is uniquely suited for coordinating drivers, controllers, and algorithms in complex robotic systems such as humanoid robots. Ach eliminates the Head-of-Line Blocking problem for applications that always require access to the newest message. Ach is efficient, robust, and formally verified. It has been tested and demonstrated on a variety of physical robotic systems, and we discuss the implementation on our humanoid robot Golem Krang. Finally, the source code for Ach is available under an Open Source permissive license.
@inproceedings{dantam2012ach, author = {Neil T. Dantam and Mike Stilman}, title = {Robust and Efficient Communication for Real-Time Multi-Process Robot Software}, booktitle = {International Conference on Humanoid Robots}, pages = {316--322}, publisher = {IEEE}, year = {2012} }
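The Head-of-Line Blocking problem that Ach eliminates occurs when a slow reader forces a queue to retain stale messages. A toy Python model of the newest-message semantics (ours, purely illustrative; the real Ach is a formally verified C library using shared-memory circular buffers):

```python
import threading

class LatestChannel:
    # Model of newest-message semantics: readers always see the most
    # recent message, and slow readers miss old messages rather than
    # blocking the writer or accumulating a backlog.
    def __init__(self):
        self._cond = threading.Condition()
        self._msg = None
        self._seq = 0   # monotonically increasing message counter

    def put(self, msg):
        with self._cond:
            self._msg = msg          # overwrite: old data is dropped
            self._seq += 1
            self._cond.notify_all()

    def get_latest(self, last_seq=0, timeout=None):
        # Returns (seq, msg) for the newest message with seq > last_seq,
        # waiting if the reader is already up to date.
        with self._cond:
            self._cond.wait_for(lambda: self._seq > last_seq, timeout)
            return self._seq, self._msg
```

A reader that fell three messages behind receives only the latest one, which is the desired behavior when messages are periodic samples of a physical process.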
We demonstrate the automatic transfer of an assembly task from human to robot. This work extends efforts showing the utility of linguistic models in verifiable robot control policies by now performing real visual analysis of human demonstrations to automatically extract a policy for the task. This method tokenizes each human demonstration into a sequence of object connection symbols, then transforms the set of sequences from all demonstrations into an automaton, which represents the task-language for assembling a desired object. Finally, we combine this assembly automaton with a kinematic model of a robot arm to reproduce the demonstrated task.
@inproceedings{dantam2012assem, author = {Neil T. Dantam and Irfan Essa and Mike Stilman}, title = {Linguistic Transfer of Human Assembly Tasks to Robots}, booktitle = {International Conference on Intelligent Robots and Systems}, pages = {237--242}, publisher = {IEEE}, year = {2012} }
This work combines semantic maps with hybrid control models, generating a direct link between action and environment models to produce a control policy for mobile manipulation in unstructured environments. First, we generate a semantic map for our environment and design a base model of robot action. Then, we combine this map and action model using the Motion Grammar Calculus to produce a combined robot-environment model. Using this combined model, we apply supervisory control to produce a policy for the manipulation task. We demonstrate this approach on a Segway RMP-200 mobile platform.
@inproceedings{dantam2012composition, author = {Neil T. Dantam and Carlos Nieto-Granda and Henrik Christensen and Mike Stilman}, title = {Linguistic Composition of Semantic Maps and Hybrid Controllers}, booktitle = {International Symposium on Experimental Robotics}, pages = {699--714}, year = {2012} }
This paper provides a method for deriving provably correct controllers for Hybrid Dynamical Systems with Context-Free discrete dynamics, nonlinear continuous dynamics, and nonlinear state partitioning. The proposed method models the system using a Context-Free Motion Grammar and specifies correct performance using a Regular language representation such as Linear Temporal Logic. The initial model is progressively rewritten via a calculus of symbolic transformation rules until it satisfies the desired specification.
@inproceedings{dantam2012calc, author = {Neil T. Dantam and Mike Stilman}, title = {The Motion Grammar Calculus for Context-Free Hybrid Systems}, booktitle = {American Control Conference}, pages = {5294--5301}, year = {2012} }
We present and analyze the Motion Grammar: a novel unified representation for task decomposition, perception, planning, and control that provides both fast online control of robots in uncertain environments and the ability to guarantee completeness and correctness. The grammar represents a policy for the task which is parsed in real-time based on perceptual input. Branches of the syntax tree form the levels of a hierarchical decomposition, and the individual robot sensor readings are given by tokens. We implement this approach in the interactive game of Yamakuzushi on a physical robot resulting in a system that repeatably competes with a human opponent in sustained game-play for the roughly six minute duration of each match.
@inproceedings{dantam2011yama, author = {Neil T. Dantam and Mike Stilman}, title = {The Motion Grammar: Linguistic Perception, Planning, and Control}, booktitle = {Robotics: Science and Systems}, year = {2011} }
We introduce the Motion Grammar, a powerful new representation for robot decision making, and validate its properties through the successful implementation of a physical human-robot game. The Motion Grammar is a formal tool for task decomposition and hybrid control in the presence of significant online uncertainty. In this paper, we describe the Motion Grammar, introduce some of the formal guarantees it can provide, and represent the entire game of human-robot chess through a single formal language. This language includes game-play, safe handling of human motion, uncertainty in piece positions, and misplaced and collapsed pieces. We demonstrate the simple and effective language formulation through experiments on a 14-DOF manipulator interacting with 32 objects (chess pieces) and an unpredictable human adversary.
@inproceedings{dantam2011chess, author = {Neil T. Dantam and Pushkar Kolhe and Mike Stilman}, title = {The Motion Grammar for Physical Human-Robot Games}, booktitle = {International Conference on Robotics and Automation}, pages = {5463--5469}, publisher = {IEEE}, year = {2011} }
This paper presents three effective manipulation strategies for wheeled, dynamically balancing robots with articulated links. By comparing these strategies through analysis, simulation, and robot experiments, we show that contact placement and body posture have a significant impact on the robot's ability to accelerate and displace environment objects. Given object geometry and friction parameters, we determine the most effective methods for utilizing wheel torque to perform non-prehensile manipulation.
@inproceedings{kolhe2010pushing, author = {Pushkar Kolhe and Neil T. Dantam and Mike Stilman}, title = {Dynamic Pushing Strategies for Dynamically Stable Mobile Manipulators}, booktitle = {International Conference on Robotics and Automation}, pages = {3745--3750}, publisher = {IEEE}, year = {2010} }
Sampling-based algorithms generate plans using samples from implicit representations of high dimensional configuration spaces. These samples contain unexploited information, which can be used to build a better understanding of the configuration space's connectivity. We aim to use these samples as data to learn explicit infeasibility proofs. Previous work has shown success in up to 4-DOF manipulation scenes. In this work, we focus on improving the learning step of the previous algorithm and show experimental results on a 5-DOF manipulator.
@inproceedings{li2022implicit, author = {Sihui Li and Neil T. Dantam}, title = {Learning Explicit Infeasibility from Implicit Configuration Space Connectivity}, booktitle = {RSS Workshop on Implicit Representations for Robotic Manipulation}, year = {2022} }
Heterogeneous (multi-accelerator) chips are becoming commonplace in many robotics applications due to their ability to accelerate multiple types of computational workloads. By effectively using these devices, robotic tasks can be performed faster or with less resource usage. However, robotic systems present a variety of requirements for scheduling on such devices, especially on mobile robots. These include both time-critical requirements based on physical constraints (e.g., computing fast enough to safely handle unexpected obstacles) and resource constraints (e.g., battery life). By considering these limitations while scheduling, it is possible to expand the safe operation area of a robot. Additionally, changing environments can modify these constraints. In planning situations, this can then impact the optimal plan, as the physical environment may impact viable schedules. Thus, situations can arise where a longer path uses fewer resources by allowing more efficient schedules. Currently, there is no general, formalized way to reason about these constraints and the tradeoffs they present. A more formal system of reasoning about these tradeoffs also allows them to be considered in higher level tasks, such as Task and Motion Planning. To solve this problem, we propose the creation of a structured system, the Constrained Autonomous Workload Scheduler (CAuWS). By using a representative language (AuWL), Timed Petri nets, and mixed-integer linear programming, CAuWS offers novel capabilities to represent heterogeneous computation alongside physical constraints, optimization criteria, and motion planning. This enables robots to optimally leverage new computational platforms. We demonstrate CAuWS with a simulation of a drone running vision tasks under multiple physical constraints, showing that CAuWS is practical for dynamic environments while obeying those constraints.
@inproceedings{mcgowen2022representations, author = {Justin McGowen and Ismet Dagli and Mehmet E. Belviranli and Neil T. Dantam}, title = {Representations for Scheduling of Heterogeneous Computation to Support Motion Planning}, booktitle = {RSS Workshop on Implicit Representations for Robotic Manipulation}, year = {2022} }
Planning under uncertainty presents numerous challenges in expressivity and scalability. We propose a new algorithm that extends the constraint-based planning framework to address state uncertainty. The approach has produced optimal plans in simulation, and we are implementing it on a physical robot.
@inproceedings{schack2019bayesianplanning, author = {Matthew A. Schack and Neil T. Dantam}, title = {Bayesian-Markov Feedback in Constraint-based Planning}, booktitle = {ICRA Workshop on Human-Robot Teaming Beyond Human Operational Speeds}, year = {2019} }
In real-world systems, failures happen. Reliably executing manipulation tasks depends on handling errors and faults. Typical approaches for logical task planning identify a single execution path, without considering faults. This eases computation, but presents a challenge when the robot must respond to unexpected conditions. The principal challenge in explicitly representing policies for logical domains is handling the state space that is potentially exponential in the number of propositions. We can address this challenge by using a linguistic, hierarchical representation for manipulation task policies. Using a language-based policy representation, we can compactly encode desired execution, potential faults, and the appropriate response. We consider two types of errors and how they may be handled. First, we consider subtask or action failure, such as a missed grasp, where a chosen action does not produce the desired effect. Second, we consider uncontrollable events, such as force limits, where some transition occurs unpredictably or unavoidably. In both of these cases, we can use a language-based approach to continue and recover.
@inproceedings{dantam2014fault, author = {Neil T. Dantam and Heni Ben Amor and Henrik Christensen and Mike Stilman}, title = {Fault Recovery in Logical Manipulation Policies}, booktitle = {Workshop on Human versus Robot Grasping and Manipulation, RSS}, year = {2014} }
Producing reliable software for robotic systems requires formal techniques to ensure correctness. Some popular approaches model the discrete dynamics and computation of the robot using finite state automata or linear temporal logic. We can represent more complicated systems and tasks, and still retain key guarantees on verifiability and runtime performance, by modeling the system instead with a context-free grammar. The challenge with a context-free model is the need for a more advanced software synthesis algorithm. We address this challenge by adapting the LL(*) parser generation algorithm, originally developed for program translation, to the domain of online robot control. We demonstrate this LL(*) parser generation implementation in the Motion Grammar Kit, permitting synthesis for robot control software for complex, hierarchical, and recursive tasks.
@inproceedings{rouhani2013software, author = {Arash Rouhani and Neil T. Dantam and Mike Stilman}, title = {Software-Synthesis via LL(*) for Context-Free Robot Programs}, booktitle = {4th Workshop on Formal Methods for Robotics and Automation, RSS}, year = {2013} }
Using both formal language and differential equations to model a robotic system, we introduce a calculus of transformation rules for the symbolic derivation of hybrid controllers. With a Context-Free Motion Grammar, we show how to test reachability between different regions of state-space and give several symbolic transformations to modify the set of event strings the system may generate. This approach lets one modify the language of the hybrid system, providing a way to change system behavior so that it satisfies linguistic constraints on correct operation.
@inproceedings{dantam2010talk, author = {Neil T. Dantam and Magnus Egerstedt and Mike Stilman}, title = {Make Your Robot Talk Correctly: Deriving Models of Hybrid System}, booktitle = {RSS Workshop on Grounding Human-Robot Dialog for Spatial Tasks}, year = {2011} }
Expanding the capabilities of robots to achieve complex objectives in new environments requires novel reasoning systems. Everyday tasks in the physical world, such as table setting, couple discrete decisions about objects and actions with geometric decisions about collision free motion. Robotics has traditionally treated these issues---task planning and motion planning---in isolation, thus missing their potential interactions. Instead, the joint approach of Task--Motion Planning (TMP) addresses this inherent coupling. Moreover, reasoning in concert about overall objectives and concrete motions enables the high-level specification of behavior, mitigating typically intensive system integration efforts required in robotics. We address the need for underlying models and principles in integrated robot manipulation with a new planning and execution framework that is adaptable to new robots, domains, and algorithms.
@techreport{dantam2016tr1602, author = {Neil T. Dantam and Swarat Chaudhuri and Lydia E. Kavraki}, title = {The Task Motion Kit}, number = {TR16-12}, institution = {Department of Computer Science, Rice University}, year = {2016} }
We describe several algorithms used for the inference of linguistic robot policies from human demonstration. First, we track and match objects using the Hungarian Algorithm. Then, we convert Regular Expressions to Nondeterministic Finite Automata (NFA) using the McNaughton-Yamada-Thompson Algorithm. Next, we use Subset Construction to convert to a Deterministic Finite Automaton. Finally, we minimize finite automata using either Hopcroft's Algorithm or Brzozowski's Algorithm.
@techreport{dantam2012algorithms, author = {Neil T. Dantam and Irfan Essa and Mike Stilman}, title = {Algorithms for Linguistic Robot Policy Inference from Demonstration of Assembly Tasks}, number = {GT-GOLEM-2012-002}, institution = {Georgia Institute of Technology}, year = {2012} }
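Of the algorithms listed above, Subset Construction is the most compact to illustrate. A minimal Python sketch (ours, not the report's implementation), omitting epsilon transitions for brevity; the `nfa` dict encoding `(state, symbol) -> set of successors` is our own convention:

```python
def subset_construction(nfa, start, alphabet):
    # Classic subset construction: the DFA's states are frozensets
    # of NFA states; each DFA transition is the union of the NFA
    # moves from every state in the current set.
    start_set = frozenset([start])
    dfa, work, seen = {}, [start_set], {start_set}
    while work:
        S = work.pop()
        for a in alphabet:
            T = frozenset(t for s in S for t in nfa.get((s, a), ()))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                work.append(T)
    return dfa   # transition table: (state-set, symbol) -> state-set
```

For example, the NFA for strings over {a, b} ending in "ab" determinizes to a three-state DFA under this construction.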
We present a new Inter-Process Communication (IPC) mechanism and library. Ach is uniquely suited for coordinating perception, control drivers, and algorithms in real-time systems that sample data from physical processes. Ach eliminates the Head-of-Line Blocking problem for applications that always require access to the newest message. Ach is efficient, robust, and formally verified. It has been tested and demonstrated on a variety of physical robotic systems. Finally, the source code for Ach is available under an Open Source BSD-style license.
@techreport{dantam2011achtech, author = {Neil T. Dantam and Mike Stilman}, title = {Ach: IPC for Real-Time Robot Control}, number = {GT-GOLEM-2011-003}, institution = {Georgia Institute of Technology}, year = {2011} }
This paper derives the equations of motion for Sparky, a mobile manipulator robot. These equations are used in the manipulation analysis, simulation, and experiments of "Dynamic Pushing Strategies for Dynamically Stable Mobile Manipulators."
@techreport{dantam2010equations, author = {Neil T. Dantam and Pushkar Kolhe and Mike Stilman}, title = {Equations of Motion for Dynamically Stable Mobile Manipulators}, number = {GT-GOLEM-2010-002}, institution = {College of Computing, Georgia Institute of Technology}, year = {2010} }
We present the Motion Grammar: a novel unified representation for task decomposition, perception, planning, and hybrid control that provides a computationally tractable way to control robots in uncertain environments with guarantees on completeness and correctness. The grammar represents a policy for the task which is parsed in real-time based on perceptual input. Branches of the syntax tree form the levels of a hierarchical decomposition, and the individual robot sensor readings are given by tokens. We implement this approach in the interactive game of Yamakuzushi on a physical robot resulting in a system that repeatably competes with a human opponent in sustained game-play for matches up to six minutes.
@techreport{dantam2010yamatech, author = {Neil T. Dantam and Mike Stilman}, title = {The Motion Grammar: Linguistic Perception, Planning, and Control}, number = {GT-GOLEM-2010-001}, institution = {College of Computing, Georgia Institute of Technology}, year = {2010} }