ICRA 2025 Workshop on Robot safety under uncertainty from “intangible” specifications

Important Details

Robotic and automation systems are expected to carry out increasingly sophisticated tasks in complex environments. This places stringent demands on their autonomy stacks to enforce operational constraints and safety guarantees. Safe control synthesis methods, including model predictive control, reachability analysis, and control barrier functions, commonly rely on explicit mathematical definitions of safety constraints together with known system and environment models. However, many current applications involve perception uncertainty or semantic concepts in their task specifications, and safety requirements may be specified implicitly, abstractly, or incompletely.
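
As a point of reference, explicit formulations of this kind typically encode safety as a set in the state space. A control barrier function (CBF), for example, assumes a known function $h$ whose zero-superlevel set is the safe set and enforces a condition of roughly the following form (a standard textbook statement, included here only for orientation):

    Safe set: $\mathcal{S} = \{x : h(x) \ge 0\}$ for control-affine dynamics $\dot{x} = f(x) + g(x)u$.
    CBF condition: $\sup_{u}\,[\,L_f h(x) + L_g h(x)\,u\,] \ge -\alpha(h(x))$ for all $x$,

where $\alpha$ is an extended class-$\mathcal{K}$ function; any controller satisfying the inequality renders $\mathcal{S}$ forward invariant. The settings targeted by this workshop are precisely those where no such $h$ is given in closed form.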

This workshop aims to investigate the concept and formulation of such “intangible” safety constraints. These constraints may not be specified explicitly as mathematical expressions; instead, they may be described through natural language or semantic concepts, observed as sensor data anomalies, or inferred from demonstrations and prior experience. The techniques required to identify such safety constraints, or to propagate uncertainty through them, are largely unexplored. Our workshop is interested in various facets of these techniques, including: identification of safety constraints from demonstrations, compatibility of learned constraints with control design methodology, interaction with humans or vision-language models for safety information, and the associated uncertainty quantification. The workshop invites experts in the field to discuss the formulation, optimization, and guarantees associated with intangible safety constraints.

We invite paper submissions on a range of topics related to the design of safe robot control from imprecise safety specifications, including but not limited to specifications given in natural language, learned from demonstrations, expressed as sensor anomalies, or conveyed through semantic concepts. The topics of interest include the formulation of mathematical representations of such safety specifications, quantification of ambiguity in safe set specifications, human-machine interaction for specifying safety concerns, and autonomous interpretation of safety considerations from abstractly specified tasks. Priority will be given to submissions that bridge the gap between “intangible” safety specifications and actionable safe control.

A paper submission should be 2-4 pages in length (excluding references) and should outline the results being presented, their novelty, and their relevance to the workshop. All papers must be submitted in PDF format and must follow the standard IEEE conference formatting guidelines. Information and templates are available here.

The papers will be reviewed by a Program Committee, assembled from the organizers, the invited speakers, and other experts in the field. The Program Committee will judge the contributions based on the following questions.

  1. What is the approach for interpreting imprecise safety specifications and how can the results lead to actionable control formulation?
  2. How is the ambiguity in the safety specifications quantified?
  3. How novel is the proposed approach, and how likely is it to eventually lead to an impactful publication?
  4. What are the key research questions that the community should address in future work, and whose resolution would significantly impact the presented work?

Accepted papers will be published on the workshop website and the authors will be invited to present their work during one of the workshop poster sessions.

The submission link is available at the top of the workshop website https://iscicra25.github.io.

Call for Talk Proposals

We invite junior researchers who are either close to completing their PhD or recent graduates to share their PhD research and research vision on robot safety at our workshop as a 30-minute talk.

Applicants must either have defended their PhD thesis after May 2023 or be in at least their third year of PhD study. Applicants are invited to submit a talk proposal in the form of an extended abstract of up to 2 pages (excluding references) summarizing their PhD research on a topic of interest to the workshop.

The extended abstract is expected to contain, and will be evaluated on, the following aspects:

  1. motivation behind the research question(s) addressed in the applicant’s research;
  2. clarity in defining and scoping the problem;
  3. alignment with the topic of the workshop;
  4. brief review of related work on the aforementioned research question(s);
  5. description of techniques contributed by the applicant, their novelty, and their potential advantages over existing ones;
  6. overview of future research directions.

Good examples of such research statements can be found at the RSS Pioneers Workshop.

The submitted talk proposals will be reviewed by the workshop Program Committee and one will be selected for presentation based on quality and relevance to the workshop topic. Any submitted talk proposal, by default, will also be considered for a poster presentation (see the call for workshop papers above).

The submission link is available at the top of the workshop website https://iscicra25.github.io.

Schedule

08:30-08:45 AM

Registration, welcome, and opening remarks

Session I: Capturing “Intangible” Safety Specifications

08:45-09:30 AM

Invited talk: Andrea Bajcsy, Carnegie Mellon University

Title: Towards "Open-World" Robot Safety: Unifying Generative AI and Control Systems Safety

Abstract: Robot safety is a nuanced concept. We commonly equate safety with collision avoidance, but in complex, real-world environments (i.e., the “open world”) it can be much more: for example, a safe manipulator should understand when it is not confident about a requested task, and it should hand sharp objects to someone with the sharp edge facing away. Foundational methods in safe control characterize safety constraints as arbitrary sets in state space and ensure that the robot always takes a best-effort action to avoid entering the unsafe set. While in theory this mathematical model is highly expressive, in practice it has been limited to highly controlled robot interactions. The reason is that, to date, we have found it challenging to mathematically codify safety in more diverse, nuanced interactions like those that arise in autonomous driving, household robotics, or personal robotics. In this talk, I will provide an overview of my lab's work towards generalizing robot safety by systematically uniting generative AI models and control-theoretic safety techniques.

09:30-10:00 AM

Junior researcher talk

10:00-10:30 AM

Coffee break with poster session 1

11:00-11:45 AM

Invited talk: Necmiye Ozay, University of Michigan

Title: Learning Temporal Logic Constraints from Multi-Modal Human Data

Abstract: Planning and alignment with human intent, while preserving notions of safety, is crucial for deploying AI-enabled autonomous systems in safety-critical applications. In this talk, I will present our recent work that infuses temporal logic with learning for safety and alignment. In the first part of the talk, I will present a method for learning multi-stage tasks from a small number of demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. In the second part of the talk, I will show how one can learn to rank different behaviors consistent with a given safety specification from human preferences, while ensuring that rule-violating behaviors are never ranked higher than rule-satisfying ones. These methods will be illustrated with applications in robotics and autonomous driving.
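
For orientation, an LTL formula of the kind referred to in this abstract composes atomic propositions with temporal operators such as G (always), F (eventually), and U (until). A generic illustration of a multi-stage task with a safety constraint (our own example, not one from the talk) is

    $\varphi = \mathbf{F}\,(p_1 \wedge \mathbf{F}\,p_2) \wedge \mathbf{G}\,\neg p_{\mathrm{unsafe}},$

read as “eventually reach region 1, afterwards eventually reach region 2, and never enter an unsafe state.” Learning such a formula from demonstrations means inferring both the logical structure and the groundings of the atomic propositions $p_1$, $p_2$, and $p_{\mathrm{unsafe}}$.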

Session II: Safe Control of Stochastic and Learning-Enabled Systems

11:45-12:30 PM

Invited talk: Luca Laurenti, TU Delft

Title: Safety of Stochastic Systems: From Stochastic Barrier Functions to Uncertain Abstractions

Abstract: Providing safety guarantees for stochastic dynamical systems has become a central problem in many fields, including control theory, machine learning, and robotics. In this talk I will present our recent work on providing safety guarantees for non-linear stochastic dynamical systems, including dynamical systems with neural networks in the loop. I will focus on two different approaches to quantifying safety for stochastic systems: Stochastic Barrier Functions (SBFs) and abstractions to uncertain Markov models. While SBFs are used analogously to Lyapunov functions to prove (probabilistic) set invariance, abstraction-based approaches approximate the stochastic system by a finite model for the computation of safety probability bounds. I will illustrate the pros and cons of both methods. I will then conclude the talk by illustrating how recent results from optimal transport and stochastic optimization could be employed to complement both methods and provide scalable safety guarantees for non-linear uncertain systems.
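
As background for the abstract above, a discrete-time stochastic barrier function certificate (stated here in a standard textbook form, which may differ in detail from the talk) looks roughly as follows. For dynamics $x_{k+1} = f(x_k, w_k)$ with initial set $X_0$ and unsafe set $X_u$, a function $B \ge 0$ satisfying

    $B(x) \ge 1$ for all $x \in X_u$, $\quad B(x_0) \le \eta$ for all $x_0 \in X_0$,
    $\mathbb{E}[\,B(f(x, w)) \mid x\,] \le B(x) + c$ for all $x$,

certifies, via a martingale inequality, the finite-horizon safety bound $\Pr(\exists\, k \le N : x_k \in X_u) \le \eta + cN$.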

12:30-01:45 PM

Lunch break

01:45-02:30 PM

Invited talk: Lars Lindemann, University of Southern California

Title: Safe Control of Learning-Enabled Autonomous Systems using Conformal Prediction

Abstract: Learning-enabled autonomous systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by algorithmic and computational advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems. However, these exciting developments are accompanied by new fundamental challenges that arise regarding the safety and reliability of these increasingly complex control systems in which sophisticated algorithms interact with unknown dynamic environments. Imperfect learning and unknowns in the environment require control techniques to rigorously account for such uncertainties. I advocate for the use of conformal prediction (CP) — a statistical tool for uncertainty quantification — due to its simplicity, generality, and efficiency as opposed to existing optimization techniques that are either conservative or subject to scalability issues. I first provide an accessible introduction to CP for the non-expert. My goal is then to show how we can use CP to design probabilistically safe motion planning algorithms in dynamic environments. Specifically, we will design a model 
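
As a minimal, self-contained sketch of the split conformal prediction machinery the abstract refers to (our illustration with made-up variable names, not code from the speaker):

    import numpy as np

    def conformal_quantile(scores, alpha):
        # Split conformal prediction: return the empirical quantile of the
        # calibration nonconformity scores with the (n + 1) finite-sample
        # correction, so that coverage holds for exchangeable data.
        n = len(scores)
        k = int(np.ceil((n + 1) * (1.0 - alpha)))
        assert k <= n, "too few calibration points for this alpha"
        return np.sort(scores)[k - 1]

    # Hypothetical usage: scores are prediction errors |y_i - f(x_i)| of a
    # learned trajectory predictor on a held-out calibration set.
    rng = np.random.default_rng(0)
    cal_scores = np.abs(rng.normal(size=200))  # stand-in for real errors
    q = conformal_quantile(cal_scores, alpha=0.1)
    # With probability at least 0.9, a fresh error is at most q, so a motion
    # planner can inflate predicted obstacle sets by q to act safely.

The appeal of this construction is that the guarantee is distribution-free: it requires only exchangeability of the calibration and test data, not a correct model of the environment.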

02:30-03:00 PM

Poster session 2

03:00-03:30 PM

Coffee break 2

Session III: Safety in Robot Perception and Real-World Robot Deployment

03:30-04:15 PM

Invited talk: Ayoung Kim, Seoul National University

Title: Safety in Robot Navigation using Sensors beyond the Visible Spectrum

Abstract: This talk delves into enhancing long-term robustness in SLAM by focusing on both perception and representation improvements. We will explore how extending sensor capabilities to include radars and thermal cameras can significantly enhance all-day, all-weather perception, which is crucial for maintaining safety in robotic operations, especially in challenging or degraded environments. By integrating these advanced sensors, we aim to address the challenges of operating in varied conditions, ensuring reliable and safe navigation.

04:15-05:00 PM

Invited talk: Georgios Fainekos, Toyota Motor

Title: Searching Perception Data Streams for Testing/Training Behaviors

Abstract: Fortunately, most everyday driving is rather uneventful. Therefore, we can collect an abundance of perception and action data under regular driving conditions for training autonomous vehicles. However, this data is not useful when encountering rare but dangerous situations, such as unusual accidents and near misses. We have developed an abstract language, inspired by regular expressions, to search perception data streams for interesting driving scenarios. These scenarios can form the basis for a data-driven approach to defining and identifying the boundary between safe/desirable and unsafe/undesirable behaviors. To fully automate the process, we demonstrate how perception data can be translated into scenarios for simulation-guided testing or training, and how natural language can be used to query the perception data streams. We conclude by identifying the open challenges and opportunities in searching for interesting driving scenarios and robustifying the autonomy stack against such scenarios.

05:00-05:30 PM

Asynchronous Q&A and closing remarks

Paper and talk submissions will be reviewed by the workshop Program Committee.

Contact

Should you have any questions, please do not hesitate to contact the organizer Vikas Dhiman (vikas.dhiman@maine.edu). Please include “ICRA 2025 Workshop” in the subject line of the email.

Acknowledgements

Vikas Dhiman is supported by NSF STORM Award #2316399.