Ali Baheri

Assistant Professor | Rochester Institute of Technology

I am an assistant professor in the Department of Mechanical Engineering at Rochester Institute of Technology, where I lead the Safe AI Lab. Our lab focuses on research at the intersection of autonomy, controls, and machine learning. Before joining RIT, I was a visiting scholar at the Stanford Intelligent Systems Laboratory (SISL). Prior to that, I was an assistant professor (research track) at West Virginia University, and before that, a postdoctoral research fellow at the University of Michigan, Ann Arbor. I received my Ph.D. from the University of North Carolina at Charlotte in 2018.

Our vision is to advance safe, certified, and efficient autonomy for intelligent systems on the ground and in the air. To achieve this goal, we leverage tools from artificial intelligence, data-driven optimization, and control theory. Our work has been generously funded by the NSF, the FAA, and NASA.

news

Mar 12, 2024 [paper] New preprint out: BEACON: A Bayesian Evolutionary Approach for Counterexample Generation of Control Systems
Feb 24, 2024 [paper] New preprint out: Concurrent Learning of Policy and Unknown Safety Constraints in Reinforcement Learning
Jan 18, 2024 [paper] New arXiv paper out: The Synergy Between Optimal Transport Theory and Multi-Agent Reinforcement Learning
Dec 20, 2023 [award] Received a seed grant from RIT AI Seed Fund!
Nov 18, 2023 [paper] Our paper titled “Exploring the Role of Simulator Fidelity in the Safety Validation of Learning-Enabled Autonomous Systems” has been accepted by AI Magazine!
Nov 1, 2023 [paper] Our paper titled “LLMs-Augmented Contextual Bandit” has been accepted by the Foundation Models for Decision Making workshop at NeurIPS 2023!
Oct 27, 2023 [paper] Our paper titled “Understanding Reward Ambiguity Through Optimal Transport Theory in Inverse Reinforcement Learning” has been accepted by the Optimal Transport and Machine Learning workshop at NeurIPS 2023!
Sep 12, 2023 [paper] Our paper titled “Risk-Aware Reinforcement Learning Through Optimal Transport Theory” has been accepted by the 3rd RL-CONFORM workshop at IROS 2023!
Apr 27, 2023 [paper] Both papers submitted by the Safe AI Lab to the “Bridging the Gap Between AI Planning and Reinforcement Learning (PRL)” workshop at ICAPS 2023 have been accepted!
Mar 10, 2023 [paper] Our paper titled “Falsification of Learning-Based Controllers through Multi-Fidelity Bayesian Optimization” has been accepted for the 2023 European Control Conference (ECC)!
Feb 9, 2023 [talk] I gave a talk at the AAAI 2023 New Faculty Highlights Program entitled “On the Role of Fidelity in the Safety Evaluation of Learning-Based Autonomous Systems”.
Nov 22, 2022 [award] Selected for the AAAI 2023 New Faculty Highlights Program.
Jul 6, 2022 [workshop] We are co-organizing the “Machine Learning for Autonomous Driving (ML4AD)” workshop at NeurIPS 2022.
Jun 13, 2022 [workshop] We are co-organizing the “NASA EPSCoR HACKWEEK” workshop at West Virginia University.
May 23, 2022 [talk] I gave a talk at the ICRA 2022 Workshop on the Verification of Autonomous Systems (VAS).
Apr 6, 2022 [talk] I gave a talk at the University of North Texas entitled “Safe Decision-Making in Evolving Environments for Safety-Critical Autonomous Systems”.
Jan 17, 2022 [award] Our project “Safety Validation of Autonomous Systems from Multiple Sources of Information” has been funded by the National Science Foundation (NSF).
Jan 12, 2022 [paper] Our recent work on safe reinforcement learning with mixture density networks has been accepted by the journal Results in Control and Optimization.
Sep 1, 2021 [award] Our project “Safety Verification Framework for Learning-based Aviation Systems” has been funded by the Federal Aviation Administration (FAA).
Aug 9, 2021 [award] Our project “Verification of Multi-Agent Autonomous Planning and Control” has been funded by the West Virginia University Research Office Program.