Researchers Develop System to Verify Safety and Stability of AI-Driven Control Systems


Insider Brief

  • Researchers at the University of Waterloo have developed a new framework combining mathematics and machine learning to verify the safety and stability of AI systems that control critical infrastructure such as power grids and autonomous vehicles.
  • The study, published in Automatica and supported by Waterloo’s TRuST Scholarly Network, uses neural networks to learn Lyapunov functions—key mathematical tools in control theory—paired with logic-based verification to ensure AI controllers behave safely under dynamic conditions.
  • By automating the generation and validation of mathematical safety proofs, the system could help engineers design more trustworthy AI for real-world applications, with the team planning to release the framework as open-source software for broader adoption.

Artificial intelligence is increasingly being trusted to manage systems where human safety is at stake — from steering autonomous vehicles to balancing national power grids. Yet one of the central questions facing researchers is how to ensure that these intelligent controllers behave safely under all possible conditions.

A new study from the University of Waterloo proposes a mathematical and machine learning framework to rigorously test whether AI systems can be trusted to keep complex physical systems stable and secure.

“Any time you’re dealing with a dynamic system — something that changes over time, such as an autonomous vehicle or a power grid — you can mathematically model it using differential equations,” noted lead researcher Dr. Jun Liu, professor of applied mathematics and Canada Research Chair in Hybrid Systems and Control.  
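
In that formulation, the system's state and its rate of change are tied together by a model of the underlying physics. As a generic illustration (not drawn from the paper itself), such a model and one concrete instance might look like this:

```latex
% Generic controlled dynamical system: state x(t), control input u(t)
\dot{x}(t) = f\bigl(x(t), u(t)\bigr)

% Illustrative example only: a damped pendulum driven by a torque u,
% with angle \theta, angular velocity \omega, gravity g, length \ell,
% mass m, and damping coefficient b
\dot{\theta} = \omega, \qquad
\dot{\omega} = -\frac{g}{\ell}\sin\theta - \frac{b}{m\ell^{2}}\,\omega + \frac{u}{m\ell^{2}}
```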

The research, supported by the University of Waterloo’s TRuST Scholarly Network and published in Automatica under the title “Physics-informed neural network Lyapunov functions: PDE characterization, learning, and verification,” describes a method that combines deep learning with formal mathematical verification. The university pointed out that the work bridges a gap between theory and real-world safety for AI systems managing dynamic environments.

At the heart of the approach is a concept from control theory known as the Lyapunov function — a mathematical construct that predicts whether a system will naturally move toward a stable, safe state. Engineers have long used Lyapunov functions to assess the stability of systems that evolve over time, such as flight control systems or electric grids, according to Waterloo. In simple terms, the function acts like an energy landscape showing whether a system will behave like a ball rolling into a bowl and coming to rest, or one perched precariously on a hill that could tip into chaos. However, identifying a suitable Lyapunov function for complex systems is mathematically difficult and often intractable for modern AI-driven applications.
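
In standard textbook notation (these are the general control-theory conditions, not the paper's specific formulation), a Lyapunov function V for a system \dot{x} = f(x) with an equilibrium at the origin must satisfy:

```latex
V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) < 0 \ \text{for } x \neq 0
```

If such a V exists on a region around the equilibrium, every trajectory starting in that region "rolls downhill" in V and settles at the equilibrium, which is exactly the ball-in-a-bowl picture above.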

Liu’s team tackled this challenge by training a neural network to learn Lyapunov functions that satisfy the strict conditions required for stability. The neural network essentially learns how to prove, in mathematical terms, that a given AI controller will not cause the system it manages to become unstable. To make these proofs trustworthy, the researchers paired the learning system with a second verification layer — a logic-based reasoning engine that rigorously checks the neural network’s results against formal mathematical rules. In this dual framework, the learning component proposes candidate safety proofs while the logic-based verifier independently confirms that they hold.
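
A minimal sketch of that learn-then-verify pattern is shown below in PyTorch. The toy two-dimensional system, network architecture, loss terms, and final grid check are illustrative assumptions, not the authors' code; in particular, the paper's verification step uses a formal, logic-based engine, whereas the grid check here is only a crude stand-in.

```python
import torch
import torch.nn as nn

# Toy 2-D nonlinear system (a damped pendulum), purely illustrative.
# x[:, 0] is the angle, x[:, 1] the angular velocity.
def f(x):
    theta, omega = x[:, 0:1], x[:, 1:2]
    return torch.cat([omega, -torch.sin(theta) - 0.5 * omega], dim=1)

# Candidate Lyapunov function V(x): a small neural network shaped so that
# V(0) = 0 and V(x) > 0 away from the origin by construction.
class LyapunovNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return ((self.net(x) - self.net(torch.zeros_like(x))).pow(2).sum(dim=1)
                + 0.1 * x.pow(2).sum(dim=1))

V = LyapunovNet()
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for step in range(2000):
    # Sample states from the region of interest and penalize violations of
    # the decrease condition dV/dt = grad V(x) . f(x) < 0.
    x = (torch.rand(256, 2) - 0.5) * 4.0
    x.requires_grad_(True)
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * f(x)).sum(dim=1)
    loss = torch.relu(vdot + 0.01).mean()  # hinge penalty when dV/dt is not clearly negative
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stand-in for the verification step: a dense grid check of dV/dt.
# A formal verifier would certify the condition over the entire region,
# not just at finitely many sampled points.
grid = torch.linspace(-2.0, 2.0, 200)
xs = torch.stack(torch.meshgrid(grid, grid, indexing="ij"), dim=-1).reshape(-1, 2)
xs.requires_grad_(True)
g = torch.autograd.grad(V(xs).sum(), xs)[0]
vdot = (g * f(xs)).sum(dim=1)
mask = xs.norm(dim=1) > 0.1  # ignore a small ball around the equilibrium
print("max dV/dt on the checked grid:", vdot[mask].max().item())  # negative is good
```

The training loop only penalizes violations at sampled points, which is why an independent verification pass over the whole region is needed before the learned function can be treated as a proof.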

“To be clear, no one is attempting to create factories or systems run entirely by AI without any human input,” Liu added. “There are areas such as ethics that will always be guided by human judgment. What these AI controllers and proof assistants are doing is taking over computation-intensive tasks, like deciding how to deploy power in a grid or constructing tedious mathematical proofs, that will be able to free up humans for higher-level decisions.” 

In tests on a range of control problems — including nonlinear and hybrid dynamic systems — the Waterloo framework matched or surpassed conventional stability analysis methods. Its success suggests that neural networks, when properly constrained by physics and logic, can be powerful tools for designing and validating safety-critical AI.

Liu’s group plans to release the framework as an open-source toolbox and expand collaborations with industry partners focused on autonomous systems, robotics, and energy infrastructure.

Greg Bock

Greg Bock is an award-winning investigative journalist with more than 25 years of experience in print, digital, and broadcast news. His reporting has spanned crime, politics, business, and technology, earning multiple Keystone Awards and Pennsylvania Association of Broadcasters honors. Through the Associated Press and Nexstar Media Group, his coverage has reached audiences across the United States.
