Congratulations to Prof. He Zhu, who has received an NSF grant of $499,995 for his project "SHF:Small: Formal Symbolic Reasoning of Deep Reinforcement Learning Systems", covering a three-year period beginning 06/15/2020. He is the sole investigator on the project.
Prof. He Zhu receives NSF grant for his project Formal Symbolic Reasoning of Deep Reinforcement Learning Systems
Deep reinforcement learning, a type of artificial intelligence, has become pervasive and is being deployed in decision-making systems such as autonomous vehicles. Deep reinforcement learning models are, however, subject to instability in both their training process and their run-time performance. Amid rising concerns about the safety of artificial intelligence, and despite much progress in boosting public trust, significant challenges remain in rigorously guaranteeing the safety of deep reinforcement learning in safety-critical systems. This project integrates a range of symbolic reasoning tasks, namely rigorous abstraction and verification, enabled by formal methods technology into reinforcement learning to secure the public's trust in such systems. The project's impact is to establish new paradigms and lay foundations for provably safe deep reinforcement learning that is capable of making trustworthy decisions in complex real-world environments.
The project's novelty is to augment the training loop of reinforcement learning with a formal verification module that reasons about system-level safety properties. First, the project investigates techniques to construct formal and differentiable abstractions of reinforcement learning agents and environments. By using optimization techniques to minimize a loss that measures how far the differentiable abstractions stray from the safety properties, reinforcement learning can provide formal assurances of correctness at training time. Second, the project develops environment modeling and monitoring algorithms to capture environment conditions at run-time, so that reinforcement learning agents can be adapted to environment changes with the safety of each adaptation guaranteed by formal verification. Moreover, the project provides safety guarantees for vision-based deep reinforcement learning systems by encoding each high-dimensional visual input into a symbolic representation that is suitable for formal verification.
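The idea of folding verification into the training loop can be illustrated with a minimal sketch. The code below is not the project's actual algorithm; it is a toy example, with all names and numbers illustrative, of the general pattern: propagate a set of states through a policy using interval abstraction, turn any violation of a safety bound into a differentiable penalty, and shrink that penalty with gradient steps.

```python
# Hedged sketch (not the project's method): training a one-parameter policy
# against a safety loss computed on an interval abstraction of its states.

def interval_affine(lo, hi, w, b):
    """Propagate the interval [lo, hi] through the affine map x -> w*x + b."""
    candidates = [w * lo, w * hi]
    return min(candidates) + b, max(candidates) + b

def safety_loss(lo, hi, bound=1.0):
    """Penalty: how far the abstract successor interval exceeds |x| <= bound."""
    return max(0.0, hi - bound) + max(0.0, -bound - lo)

def train_step(w, lo, hi, lr=0.1, eps=1e-4):
    """One gradient step on the safety loss (finite-difference gradient)."""
    def loss(wv):
        l, h = interval_affine(lo, hi, wv, 0.0)
        return safety_loss(l, h)
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    return w - lr * grad

# Shrink an initially unsafe gain until the abstract successor of the
# state interval [-1, 1] provably stays within the safe region [-1, 1].
w = 2.0  # initial policy gain: maps [-1, 1] to the unsafe interval [-2, 2]
for _ in range(200):
    w = train_step(w, -1.0, 1.0)
lo, hi = interval_affine(-1.0, 1.0, w, 0.0)
print(lo, hi)  # successor interval now lies inside [-1, 1]
```

In a real instantiation, the affine map would be a neural policy composed with an environment model, the interval abstraction would be replaced by a sound abstract domain for networks, and the safety loss would be back-propagated directly rather than estimated by finite differences.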
More details can be found on the National Science Foundation's webpage at https://www.nsf.gov/awardsearch/showAward?AWD_ID=2007799