The research in my group strives to develop theories that make machine learning applicable to real-world, large-scale engineering systems. Our research is interdisciplinary in nature: we develop new mathematical tools in machine/reinforcement learning, control theory, optimization, and network science, and apply these tools to cyber-physical systems, power systems, transportation systems, robotics, and beyond, with provable performance and resilience guarantees.

Some of our research projects are listed below.

Learn to Stabilize

Machine learning has been applied to control systems to learn to control an unknown system with provable performance guarantees (e.g., regret, competitive ratio). However, in addition to performance, an equally important property of control systems is stability, without which there is no performance to speak of. In this project, we investigate the "learn to stabilize" problem for an unknown system and study fundamental questions such as sample complexity.
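As a toy illustration of the problem setting (not a method from our papers), consider a scalar linear system x' = a x + b u with unknown, assumed values of a and b: we estimate the dynamics from one-step experiments by least squares, then apply a certainty-equivalence gain that stabilizes the estimated model.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 1.5, 1.0   # open loop is unstable (|a| > 1); values assumed for illustration

# One-step experiments: random state/input pairs, observe the next state.
N = 200
x0 = rng.normal(size=N)
u0 = rng.normal(size=N)
x1 = a_true * x0 + b_true * u0 + 0.01 * rng.normal(size=N)

# Least-squares estimate of the unknown dynamics (a, b).
Phi = np.stack([x0, u0], axis=1)
a_hat, b_hat = np.linalg.lstsq(Phi, x1, rcond=None)[0]

# Certainty-equivalence deadbeat gain u = -k x places the estimated pole at 0.
k = a_hat / b_hat
print(abs(a_true - b_true * k) < 1)  # closed loop is stable if the estimate is accurate
```

The sample-complexity question asks how many such experiments are needed before the estimated gain is guaranteed to stabilize the true system.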

Learning and Control for Networked Systems

Reinforcement Learning (RL) has achieved many successes in single-agent systems, but its application to large-scale networked systems faces a major obstacle: scalability. Concretely, the state or action space of such networked systems can be exponentially large in the number of nodes. In this project, we investigate how to exploit the network structure to make RL scalable for networked systems.
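A back-of-the-envelope sketch of the scalability gap (with illustrative numbers, not from any specific result): a centralized tabular representation over the joint state space grows exponentially in the number of agents, while truncating each agent's view to a fixed-radius neighborhood keeps the total size linear in the network size.

```python
# Tabular representations for a ring network of n agents, each with s local states.
# A centralized table over the joint state space scales as s**n, while giving each
# agent a table over its kappa-hop neighborhood (2*kappa + 1 agents on a ring)
# scales linearly in n for fixed kappa. Numbers below are illustrative.
s, n, kappa = 5, 20, 1

centralized = s ** n                 # exponential in the number of nodes
local = n * s ** (2 * kappa + 1)     # one small table per agent

print(centralized)   # 95367431640625
print(local)         # 2500
```

Exploiting network structure in this spirit (e.g., truncated or localized policies and value functions) is what makes the problem tractable.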

As one application, we have also applied RL to power systems.

Even without learning, the control of networked systems is already a challenging problem. To this end, I have developed fundamental theories on how to design distributed algorithms for control and optimization of networked systems using only local information and local communication.

Model Predictive Control

Model Predictive Control (MPC) is one of the most popular and flexible controller design approaches, yet its performance guarantees have long remained poorly understood, particularly for time-varying systems and systems with constraints. In this project, we propose a general perturbation analysis framework that bounds the regret of MPC.
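For readers unfamiliar with MPC, here is a minimal receding-horizon sketch on a scalar linear system (all values assumed for illustration; this is not our analysis framework): at each step, a finite-horizon quadratic problem is solved as a least-squares problem, only the first input is applied, and the plan is recomputed at the next state.

```python
import numpy as np

def mpc_step(x0, a, b, H=10, r=0.1):
    """One receding-horizon step for x_{t+1} = a x_t + b u_t,
    minimizing sum over the horizon of x_j**2 + r * u_j**2."""
    # Predicted states are linear in the planned inputs: x = G u + f.
    G = np.zeros((H, H))
    for j in range(H):
        for i in range(j + 1):
            G[j, i] = a ** (j - i) * b
    f = np.array([a ** (j + 1) for j in range(H)]) * x0
    # Stack state and input costs into one least-squares problem.
    A = np.vstack([G, np.sqrt(r) * np.eye(H)])
    rhs = np.concatenate([-f, np.zeros(H)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]   # apply only the first input, then re-plan

# Closed loop on an unstable scalar system: MPC drives the state to zero.
a, b, x = 1.2, 1.0, 5.0
for t in range(30):
    x = a * x + b * mpc_step(x, a, b)
print(abs(x) < 1e-3)
```

Regret analysis asks how much worse this receding-horizon loop is than the clairvoyant optimal trajectory, e.g., as a function of the horizon H.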

Bridging Model-based and Model-Free Methods

Traditional controller synthesis typically starts with a first-principles model and designs a controller with provable stability and robustness guarantees. In contrast, recent RL approaches do not assume a model and instead learn a controller (often neural-network based) in a data-driven manner, which experimentally can perform well even for complex dynamical systems. However, the RL approach is often data and computation heavy, requires extensive tuning, and lacks provable guarantees. In this project, we seek to combine both approaches and achieve the best of both worlds.
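One common way to combine the two (a generic sketch with assumed values, not necessarily our approach): keep a model-based nominal gain designed from an inaccurate model, and learn a residual correction on top of it directly from rollout costs on the true system, here via a simple derivative-free grid search.

```python
import numpy as np

def rollout_cost(k_extra, a=1.3, b=1.0, k_nom=1.0, T=20, x0=1.0):
    """Cost of the combined controller u = -(k_nom + k_extra) x on the true system."""
    x, cost = x0, 0.0
    for _ in range(T):
        u = -(k_nom + k_extra) * x
        cost += x * x + 0.1 * u * u
        x = a * x + b * u
    return cost

# Model-based part: k_nom = 1.0, designed from an inaccurate nominal model.
# Model-free part: derivative-free search over a residual correction k_extra,
# using only observed rollout costs on the true system (a = 1.3).
grid = np.linspace(-0.5, 1.0, 61)
costs = [rollout_cost(k) for k in grid]
k_star = grid[int(np.argmin(costs))]
print(rollout_cost(k_star) < rollout_cost(0.0))  # residual improves on nominal alone
```

The appeal of this structure is that the nominal gain can retain its stability guarantee while the learned residual recovers performance lost to model mismatch.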

Application: Power Systems

Much of our research is inspired by applications in power systems, particularly the distributed control and coordination of distributed energy resources. Here is a list of relevant power system publications.

Optimization Theory