This book covers the most recent developments in adaptive dynamic programming (ADP). The text begins with a thorough background review of ADP, ensuring that readers are sufficiently familiar with the fundamentals. In the core of the book, the authors address first discrete-time and then continuous-time systems. Coverage of discrete-time systems starts with a more general form of value iteration, demonstrating its convergence, optimality, and stability with complete theoretical analysis. A more realistic form of value iteration is then studied, in which the value function approximations are assumed to have finite errors. Adaptive Dynamic Programming also details another avenue of the ADP approach: policy iteration. Both basic and generalized forms of policy-iteration-based ADP are studied with complete theoretical analysis in terms of convergence, optimality, stability, and error bounds. For continuous-time systems, the control of affine and nonaffine nonlinear systems is studied using the ADP approach, which is then extended to other branches of control theory, including decentralized control, robust and guaranteed cost control, and game theory. The last part of the book presents the real-world significance of ADP theory, focusing on three application examples drawn from the authors' work:
• renewable energy scheduling for smart power grids;
• coal gasification processes; and
• water–gas shift reactions.
Researchers studying intelligent control methods and practitioners looking to apply them in the chemical-process and power-supply industries will find much to interest them in this thorough treatment of an advanced approach to control.
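The blurb above names value iteration as the workhorse for discrete-time ADP. As a quick orientation for readers new to the idea, here is a minimal sketch of classical value iteration on a toy finite-state problem; the three-state model, costs, transition kernel, and all variable names are illustrative assumptions, not material from the book, which treats general nonlinear systems with approximate value functions and finite approximation errors.

```python
import numpy as np

# Toy problem (assumed for illustration): 3 states, 2 actions,
# stage cost cost[s, a], transition kernel P[a, s, s'].
n_states, n_actions, gamma = 3, 2, 0.95

cost = np.array([[1.0, 2.0],
                 [0.5, 1.5],
                 [2.0, 0.2]])

P = np.array([
    [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]],  # action 0
    [[0.5, 0.5, 0.0], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],  # action 1
])

V = np.zeros(n_states)  # initial value-function guess
for _ in range(1000):
    # Bellman backup: Q(s, a) = c(s, a) + gamma * E[V(s') | s, a]
    Q = cost + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.min(axis=1)  # minimize over actions (cost setting)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new          # converged: gamma < 1 makes the backup a contraction
        break
    V = V_new

policy = Q.argmin(axis=1)  # greedy policy is optimal at convergence
print('V* =', V, 'policy =', policy)
```

Policy iteration, the second avenue the blurb mentions, instead alternates a full policy-evaluation step with a greedy policy-improvement step in place of the single backup used here.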
This book presents a class of novel optimal control methods and game schemes based on adaptive dynamic programming techniques.
A comprehensive look at state-of-the-art ADP theory and real-world applications. This book fills a gap in the literature by providing a theoretical framework for integrating techniques from adaptive dynamic programming (ADP) and modern ...
This book presents a class of novel, self-learning optimal control schemes based on adaptive dynamic programming techniques, which quantitatively determine the optimal control policies for the systems under study.
In this chapter, we propose a framework of robust adaptive dynamic programming (robust ADP for short), which is aimed at computing globally asymptotically stabilizing control laws that are robust to dynamic uncertainties, ...
This book reports new optimal control results with critic intelligence for complex discrete-time systems, covering novel control theory, advanced control methods, and typical applications to wastewater treatment systems.
This book reports on the latest advances in adaptive critic control with robust stabilization for uncertain nonlinear systems.
This groundbreaking book uniquely integrates four distinct disciplines (Markov decision processes, mathematical programming, simulation, and statistics) to demonstrate how to successfully model and solve a wide range of real-life problems ...
With a simple approach that includes real-time applications and algorithms, this book covers the theory of model predictive control (MPC).
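Since this blurb only names MPC, a brief sketch may help anchor the term: MPC repeatedly solves a finite-horizon optimal control problem and applies only the first input. The unconstrained linear-quadratic version below, with an assumed double-integrator model, horizon, and weights, reduces to a backward Riccati recursion; full MPC, as such a book covers, adds state and input constraints and solves an online optimization instead.

```python
import numpy as np

# Assumed double-integrator dynamics, weights, and horizon (illustrative only).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 20

def first_input(x):
    """Finite-horizon LQR via backward Riccati recursion; return only
    the first input, which is the receding-horizon principle."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x  # after the loop, K is the gain for the first step

x = np.array([1.0, 0.0])
for t in range(50):  # closed loop: re-solve at every step
    x = A @ x + B @ first_input(x)
print('final state:', x)
```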