Optimal nonlinear control concerns finding a control law for a given system such that a cost functional (performance index), usually formulated as a function of the state and input variables, is minimized. The major drawback of optimal nonlinear control is the need to solve the associated Hamilton–Jacobi–Bellman (HJB) equation. To the best of our knowledge, the HJB equation has not been solved for general nonlinear systems; an explicit solution is known only for the linear regulator problem, where it reduces to the Riccati equation.
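To illustrate the one tractable case mentioned above, the following sketch solves the discrete-time linear quadratic regulator, where the HJB equation collapses to an algebraic Riccati equation solvable by value iteration. The scalar system and weight values (`a`, `b`, `q`, `r`) are illustrative assumptions, not taken from the book.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Solve the scalar discrete-time algebraic Riccati equation by
    value iteration and return the optimal feedback gain k (u = -k*x)."""
    p = q  # coefficient of the quadratic value function V(x) = p*x^2
    for _ in range(iters):
        # Riccati recursion: p <- q + a^2 p - (a b p)^2 / (r + b^2 p)
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Unstable open-loop system x_{k+1} = 1.1 x_k + u_k with unit cost weights.
k = dlqr_scalar(a=1.1, b=1.0, q=1.0, r=1.0)
assert abs(1.1 - 1.0 * k) < 1  # closed-loop pole lies inside the unit circle
```

For general nonlinear dynamics no such fixed-point recursion is available, which is the gap the inverse optimal control approach of this book addresses.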
This book presents a novel inverse optimal control approach for the stabilization and trajectory tracking of discrete-time nonlinear systems, which avoids solving the associated HJB equation while minimizing a cost ...