This chapter presents fundamental issues in optimal control theory. The Hamilton–Jacobi–Bellman (HJB) equation is introduced as a means of obtaining the optimal control solution; however, solving the HJB equation is very difficult for general nonlinear systems. The inverse optimal control approach is therefore proposed as an alternative methodology that avoids solving the HJB equation directly.
Optimal control is concerned with finding a control law for a given system such that a performance criterion is minimized. This criterion is usually formulated as a cost functional, which is a function of the state and control variables. The optimal control problem can be solved using Pontryagin’s maximum principle (a ...
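For intuition, consider the one case where the Bellman equation (the discrete-time counterpart of the HJB equation) admits a closed-form solution: linear dynamics with a quadratic cost functional. The sketch below, a minimal illustration and not the book's method, computes the optimal feedback gain by the backward Riccati recursion; the double-integrator matrices are an assumed example.

```python
import numpy as np

def lqr_gain(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.

    Dynamics: x_{k+1} = A x_k + B u_k
    Cost:     sum_k x_k' Q x_k + u_k' R u_k
    """
    P = Q.copy()  # terminal value function V_N(x) = x' P x
    for _ in range(horizon):
        # Minimizing u in the Bellman equation yields the feedback gain K
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Value-function (Riccati) update for the previous stage
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Assumed example system: a double integrator
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = lqr_gain(A, B, Q, R, horizon=50)
# Optimal control law: u_k = -K @ x_k
```

For general nonlinear systems no such closed form exists, which is precisely the difficulty the inverse optimal approach sidesteps.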