Article | REF: AF1374 V1

Optimal Control

Author: J. Frédéric BONNANS

Publication date: April 10, 2015



Overview


ABSTRACT

Optimal control theory analyzes how to optimize dynamical systems under various criteria: reaching a target in minimal time or with minimal energy, maximizing the efficiency of an industrial process, etc. This involves the optimization both of time-independent parameters and of control variables that are functions of time. This article analyzes the first- and second-order optimality conditions, and how to solve them by time discretization, the shooting algorithm, or dynamic programming.


AUTHOR

  • J. Frédéric BONNANS: Research Director, INRIA, and Center for Applied Mathematics, Ecole Polytechnique, Palaiseau

INTRODUCTION

A dynamical system is said to be controlled if it can be acted upon through time-dependent variables, known as controls. Let us illustrate this concept in the case of a spacecraft, described by position and velocity variables h and V (in ℝ³) and a mass m > 0, i.e. 7 state variables. The dynamics are, omitting the time argument, ḣ = V, mV̇ = F(h, V) + u and ṁ = −c|u|. Here c is a positive constant and F(h, V) accounts for gravitational and (where applicable) aerodynamic forces. The control is the applied force u, whose Euclidean norm |u| is subject to a constraint of the type |u| ≤ U. Given a fixed initial point, we seek to reach a target (a subset of the state space) while minimizing a compromise between travel time and energy expended.
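
The dynamics above can be simulated once a control u(·) is chosen. The sketch below uses an explicit Euler time discretization, with uniform gravity as a stand-in for F(h, V) and a constant thrust; the numerical values (c, U, dt) are illustrative assumptions, not taken from the article.

```python
import numpy as np

def spacecraft_step(h, V, m, u, dt, c=1e-3):
    """One explicit-Euler step of the dynamics
    h' = V,  m V' = F(h, V) + u,  m' = -c|u|.
    F is taken here as uniform gravity (an illustrative assumption)."""
    g = np.array([0.0, 0.0, -9.81])
    F = m * g                          # gravity only; no aerodynamics
    h_next = h + dt * V
    V_next = V + dt * (F + u) / m
    m_next = m - dt * c * np.linalg.norm(u)   # fuel consumption
    return h_next, V_next, m_next

# Simulate a short vertical ascent under constant thrust with |u| <= U.
h, V, m = np.zeros(3), np.zeros(3), 100.0
u = np.array([0.0, 0.0, 2000.0])       # thrust exceeding the weight m*g
for _ in range(100):
    h, V, m = spacecraft_step(h, V, m, u, dt=0.01)
```

Over the one-second horizon the mass decreases linearly (|u| is constant), while position and velocity grow along the thrust axis.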

For the real-time implementation of a control system, one must take into account the available observations and the reconstruction of the state, together with signal-processing aspects and the choice of control electronics. In contrast, this article considers only the upstream study, in which a deterministic framework is used and an optimal control is computed off-line. The shape of the latter can guide the design of the real-time controller.

The presentation will first follow the approach of Lagrange and Pontryagin, which consists in studying the variations of an optimal trajectory to determine its properties. First- and second-order optimality conditions will be analyzed, in connection with the shooting algorithm, with particular attention...
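
To give a concrete idea of the shooting algorithm mentioned above, here is a minimal single-shooting sketch on a simple example chosen for illustration (not taken from the article): steer a double integrator x'' = u from (x, v) = (0, 0) at t = 0 to (1, 0) at t = 1 while minimizing ∫ u²/2 dt. Pontryagin's principle gives u = −p_v with costate dynamics p_x' = 0, p_v' = −p_x; shooting then solves for the initial costate so that the terminal condition is met.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# State (x, v), costate (px, pv). The Hamiltonian
# H = px*v + pv*u + u**2/2 is minimized by u = -pv,
# and Pontryagin's principle gives px' = 0, pv' = -px.
def rhs(t, y):
    x, v, px, pv = y
    u = -pv
    return [v, u, 0.0, -px]

def shoot(p0):
    """Integrate from (x, v) = (0, 0) with guessed initial costate p0;
    return the miss at the target (x, v)(1) = (1, 0)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, p0[0], p0[1]],
                    rtol=1e-10, atol=1e-12)
    x1, v1 = sol.y[0, -1], sol.y[1, -1]
    return [x1 - 1.0, v1 - 0.0]

# Root-finding on the initial costate solves the two-point boundary problem.
p0 = fsolve(shoot, [1.0, 1.0])
# The optimal control is then u(t) = -pv(t) = p0[0]*t - p0[1].
```

For this problem the shooting map is linear in the costate guess, so the root-finder converges immediately; the analytic solution is u(t) = 6 − 12t, i.e. initial costate (−12, −6).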




KEYWORDS

dynamical systems   |   path following   |   minimal time   |   shooting algorithm   |   dynamic programming

