##### Featured Video

*Evolution of the controls and of the state from the initial value to the final one in the minimal computed time.*

##### Introduction

In this short tutorial, we explain how to use IpOpt to solve time optimal control problems. We refer to [1,4,5] for a survey of numerical methods in optimal control and of how to implement them efficiently according to the context.

##### Solving Problem (P0)

- IpOpt [6] is an interior-point optimization routine. Basically, it numerically solves optimisation problems of the form

  $$(\mathrm{P0}) \qquad \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g(x) \le 0 \quad \text{and} \quad h(x) = 0.$$

  IpOpt can be used together with Matlab, FreeFem++… and can also be used directly in **.cpp** codes.
- AMPL [2] is an automatic differentiation and modelling language. The interest of using AMPL together with IpOpt is that the gradients of the cost function and of the constraints are automatically generated. Problems written in AMPL can also be solved with IpOpt online, through the NEOS solvers.

Let us write the problem (P0) in the AMPL language:

```ampl
# The variable of the problem
var x {i in 1..n};

# The cost function to be minimized
minimize cost : f(x);

# The inequality constraints
subject to inequality_constraints : g(x) <= 0;

# The equality constraints
subject to equality_constraints : h(x) = 0;

# Set IpOpt as solver
option solver ipopt;

# Set options for IpOpt, such as the maximal number of iterations
option ipopt_options "max_iter=20000 linear_solver=mumps halt_on_ampl_error yes";

# Solve the problem
solve;

# Display the cost value
printf : "# cost = %24.16e\n", cost;
printf : "# Data\n";

# Display the optimal values of x
printf {i in 1..n} : "%24.16e\n", x[i];

# Quit AMPL
end;
```
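If IpOpt and AMPL are not at hand, the structure of (P0) can still be illustrated with a few lines of plain Python. The sketch below solves a small made-up instance of (P0) with a quadratic-penalty method and finite-difference gradient descent; this only illustrates the problem class, not IpOpt's interior-point algorithm [6], and the particular choices of f, g, h are assumptions for the example.

```python
# A tiny quadratic-penalty solver for a (P0)-style problem (illustrative only;
# IpOpt itself uses an interior-point filter line-search method instead).
# Example problem:  minimize f(x) = x1^2 + x2^2
#                   s.t.  g(x) = 1 - x1 - x2 <= 0  and  h(x) = x1 - x2 = 0.
# Its exact solution is x = (1/2, 1/2), with cost 1/2.

def f(x): return x[0]**2 + x[1]**2
def g(x): return 1.0 - x[0] - x[1]     # must be <= 0
def h(x): return x[0] - x[1]           # must be  = 0

def solve_penalty(x, mu=1e4, lr=2e-5, iters=2000):
    """Minimize f + mu*(max(0,g)^2 + h^2) by finite-difference gradient descent."""
    def F(x):
        return f(x) + mu * (max(0.0, g(x))**2 + h(x)**2)
    eps = 1e-6
    for _ in range(iters):
        grad = [(F([x[0] + eps, x[1]]) - F(x)) / eps,
                (F([x[0], x[1] + eps]) - F(x)) / eps]
        x = [x[0] - lr * grad[0], x[1] - lr * grad[1]]
    return x

x = solve_penalty([0.0, 0.0])
print(x, f(x))   # close to [0.5, 0.5] and 0.5
```

For large problems this naive approach is hopeless; the point of IpOpt + AMPL is precisely that exact derivatives are generated automatically and the interior-point iteration scales to many variables and constraints.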

##### Solving a time optimal control problem with constraints

Let us now turn to a time optimal control problem.

Given a dynamical system,

$$\dot y(t) = f\big(y(t), u(t)\big), \qquad t \in [0,T], \tag{1}$$

some initial condition $y(0) = y^0$, some terminal condition $y(T) = y^1$ and some bound $M > 0$, the aim is to find the minimal time $T$ such that there exists a control $u$ satisfying

$$|u(t)| \le M, \qquad t \in [0,T], \tag{2}$$

and such that the solution of (1) satisfies

$$g\big(y(t)\big) \le 0, \qquad t \in [0,T], \tag{3}$$

where $g$ is given and defines the constraints on the state variable $y$. Of course, to be able to solve this problem, one needs to have $g(y^0) \le 0$ and $g(y^1) \le 0$.

The optimal control problem (1)–(3) can be recast as an optimisation problem under constraints, similar to (P0). More precisely, it is

$$\min_{T \ge 0,\ u} T \qquad \text{subject to } y(0) = y^0,\ y(T) = y^1 \text{ and (1), (2), (3).}$$

In order to handle this problem numerically, we will use a time discretization. Let us explain it with the explicit Euler method, but any other time discretization could be used.

Fix some parameter $N_t \in \mathbb{N}^*$ (the number of time steps) and, for $T > 0$ given, define $y_i$ the estimation of $y(iT/N_t)$ for $i \in \{0,\dots,N_t\}$. The explicit Euler scheme gives the relation

$$y_{i+1} = y_i + \frac{T}{N_t}\, f(y_i, u_i), \qquad i \in \{0,\dots,N_t-1\},$$

with $u_i = u(iT/N_t)$. Then the state and control constraints (2) and (3) are replaced by

$$|u_i| \le M \qquad \text{and} \qquad g(y_i) \le 0, \qquad i \in \{0,\dots,N_t\}.$$

Consequently, in the discretized version, we end up with a finite-dimensional optimisation problem under constraints, whose AMPL version is:

```ampl
# Define the parameters of the problem
param Nt = 100;  # number of time discretization steps
param M  = 1;    # bound on the control
param y0 = 1;    # initial condition
param y1 = 0;    # final condition

# Define the variables of the problem
var y {i in 0..Nt};
# The control shall be in [-M,M]
var u {i in 0..Nt} >= -M, <= M;
# The time T shall be nonnegative
var T >= 0;

# The cost function is the time T
minimize cost : T;

# Set the constraints
# y is solution of (1)
subject to y_dyn {i in 0..Nt-1} : y[i+1] = y[i] + T/Nt*f(y[i],u[i]);
# y(0)=y0
subject to y_init : y[0] = y0;
# y(T)=y1
subject to y_end : y[Nt] = y1;
# g(y(t))<=0
subject to state_constraint {i in 1..Nt-1} : g(y[i]) <= 0;

# Solve with IpOpt
option solver ipopt;
option ipopt_options "max_iter=20000 linear_solver=mumps halt_on_ampl_error yes";
solve;

# Display the solution
printf : "# T  = %24.16e\n", T;
printf : "# Nt = %d\n", Nt;
printf : "# Data\n";
printf {i in 0..Nt} : " %24.16e\n", u[i];
printf {i in 0..Nt} : " %24.16e\n", y[i];
end;
```
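As a sanity check of the discretized dynamics, the Euler recursion can be unrolled by hand in Python for a toy choice of dynamics (an assumption made here for illustration, not part of the model above): for $\dot y = u$ with $|u| \le 1$, $y^0 = 1$ and $y^1 = 0$, the minimal time is $T = 1$, attained with the constant control $u = -1$.

```python
# Explicit Euler recursion y[i+1] = y[i] + T/Nt * f(y[i], u[i]) for the
# (assumed) toy dynamics f(y, u) = u, driven by the bang control u = -M.
Nt, M = 100, 1.0
y0, y1 = 1.0, 0.0
T = 1.0                  # for ydot = u with |u| <= M, the minimal time is |y1 - y0|/M

y = y0
for i in range(Nt):
    u = -M               # the time-optimal control is u = -1 throughout
    y = y + T / Nt * u   # one explicit Euler step
print(y)                 # essentially 0.0: the target y1 is reached at time T
```

Since $f$ does not depend on $y$ here, the Euler scheme is exact; in general the computed minimal time converges to the true one only as $N_t \to \infty$.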

##### Application to the constrained heat equation

Now we can turn to the control of the heat equation with a nonnegative state constraint.

To this end, we consider the controlled 1D heat equation with Neumann boundary controls,

$$
\begin{cases}
\partial_t y(t,x) - \partial_{xx} y(t,x) = 0, & (t,x) \in (0,T) \times (0,1),\\
\partial_x y(t,0) = v_0(t), \quad \partial_x y(t,1) = v_1(t), & t \in (0,T).
\end{cases}\tag{4}
$$

To this problem, we add the control constraints,

$$|v_0(t)| \le M \quad \text{and} \quad |v_1(t)| \le M, \qquad t \in (0,T),$$

with some $M > 0$ given, and we add the state constraint,

$$y(t,x) \ge 0, \qquad (t,x) \in (0,T) \times (0,1).$$

In [3], it has been proved that every positive constant state can be steered to any other positive constant state in a large enough time $T$. Our goal here is to solve this problem numerically. Firstly, we will use a space discretization to reduce the system (4) to a system of ordinary differential equations. To this end, we fix $N_x \in \mathbb{N}^*$, set $\delta x = 1/N_x$ and, for every $j \in \{0,\dots,N_x\}$, $x_j = j\,\delta x$. Based on this discretization of $[0,1]$, we will discretize (4) using centered finite differences. That is to say, given controls $v_0$ and $v_1$, the vector $(y_0(t),\dots,y_{N_x}(t))$ approaching $(y(t,x_0),\dots,y(t,x_{N_x}))$ is solution of

$$\dot y_j(t) = \frac{y_{j-1}(t) - 2\, y_j(t) + y_{j+1}(t)}{\delta x^2}, \qquad j \in \{1,\dots,N_x-1\}, \tag{5}$$

with the initial condition

$$y_j(0) = y^0, \qquad j \in \{0,\dots,N_x\}, \tag{6}$$

the target state,

$$y_j(T) = y^1, \qquad j \in \{0,\dots,N_x\}, \tag{7}$$

the state constraint,

$$y_j(t) \ge 0, \qquad j \in \{0,\dots,N_x\}, \quad t \in (0,T), \tag{8}$$

and the control constraint,

$$|v_0(t)| \le M \quad \text{and} \quad |v_1(t)| \le M, \qquad t \in (0,T), \tag{9}$$

where we have set, in accordance with the Neumann boundary conditions in (4),

$$y_0(t) = y_1(t) - \delta x\, v_0(t) \qquad \text{and} \qquad y_{N_x}(t) = y_{N_x-1}(t) + \delta x\, v_1(t).$$

Now we can discretize (5) using the explicit Euler scheme and, similarly to the previous example, we obtain a finite-dimensional optimisation problem with constraints, whose AMPL formulation is:

```ampl
# Parameters of the problem
param Nx = 30;   # Number of space discretisation points
param Nt = 300;  # Number of time discretisation points
# One has to check a posteriori that the number of time steps is large enough
# so that the CFL condition, T*Nx^2/Nt <= 1/2, is satisfied
param dx = 1/Nx; # Space step
param M  = 20;   # Bound on the controls

# Variables of the system
# i stands for the time index and j for the space index
var y {i in 0..Nt, j in 0..Nx} >= 0; # State of the control problem
# Neumann controls in 0 and 1. The controls are in [-M,M].
var v0 {i in 0..Nt} >= -M, <= M;
var v1 {i in 0..Nt} >= -M, <= M;
var T >= 0;      # Control time
var dt = T/Nt;   # Time step

# Define the cost function
minimize cost : T;

# Define the constraints
# y is solution of the discretized system
subject to y_dyn {i in 0..Nt-1, j in 1..Nx-1}:
    (y[i+1,j]-y[i,j])*(dx)^2 = (y[i,j-1]-2*y[i,j]+y[i,j+1])*dt;
# Neumann boundary conditions in 0 and 1
subject to left_boundary  {i in 1..Nt-1}: y[i,1]-y[i,0]     = v0[i]*dx;
subject to right_boundary {i in 1..Nt-1}: y[i,Nx]-y[i,Nx-1] = v1[i]*dx;
# y(0)=y0 and y(T)=y1
subject to y_init {j in 0..Nx}: y[0,j]  = 5;
subject to y_end  {j in 0..Nx}: y[Nt,j] = 1;

# Solve with IpOpt
option solver ipopt;
option ipopt_options "max_iter=2000 linear_solver=mumps halt_on_ampl_error yes";
solve;

# Write the solution in the file out.txt
printf : " # T  = %24.16e\n", T > out.txt;
printf : " # Nx = %d\n", Nx >> out.txt;
printf : " # Nt = %d\n", Nt >> out.txt;
printf : " # Data\n" >> out.txt;
printf {i in 0..Nt} : " %24.16e\n", v0[i] >> out.txt;
printf {i in 0..Nt} : " %24.16e\n", v1[i] >> out.txt;
printf {i in 0..Nt, j in 0..Nx} : " %24.16e\n", y[i,j] >> out.txt;
end;
```
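The CFL condition mentioned in the comments can be illustrated with a small forward simulation, written here in plain Python under the simplifying assumption of zero controls ($v_0 = v_1 = 0$, not the optimal ones computed by IpOpt). With homogeneous Neumann conditions, the heat equation flattens any initial profile, and the explicit scheme remains stable as long as $T N_x^2 / N_t \le 1/2$.

```python
# Explicit Euler simulation of the space-discretized heat equation used above,
# with zero Neumann controls, to illustrate the CFL condition T*Nx^2/Nt <= 1/2.
Nx, Nt, T = 30, 1000, 0.5          # here T*Nx**2/Nt = 0.45 <= 0.5: CFL satisfied
dx, dt = 1.0 / Nx, T / Nt

# Some initial profile (a step between two constant states, chosen for the demo)
y = [5.0 if j < Nx // 2 else 1.0 for j in range(Nx + 1)]
for _ in range(Nt):
    y_new = y[:]
    for j in range(1, Nx):         # interior points: centered second difference
        y_new[j] = y[j] + dt / dx**2 * (y[j-1] - 2*y[j] + y[j+1])
    y_new[0], y_new[Nx] = y_new[1], y_new[Nx-1]   # Neumann with v0 = v1 = 0
    y = y_new
print(max(y) - min(y))             # small: the profile has nearly flattened
```

Taking, say, $T = 1$ with the same $N_t$ violates the CFL bound and makes the iteration blow up, which is why the AMPL comment asks to check the condition a posteriori on the computed $T$.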

Once the file **out.txt** is written, it can be read, for instance, by **Scilab** with the following code:

```scilab
fid = mopen('out.txt','r');                    // Open out.txt
T  = mfscanf(fid,'%s %s %s');                  // Read "# T ="
T  = mfscanf(fid,'%f');                        // Read the value of T
Nx = mfscanf(fid,'%s %s %s');                  // Read "# Nx ="
Nx = mfscanf(fid,'%d');                        // Read the value of Nx
Nt = mfscanf(fid,'%s %s %s');                  // Read "# Nt ="
Nt = mfscanf(fid,'%d');                        // Read the value of Nt
s  = mfscanf(fid,'%s %s'); s = [];             // Read "# Data"
v0 = mfscanf(Nt+1,fid,'%f'); v0 = v0';         // Read the Nt+1 values of v0
v1 = mfscanf(Nt+1,fid,'%f'); v1 = v1';         // Read the Nt+1 values of v1
y  = mfscanf((Nt+1)*(Nx+1),fid,'%f');          // Read the (Nt+1)*(Nx+1) values of y
mclose(fid);                                   // Close out.txt
y = matrix(y,Nx+1,Nt+1); y = y';               // Reshape y: line i is the state at time step i
x = 0:1/Nx:1; t = 0:T/Nt:T;                    // Space and time discretisations
printf('time:\t%f\n',T);                       // Display the control time
plot(t,[v0;v1]); sleep(2000); clf();           // Plot the controls and wait 2s
plot2d(x,y(1,:),rect=[0 0 1 10]); sleep(100);  // Plot the initial state and wait 0.1s
for i=2:1:Nt,
    plot(x,y(i,:),rect=[0 0 1 10]); sleep(10); // Plot the state at each time instant
end
plot(x,y($,:),rect=[0 0 1 10]);                // Plot the final state
```
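If Scilab is not available, the same **out.txt** layout can be parsed in plain Python. To keep the sketch self-contained, it first writes a tiny synthetic file in the same format (the numbers are made up for the example), then reads it back.

```python
# Write a tiny synthetic out.txt in the AMPL script's format (header lines
# "# T =", "# Nx =", "# Nt =", "# Data", then v0, v1 and y, one value per line).
Nx, Nt, T = 2, 1, 0.5
with open("out.txt", "w") as f:
    f.write(" # T  = %24.16e\n" % T)
    f.write(" # Nx = %d\n" % Nx)
    f.write(" # Nt = %d\n" % Nt)
    f.write(" # Data\n")
    for i in range(Nt + 1):                 # controls v0, then v1
        f.write(" %24.16e\n" % 0.0)
    for i in range(Nt + 1):
        f.write(" %24.16e\n" % 0.0)
    for i in range(Nt + 1):                 # state, row i = time step i
        for j in range(Nx + 1):
            f.write(" %24.16e\n" % float(i + j))

# Read it back, mirroring the Scilab snippet above.
with open("out.txt") as f:
    T  = float(f.readline().split("=")[1])
    Nx = int(f.readline().split("=")[1])
    Nt = int(f.readline().split("=")[1])
    f.readline()                            # skip "# Data"
    values = [float(line) for line in f]
v0 = values[:Nt + 1]
v1 = values[Nt + 1:2*(Nt + 1)]
y  = values[2*(Nt + 1):]
# Reshape y: row i is the state at time i*T/Nt
y = [y[i*(Nx + 1):(i + 1)*(Nx + 1)] for i in range(Nt + 1)]
print(T, Nx, Nt, y[1][2])   # 0.5 2 1 3.0
```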

###### References

**[1]** J. T. Betts. *Practical methods for optimal control and estimation using nonlinear programming.* 2nd ed. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 2010.

**[2]** R. Fourer, D. M. Gay, and B. W. Kernighan. *A modeling language for mathematical programming.* Manage. Sci., 36(5):519–554, 1990.

**[3]** J. Lohéac, E. Trélat, and E. Zuazua. *Minimal controllability time for the heat equation under state constraints.* In preparation.

**[4]** E. Trélat. *Contrôle optimal. Théorie et applications.* Paris: Vuibert, 2005.

**[5]** E. Trélat. *Optimal control and applications to aerospace: some results and challenges.* J.Optim. Theory Appl., 154(3):713–758, 2012.

**[6]** A. Wächter and L. T. Biegler. *On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming.* Math. Program., 106(1 (A)):25–57, 2006.