##### Introduction

We are interested in optimal control problems subject to a class of diffusion-reaction systems that describes the growth and spread of an introduced population of organisms,

$$\frac{\partial y}{\partial t}(t,x) = \Delta y(t,x) + f\big(y(t,x)\big), \tag{1}$$

where

$$f(y) = a\,y\,(1-y)\,(\theta - y) \tag{2}$$

is the reaction term that represents local reactions, and $y$ is the state of the system. Here $a$ and $\theta$ are two real parameters.

The state $y$ represents the local population density. The "growth" of $y$ is subject to an Allee effect (described by the reaction term $f$) in addition to migration (described by the term $\Delta y$). The Allee effect exists for a wide variety of reasons, such as less efficient feeding at low densities and reduced effectiveness of vigilance and anti-predator defenses.

The value of $a$ represents the reproductive rate, and the parameter $\theta$ is the local critical density or Allee threshold, which determines the sign (positive or negative) of the population growth. Note that, in some of the literature, the Allee threshold $\theta$ has been treated as a dynamic parameter that changes as the species evolves. Therefore, by means of biological control (e.g. importation of predators), environmental control (e.g. food supply), or modern technology (e.g. DNA manipulation), the birth rate and the Allee threshold should be modifiable. That is to say, we can consider the parameters $a$ and $\theta$ as the controls of the system (1).

Note that the reaction term $f$ has three zeros, $0$, $\theta$, and $1$, which correspond to three constant solutions of the system (1).

For system (1), there is a propagation phenomenon: one of the states $y=0$, $y=\theta$, or $y=1$ propagates in space.

This phenomenon is generally described by a \emph{traveling wave solution} of the form

$$y(t,x) = \varphi(x - ct), \tag{3}$$

which connects two of the three constant solutions of the system (1). Here the constant $c$ is the wave speed, and $\varphi$ is called the wave profile. Typically, the wave speed and the wave profile depend on the parameters $a$ and $\theta$.
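For this cubic reaction term, an explicit traveling wave connecting the states $0$ and $1$ is classical, and its speed formula is exactly the one encoded by the parameters `A0`, `c0`, `Af`, `cf` in the AMPL code below: with $A=\sqrt{-a}$, the profile $\varphi(\xi)=1/(1+e^{-A\xi/\sqrt{2}})$ travels at speed $c=-A\sqrt{2}\,(1/2-\theta)$. The following Python sketch (NumPy assumed; it is not part of the AMPL program) checks this numerically:

```python
import numpy as np

# Bistable reaction term f(y) = a*y*(1-y)*(theta-y) with a < 0.
a, theta = -1.0, 0.7
A = np.sqrt(-a)
c = -A * np.sqrt(2) * (0.5 - theta)     # wave speed, as in c0/cf of the AMPL code

def phi(xi):
    """Logistic wave profile connecting y=0 (left) to y=1 (right)."""
    return 1.0 / (1.0 + np.exp(-A * xi / np.sqrt(2)))

def f(y):
    return a * y * (1.0 - y) * (theta - y)

# y(t,x) = phi(x - c*t) solves y_t = y_xx + f(y)  iff  -c*phi' = phi'' + f(phi).
xi = np.linspace(-10.0, 10.0, 2001)
h = xi[1] - xi[0]
p = phi(xi)
dp = np.gradient(p, h)                          # phi'
d2p = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / h**2   # phi'' at interior points
residual = -c * dp[1:-1] - d2p - f(p[1:-1])
print(np.abs(residual).max())                   # only finite-difference error remains
```

The residual is at the level of the finite-difference truncation error, confirming the profile/speed pair.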

Given a bounded domain $\Omega$, our optimal control problem is then to choose optimal (control) parameters $a$ and $\theta$ such that the system (1) goes from a given initial state $y_0$ to a final state $y(T,\cdot)$ that minimizes the distance between this final state and an expected traveling wave solution of the form (3).

##### Optimal control problem

We consider the following optimal control problem.

Let $\Omega=(-L,L)$ and $T>0$ be the given domain and final time, respectively. Find controls $(a,\theta)$ that minimize the cost functional

$$C(a,\theta) = \int_\Omega \big(y(T,x) - y_{\mathrm{obj}}(x)\big)^2\,dx + K_\theta \int_0^T \dot\theta(t)^2\,dt + K_a \int_0^T \dot a(t)^2\,dt, \tag{4}$$

where $y$ solves the state equation

$$\frac{\partial y}{\partial t}(t,x) = \frac{\partial^2 y}{\partial x^2}(t,x) + a(t)\,y\,(1-y)\,\big(\theta(t)-y\big), \qquad (t,x)\in(0,T)\times\Omega, \tag{5}$$

with Neumann boundary conditions $\partial y/\partial x = 0$ on $\partial\Omega$ and initial condition $y(0,\cdot)=y_0$, and the control satisfies

$$a_{\min}\le a(t)\le a_{\max}, \qquad \theta_{\min}\le \theta(t)\le \theta_{\max}, \qquad a(0)=a_0,\ a(T)=a_f,\ \theta(0)=\theta_0,\ \theta(T)=\theta_f,$$

where $y_{\mathrm{obj}}$ is a desired traveling wave solution of the form (3), $a_0$ and $a_f$ are negative constants, and $\theta_0$ and $\theta_f$ are constants between $0$ and $1$.

To solve this problem numerically with AMPL and an optimization solver, we need to discretize the problem and transform it into a nonlinear optimization problem.

Let $N_t$ and $N_x$ be two positive integers. Define a subdivision of time $0=t_0<t_1<\cdots<t_{N_t}=T$ and a subdivision of space $-L=x_0<x_1<\cdots<x_{N_x}=L$. For any integers $0\le i\le N_t-1$ and $0\le j\le N_x-1$, let $t_{i+1}-t_i$ and $x_{j+1}-x_j$ be the time and space step sizes, respectively. Hence, we get a grid of points $(t_i,x_j)$ in the plane. Denote then by $y_{i,j}$ the value of $y$ at the grid point $(t_i,x_j)$. Without loss of generality, we assume that the subdivisions are uniform, i.e., all the time (resp. space) intervals are equal, and we denote the time step by $\Delta t = T/N_t$ and the space step by $\Delta x = 2L/N_x$.

We can now write the discrete version of the cost functional (4) as

$$C_d = \sum_{j=0}^{N_x}\big(y_{N_t,j}-y_{\mathrm{obj}}(x_j)\big)^2 + K_\theta\sum_{i=1}^{N_t}(\theta_i-\theta_{i-1})^2 + K_a\sum_{i=1}^{N_t}(a_i-a_{i-1})^2. \tag{6}$$
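As a cross-check of the discretization, the discrete cost (6) is straightforward to transcribe outside AMPL. The helper below is a hypothetical NumPy version (the names `discrete_cost`, `y_final` are ours, not from the AMPL model):

```python
import numpy as np

def discrete_cost(y_final, y_obj, a, theta, Ka=0.01, Ktheta=0.01):
    """Discrete cost (6): terminal misfit plus penalties on control variation.

    y_final : state y at the final time, shape (Nx+1,)
    y_obj   : target traveling-wave profile sampled on the grid, shape (Nx+1,)
    a, theta: control sequences over time, shape (Nt+1,)
    """
    misfit = np.sum((y_final - y_obj) ** 2)
    var_theta = Ktheta * np.sum(np.diff(theta) ** 2)
    var_a = Ka * np.sum(np.diff(a) ** 2)
    return misfit + var_theta + var_a

# Example: a perfect terminal match with constant controls gives cost 0.
y_final = np.linspace(0.0, 1.0, 51)
print(discrete_cost(y_final, y_final, np.full(501, -1.0), np.full(501, 0.7)))
# -> 0.0
```

Note that the variation penalties vanish exactly for constant-in-time controls, so they only charge for how fast the controls move.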

Then, we need to discretize the state equation (5). For the stability of the numerical calculations, we use an implicit finite difference scheme. Recall that the basic idea of finite difference schemes is to replace derivatives by finite differences. Here, we use the Crank-Nicolson scheme, which is second-order accurate, and thus we approximate equation (5) by

$$\frac{y_{i+1,j}-y_{i,j}}{\Delta t} = \frac{1}{2}\,\frac{y_{i+1,j+1}-2y_{i+1,j}+y_{i+1,j-1} + y_{i,j+1}-2y_{i,j}+y_{i,j-1}}{\Delta x^2} + \frac{1}{2}\Big(a_{i+1}\,y_{i+1,j}(1-y_{i+1,j})(\theta_{i+1}-y_{i+1,j}) + a_i\,y_{i,j}(1-y_{i,j})(\theta_i-y_{i,j})\Big) \tag{7}$$

for $0\le i\le N_t-1$ and $1\le j\le N_x-1$.
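With the controls frozen in time, the scheme (7) can also be run as a forward solver, which is useful for validating the dynamical constraints before handing them to an optimizer. Below is a minimal NumPy sketch under that assumption; it lags the implicit reaction term inside a few Picard iterations (instead of the full Newton solve an NLP solver effectively performs) and uses the same discrete Neumann closure $y_{i,0}=y_{i,1}$, $y_{i,N_x}=y_{i,N_x-1}$ as the AMPL constraints below:

```python
import numpy as np

L, T = 30.0, 50.0
Nt, Nx = 500, 50
dt, dx = T / Nt, 2.0 * L / Nx
x = np.linspace(-L, L, Nx + 1)

a = np.full(Nt + 1, -1.0)      # control a(t), frozen at a0 for this test
theta = np.full(Nt + 1, 0.7)   # control theta(t), frozen at theta0

def f(y, ai, thi):
    return ai * y * (1.0 - y) * (thi - y)

# Discrete Laplacian on interior points; boundary rows stay zero and the
# Neumann closure y[0]=y[1], y[Nx]=y[Nx-1] is enforced after each solve.
Lap = np.zeros((Nx + 1, Nx + 1))
for j in range(1, Nx):
    Lap[j, j - 1:j + 2] = [1.0, -2.0, 1.0]
Lap /= dx**2

y = np.where(x <= 0.0, 0.0, 0.69)       # step initial data, as in the example
M = np.eye(Nx + 1) - 0.5 * dt * Lap     # implicit part of (7)
for n in range(Nt):
    rhs0 = y + 0.5 * dt * (Lap @ y + f(y, a[n], theta[n]))
    ynew = y.copy()
    for _ in range(5):                  # Picard iterations on the reaction term
        ynew = np.linalg.solve(M, rhs0 + 0.5 * dt * f(ynew, a[n + 1], theta[n + 1]))
        ynew[0], ynew[-1] = ynew[1], ynew[-2]
    y = ynew
print(y.min(), y.max())   # density started below the Allee threshold, so it decays
```

Since the initial density lies entirely below the threshold $\theta_0=0.7$, the uncontrolled population dies out, which is exactly the behavior the control problem is designed to change.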

The discretized optimal problem is then to minimize the discretized cost functional (6), such that the dynamical constraints (7), the initial conditions

$$y_{0,j} = y_0(x_j), \qquad 0\le j\le N_x,$$

the boundary conditions

$$y_{i,0} = y_{i,1}, \qquad y_{i,N_x} = y_{i,N_x-1}, \qquad 1\le i\le N_t,$$

and the control constraints

$$a_{\min}\le a_i\le a_{\max}, \qquad \theta_{\min}\le \theta_i\le \theta_{\max}, \qquad 0\le i\le N_t,$$

together with the endpoint conditions $a(t_0)=a_0$, $a(t_{N_t})=a_f$, $\theta(t_0)=\theta_0$, $\theta(t_{N_t})=\theta_f$, are all satisfied.

Now we have transformed the original optimal control problem into a nonlinear finite-dimensional optimization problem, in which the unknowns are the state and the control at each discretization point. We solve this problem by programming in the AMPL modeling language, combined with the IPOPT optimization solver. Before describing the AMPL program in detail, we give a numerical example.
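The same transcription can in principle be handed to any NLP solver. As a stripped-down illustration (not the AMPL/IPOPT program of this text), the sketch below freezes the controls to two scalars $(a,\theta)$, simulates the state with a simple explicit Euler scheme, and lets SciPy's derivative-free Nelder-Mead search reduce the terminal misfit; all names here are our own:

```python
import numpy as np
from scipy.optimize import minimize

L, T, Nx, Nt = 30.0, 50.0, 30, 100
dx, dt = 2.0 * L / Nx, T / Nt        # dt <= dx^2/2, so explicit Euler is stable
x = np.linspace(-L, L, Nx + 1)
y0 = np.where(x <= 0.0, 0.0, 0.69)   # step initial data, as in the example
y_obj = np.full(Nx + 1, 0.35)        # constant target, as in the AMPL data below

def terminal_misfit(u):
    """Simulate with constant controls u = (a, theta); return sum (y_T - y_obj)^2."""
    a, theta = u
    y = y0.copy()
    for _ in range(Nt):
        lap = np.zeros_like(y)
        lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
        y = y + dt * (lap + a * y * (1.0 - y) * (theta - y))
        y[0], y[-1] = y[1], y[-2]    # discrete Neumann conditions
        y = np.clip(y, 0.0, 1.0)     # keep the state in its box constraints
    return np.sum((y - y_obj) ** 2)

u0 = np.array([-1.0, 0.7])           # initial guess (a0, theta0)
res = minimize(terminal_misfit, u0, method="Nelder-Mead",
               options={"maxiter": 100, "xatol": 1e-3, "fatol": 1e-6})
print(res.fun <= terminal_misfit(u0))   # the search never returns a worse point
```

This single-shooting variant only optimizes two numbers, whereas the direct transcription above treats every grid value of the state and control as an unknown; the latter is what makes a large-scale interior point solver such as IPOPT the appropriate tool.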

##### A numerical example

Let $L=30$, $T=50$, and take the initial data to be a step function,

$$y_0(x) = \begin{cases} 0, & -L\le x\le 0,\\ \theta_0 - 0.01, & 0 < x \le L.\end{cases}$$

Let us set, moreover, $a_0=-1$, $a_f=-0.2$, $\theta_0=0.7$, $\theta_f=0.4$, and $K_a=K_\theta=0.01$. The optimal solution can then be obtained within , and the final cost is about , with its first term . The obtained optimal solution is illustrated in Figure 1. In the right subfigure, and .

##### AMPL code

In this section, we introduce how to solve the optimal control problem with AMPL. First, we need to define the parameters that will be used, for example the discretization sizes $N_t$ and $N_x$, the control bounds, etc.

```ampl
# Parameters of discretization
param Nt := 250*2;
param Nx := 50;
param L := 30;
param tf := 50;
param dx := 2*L/Nx;
param dt := tf/Nt;
param ymax := 0.6;
set kx ordered := 0..Nx;
set kt ordered := 0..Nt;
param x{j in kx};
let {j in kx} x[j] := -L+j*dx;
param t{i in kt};
let {i in kt} t[i] := i*dt;

# Parameters of control problem
param u1max := 5;    # a_max
param u1min := -5;   # a_min
param u2max := 1;    # tet_max
param u2min := 0;    # tet_min
param tet0 := 0.7;
param tetf := 0.4;
param a0 := -1;
param af := -0.2;
param Ktet := 0.01;
param Ka := 0.01;
param x1 := 0;

# Parameters of initial data
param A0 := sqrt(-a0);
param c0 := -A0*sqrt(2)*(1/2-tet0);
param y0 {j in kx};
let {j in 0..Nx/2} y0[j] := 0;
let {j in Nx/2+1..Nx} y0[j] := tet0-0.01;

# Parameters of desired solution
param Af := sqrt(-af);
param cf := -Af*sqrt(2)*(1/2-tetf);
param yobj{i in kx};
param x2 := x1+cf*Nt*dt;
let {j in kx} yobj[j] := 0.35;

# Parameter for initialization
# (0 => none; 1 => constant; 2 => init.txt)
param Init_Type := 1;
```

After defining all the parameters, we can now define the optimal control problem, including the cost functional and all constraints on the state and on the control.

```ampl
# Declare the variables and their bounds
var a{i in kt} >= u1min, <= u1max;
var tet{i in kt} >= u2min, <= u2max;
var y{i in kt, j in kx} >= 0, <= 1;

# Specify the objective function
minimize obj:
    sum{j in kx} (y[Nt,j] - yobj[j])^2
  + Ktet * sum{i in kt diff{0}} (tet[i]-tet[i-1])^2
  + Ka * sum{i in kt diff{0}} (a[i]-a[i-1])^2;
```

```ampl
# Constraints on the control
subject to c1: tet[0] - tet0 = 0;
subject to c2: a[0] - a0 = 0;
subject to c3: tet[Nt] - tetf = 0;
subject to c4: a[Nt] - af = 0;

# Initial data
subject to i1 {j in kx}: y[0,j] = y0[j];

# Dynamical constraints (Crank-Nicolson)
subject to d1 {i in kt diff{Nt}, j in kx diff{0,Nx}}:
    (y[i+1,j] - y[i,j])
  - 1/2*dt*( y[i+1,j+1] - 2*y[i+1,j] + y[i+1,j-1]
           + y[i,j+1] - 2*y[i,j] + y[i,j-1] )/dx^2
  - 1/2*dt*( a[i+1]*y[i+1,j]*(1 - y[i+1,j])*(tet[i+1] - y[i+1,j])
           + a[i]*y[i,j]*(1 - y[i,j])*(tet[i] - y[i,j]) ) = 0;

# Boundary constraints
subject to b1 {i in kt diff{0}}: y[i,1] - y[i,0] = 0;
subject to b2 {i in kt diff{0}}: y[i,Nx] - y[i,Nx-1] = 0;
```

```ampl
# Initialization of state and control variables
if (Init_Type == 1) then {
    let {i in kt} tet[i] := tetf;
    let {i in kt} a[i] := 0;
    for {i in kt diff{0}, j in kx} {
        let y[i,j] := ymax*exp(Af*(-L+j*dx-cf*i*dt)/sqrt(2))
                      / (1 + exp(Af*(-L+j*dx-cf*i*dt)/sqrt(2)));
    }
};

# Initialization from init.txt (only state and control)
if (Init_Type == 2) then {
    read {i in kt} (a[i], tet[i]) < init.txt;
    read {i in kt, j in kx} (y[i,j]) < init.txt;
};
```

Now that the optimal control problem is defined, we can finally solve it. Here we use the IPOPT solver, which implements a primal-dual interior point method. Recall that interior point methods are a class of algorithms for linear and nonlinear programming. Of course, one can also choose other appropriate optimization solvers.

```ampl
# tell ampl to use the ipopt executable as a solver
# make sure ipopt is in the path!
option solver ipopt;

# set solver options and solve the problem
option ipopt_options 'max_iter=1000 tol=1e-6';
solve;
```

To see the optimization result, one can print the results into text files, for example:

```ampl
# print the solution to out.txt
option display_precision 6;
printf {i in kt}: "%f %f\n", a[i], tet[i] > out.txt;
printf {i in kt, j in kx}: "%f\n", y[i,j] > out.txt;
```