User Interface for Specifying the Objective Function

Linear Quadratic Regulator

To avoid generating trajectories that stray too far from the initial trajectory, we can use a Linear Quadratic Regulator (LQR) cost as the objective. It has the form:

\[\begin{aligned} J = {} & \frac{1}{2} (x_N - x_{N, ref})^\top Q (x_N - x_{N, ref}) \\ & + \frac{1}{2} \sum_{n = 1}^{N-1} \left[ (x_n - x_{n, ref})^\top Q (x_n - x_{n, ref}) + (u_n - u_{n, ref})^\top R (u_n - u_{n, ref}) \right] \end{aligned}\]
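For concreteness, here is a minimal sketch of this cost evaluated directly in Julia. The helper lqrCost is hypothetical (it is not part of TrajOptSOCPs), and xs, us, xRefs, and uRefs are assumed to be vectors of state and control vectors.

using LinearAlgebra

# Hypothetical helper (not part of TrajOptSOCPs) that evaluates the LQR cost above.
# xs[n] and us[n] are the state and control at timestep n; xRefs and uRefs hold
# the reference trajectory.
function lqrCost(Q, R, xs, us, xRefs, uRefs)
    N = length(xs)
    dxN = xs[N] - xRefs[N]
    J = 0.5 * dxN' * Q * dxN                      # terminal state penalty
    for n in 1:N-1
        dx = xs[n] - xRefs[n]
        du = us[n] - uRefs[n]
        J += 0.5 * (dx' * Q * dx + du' * R * du)  # running state and control penalties
    end
    return J
end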

Currently the module only supports Q and R matrices that are the same at every timestep, though this may change in the future. The framework does, however, allow a QP that applies the Q matrix only at the final timestep and ignores it elsewhere.
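In that variant the running state penalty simply drops out. As a hypothetical sketch (again, not the package's implementation):

# Hypothetical terminal-only variant: Q penalizes only the final state, while R
# still penalizes the controls at every timestep.
function lqrTerminalCost(Q, R, xs, us, xRefs, uRefs)
    N = length(xs)
    dxN = xs[N] - xRefs[N]
    J = 0.5 * dxN' * Q * dxN
    for n in 1:N-1
        du = us[n] - uRefs[n]
        J += 0.5 * du' * R * du
    end
    return J
end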

In the example below, we build simple diagonal Q and R matrices.

using LinearAlgebra

# Q weights every state entry and R weights every control entry; here the
# control dimension is half the state dimension.
lqrQMat = 0.0001 * Diagonal(I, size(rocketStart, 1))
lqrRMat = 0.0025 * Diagonal(I, size(rocketStart, 1) ÷ 2)
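These lines assume rocketStart was defined earlier in the documentation. For reference, a hypothetical setup consistent with the output shown below (a 4-dimensional state and a 2-dimensional control) would be:

# Hypothetical setup, for illustration only. The 364-entry diagonal shown below
# corresponds to 60 blocks of 4 states + 2 controls, plus 4 terminal states.
rocketStart = [2.0; 20.0; 0.0; 0.0]  # first two entries match the reference vector below
rocketEnd   = zeros(4)               # assumed target state
uHover      = zeros(2)               # assumed nominal hover control
NSteps      = 60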

We use the initial trajectory as the reference trajectory.

# Build the initial trajectory, which also serves as the reference.
initTraj = initializeTraj(rocketStart, rocketEnd, uHover, uHover, NSteps)

# Build the trajectory-referenced LQR cost.
costFun = makeLQR_TrajReferenced(lqrQMat, lqrRMat, NSteps, initTraj)
LQR_QP_Referenced(TrajOptSOCPs.LQR_QP(
  [1  ,   1]  =  0.0001
  [2  ,   2]  =  0.0001
  [3  ,   3]  =  0.0001
  [4  ,   4]  =  0.0001
  [5  ,   5]  =  0.0025
  [6  ,   6]  =  0.0025
  [7  ,   7]  =  0.0001
  [8  ,   8]  =  0.0001
  [9  ,   9]  =  0.0001
  ⋮
  [356, 356]  =  0.0001
  [357, 357]  =  0.0001
  [358, 358]  =  0.0001
  [359, 359]  =  0.0025
  [360, 360]  =  0.0025
  [361, 361]  =  0.0001
  [362, 362]  =  0.0001
  [363, 363]  =  0.0001
  [364, 364]  =  0.0001), [2.0; 20.0; … ; 0.0; 0.0])
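The printed matrix interleaves the Q weights (for the states) and the R weights (for the controls) along the diagonal, ending with a final Q block for the terminal state. As a sketch of that structure using only standard-library calls (this is not the package's internal construction):

using SparseArrays

# Rebuild the diagonal pattern printed above: 60 blocks of [4 state weights,
# 2 control weights], followed by 4 terminal state weights (364 entries total).
nx, nu, NSteps = 4, 2, 60
blockDiag = vcat(fill(0.0001, nx), fill(0.0025, nu))
fullDiag  = vcat(repeat(blockDiag, NSteps), fill(0.0001, nx))
costMat   = spdiagm(0 => fullDiag)  # 364×364 sparse diagonal matrix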

For more information, see