Optimal control. A one-dimensional dynamic process is governed by a difference equation
$x(k + 1) = \phi(x(k), u(k), k)$
with initial condition $x(0) = x_0$. In this equation the value $x(k)$ is called the state at step $k$ and $u(k)$ is the control at step $k$. Associated with this system is an objective function of the form $J=\sum_{k=0}^{N}\psi(x(k), u(k), k)$. In addition, there is a terminal constraint of the form
$g(x(N+1)) = 0$.
The problem is to find the sequence of controls $u(0), u(1), \ldots, u(N)$ and corresponding state values that minimize the objective function while satisfying the terminal constraint. Assuming all functions have continuous first partial derivatives and that the regularity condition is satisfied, show that associated with an optimal solution there are a sequence $\lambda(k)$, $k = 0, 1, \ldots, N$, and a scalar $\mu$ such that
$$\lambda(k-1) = \lambda(k)\phi_x(x(k), u(k), k) + \psi_x(x(k), u(k), k), \qquad k = 1, 2, \ldots, N,$$
$$\lambda(N) = \mu\, g_x(x(N+1)),$$
$$\psi_u(x(k), u(k), k) + \lambda(k)\phi_u(x(k), u(k), k) = 0, \qquad k = 0, 1, \ldots, N,$$
where the subscripts $x$ and $u$ denote gradients.
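For context, one standard route (a sketch, not necessarily the intended one) is to treat the dynamics at each step and the terminal condition as equality constraints, with multipliers $\lambda(k)$ for the dynamics and $\mu$ for the terminal constraint:

```latex
L = \sum_{k=0}^{N} \psi(x(k), u(k), k)
  + \sum_{k=0}^{N} \lambda(k)\bigl[\phi(x(k), u(k), k) - x(k+1)\bigr]
  + \mu\, g(x(N+1))
```

Setting $\partial L/\partial x(k) = 0$ for $k = 1, \ldots, N$ gives a backward recursion for $\lambda(k-1)$, setting $\partial L/\partial x(N+1) = 0$ gives the terminal condition on $\lambda(N)$, and setting $\partial L/\partial u(k) = 0$ for $k = 0, \ldots, N$ gives the stationarity condition in the control.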
Could anyone help me, please? I tried to use the first-order and second-order conditions for an equality-constrained problem.
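As a numerical sanity check, here is a toy instance of my own (not from the exercise): $\phi(x,u,k) = x + u$, $\psi(x,u,k) = \tfrac{1}{2}u^2$, $g(x) = x - 1$, $x_0 = 0$, $N = 1$. Minimizing $\tfrac{1}{2}u(0)^2 + \tfrac{1}{2}u(1)^2$ subject to $u(0) + u(1) = 1$ gives $u(0) = u(1) = \tfrac{1}{2}$ (an equal split minimizes a sum of squares with a fixed sum), and the claimed conditions can be checked directly:

```python
# Toy instance (my own choice, not from the exercise):
# phi(x,u,k) = x + u, psi(x,u,k) = 0.5*u**2, g(x) = x - 1, x0 = 0, N = 1.
# Here phi_x = 1, phi_u = 1, psi_x = 0, psi_u = u, g_x = 1.

N = 1
x0 = 0.0
u = [0.5, 0.5]                      # optimal controls for this toy problem

# Forward pass: x(k+1) = phi(x(k), u(k), k) = x(k) + u(k)
x = [x0]
for k in range(N + 1):
    x.append(x[k] + u[k])

# Terminal constraint g(x(N+1)) = x(N+1) - 1 = 0
assert abs(x[N + 1] - 1.0) < 1e-12

# Multipliers: stationarity u(1) + lam(1) = 0 forces lam(1) = -0.5,
# and lam(N) = mu * g_x(x(N+1)) with g_x = 1 gives mu = -0.5.
mu = -0.5
lam = [0.0] * (N + 1)
lam[N] = mu * 1.0                   # lam(N) = mu * g_x(x(N+1))
for k in range(N, 0, -1):           # lam(k-1) = lam(k)*phi_x + psi_x
    lam[k - 1] = lam[k] * 1.0 + 0.0

# Stationarity residuals psi_u + lam(k)*phi_u, one per step
residuals = [u[k] + lam[k] * 1.0 for k in range(N + 1)]
print(residuals)  # prints [0.0, 0.0]
```

Both residuals vanish, consistent with the stated conditions on this instance.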