The following question is from an exam for the course Model Reduction. There are no worked answers available and I don't really know what steps to take. I'm looking for an expert who can show me how it's done.
The viscous Burgers’ equation
$$\frac{\partial w}{\partial t}+w \frac{\partial w}{\partial x}=2\frac{\partial^2 w}{\partial x^2}+e(x)u(t) \qquad \qquad (1)$$
represents the velocity $w(x,t)$ of an incompressible fluid at location $x \in [0, L]$ and time $t > 0$. Here, $e:[0, L] \rightarrow \mathbb{R}$ is an indicator function for the location at which an external force with input $u(t)$ is applied to the fluid flow. Solutions $w$ of $(1)$ are approximated by the finite expansions
$$w_r(x,t)=\sum_{k=1}^{r}a_k(t)\varphi_k(x)$$ where $\{\varphi_k \ | \ k=1,\dots,r\}$ is an orthonormal set of square integrable functions in the Hilbert space $L_2([0, L]).$
Derive expressions for the coefficients $a_k(t)$ such that $w_r$ is a solution of the Galerkin projection of $(1)$ on the space spanned by $\{\varphi_k \mid k = 1,\dots,r\}$.
Thanks very much in advance.
Ok, so I know that the Galerkin projection of the model requires two projections. One for the signal $w$ that solves $(1)$, which is approximated by $$w_r(x,t)=\sum_{k=1}^{r}a_k(t)\varphi_k(x),$$ and one for the residual of the partial differential equation, which has to be orthogonal to every basis function: $$\left \langle \frac{\partial w_r}{\partial t}+w_r \frac{\partial w_r}{\partial x}-2\frac{\partial^2 w_r}{\partial x^2}-e(x)u(t), \varphi_n \right \rangle = 0, \qquad n = 1,\dots,r.$$

Combining these projections means substituting $w_r$ in the residual. Note that the nonlinear term produces a double sum, because $w_r \frac{\partial w_r}{\partial x}=\Big(\sum_{k=1}^{r}a_k\varphi_k\Big)\Big(\sum_{l=1}^{r}a_l\varphi_l'\Big)$. This leads to: $$\left \langle \sum_{k=1}^{r}\dot{a}_k(t)\varphi_k(x)+\sum_{k=1}^{r}\sum_{l=1}^{r}a_k(t)a_l(t)\varphi_k(x)\varphi_l'(x) -2 \sum_{k=1}^{r}a_k(t)\varphi_k''(x)-e(x)u(t),\varphi_n \right \rangle = 0.$$

Splitting this bigger inner product by linearity, and pulling the sums and the time-dependent factors $\dot{a}_k(t)$, $a_k(t)a_l(t)$ and $u(t)$ out of the spatial inner products, results in: $$\sum_{k=1}^{r} \dot{a}_k(t) \langle \varphi_k, \varphi_n \rangle + \sum_{k=1}^{r}\sum_{l=1}^{r} a_k(t)a_l(t) \left \langle \varphi_k\varphi_l', \varphi_n \right \rangle - 2\sum_{k=1}^{r} a_k(t) \left \langle \varphi_k'', \varphi_n \right \rangle - \langle e, \varphi_n \rangle u(t) = 0.$$

The basis is orthonormal, so $\langle \varphi_k, \varphi_n \rangle = \delta_{kn}$ (equal to $1$ if $k = n$ and $0$ otherwise), and the first sum collapses to $\dot{a}_n(t)$. Solving for $\dot{a}_n(t)$ gives the reduced system of ODEs $$\dot{a}_n(t) = 2\sum_{k=1}^{r} a_k(t) \left \langle \varphi_k'', \varphi_n \right \rangle - \sum_{k=1}^{r}\sum_{l=1}^{r} a_k(t)a_l(t) \left \langle \varphi_k\varphi_l', \varphi_n \right \rangle + \langle e, \varphi_n \rangle u(t), \qquad n = 1,\dots,r,$$
which should be the Galerkin projection, I think.
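To sanity-check this system numerically, here is a minimal sketch in Python. Everything in it that is not stated in the exam is an assumption on my part: the sine basis $\varphi_k(x)=\sqrt{2/L}\sin(k\pi x/L)$ (orthonormal in $L_2([0,L])$ and consistent with homogeneous Dirichlet boundary conditions), the domain length $L$, the number of modes $r$, the forcing interval used for $e(x)$, and the input signal $u(t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed setup (not specified in the exam): domain length, number of modes,
# sine basis, forcing interval for e(x), and input u(t).
L, r = 1.0, 8
x = np.linspace(0.0, L, 2001)   # quadrature grid
dx = x[1] - x[0]

def phi(k, x):                  # phi_k(x) = sqrt(2/L) sin(k pi x / L), orthonormal on [0, L]
    return np.sqrt(2.0 / L) * np.sin(k * np.pi * x / L)

def dphi(k, x):                 # phi_k'(x)
    return np.sqrt(2.0 / L) * (k * np.pi / L) * np.cos(k * np.pi * x / L)

def ddphi(k, x):                # phi_k''(x)
    return -(k * np.pi / L) ** 2 * phi(k, x)

def inner(f, g):                # L2 inner product on [0, L] via the trapezoidal rule
    h = f * g
    return dx * (h.sum() - 0.5 * (h[0] + h[-1]))

e = ((0.2 <= x) & (x <= 0.3)).astype(float)   # indicator of an assumed forcing interval
u = lambda t: np.sin(2.0 * np.pi * t)         # assumed input signal

# Galerkin tensors: A[n, k] = 2 <phi_k'', phi_n>, N[n, k, l] = <phi_k phi_l', phi_n>,
# b[n] = <e, phi_n>; these are computed once, offline.
A = np.array([[2.0 * inner(ddphi(k + 1, x), phi(n + 1, x)) for k in range(r)]
              for n in range(r)])
N = np.array([[[inner(phi(k + 1, x) * dphi(l + 1, x), phi(n + 1, x))
                for l in range(r)] for k in range(r)] for n in range(r)])
b = np.array([inner(e, phi(n + 1, x)) for n in range(r)])

def rhs(t, a):
    # a_n' = 2 sum_k <phi_k'', phi_n> a_k - sum_{k,l} <phi_k phi_l', phi_n> a_k a_l + <e, phi_n> u(t)
    return A @ a - np.einsum('nkl,k,l->n', N, a, a) + b * u(t)

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(r))                         # a(0) = 0
w_r_end = sol.y[:, -1] @ np.array([phi(k + 1, x) for k in range(r)])  # w_r(x, t = 2)
```

The point of the sketch is that all spatial inner products are assembled once; after that the reduced model is just an $r$-dimensional ODE in the coefficients $a_k(t)$, which `solve_ivp` integrates directly.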