On your attempt
It is not clear to me how Stolz's theorem (I'll call this the Cesaro-Stolz theorem, or CS theorem, or just CS from now on) was used. The proof is correct right up to the estimates for $b_n$, but after that point I'm lost, because I don't know where $\sqrt n$ has come from, and where $\sqrt 3$ has come from. Thus, I will borrow the effort you have put into the bounds, but beyond that I would need an explanation from you. Either way, you do seem to have derived seemingly correct upper and lower bounds, since your bounds correspond exactly to the cases $\lambda=0$ and $\lambda = 1$.
But I'm not going to ask for further details, because I can see that the problem is simple. The bounds don't quite take $\lambda$ into account; somewhere, the information about $\lambda$ is lost. In fact, that place is quite easy to locate as well : the minute you bring in $\lambda a_n + (1-\lambda)a_{n-1} \leq \max\{a_n,a_{n-1}\}$, you have lost the effect that $\lambda$ brings to the recurrence. That's why you can only estimate what happens when $\lambda$ takes on the extreme values.
What you need is a better inequality, one that is capable of hitting the mark better and doesn't forget $\lambda$. So maximums are out when it comes to the fine limit estimation we are doing.
For now, though, I'll begin the entire problem all over, and cover it from scratch.
Introduction
First of all, what are we expecting to see from $a_n$? Let's forget the fact that we actually know what happens to it (basically, let's forget that we know what happens at the end), and let's try to guess it instead.
We will use the fact that $\sin x <x$ for $x \in (0,1)$. This is easily derived using the mean value theorem.
Note that if there is to be a limit for $a_n$, then it has to be zero, since $0 \leq a_n \leq 1$ for all $n$ (proven by an obvious induction), and taking limits on both sides of the recurrence gives that a candidate limit $L$ must satisfy $L = \sin L$, whose only solution is $L=0$.
Using your work, we get that $b_n = \max\{a_{n+3},a_{n+2}\}$ satisfies $b_{n+1} \leq \sin \sin b_n$. Since $\sin \sin b_n \leq b_n$, the sequence $b_n$ is decreasing and bounded below by $0$, hence convergent; letting $n \to \infty$ in the inequality shows the limit cannot be greater than $0$. So $b_n \to 0$, and hence $a_n \to 0$ as well, since $0 \leq a_n \leq b_{n-2}$ for all $n$, and we can use the squeeze theorem.
Thus, $a_n \to 0$.
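(As a quick sanity check, and no part of the proof, one can iterate the recurrence numerically and watch the sequence die down to $0$. The starting values $a_1 = a_2 = 1$ and the choice $\lambda = 0.5$ below are illustrative assumptions only.)

```python
import math

def iterate(lam, n_terms, a1=1.0, a2=1.0):
    # a_{n+2} = sin(lam * a_n + (1 - lam) * a_{n+1}), starting from a1, a2
    a = [a1, a2]
    for _ in range(n_terms - 2):
        a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))
    return a

a = iterate(0.5, 10000)
print(a[-1])  # small and positive
```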
At this point, let me also remark that there is a link between the recurrences for $\lambda=0$ and $\lambda=1$. Indeed, note that for the $\lambda=1$ case we have $a_{n+2} = \sin(a_n)$, which, after rewriting as $a_{2n+2} = \sin(a_{2n})$ and setting $b_n = a_{2n}$, gives $b_{n+1} = \sin(b_n)$ — the $\lambda=0$ case. Therefore, if, according to the $\lambda=0$ case, we have $\sqrt{n}\,b_n \to C$, then $\sqrt{n}\,a_{2n} \to C$, so $\sqrt{2n}\,a_{2n} \to \sqrt{2}C$. In other words, the answer for the case $\lambda=1$ is exactly $\sqrt 2$ times the answer for the case $\lambda = 0$.
Thus, we now only assume $0 \leq \lambda <1$, and leave out $\lambda = 1$.
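(Numerically, the $\sqrt 2$ relationship between the two extreme cases is easy to observe. This is only an illustrative check — the starting values $a_1 = a_2 = 1$ are an arbitrary assumption.)

```python
import math

def seq(lam, n_terms, a1=1.0, a2=1.0):
    # iterate a_{n+2} = sin(lam * a_n + (1 - lam) * a_{n+1})
    a = [a1, a2]
    for _ in range(n_terms - 2):
        a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))
    return a

N = 100000
r0 = math.sqrt(N) * seq(0.0, N)[-1]  # lambda = 0 case
r1 = math.sqrt(N) * seq(1.0, N)[-1]  # lambda = 1 case
print(r0, r1, r1 / r0)  # the ratio is close to sqrt(2)
```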
How do we plan to get the asymptotics for $a_n$?
Well, there are many ways of guessing the asymptotics. Various kinds of "ansatz" (assumptions made from generally observed patterns) can be tried, and we can use Cesaro-Stolz freely (since any kind of limit, including an infinite one, is admissible under the theorem) to see when the limit is finite.
However, instead of trying out obvious ansatzes, we turn to the Taylor series of the sine. Why? Well, if we are trying to understand the asymptotics, then the bound we used for $a_n$ is not good enough, because it goes only one derivative deep into the sine. We need to go deeper into the expansion, because to apply a Cesaro-Stolz type argument we effectively need insight into differences of the $a_{n}$, in which first-order terms cancel and lower-order terms are exposed (as far as a Taylor expansion is concerned). So we need to get hold of those lower-order terms!
This is achieved by the following standard result, obtained either from the Taylor series or from L'Hopital's rule :
$$
\lim_{x \to 0} \frac{\sin(x)- x}{x^3} =-\frac 16
$$
which quantifies, up to the third order, the behaviour of the sine. Now, we use this limit with the sequence $\lambda a_{n} + (1-\lambda)a_{n+1}$ (which converges to $0$ as $n \to \infty$) to get :
$$
\lim_{n \to \infty} \frac{a_{n+2} - \lambda a_n - (1-\lambda)a_{n+1}}{ (\lambda a_n + (1-\lambda)a_{n+1})^3} = -\frac 16
$$
we also use the result $\frac{\sin x}{x} \to 1$ as $x\to 0$ with the same sequence to get
$$
\lim_{n \to \infty} \frac{a_{n+2}}{(\lambda a_n + (1-\lambda)a_{n+1})} = 1
$$
which coupled with the earlier limit gives us :
$$
\lim_{n \to \infty} \frac{a_{n+2} - \lambda a_n - (1-\lambda)a_{n+1}}{a_{n+2}^3} =-\frac 16
$$
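(Here is a quick numerical illustration of the last limit — not a proof. The values $\lambda = 0.4$ and $a_1 = a_2 = 1$ are my own illustrative choices.)

```python
import math

lam = 0.4  # illustrative choice
a = [1.0, 1.0]  # a_1 = a_2 = 1, an illustrative assumption
for _ in range(20000):
    a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))

# take the last available triple (a_n, a_{n+1}, a_{n+2})
n = len(a) - 3
ratio = (a[n + 2] - lam * a[n] - (1 - lam) * a[n + 1]) / a[n + 2] ** 3
print(ratio)  # close to -1/6
```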
I isolate the paragraph below for its lack of rigour, but also because it contains the brilliant jump we are going to make. I use $\sim$ instead of $=$, $\approx$ etc. to denote "vaguely similar/equal to" in what follows, because I'm scared of making proper mathematical statements at this point. Nothing below is rigorous.
Now let's be unrigorous, and assume that $a_{n}$ is extended to a function $a_x \sim f(x)$ for some very nice function $f$. Note that by the intermediate value theorem, $\lambda f(n) + (1-\lambda) f(n+1)$ will be $f(n+h)$ for some $0 \leq h \leq 1$. Then the ratio above (after neglecting $\lambda$ as a constant, using some scaling) looks "like" $\frac{f(x) - f(x-h)}{f(x)^3}$ (where $x=n+2$, so we get $h>0$), which we expect to behave like $\frac{hf'(x)}{f(x)^3}$ (from the estimate $f(x) - f(x-h) \sim hf'(x)$). Therefore, $\frac{hf'(x)}{f(x)^3} \sim -\frac 16$. Rewrite this as $\frac{-f'(x)}{f(x)^3} \sim C$ for some constant $C>0$, having put all constants to one side. So (super-unrigorous alert!) :
$$
\frac{-df}{dx} \sim C f^3 \implies \frac{-df}{f^3} \sim Cdx \implies \int \frac{-df}{f^3} \sim C \int dx \\ \implies f^{-2} \sim Cx \implies f \sim \frac 1{\sqrt{C x}}
$$
and so we get $f(x) \sim \frac{C'}{\sqrt x}$ for some constant $C'$. So $a_x \sim \frac{C'}{\sqrt x}$ is expected!
The above paragraph, along with the previously obtained estimates, tells us that the correct order of decay for $a_n$ is $\frac 1{\sqrt n}$. In other words, the correct quantity expected to have a non-zero limit is $\sqrt n\, a_n$.
Developing the intuition for such manipulations is very important. Using the approximation of the mean value theorem to connect a differential equation with a recurrence, is in particular a magnificent idea by itself, and helps not just with asymptotics of deterministic sequences, but even random variables.
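(The prediction is easy to test numerically before we prove anything : $n a_n^2$ should flatten out to some $\lambda$-dependent constant. The choice $\lambda = 0.3$ and the starting values below are illustrative assumptions.)

```python
import math

def seq(lam, n_terms, a1=1.0, a2=1.0):
    # iterate a_{n+2} = sin(lam * a_n + (1 - lam) * a_{n+1})
    a = [a1, a2]
    for _ in range(n_terms - 2):
        a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))
    return a

lam = 0.3
vals = []
for N in [1000, 10000, 100000]:
    a = seq(lam, N)
    vals.append(N * a[-1] ** 2)
    print(N, vals[-1])  # stabilizes as N grows
```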
The quotient limit, and a new technique for evaluating limits
We want to get our act together. So when are we going to do CS?
Well, we need to do some preparation. For starters, let's get something useful out of the limit $$
\lim_{n \to \infty} \frac{a_{n+2} - \lambda a_n - (1-\lambda)a_{n+1}}{a_{n+2}^3} = - \frac 16
$$
What we do, is multiply this by the convergent sequence $a_{n+2}^2$, which we know goes to zero. Then we get :
$$
\lim_{n \to \infty} \left(1 - \lambda \frac{a_n}{a_{n+2}} - (1-\lambda)\frac{a_{n+1}}{a_{n+2}}\right) = 0
$$
Now, suppose that $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = L$ exists. Then it must solve $1-\lambda L^2 - (1-\lambda)L=0$ by the above (note that $\frac{a_n}{a_{n+2}} = \frac{a_n}{a_{n+1}}\cdot\frac{a_{n+1}}{a_{n+2}}$, which is where $L^2$ comes from). For $\lambda>0$ this has two roots, one negative — which is ruled out, since the $a_n$ are positive — and $L=1$; for $\lambda=0$ the equation is linear with the single root $L=1$.
We need to prove that the limit exists, of course.
Claim : $\lim_{n \to \infty} \frac{a_{n}}{a_{n+1}} =1$.
This proof, by itself, is one of the most wonderful and rewarding pieces of mathematics that you can read. Warning : it's not easy. Not one bit. Took long, long to figure out! I'm going to dedicate a section to it.
Proving that $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$.
Proof : First, let me go through the intuition. Let $d_n = \frac{a_{n+1}}{a_n}$. (It's the reciprocal of what we want, so we'll show that $d_n\to 1$ and we are done!) Recall the facts $\frac{\sin x}{x} \to 1$ as $x \to 0$, and $\sin x < x$ for all $x > 0$. The second fact gives $a_{n+2} \leq \lambda a_n + (1-\lambda)a_{n+1}$, so dividing by $a_{n+1}$ yields $\frac{a_{n+2}}{a_{n+1}} \leq \lambda\frac{a_{n}}{a_{n+1}} + (1-\lambda)$ for all $n$. Rewriting this in terms of the $d_n$ gives $d_{n+1} \leq \frac{\lambda}{d_n} + (1-\lambda)$.
Now, we also know that $a_n \to 0$, and hence $\frac{\sin (\lambda a_n + (1-\lambda)a_{n+1})}{\lambda a_n + (1-\lambda)a_{n+1}} \to 1$. This, in particular, means that for all $C<1$ there is an $M(C)$ such that $n>M(C)$ implies $a_{n+2} > C(\lambda a_n + (1-\lambda)a_{n+1})$, and therefore $\frac{a_{n+2}}{a_{n+1}} > C\lambda \frac{a_{n}}{a_{n+1}} + C(1-\lambda)$ for $n>M(C)$. Putting $d_n$ in its place we get :
$$
d_{n+1} > \frac{C\lambda}{d_n} + C(1-\lambda) \quad \forall\ n>M(C)
$$
Combining, for all $C<1$ and $n>M(C)$ the two bounds read :
$$
C(1-\lambda) + \frac{C\lambda}{d_n} \leq d_{n+1} \leq \frac{\lambda}{d_n} + (1-\lambda)
$$
But how can we manipulate this information? Well, we do it using the concept of a "squeeze" sequence.
Basically, we create a sequence out of the bounds specified by making those bounds equal, and then comparing the emerging sequence with $d_n$. Then, if we can say something about the sequence for which equality is attained , we may be able to say something about $d_n$ itself! What a beautiful, beautiful technique.
Let's see how it is done. First, fix an arbitrary $C<1$. Then we have $M(C)$ given by the limit statement earlier. We define a sequence $e_n$ as follows : $e_{m} = d_{m}$ for $m=1,2,\ldots,2M(C)$ ($2M(C)$ is just a convenient even index above $M(C)$), and for $m > 2M(C)$, either $m = 2k+1$ or $m = 2k+2$ for some $k \geq M(C)$, so we define :
$$
e_{2k+1} = C (1-\lambda) + \frac{C\lambda}{e_{2k}} \\
e_{2k+2} = (1-\lambda) + \frac {\lambda}{e_{2k+1}}
$$
Now, what do we expect from $e_{k}$? We expect $e_k$ to squeeze $d_k$, one parity from above and the other from below. Indeed, we'll prove that $e_{2k+1} \leq d_{2k+1}$ and $e_{2k+2} \geq d_{2k+2}$ for all $k$ in range (the base case holds since $e_{2M(C)} = d_{2M(C)}$). This is simple : using the two bounds on $d_{n+1}$ displayed above, note that $$
e_{2k} \geq d_{2k} \implies e_{2k+1} = C(1-\lambda) + \frac{C\lambda}{e_{2k}} \leq C(1-\lambda) + \frac{C\lambda}{d_{2k}} < d_{2k+1} \\
e_{2k+1} \leq d_{2k+1} \implies e_{2k+2} = (1-\lambda) + \frac{\lambda}{e_{2k+1}} \geq (1-\lambda) + \frac{\lambda}{d_{2k+1}} \geq d_{2k+2}
$$
and we are done by induction. But what can we say about $e_n$? Well, take the even-odd subsequences $e_{2n},e_{2n+1}$ which were discussed above. These subsequences form recurrences by themselves. We see from some simple algebra that:
$$
e_{2k+2} = (1-\lambda) +\frac{\lambda e_{2k}}{C(1-\lambda)e_{2k} + C\lambda}
$$
and
$$
e_{2k+1} = C(1-\lambda) + \frac{C\lambda e_{2k-1}}{e_{2k-1}(1-\lambda) + \lambda}
$$
Where do we expect $e_{2k+2}$ and $e_{2k+1}$ to converge? To some roots of the following two equations respectively ($e$ for even, $o$ for odd) : $$
L_{e} = (1-\lambda) +\frac{\lambda L_e}{C(1-\lambda)L_e + C\lambda}\quad ; \quad L_{o} = C(1-\lambda) + \frac{C\lambda L_o}{L_o(1-\lambda) + \lambda}
$$
(We now write $L_e = L_e(C)$ and $L_o = L_o(C)$ to indicate the dependence on $C$ explicitly.) We can rule out negative roots by showing that $e_n > 0$, by induction or otherwise. Clearing denominators turns each equation into a quadratic; by Vieta's formulas (the product of the roots is negative) and the intermediate value theorem, each quadratic has one positive and one negative root, so $L_e(C)$ and $L_o(C)$ are the respective positive roots. At $C = 1$ the two equations coincide, and their positive root is $L = 1$.
Now, how do we show convergence of $e_{2n}$ and $e_{2n+1}$? These are iterated mappings, in the sense that $e_{2n+2} = f(e_{2n})$ and $e_{2n+1} = g(e_{2n-1})$ for maps $f,g$ that you can easily write down. Furthermore, $f$ and $g$ are special : they are of the form $x \mapsto \frac{Ax+B}{Dx+E}$ for constants $A,B,D,E$ (Möbius maps), and such one-dimensional recurrences can be solved explicitly by lifting them to two-dimensional linear recurrences. I'm not going to show the details, but you can prove the following : $e_{2n} \to L_e(C)$ and $e_{2n+1} \to L_o(C)$, and both $L_e(C) \to 1$ and $L_o(C) \to 1$ as $C \to 1$.
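(The fixed-point claims can be corroborated numerically. Below is a sketch under the illustrative assumption $\lambda = 0.5$ : I solve the quadratic for $L_e(C)$ obtained by clearing denominators in its fixed-point equation, and also iterate the composed even-index map directly; the two agree, and both tend to $1$ as $C \to 1$.)

```python
import math

lam = 0.5  # illustrative choice

def L_e(C):
    # positive root of the quadratic obtained by clearing denominators in
    # L = (1 - lam) + lam * L / (C*(1 - lam)*L + C*lam)
    a = C * (1 - lam)
    b = C * lam - C * (1 - lam) ** 2 - lam
    c = -C * lam * (1 - lam)
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def iterate_even(C, steps=200, e0=1.0):
    # iterate the composed even-index map e_{2k+2} = f(e_{2k})
    e = e0
    for _ in range(steps):
        e = (1 - lam) + lam * e / (C * (1 - lam) * e + C * lam)
    return e

for C in [0.9, 0.99, 0.999]:
    print(C, L_e(C), iterate_even(C))  # the two columns agree, and -> 1 as C -> 1
```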
Now, let's go back to $d_n$. We know that $d_{2n} \leq e_{2n}$ and $d_{2n+1} \geq e_{2n+1}$ beyond the index $2M(C)$, so $\limsup d_{2n} \leq L_e(C)$ and $\liminf d_{2n+1} \geq L_o(C)$. Repeating the construction with the parities swapped (start the squeeze sequence at an odd index, so the even steps use the $C$-map and the odd steps the plain map) gives the complementary bounds on $\liminf d_{2n}$ and $\limsup d_{2n+1}$, with analogous fixed points that also tend to $1$. Since $C<1$ was arbitrary, sliding $C$ to $1$ gives $\lim_{n \to \infty} d_n = 1$.
So $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = 1$.
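(A numerical illustration of the claim, under the illustrative assumptions $\lambda = 0.5$ and $a_1 = a_2 = 1$ :)

```python
import math

def seq(lam, n_terms, a1=1.0, a2=1.0):
    # iterate a_{n+2} = sin(lam * a_n + (1 - lam) * a_{n+1})
    a = [a1, a2]
    for _ in range(n_terms - 2):
        a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))
    return a

a = seq(0.5, 100000)
ratio = a[-1] / a[-2]  # d_n = a_{n+1}/a_n for large n
print(ratio)  # close to 1
```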
For beginning CS
Now, we make a very, very nice observation for CS.
$$
a_{n+2} - (\lambda a_n + (1-\lambda) a_{n+1}) = a_{n+2}+\lambda a_{n+1} - (a_{n+1} + \lambda a_n)
$$
is actually the difference of consecutive terms of a sequence. Indeed, set $b_n = a_{n+2} + \lambda a_{n+1}$; then $a_{n+2} - (\lambda a_n + (1-\lambda)a_{n+1}) = b_{n} -b_{n-1}$.
We will find $\lim_{n \to \infty} \sqrt{n} b_n$. For this, we note that it is sufficient to compute $\lim_{n \to \infty} \frac 1{n b_n^2}$ (and show it's finite and non-zero). Writing this as $\frac {1/b_n^2}{n}$, CS will give a limit for it, provided we find a limit for $\frac 1{b_n^2} - \frac 1{b_{n-1}^2}$. So let's do that!
The intermediate CS computation
So we have to find $$
\lim_{n \to \infty} \left(\frac 1{b_n^2} - \frac 1{b_{n-1}^2}\right) = \lim_{n \to \infty} \frac {b_{n-1}^2 - b_n^2}{b_{n}^2b_{n-1}^2}
$$
Let's use what we know. We know that $\frac{b_n - b_{n-1}}{a_{n+2}^3}$ has the limit $-\frac 16$. Note that $a_{n+2} = b_n - \lambda a_{n+1}$. Furthermore, we may factor the numerator as a difference of squares. All this, and a little grouping in anticipation of finite limits, leads to :
$$
\frac {b_{n-1}^2 - b_n^2}{b_{n}^2b_{n-1}^2} = -\left(\frac{b_n - b_{n-1}}{(b_n - \lambda a_{n+1})^3}\right) \left(\frac{a_{n+2}^3}{b_n^3}\right) \left(\frac {b_n + b_{n-1}}{b_{n-1}}\right)\left(\frac{b_n}{b_{n-1}}\right)
$$
(Note the $-$ sign in front of the RHS.) Great! Let's call these factors $(1),(2),(3),(4)$ in order of appearance. Limit $(1)$ is $-\frac 16$. Limit $(2)$ is :
$$
\frac{a_{n+2}^3}{b_n^3} = \frac{1}{\left(1 + \lambda \frac{a_{n+1}}{a_{n+2}}\right)^3} \to \frac {1}{(1+\lambda)^3}
$$
I leave you to show that limit $(3)$ is $2$ and limit $(4)$ is $1$. Just like for limit $(2)$, divide top and bottom by a suitable $a_n$ and use the ratio limit $\frac{a_{n+1}}{a_n} \to 1$.
With all that, we get :
$$
\lim_{n \to \infty} \left(\frac 1{b_n^2} - \frac 1{b_{n-1}^2}\right) = \frac 1{3(1+\lambda)^3}
$$
It follows from Cesaro-Stolz that $\lim_{n \to \infty} \frac 1{nb_n^2} = \frac 1{3(1+\lambda)^3}$.
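(This Cesaro-Stolz conclusion can be sanity-checked numerically; $\lambda = 0.5$ and the starting values are illustrative assumptions.)

```python
import math

lam = 0.5  # illustrative choice

a = [1.0, 1.0]  # a_1 = a_2 = 1, an illustrative assumption
N = 100000
for _ in range(N - 2):
    a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))

n = N - 2                    # so that a_{n+2} = a_N and a_{n+1} = a_{N-1}
b_n = a[-1] + lam * a[-2]    # b_n = a_{n+2} + lam * a_{n+1}
print(n * b_n ** 2, 3 * (1 + lam) ** 3)  # the two values are close
```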
Finishing the proof
Taking reciprocals, the above yields $nb_n^2 \to 3(1+\lambda)^3$.
Now, note that $na_n^2 = nb_n^2 \frac{a_n^2}{b_n^2}$. So it's enough to prove that $\frac{a_n^2}{b_n^2}$ converges to a limit as $n \to \infty$. What is that limit going to be? Well, we do the usual :
$$
\frac{a_n^2}{b_n^2} = \frac{1}{(\frac{a_{n+2}}{a_n} + \lambda\frac{a_{n+1}}{a_n})^2} \to \frac 1{(1+\lambda)^2}
$$
and done! Hence, we get $na_n^2 \to \frac{3(1+\lambda)^3}{(1+\lambda)^2} = 3(1+\lambda)$, and taking the square root, the answer is $\lim_{n \to \infty} \sqrt n\, a_n = \sqrt{3(1+\lambda)}$.
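(Finally, the closed form is easy to corroborate numerically across several values of $\lambda$; the starting values $a_1 = a_2 = 1$ are an arbitrary illustrative choice.)

```python
import math

def seq(lam, n_terms, a1=1.0, a2=1.0):
    # iterate a_{n+2} = sin(lam * a_n + (1 - lam) * a_{n+1})
    a = [a1, a2]
    for _ in range(n_terms - 2):
        a.append(math.sin(lam * a[-2] + (1 - lam) * a[-1]))
    return a

N = 100000
pairs = []
for lam in [0.0, 0.25, 0.5, 0.75]:
    empirical = math.sqrt(N) * seq(lam, N)[-1]
    predicted = math.sqrt(3 * (1 + lam))
    pairs.append((empirical, predicted))
    print(lam, empirical, predicted)  # the two columns agree
```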
Key points
Convergence analysis of the sequence itself, using weak bounds for the RHS.
Ratio analysis using a "squeeze" sequence, along with standard facts for linear recurrences.
Using the Taylor series to constrain the behaviour of the right hand side, and hence the sequence.
Using the notion of a "connected" differential equation to link asymptotics of recurrence relations to asymptotics of functions.
Cleverly fitting in Cesaro-Stolz by using the intermediate sequence $b_n$, together with quotient limits.