I don't know where you can find a good reference for this sort of thing, but I can present several such interesting solutions that might plausibly arise from calculus problems. I'm not sure this is exactly what you're looking for, because the methods are slightly peculiar - more in the spirit of "let's do this without derivatives" than "let's do this elegantly" - but there are still insights to be gleaned, though they're not the sort that give immediate gratification without some thought.
AM-GM by Symmetry:
Suppose we wished to minimize $f(x)=x+\frac{1}x$ for $x>0$. We could note that this has the symmetry
$$f(x)=f\left(\frac{1}x\right).$$
This, along with the fact that $f\left(\frac{x+y}2\right)< \frac{f(x)+f(y)}2$ for distinct $x,y>0$ (i.e. that $f$ is strictly convex there), which is easily proven algebraically, suffices to show that $f$ has a unique minimum point - but by the symmetry, if the minimum is at $x$, it is also at $\frac{1}x$. Thus, the only possible location of the minimum is where $x=\frac{1}x$. Similar things work for even functions, though there the manipulation is nearly trivial.
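Spelled out for $f(x)=x+\frac{1}x$ on $x>0$: the convexity claim is just
$$\frac{x+y}2+\frac{2}{x+y}<\frac{x+\frac1x+y+\frac1y}2\quad\text{for distinct }x,y>0,$$
which, after cancelling the $\frac{x+y}2$ on both sides and clearing denominators, reduces to $(x-y)^2>0$; the symmetry then pins the minimum at $x=\frac{1}x$, i.e. $x=1$, with minimum value $f(1)=2$, just as AM-GM predicts.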
In multivariable calculus, we can see much more interesting symmetries - for instance, the expression $a\cdot x$, for a constant vector $a$ and a vector $x$ constrained to the unit sphere, can be handled by noting that any reflection fixing $a$ also preserves $a\cdot x$ when applied to $x$; this, along with an argument that the maximum (and minimum) is unique, shows that the expression is maximized when $x$ points in the same direction as $a$ (and minimized when it points the opposite way). If your students might be familiar with such material, this is worth noting.
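To sketch how that goes: choose coordinates so that $a=(|a|,0,\ldots,0)$. Each reflection $x_i\mapsto -x_i$ for $i\geq 2$ fixes $a$, so (granting a unique maximizer) it must also fix the maximizing $x$, forcing $x_2=\cdots=x_n=0$; the only candidates on the unit sphere are then $x=\pm\frac{a}{|a|}$, and of these $x=\frac{a}{|a|}$ gives the maximum $a\cdot x=|a|$.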
Cubics by Double Roots:
In my experience of calculus, I recall frequently being asked word problems that reduced to
Minimize $f(x)=ax^3+bx^2+cx+d$.
I got tired of doing this with derivatives. What I came up with as an alternative was to consider that if we translate an extremum down to the $x$-axis, it becomes a double root of the polynomial. You could prove this rigorously, but that's beside the point - the important bit here is that double roots look like extrema. To be formal, if $x_0$ is an extremum of $f$, then we must be able to write $f$ in the form:
$$f(x)=a(x-x_0)^2(x-k)+f(x_0)$$
for some $k\in\mathbb R$. When we expand this, we get:
$$ax^3+bx^2+cx+d=ax^3+a(-k-2x_0)x^2+a(2kx_0+x_0^2)x-ax_0^2k+ax_0^3+bx_0^2+cx_0+d$$
Now, we can see that the leading term and constant term are going to be useless to us, so we equate the coefficients of $x^2$ and $x$ to get:
$$b=a(-k-2x_0)$$
$$c=a(2kx_0+x_0^2)$$
If we multiply the first equation by $2x_0$ and add it to the second equation, the $k$ terms cancel and we get:
$$2x_0b + c = ax_0^2 - 4ax_0^2$$
$$3ax_0^2 + 2x_0b + c = 0$$
Now we simply solve for $x_0$, and this tells us where we can write $f$ in the appropriate "double-root form". Notably, it happens that the left side of the equation is $f'(x_0)$, which is a nice connection, since we got there without differentiating.
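For a concrete (made-up) instance: take $f(x)=2x^3-9x^2+12x+1$. The equation above reads $6x_0^2-18x_0+12=0$, giving $x_0=1$ or $x_0=2$, and using $b=a(-k-2x_0)$ to recover $k$ puts $f$ in double-root form at each point:
$$f(x)=2(x-1)^2\left(x-\tfrac52\right)+6,\qquad f(x)=2(x-2)^2\left(x-\tfrac12\right)+5.$$
Near $x=1$ the factor $x-\tfrac52$ is negative, so $f(x)\leq 6$ there and $x=1$ is a local maximum; near $x=2$ the factor $x-\tfrac12$ is positive, so $x=2$ is a local minimum - no derivative required.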
You could generalize this to any degree polynomial, but if you use this method on a polynomial of degree $n$, you end up trying to solve:
$$f(x)=a(x-x_0)^2(x-k_1)(x-k_2)\ldots(x-k_{n-2})+f(x_0)$$
which, after expanding and identifying coefficients of $x^i$, will yield $n-1$ simultaneous equations; after eliminating the irrelevant $k_i$, you'll be left with something equivalent to $f'(x_0)=0$ - so derivatives probably are easier.
Square Roots by An Explicit Upper Bound:
Usefully, we can prove the bound $\sqrt{x+a}\leq \sqrt{x}+\frac{a}{2\sqrt{x}}$ (valid for $x>0$ and $x+a\geq 0$) without calculus: simply notice that $$x+a \leq x+a+\frac{a^2}{4x}=\left(\sqrt{x}+\frac{a}{2\sqrt{x}}\right)^2$$
and take the square root of both sides. Using this would allow you to optimize something like
$$f(x)=\sqrt{x}+\alpha\sqrt{1-x}$$
by noting that the bound (applied once with $a=x_0$ at the point $x$, and once with $a=-x_0$ at the point $1-x$) implies
$$f(x+x_0)\leq f(x)+x_0\left(\frac{1}{2\sqrt{x}}-\frac{\alpha}{2\sqrt{1-x}}\right).$$
Now, if we could just get that coefficient of $x_0$ to be $0$, this would read $f(x+x_0)\leq f(x)$ for every $x_0$, which'd be great! So, setting the coefficient of $x_0$ to $0$, we find that at a maximum we should have
$$\frac{1}{2\sqrt{x}}-\frac{\alpha}{2\sqrt{1-x}}=0$$
which yields, after some work:
$$4x=\frac{4(1-x)}{\alpha^2}$$
which can easily be solved to find the maximum (it gives $x=\frac{1}{1+\alpha^2}$). This method can extend to any sum of square roots of linear functions (although again it may involve awful polynomials if you take it too far) - and there's no particular reason you couldn't try to extend it to polynomials under square roots (but then you get a polynomial upper bound) - or even, more generally, to algebraic functions - but this starts to involve unwieldy computations and unsolvable polynomials pretty quickly.
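Returning to $f(x)=\sqrt{x}+\alpha\sqrt{1-x}$ for a quick worked instance, take $\alpha=2$ (chosen arbitrarily): the condition becomes $\sqrt{1-x}=2\sqrt{x}$, so $1-x=4x$ and $x=\frac15$, giving
$$f\left(\tfrac15\right)=\sqrt{\tfrac15}+2\sqrt{\tfrac45}=\frac{1+4}{\sqrt5}=\sqrt5,$$
which matches the general answer: maximum value $\sqrt{1+\alpha^2}$ at $x=\frac1{1+\alpha^2}$.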
A Classic Problem:
Now, for an example which doesn't involve copious amounts of algebra, but is rather specific. I'm not sure where this problem comes from, but it's pretty elegant:
Suppose you are a distance of $d_0$ away from a river, which follows a straight line and you wish to get some water from the river and then walk to your house, which is a distance of $k$ upstream and $d_1$ away from the river. What is the shortest route you can take?
Naively, you can solve this by phrasing it as trying to find the point on the river $(c,0)$ minimizing the sum of its distances to $(0,d_0)$ and $(k,d_1)$, and you can do this with calculus (or the algebraic techniques I've been talking about should work too). However, a lovely insight is that we don't change the problem if we reflect the house over the river to the point $(k,-d_1)$ - indeed, if we look at the expression for the sum of the distances from $(c,0)$ to each point, the sign of the second coordinate is irrelevant. But this modification makes the question easy! The shortest path from $(0,d_0)$ to $(k,-d_1)$ is a straight line, and this line crosses the river at some point $(c,0)$, which solves the minimization problem, since no path can be shorter than the straight line.
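For completeness, the picture also hands you the numbers: the segment from $(0,d_0)$ to $(k,-d_1)$ crosses the river where, by similar triangles,
$$\frac{c}{d_0}=\frac{k-c}{d_1},\qquad\text{i.e.}\qquad c=\frac{d_0}{d_0+d_1}\,k,$$
and the minimal total distance is just the length of that segment, $\sqrt{k^2+(d_0+d_1)^2}$.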