The C++ standard library's std::sort is typically implemented as introsort: quicksort that switches to heapsort when the recursion gets too deep (and to insertion sort on small subranges).
Quicksort takes $\Omega(n \log n)$ time even in the best case, so you might assume that repeating it $O(n)$ times gives an $O(n^2 \log n)$ running time.
What might break this bound is the structure of your algorithm. If quicksort always picks the first/last element as the pivot, then each call sorts an array that is already sorted except for the one newly appended element, which triggers its worst-case $O(n^2)$ behavior on every call.
That would bring your algorithm's overall complexity up to $O(n^3)$.
Note that this failure mode applies only if the implementation always uses the first/last element as the pivot. With randomized pivot selection, the running time is $O(n^2 \log n)$ in the average and best case, and $O(n^3)$ only in the (rare) worst case.
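To make the scenario concrete, here is a minimal sketch of the pattern under discussion; the exact loop structure is an assumption about the original code, not a quote of it:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical reconstruction: append one element per iteration, then
// re-sort the whole prefix. With n iterations and std::sort costing
// O(k log k) per call, the total is O(n^2 log n). A naive quicksort
// with a fixed first/last pivot would instead hit its O(k^2) worst
// case on each almost-sorted prefix, for O(n^3) overall.
std::vector<int> sort_incrementally(const std::vector<int>& input) {
    std::vector<int> prefix;
    for (int x : input) {
        prefix.push_back(x);
        // Every call after the first sees an almost-sorted array.
        std::sort(prefix.begin(), prefix.end());
    }
    return prefix;
}
```

Because introsort caps the recursion depth, the library version never actually degrades past $O(k \log k)$ per call, which is exactly why the pivot question matters for the analysis.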
Now let us assume instead that your std::sort used insertion sort.
Insertion sort performs best on already-sorted input: it only needs to move the newly added last element into place, which takes $O(n)$. That would make the whole algorithm run in $O(n^2)$.
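For contrast, a textbook insertion sort (a sketch for illustration, not the actual std::sort implementation) shows why almost-sorted input is cheap: the inner loop only runs as far as elements are out of place.

```cpp
#include <cstddef>
#include <vector>

// Textbook insertion sort. On an array that is sorted except for the
// last element, the inner while loop only shifts until that element
// finds its slot, so the whole call costs O(n) instead of O(n^2).
void insertion_sort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1]; // shift larger elements one slot right
            --j;
        }
        a[j] = key; // drop the element into its sorted position
    }
}
```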
This example is a perfect illustration of why it is important to understand the underlying sorting algorithms before relying on a particular boxed function.