While babou offers one possible solution, it depends on $c_3/r$ not being too large. If it is large, then you might approach the problem like this:
A data structure for solving this problem would need to provide us with 3 (or 5) operations: we need to be able to add numbers, and, given a number, to find its predecessor (the next smaller number) and its successor (the next larger number).
One possible data structure that supports these operations is a binary search tree, and this gives a quite straightforward algorithm: generate the numbers $\{a_n\}$ one by one, insert them into the tree, and after each insertion query for the successor and predecessor of the newly inserted number to see whether either is within the specified range. If you use a balanced binary search tree (such as an AVL tree or red-black tree) each operation takes $O(\log n)$ time, and you obtain an $O(n \log n)$ algorithm, where $n$ is the number of iterations before $\{a_n\}$ comes into the specified range. Using a van Emde Boas tree you could improve this to $O(n \log \log n)$. This approach uses only $O(n)$ memory, so it is much more efficient than babou's approach if $c_3/r$ is large.
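The tree-based approach might be sketched like this. For brevity the sketch uses a plain (unbalanced) binary search tree, so the $O(\log n)$ per-operation bound is only expected rather than guaranteed; a real implementation would use an AVL or red-black tree. The function name is mine, and it assumes the inserted values are distinct. Conveniently, the predecessor and successor can be collected during the insertion descent itself:

```python
class Node:
    __slots__ = ("key", "left", "right")

    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


def insert_and_neighbours(root, key):
    """Insert key into the BST rooted at root.

    Returns (root, pred, succ), where pred/succ are the nearest stored
    values below/above key (None if absent).  Both neighbours are
    tracked during the descent: each time we branch left, the current
    node is the best successor candidate so far; each time we branch
    right, it is the best predecessor candidate.
    """
    if root is None:
        return Node(key), None, None
    pred = succ = None
    cur = root
    while True:
        if key < cur.key:
            succ = cur.key            # best larger value seen so far
            if cur.left is None:
                cur.left = Node(key)
                break
            cur = cur.left
        else:
            pred = cur.key            # best smaller value seen so far
            if cur.right is None:
                cur.right = Node(key)
                break
            cur = cur.right
    return root, pred, succ
```

To solve the problem, you would generate each $a_n$, insert it, and stop as soon as the distance to the returned predecessor or successor is at most $r$.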
Another approach that is remarkably simple and might work very well in practice is based on insertion sort: if you could keep the elements of $\{a_n\}$ generated so far in a sorted array, then given a new value $a_n$ you could find its position in the array using binary search, and find the next larger and smaller values just by looking one position to the right and left. The problem with this approach is, of course, inserting new elements, since that requires expanding the array and moving every element larger than the newly generated one one spot to the right.
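In a plain sorted array, the neighbour lookup described above is a single binary search. A minimal sketch (the helper name is my own, and it assumes $x$ is not already stored, since we query before inserting):

```python
import bisect


def neighbours(sorted_a, x):
    """Nearest stored values below and above x in a sorted list.

    bisect_left finds the position where x would be inserted, so the
    predecessor sits one slot to the left and the successor at that
    position itself (assuming x is not already present).
    """
    i = bisect.bisect_left(sorted_a, x)
    pred = sorted_a[i - 1] if i > 0 else None
    succ = sorted_a[i] if i < len(sorted_a) else None
    return pred, succ
```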
However, you could store $n$ elements in an array of size $2n$ by duplicating every element. You could still use binary search to find a new element's position, but inserting it becomes possible without extra work: you can simply have it take the place of a duplicated element. Of course, as the array gets fuller (with fewer duplicates), collisions will happen where you need to insert an element at a position where there are no more duplicates. This can be resolved in the same manner as insertion sort: push elements to the right to make space for the newly generated element. Unlike in insertion sort, this pushing stops much earlier: as soon as a duplicated element is encountered.
If the elements of $\{a_n\}$ are distributed somewhat evenly (which they are, by virtue of the way they are generated), then the chance of needing a long (and expensive) move step is small, and most insertions will be dealt with quickly. As the array becomes too full, such collisions become more frequent. When you get to the point where (say) the number of distinct elements stored reaches $1.5n$, you might double the size of the array to $4n$. Even though this is a very expensive step (it requires moving all of the elements to a new array), it happens very infrequently: since you double the size of the array each time, you only perform this expensive step $O(\log n)$ times.
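The gapped-array scheme of the last three paragraphs might be sketched as follows. The class and method names are mine, the rebuild threshold (rebuild once distinct elements reach three quarters of the array, i.e. $1.5n$ distinct values in an array of size $2n$, as in the text) is one of several reasonable choices, and the sketch assumes all inserted values are distinct. A Python list is used for the array, so the "push to the nearest gap" step is only short in the logical sense; a real implementation would use a fixed-size array and move just that short range:

```python
import bisect


class GappedArray:
    """Sorted array in which each value is stored twice; the second copy
    acts as a gap that a later insertion can overwrite."""

    def __init__(self):
        self.a = []        # sorted; each value appears once or twice
        self.distinct = 0  # number of distinct values stored

    def _rebuild(self):
        # Keep one copy of each value, then duplicate everything again,
        # so the array has one gap per element once more.
        uniq = sorted(set(self.a))
        self.a = [v for v in uniq for _ in (0, 1)]
        self.distinct = len(uniq)

    def insert(self, x):
        """Insert x and return (pred, succ), its nearest stored neighbours."""
        a = self.a
        i = bisect.bisect_left(a, x)
        if i == len(a):
            a.append(x)
        else:
            # Scan right for the nearest duplicate pair (a gap).
            j = i
            while j + 1 < len(a) and a[j] != a[j + 1]:
                j += 1
            if j + 1 == len(a):          # no gap to the right: grow by one
                a.append(a[j])
            # Shift a[i..j] one slot right, absorbing the gap, and place x.
            # Sortedness is preserved: the overwritten slot held a duplicate.
            a[i + 1:j + 2] = a[i:j + 1]
            a[i] = x
        self.distinct += 1
        pred = a[i - 1] if i > 0 else None
        succ = a[i + 1] if i + 1 < len(a) else None
        if self.distinct * 4 >= len(a) * 3:   # fewer than 25% gaps left
            self._rebuild()
        return pred, succ
```

As with the tree version, you would stop generating as soon as the returned predecessor or successor lies within $r$ of the new value.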
As I mentioned before, babou has already provided a hint in the right direction for how you can make the array smaller; this answer just offers a more complete solution to the problem (in case somebody ever comes along and needs guidance on how to solve it in a more practical setting, without bounds on $c_3/r$). It's not the one you are intended to use, though if you managed to implement either one you would probably impress the course staff and get me in trouble :-)