There is a Wikipedia-type website with a fixed number of articles $S$. You start at an arbitrary article on the site. You then repeatedly press the "random article" button and count the number of clicks $N$ it takes to land on the original article again.
The goal is to find the best estimate for $S$ given only the number $N$.
Assume that the "random article" button returns any particular article with uniform probability and that each click is an (EDIT: independent) random event. The page that you start on does not count as a click and is therefore not counted in $N$.
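For concreteness, here is a minimal Python sketch of the process as I understand it; the value $S = 1000$ and all names are just illustrative placeholders.

```python
import random

def clicks_until_return(S, rng=random):
    """Start on one of S articles and press "random article" until the
    starting article comes up again; return the number of clicks N.
    The starting page itself is not counted; the final, successful click is."""
    start = 0      # label the starting article 0 (labels are arbitrary)
    n = 0
    while True:
        n += 1
        # each click lands on any of the S articles with equal probability
        if rng.randrange(S) == start:
            return n

# One simulated observation of N for a hypothetical site with S = 1000 articles
print(clicks_until_return(1000))
```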
This problem reminds me of the solutions to the locomotive problem and the German tank problem. Since the button is clicked $N$ times, the case with the largest plausible number of articles is the one in which all $N$ clicked pages are different, which (I think) yields an estimate of $2N+1$. This would suggest that the best estimate is less than $2N+1$, because the sequence of $N$ clicks can always contain repeats of articles other than the one you started on.
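To get a feel for how the $2N+1$ guess behaves, one could simulate many runs for a known, hypothetical $S$ and compare the average of $2N+1$ with the true value. A rough Monte Carlo sketch, assuming $S = 1000$ and using the fact that under the stated assumptions $N$ is geometrically distributed with success probability $1/S$:

```python
import numpy as np

rng = np.random.default_rng(0)
S_true = 1000      # hypothetical true number of articles
runs = 100_000

# Under the stated assumptions each click hits the starting page with
# probability 1/S, so N (clicks until the first hit) is geometrically
# distributed with success probability 1/S.
N = rng.geometric(1.0 / S_true, size=runs)

print("true S            :", S_true)
print("average of N      :", N.mean())
print("average of 2N + 1 :", (2 * N + 1).mean())
```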
Another possible solution is based on the method of mark and recapture used to estimate the size of animal populations in the wild. This method gives the estimate $S = N$. The reasoning is that out of the $N$ samples taken during the "recapture" phase, exactly one (a fraction $1/N$) is an article seen during the "mark" phase (the original article that you remembered). Since this fraction should approximate the marked fraction of the whole population, which is $1/S$, the estimate is that there are $N$ pages in total.
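Spelling that proportion out explicitly (the Lincoln-Petersen estimator), with one article "marked" (the starting page) and the $N$ clicked pages as the "recapture" sample containing exactly one marked article:

$$\frac{\text{marked in sample}}{\text{sample size}} = \frac{1}{N} \;\approx\; \frac{\text{marked in population}}{\text{population size}} = \frac{1}{S} \quad\Longrightarrow\quad \hat{S} = N.$$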