
I have to find asymptotic upper and lower bounds for the recurrence $T(n)=5 T(\frac{n}{5})+\frac{n}{ \lg n}$.

I thought that I could use the master theorem, since the recurrence is of the form $T(n)=aT(\frac{n}{b})+f(n)$ with

$$a=5 \geq 1, \quad b=5>1, \quad f(n)=\frac{n}{\lg n}$$

$$f'(n)=\frac{\lg n-\lg e}{\lg^2 n}>0 \Rightarrow \lg n > \lg e \Rightarrow n>e$$

So, $f(n)$ is asymptotically positive and increasing for all $n>e$.

$$n^{\log_b a}=n^{\log_5 5}=n$$

We see that $f(n) < n$ for $n>2$, so the only candidate is case $1$, which requires

$$f(n)=O(n^{\log_b a- \epsilon})=O(n^{1- \epsilon}) \text{ for some } \epsilon>0$$

But how can we find such an $\epsilon$? Or can the master theorem not be applied in this case?
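As a quick sanity check, the ratio $\frac{n^{\epsilon}}{\lg n}$ seems to grow without bound for any fixed $\epsilon$ I try, which suggests no such $\epsilon$ exists; a small Python sketch (the choice $\epsilon=0.1$ is arbitrary):

```python
import math

# If f(n) = n / lg(n) were O(n^(1 - eps)) for some fixed eps > 0,
# the ratio f(n) / n^(1 - eps) = n^eps / lg(n) would stay bounded.
# For the arbitrary choice eps = 0.1, it blows up as n grows:
eps = 0.1
for n in (2**10, 2**20, 2**40, 2**80):
    ratio = n**eps / math.log2(n)
    print(n, ratio)
```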

EDIT:

Could I maybe do it using the substitution method, like this:

$$m=\log_5 n \Rightarrow n=5^m$$

$$\frac{T(5^m)}{5^m}=\frac{T(5^{m-1})}{5^{m-1}}+\frac{1}{m}$$

(dropping the constant factor $\frac{1}{\lg 5}$ from the last term, since it does not affect the asymptotics)

Let $S(m)=\frac{T(5^m)}{5^m}$

Then: $$S(m)=S(m-1)+\frac{1}{m} \\ S(m-1)=S(m-2)+\frac{1}{m-1} \\ \dots \\ S(2)=S(1)+\frac{1}{2}$$

So we get:

$$S(m)=S(1)+\frac{1}{2}+\frac{1}{3}+ \dots + \frac{1}{m-1}+\frac{1}{m}$$

But how can I continue?
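As a quick check that the telescoping above is right, a small Python sketch; the base value $S(1)$ is not determined by the recurrence, so I just assume $S(1)=1$:

```python
from fractions import Fraction

# S(1) = T(5)/5 is not fixed by the recurrence; assume S(1) = 1.
S1 = Fraction(1)

def S_rec(m):
    # the recurrence itself: S(m) = S(m-1) + 1/m
    return S1 if m == 1 else S_rec(m - 1) + Fraction(1, m)

def S_closed(m):
    # the telescoped form: S(m) = S(1) + 1/2 + 1/3 + ... + 1/m
    return S1 + sum(Fraction(1, k) for k in range(2, m + 1))

for m in (1, 5, 30):
    assert S_rec(m) == S_closed(m)
print(S_closed(30) - S1)  # the harmonic tail 1/2 + ... + 1/30
```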

evinda
  • You just need an upper bound, so $\mathcal{O}(n)$ should suffice for $f(n).$ – Ehsan M. Kermani Jul 19 '14 at 00:19
  • There exists no $\varepsilon > 0$ such that $\frac{n}{\log n} = O(n^{1-\varepsilon})$, since $\frac{n}{(\log n) n^{1-\varepsilon}} = \frac{n^{\varepsilon}}{\log n}$ diverges as $n \to \infty$. $\frac{n}{\log n} = O(n)$, however. (Note the distinction between $O$ and $\Theta$ notation.) – Ian Jul 19 '14 at 00:19
  • @EhsanM.Kermani I am also asked to find a lower bound. :/ – evinda Jul 19 '14 at 00:35
  • @Ian So, we cannot apply the master theorem, right? How did you conclude that $\frac{n}{ \log n}=O(n)$? – evinda Jul 19 '14 at 00:37
  • As best I can tell, none of the hypotheses of the master theorem hold in this case. $\log_5(5) = 1$, so for case 1 we would need $f(n) = O(n^{1-\varepsilon})$ for some $\varepsilon > 0$. This isn't true here. For case 2 we would need $f(n) = \Theta(n \log^k(n))$ for some $k \geq 0$, which again is not true here. (It would be with $O$, but the hypothesis involves $\Theta$.) For case 3 we would need $n = o(f(n))$, which again is not true. – Ian Jul 19 '14 at 01:30
  • Erm, sorry, for case 3 we would need $n^{1+\varepsilon} = o(f(n))$. But again, still not true. – Ian Jul 19 '14 at 01:38
  • By the way, an essentially equivalent example ($5$ replaced by $2$) is actually mentioned explicitly on the Wikipedia page as being a case where the master theorem does not apply. – Ian Jul 19 '14 at 01:40
  • It may be of use to note that this same recurrence (with base 2 instead of base 5) is discussed at this MSE link. – Marko Riedel Jul 19 '14 at 02:32
  • Could I maybe use the substitution method, as I did in the edit to my post? If so, how could I continue? – evinda Jul 19 '14 at 13:24

1 Answer


Here's some intuition about the master theorem.

Case $1$: If $f(n)$ is polynomially smaller than $n^{\log_b(a)}$, then it is negligible, and the asymptotic behavior is the same as what you would have without it included.

Case $2$: If $f(n)$ is either of the same order as or exactly logarithmically larger than $n^{\log_b(a)}$, then the two terms compound each other, and you pick up an additional asymptotic factor of $\log n$ on top of $f(n)$ itself.

Case $3$: If $f(n)$ is polynomially larger than $n^{\log_b(a)}$, then the recursive term is negligible, and the asymptotic behavior is the same as what you would have without it included.

In your case, $f(n)$ is asymptotically but not polynomially smaller than $n$, so it is bigger than case $1$ but smaller than case $2$. This means that you should expect $T(n)$ to be larger than what you would get in case $1$ (i.e. larger than $\Theta(n)$) but smaller than anything you might get from case $2$ (i.e. smaller than $\Theta(n \log n)$).

You can get a sharper result using the Akra-Bazzi method (http://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method), which is a generalization of the master theorem. The sharp result is

$$T(n) = \Theta(n \log \log n)$$

which is consistent with the discussion above.
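As a rough numerical sanity check (not a proof), one can iterate the recurrence for $S(m)=T(5^m)/5^m$ and watch $T(n)/(n\ln\ln n)$ level off. A small Python sketch, with the base value $S(1)=T(5)/5$ arbitrarily set to $1$:

```python
import math

LG5 = math.log2(5)

# S(m) = T(5^m) / 5^m satisfies S(m) = S(m-1) + 1/(m * lg 5),
# so we can iterate it without ever forming the huge number 5^m.
# The base value S(1) = T(5)/5 is an arbitrary assumption (set to 1);
# it only shifts the ratio by a vanishing amount.
def S(m):
    s = 1.0
    for k in range(2, m + 1):
        s += 1.0 / (k * LG5)
    return s

# For n = 5^m we have T(n) / (n * ln ln n) = S(m) / ln(m * ln 5).
# If T(n) = Theta(n log log n), this ratio should level off.
for m in (10, 1000, 100000):
    ratio = S(m) / math.log(m * math.log(5))
    print(m, ratio)
```

The ratio decreases slowly toward a constant (the slow convergence is typical of $\log\log$ asymptotics), consistent with $T(n)=\Theta(n \log \log n)$.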

Ian
  • Ah ok... I haven't been taught the Akra-Bazzi method. Could I maybe do it using the substitution method, like this:

    $$m=\log_5 n \Rightarrow n=5^m$$

    $$\frac{T(5^m)}{5^m}=\frac{T(5^{m-1})}{5^{m-1}}+\frac{1}{m}$$

    Let $S(m)=\frac{T(5^m)}{5^m}$

    Then: $$S(m)=S(m-1)+\frac{1}{m} \\ S(m-1)=S(m-2)+\frac{1}{m-1} \\ \dots \\ S(2)=S(1)+\frac{1}{2}$$

    So we get:

    $$S(m)=S(1)+\frac{1}{2}+\frac{1}{3}+ \dots + \frac{1}{m-1}+\frac{1}{m}$$

    But how can I continue?

    – evinda Jul 19 '14 at 13:20
  • The sum $1/2 + 1/3 + ... + 1/m$ is $\Theta(\log m)$ (e.g. by an integration argument). So $S(m) = \Theta(\log m)$, $T(5^m) = \Theta(5^m \log m)$, so $T(n) = \Theta(n \log \log n)$, as Akra-Bazzi predicts. Nice. – Ian Jul 19 '14 at 13:33
  • Could you explain further how you concluded that the sum $\frac{1}{2}+\frac{1}{3}+ \dots +\frac{1}{m}$ is $\Theta(\log m)$? – evinda Jul 19 '14 at 13:38
  • To be precise, for each $n$, $1/(n-1) \leq \int_{n-1}^n 1/x dx \leq 1/n$. Then sum this inequality from $n=2$ to $m$. – Ian Jul 19 '14 at 13:55
  • Erm, I made an obvious error. $1/n \leq \int_{n-1}^n 1/x \, dx \leq 1/(n-1)$. Summing from $2$ to $m$ we get $\sum_{k=2}^m 1/k \leq \ln(m) \leq \sum_{k=1}^{m-1} 1/k$. Let $N(m) = \sum_{k=2}^m 1/k$, then $\sum_{k=1}^{m-1} 1/k = N(m) + 1 - 1/m$, so we have $N(m) \leq \ln(m) \leq N(m) + 1 - 1/m$. So $\ln(m) = N(m) + O(1)$ which implies $N(m) = \Theta(\log m)$. – Ian Jul 19 '14 at 15:15
  • Do we conclude that $\ln m=N(m)+O(1)$ because $\frac{1}{m}\to 0$? – evinda Jul 21 '14 at 16:21
  • It's correct to do so, but you don't need that much. We have $\ln m = N(m) + h(m)$ for some unknown function $h$. From the inequalities we know $0 \leq h(m) \leq 1-1/m$ for all $m$. Since $1-1/m \leq 1$, this means $h(m)$ is bounded, so $h(m)=O(1)$. Thus $\ln m = N(m) + O(1)$. – Ian Jul 21 '14 at 16:39
  • Ian, is this the only way to show that $N(m)=\Theta(\log m)$? – evinda Aug 02 '14 at 01:14
  • Well, $N(m) = \Theta(\log m)$ is equivalent to $h(m) = O(\log m)$. I am not sure how you would prove the latter without proving that $h(m) = O(1)$, since in fact $h(m) = \Theta(1)$. And I am not sure how you would show that $h(m) = O(1)$ without some kind of integration argument. – Ian Aug 02 '14 at 02:52
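The boundedness argument in these comments can also be checked numerically; a small sketch, writing $N(m)=\sum_{k=2}^m 1/k$ and $h(m)=\ln m - N(m)$ as in the comments above (the sample values of $m$ are arbitrary):

```python
import math

# N(m) = 1/2 + 1/3 + ... + 1/m and h(m) = ln(m) - N(m), as in the
# comments above.  The integral bounds give 0 <= h(m) <= 1 - 1/m,
# and numerically h(m) settles near 1 - gamma (Euler-Mascheroni,
# so about 0.4228), confirming h(m) = Theta(1) and N(m) = Theta(log m).
def h(m):
    N = sum(1.0 / k for k in range(2, m + 1))
    return math.log(m) - N

for m in (10, 1000, 100000):
    val = h(m)
    assert 0.0 <= val <= 1.0 - 1.0 / m  # the bounds from the comments
    print(m, val)
```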