7

I have a function $S(x)$ which is bounded above and below as follows:

$f(x) + C_1 + \mathcal{O}(g(x)) < S(x) < f(x) + C_2 + \mathcal{O}(g(x))$ as $x \rightarrow \infty$

Can I conclude that $$S(x) = f(x) + C + \mathcal{O}(g(x))$$ as $x \rightarrow \infty$? Can anyone give a short proof for this fact?

EDIT:

(To clear up the confusion, I am stating the problem below. I am wondering if what I did was right.)

Prove that as $x \rightarrow \infty$

$$\displaystyle \sum_{n \leq x} \frac{\log n}{n} = \frac{(\log x)^2}{2} + C + \mathcal{O}(\frac{\log(x)}{x})$$

where $C$ is a constant.

This is how I did it.

Approximate the sum by the integral $\int \frac{\log x}{x}\, dx$ from above and from below (the usual way of bounding $p$-series). This gives a lower bound of the form $\frac{(\log [x])^2}{2} + C_1$ and an upper bound of the form $\frac{(\log [x])^2}{2} + C_2$.

Then, $\log([x]) = \log(x) + \log(\frac{[x]}{x}) = \log(x) + \log(1-\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{\{x\}}{x}) = \log(x) + \mathcal{O}(\frac{1}{x})$.

Plugging this into the above, I get the statement in the question.

I did this on a timed exam since I could not think of another way to get to the answer. Other approaches and suggestions are welcome as always.
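As a numerical sanity check (my own aside, not part of any proof): the constant $C$ appears to be the first Stieltjes constant $\gamma_1 \approx -0.0728$, and the partial sums settle toward it at roughly the predicted rate $\mathcal{O}(\frac{\log x}{x})$. A minimal Python sketch:

```python
import math

def c_estimate(N):
    """Estimate C as  sum_{n<=N} log(n)/n  -  (log N)^2 / 2."""
    s = sum(math.log(n) / n for n in range(2, N + 1))
    return s - math.log(N) ** 2 / 2

# The error term is O(log N / N), so the estimates should settle quickly.
for N in (10**3, 10**4, 10**5):
    print(N, c_estimate(N))  # values approach roughly -0.0728
```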

Aryabhata
  • 82,206
  • 3
    What does f(x) + C_1 + O(g(x)) < S(x) mean? This seems imprecise to me. – Qiaochu Yuan Dec 06 '10 at 22:01
  • 2
    your solution is incomplete; you need to make the constants match, or else you have only shown that sum (log n)/n = (log x)^2/2 + O(1). You should review more carefully the definition of big-O notation. – Qiaochu Yuan Dec 06 '10 at 23:04
  • Ok. I understand my error. I wasn't convinced with what I wrote on the exam and thats why I wanted to clarify as soon as I came out. Thanks. –  Dec 06 '10 at 23:12
  • Can someone repair the title: "Big O notation." – Neil G Dec 07 '10 at 03:23

4 Answers

13

A misconception needs to be cleared up here. Recall that $f(x) = O(g(x))$ means that there exists a constant $C$ such that $|f(x)| \le C |g(x)|$ in a neighborhood of whatever value of $x$ you're interested in. It is an abuse of notation that we use the equality symbol here at all; $O(g(x))$ is not a function, so it cannot be "equal" to $f(x)$ in the usual sense. See the discussion at the Wikipedia article.

Based on this definition, people sometimes say that $f(x) = h(x) + O(g(x))$ if $f(x) - h(x) = O(g(x))$. This is an additional abuse of notation, but it generally does not create problems and can be extremely convenient. I guess you could write $f(x) < h(x) + O(g(x))$ if you really wanted to, but this is redundant, since the inequality is already built into the definition of big-O notation.

But I have no idea what $f(x) > h(x) + O(g(x))$ is supposed to mean. If you are trying to say that there is a constant $C$ such that $|f(x) - h(x)| > C |g(x)|$, the correct way to write this is using big-Omega, not big-O, as $f(x) = h(x) + \Omega(g(x))$.

Qiaochu Yuan
  • 419,620
  • 3
    +1: But you probably need $|f(x)| \leq C|g(x)|$. I agree with this post: the definitions and usage around BigOh are a big mess in many places (especially computer science) with differing definitions etc. – Aryabhata Dec 06 '10 at 22:45
  • Accept the incorrect use of notation. Could you let me know how to edit/change the question? –  Dec 06 '10 at 23:00
  • 3
    Note that expressions like $f \in \mathcal{O}(g)$ are formally correct while everything given here is abuse of notation, even those expressions with $<$. – Raphael Dec 06 '10 at 23:03
  • 3
    A very useful post. – Derek Jennings Dec 07 '10 at 08:24
  • 1
    @Qiaochu I have seen some people (including myself!) use, informally at least, $f(x) \geq h(x) + O(g(x))$ when they are interested in only one-sided bounds on $f$. As an example, suppose I could show $f(x) \geq x^2 - 43x$. The problem is that I do not have a tight upper bound, so perhaps $f(x)$ is as large as $x^3$. In this situation saying $f(x) = x^2 + O(x)$ is technically incorrect, so I resort to "$f(x)$ is at least $x^2 - O(x)$". I understand that this notation is pretty non-standard and even technically wrong, but I would like some notation to capture this case. Any suggestions? – Srivatsan Jul 30 '11 at 03:47
  • @Srivatsan: as I said, you can use $f(x) = x^2 + \Omega(x)$. – Qiaochu Yuan Jul 30 '11 at 04:08
  • @Qiaochu I do not think that is correct. For instance, $f(x) = x^2 + 5$ satisfies all my requirements, but we cannot say $f(x) = x^2 + \Omega(x)$. Or am I missing something? – Srivatsan Jul 30 '11 at 04:21
  • @Srivatsan: ah, I didn't see the negative sign. Then I believe $f(x) = x^2 - O(x)$ would work, although it might be worth explaining this notation if you ever plan on using it. – Qiaochu Yuan Jul 30 '11 at 04:28
8

Since you wanted a different proof approach, you can try using Abel's Identity, which has turned out to be quite useful in analytic number theory.

For instance see this: An estimate for sum of reciprocals of square roots of prime numbers.

To apply this to your question:

Since we know that

$\displaystyle \sum_{1 \le n \le x} \frac1{n} = \log x + \gamma + R(x)$, where $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$

Using Abel's identity we have that

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = (\log x+ \gamma + R(x))\log x - \int_{1}^{x} \frac{\log t+ \gamma + R(t)}{t} \ \text dt$

i.e.

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} + R(x)\log x - \int_{1}^{x} \frac{R(t)}{t} \ \text dt$

Since $\displaystyle R(x) = \mathcal{O}\left(\frac1{x}\right)$, the integral $\displaystyle \int_{1}^{\infty} \frac{R(t)}{t} \ \text dt = \eta$ converges, and the tail satisfies $\displaystyle \int_{x}^{\infty} \frac{R(t)}{t} \ \text dt = \mathcal{O}\left(\frac1{x}\right)$. We also have $\displaystyle R(x) \log x = \mathcal{O}\left(\frac{\log x}{x}\right)$.

Thus we have that

$\displaystyle \sum_{1 \le n \le x} \frac{\log n}{n} = \frac{\log^2 x}{2} -\eta + \mathcal{O}\left(\frac{\log x}{x}\right) \ \ \text{as} \ \ x \to \infty$

Another useful approach is to try using the Euler-Maclaurin Summation formula.
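Since Abel's identity is exact at integer arguments, the decomposition above can be checked directly before taking any limits: for integer $N$, $\sum_{n \le N} \frac{\log n}{n} = H_N \log N - \int_1^N \frac{H_{\lfloor t \rfloor}}{t} \ \text dt$, and the integral reduces to a finite sum because $H_{\lfloor t \rfloor}$ is a step function. A quick sketch (my own illustration, not part of the answer):

```python
import math

def lhs(N):
    # Direct sum: sum_{n<=N} log(n)/n
    return sum(math.log(n) / n for n in range(1, N + 1))

def rhs(N):
    # Abel's identity: A(N) log N - integral_1^N A(t)/t dt, where
    # A(t) = H_floor(t) is the harmonic-number step function, so the
    # integral splits into pieces H_n * log((n+1)/n) on [n, n+1).
    H = 0.0
    integral = 0.0
    for n in range(1, N):
        H += 1.0 / n                                   # H now equals H_n
        integral += H * (math.log(n + 1) - math.log(n))
    H += 1.0 / N                                       # H now equals H_N
    return H * math.log(N) - integral

print(lhs(1000), rhs(1000))  # the two sides agree up to rounding error
```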

Aryabhata
  • 82,206
  • 2
    I don't think you need Euler-Maclaurin. You should be able to do it with the trapezoid rule. – Qiaochu Yuan Dec 07 '10 at 01:01
  • 2
    @Qiaochu: I agree. I just wanted to mention that as a useful result. As you say, there are simpler ways (and I am not claiming otherwise :-)). – Aryabhata Dec 07 '10 at 01:04
  • Thanks. I started with Abel's identity. But for some reason, I wanted to be a bit smart in the exam and did it as I have shown only to realize the error now! –  Dec 07 '10 at 01:09
  • 2
    I'd say that the use of the trapezoidal rule and Euler-Maclaurin are effectively equivalent... – J. M. ain't a mathematician Dec 07 '10 at 15:50
1

I don't think so because $g(x)$ might be too small. Take $f(x)=0, C_1=-1.5, C_2=1.5, g(x)=\exp(-x)$. Then doesn't $S(x)=\sin(x)$ violate it?
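To see this counterexample numerically (my own sketch): for any candidate constant $C$, the deviation $|\sin(x) - C|$ stays of order $1$ arbitrarily far out, while $e^{-x}$ is astronomically small, so $\sin(x) - C$ can never be $\mathcal{O}(e^{-x})$.

```python
import math

# Sample sin(x) far out; its values still fill [-1, 1], so for any
# constant C the deviation |sin(x) - C| is bounded below by ~1,
# while the proposed error scale exp(-x) is essentially zero there.
xs = [100 + 0.01 * k for k in range(100000)]  # x in [100, 1100)
vals = [math.sin(x) for x in xs]
spread = max(vals) - min(vals)
print(spread)          # close to 2
print(math.exp(-100))  # about 4e-44
```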

Ross Millikan
  • 374,822
  • 2
    What is your definition of f(x) + C_1 + O(g(x)) < S(x)? – Qiaochu Yuan Dec 06 '10 at 22:06
@Qiaochu Yuan: I took it as there is an M such that f(x)+C_1+M*g(x) is eventually less than S(x) for large x, based on what I saw of Big O. – Ross Millikan Dec 06 '10 at 22:16
  • The way I read it, the class O(g) is all functions that (in absolute value) are dominated by some constant times g as the argument goes to infinity, while o(g) are functions that are dominated by any positive constant times g. But I could easily be confused. – Ross Millikan Dec 06 '10 at 22:48
  • yes, that is one way of interpreting big-O notation, but the notion you're thinking of has a different notation, which I explained in my answer. – Qiaochu Yuan Dec 06 '10 at 23:00
0

If $a<x<b$, then $|x|\le \max \lbrace |a|,|b|\rbrace$.

Therefore, if $C_1=C_2=C$, what you have there implies

$$ |S(x)-f(x)-C|\le K |g(x)| $$

for some $K>0$, i.e. $S(x)=f(x)+C+O(g(x))$.
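A concrete instance of this (with functions of my own choosing, purely for illustration): take $f(x)=x$, $C=1$, $g(x)=1/x$, and $S(x)=x+1+\frac{\sin x}{x}$; then $|S(x)-f(x)-C| = \frac{|\sin x|}{x} \le 1\cdot|g(x)|$, so $S(x)=f(x)+C+O(g(x))$ with $K=1$.

```python
import math

# Hypothetical example: S(x) = x + 1 + sin(x)/x, f(x) = x, C = 1, g(x) = 1/x.
# Then |S(x) - f(x) - C| = |sin(x)|/x <= K * |g(x)| with K = 1.
def S(x): return x + 1 + math.sin(x) / x
def f(x): return x

K = 1.0
ok = all(abs(S(x) - f(x) - 1) <= K * (1 / x)
         for x in (1 + 0.1 * k for k in range(1000)))
print(ok)  # True
```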

TCL
  • 14,262