
Let $p = (p_1, p_2, \ldots, p_k)$ where $p_i = \mathbb{P}(i)$. Here, $p$ is any discrete probability distribution, but I will refer to it as the probability distribution of some $k$-sided unfair die.

Now let $\pi_i(p) = \sum\limits_{j=1}^k (p_j)^i$.

I conjecture that $\forall i \ge 2, \pi_i(p) \ge (\pi_2(p))^{i-1}$, but I'm not sure how to go about proving it.

The idea behind this conjecture is that $\pi_i(p)$ is the probability that $i$ rolls of the die all have the same outcome, and that this probability is greater than or equal to the probability of the 1st roll matching the 2nd, the 2nd roll matching the 3rd, and so on, up to the $(i-1)$th roll matching the $i$th (i.e., matching 2 rolls $i-1$ times).

This property would also allow me to rigorously prove a lower bound for another probability which I already suspect to be true, which is another reason why I believe this property should be true.
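For what it's worth, the conjecture survives a numerical spot-check; below is a minimal Python sketch (the helper `pi` and the sampling scheme are mine, purely illustrative):

```python
import random

def pi(i, p):
    """pi_i(p) = sum over j of p_j**i."""
    return sum(pj ** i for pj in p)

# Spot-check pi_i(p) >= (pi_2(p))**(i-1) on randomly sampled distributions.
random.seed(0)
for _ in range(1000):
    k = random.randint(2, 10)
    w = [random.random() for _ in range(k)]
    p = [x / sum(w) for x in w]  # a random distribution on k outcomes
    for i in range(2, 8):
        assert pi(i, p) >= pi(2, p) ** (i - 1) - 1e-12
```

No counterexample turns up, which is of course not a proof, but it is encouraging.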

====================================================================

Update:

I've found a way to prove a slightly weaker version of the conjecture:

Let $p = (p_1, p_2, ..., p_k)$ be some discrete probability distribution, let $q = (\frac{1}{k}, \frac{1}{k}, ..., \frac{1}{k})$ be a uniform probability distribution, and let $\pi_i$ be defined as above.

Then $\forall i \ge 2, \pi_i(p) \ge (\pi_2(q))^{i-1} = \big(\frac{1}{k}\big)^{i-1}$

Proof:

According to the top answer on *Relations between p norms*, which follows from Hölder's inequality, $\forall\, 0 < a < b$,

$\Vert p \Vert_a \le k^{1/a-1/b} \Vert p \Vert_b$

Letting $a = 1$ and $b = i \ge 2$, we have that

\begin{align} & \Vert p \Vert_1 = 1 \le k^{1-1/i} \Vert p \Vert_i \\ \iff\quad & k^{1/i-1} \le (\pi_i(p))^{1/i} \\ \iff\quad & k^{1-i} \le \pi_i(p) \\ \iff\quad & \Big(\frac{1}{k}\Big)^{i-1} \le \pi_i(p) \end{align}

Additionally, we have that

$\pi_2(q) = \sum\limits_{j=1}^k (q_j)^2 = \sum\limits_{j=1}^k \big(\frac{1}{k}\big)^2 = k \cdot \frac{1}{k^2} = \frac{1}{k}$

So substituting, we have

$\forall i \ge 2, \pi_i(p) \ge (\pi_2(q))^{i-1}$

$$\tag*{$\blacksquare$}$$
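A quick numerical check of this weaker bound (just a sketch; the function name `pi` is my own shorthand for the sums above):

```python
def pi(i, p):
    """pi_i(p) = sum over j of p_j**i."""
    return sum(pj ** i for pj in p)

# Check pi_i(p) >= (1/k)**(i-1) for a few hand-picked distributions on k outcomes.
dists = [
    [0.97, 0.01, 0.01, 0.01],   # heavily skewed
    [0.4, 0.3, 0.2, 0.1],       # moderately skewed
    [0.25, 0.25, 0.25, 0.25],   # uniform: the bound is tight here
]
for p in dists:
    k = len(p)
    for i in range(2, 8):
        assert pi(i, p) >= (1 / k) ** (i - 1) - 1e-12
```

Note that the uniform case makes the inequality an equality, which matches the bound being attained at $q$.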

Additionally, $(\pi_2(q))^{i-1} = \pi_i(q)$ (this is relatively easy to prove), so this also proves that the probability of getting $i$ identical rolls of an unfair die is greater than or equal to the probability of getting $i$ identical rolls of a fair die, which is an interesting result in its own right (at least it is to me).
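For completeness, that identity is a one-line computation from the definition, using $\pi_2(q) = \frac{1}{k}$ as computed above:

$$\pi_i(q) = \sum\limits_{j=1}^k \Big(\frac{1}{k}\Big)^i = k \cdot \frac{1}{k^i} = \Big(\frac{1}{k}\Big)^{i-1} = (\pi_2(q))^{i-1}$$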


2 Answers


If I am reading the question correctly this is straightforward, since it's true for each term in the sum. Let $q$ be any of the $p_j$. Then when $i \ge 2$ we have $2(i -1) \ge i$. Since $0 \le q \le 1$ it follows that $$ q^i \ge q^{2(i-1)}. $$ Then sum over $j$.

I haven't carefully read your "idea behind the conjecture" but if it's a correct interpretation of the formula then it is in fact a combinatorial proof.

  • Doesn't this show that $\pi_i(p) \ge \pi_{2(i-1)}(p)$ and not $\pi_i(p) \ge (\pi_2(p))^{i-1}$? – Jacob R Jul 01 '19 at 01:49
  • @JacobRaymond Oops. Yes, glad I prefaced my nonanswer with a caveat. You might find something useful by searching for $p$ norm inequality. https://math.stackexchange.com/questions/218046/relations-between-p-norms . If you succeed, you can answer your own question here. I may delete this post in a while. – Ethan Bolker Jul 01 '19 at 01:57
  • Yeah, I've been working with the $p$ norm in relation to this problem for a bit (for the larger problem I'm looking at it's been useful for proving an upper bound), but the inequality in this case seems to be going in the wrong direction for $p$ norms, so I've been stuck for a while trying to prove it (though experimentally, the inequality in the conjecture seems to be in the right direction). – Jacob R Jul 01 '19 at 02:02

A proof courtesy of my faculty mentor:

Let $p = (p_1, p_2, ..., p_k)$ be some probability distribution, and let $X$ be a random variable where $\mathbb{P}(X = p_j) = p_j$. Also, let $i \ge 2$, and $\pi_i(p)$ be defined as in the question.

Now, consider Jensen's inequality for some convex function $\varphi$, $$ \varphi(\mathbb{E}[X]) \le \mathbb{E}[\varphi(X)]$$

Let $\varphi: [0,1] \rightarrow [0,1]$ be defined as $\varphi(x) = x^{i-1}$. Note that on the domain specified, $\varphi$ is convex, so Jensen's inequality applies. Then we have

$$\varphi(\mathbb{E}[X]) = \varphi\left(\sum\limits_{j=1}^k p_j \cdot \mathbb{P}(X = p_j)\right) = \varphi\left(\sum\limits_{j=1}^k p_j \cdot p_j\right) = \varphi(\pi_2(p)) = \pi_2(p)^{i-1}$$

and we have that

$$\mathbb{E}[\varphi(X)] = \sum\limits_{j=1}^k \varphi(p_j) \cdot \mathbb{P}(X = p_j) = \sum\limits_{j=1}^k p_j^{i-1} \cdot p_j = \sum\limits_{j=1}^k p_j^i = \pi_i(p)$$

Substituting into Jensen's inequality, we get that $\forall i \ge 2$,

$$ \pi_i(p) \ge \pi_2(p)^{i-1} $$
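To make the random-variable construction concrete, here is a small numerical illustration in Python (the distribution and variable names are just examples, not part of the proof):

```python
# X takes the value p_j with probability p_j, so E[X] = pi_2(p)
# and E[phi(X)] = E[X**(i-1)] = pi_i(p).
p = [0.5, 0.3, 0.2]
i = 4

E_X = sum(pj * pj for pj in p)                # E[X] = pi_2(p)
E_phiX = sum(pj ** (i - 1) * pj for pj in p)  # E[phi(X)] = pi_i(p)

pi_2 = sum(pj ** 2 for pj in p)
pi_i = sum(pj ** i for pj in p)
assert abs(E_X - pi_2) < 1e-12
assert abs(E_phiX - pi_i) < 1e-12

# Jensen: phi(E[X]) <= E[phi(X)], i.e. pi_2(p)**(i-1) <= pi_i(p)
assert E_X ** (i - 1) <= E_phiX
```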
