
This question is a continuation of this post.


Currently I am reading the paper Excited Random Walk in One Dimension.

On page $8$, right column, the authors obtain equation $(33)$.

The joint probability of eating $r$ consecutive cookies on the right and then $l$ consecutive cookies on the left before the next right cookie is eaten, under the initial condition that the walk starts at the position of the right extreme cookie, is $$P(r,l) = 4pq \frac{\Gamma(L)}{\Gamma(L-2q)} \frac{\Gamma(L+r-1-2q)}{\Gamma(L+r-2p)}\frac{\Gamma(L+l+r-1-2p)}{\Gamma(L+l+r)}.$$

Let $s = r+l.$ Then the authors use the following formula $$\bigg\langle \frac{r}{s} \bigg\rangle = \int_0^\infty ds \int_0^s \frac{r}{r+l} \, \tilde{P}(r,s)\,dr$$ to calculate the average ratio of the number of right cookies to the total number of cookies eaten in a single cycle. I am confused by the upper limit of the inner integral: why do we integrate up to $s$ instead of $\infty?$

Question: What is the formula for the average ratio of the number of right cookies to the total number of cookies eaten in a single cycle?

Is it $$\sum_{l=1}^\infty \sum_{r=1}^\infty \frac{r}{l+r} P(r,l)?$$
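As a rough sanity check, the proposed double sum can be evaluated numerically. The sketch below uses the $P(r,l)$ quoted above with illustrative parameters $p=0.6$, $q=1-p$ (assuming $p+q=1$ as in the paper) and $L=10$, which are my choices and not values from the paper; the sum is truncated at $N=400$ terms per index, and the tail decays only polynomially, so this is not a precise evaluation.

```python
# Numerical evaluation of sum_{l>=1} sum_{r>=1} r/(l+r) * P(r,l),
# with the P(r,l) quoted in the question.  Parameters are illustrative.
from math import lgamma, exp, log

def log_P(r, l, p, q, L):
    """Logarithm of the question's P(r,l), computed via log-gamma for stability."""
    return (log(4 * p * q)
            + lgamma(L) - lgamma(L - 2 * q)
            + lgamma(L + r - 1 - 2 * q) - lgamma(L + r - 2 * p)
            + lgamma(L + l + r - 1 - 2 * p) - lgamma(L + l + r))

p, L, N = 0.6, 10.0, 400   # illustrative values, not from the paper
q = 1 - p
ratio = sum(r / (r + l) * exp(log_P(r, l, p, q, L))
            for r in range(1, N) for l in range(1, N))
```

Since $r/(r+l) < 1$, the truncated sum is bounded by the truncated mass of $P$, so the result should be of order one.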

Idonknow
  • in the paper eq. (33) is different from the one you are reporting – G Cab Oct 25 '18 at 18:13
  • @G Cab Eq $(33)$ is the asymptotic approximation of the one I report above. What I would like to have is the exact calculation of average ratio... – Idonknow Oct 26 '18 at 02:54
  • in the paper I downloaded as per your link, eq.(33) reads as $P(r,l) = 4pq{{\Gamma (L)} \over {\Gamma (L - 2p)}}{{\Gamma (L + l - 1 - 2p)} \over {\Gamma (L + l - 2q)}}{{\Gamma (L + l + r - 1 - 2q)} \over {\Gamma (L + l + r)}}$ – G Cab Oct 26 '18 at 10:42
  • I believe the formula obtained by the authors has a mistake; I could not obtain exactly the same result as theirs. – Idonknow Oct 26 '18 at 12:08
  • I'll try and investigate on this aspect as well. – G Cab Oct 26 '18 at 14:00
  • @G Cab Any progress on your investigation? – Idonknow Oct 29 '18 at 22:35

1 Answer


a) Changing variables from $(r,\;l)$ to $(r,\;s=r+l)$ changes their ranges as follows: $$ \left\{ {\matrix{ {1 \le r < \infty } \cr {1 \le l < \infty } \cr } } \right.\quad \Rightarrow \quad \left\{ {\matrix{ {1 \le r < \infty } \cr {r + 1 \le s = r + l < \infty } \cr } } \right.\quad \Rightarrow \quad \left\{ {\matrix{ {1 \le r < s} \cr {2 \le s = r + l < \infty } \cr } } \right. $$ Note that the map $\left( {r,l} \right)\; \leftrightarrow \;\left( {r,s} \right)$ is bijective, so we can freely convert $P(r,l)$ to $P(r,s)$ and vice versa, $$ P\left( {r,l} \right)\quad \matrix{ {\buildrel {l\, = \,s\, - \,r} \over \longrightarrow } \cr {\mathrel{\mathop{\kern0pt\longleftarrow} \limits_{s\, = \,r + l}} } \cr } \quad P\left( {r,s} \right) $$ and we can convert a sum in $P(r,l)$ to a sum in $P(r,s)$ by applying the conversion above to the summand and changing the range of summation as specified above.
The same holds for the integral, since the Jacobian of the transformation equals $1$.
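The re-indexing can be checked on a finite example: below, the weights are an arbitrary rapidly decaying stand-in for $P(r,l)$ (nothing here depends on the specific model), and the sum over the square region in $(r,l)$ equals the sum over the corresponding triangular region in $(r,s)$.

```python
# Finite check that re-indexing a double sum from (r, l) to (r, s = r + l)
# leaves its value unchanged.  Weights are an arbitrary decaying stand-in.
N = 60
w = {(r, l): 1.0 / (2 ** r * 3 ** l)
     for r in range(1, N) for l in range(1, N)}

# original ordering: 1 <= r, 1 <= l
sum_rl = sum(w[r, l] for r in range(1, N) for l in range(1, N))

# re-indexed ordering: 2 <= s, with r restricted to the stored keys
sum_rs = sum(w[r, s - r]
             for s in range(2, 2 * N)
             for r in range(max(1, s - N + 1), min(s, N)))
```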

We can then write $$ \eqalign{ & E\left( {{r \over {r + l}}} \right) = \left\langle {r/s} \right\rangle = \sum\limits_{l = 1}^\infty {\sum\limits_{r = 1}^\infty {\left( {{r \over {r + l}}} \right)P(r,l)} } = \sum\limits_{r = 1}^\infty {\sum\limits_{l = 1}^\infty {\left( {{r \over {r + l}}} \right)P(r,l)} } = \cr & = \sum\limits_{r = 0}^\infty {\sum\limits_{l = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right)P(r + 1,l + 1)} } = \cr & = \sum\limits_{s = 2}^\infty {\sum\limits_{r = 1}^{s - 1} {\left( {{r \over s}} \right)P(r,s)} } = \sum\limits_{s = 0}^\infty {\sum\limits_{r = 0}^s {\left( {{{r + 1} \over {s + 2}}} \right)P(r + 1,s + 2)} } \cr} \tag{a.1}$$

b) Upon rescaling the variables by $L$, for large values of $L$ (the same regime as in the preceding post) you can approximately treat $\tilde r=r/L,\; \tilde s =s/L$ as continuous, i.e. view the sum as a Riemann sum with steps $\Delta r/L, \; \Delta s/L$, and approximate it by the corresponding integral.
The multiplicative factor is absorbed into the approximation provided for $P(\tilde r, \tilde s)$, which integrates to $1$.
In passing to the integral, the small "deviations" in the integration bounds arising from the limits of the sum become negligible once divided by $L$.

c) Concerning the possibility to evaluate exactly the expected ratio $r/(r+l)$, we are going to have $$ \left\langle {{r \over {r + l}}} \right\rangle = \sum\limits_{r = 0}^\infty {\sum\limits_{l = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right)P(r + 1,l + 1)} } \tag{c.1}$$

Taking $P(r,l)$ to read as given in the paper $$ P(r,l) = 4pq{{\Gamma (L)} \over {\Gamma (L - 2p)}}{{\Gamma (L + l - 1 - 2p)} \over {\Gamma (L + l - 2q)}} {{\Gamma (L + l + r - 1 - 2q)} \over {\Gamma (L + l + r)}} \tag{c.2}$$ we get $$ \eqalign{ & \left\langle {{r \over {r + l}}} \right\rangle = \sum\limits_{l = 0}^\infty {\sum\limits_{r = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right)P(r + 1,l + 1)} } = \cr & = 4pq{{\Gamma (L)} \over {\Gamma (L - 2p)}}\sum\limits_{l = 0}^\infty {{{\Gamma (L + l - 2p)} \over {\Gamma (L + l + 1 - 2q)}} \sum\limits_{r = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right){{\Gamma (L + l + r + 1 - 2q)} \over {\Gamma (L + l + r + 2)}}} } \cr} \tag{c.3}$$

The inner sum becomes $$ \eqalign{ & \sum\limits_{r = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right){{\Gamma (L + l + r + 1 - 2q)} \over {\Gamma (L + l + r + 2)}}} = \cr & = \left( {{1 \over {l + 2}}} \right){{\Gamma (L + l + 1 - 2q)} \over {\Gamma (L + l + 2)}}\; {}_3F_{\,2} \left( {2,\,2 + l,\,L + l + 1 - 2q\;;\;3 + l,\,L + l + 2\;;\;1} \right) \cr} \tag{c.4}$$ and, although there might be some other ways to put it, I do not see an easy way to express the double sum.
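The identity (c.4) can be verified numerically by computing both sides along independent routes: the left side by direct summation of the gamma-ratio terms (via log-gamma), the right side as the prefactor times the ${}_3F_2$ series built by term recursion, both truncated at the same order. The values of $l$, $L$, $q$ below are illustrative choices.

```python
# Numerical check of (c.4): direct gamma-ratio summation vs. the
# prefactor times the 3F2(2, 2+l, A; 3+l, B; 1) series, A = L+l+1-2q, B = L+l+2.
from math import lgamma, exp

l, L, q = 3, 10.0, 0.4          # illustrative values
A, B = L + l + 1 - 2 * q, L + l + 2
R = 20000                        # common truncation order

# left side: direct summation of the terms (r+1)/(r+l+2) * Gamma(A+r)/Gamma(B+r)
lhs = sum((r + 1) / (r + l + 2) * exp(lgamma(A + r) - lgamma(B + r))
          for r in range(R))

# right side: hypergeometric partial sum via the term-ratio recursion
term, f = 1.0, 0.0
for r in range(R):
    f += term
    term *= (2 + r) * (2 + l + r) * (A + r) / ((3 + l + r) * (B + r) * (1 + r))
rhs = exp(lgamma(A) - lgamma(B)) / (l + 2) * f
```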

d) Passing to asymptotics, using for $P(r,l)$ the expression provided at the end of eq.(33) $$ P(r,l) \approx 4q\left( {1 - q} \right)L^{\,2q} \left( {L + r} \right)^{\,1 - 4q} \left( {L + r + l} \right)^{\,2q - 3} \tag{d.1}$$ we get $$ \eqalign{ & \left\langle {{r \over {r + l}}} \right\rangle = \sum\limits_{l = 0}^\infty {\sum\limits_{r = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right)P(r + 1,l + 1)} } = \cr & = 4q\left( {1 - q} \right)L^{\,2q} \sum\limits_{r = 0}^\infty {\left( {L + r + 1} \right)^{\,1 - 4q} \sum\limits_{l = 0}^\infty {\left( {{{r + 1} \over {r + l + 2}}} \right)\left( {L + r + l + 2} \right)^{\,2q - 3} } } = \cr & = 4q\left( {1 - q} \right)L^{\,2q} \sum\limits_{r = 0}^\infty {\left( {L + r + 1} \right)^{\,1 - 4q} \left( {r + 1} \right) \sum\limits_{l = 0}^\infty {{1 \over {\left( {r + l + 2} \right)\left( {L + r + l + 2} \right)^{\,3 - 2q} }}} } \cr} \tag{d.2}$$
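That (d.1) is indeed the large-$L$ limit of the exact $P(r,l)$ quoted in the question (with $p=1-q$) can be checked numerically by comparing the two logarithms; $q$, $L$, $r$, $l$ below are illustrative values, with $L$ taken large.

```python
# Check that the asymptotic form (d.1) agrees with the exact P(r,l)
# quoted in the question (with p = 1 - q) for large L.
from math import lgamma, log

q = 0.4                          # illustrative
p = 1 - q
L, r, l = 2000.0, 37, 58         # L large, r and l of moderate size

log_exact = (log(4 * p * q)
             + lgamma(L) - lgamma(L - 2 * q)
             + lgamma(L + r - 1 - 2 * q) - lgamma(L + r - 2 * p)
             + lgamma(L + l + r - 1 - 2 * p) - lgamma(L + l + r))
log_approx = (log(4 * q * (1 - q)) + 2 * q * log(L)
              + (1 - 4 * q) * log(L + r) + (2 * q - 3) * log(L + l + r))
```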

From here, starting from the inner sum, various approaches can be taken and evaluated against the expected error. This is not the place to conduct such an analysis, so I will limit myself to hinting at some possible approaches.

For instance, since $3-2q$ ranges from $1$ to $3$, we can expand the sum $$ \sum\limits_{k = 0}^\infty {{1 \over {\left( {k + a} \right)\left( {k + b} \right)^{\,n} }}} \,\;\left| {\;n = 1,2,3} \right. $$ into partial fractions, solve in terms of the polygamma functions $\psi^{(0)},\,\psi^{(1)},\,\psi^{(2)}$, and interpolate for the actual value of $3-2q$.
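For example, in the simplest case $n=1$ (with $a \ne b$) partial fractions give $$ \sum\limits_{k = 0}^\infty {{1 \over {\left( {k + a} \right)\left( {k + b} \right)}}} = {1 \over {b - a}}\sum\limits_{k = 0}^\infty {\left( {{1 \over {k + a}} - {1 \over {k + b}}} \right)} = {{\psi ^{(0)} (b) - \psi ^{(0)} (a)} \over {b - a}} $$ and the cases $n=2,3$ follow by differentiating with respect to $b$, which brings in $\psi^{(1)}$ and $\psi^{(2)}$.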

Otherwise we can replace the summand by its asymptotic series for $k \to \infty$, $$ \sum\limits_{k = a}^\infty {{1 \over {k\left( {k + b} \right)^{\,c} }}} = \sum\limits_{k = a}^\infty {\sum\limits_{j = 0}^\infty {{{\left( { - 1} \right)^j b^j c^{\,\overline {\,j\,} } } \over {j!k^{\,c + 1 + j} }}} } $$ truncate it appropriately, use the Hurwitz zeta function, and so on.
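Explicitly, swapping the order of summation and using $\sum_{k = a}^\infty k^{-s} = \zeta(s,a)$ for integer $a \ge 1$, the truncation at $j = J$ reads $$ \sum\limits_{k = a}^\infty {{1 \over {k\left( {k + b} \right)^{\,c} }}} \approx \sum\limits_{j = 0}^{J} {{{\left( { - 1} \right)^j b^j c^{\,\overline {\,j\,} } } \over {j!}}\,\zeta \left( {c + 1 + j,\;a} \right)} $$ which requires $b < a$ so that the binomial expansion of $(k+b)^{-c}$ converges for every $k \ge a$.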

e) If, as you requested, we want to work directly on the sum given in c), then we had better put it in the form $$ \eqalign{ & \left\langle {r/s} \right\rangle = \sum\limits_{s = 0}^\infty {\sum\limits_{r = 0}^s {\left( {{{r + 1} \over {s + 2}}} \right)P(r + 1,s + 2)} } = \cr & = 4pq{{\Gamma (L)} \over {\Gamma (L - 2p)}}\sum\limits_{s = 0}^\infty {{1 \over {s + 2}} {{\Gamma (L+s+1-2q)} \over {\Gamma (L+s+2)}} \sum\limits_{r = 0}^s {\left( {r + 1} \right){{\Gamma (L + s - r - 2 + 2q)} \over {\Gamma (L + s - r + 1 - 2q)}}} } = \cr & = 4\left( {1 - q} \right)q{{\Gamma (L)} \over {\Gamma (L - 2 + 2q)}} \sum\limits_{s = 0}^\infty {{1 \over {s + 2}}\left( {L + s + 2} \right)^{\,\overline {\, - 1 + 2q\,} } \sum\limits_{r = 0}^s {\left( {r + 1} \right)\left( {L + s - r + 1 - 2q} \right)^{\,\overline {\, - 3 + 4q\,} } } } \cr} \tag{e.1}$$ where $x^{\,\overline {\,a\,} }$ and $x^{\,\underline {\,a\,} } $ denote the rising and falling factorials, respectively.

Now the issue here is that the exponents involving $q$ are in general not integers. Therefore

  • for an "exact" approach I cannot devise anything other than the hypergeometric-function approach already described, for which the second sum remains difficult to express;

  • going asymptotically, for large $L$ and thus large values of the base argument of the rising factorial, I can devise only a couple of ways:

e.1) asymptotics of the Gamma function, using $$ \eqalign{ & z^{\,\overline {\,w\,} } \quad \left| \matrix{ \;\left| z \right| \to \infty \hfill \cr \;\left| {\arg (z + w)} \right| < \pi \hfill \cr} \right.\quad \propto \cr & z^{\,w} \left( {1 + {{w\left( {w - 1} \right)} \over {2z}} + {{w\left( {w - 1} \right)\left( {w - 2} \right)\left( {3w - 1} \right)} \over {24z^{\,2} }} + O\left( {w^{\,6} /z^{\,3} } \right)} \right) \cr} \tag{e.2}$$ which, when only the first term is taken, gives the asymptotics they report at the end of eq.(33), but only when applied to the $P(r,l)$ that you report in your post, not to the one in that same eq.(33).
So it seems you are right about this; I have not analyzed this aspect yet.
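The two-term truncation of (e.2) can be checked numerically against $z^{\,\overline{\,w\,}} = \Gamma(z+w)/\Gamma(z)$; the values $z=500$, $w=0.7$ below are illustrative.

```python
# Numerical check of the two-term expansion (e.2) for the rising factorial
# z^{overline{w}} = Gamma(z+w)/Gamma(z), at illustrative values.
from math import lgamma, exp

z, w = 500.0, 0.7
exact = exp(lgamma(z + w) - lgamma(z))
approx = z ** w * (1 + w * (w - 1) / (2 * z)
                   + w * (w - 1) * (w - 2) * (3 * w - 1) / (24 * z ** 2))
```

The omitted terms are $O(1/z^3)$, so the relative error should be far below $10^{-6}$ at these values.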

e.2) interpolation of solutions with integer exponents:
since for $q=0,1/4,1/2,3/4,1$ the factorial splits into a few terms, it might be possible to absorb the $(r+1)$ factor in front of it by partial fractions and express the inner sum with a few terms in $\psi$, to be interpolated afterwards.

However, everything depends on what your goal is: a better approximation?

f) Upon further investigation, the most promising approach seems to be the one derived from these integral representations of the falling and rising factorials: $$ x^{\,\underline {\,y\,} } \quad \left| \matrix{\;x,y \in \; \mathbb R \hfill \cr \; - 1 < y \hfill \cr} \right. = {{\Gamma (y + 1)} \over {2\,\pi }}\int_{ - \,\pi }^\pi {e^{\, - \,i\,y\,t} \left( {1 + e^{\,i\,t} } \right)^{\,x} dt} \tag{f.1}$$ derived from the representation of the binomial, and $$ z^{\,\overline {\, - w\,} } \quad \left| \matrix{ \;0 < {\mathop{\rm Re}\nolimits} (z) \hfill \cr \;0 < {\mathop{\rm Re}\nolimits} (w) \hfill \cr} \right.\quad = {1 \over {\Gamma (w)}}\int_{\,0}^{\,1} {t^{\,z - 2} \left( {{{1 - t} \over t}} \right)^{\,w - 1} dt} \tag{f.2}$$ derived from that of the Beta function.
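The representation (f.2) reduces, after collecting powers of $t$, to a Beta integral, and can be checked numerically against $z^{\,\overline{\,-w\,}} = \Gamma(z-w)/\Gamma(z)$ with plain composite Simpson quadrature; $z=8$, $w=2.5$ below are illustrative values chosen so the integrand is well behaved at both endpoints.

```python
# Numerical check of (f.2):
#   Gamma(z-w)/Gamma(z) = (1/Gamma(w)) * int_0^1 t^{z-2} ((1-t)/t)^{w-1} dt
from math import lgamma, exp, gamma

z, w = 8.0, 2.5                  # illustrative values

def integrand(t):
    # guard the endpoints, where the integrand vanishes in the limit
    return t ** (z - 2) * ((1 - t) / t) ** (w - 1) if 0 < t < 1 else 0.0

n = 20000                        # even number of Simpson subintervals
h = 1.0 / n
s = integrand(0) + integrand(1)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(i * h)
integral = s * h / 3

lhs = exp(lgamma(z - w) - lgamma(z))   # Gamma(z-w)/Gamma(z)
rhs = integral / gamma(w)
```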

Then, for instance, the inner sum in (e.1) becomes $$ \eqalign{ & \sum\limits_{r = 0}^s {\left( {r + 1} \right)\left( {L + s - r + 1 - 2q} \right)^{\,\overline {\, - \left( {3 - 4q} \right)\,} } } \quad \left| {\,q < 3/4} \right.\quad = \cr & = \sum\limits_{r = 0}^s {\left( {r + 1} \right){1 \over {\Gamma (3 - 4q)}}\int_{\,0}^{\,1} {t^{\,L + s - r + 1 - 2q} \left( {{{1 - t} \over t}} \right)^{\,2 - 4q - 1} {{dt} \over {t^{\,2} }}} } = \cr & = {1 \over {\Gamma (3 - 4q)}}\int_{\,0}^{\,1} {\left( {\sum\limits_{r = 0}^s {\left( {r + 1} \right)t^{\, - r - 2} } } \right) \left( {{{1 - t} \over t}} \right)^{\,2 - 4q - 1} t^{\,L + s + 1 - 2q} dt} = \cr & = {1 \over {\Gamma (3 - 4q)}}\int_{\,0}^{\,1} {{d \over {dt}}\left( {\sum\limits_{r = 0}^{s + 1} {t^{\, - r} } } \right) \left( {{{1 - t} \over t}} \right)^{\,1 - 4q} t^{\,L + s + 1 - 2q} dt} = \cr & = {1 \over {\Gamma (3 - 4q)}}\int_{\,0}^{\,1} {\left( {{{\left( {s + 2} \right)t - 1 - s - t^{\,s + 2} } \over {t^{\,s + 2} \left( {1 - t} \right)^{\,2} }}} \right)\left( {{{1 - t} \over t}} \right)^{\,1 - 4q} t^{\,L + s + 1 - 2q} dt} = \cr & = - {1 \over {\Gamma (3 - 4q)}}\int_{\,0}^{\,1} {{{\left( {\left( {s + 1} \right) - \left( {s + 2} \right)t + t^{\,s + 2} } \right)t^{\,L + 2q - 2} } \over {\left( {1 - t} \right)^{\,1 + 4q} }}dt} \cr} $$ Since the exponent of $(1-t)$ is negative, it seems that we cannot convert the integral to a combination of Beta functions: to be checked. However we can take due advantage of the fact that $0 \le t \le 1$ to develop the integral in series.
Similarly for the outer summation.

G Cab
  • I understand the change of variables in (a). However, when you calculate $E\left( \frac{r}{r+l}\right),$ why do we switch from $P(r,l)$ to $P(r,s)?$ I thought we have formula for $P(r,l)$ but not $P(r,s)?$ Possible to have exact calculation for $E\left( \frac{r}{r+l}\right)?$ – Idonknow Oct 26 '18 at 03:07
  • @Idonknow: added note to a) and para. c) to answer to your comment above – G Cab Oct 26 '18 at 13:57
  • @ G Cab at eq $(36)$, the authors manage to obtain an equation in an asymptotically manner, that is, the equal sign in the equation refers to asymptotically equal. Would that equation help in some way for your double sums to go through? – Idonknow Oct 26 '18 at 15:10
  • @Idonknow: they obtain eq.(36) by integration. Passing to asymptotics, the approximation they give at the end of eq.(33) could be used in the sum. – G Cab Oct 26 '18 at 15:54
  • @G Cab I see. Is it possible to obtain similar result using series provided above? – Idonknow Oct 28 '18 at 08:41
  • @Idonknow: added para. d) to this regard – G Cab Oct 28 '18 at 16:46
  • sorry, there was a typo in formulas in c) : amended – G Cab Oct 28 '18 at 17:34
  • @ G Cab at the beginning of paragraph (d), you used asymptotic approximation of $P(r,l)$, is it possible that we do not use the approximation and conduct analysis on the original $P(r,l)$? – Idonknow Oct 28 '18 at 22:32
  • @G Cab In paragraph (d), you stated that various approaches can be used to evaluate the expected error. Possible to show one approach on how to calculate the error? Thanks. – Idonknow Oct 29 '18 at 01:01
  • @Idonknow: added e) – G Cab Oct 30 '18 at 01:10
  • @G Cab In e.1), you mentioned the following:

    'when taking the first term only would give the asymptotics reported by them at the end of eq.(33), but when applied to the P(r,l) that you report here in your post, and not that in the same eq.(33). So it seems you are right for this: I did not analyze this aspect yet.'

    Do you mean that my $P(r,l)$ reported in my post can give us correct asymptotic equation in the paper?

    – Idonknow Oct 30 '18 at 02:42
  • @G Cab Also, regarding the last question in your answer above, I would like to obtain eq (36) in the paper. Is it possible? – Idonknow Oct 30 '18 at 02:43
  • @G Cab I just realize, in (c), what is $_3 F_2?$ – Idonknow Oct 30 '18 at 03:15
  • @G Cab I found Appell series, (https://en.wikipedia.org/wiki/Appell_series) which is a double hypergeometric series. Would it help for exact form of the series to obtain eq(36)? – Idonknow Oct 30 '18 at 05:10
  • @Idonknow: 1) yes, (e.2) applied to your $P(r,l)$ gives (d.1) 2) I deem the best way is through integrals as they did 3) it is a generalized hypergeometric 4) Appell series is really interesting..let's see if we can fit it in – G Cab Oct 30 '18 at 10:23
  • @G Cab 2) possible to show me how integration leads us to eq 36 of the paper? 4) it would be very interesting if we can use Appell series to obtain eq 36. – Idonknow Oct 30 '18 at 11:38
  • @Idonknow: wandering around the pros&cons of various approaches I finally arrived to the Integral Representation: this looks to me promising for many reasons. So sorry, I left out (for the moment at least) the other possibilities. – G Cab Oct 30 '18 at 15:55
  • @G Cab after representing rising factorial as integration, I fail to understand why we put the summation inside the integral. In other words, I do not know how to evaluate the integral of the sum. May you be able to give me some hints? – Idonknow Oct 30 '18 at 22:51
  • that is just the derivative of $(t - 1/t^{s + 1})/(t - 1)$: finally we get a rational expression with $t$ and $(1-t)$ at non-integer exponents. We cannot find the integral directly (too nice!), but we shall wisely manipulate the fraction and expand it into a series in $t$ (or $t-1/2$), and we should also be able to find an $O(t^m)$ to bound the error. – G Cab Oct 31 '18 at 01:05
  • @Idonknow: thanks: the problem is interesting and I will continue to look if I can find a better solution – G Cab Nov 01 '18 at 01:28
  • @G Cab I offer another bounty for obtaining a closed form formula for the average ratio at another post. https://math.stackexchange.com/questions/2977213/simplify-sum-l-0-infty-sum-r-0-infty-frac-gammalr-2q-gammalr-1 If you are interested, you can try to solve it. – Idonknow Nov 01 '18 at 07:11