Math Genius: Why does $\sum \frac{1}{n^{1 + \epsilon}}$ converge?

The proof that the infinite sum of $\frac{1}{n}$ diverges seems to have a fair amount of breathing room. We group successive terms in quantities of increasing powers, starting with $\frac{1}{2}$, then $\frac{1}{3} + \frac{1}{4}$, then the next four terms, then the next eight terms, etc., and note that each of the groups is greater than or equal to $\frac{1}{2}$, and adding $\frac{1}{2}$ forever approaches $\infty$.

As extra credit for this proof, each group after the first is strictly greater than $\frac{1}{2}$, so the divergence actually occurs faster. Furthermore, we didn’t even need the terms to be that large; adding $\frac{1}{1,000,000}$ forever would also approach $\infty$. Why then, given this generous cushion in the proof, is it the case that $\frac{1}{n^{1 + \epsilon}}$ for some tiny $\epsilon$ converges? Why is the power of $n$ so fragile to nudges in the positive direction given how firmly $\frac{1}{n}$ seemed to diverge?

To address the question why a small $\epsilon$ is enough, note that analogously to the proof of divergence for the harmonic series, we can say that

$$\frac1{2^{1+\epsilon}} \ge \frac1{2^{1+\epsilon}}, $$

$$\frac1{3^{1+\epsilon}} + \frac1{4^{1+\epsilon}} \ge 2\cdot\frac1{4^{1+\epsilon}} = \frac1{2^{1+2\epsilon}}, $$

$$\frac1{5^{1+\epsilon}} + \frac1{6^{1+\epsilon}} + \frac1{7^{1+\epsilon}} + \frac1{8^{1+\epsilon}}\ge 4\cdot\frac1{8^{1+\epsilon}} = \frac1{2^{1+3\epsilon}}, $$

and so on. Note that the terms on the right-hand side are no longer constant as they were for the harmonic series; instead they form a geometric progression with ratio $\frac1{2^\epsilon}$. When $\epsilon$ is small, that value is just a tiny bit smaller than $1$.

Still, any geometric sequence with ratio $< 1$ will converge to $0$, even if slowly. That means that if we take the sum of the right-hand sides, it is no longer an infinite sum of $\frac12$ that diverges, but a geometric series that does converge!

So the main “problem” in translating the proof to $\epsilon>0$ is that our diverging minorant for the harmonic series no longer diverges!

In essence it is just the difference that $\sum_{i=0}^{\infty}\frac12$ diverges, while $\sum_{i=0}^{\infty}\frac1{2^{1+i\epsilon}}$ converges.

This insight allows you to actually prove that $\sum_{n=1}^{\infty}\frac1{n^{1+\epsilon}}$ converges without the integral methods mentioned in other answers.

That’s because

$$\frac1{3^{1+\epsilon}} + \frac1{4^{1+\epsilon}} \le 2\cdot\frac1{2^{1+\epsilon}} = \frac1{2^{\epsilon}}, $$

$$\frac1{5^{1+\epsilon}} + \frac1{6^{1+\epsilon}} + \frac1{7^{1+\epsilon}} + \frac1{8^{1+\epsilon}}\le 4\cdot\frac1{4^{1+\epsilon}} = \frac1{2^{2\epsilon}}, $$

$$\frac1{9^{1+\epsilon}} + \frac1{10^{1+\epsilon}} + \frac1{11^{1+\epsilon}} + \frac1{12^{1+\epsilon}} +\frac1{13^{1+\epsilon}} + \frac1{14^{1+\epsilon}} + \frac1{15^{1+\epsilon}} + \frac1{16^{1+\epsilon}} \le 8\cdot\frac1{8^{1+\epsilon}} = \frac1{2^{3\epsilon}}, $$

and so on.

Now our series has a convergent majorant $\sum_{i=0}^{\infty}\frac1{2^{i\epsilon}}$, so it converges itself.
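If you want to see these dyadic bounds in action, here is a minimal numerical sketch (the variable names are mine, not from the answer above): each block of $2^i$ consecutive terms is squeezed between the minorant term $2^{-(1+(i+1)\epsilon)}$ and the majorant term $2^{-i\epsilon}$ derived above.

```python
# Sketch: squeeze each dyadic block of sum 1/n^(1+eps) between the
# geometric minorant and majorant terms from the argument above.
eps = 0.1
for i in range(10):
    block = sum(n ** -(1 + eps) for n in range(2**i + 1, 2**(i + 1) + 1))
    minorant = 2.0 ** -(1 + (i + 1) * eps)  # lower-bound term
    majorant = 2.0 ** -(i * eps)            # upper-bound term
    print(f"block {i}: {minorant:.4f} <= {block:.4f} <= {majorant:.4f}")
```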

I will address the last paragraph of your post. As long as $\epsilon$ is a fixed positive quantity, the series will converge. This can be seen with the integral test: $1/x$ integrates to $\ln x$, and as $x$ goes to infinity the integral, and thus the series, diverges. But if the exponent is more than $1$, the power of $x$ integrates to another (negative) power of $x$. I leave it up to you to figure out why that implies convergence, because this essentially answers the last part of your question. Lastly (and not unimportantly!) a note on the word “fixed”. If $\epsilon$ is not a fixed positive quantity, but variable, then the series may be divergent. For example, the series $\sum\frac{1}{n^{1+1/n}}$ has a “variable” exponent, and the exponent is always more than $1$; yet this series turns out to be divergent.
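To make the variable-exponent example concrete: $n^{1/n}$ is bounded (it tends to $1$, and never exceeds $e^{1/e} < 2$), so $\frac{1}{n^{1+1/n}} = \frac{1}{n \cdot n^{1/n}} > \frac{1}{2n}$, and the series diverges by comparison with the harmonic series. A quick sketch of this:

```python
# Sketch: n^(1/n) -> 1, so the terms of sum 1/n^(1+1/n) behave like 1/n.
for n in (10, 100, 10_000, 1_000_000):
    print(n, n ** (1 / n), 1 / n ** (1 + 1 / n), 1 / n)
```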

Also, the integral test can be used to show that
$$\sum \dfrac1{n\ln \ln \dots \ln(n)}$$
diverges for any fixed number of nested $\ln$s. This is because, if we define $\ln_0(n) = 1$ and $\ln_{k+1}(n) = \ln(\ln_k(n))$, then
$$(\ln_k(x))' = \dfrac1{x\prod_{j=1}^{k-1}\ln_{j}(x)}.$$

Proof.

$$(\ln_1(x))' = (\ln(x))' = \dfrac1{x}$$

and

$$(\ln_2(x))' = (\ln(\ln(x)))' = (\ln(x))'\,\dfrac1{\ln(x)} = \dfrac1{x\ln(x)}.$$

If $(\ln_k(x))' = \dfrac1{x\prod_{j=1}^{k-1}\ln_{j}(x)}$, then

$$\begin{aligned}
(\ln_{k+1}(x))' &= (\ln(\ln_k(x)))'\\
&= (\ln_k(x))'\,\dfrac1{\ln_k(x)}\\
&= \dfrac1{x\prod_{j=1}^{k-1}\ln_{j}(x)\,\ln_k(x)}\\
&= \dfrac1{x\prod_{j=1}^{k}\ln_{j}(x)}.
\end{aligned}$$

Since $\ln_k(x) \to \infty$ as $x \to \infty$ for any fixed $k$,
$$\int \dfrac{dx}{x\prod_{j=1}^{k-1}\ln_{j}(x)} = \ln_k(x) \to \infty$$
as $x \to \infty$, so
$$\sum \dfrac1{n\prod_{j=1}^{k-1}\ln_{j}(n)}$$
diverges by the integral test.

You can similarly show that
$$\sum \dfrac1{n\prod_{j=1}^{k-1}\ln_{j}(n)\,\ln_{k}^{1+\epsilon}(n)}$$
converges for any fixed $k$ and $\epsilon > 0$.
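As a sanity check on the derivative formula, one can compare a finite-difference quotient of $\ln_k$ against $1/\bigl(x\prod_{j=1}^{k-1}\ln_j(x)\bigr)$. The helper below is a sketch of mine, not part of the answer:

```python
import math

def ln_k(x, k):
    """Apply log k times: ln_k(x) = ln(ln(...ln(x)...))."""
    for _ in range(k):
        x = math.log(x)
    return x

# Compare a central difference of ln_k with the claimed closed form.
x, h = 1e6, 1e-3
for k in range(1, 4):
    numeric = (ln_k(x + h, k) - ln_k(x - h, k)) / (2 * h)
    closed = 1 / (x * math.prod(ln_k(x, j) for j in range(1, k)))
    print(k, numeric, closed)  # the two columns should agree closely
```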

Math Genius: Summation for $\sum_{n_1\geq 1}\sum_{n_2\geq 1}\sum_{n_3\geq 1} \frac{\cos 2\pi n_1x_1\cos 2\pi n_2x_2\cos 2\pi n_3x_3}{n_1^2+n_2^2+n_3^2}$

This question is closely related to this question I posted before, but I will state it concisely. In fact, I want to decompose
$$\sum_{n_1\geq 1}\sum_{n_2\geq 1}\sum_{n_3\geq 1} \frac{\cos 2\pi n_1x_1\cos 2\pi n_2x_2\cos 2\pi n_3x_3}{n_1^2+n_2^2+n_3^2} =F(x_1,x_2,x_3)+R(x_1,x_2,x_3)$$
where $F(x_1,x_2,x_3)$ is a combination of some elementary functions, and $R(x_1,x_2,x_3)$ is a smooth function, i.e. a $C^{\infty}(\mathbb{R}^3)$ function.

By invoking
\begin{equation}
\sum_{n_2\geq 1}\frac{\cos 2\pi n_2 x_2}{n_1^2+n_2^2}=\left\{
\begin{aligned}
& -\frac{1}{2n_1^2}+\frac{\pi}{2n_1}\frac{\cosh \pi(2x_2-1)n_1}{\sinh \pi n_1} & n_1\neq 0\\
&\frac{\pi^2}{6}-\pi^2x_2+\pi^2x_2^2 & n_1=0
\end{aligned}\right.
\end{equation}

$$\sum_{n_1\geq 1}\frac{\cos 2\pi n_1x_1}{n_1}e^{-2\pi n_1x_2}=\pi x_2-\log 2-\frac{1}{2}\log (\sinh^2 \pi x_2 +\sin^2\pi x_1)$$

The first identity is proved by a contour integral, i.e.
$$\sum_{n_2\geq 1}\frac{\cos 2\pi n_2 x_2}{n_1^2+n_2^2}=\frac{1}{2}\sum_{n_2 \in \mathbb{Z}}\frac{\cos 2\pi n_2 x_2}{n_1^2+n_2^2}-\frac{1}{2n_1^2}$$
and by the residue theorem
$$\sum_{n_2 \in \mathbb{Z}}\frac{\cos 2\pi n_2 x_2}{2\pi i(n_1^2+n_2^2)}+\operatorname{Re}\left[\text{residue of }\frac{e^{i2\pi x_2z}}{(n_1^2+z^2)(e^{i2\pi z}-1)} \text{ at zeros of }n_1^2+z^2 \right]=\int_{C_n} \frac{e^{i2\pi x_2z}}{(n_1^2+z^2)(e^{i2\pi z}-1)}dz \rightarrow 0.$$
The second identity is simply the Taylor expansion of $\log(1+z)$.
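As a quick numerical sanity check on the second identity (a sketch of mine, not from the question; the partial sums converge rapidly for $x_2 > 0$):

```python
import math

# Spot-check: sum cos(2 pi n x1) e^(-2 pi n x2) / n against the closed form.
x1, x2 = 0.3, 0.7
lhs = sum(math.cos(2 * math.pi * n * x1) * math.exp(-2 * math.pi * n * x2) / n
          for n in range(1, 200))
rhs = (math.pi * x2 - math.log(2)
       - 0.5 * math.log(math.sinh(math.pi * x2) ** 2
                        + math.sin(math.pi * x1) ** 2))
print(lhs, rhs)  # should agree to many decimal places
```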

Now we calculate
\begin{equation}
\begin{aligned}
\sum_{n_1\geq 1}\cos 2\pi n_1x_1&\sum_{n_2\geq 1}\cos 2\pi n_2x_2
\sum_{n_3\geq 1}\frac{\cos 2\pi n_3x_3}{n_1^2+n_2^2+n_3^2} \\
= &-\frac{1}{2}\sum_{n_1\geq 1}\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{n_1^2+n_2^2} \\
& +\frac{\pi}{2} \sum_{n_1\geq 1}\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{\sqrt{n_1^2+n_2^2}}\frac{\cosh \pi(2x_3-1)\sqrt{n_1^2+n_2^2}}{\sinh \pi \sqrt{n_1^2+n_2^2}} \\
= & G_1+\frac{\pi}{2}G_2
\end{aligned}
\end{equation}

$G_1$ is easily handled. Now calculate $G_2$:
\begin{equation}
\begin{aligned}
\sum_{n_1\geq 1}&\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{\sqrt{n_1^2+n_2^2}}\frac{\cosh \pi(2x_3-1)\sqrt{n_1^2+n_2^2}}{\sinh \pi \sqrt{n_1^2+n_2^2}} \\
= &\sum_{n_1\geq 1}\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{\sqrt{n_1^2+n_2^2}}\left(\frac{\cosh \pi(2x_3-1)\sqrt{n_1^2+n_2^2}}{\sinh \pi \sqrt{n_1^2+n_2^2}}-e^{-2\pi \sqrt{n_1^2+n_2^2}x_3} \right) \\
&+\sum_{n_1\geq 1}\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{\sqrt{n_1^2+n_2^2}}e^{-2\pi \sqrt{n_1^2+n_2^2}x_3} \\
= &G_3+G_4
\end{aligned}
\end{equation}

The terms in $G_3$ decay exponentially, so $G_3$ is smooth. Now my question is how to calculate $G_4$?
$$G_4=\sum_{n_1\geq 1}\cos 2\pi n_1x_1\sum_{n_2\geq 1}\frac{\cos 2\pi n_2x_2}{\sqrt{n_1^2+n_2^2}}e^{-2\pi \sqrt{n_1^2+n_2^2}x_3}$$

Math Genius: Sums of binomials with increasing powers

I am curious if there is a way to simplify the following:

\begin{align}
& 1+a+(a+b)^2+(a+b)^3+\cdots+(a+b)^n \\[8pt]
= {} &1+a+\sum_{k=0}^2 {2 \choose k} a^kb^{2-k} +\sum_{k=0}^3 {3 \choose k}a^kb^{3-k}+\cdots
\end{align}

Let
$$S=1+(a+b)+(a+b)^2 + \dots + (a+b)^n$$
so the required sum is $S-b$. Now
$$S(a+b)=(a+b)+(a+b)^2 + \dots + (a+b)^n+(a+b)^{n+1}$$
$$S(a+b)-S=(a+b)^{n+1}-1 \iff S=\frac{(a+b)^{n+1}-1}{a+b-1}$$
So the required sum is
$$\frac{(a+b)^{n+1}-1}{a+b-1}-b$$
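A quick symbolic check of this closed form for a small sample $n$ (a sketch of mine; it assumes $a+b \neq 1$, which the geometric-series formula requires):

```python
from sympy import symbols, simplify

a, b = symbols('a b')
n = 5  # small sample exponent for the check
direct = 1 + a + sum((a + b) ** k for k in range(2, n + 1))
closed = ((a + b) ** (n + 1) - 1) / (a + b - 1) - b
print(simplify(direct - closed))  # expect 0
```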

Math Genius: Probability of different winner in two voting systems

There are two methods of election that I often hear about: first past the post (whichever candidate gets the most votes wins) and the two-round system. In the two-round system, the top two candidates advance to the next round, where a simple majority then decides the winner. I am looking for the probability that these two systems will elect different people.

I am making several assumptions:

$1$. Voters choose who to cast their vote for randomly.

$2$. Voters will vote for the same candidate in both systems.

$3$. If the candidate a voter votes for advances into the second round of the two-round system, then they will cast their vote for the same person. If their candidate does not advance, then they will randomly choose one of the two remaining candidates.

Let’s say there are $n$ candidates and $m$ voters, $m > n > 2$, although I am most interested in the limit of the probability as $m$ approaches $\infty$ for a fixed $n$. The first round for both systems will necessarily have to be the same. Let $P_i$ be the candidate with the $i$th most votes in the first round and $v_i$ be the associated number of votes. The elected person will be different if and only if $(1)$ $v_1 < \frac{m}{2}$ and $(2)$ $P_2$ wins overall (i.e. beats $P_1$) in the two-round system.

Each of the rest of the votes ($N = m - v_1 - v_2$) will be randomly parceled out to $P_1$ and $P_2$. $P_2$ will win if $v_2 + a > \frac{m}{2}$, where $a$ is the number of votes that $P_2$ gets from the “kicked out” candidates from the first round. Rearranging the inequality yields that $$a > \frac{m}{2} - v_2$$

Since $a$ follows a binomial distribution, the probability of this happening is $$\sum_{a = \frac{m}{2} - v_2}^{N} \binom{N}{a} \left(\frac{1}{2}\right)^{N} \tag 1$$

This means that $P_{n, m}(\text{different} \mid v_1, v_2) = \sum_{a = \frac{m}{2} - v_2}^{N} \binom{N}{a} \left(\frac{1}{2}\right)^{N}$. This can be approximated by the normal distribution.

Now I need to find the joint distribution of $v_1, v_2$, which is the part where I am getting stuck. Once I have that joint distribution $f_{n, m}(v_1, v_2)$, I think I can do $$\sum_{v_1=m/n}^{m} \sum_{v_2 = \frac{m-v_1}{n-1}}^{\min(m-v_1, v_1)} \left( f_{n, m}(v_1, v_2)\sum_{a = \frac{m}{2} - v_2}^{m - v_1 - v_2} \binom{m - v_1 - v_2}{a} \left(\frac{1}{2}\right)^{m - v_1 - v_2} \right)$$ in order to get the required probability, but I am not completely sure. Any guidance on solving this problem?

Edit: As $m \to \infty$, I can use the normal distribution to get $$\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{v_2-v_1+1}{\sqrt{2(m-v_1-v_2)}}\right)\right)$$ for the inner sum.

Let $V_i = v_i/m$ be the portion of voters $P_i$ gets. I can rewrite the double sum using integrals to get $$\int_{\frac{1}{n}}^{1}\int_{\frac{1-V_{1}}{n-1}}^{\min\left(1-V_{1}, V_{1}\right)}f_{n}\left(V_{1}, V_{2}\right) \cdot\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{mV_{2}-mV_{1}+1}{\sqrt{2\left(m-mV_{1}-mV_{2}\right)}}\right)\right)dV_{2}\,dV_{1}$$

As $m \to \infty$, the argument of the $\operatorname{erf}$ function will approach $-\infty$, so $\operatorname{erf}$ will approach $-1$. This seems to suggest that the whole integral will be $0$, which doesn’t make sense to me. I was instead expecting a positive probability. Can anyone explain whether I messed up somewhere or if $0$ is the actual probability for all $n$ as $m \to \infty$?
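One cheap way to probe the question is a Monte Carlo simulation for moderate $n$ and $m$ under the stated assumptions. This is my own sketch (the function and variable names are hypothetical); ties are resolved arbitrarily, which is fine for an estimate:

```python
import random
from collections import Counter

def different_winner_prob(n, m, trials=20_000):
    """Estimate P(plurality winner differs from two-round winner)."""
    diff = 0
    for _ in range(trials):
        votes = [random.randrange(n) for _ in range(m)]
        tally = Counter(votes)
        if len(tally) < 2:
            continue
        (p1, v1), (p2, v2) = tally.most_common(2)
        # Runoff: supporters of eliminated candidates choose at random.
        r1, r2 = v1, v2
        for cand, v in tally.items():
            if cand not in (p1, p2):
                to_p1 = sum(random.random() < 0.5 for _ in range(v))
                r1, r2 = r1 + to_p1, r2 + (v - to_p1)
        if r2 > r1:
            diff += 1
    return diff / trials

print(different_winner_prob(n=4, m=101))
```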

Math Genius: Entropy of partitioning for set of all $\binom{n}{k}$ combinations

Let $\binom{[n]}{k}$ represent the set of all $k$-combinations of the set $[n]=\{1,\dotsc,n\}$. We will use the notation:
$$\{x_1, \dotsc, x_k\} \in \binom{[n]}{k},$$
where $x_1 < x_2 < \dotsb < x_k$. Let $k$ be odd, and let $t = \left\lfloor \frac{k}{2}\right\rfloor$. One way to partition this set is to say all elements with the same values of $x_{2i},\; i=1,\dotsc, t$ belong to the same part.

There are a total of $\binom{n-t-1}{t}$ parts in this partitioning. To see this, we let $x_{2i} - i = d_i,\; i=1,\dotsc,t$. We must have that $x_{2i} \geq x_{2(i-1)}+2$, and $x_{2i} \in \{2, \dotsc, n-1\}$. From these conditions, any set of $d_i$’s must satisfy $0 < d_1 < d_2 < \dotsb < d_t < n-t$. Clearly there are $\binom{n-t-1}{t}$ ways this inequality can be satisfied, which is equal to the number of parts.

For each part in the partition, indexed by $\mathbf{d} = \{d_1, \dotsc, d_t\}$, let the number of elements in that part be $c(\mathbf{d})$. We can show by a simple counting argument that

$$c(\mathbf{d}) = d_1(d_2-d_1)\dotsm(d_t - d_{t-1})(n-t-d_t),$$
which has the form of a Vandermonde determinant. My question is, if we let $p(\mathbf{d}) = \frac{c(\mathbf{d})}{\binom{n}{k}},$
is it possible to analytically calculate the following sum?
$$\sum\limits_{0 < d_1 < \dotsb < d_t<n-t} p(\mathbf{d})\log\left( p(\mathbf{d}) \right)$$

I’ve tried myself, but there doesn’t seem to be any obvious way to do this. Furthermore, if there is no easy way to analytically compute this sum, is there an easy bound for it in terms of $n,k,t$? Obviously, there is the simple lower bound $-\log \left( \binom{n-t-1}{t}\right)$, but I’m wondering if there is anything better, which makes use of the structure, and not just the number of parts in the partition.

Somewhat loosely related to this question (Minimum number of $k$-partitions of a set of size $n$ to enumerate all $\binom{n}{k}$ combinations).

$$ \begin{array}{lll}
\displaystyle \sum_{\mathbf{d}} p(\mathbf{d})\ln p(\mathbf{d}) & = &
\displaystyle \sum_{\mathbf{d}}\frac{c(\mathbf{d})}{\binom{n}{k}}\ln\frac{c(\mathbf{d})}{\binom{n}{k}} \\[10pt]
& = & \displaystyle \binom{n}{k}^{-1} \sum_{\mathbf{d}} \left[ c(\mathbf{d})\ln c(\mathbf{d})-c(\mathbf{d})\ln\binom{n}{k}\right] \\[10pt]
& = & \displaystyle \binom{n}{k}^{-1}\left[-\binom{n}{k}\ln\binom{n}{k}+\sum_{\mathbf{d}}c(\mathbf{d})\ln c(\mathbf{d})\right] \\[10pt]
& = & \displaystyle -\ln\binom{n}{k}+\binom{n}{k}^{-1}\sum_{\mathbf{d}}c(\mathbf{d})\ln c(\mathbf{d})
\end{array} $$

Introduce the change of variables $c(\mathbf{d})=b_1\cdots b_tb_{t+1}$ (and later $\mathbf{b}'=(b_1,\cdots,b_t)$) so

$$ \begin{array}{lll}
\displaystyle \sum_{\mathbf{d}}c(\mathbf{d})\ln c(\mathbf{d}) & = &
\displaystyle \sum_{\mathbf{b}}b_1\cdots b_{t+1} \ln(b_1\cdots b_{t+1}) \\[10pt]
& = & \displaystyle \sum_i \sum_{\mathbf{b}} b_1\cdots b_{t+1}\ln b_i \\[10pt]
& = & \displaystyle (t+1)\sum_{\mathbf{b}} b_1\cdots b_{t+1}\ln b_{t+1} \\[10pt]
& = & \displaystyle (t+1)\sum_{b_{t+1}=1}^{n-t} \left(\sum_{\mathbf{b}'}b_1\cdots b_t\right)b_{t+1}\ln b_{t+1}
\end{array} $$

Evaluate the inner sum generatingfunctionologically (set $k=b_{t+1}$):

$$ \begin{array}{lll}
\displaystyle \sum_{\mathbf{b}'}b_1\cdots b_t & = &
\displaystyle [x^{n-t-k}]\left(\sum_{b_1\ge1} b_1x^{b_1}\right)\cdots\left(\sum_{b_t\ge1}b_tx^{b_t}\right) \\[10pt]
& = & \displaystyle [x^{n-t-k}](1x+2x^2+\cdots)^t \\[10pt]
& = & \displaystyle [x^{n-t-k}]\left(\frac{x}{(1-x)^2}\right)^t \\[10pt]
& = & \displaystyle [x^{n-2t-k}](1-x)^{-2t} \\[10pt]
& = & \displaystyle \binom{-2t}{n-2t-k}(-1)^{n-k}
\end{array} $$

Therefore

$$ \sum_{\mathbf{d}} p(\mathbf{d})\ln p(\mathbf{d})=-\ln\binom{n}{k}+\frac{t+1}{\binom{n}{k}}\sum_{k=1}^{n-t} \binom{-2t}{n-2t-k}(-1)^{n-k} k\ln k $$

I’m not sure what else can be done besides expressing it as a rational combination of $\ln$s.

I’m not sure this is useful for getting an estimate either.
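For what it’s worth, the closed form can be brute-force checked for small parameters. This is my own sketch; it rewrites the binomial via $\binom{-2t}{j}(-1)^{j} = \binom{2t+j-1}{j}$ (with $j = n-2t-b$, and $(-1)^{n-b} = (-1)^j$ since $2t$ is even) so that only nonnegative binomials appear:

```python
from itertools import combinations
from math import comb, log

n, k = 9, 5            # small sample parameters (k odd)
t = k // 2

# Brute force: group the k-combinations by their even-position entries.
counts = {}
for c in combinations(range(1, n + 1), k):
    key = c[1::2]      # (x_2, x_4, ..., x_{2t})
    counts[key] = counts.get(key, 0) + 1

C = comb(n, k)
lhs = sum(v / C * log(v / C) for v in counts.values())

# Closed form, with binom(-2t, n-2t-b)(-1)^(n-b) = comb(n-b-1, n-2t-b);
# terms with b > n-2t vanish, so the range is shortened accordingly.
rhs = -log(C) + (t + 1) / C * sum(
    comb(n - b - 1, n - 2 * t - b) * b * log(b)
    for b in range(1, n - 2 * t + 1))
print(lhs, rhs)  # the two values should match
```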

Math Genius: Is there a solution to this? $f_0(n)=2n-1,\ f_{k+1}(n)=\sum_{i=1}^nf_k(i)$

After developing a formula, I came up against this:

$$f_0(n)=2n-1,\\
f_{k+1}(n)=\sum_{i=1}^nf_k(i)$$

So, for example:
$$f_1(n)= n^2\\
f_2(n) = \frac{n(n+1)(2n+1)}{6}$$

Can we find $f_n(n)$?

Thanks to Jens Renders for his advice to rewrite it in a more logical way.

Applying the Hockey-Stick Identity repeatedly gives
$$
f_1(n)=\sum_{k=1}^nf_0(k)=\binom{n}{2}+\binom{n+1}{2}
$$

$$
f_2(n)=\sum_{k=1}^nf_1(k)=\binom{n+1}{3}+\binom{n+2}{3}
$$

$$
f_3(n)=\sum_{k=1}^nf_2(k)=\binom{n+2}{4}+\binom{n+3}{4}
$$

Following this pattern, we get
$$
f_k(n)=\binom{n+k-1}{k+1}+\binom{n+k}{k+1}
$$

Thus, setting $k=n$, we get
$$
f_n(n)=\binom{2n-1}{n+1}+\binom{2n}{n+1}
$$
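A quick check of this closed form against the recurrence (a sketch; the recursive evaluation is deliberately naive, since only small $n$ are needed):

```python
from math import comb

def f(k, n):
    """Evaluate f_k(n) directly from the recurrence."""
    if k == 0:
        return 2 * n - 1
    return sum(f(k - 1, i) for i in range(1, n + 1))

for n in range(1, 7):
    closed = comb(2 * n - 1, n + 1) + comb(2 * n, n + 1)
    print(n, f(n, n), closed)  # the two columns should agree
```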

Math Genius: Sum of the form $r+r^2+r^4+\dots+r^{2^k} = \sum_{i=0}^k r^{2^i}$

I am wondering if there exists any formula for the following power series:

$$S = r + r^2 + r^4 + r^8 + r^{16} + r^{32} + \dots + r^{2^k}$$

Is there any way to calculate the sum of the above series (if $k$ is given)?

A formula for $$r+r^2+r^4+\cdots+r^{2^k}$$ is $$r+r^2+r^4+\cdots+r^{2^k}$$ You can calculate the sum of the series, if $k$ and $r$ are given, by calculating the sum of the series, that is, by computing the individual terms and adding them together.

Presumably, what is desired is a closed form formula for the sum, in terms of familiar functions such as logarithms and exponentials and sines and cosines and powers and roots, and a more efficient way to calculate it. As indicated in the comments, no such formula is known. I don’t know whether it is possible to prove that no such formula exists, but considering how elementary the question is, I am confident that if there were a nice formula for it, Euler would have found it 250 years ago, and we’d all know about it now.

I haven’t been able to obtain a closed form expression for the sum, but maybe you or someone else can do something with what follows.

In Blackburn’s paper (reference below) there are some manipulations involving the geometric series

$$1 \; + \; r^{2^n} \; + \; r^{2 \cdot 2^n} \; + \; r^{3 \cdot 2^n} \; + \; \ldots \; + \; r^{(m-1) \cdot 2^n}$$

that might give some useful ideas to someone. However, thus far I haven’t found the identities or the manipulations in Blackburn’s paper to be of any help.

Charles Blackburn, Analytical theorems relating to geometrical series, Philosophical Magazine (3) 6 #33 (March 1835), 196-201.

As for your series, I tried exploiting the factorization of $r^m - 1$ as the product of $r-1$ and $1 + r + r^2 + \ldots + r^{m-1}:$

First, replace each of the terms $r^m$ with $\left(r^{m} - 1 \right) + 1.$

$$S \;\; = \;\; \left(r - 1 \right) + 1 + \left(r^2 - 1 \right) + 1 + \left(r^4 - 1 \right) + 1 + \left(r^8 - 1 \right) + 1 + \ldots + \left(r^{2^k} - 1 \right) + 1$$

Next, replace the $(k+1)$-many additions of $1$ with a single addition of $k+1.$

$$S \;\; = \;\; (k+1) + \left(r - 1 \right) + \left(r^2 - 1 \right) + \left(r^4 - 1 \right) + \left(r^8 - 1 \right) + \ldots + \left(r^{2^k} - 1 \right)$$

Now use the fact that for each $m$ we have $r^m - 1 \; = \; \left(r-1\right) \left(1 + r + r^2 + \ldots + r^{m-1}\right).$

$$S \;\; = \;\; (k+1) + \left(r - 1 \right)\left[1 + \left(1 + r \right) + \left(1 + r + r^2 + r^3 \right) + \ldots + \left(1 + r + \ldots + r^{2^{k} - 1} \right) \right]$$

At this point, let’s focus on the expression in square brackets. This expression is equal to

$$\left(k+1\right) \cdot 1 + kr + \left(k-1\right)r^2 + \left(k-1\right)r^3 + \left(k-2\right)r^4 + \ldots + \left(k-2\right)r^7 + \left(k-3\right)r^8 + \ldots + \left(k-3\right)r^{15} + \ldots + \left(k-n\right)r^{2^n} + \dots + \left(k-n\right)r^{2^{n+1}-1} + \ldots + \left(1\right) r^{2^{k-1}} + \ldots + \left(1\right) r^{2^{k} - 1}$$

I’m now at a loss. We can slightly compress this by factoring out common factors for groups of terms such as $(k-2)r^4 + \ldots + (k-2)r^7$ to get $(k-2)r^4\left(1 + r + r^2 + r^3\right).$ Doing this gives the following for the expression in square brackets.

$$\left(k+1\right) + \left(k\right)r + \left(k-1\right)r^2 \left(1+r\right) + \left(k-2\right)r^4\left(1+r+r^2+r^3\right) + \ldots + \;\; \left(1\right)r^{2^{k-1}} \left(1 + r + \ldots + r^{2^{k-1} -1} \right)$$
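The rewriting $S = (k+1) + (r-1)\,[\,\text{bracket}\,]$ is easy to confirm symbolically for a small sample $k$ (my own sketch):

```python
from sympy import symbols, simplify

r = symbols('r')
k = 4  # small sample upper index
S = sum(r ** 2**j for j in range(k + 1))                 # r + r^2 + ... + r^(2^k)
bracket = sum(sum(r**i for i in range(2**j)) for j in range(k + 1))
print(simplify(S - ((k + 1) + (r - 1) * bracket)))       # expect 0
```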

Consider the sequence $a_n$ defined as:
If $n=1$ then $a_n=1$, else if $n>1$ then $a_n=-1$.

This is the sequence $a_n = 1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,\dots$

The Dirichlet inverse (call it $b_n$) of $a_n$ is the OEIS sequence called
“Number of ordered factorizations of n” http://oeis.org/A074206 , starting $b_n = 0, 1, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8,\dots$, which can also be regarded as the coefficients of a Dirichlet series. Notice that the first term is $b_0 = 0$.
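The Dirichlet-inverse claim is easy to verify numerically. This is my own sketch; it builds A074206 from the standard recurrence $b_n = \sum_{d \mid n,\, d < n} b_d$ for $n > 1$:

```python
# Verify that b (ordered factorizations, OEIS A074206) is the
# Dirichlet inverse of a = (1, -1, -1, -1, ...).
N = 30
b = [0, 1] + [0] * (N - 1)
for n in range(2, N + 1):
    b[n] = sum(b[d] for d in range(1, n) if n % d == 0)

a = [0, 1] + [-1] * (N - 1)
for n in range(1, N + 1):
    conv = sum(a[d] * b[n // d] for d in range(1, n + 1) if n % d == 0)
    assert conv == (1 if n == 1 else 0)  # Dirichlet convolution = identity
print(b[1:13])  # 1, 1, 1, 2, 1, 3, 1, 4, 2, 3, 1, 8
```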

For reasons I cannot explain clearly, $b_n$ has the ordinary generating function:

$$\sum\limits_{n = 1}^{\infty} b_n r^n =
r + \sum\limits_{a = 2}^{\infty} r^{a} + \sum\limits_{a = 2}^{\infty}\sum\limits_{b = 2}^{\infty} r^{ab} + \sum\limits_{a = 2}^{\infty}\sum\limits_{b = 2}^{\infty}\sum\limits_{c = 2}^{\infty} r^{abc} + \sum\limits_{a = 2}^{\infty}\sum\limits_{b = 2}^{\infty}\sum\limits_{c = 2}^{\infty}\sum\limits_{d = 2}^{\infty} r^{abcd} + \dots $$

Evaluate each sum partially, stopping at summation index $2$:

$$r + \sum\limits_{a = 2}^{2} r^{a} + \sum\limits_{a = 2}^{2}\sum\limits_{b = 2}^{2} r^{ab} + \sum\limits_{a = 2}^{2}\sum\limits_{b = 2}^{2}\sum\limits_{c = 2}^{2} r^{abc} + \sum\limits_{a = 2}^{2}\sum\limits_{b = 2}^{2}\sum\limits_{c = 2}^{2}\sum\limits_{d = 2}^{2} r^{abcd} + \dots $$

Simplify:

$$r + r^{2} + r^{2\cdot2} + r^{2\cdot2\cdot2} + r^{2\cdot2\cdot2\cdot2} + r^{2\cdot2\cdot2\cdot2\cdot2} + \dots + r^{2^k}$$

Multiply out:

$$r + r^{2} + r^{4} + r^{8} + r^{16} + r^{32} + \dots + r^{2^k}$$

Call it $S$ and you are done.

The sum

$$\sum\limits_{n = 1}^{\infty} b_n r^n$$

above probably does not have a closed form other than the multiple sums, and therefore $S$ probably does not have a closed form either. But it points out what the scope of generating functions is.

I have not seen a closed form formula for the sum you talk about, and since everyone else has already remarked on this fact, I thought I might mention that I have seen several other interesting results involving double exponentials and powers of two, if you’re interested:
$$\sum_{k=0}^{n-1}\frac{2^k}{x^{2^k}+1}=\frac{1}{x-1}-\frac{2^n}{x^{2^n}-1}$$
$$\prod_{k=0}^{n-1}(x^{2^k}+1)=\frac{x^{2^n}-1}{x-1}$$
$$\sum_{k=1}^{2^n}\binom{2^n}{k}=2^{2^n}-1$$
$$\prod_{n=0}^\infty(1+x^{2^n})=\frac{1}{1-x}$$

$$\sum_{n=0}^\infty \frac{2^n}{x^{2^n}+1}=\frac{1}{x-1}$$
$$\sum_{n=0}^\infty\frac{4^nx^{2^n}}{(x^{2^n}+1)^2}=\frac{x}{(x-1)^2}$$

Also, for $|x| < 1$:
$$\sum_{n=0}^\infty \frac{(-1)^n2^nx^{2^n}}{x^{2^{n+1}}-x^{2^n}+1}=\frac{-2x^2}{x^4+x^2+1} $$

Let $f(n)$ count how many factors of $2$ the number $n$ has; then we have
$$\frac{1}{x-1}-\sum_{n=1}^\infty \frac{f(n)}{x^n}=\sum_{n=0}^\infty \frac{1}{x^{2^n}+1}$$
$$\sum_{n=1}^\infty \frac{f(n)}{n^2}=\frac{\pi^2}{18}$$
$$\sum_{k=1}^{2^n} f(k)=2^n-1$$

It was also proved by Erdős that both of the constants below are irrational:
$$\sum_{n=1}^\infty \frac{1}{2^n-1}=C_1$$
$$\sum_{n=1}^\infty \frac{1}{2^{2^n}+1}=C_2$$

The constant $C_1$ is known as the Erdős–Borwein constant. The constant $C_2$ is less well known and is discussed here: http://journal.ms.u-tokyo.ac.jp/pdf/jms080206.pdf
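Several of these can be checked exactly with rational arithmetic. Here is a minimal sketch for the first identity, which telescopes via $\frac{2^k}{x^{2^k}+1} = \frac{2^k}{x^{2^k}-1} - \frac{2^{k+1}}{x^{2^{k+1}}-1}$:

```python
from fractions import Fraction

# Exact check: sum_{k=0}^{n-1} 2^k/(x^(2^k)+1) = 1/(x-1) - 2^n/(x^(2^n)-1)
x, n = Fraction(3), 5
lhs = sum(2**k / (x**(2**k) + 1) for k in range(n))
rhs = 1 / (x - 1) - 2**n / (x**(2**n) - 1)
print(lhs == rhs)  # True
```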

Although I cannot actually give an answer, I have explored many of the avenues for approximation of the sum, which I have illustrated below.

Integral Approximation of the Sum

$$ \int_{0}^{\infty} r^{2^x} \,\mathrm dx \le \sum_{i=0}^\infty r^{2^i}\le \int_{-1}^{\infty} r^{2^x} \,\mathrm dx $$

We have:
$$ \int r^{2^x} \,\mathrm dx = \frac{\operatorname{Ei}\left(2^x \log (r)\right)}{\log (2)} $$

For $r < 1$, this converges to $0$ as $x\to\infty$. Therefore, we have the following bounds:
$$ -\frac{\operatorname{Ei}\left(\log (r)\right)}{\log (2)} \le \sum_{i=0}^\infty r^{2^i} \le -\frac{\operatorname{Ei}\left(\frac{\log (r)}{2}\right)}{\log (2)} $$

As an example, consider $r = 1/2$. Evaluating the bounds, we have:
$ 0.546307 \le 0.816422 \le 1.15583 $. Clearly these bounds are not especially tight. However, as $r\to 1$, the bounds become asymptotically equal.
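These numbers are easy to reproduce; the sketch below uses SciPy’s exponential integral, and sixty terms of the series are far more than double precision needs:

```python
import numpy as np
from scipy.special import expi

r = 0.5
s = sum(r ** (2.0 ** i) for i in range(60))   # the series, effectively exact
lo = -expi(np.log(r)) / np.log(2)
hi = -expi(np.log(r) / 2) / np.log(2)
print(lo, s, hi)  # ~0.546307 <= ~0.816422 <= ~1.15583
```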

If we do not let $x \to \infty$ and instead consider the general case, we have the following bounds (if we stop at $c$):

$$ \frac{\operatorname{Ei}\left(2^{c+1} \log (r)\right)-\operatorname{Ei}(\log (r))}{\log (2)} \le \Sigma \le \frac{\operatorname{Ei}\left(2^c \log (r)\right)-\operatorname{Ei}\left(\frac{\log (r)}{2}\right)}{\log (2)} $$

[Figure: a graph of these bounds.]

Error Approximation of the Sum

For the partial sum, we have:

$$ (n-m) r^{2^n} \le \sum_{k=m+1}^n r^{2^k} \le \frac{r^{2^{m+1} (n-m+1)}-r^{2^{m+1}}}{r^{2^{m+1}}-1} $$

For $m = -1$, we have:

$$ (n + 1) r^{2^n} \le \sum_{k=0}^n r^{2^k} \le \frac{r^{n+2}-r}{r-1} $$

The partial sum is necessary if you want an approximation anywhere near the real thing.

We can use this partial sum to control the truncation error. For example, let’s say we want to find:

$$ \sum_{k=0}^{n} r^{2^k} $$

with an error of no more than $\epsilon$. We split this into a head and a tail:

$$ \sum_{k=0}^{m} r^{2^k} + \sum_{k=m+1}^{n} r^{2^k} = \sum_{k=0}^{n} r^{2^k} $$

We want to find $m$ such that the tail is at most $\epsilon$:

$$ \sum_{k=m+1}^{n} r^{2^k} \le \epsilon $$

Apply our bounds to get:

$$ \frac{r^{2^{m+1} (n-m+1)}-r^{2^{m+1}}}{r^{2^{m+1}}-1} \le \epsilon $$

which, given $n$, $r$, and $\epsilon$, reduces our problem to a 1D search.
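That search is cheap in practice. Below is a sketch of mine (the helper names `tail_bound` and `smallest_m` are hypothetical) that scans $m$ upward until the bound drops below $\epsilon$; it assumes $0 < r < 1$:

```python
def tail_bound(r, m, n):
    """Upper bound on sum_{k=m+1}^{n} r^(2^k) from the inequality above."""
    q = r ** (2 ** (m + 1))
    return (q ** (n - m + 1) - q) / (q - 1)

def smallest_m(r, n, eps):
    """Smallest m whose tail bound is at most eps (assumes 0 < r < 1)."""
    for m in range(-1, n):
        if tail_bound(r, m, n) <= eps:
            return m
    return n

print(smallest_m(0.9, 50, 1e-6))
```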
