Math Genius: Coupon collector’s problem as Random Walk

Original Source Link

I see in a book the following as the coupon collector's problem. We have $N$ coupons labelled $1,2,\dots,N$ from which we pick with replacement. I could not understand what the random walk is here.

The state space is $\{0,1,\ldots,N\}$ and the steps of the random walk are $0$ and $+1$. When at state $k$, the transition $k\to k+1$ has probability $1-\frac{k}{N}$ and the transition $k\to k$ has probability $\frac{k}{N}$. Thus, the transitions $0\to1$ and $N\to N$ both have probability $1$, and the state $N$ is absorbing. (Note that this process is not usually described as an (inhomogeneous) random walk but rather as a Markov chain.)

You can also see this as a random walk on the complete graph on $N$ vertices with self-loops at each vertex, where you want to know when you've visited all vertices. See "Find the expected number of steps needed until every point has been visited at least once" for the variant without self-loops.
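As a sanity check on the chain above, its expected absorption time can be computed directly: from state $k$ the chain advances with probability $(N-k)/N$, so the expected holding time at $k$ is $N/(N-k)$, and summing over $k=0,\dots,N-1$ gives the classical answer $N H_N = N\sum_{k=1}^{N} \frac1k$. A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def expected_collection_time(N):
    """Expected number of draws until the chain is absorbed at state N.

    From state k the chain advances with probability (N - k)/N, so the
    expected number of draws spent at k is N/(N - k); summing over
    k = 0, ..., N-1 gives N * H_N.
    """
    return sum(Fraction(N, N - k) for k in range(N))

# For N = 4 coupons: 4 * (1 + 1/2 + 1/3 + 1/4) = 25/3 expected draws.
print(expected_collection_time(4))  # 25/3
```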


Math Genius: Markov Matrix with complex eigenvalues

Original Source Link

What properties does a Markov matrix (with real entries) with complex eigenvalues have?

For example, consider this matrix:
$$P = \begin{pmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}$$

If I start in the state $(1,0,0)^T$, this does not have a steady state, right?
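Right: one way to see this numerically is that the matrix above is a cyclic permutation, so its eigenvalues are the cube roots of unity, all of modulus $1$, and $P^3 = I$; the distribution therefore cycles with period $3$ instead of converging. A quick check (a sketch, using NumPy):

```python
import numpy as np

# The matrix from the question: a 3-cycle permutation.
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

# All eigenvalues lie on the unit circle (cube roots of unity),
# so powers of P do not damp any component of the distribution.
print(np.abs(np.linalg.eigvals(P)))       # [1. 1. 1.]

# Starting from (1, 0, 0), the distribution returns after 3 steps.
p0 = np.array([1., 0., 0.])
print(p0 @ np.linalg.matrix_power(P, 3))  # [1. 0. 0.]
```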


Math Genius: Equivalent condition for positive recurrence

Original Source Link

I am studying continuous-time Markov chains on a countable state space and my professor said the following: a state $i$ is positive recurrent if and only if $$\liminf_{t\to\infty} \frac{1}{t} \int_0^t P_{i,i}(s)\,\mathrm{d}s>0,$$ where $P$ is the semigroup associated with the Markov chain. I do not understand why this is true. Might someone have some insight?

Thank you in advance!


Math Genius: Bounding distance from uniform distribution in a random walk on regular graph

Original Source Link

Following is a question from Arora-Barak’s Computational Complexity, a simple exercise in probability.

Steps to show UPATH is in RL

Please note that a random step here is defined as: either stay back or go to a neighbour randomly.

I get the idea for each of the steps, but I am only left with proving the bound $\Delta(\mathbf{p^k}) \leq (1 - n^{-10n})\, \Delta(\mathbf{p})$ for some $k$.

It is easy to show that $\Delta(\mathbf{p^k})$ is a non-increasing sequence, and furthermore, since $G$ is connected, it should strictly decrease after $n$ steps (that is, $\Delta(\mathbf{p^n}) < \Delta(\mathbf{p})$). The question also assumes that $G$ is non-bipartite, but I don't think that is required.

I have a hunch that $\Delta(\mathbf{p^n}) \leq \left(1 - \frac1n\right)\Delta(\mathbf{p})$. If I am able to prove this then I am done, because $\Delta(\mathbf{p^{kn}}) \leq \left(1 - \frac1n\right)^k\Delta(\mathbf{p}) \leq \left(1 - \frac{1}{n^k}\right)\Delta(\mathbf{p})$, and so $k=10n$ gives us the desired inequality.

I shall first show that the inequality holds for $\mathbf{p} = \mathbf{e_u}$, where $\mathbf{e_u}$ is the probability vector with $1$ at vertex $u$ and $0$ elsewhere.

Let $P$ be the transition matrix ($P_{xx} = \frac{1}{d+1}$ and $P_{xy} = \frac{1}{d+1}$ iff $(x,y) \in E$) for our random walk, so that $P^k$ describes the $k$-step random walk.

Let $M_k$ denote the maximum entry of the $u^{th}$ row of $P^k$. Then notice that for every vertex $v$:
$$(P^{k+1})_{uv} = \sum_w (P^{k})_{uw} P_{wv} \leq M_k \sum_w P_{wv} = M_k$$

Thus, $M_{k+1} \leq M_k$ and so in particular $M_n \leq M_1 \leq \frac{1}{d+1}$.

Finally, $\mathbf{p^n} = \mathbf{p}P^n$ is the $u^{th}$ row of $P^n$, and so $\max_{v} \{\mathbf{p^n_v}\} = M_n \leq \frac{1}{d+1} \leq \left(1 - \frac{1}{n}\right)^2$ because $\frac{1}{d+1} + \frac{2}{n} \leq 1$ for all $n \geq 3$ and $d \geq 2$.

(Note: For $n<3$, $\mathbf{p^k}$ is the uniform distribution for all $k > 1$ and so the result follows immediately. And for $d\leq 1$, since $G$ is connected, it follows that $n\leq 2$ and so we are done.)

Hence $\Delta(\mathbf{p^n}) = \max_{v} \{\mathbf{p^n_v} - \frac1n\} \leq \max_{v} \{\mathbf{p^n_v}\} \leq \left(1-\frac1n\right)\Delta(\mathbf{p})$ (as $\Delta(\mathbf{p}) = 1-\frac1n$ here), as desired.
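To make the $\mathbf{e_u}$ case concrete, here is a small numerical check of the argument on the lazy walk on the cycle $C_5$ (so $n = 5$, $d = 2$, and every allowed transition has probability $\frac{1}{d+1} = \frac13$); the choice of graph and the helper names are mine:

```python
import numpy as np

n = 5  # cycle C_5: each vertex has d = 2 neighbours plus a self-loop
P = np.zeros((n, n))
for x in range(n):
    for y in (x, (x - 1) % n, (x + 1) % n):
        P[x, y] = 1.0 / 3.0          # P_xy = 1/(d+1)

def delta(p):
    # the quantity from the exercise: max_v (p_v - 1/n)
    return np.max(p - 1.0 / n)

p = np.zeros(n)
p[0] = 1.0                           # start at e_u
deltas = [delta(p)]
for _ in range(n):
    p = p @ P
    deltas.append(delta(p))

print(deltas[0], deltas[-1])
# the claimed bound Delta(p^n) <= (1 - 1/n) Delta(p) for this instance:
print(deltas[-1] <= (1 - 1 / n) * deltas[0])
```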

Now for any general distribution $\mathbf{p}$, we can express it as $\mathbf{p} = \sum_{v} \mathbf{p_v}\mathbf{e_v}$. I am stuck at this point. I had in mind that once I have shown it for the "good" probability vectors ($\mathbf{e_u}$), this case should follow immediately. But it doesn't seem so.

Any help with proceeding further would be really appreciated.

Note: I know that $\mathbf{p^k}$ will converge to the uniform distribution (by the theorem from Markov chain theory that for irreducible, aperiodic chains the distribution converges to the stationary distribution), which directly gives that $\Delta(\mathbf{p^k})$ converges to $0$. But the proof of this general theorem requires sophisticated methods like a coupling argument or the spectral theorem. And the question, as it is asked, suggests to me that it can be done without that theorem.

Another question I wanted to ask: the above bound shows that $\Delta(\mathbf{p^k})$ converges to $0$, and so the maximum entry of the distribution at each step converges to $\frac1n$. But can we conclude from this that $\mathbf{p^k}$ converges to the uniform distribution?


Math Genius: Cyclic Markov Chain

Original Source Link

Say I have 3 states $A,B,C$, where the transition probabilities are $P_{B\leftarrow A}=P_{C\leftarrow B}=P_{A\leftarrow C}=p$ and $P_{i\leftarrow i}=1-p$ for $i=A,B,C$. All other transition probabilities are zero. The transition matrix is $$P=\begin{pmatrix}
1-p & 0 & p\\
p & 1-p & 0\\
0 & p & 1-p
\end{pmatrix}$$

Here, $0\leq p\leq 1$.

I find that the stationary distribution for $P$ is $$\rho^*=\begin{pmatrix}1/3\\1/3\\1/3\end{pmatrix}.$$

If $p=0$, then any initial distribution vector $\rho_0$ is a stationary distribution; if $p=1$, however, then the components of any initial $\rho_0\neq \rho^*$ will keep hopping/cycling positions. In other words, if I start in, say, state $A$ ($\rho_0=(1,0,0)^T$), then the system will never reach a time-independent steady state.

For any $p$ strictly between $0$ and $1$, it seems that the chain always converges to $\rho^*$.

Two Questions:

1) Can whether or not the chain reaches a steady state be determined from the eigenvalues of $P$, e.g. from whether they are complex or not?

(Eigenvalues of $P$: $\lambda_1=1$, $\lambda_2=\frac{1}{2}(2 - 3p - i\sqrt{3}\,p)$, $\lambda_3=\frac{1}{2}(2 - 3p + i\sqrt{3}\,p)$.)

2) Even though $P$ does not satisfy the detailed balance property, it still converges to an equilibrium state (depending on the value of $p$). Why is this?
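Regarding question 1, the eigenvalue moduli are what matter, not complexity as such: here $|\lambda_{2,3}|^2 = (1-\tfrac{3p}{2})^2 + \tfrac{3p^2}{4} = 1 - 3p(1-p)$, which is strictly below $1$ exactly when $0 < p < 1$, and in that case powers of $P$ kill every component except the stationary one. A quick NumPy check of this (a sketch):

```python
import numpy as np

def P_of(p):
    # Transition matrix of the cyclic chain from the question.
    return np.array([[1 - p, 0,     p    ],
                     [p,     1 - p, 0    ],
                     [0,     p,     1 - p]])

for p in (0.0, 0.3, 1.0):
    moduli = sorted(np.abs(np.linalg.eigvals(P_of(p))))
    # |lambda_{2,3}| = sqrt(1 - 3p(1-p)): below 1 only for 0 < p < 1
    print(p, moduli, np.sqrt(1 - 3 * p * (1 - p)))

# For 0 < p < 1 the powers of P converge to the uniform equilibrium.
print(np.linalg.matrix_power(P_of(0.3), 500).round(6))
```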


Math Genius: Limit of Markov chain of beta distributions

Original Source Link

Consider the following Markov chain:
$$\begin{aligned}
x_t &\sim \mathrm{Beta}(\alpha_t,\beta_t), \\
\alpha_{t+1} &= \alpha_t + x_t, \\
\beta_{t+1} &= \beta_t + 1 - x_t.
\end{aligned}$$

My questions are the following:

  1. What is the distribution of $x_t$ in terms of $\alpha_0,\beta_0$? Does it have a closed form?
  2. What is the distribution of $x_\infty$ in terms of $\alpha_0,\beta_0$?

The graph below shows an approximation of the distribution for $alpha_0 = 2, beta_0 = 4$ after a large $t$.


The true distribution at any given $t$ is a mixture of at most $t$ beta distributions. Here is the Python code I used to generate the graph:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

a0 = 2
b0 = 4  # matches the example above with alpha_0 = 2, beta_0 = 4

trials = 10**4
at = np.zeros(trials) + a0
bt = np.zeros(trials) + b0

μs = np.linspace(0, 1, 10**3)

axes = plt.subplots()[1]
for t in range(1001):
    if t % 100 == 0:
        axes.clear()
        axes.plot(μs, beta(a0, b0).pdf(μs), linewidth=1, label='$p(x_0)$', color='C0')
        axes.axvline(x=beta(a0, b0).mean(), linewidth=1, linestyle='--', label='$E(x_0)$', color='C0')
        # Monte Carlo estimate of p(x_t): average the current beta densities
        axes.plot(μs, beta(at, bt).pdf(μs[:, None]).mean(-1), linewidth=1, label='$p(x_t)$', color='C1')
        axes.axvline(x=beta(at, bt).mean().mean(-1), linewidth=1, linestyle='--', label='$E(x_t)$', color='C1')
        axes.set_title('{} trials, t = {}'.format(trials, t))
        # reference curve: a more concentrated beta with the same mean
        scale = (a0 + b0) * 2
        axes.plot(μs, beta(a0 * scale, b0 * scale).pdf(μs), linewidth=1, color='C2')
        axes.legend()
        plt.pause(0.01)

    # one step of the chain: sample x_t, then update the parameters
    x_t = beta(at, bt).rvs()
    at = at + x_t
    bt = bt + 1 - x_t


Math Genius: Transformations of stochastic matrix that preserve equilibrium

Original Source Link

I have a stochastic (Markov) matrix $W$. I would like to modify it such that $W_{i,i}$ increases for all $i$ (and thus other elements decrease). However, I don't want to change the equilibrium distribution of $W$, i.e. its leading (left) eigenvector. Are there classes of transforms that accomplish this?

For any $0 \leq t < 1$, the matrix $(1 - t)W + tI$ is a "lazier" version of your Markov chain that has the same equilibrium distribution.
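A quick numerical illustration of this (the matrix $W$ here is a made-up example): if $\pi W = \pi$, then $\pi\big((1-t)W + tI\big) = (1-t)\pi + t\pi = \pi$, so the lazy chain keeps exactly the same equilibrium.

```python
import numpy as np

# A small row-stochastic W (hypothetical example) and its lazy version.
W = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.1, 0.5]])

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalised to a distribution."""
    vals, vecs = np.linalg.eig(M.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

t = 0.6
W_lazy = (1 - t) * W + t * np.eye(3)   # diagonal entries increase

pi = stationary(W)
print(np.allclose(pi @ W_lazy, pi))    # True: same equilibrium distribution
```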


Math Genius: Covariance of a Markov chain

Original Source Link

Suppose I have a transition density (matrix) $K$, and I want to calculate the covariance between random variables at two different times.

$$\begin{aligned}
\operatorname{Cov}_{\mu_0}(X_n,X_{n+k}) &= E_{\mu_0}\big[(X_n-E_{\mu_n}X_n)(X_{n+k}-E_{\mu_{n+k}}X_{n+k})\big] \\
&= E_{\mu_n}\big[(X_0-E_{\mu_0}X_0)(X_k-E_{\mu_k}X_k)\big] \\
&= \int\cdots\int (x_k-E_{\mu_k}X_k)(x_0-E_{\mu_0}X_0)\,K(x_{k-1},\mathrm dx_k)K(x_{k-2},\mathrm dx_{k-1})\cdots K(x_0,\mathrm dx_1)\,\mu_n(\mathrm dx_0) \\
&= \;???
\end{aligned}$$

How can I efficiently calculate this (both in theory and in computer simulation)? I know that

$$\begin{aligned}
E_{\mu_0}f(X_k) &= \int\cdots\int f(x_k)\,K(x_{k-1},\mathrm dx_k)\cdots K(x_0,\mathrm dx_1)\,\mu_0(\mathrm dx_0) \\
&= \;???\;(\text{don't know the intermediate steps}) \\
&= \sum_{i,j}\mu^0_i K^k_{ij} f(x_j) \\
&= \mu_0' K^{k} f
\end{aligned}$$

Hint: use the law of iterated expectation to simplify the calculation.
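For a finite state space the hint works out as follows (a sketch with a made-up two-state chain; the vector `x` holds the numeric values of the states): conditioning on $X_n$ gives $E[X_n X_{n+k}] = \sum_i \mu_n(i)\, x_i \sum_j (K^k)_{ij}\, x_j$, so the whole covariance reduces to matrix powers.

```python
import numpy as np

# Hypothetical finite chain: transition matrix K, state values x,
# initial distribution mu0.
K = np.array([[0.9, 0.1],
              [0.4, 0.6]])
x = np.array([0.0, 1.0])
mu0 = np.array([1.0, 0.0])

def cov(mu0, K, x, n, k):
    """Cov(X_n, X_{n+k}) via the law of iterated expectation."""
    Kk = np.linalg.matrix_power(K, k)
    mu_n = mu0 @ np.linalg.matrix_power(K, n)   # law of X_n
    mu_nk = mu_n @ Kk                           # law of X_{n+k}
    # E[X_n X_{n+k}] = sum_i mu_n(i) x_i * E[X_{n+k} | X_n = i]
    cross = mu_n @ (x * (Kk @ x))
    return cross - (mu_n @ x) * (mu_nk @ x)

print(cov(mu0, K, x, n=3, k=2))
```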


Math Genius: Question on circular random walk [closed]

Original Source Link

A truck transports goods among $10$ points located on a circular route. These goods are carried only from one point to the next with probability $p$, or to the preceding point with probability $q=1-p$.

  1. Write the transition probability matrix.
  2. Find the limiting stationary distribution.
  3. Write your conclusions about this question.

The transition matrix is the circulant matrix $M = q \cdot P + p \cdot P^T$, where $P$ is the permutation matrix in the link. Computing the stationary distribution can be done by solving the system $x(M - I) = 0$.

However, rather than solving this system of equations, we can more easily prove that your guess of the stationary distribution $\pi = (1/10,\dots,1/10)$ is correct by verifying that $\pi M = \pi$. To see that this holds, note that $\pi = \frac{1}{10}(1,\dots,1)$, and that the entries of $(1,\dots,1)M$ are the column sums of $M$. The only non-zero entries of a given column of $M$ are $p$ and $q$, which means that every entry of $(1,\dots,1)M$ is $p+q = 1$, so that
$$(1,\dots,1)M = (1,\dots,1) \implies \pi M = \pi.$$

So, $\pi$ is indeed the stationary distribution.
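The column-sum argument is easy to check numerically (a sketch; `np.roll` builds a cyclic permutation matrix to play the role of $P$):

```python
import numpy as np

n, p = 10, 0.7           # 10 points; any 0 < p < 1 works the same way
q = 1 - p

P = np.roll(np.eye(n), 1, axis=1)   # cyclic permutation matrix
M = q * P + p * P.T                 # circulant transition matrix from the answer

pi = np.full(n, 1.0 / n)
print(np.allclose(pi @ M, pi))      # True: the uniform vector is stationary
print(M.sum(axis=0))                # every column sums to p + q = 1
```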


Math Genius: Let $(X_t)$ be a continuous-time Markov chain and $\tau$ the first jump time. Compute $\mathbb E_x [\alpha^{\tau} \phi (X_\tau)]$

Original Source Link

Let $(X_t)$ be a continuous-time Markov chain such that

  • The state space $V$ is finite and endowed with discrete topology.

  • The infinitesimal generator is $L: V^2 \to \mathbb R$.


  • $\alpha \in (0,1)$.

  • $\phi$ is a function from $V$ to $\mathbb R_+$.

  • $tau$ is the first jump time, i.e. the first time that the chain makes a transition to a new state.

I would like to ask how to compute $$\mathbb E_x [\alpha^{\tau} \phi (X_\tau)],$$ where $\mathbb E_x := \mathbb E[\, \cdot \mid X_0 = x]$.

My attempt:

It’s well-known that given $X_0$, $tau$ is exponentially distributed with parameter $-L(X_0,X_0)$. Then

$$\mathbb E_x [\alpha^{\tau} \phi (X_\tau)] = -\int_0^\infty \alpha^s L(x,x)\,\phi (X_s)\, e^{sL(x,x)}\, \mathrm{d}s$$

I'm stuck because there is $s$ inside $\phi(X_s)$. Could you please elaborate on how to compute this expectation?

Thank you so much!

Thank you so much for @Saad’s invaluable comment! I post it here to close this question:

The issue in the calculation is using the formula $$E(g(\tau))=\int g(t)\,\mathrm dF_\tau(t),$$ which requires that $g$ be a deterministic measurable function of $\tau$; but $X_\tau$ is not deterministically determined by $\tau$.

I'm not so familiar with continuous-time Markov chains, but I think there should exist a result similar to that in "How $h(z)=\color{blue}{\alpha}\sum_y p_{zy}h(y)$ follows from the Markov property?".
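For completeness, here is the standard route, stated as an assumption (it is the usual jump-chain/holding-time decomposition of a CTMC): the holding time $\tau \sim \mathrm{Exp}(\lambda)$ with $\lambda = -L(x,x)$ is independent of the jump target, which equals $y$ with probability $L(x,y)/\lambda$. Since $E[\alpha^\tau] = E[e^{\tau\ln\alpha}] = \frac{\lambda}{\lambda-\ln\alpha}$ (the exponential MGF at $\ln\alpha<0$), independence gives $$\mathbb E_x[\alpha^\tau\phi(X_\tau)] = \frac{\lambda}{\lambda-\ln\alpha}\sum_{y\neq x}\frac{L(x,y)}{\lambda}\,\phi(y) = \frac{\sum_{y\neq x}L(x,y)\,\phi(y)}{\lambda-\ln\alpha}.$$ A Monte Carlo sanity check of the $E[\alpha^\tau]$ factor (the generator and $\phi$ below are made up):

```python
import numpy as np

# Made-up generator on 3 states and a payoff phi; start state x = 0.
L = np.array([[-2.0, 1.5, 0.5],
              [1.0, -1.0, 0.0],
              [0.2, 0.8, -1.0]])
phi = np.array([1.0, 3.0, 5.0])
alpha, x = 0.5, 0

lam = -L[x, x]
jump = L[x].copy()
jump[x] = 0.0
jump /= lam                                   # jump distribution from x
closed_form = lam / (lam - np.log(alpha)) * (jump @ phi)

# Monte Carlo: tau ~ Exp(lam), independent of the jump target.
rng = np.random.default_rng(0)
tau = rng.exponential(1.0 / lam, size=10**6)
mc = (alpha ** tau).mean() * (jump @ phi)
print(closed_form, mc)                        # the two should agree closely
```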
