Math Genius: Markov Matrix with complex eigenvalues

What properties does a Markov matrix (with real entries) with complex eigenvalues have?

For example, consider this matrix:
$$\begin{pmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}.$$

If I start in the state $(1,0,0)^T$, this does not have a steady state, right?
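The behavior can be checked numerically. The sketch below (using NumPy, which the post does not mention) computes the eigenvalues of this permutation matrix and iterates the chain from $(1,0,0)^T$: the eigenvalues are the three cube roots of unity, all of modulus $1$, and the distribution cycles with period $3$ instead of converging.

```python
import numpy as np

# Column-stochastic cyclic permutation matrix from the question.
P = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])

# Its eigenvalues are the cube roots of unity: 1 and a complex-conjugate
# pair, all lying on the unit circle.
eigvals = np.linalg.eigvals(P)
print(np.sort_complex(eigvals))

# Starting from (1,0,0)^T the distribution cycles with period 3
# and never converges to a steady state.
v = np.array([1., 0., 0.])
for t in range(1, 7):
    v = P @ v
    print(t, v)
```

Because the complex pair has modulus exactly $1$ (rather than modulus $< 1$), the powers $P^t v$ do not converge, which matches the intuition in the question.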

Math Genius: Multiple Random Walkers that reflect each other

Say I have $2$ random walkers positioned along a line, one on the left and the other on the right. Whenever they meet, the ordering does not change: the two walkers cannot overtake each other (the left one remains on the left). Because of the Markov property, I can let the two walkers diffuse independently and, at each meeting point, swap the labels of the walks (the left-most trajectory always belongs to the left walker, etc.).

My question is this: Is there a way to analytically derive the distributions of the two walkers over time? What direction should I take to do that, and eventually scale the problem up to $n$ walkers?

Update (solution): I essentially want the distribution of the minimum of two independent, identically distributed normal random variables, which is a well-known problem.
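This solution can be sanity-checked by simulation. The sketch below (my own illustration, not from the post) uses Gaussian steps so that each walker's position at time $T$ is exactly $N(0,T)$; under the relabeling construction the left walker's trajectory is the pointwise minimum of the two independent paths, and the mean of the minimum of two i.i.d. $N(0,T)$ variables is $-\sqrt{T/\pi}$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_sim = 100, 20_000

# Two independent walks with N(0,1) steps; only the positions at time T
# are kept, each distributed N(0, T).
final = rng.standard_normal((2, n_sim, T)).sum(axis=2)

# Under the relabeling construction, the left walker at time T is the
# minimum of the two independent walkers at time T.
left_at_T = np.minimum(final[0], final[1])

# Closed form: min of two i.i.d. N(0, T) variables has mean -sqrt(T/pi).
print(left_at_T.mean(), -np.sqrt(T / np.pi))
```

The empirical mean should land close to the closed-form value, supporting the "minimum of two normals" reduction.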

Math Genius: MDP tabular setting.

I would be very curious to know whether, in the tabular MDP setting, we have:
$$ \mathbb E_{\tau_1,\dots, \tau_{N} \sim P^{\pi}_{\mu}} [1_{A}] = \mathbb E_{s_1,\dots,s_{N \times H} \sim d^{\pi}_{\mu}}\, \mathbb E_{a_1 \sim \pi(\cdot|s_1), \dots, a_{N \times H} \sim \pi(\cdot|s_{N \times H})}[1_A] $$

With the usual notation: $\tau$ denotes a path of length $H$, $\pi$ is a policy, $\mu$ is the initial distribution, and $d^{\pi}_{\mu}$ is the state visitation probability (i.e. $P(s)$).
$A$ is an event involving all the data points (for example, observing $(s,a)$ $k$ times). It seems intuitively true, but I am not sure how to show it.

Thank you!!

Math Genius: Question on circular random walk [closed]

A truck transports goods among $10$ points located on a circular route. These goods are carried only from one point to the next with probability $p$, or to the preceding point with probability $q=1-p$.

  1. Write the transition probability matrix.
  2. Find the limiting stationary distribution.
  3. Write your conclusions about this question.

The transition matrix is the circulant matrix $M = q \cdot P + p \cdot P^T$, where $P$ is the permutation matrix in the link. The stationary distribution can be computed by solving the system $x(M - I) = 0$.

However, rather than solving this system of equations, we can more easily prove that your guess of the stationary distribution $\pi = (1/10,\dots,1/10)$ is correct by verifying that $\pi M = \pi$. To see that this holds, note that $\pi = \frac 1{10} (1,\dots,1)$, and that the entries of $(1,\dots,1)M$ are the column sums of $M$. The only non-zero entries of a given column of $M$ are $p$ and $q$, so every entry of $(1,\dots,1)M$ equals $p+q = 1$, which means that we have
$$
(1,\dots,1)M = (1,\dots,1) \implies \pi M = \pi.
$$

So, $pi$ is indeed the stationary distribution.
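The argument can be confirmed numerically. In the sketch below, the convention $P[i,i+1]=1$ for the permutation matrix is an assumption (the linked matrix is not reproduced in the post), but the column-sum argument is insensitive to it: each column of $M$ contains exactly one $p$ and one $q$.

```python
import numpy as np

n, p = 10, 0.3
q = 1 - p

# P sends point i to point i+1 (mod n); the truck's transition matrix
# is the circulant M = q*P + p*P^T, as in the answer above.
P = np.roll(np.eye(n), 1, axis=1)
M = q * P + p * P.T

# Each column of M contains exactly one p and one q, so every column
# sums to p + q = 1, and the uniform row vector pi satisfies pi @ M = pi.
pi = np.full(n, 1 / n)
print(np.allclose(M.sum(axis=0), 1.0), np.allclose(pi @ M, pi))
```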

Math Genius: How to get $\mathbb E[a^{\tau_1} \phi(X_{\tau_1}) \mid X_0 =x] = \mathbb E[a^{\tau_2} \phi(X_{\tau_2}) \mid X_0 =x]$ from the Strong Markov property?

Consider a continuous-time Markov chain $(X_t)_{t \ge 0}$ with respect to a completed right-continuous filtration $(\mathcal G_t)_{t \ge 0}$. Suppose that

  • The state space $V$ is finite and endowed with the discrete topology.

  • $a \in (0,1)$ and $\phi$ is a measurable function from $V$ to $\mathbb R_+$.

  • $\tau_1 \le \tau_2$ are stopping times.

Then my professor said that, by the Strong Markov property, we have $$\mathbb E[a^{\tau_1} \phi(X_{\tau_1}) \mid X_0 =x] = \mathbb E[a^{\tau_2} \phi(X_{\tau_2}) \mid X_0 =x].$$


Could you please elaborate on how to get the above equality from this version of Strong Markov property?

[Image: the statement of the Strong Markov property referenced above.]

Thank you so much for @Saad’s comment. I post it here to close this question.


This assertion seems incorrect. E.g. if $\tau_1=0$ and $\tau_2=1$, it asserts that $$\phi(x)=\mathbb E_x(a^{\tau_1}\phi(X_{\tau_1}))=\mathbb E_x(a^{\tau_2}\phi(X_{\tau_2}))=a\,\mathbb E_x(\phi(X_1)), \quad \forall\, a\in(0,1),\ x\in V,$$ which implies that $\phi=0$.
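The counterexample is easy to check numerically. The sketch below (my own illustration) uses a hypothetical symmetric 2-state chain with rate-1 flips: since the transition semigroup at time $1$ is $e^{L}$, the $\tau_2 = 1$ side equals $a \, (e^{L}\phi)(x)$, which visibly differs from $\phi(x)$ for a nonzero $\phi$.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-state chain: generator of a symmetric rate-1 flip chain.
L = np.array([[-1., 1.],
              [1., -1.]])
phi = np.array([1., 2.])   # an arbitrary nonzero phi: V -> R_+
a = 0.5

# tau_1 = 0:  E_x[a^0 phi(X_0)] = phi(x).
lhs = phi
# tau_2 = 1:  E_x[a^1 phi(X_1)] = a * (e^L phi)(x), since P_1 = e^L.
rhs = a * (expm(L) @ phi)

print(lhs, rhs)   # the two sides differ, as the comment predicts
```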


Math Genius: Let $(X_t)$ be a continuous-time Markov chain and $\tau$ the first jump time. Compute $\mathbb E_x [a^{\tau} \phi (X_\tau)]$

Let $(X_t)$ be a continuous-time Markov chain such that

  • The state space $V$ is finite and endowed with the discrete topology.

  • The infinitesimal generator is $L: V^2 \to \mathbb R$.

Let

  • $a \in (0,1)$.

  • $\phi$ be a function from $V$ to $\mathbb R_+$.

  • $\tau$ be the first jump time, i.e. the first time the chain makes a transition to a new state.

I would like to ask how to compute $$\alpha := \mathbb E_x [a^{\tau} \phi (X_\tau)],$$ where $\mathbb E_x[\cdot] := \mathbb E [ \cdot \mid X_0 = x ]$.


My attempt:

It's well-known that, given $X_0$, $\tau$ is exponentially distributed with parameter $-L(X_0,X_0)$. Then

$$\alpha = \mathbb E_x [a^{\tau} \phi (X_\tau)] = -\int_0^\infty a^s L(x,x)\, \phi (X_s)\, e^{-sL(x,x)} \,\mathrm{d}s.$$

I'm stuck because there is an $s$ inside $\phi(X_s)$. Could you please elaborate on how to compute $\alpha$?

Thank you so much!

Thank you so much for @Saad’s invaluable comment! I post it here to close this question:


The issue in the calculation is using the formula $$\mathbb E(g(\tau))=\int g(t)\,\mathrm dF_\tau(t),$$ which requires that $g$ be a deterministic measurable function of $\tau$; but $X_\tau$ is not deterministically determined by $\tau$.

I'm not so familiar with continuous-time Markov chains, but I think there should exist a result similar to that in How $h(z)=\color{blue}{\alpha}\sum_y p_{zy}h(y)$ follows from Markov property?.
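One standard way past the obstacle (not spelled out in the thread) uses the fact that for a CTMC the holding time $\tau$ and the jump target $X_\tau$ are independent given $X_0 = x$, with $\mathbb P(X_\tau = y) = L(x,y)/\lambda$ for $y \ne x$, where $\lambda = -L(x,x)$. The expectation then factorizes as $\mathbb E_x[a^\tau]\,\mathbb E_x[\phi(X_\tau)] = \frac{\lambda}{\lambda - \ln a}\sum_{y\neq x}\frac{L(x,y)}{\lambda}\phi(y)$. The Monte Carlo sketch below checks this closed form against simulation, using a hypothetical 3-state generator of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state generator; lam = -L(x,x) is the rate out of x.
L = np.array([[-2., 1., 1.],
              [1., -2., 1.],
              [1., 1., -2.]])
phi = np.array([1., 2., 3.])
a, x = 0.5, 0
lam = -L[x, x]

# E_x[a^tau] = lam / (lam - ln a)  (MGF of Exp(lam) at ln a < 0), and
# sum_{y != x} L(x,y) phi(y) = L[x] @ phi - L[x,x] * phi[x].
closed_form = (lam / (lam - np.log(a))) * (L[x] @ phi - L[x, x] * phi[x]) / lam

# Monte Carlo: draw tau ~ Exp(lam) and the jump target y with
# probability L(x,y)/lam, independently of tau.
n = 200_000
tau = rng.exponential(1 / lam, size=n)
probs = np.where(np.arange(3) == x, 0.0, L[x] / lam)
y = rng.choice(3, size=n, p=probs)
mc = np.mean(a ** tau * phi[y])

print(mc, closed_form)
```

The simulated value should match the closed form to Monte Carlo accuracy, which supports the factorization argument.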
