
I’m trying to follow a line in a derivation for $P(Z>X+Y)$ where $X,Y,Z$ are independent continuous random variables distributed uniformly on $(0,1)$.

I’ve already derived the pdf of $X+Y$ using the convolution theorem, but there’s a line in the answer that says:

$P(Z>X+Y) = \mathbb{E}[ P(Z>X+Y \mid X+Y ) ]$ where $\mathbb{E}$ is the expectation.

I’m not familiar with this result. Could anyone give a pointer to a similar result if one exists?

Thanks.

$$\mathbb{P}(Z>X+Y)=\mathbb{E}[\mathbb{1}(Z>X+Y)]=\mathbb{E}[\mathbb{E}[\mathbb{1}(Z>X+Y)\mid X+Y]]=\mathbb{E}[\mathbb{P}(Z>X+Y\mid X+Y)],$$

where the second equality is the following property (the tower property) of conditional expectation:

$$\mathbb{E}[\mathbb{E}[X\mid Y]]=\mathbb{E}[X]$$

Intuitively, now that you know the distribution of $X+Y$, you just need to “range”$^1$ through the values of $X+Y$ and find the probability of $Z>X+Y$ for each such value. This is exactly the expectation of the probability.

$^1$integrate against the density, i.e. $\int_0^2\mathbb{P}(Z>v)f_{X+Y}(v)\,dv$
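As a sanity check (mine, not part of the answer), the conditioning identity can be verified numerically: a Monte Carlo estimate of $\mathbb{P}(Z>X+Y)$ should agree with a Riemann sum for $\int_0^2\mathbb{P}(Z>v)f_{X+Y}(v)\,dv$.

```python
import random

# Monte Carlo estimate of P(Z > X + Y) for X, Y, Z i.i.d. Uniform(0,1).
random.seed(0)
n = 200_000
hits = sum(random.random() > random.random() + random.random()
           for _ in range(n))
mc_estimate = hits / n

# Riemann sum for the conditioning integral. On [0, 1] the triangular
# density is f_{X+Y}(v) = v and P(Z > v) = 1 - v; for v > 1 we have
# P(Z > v) = 0, so only [0, 1] contributes.
steps = 10_000
dv = 1.0 / steps
integral = sum((1 - i * dv) * (i * dv) * dv for i in range(steps))

print(mc_estimate)  # close to 1/6
print(integral)     # close to 1/6
```

Both numbers land near $1/6$, the value derived geometrically and analytically below.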

This is not an answer to your question about the justification for the equation that is puzzling you, but the geometrical method described below for solving the problem may give you a different insight into the calculation of the desired probability $P\{Z > X+Y\}$.

The *random point* $(X,Y,Z)$ is uniformly distributed in the interior of the unit cube with diagonally opposite vertices $(0,0,0)$ and $(1,1,1)$. The cube has unit volume, and so the probability that $(X,Y,Z)$ lies in some region is just the volume of that region. Thus, $P\{Z > X+Y\}$ is the volume of the *tetrahedron* with vertices $(0,0,0)$, $(1,0,1)$, $(0,1,1)$, and $(0,0,1)$. If we think of this as an inverted pyramid whose base is the right triangle with vertices $(1,0,1)$, $(0,1,1)$, and $(0,0,1)$, and whose apex $(0,0,0)$ is at altitude $1$ “above” the base, then since the base has area $\frac{1}{2}$, we get the volume as

$$P\{Z > X+Y\} = \frac{1}{3}\times (\text{area of base})\times(\text{altitude}) = \frac{1}{3}\times \frac{1}{2}\times 1 = \frac{1}{6}.$$
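The tetrahedron volume can also be cross-checked (my addition, not the answer's) with the standard determinant formula $V = \lvert\det(B-A,\,C-A,\,D-A)\rvert/6$ for vertices $A,B,C,D$:

```python
# Volume of the tetrahedron with the four vertices from the geometric
# argument, via V = |det(B-A, C-A, D-A)| / 6.
A = (0, 0, 0)
B = (1, 0, 1)
C = (0, 1, 1)
D = (0, 0, 1)

def sub(p, q):
    return tuple(pi - qi for pi, qi in zip(p, q))

u, v, w = sub(B, A), sub(C, A), sub(D, A)
det = (u[0] * (v[1] * w[2] - v[2] * w[1])
       - u[1] * (v[0] * w[2] - v[2] * w[0])
       + u[2] * (v[0] * w[1] - v[1] * w[0]))
volume = abs(det) / 6
print(volume)  # 0.1666... = 1/6
```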

Of course, if you have already computed the density of $X+Y$, then it is straightforward to use the result given by Artiom Fiodorov to get

$$P\{Z > X+Y\} = \int_0^2 P(Z>v)f_{X+Y}(v)\,dv = \int_0^1(1-v)\cdot v\,dv = \left.\frac{v^2}{2}-\frac{v^3}{3}\right|_0^1 = \frac{1}{6}.$$
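The last evaluation can be reproduced in exact arithmetic (a small check of my own, not part of the answer):

```python
from fractions import Fraction

# The antiderivative of (1 - v) * v is v^2/2 - v^3/3;
# evaluate it between 0 and 1 exactly.
def F(v):
    v = Fraction(v)
    return v**2 / 2 - v**3 / 3

result = F(1) - F(0)
print(result)  # 1/6
```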

Placing the question in a larger context might help. Here is a result:

For every event $A$ in $(\Omega,\mathcal F,\mathbb P)$ and every sigma-algebra $\mathcal G\subseteq\mathcal F$, $\mathbb P(A)=\mathbb E(\mathbb P(A\mid \mathcal G))$.

To see this, recall that $U=\mathbb P(A\mid \mathcal G)$ is the unique (up to null events) random variable such that $\mathbb E(U;B)=\mathbb P(A\cap B)$ for every $B$ in $\mathcal G$. In particular, $B=\Omega$ yields $\mathbb E(U)=\mathbb P(A)$, as claimed above.

In your setting, $A=[Z\gt X+Y]$ and $\mathcal G$ is the sigma-algebra generated by the random variable $X+Y$, hence $\mathbb P(\,\cdot\mid \mathcal G)=\mathbb P(\,\cdot\mid X+Y)$ by definition.

A partial justification can be found in the Wikipedia entry on the *Law of Total Probability*.

I think your question is best understood using two discrete random variables. Suppose you have two random variables $X$ and $Y$ taking values in $\{0,1,2,\ldots\}$. Now you are asked to compute the probability of the event $A = \{X > Y\}$.

So,

$$
\begin{eqnarray}
P(A) &=& P(X>Y)
\end{eqnarray}
$$

Here both $X$ and $Y$ are random. To compute this probability we need the notion of conditional probability. Here it is:

$$
P(A \cap B) = P(A\mid B) \times P(B)
$$

Now, come to the original problem. We first fix the value of one random variable, say $Y = y$. Clearly, $y$ can be any value from $0,1,2,\ldots$, but $y$ can’t take these values simultaneously. Now we compute $P(X > y\mid Y = y)$ and $P(Y = y)$.

Hence the contribution to $P(X > Y)$ from the event $\{Y = y\}$ is $P(X > y\mid Y = y) \times P(Y = y)$. But we have one such probability for each possible value of $y$, and these events are mutually exclusive, because the occurrence of any one, say $y = 1$, prevents the occurrence of the others, i.e. $y = i$, $i \neq 1$. Therefore, to get the required probability, we sum the probabilities over these mutually exclusive events. Thus finally we get

$$
P(X > Y) = \sum_{y = 0}^{\infty} \left[P(X > y\mid Y = y) \times P(Y = y)\right]
$$

If you are familiar with the basic definition of the expectation of a random variable, then the previous expression is actually

$$
\begin{eqnarray}
P(X > Y) &=& \sum_{y = 0}^{\infty} \left[\text{value} \times \text{corresponding probability}\right]\\
P(X > Y) &=& E\left[P(X > Y\mid Y)\right]
\end{eqnarray}
$$

Now, to make this result suitable for a continuous variable, just replace the sum by an integral with respect to $y$ $(0 \leq y < \infty)$ and replace $P(Y = y)$ by $f_Y(y)$, the density function of $Y$ at the point $y$.
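The discrete sum formula above can be tried out on a concrete pair of distributions. As a small worked example (my choice of distribution, not from the answer), take $X, Y$ i.i.d. Geometric($p$) on $\{0,1,2,\ldots\}$, so $P(Y=y)=p(1-p)^y$ and $P(X>y)=(1-p)^{y+1}$; the sum then has the closed form $(1-p)/(2-p)$:

```python
# Total-probability sum P(X > Y) = sum_y P(X > y) P(Y = y) for
# X, Y i.i.d. Geometric(p) on {0, 1, 2, ...}, truncated at 200 terms
# (the tail is negligible), versus the closed form (1-p)/(2-p).
p = 0.5
total = sum((1 - p)**(y + 1) * p * (1 - p)**y for y in range(200))
closed_form = (1 - p) / (2 - p)
print(total, closed_form)  # both are 1/3 for p = 0.5
```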

I don’t know if this helps since Dilip has given the answer, but the distribution of $X+Y$ is triangular on $[0,2]$ (isosceles, with peak at $X+Y = 1$). So $P(Z>X+Y)$ is the probability that a uniform on $[0,1]$ is larger than the triangular random variable on $[0,2]$. If $X+Y>1$ then $Z$ cannot exceed $X+Y$, and the probability that $X+Y$ is greater than $1$ is $1/2$. Now this is where taking the expectation of the conditional probability helps in my proof.

$$P(Z>X+Y) = E[P(Z>X+Y\mid X+Y)] = \int_0^1 P(Z>u\mid X+Y=u)\,u\,du = \int_0^1 P(Z>u)\,u\,du,$$

where the factor $u$ is the triangular density $f_{X+Y}(u)=u$ on $[0,1]$, and the integral stops at $1$ because $P(Z>u)=0$ for $u>1$. The condition $X+Y=u$ gets dropped because $Z$ is independent of $X+Y$, and $P(Z>u)=1-u$ for $0\leq u\leq 1$.

Hence $P(Z>X+Y) = \int_0^1 u(1-u)\,du = 1/6$, just as Dilip showed.

Tagged : probability / probability-distributions / uniform-distribution / volume