Math Genius: If $\partial _x f$ and $\partial _y f$ exist, does $\frac{d}{dt}f(\alpha (t),\beta (t))=\alpha '(t)\partial _x f+\beta '(t)\partial _y f$?


Let $f=f(x,y)$ be such that $\partial _x f$ and $\partial _y f$ exist. If $\alpha$ and $\beta$ are differentiable, does $$\frac{d}{dt}f\big(\alpha (t),\beta (t)\big)=\alpha '(t)\partial _x f\big(\alpha (t),\beta (t)\big)+\beta '(t)\partial _y f\big(\alpha (t),\beta (t)\big) ?$$
The reason I'm asking is that, for example, $$\lim_{h\to 0}\frac{f(\alpha (t+h),u)-f(\alpha (t),u)}{\alpha (t+h)-\alpha (t)}=\partial _x f(\alpha (t),u),$$

whenever $u$ is fixed. But here, I'm not so sure why
$$\lim_{h\to 0}\frac{f(\alpha (t+h),\beta (t+h))-f(\alpha (t),\beta (t+h))}{\alpha (t+h)-\alpha (t)}=\partial _x f(\alpha (t),\beta (t))$$
without a stronger assumption such as differentiability.

You’re right to be suspicious. Take

$$f(x,y) = \begin{cases} x & y=0 \\ y & x=0 \\ 0 & \text{else} \end{cases}$$

Here, $f_x$ and $f_y$ exist at $(0,0)$ and both equal $1$. However, if we take $\alpha(t) = \beta(t) = t$, then $f(\alpha(t),\beta(t)) = f(t,t) = 0$ for all $t$, so

$$\frac{d}{dt}f(\alpha(t),\beta(t))\Biggr|_{t=0} = 0 \neq 2 = \alpha'(0)f_x(0,0) + \beta'(0)f_y(0,0)$$
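To see the failure numerically, here is a quick sanity check (a sketch; `f` below is the counterexample above, and the step size `h` is an arbitrary choice):

```python
def f(x, y):
    """The counterexample: x on the y=0 axis, y on the x=0 axis, 0 elsewhere."""
    if y == 0:
        return x
    if x == 0:
        return y
    return 0.0

h = 1e-6
fx = (f(h, 0) - f(0, 0)) / h   # difference quotient for f_x(0,0) -> 1.0
fy = (f(0, h) - f(0, 0)) / h   # difference quotient for f_y(0,0) -> 1.0
df = (f(h, h) - f(0, 0)) / h   # derivative of t -> f(t,t) at 0   -> 0.0

print(fx, fy, df)  # 1.0 1.0 0.0, so the chain-rule formula fails here
```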


Math Genius: Stability conditions of discrete dynamic systems


I have a system of 6 difference equations with three state variables. Let $X_t = [x_{1t}, x_{2t}, x_{3t}, x_{4t}, x_{5t}, x_{6t}]^T$. The general form of the problem is $g(X_{t+1}, X_t) = 0$. The log-linear form of this system around the steady state $X^*$ is:
\begin{equation}
X_{t+1} - X^* = M \cdot (X_t - X^*)
\end{equation}

The eigenvalues of $M$ are $-32.43$, $0.56 \pm 0.94i$, $0.98 \pm 0.12i$ and $0.94$. By the stability condition for discrete-time systems ($|\lambda|<1$), there are three stable roots and the system is saddle-point stable.

If, however, I set up the system in a "differential" form:
\begin{equation}
X_{t+1} - X_t = (M - I) \cdot (X_t - X^*) = N \cdot (X_t - X^*)
\end{equation}

The eigenvalues of $N$ are $-33.43$, $-0.44 \pm 0.94i$, $-0.02 \pm 0.12i$ and $-0.06$. Now, if we apply the stability condition for continuous-time systems ($\mathrm{Re}(\lambda)<0$), all six roots are stable and the system has multiple paths to the steady state.

How do I explain the difference in the predictions of the two forms of the same model? Which is the right way to look at the problem?

I will write $0$ both for the scalar zero and for the vector consisting of $0$s.

We are going to assume in this discussion that $X^*$ is zero (which can be justified using a change of variables).

Now let us analyse the first equation:

If you choose $X_0$ to be one of the nice eigenvectors (whose corresponding eigenvalue $\lambda$ has modulus less than $1$), then $X_1= \lambda X_0$, $X_2= \lambda^2 X_0$, and so on. It follows in particular that $X_t$ goes to zero as $t$ goes to infinity.

If $X_0$ is a linear combination of these three eigenvectors (those whose eigenvalues have modulus less than $1$), then the corresponding $X_t$ goes to zero rapidly. However, if there is even a tiny perturbation which takes $X_0$ outside this space, then the solution instead goes to infinity.

Now let us consider the second equation:

Given $X_0$, I do not see a method of finding the solution other than converting the equation into the previous form.

I think the confusion arose because differential and difference equations have been treated in the same way. The corresponding differential equation has a general solution of the form:

$X(t)= e^{Nt} X(0)$. The eigenvalues of $e^N$ determine the stability of solutions. For instance, if all the eigenvalues of $N$ have negative real part, then all the eigenvalues of $e^N$ have modulus less than $1$, and $X(t)$ must go to zero irrespective of what $X(0)$ is.

However, I do not see a way of proving that there is such a solution when we replace the differential equation by a difference equation, and hence I can't see how to use the eigenvalue condition $\mathrm{Re}(\lambda)<0$ here.
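The point that iterating $X_{t+1} = M X_t$ is governed by the modulus $|\lambda|$ of the eigenvalues of $M$, not by the sign of $\mathrm{Re}(\lambda)$ for $N = M - I$, can be sketched with a hypothetical $2\times 2$ stand-in for the $6\times 6$ system (the matrix below is my own illustration, not the model's):

```python
import numpy as np

# Hypothetical 2x2 stand-in: one eigenvalue outside the unit circle (-1.5),
# one inside (0.5).  The -1.5 plays the role of the -32.43 root above.
M = np.array([[-1.5, 0.0],
              [0.0,  0.5]])
N = M - np.eye(2)

print(np.linalg.eigvals(M))  # one eigenvalue has modulus greater than 1
print(np.linalg.eigvals(N))  # yet both eigenvalues of N have negative real part

# Iterating the difference equation from a generic X_0 still blows up,
# so the discrete criterion |lambda| < 1 is the one that matters:
X = np.array([1.0, 1.0])
for _ in range(50):
    X = M @ X
print(np.linalg.norm(X))  # huge: the modulus-1.5 direction dominates
```

So the continuous-time condition applied to $N$ gives the wrong prediction for a difference equation, exactly as in the question.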

I hope this helps!

Nishant.


Math Genius: Stuck in a finite summation in a physics problem: $\sum r^3/(r-1)^2$


Actually it was not a hard problem from the physics point of view, just an application of Coulomb's law (a multiple-charge system), but it finally ended up as the summation:
$$
\sum_{r=2}^n \frac{(r+1)^3}{r^2}
$$

I tried to expand things, but that led me to the sums of $\frac{1}{r^2}$ and $\frac{1}{r}$, and I know they can be dealt with, but it's too much for my level I guess (as I am a JEE aspirant). Is there any other way to evaluate the summation?

We have:
$$\frac{(r+1)^3}{r^2}=\frac{r^3+3r^2+3r+1}{r^2}=r+3+\frac{3}{r}+\frac{1}{r^2}$$
Thus:
$$\sum_{r=2}^n \frac{(r+1)^3}{r^2}=\sum_{r=2}^n \bigg( r+3+\frac{3}{r}+\frac{1}{r^2} \bigg)$$
$$\sum_{r=2}^n \frac{(r+1)^3}{r^2}=\sum_{r=2}^n r +\sum_{r=2}^n 3 +3 \sum_{r=2}^n \frac{1}{r} + \sum_{r=2}^n \frac{1}{r^2}$$
$$\sum_{r=2}^n \frac{(r+1)^3}{r^2}=\bigg( \frac{n(n+1)}{2}-1 \bigg) +3(n-1) +3 \sum_{r=2}^n \frac{1}{r} + \sum_{r=2}^n \frac{1}{r^2}$$
$$\sum_{r=2}^n \frac{(r+1)^3}{r^2}=\frac{n^2+7n-8}{2}+(3H_n-3)+(H_n^{(2)}-1)=\frac{n^2+7n-16}{2}+3H_n+H_n^{(2)}$$

Now, there are no simple closed forms for $H_n$ and $H_n^{(2)}$, but for large $n$, you can approximate:
$$\lim_{n \to \infty} \left(H_n - \ln{n}\right) = \gamma = 0.577 \ldots \implies H_n \approx \ln{n}+ \gamma$$
$$\lim_{n \to \infty} H_n^{(2)} = \frac{\pi^2}{6} \implies H_n^{(2)} \approx \frac{\pi^2}{6}$$
Hence, we have:
$$\sum_{r=2}^n \frac{(r+1)^3}{r^2} \approx \frac{n^2+7n-16}{2}+3 \ln{n} + 3 \gamma + \frac{\pi^2}{6}$$

As $n$ gets really large, the difference between the LHS and RHS tends to $0$. This shows that we have a really good approximation.
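As a sanity check, the exact form with harmonic numbers and the large-$n$ approximation can be compared numerically (a sketch; the function names are my own):

```python
import math

def lhs(n):
    """Direct evaluation of the sum."""
    return sum((r + 1) ** 3 / r ** 2 for r in range(2, n + 1))

def exact(n):
    """(n^2 + 7n - 16)/2 + 3 H_n + H_n^(2)."""
    H1 = sum(1 / r for r in range(1, n + 1))        # H_n
    H2 = sum(1 / r ** 2 for r in range(1, n + 1))   # H_n^(2)
    return (n * n + 7 * n - 16) / 2 + 3 * H1 + H2

def approx(n):
    """Large-n approximation using ln n, the Euler-Mascheroni gamma, pi^2/6."""
    gamma = 0.5772156649015329
    return (n * n + 7 * n - 16) / 2 + 3 * math.log(n) + 3 * gamma + math.pi ** 2 / 6

n = 1000
print(lhs(n) - exact(n))    # ~0 (floating-point noise only)
print(lhs(n) - approx(n))   # small, and shrinking as n grows
```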


Math Genius: Showing extension ring of a field is an integral domain


Let $F \subset K$ both be fields and let $S \subset K$ be a non-empty subset of $K$. How do I show that $F[S]$ is an integral domain? The case of finite $S$ is easy, because $F[S]$ is then the image of the evaluation homomorphism of a polynomial ring over $F$. However, I'm pretty clueless in the case of infinite $S$. Is there anything to be said about the structure of $F[S]$ in general?

$F[S]$ is a ring extension of $F$. The elements of $F[S]$ can be viewed as multivariate polynomial expressions $$\sum_\alpha c_\alpha x_1^{\alpha_1}\cdots x_n^{\alpha_n},$$ where $c_\alpha\in F$ and $x_1,\ldots,x_n\in S$, $\alpha_1,\ldots,\alpha_n\geq 0$, where $n\geq 0$ also varies.

Each such element belongs to $K$ and so $F[S]$ cannot have zero divisors.


Math Genius: Proving that every finitely generated vector space has a basis.


I have tried to research this problem, and I have tried to come up with a proof myself, but unfortunately I can't think of anything that makes sense. I am having trouble with parts (2) and (4) of this problem. I would really appreciate any help or clarification on how to begin solving it.

Question:
Prove that every finitely generated vector space has a
basis. In fact, every vector space has a basis, but the proof of that is beyond the scope of this course. Let V be a non-zero finitely generated vector space.

(1) Let w, v1,…,vn be in V . Prove w is in Span(v1,…, vn) if and only if Span(w, v1,…, vn) = Span(v1,…, vn).

(2) Prove, using part (1) that there exists a finite set X with nonzero members such that V = SpanX.

(3) Suppose V is spanned by a finite set of cardinality n. By part (2) we can assume 0 not in X. Our claim is V has a basis. We will use induction on n. Prove that the induction base holds. That is,
if n = 1, then V has a basis.

(4) Induction hypothesis: suppose that V has a basis whenever it is spanned by a set of cardinality n. Now suppose V is spanned by a set of cardinality n + 1. Prove that V has a basis.

(2) We are assuming that $V$ is a non-zero finitely generated vector space. What this means is that $V\neq\{0\}$ and that there is a finite set $X=\{v_1,\ldots,v_n\}$ which spans $V$. If $0\notin X$, you're done. Otherwise, if, say, $v_1=0$, then, by (1), $V=\operatorname{span}\bigl(\{v_2,\ldots,v_n\}\bigr)$ and $\{v_2,\ldots,v_n\}$ is a set of non-zero vectors.

(4) Suppose that $V$ is spanned by $\{v_1,v_2,\ldots,v_{n+1}\}$. There are two possibilities:

  • the set $\{v_1,v_2,\ldots,v_{n+1}\}$ is linearly dependent. Then one of the vectors belongs to the span of the others. You can assume without loss of generality that $v_{n+1}\in\operatorname{span}\bigl(\{v_1,v_2,\ldots,v_n\}\bigr)$. Therefore, $V=\operatorname{span}\bigl(\{v_1,v_2,\ldots,v_n\}\bigr)$. But then, by the induction hypothesis, $V$ has a basis.
  • the set $\{v_1,v_2,\ldots,v_{n+1}\}$ is linearly independent. But then it is itself a basis.
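The induction in (4) is effectively an algorithm: keep deleting a vector that lies in the span of the others until what remains is independent. Here is a sketch in numpy (the rank test and the name `extract_basis` are my own choices, not part of the problem):

```python
import numpy as np

def extract_basis(vectors):
    """Shrink a finite spanning set of R^m vectors to a basis of its span."""
    vecs = [v for v in vectors if np.any(v)]  # step (2): discard zero vectors
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            rest = vecs[:i] + vecs[i + 1:]
            # vecs[i] is in the span of the others iff dropping it keeps the rank
            if rest and np.linalg.matrix_rank(np.array(rest)) == np.linalg.matrix_rank(np.array(vecs)):
                vecs = rest
                changed = True
                break
    return vecs  # linearly independent and spans the same subspace: a basis

# Spanning set of R^2 containing a zero vector and a dependency:
S = [np.array([0., 0.]), np.array([1., 0.]), np.array([2., 0.]), np.array([0., 1.])]
B = extract_basis(S)
print(len(B))  # 2: a basis of R^2 remains
```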

Math Genius: How to solve $\int_0^2 \int_0^{\sqrt{4-x^{2}}} \int_0^{\sqrt{4-x^2 -y^2}} z \sqrt{4-x^2 -y^2} \, dz \, dy \, dx$ in spherical coordinates


$$\int_0^2 \int_0^{\sqrt{4-x^{2}}} \int_0^{\sqrt{4-x^2 -y^2}} z \sqrt{4-x^2 -y^2} \, dz \, dy \, dx$$

The task is to evaluate this integral using spherical coordinates. After changing variables, I got
$$ \int _0^{\frac{\pi }{2}}\int _0^{\frac{\pi }{2}}\int _0^2\left(\rho \cos\left(\phi \right)\sqrt{4-\rho ^2\sin^2\left(\phi \right)}\right)\rho ^2\sin\left(\phi \right)\,d\rho \,d\theta \,d\phi
$$

which I think is pretty ugly because of $\sqrt{4-\rho ^2\sin^2\left(\phi \right)}$. Did I do something wrong in the change of variables? If not, what are the approaches to evaluating this integral?

The integral simplifies like so:

$$\int_0^{2}\int_0^{\frac{\pi}{2}}\int_0^{\frac{\pi}{2}}\rho^3\sin\phi\cos\phi\sqrt{4-\rho^2\sin^2\phi}\:d\theta\:d\phi\:d\rho = \frac{\pi}{4}\int_0^{2}\int_0^{\frac{\pi}{2}} \rho\sqrt{4-\rho^2\sin^2\phi} \:d(\rho^2\sin^2\phi)\:d\rho$$

$$= \frac{\pi}{6}\int_0^2 -\rho \left[4-\rho^2\sin^2\phi\right]^{\frac{3}{2}}\biggr|_0^{\frac{\pi}{2}}\:d\rho = \frac{\pi}{6}\int_0^2 8\rho-\rho(4-\rho^2)^{\frac{3}{2}}\:d\rho = \frac{8\pi}{5}$$
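A numerical cross-check of the value $\frac{8\pi}{5}\approx 5.0265$, done in the original Cartesian coordinates (a sketch using scipy's adaptive quadrature; the clamp to $0$ only guards against tiny negative arguments under the square roots):

```python
import numpy as np
from scipy import integrate

# Integrand and limits exactly as in the original Cartesian integral.
# Note scipy's tplquad calls the integrand in (z, y, x) order.
val, err = integrate.tplquad(
    lambda z, y, x: z * np.sqrt(np.maximum(4 - x**2 - y**2, 0.0)),
    0, 2,                                             # x from 0 to 2
    0, lambda x: np.sqrt(4 - x**2),                   # y up to sqrt(4 - x^2)
    0, lambda x, y: np.sqrt(np.maximum(4 - x**2 - y**2, 0.0)),  # z limit
)
print(val, 8 * np.pi / 5)  # both approximately 5.0265
```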



Math Genius: Distributing values into a histogram


Let $b_1(n), b_2(n), \ldots, b_k(n)$ be the heights of the $k$ bars of a fixed-width histogram with $n$ data points. Assume $k$ is constant, and $n\geq k$ is an integer which increases as we collect new data. Let $d_n$ be the value received at time $n$. I would like to calculate the height $b_i(n)$ for each $i = 1, 2, \ldots, k$, with the data uniformly scaled to fit the width.

For example, in the case $k = 3$ and $n=6$, each bar $b_i(6)$ will consist of an average of two data values: $$b_1(6) = \frac{1}{2}(d_1 + d_2),\quad b_2(6) = \frac{1}{2}(d_3 + d_4), \quad\text{and } b_3(6) = \frac{1}{2}(d_5 + d_6).$$

Continuing the example with $k=3$, if $n=7$, I think we should set:
$$b_1(7) = \frac{3}{7}d_1+\frac{3}{7}d_2 + \frac{1}{7}d_3 $$
$$b_2(7) = \frac{2}{7}d_3+\frac{3}{7}d_4 + \frac{2}{7}d_5 $$
$$b_3(7) = \frac{1}{7}d_5+\frac{3}{7}d_6 + \frac{3}{7}d_7 $$

The problem is to find a general formula to calculate $b_i(n+1)$ as a function of the constant $k$ and the values $n, d_1, \ldots, d_{n+1}$.

We can take a cue from an old joke: to quickly count the sheep in a field, count their legs and divide by four.

Assume that we have the values $c_1,\dots, c_{nk}$ such that
$$c_1=c_2=\dots=c_k=d_1,$$
$$c_{k+1}=c_{k+2}=\dots=c_{2k}=d_2,$$
$$\dots$$
$$c_{(n-1)k+1}= c_{(n-1)k+2}=\dots=c_{nk}=d_n.$$

Then for each $i=1,2,\dots, k$ we have $$nb_i(n)=c_{(i-1)n+1}+c_{(i-1)n+2}+\dots+c_{in}.$$

We can calculate $b_i(n)$ in terms of the $d_j$ as follows. Let $(i-1)n=jk+r$ and $in=j'k+r'$, where $j$ and $j'$ are integers and $0<r,r'<k$. Then $$nb_i(n)=(k-r)d_{j+1}+k\sum_{\ell=j+2}^{j'} d_\ell+r'd_{j'+1}.$$
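The "count the legs" construction translates directly into code. A sketch (the helper name `bars` is my own) that reproduces the $k=3$, $n=7$ weights from the question by replicating each $d_j$ exactly $k$ times and averaging consecutive runs of length $n$:

```python
from fractions import Fraction

def bars(d, k):
    """Heights b_1(n)..b_k(n): replicate each d_j k times, average runs of n."""
    n = len(d)
    c = [x for x in d for _ in range(k)]  # c_1..c_{nk}, each d_j repeated k times
    return [sum(c[(i - 1) * n:i * n], Fraction(0)) / n for i in range(1, k + 1)]

d = [Fraction(j) for j in (1, 2, 3, 4, 5, 6, 7)]  # placeholder data d_1..d_7
b1, b2, b3 = bars(d, 3)
print(b1)  # (3*d1 + 3*d2 + d3)/7 = 12/7, matching the weights above
```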


Math Genius: Find generating function for the numeric function.


The numeric function is:
$$0\cdot5^0,\ 1\cdot5^1,\ 2\cdot5^2,\ \ldots,\ r\cdot5^r,\ldots$$

My solution is:

$$\begin{align*}
&\frac1{1-5z}=1+5z+(5z)^2+\ldots\\
&\frac{d}{dz}\left(\frac1{1-5z}\right)= 0 + 1\cdot5 + 2\cdot 5^2 \cdot z +\ldots& [\text{differentiating wrt }z]
\end{align*}$$

Multiplying the above by $z$, we get the generating function, which can be written as:

$$z\cdot\frac{d}{dz}\left(\frac1{1-5z}\right) = \frac{5z}{(1-5z)^2}$$

But the answer given in the answer book is:

$$\frac{z}{5(1-(z/5))^2}\;.$$

Is my solution wrong?

Also, please provide solution for these numeric functions as well:

  1. $1, -2, 3, -4, 5,ldots$

  2. $1, \frac23,\frac39,\frac4{27},\ldots,\frac{r+1}{3^r},\ldots$

  3. $1,1,2,2,3,3,4,4,ldots$

  4. $0cdot1, 1cdot2, 2cdot3, 3cdot4,ldots$

$$A(z)= 0\cdot 5^0 + 1\cdot 5^1\, z + 2\cdot 5^2\, z^2 + \ldots$$
You can take $5z$ out of the equation.
It'll turn out to be like
$$A(z)=5z\left[1 + 2(5z)+3(5z)^2+\ldots\right]$$
Using the series expansion
$$(1-x)^{-2} = 1 +2x +3x^2 + 4x^3 +\ldots$$
with $x = 5z$, we end up with
$$A(z)=5z(1-5z)^{-2}$$
Therefore,
$$A(z)=\frac{5z}{(1-5z)^2}$$
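A quick arithmetic check (a sketch) that $\frac{5z}{(1-5z)^2}$ really generates $r\cdot 5^r$: expand $\frac1{(1-5z)^2}=\sum_{r\ge 0}(r+1)(5z)^r$, multiply by $5z$ (which shifts and scales the coefficients), and compare:

```python
N = 8
# coefficients of 1/(1-5z)^2: the coefficient of z^r is (r+1)*5^r
inv_sq = [(r + 1) * 5 ** r for r in range(N)]
# multiplying by 5z shifts the sequence by one place and scales by 5
A = [0] + [5 * a for a in inv_sq[:N - 1]]
print(A[:5])                           # [0, 5, 50, 375, 2500]
print([r * 5 ** r for r in range(5)])  # the target sequence r*5^r: identical
```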


Math Genius: Expected value of sum of different geometric random variables


$n$ players play a game where they flip a fair coin simultaneously. If a player has a different result than all the others, that player is out, and then the remaining $n - 1$ players continue until there are two players left, and they are the winners. For example, for $n=3$, the result $(H,T,T)$ makes the first player lose and the other two win, and $(H,H,H)$ will make them toss again.

I'll define the variable: $$Y =\text{number of rounds until there are two players left out of } n$$

I’m looking for $E(Y), VAR(Y)$. What I did was:

Define a random variable $$X_i = \text{number of rounds until one player out of } i \text{ is out}$$

so it follows that $X_i \sim \mathrm{Geo}\left(2\cdot\frac{i}{2^{i}} = \frac{i}{2^{i-1}}\right)$, since we have to choose a player and a value for the coin, and
$$Y =\sum_{i=3}^{n}X_i$$
$$E(Y) = E\left(\sum_{i=3}^{n}X_i\right)=\sum_{i=3}^{n}E(X_i)=\sum_{i=3}^{n}\frac{2^{i-1}}{i}$$
$$VAR(Y) = \sum_{i=3}^{n}VAR(X_i) = \sum_{i=3}^{n} \dfrac{\dfrac{2^{i-1}-i}{2^{i-1}}}{\dfrac{i^2}{2^{2(i-1)}}} = \sum_{i=3}^{n} \dfrac{2^{i-1}(2^{i-1}-i)}{i^2}$$

Is there a closed form solution to this problem?
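A Monte Carlo sanity check of the expectation is possible (a sketch; note that $E(X_i)=1/p_i=2^{i-1}/i$, so for $n=4$ the sum gives $\frac43+2=\frac{10}{3}$):

```python
import random

def rounds_until_two(n):
    """Simulate one game: count rounds (ties included) until 2 players remain."""
    players, rounds = n, 0
    while players > 2:
        rounds += 1
        heads = sum(random.randrange(2) for _ in range(players))
        if heads == 1 or heads == players - 1:
            players -= 1  # exactly one player differed from the rest: out
    return rounds

random.seed(0)
n, trials = 4, 20000
sim = sum(rounds_until_two(n) for _ in range(trials)) / trials
exact = sum(2 ** (i - 1) / i for i in range(3, n + 1))
print(sim, exact)  # simulated mean is close to 10/3
```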
