Math Genius: How to prove $f$ is Lipschitz?


Define $$f(t):=\int_{-t}^{t} g(t, y)\,dy$$ where $g(t,y)$ is Lipschitz and $g(t,y)=\phi(x(t,y))=x^2/(1+x^2)$ is a smooth function. How can I prove that $f$ is Lipschitz?

I tried to use the mean value theorem for integrals:
$$\left|f(t_1)-f(t_2)\right|\leq\left|\int_{-t_1}^{t_1} g(t,y)\,dy-\int_{-t_2}^{t_2}g(t,y)\,dy\right|\leq\left|2t_1g(\xi,y)-2t_2g(\phi,y)\right|$$ where $\xi\in(-t_1,t_1)$ and $\phi\in(-t_2,t_2)$.

Why is the last term bounded by $K|t_1-t_2|$ for a constant $K$?

This is not true. Take $g(t,x)=x^{2}$. This is smooth but $f(t)=2t^{3}/3$ is not Lipschitz.
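
A quick numerical sanity check of this counterexample (a minimal Python sketch; the helper function is illustrative): the difference quotients of $f(t)=2t^3/3$ grow without bound, so no single Lipschitz constant can work on all of $\mathbb{R}$.

```python
# Sketch: f(t) = 2*t**3/3 (from the counterexample g(t, x) = x**2) has unbounded
# difference quotients, so no single Lipschitz constant K works on all of R.
def f(t):
    return 2 * t**3 / 3

h = 0.01
for t in [1, 10, 100, 1000]:
    quotient = abs(f(t + h) - f(t)) / h   # roughly f'(t) = 2*t**2
    print(t, round(quotient, 1))
# The quotients grow like 2*t**2, hence f is not globally Lipschitz.
```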


Server Bug Fix: Showing that Lebesgue Dominated convergence theorem is false in case of Riemann integration.


I was reading Tom Apostol's book "Mathematical Analysis" and I came across this statement: the Lebesgue Dominated Convergence Theorem is false in the case of Riemann integration.

Here is the statement of LDCT:

If $\{f_n\}$ is a sequence of Lebesgue-integrable functions on an interval $I$ converging almost everywhere to a function $f$, and if $|f_n| \leq g$ almost everywhere on $I$ for all $n$, where $g$ is Lebesgue-integrable on $I$, then $f$ is Lebesgue-integrable on $I$ and $\int_I f_n \to \int_I f$.

My question is:

Could someone give me an example that shows that LDCT is false in the case of Riemann integration, please?

Every Riemann-integrable function is Lebesgue-integrable, so the only way in which the DCT could possibly fail for Riemann-integrable functions is in concluding that the limit function is Riemann-integrable. It is not too hard to cook up an example of a (dominated) sequence of Riemann-integrable functions whose limit is not Riemann-integrable:

Let $\{r_1,r_2,\dotsc,r_n,\dotsc\}$ be an enumeration of $\mathbb{Q}\cap[0,1]$. Consider the sequence of functions $f_n=1_{\{r_1,\dotsc,r_n\}}$ on $[0,1]$. Each $f_n$ has only finitely many discontinuities, so is Riemann-integrable. Furthermore, the sequence $\{f_n\}_n$ is obviously dominated by the constant $1$ function, which is Riemann-integrable. However, the sequence $\{f_n\}_n$ converges everywhere on $[0,1]$ to the function $1_{\mathbb{Q}\cap[0,1]}$, which is a standard example of a non-Riemann-integrable function, hence the DCT fails.
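
To make the failure concrete, here is a small Python sketch (the enumeration of the rationals by increasing denominator is an assumed choice): the upper Darboux sums of each $f_n$ shrink to $0$ as the partition is refined, while for the limit $1_{\mathbb{Q}\cap[0,1]}$ every cell of every partition contains both rationals and irrationals, so the upper and lower sums are stuck at $1$ and $0$.

```python
from fractions import Fraction
from itertools import count

def first_rationals(n):
    """First n rationals in [0, 1], enumerated by increasing denominator (an assumed enumeration)."""
    seen, out = set(), []
    for q in count(1):
        for p in range(q + 1):
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                out.append(r)
                if len(out) == n:
                    return out

def upper_darboux_sum(points, m):
    """Upper Darboux sum of the indicator of a finite point set over a uniform partition of [0, 1] into m cells."""
    cells = set()
    for r in points:
        k = int(r * m)                 # index of a cell containing r
        cells.add(min(k, m - 1))
        if r * m == k and k > 0:       # partition points belong to two closed cells
            cells.add(k - 1)
    return Fraction(len(cells), m)

pts = first_rationals(50)              # the points where f_50 equals 1
for m in [10, 100, 1000, 10000]:
    print(m, float(upper_darboux_sum(pts, m)))
# The upper sums tend to 0 (the lower sums are already 0), so each f_n is Riemann
# integrable with integral 0.  For the limit 1_{Q ∩ [0,1]}, every cell of every
# partition contains rationals and irrationals, so every upper sum is 1 and every
# lower sum is 0: the limit is not Riemann integrable.
```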

A bounded function defined on a compact interval is Riemann integrable iff its set of discontinuities has Lebesgue measure zero. Let $P$ be a fat Cantor set, which has positive Lebesgue measure.

Construct a sequence $(f_{n})_{n = 1}^{\infty}$ on $[0, 1]$ such that $f_{n}(x) = \max(0,\; 1 - n \cdot \operatorname{dist}(x, P))$ for all $n \in \mathbb{N}$. Each $f_{n}$ is continuous, and thus Riemann integrable, by the continuity of the $\max$ and $\operatorname{dist}$ functions.

By construction, $|f_{n}| \leq 1$ for each $n$. However, the pointwise limit is the indicator function $1_{P}$, which is not even Riemann integrable, as its set of discontinuities is $P$.
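
Here is a Python sketch of one common fat Cantor (Smith–Volterra–Cantor) construction, removing middle pieces of length $4^{-k}$ at step $k$ (the removal lengths and helper name are assumed choices, not taken from the answer). The covering intervals have total length tending to $1/2$, the positive measure of $P$, and the integrals $\int_0^1 f_n$ are squeezed down to that value while the pointwise limit $1_P$ is discontinuous at every point of $P$.

```python
# Sketch (assumed construction): a Smith-Volterra-Cantor ("fat Cantor") set, obtained by
# removing, at step k, an open middle interval of length 4**(-k) from each remaining
# closed interval.  Total removed length = sum_{k>=1} 2**(k-1) * 4**(-k) = 1/2,
# so the resulting set P has Lebesgue measure 1/2 > 0.
from fractions import Fraction

def fat_cantor_cover(levels):
    intervals = [(Fraction(0), Fraction(1))]
    for k in range(1, levels + 1):
        gap = Fraction(1, 4 ** k)
        nxt = []
        for lo, hi in intervals:
            mid = (lo + hi) / 2
            nxt.append((lo, mid - gap / 2))
            nxt.append((mid + gap / 2, hi))
        intervals = nxt
    return intervals

for levels in [2, 4, 8, 12]:
    cover = fat_cantor_cover(levels)
    print(levels, float(sum(hi - lo for lo, hi in cover)))   # decreases toward 1/2
# Since 1_P <= f_n <= 1 on the 1/n-neighbourhood of P and f_n = 0 elsewhere, the
# integrals of f_n are squeezed down to the measure of P (= 1/2), while the limit
# 1_P fails the Riemann criterion: its set of discontinuities is P, of positive measure.
```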


Math Genius: If $\exp(t(A + B)) = \exp(tA) \exp(tB)$ for all $t \geq 0$ then $A,B$ commute


Let $A,B$ be complex-valued square matrices. If $\exp(t(A + B)) = \exp(tA) \exp(tB)$ for all $t \geq 0$, then $A,B$ commute.

The converse of this statement can be an easy application of the Cauchy product rule and the binomial theorem.

Note that this statement doesn't hold if we restrict ourselves to $t = 1$.

So far I have been trying to use the fact that $A$ and $B$ are the infinitesimal generators of the semigroups $\{\exp(tA)\}$ and $\{\exp(tB)\}$, but I have had no success. Do you have any other hints?


Based on the idea of @Did, I came up with the following:

Series expansions give me:
$$
\sum_{n = 0}^\infty \frac{t^n(A + B)^n}{n!} = I + tA + tB + \frac{t^2(AB + BA)}{2} + \sum_{n = 3}^\infty \frac{t^n(A + B)^n}{n!}
$$
and
$$
\left(\sum_{n = 0}^\infty \frac{t^n A^n}{n!} \right) \left(\sum_{n = 0}^\infty \frac{t^n B^n}{n!} \right)
= I + tA + tB + \frac{t^2A^2}{2} + t^2AB + \frac{t^2B^2}{2} + \sum_{n = 3}^\infty t^n c_n,
$$
where
$$
c_n := \sum_{k = 0}^n \frac{A^k B^{n - k}}{k!\,(n - k)!}.
$$

The comparison of both expansions gives
$$
\frac{t^2(AB + BA)}{2} + \sum_{n = 3}^\infty \frac{t^n(A + B)^n}{n!}
= t^2AB + \sum_{n = 3}^\infty t^n c_n.
$$
Dividing by $t^2$ (for $t > 0$) yields:
$$
\frac{AB + BA}{2} + \sum_{n = 3}^\infty \frac{t^{n-2}(A + B)^n}{n!}
= AB + \sum_{n = 3}^\infty t^{n-2} c_n.
$$
But I can't quite see why the two sums $\sum_{n = 3}^\infty \dots$ should go to $0$ for $t \to 0$, yielding the desired equality
$$
\frac{AB + BA}{2}
= AB .
$$

Let us write $T(t)=e^{t(A+B)}$ and $S(t)=e^{tA}e^{tB}$. Then, using (for both) the product rule and (for $S(t)$) the fact that the generator commutes with the semigroup,

$$\frac{d}{dt}T(t)=(A+B)T(t),\quad \frac{d}{dt}S(t)=AS(t)+S(t)B$$
$$\frac{d^2}{dt^2}T(t)=(A+B)^2T(t),\quad \frac{d^2}{dt^2}S(t)=A^2S(t)+2AS(t)B+S(t)B^2\tag{1}$$

Since $T(t)=S(t)$ for all $t\geq 0$, we have
$$\frac{d^2}{dt^2}T(t)=\frac{d^2}{dt^2}S(t),\quad\forall t\geq 0$$
and thus, from $(1)$,
$$(AB+BA)S(t)+B^2S(t)=2AS(t)B+S(t)B^2,\quad\forall t\geq 0.$$
In particular, for $t=0$,
$$AB+BA=2AB$$
and the desired result follows.

Remark: This solution follows the hint in Engel’s book, page 23.
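
A quick numerical check of both directions (a sketch using numpy and scipy.linalg.expm; the $2\times 2$ test matrices are arbitrary choices), including the second-derivative comparison at $t=0$ used above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # AB != BA

def gap(X, Y, t):
    """Frobenius norm of exp(t(X+Y)) - exp(tX) exp(tY)."""
    return np.linalg.norm(expm(t * (X + Y)) - expm(t * X) @ expm(t * Y))

# For this non-commuting pair the identity fails for generic t > 0 ...
print([round(gap(A, B, t), 4) for t in (0.5, 1.0, 2.0)])

# ... while for a commuting pair (here Y = X) it holds for every t.
print([round(gap(A, A, t), 12) for t in (0.5, 1.0, 2.0)])

# Consistency with the proof: the second derivative of D(t) = T(t) - S(t) at t = 0
# is BA - AB, approximated here by a central second difference (D(0) = 0, D'(0) = 0).
D = lambda t: expm(t * (A + B)) - expm(t * A) @ expm(t * B)
h = 1e-3
print(np.round((D(h) + D(-h)) / h**2, 3))   # ~ [[-1, 0], [0, 1]]
print(B @ A - A @ B)                         #   [[-1, 0], [0, 1]]
```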


Math Genius: Conceptual issue with indefinite integrals and the meaning of $dx$


Before I begin. I’ve read many posts discussing what $dx$ is in an integral, and none of them answer the question I’m about to ask. I am writing this because I don’t want my post to be labelled as a duplicate right away. Anyway, let the reader decide whether this is a duplicate or not.

This is the definition of a differential of a function in my textbook:

$$
\lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} = \frac{df}{dx} = f'(x).
$$

The textbook says that multiplying both sides by $dx$, we get

$$
df = f'(x)dx.
$$

I already have conceptual issues with this definition. In the textbook, it is emphasized that $\Delta x = dx$. But since the limit where $\Delta x \to 0$ is the fraction $\frac{df}{dx}$ by the above equation, how else am I supposed to interpret $dx$ other than as a $\Delta x$ which has arrived at $0$, and therefore $dx = 0$? I did a bit of research and found out that Leibniz (who originally conceived of a derivative as the above fraction) named $dx$ and $df$ "infinitesimals". I don't know what to make of them; they seem nonsensical to me. To me it seems that the above equation is saying that we multiply $f'(x)$ with some infinitesimally small $\Delta x$, which is equal to $dx$, and get $df$. How is multiplication by infinitesimals defined, if it even is?

Anyway, the reason I’m focusing on $df$ right now is because my textbook uses it to define the indefinite integral. It says that differentiation is the inverse function of integration. In other words

$$
\int dF(x) = \int F'(x) \; dx = F(x) + C.
$$

My issue here is that I don’t understand the role of $dx$ in the integral. That $dF$ is equal to $F'(x)$ times $dx$, where $dx$ is an infinitesimal seems wholly nonsensical to me, as I don’t understand how multiplication by infinitesimals is defined (if it even is), as I’ve already said above.

Even worse is the fact that my textbook admits literal multiplication of $dx$ with $F'(x)$ with the following notation (taken from one of the exercises).

$$
\int dx = \int 1 \cdot dx.
$$

At first, I thought that I could just disregard $dx$ as a trivial notational convention (it marks the end of the integrand nicely), but it seems that this is sometimes not possible, as $dx$ plays a vital role in the integral, i.e. we actually use it in the calculation. One example of this is when we introduce a new variable $t$ (note that here $F'(x) = f(x)$).

$$
\int f(g(x))g'(x) \; dx = \int f(t) \; dt,
$$

where

$$
t = g(x), \quad dt = g'(x)\,dx, \quad F'(t) = f(t).
$$

We manipulate $dx$ as well, and hence I conclude that it can’t be thought of only as a trivial notational convention. Taking this into account, I am especially surprised that one of the answers in this post claimed that $dx$ is just a trivial notational convention in all cases. By the above example, I don’t see how that can be.

To sum up:

  1. What exactly is $dx$? How can it be that $dx = \Delta x$ where $\Delta x \to 0$, but $dx \neq 0$? Is multiplication by infinitesimals even defined in standard analysis?

  2. How can I define the integral in such a way, that $dx$ is trivial and I don’t need to calculate with it?

  3. I think it would be neater to define the indefinite integral as the inverse function of derivation, not differentiation. Can I do that?

Thank you for all your answers in advance.

As many books say, we define the differential as $df = f'(x)\,dx$. On its own, this equation has no meaning; it is introduced just to enable some algebraic manipulations on integrals and differential equations. But let's look more thoroughly at what a differential is.

In my view, a differential is not an equation but a symbol that shows how the behaviour of $df$ tends to match the behaviour of $f'(x)\,dx$. In other words, for me a differential is the following statement:
$$df \to f'(x)\,dx \text{ when } dx \to 0$$
Infinitesimals were a simpler and more intuitive way to attack these problems, but they lack the mathematical formalism that we need to keep our math consistent. As far as I know there is a mathematical approach to infinitesimals called non-standard analysis, but I don't have a clue how this approach defines the differential.

Let's now look at how differentials are used in integration. When we make the substitution $t = g(x)$, we get $\int f(g(x))g'(x)\,dx = \int f(t)\,dt$ by writing $dt = g'(x)\,dx$. Again, in my view it is in some sense wrong to say that $dt = g'(x)\,dx$, and it would be correct if we had written $dt \to g'(x)\,dx$ when $dx \to 0$. But there is a theorem that proves that the substitution does not change the final integral but just transforms it into another, equivalent integral. So we can "accept" that $dt = g'(x)\,dx$ just as a symbol, without the meaning of a real equation.
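
For what it's worth, here is a small symbolic check (a sketch using sympy; the concrete $f$ and $g$ are just example choices) that the bookkeeping $dt = g'(x)\,dx$ produces a correct antiderivative:

```python
import sympy as sp

x, t = sp.symbols('x t')
g = x ** 2                     # the substitution t = g(x)
f = sp.cos(t)                  # the integrand in the new variable

# Left-hand side: integrate f(g(x)) * g'(x) with respect to x directly.
lhs = sp.integrate(f.subs(t, g) * sp.diff(g, x), x)

# Right-hand side: integrate f(t) with respect to t, then substitute t = g(x) back.
rhs = sp.integrate(f, t).subs(t, g)

print(lhs, rhs, sp.simplify(lhs - rhs))   # sin(x**2)  sin(x**2)  0
```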

So now let’s answer your questions:

  1. When we refer to $dx$, we treat it as a quantity which tends to zero; in other words, $dx \to 0$. As far as the multiplication is concerned, the way I handle differentials is not as standard algebraic quantities but as symbols that satisfy some "informal" equations (like $df = f'(x)\,dx$) in order to be used in problems that need a notion of differential (like integration and differential equations).
  2. You cannot take the $dx$ symbol away from the integral sign (sometimes people omit it, but keep in mind that it should be there). If you look at the definition of the definite integral, you can see that $dx$ plays a vital role. And, of course, from the definition of the definite integral we can generalize and define the indefinite integral, where the same principles apply.
  3. As far as I know, differentiation and derivation are the same thing.

To answer your first question, the infinitesimal is not defined in standard analysis.

After a little practice with integrals, the $dx$ will feel like an arcane bit of notation that serves no real purpose. When you get into differential equations, though, you will have to think about this differential operator again, and about whether the algebra that you do with it is, in fact, "legal."

You can define the integral as anti-differentiation, but then what are the implications of the anti-derivative?

I think it is easier to think of $dx$ as "small": small, but not actually infinitesimal. The integral is then the sum of finitely many small changes, rather than infinitely many infinitesimal changes.

In standard analysis, we start with the definite integral. The integral is defined as the area under the curve between points with $x$ in some interval $[a,b]$:

$\int_a^b f(x)\, dx$

We can partition the interval: $a = x_0 < x_1 < x_2 < \cdots < x_n = b$

And make a whole bunch of rectangles, each with a base of $(x_{i+1} - x_i)$ and height $f(x_i^*)$, where $x_i^*$ is some point in $[x_i, x_{i+1}]$

And then we sum the areas of these rectangles: $\sum\limits_{i=0}^{n-1} f(x_i^*)(x_{i+1} - x_i)$

How you choose $x_i^*$ will change the value of this sum. The true area must be between the upper bound and the lower bound of this sum.

But, if the partition is allowed to be sufficiently fine, the upper bound and the lower bound approach the same value.

And that defines the integral.
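
Here is a small Python illustration of that definition (a sketch, with $f(x)=x^2$ on $[0,1]$ as an example): the lower and upper sums over uniform partitions squeeze together onto $1/3$.

```python
# Sketch: lower and upper Darboux sums for f(x) = x**2 on [0, 1].
# f is increasing there, so on each cell the inf sits at the left endpoint
# and the sup at the right endpoint.
def darboux_sums(n):
    f = lambda x: x * x
    xs = [i / n for i in range(n + 1)]
    lower = sum(f(xs[i]) * (xs[i + 1] - xs[i]) for i in range(n))
    upper = sum(f(xs[i + 1]) * (xs[i + 1] - xs[i]) for i in range(n))
    return lower, upper

for n in [10, 100, 1000, 10000]:
    lo, up = darboux_sums(n)
    print(n, round(lo, 6), round(up, 6), round(up - lo, 6))
# Both bounds approach 1/3 and the gap (exactly 1/n here) goes to 0,
# so the common limiting value defines the integral.
```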

The next step is to prove the fundamental theorem of calculus. The area under the curve equals the anti-derivative.

If $F(x) = \int_a^x f(t)\, dt$

Then $F(x+h) - F(x) = \int_a^{x+h} f(t)\, dt - \int_a^{x} f(t)\, dt = \int_{x}^{x+h} f(t)\, dt$

If $f(t)$ is continuous, then there is a $c\in(x,x+h)$ such that $f(c)$ equals the average value of $f(t)$ on $[x, x+h]$, so

$F(x+h) - F(x) = hf(c)$

As $h$ approaches $0$, $c$ gets squeezed to $x$, and

$\lim_{h\to 0} \frac{F(x+h) - F(x)}{h} = F'(x) = f(x)$

This is just a sketch of the theory/proof, but it might feel more natural to you than non-standard analysis.


Math Genius: integration of $\int_0^{\frac{\pi}{2}} \cos^{n}(t)\,dt$


$$\int_0^{\frac{\pi}{2}}\cos^{n}(t)\,dt = \,?$$
To solve this problem, I was thinking that I would let $\cos(t)= \frac{e^{it} + e^{-it}}{2}$; then the integral will have the form:
$$\int_0^{\frac{\pi}{2}} \left(\frac{e^{it} + e^{-it}}{2} \right)^{n}dt=\frac{1}{2^{n}}\,\sum_{k=0}^{n}\binom{n}{k}\int_0^{\frac{\pi}{2}}e^{i(n-2k)t}\,dt$$

From this point I was stuck, so would anyone please help me work through this problem?

$$\cos^n(t)=\sin^n\left(\frac{\pi}{2}-t\right)$$

Let:

$\frac{\pi}{2}-t=x$,

$t=0 \implies x=\frac{\pi}{2}$

$t=\frac{\pi}{2} \implies x=0$

$\mathrm{d}t=-\mathrm{d}x$

$$I=\int^{\frac{\pi}{2}}_0 \cos^n(t)~\mathrm{d}t=\int^{\frac{\pi}{2}}_0 \sin^n(x)\,\mathrm{d}x=\int^{\frac{\pi}{2}}_0 \sin (x)(1-\cos^2x)^{\frac{n-1}{2}}\,\mathrm{d}x$$

(The swapped limits and the minus sign from $\mathrm{d}t=-\mathrm{d}x$ cancel, so no sign remains.)

Now expand $(1-\cos^2x)^{\frac{n-1}{2}}$ (assuming $n$ is odd, so that the exponent is a non-negative integer).

You get a sum of terms of the form $a\,u'u^k$, where $u'=\sin(x)$ and $u^k=\cos^k(x)$, each of which is easy to integrate.

Use integration by parts on
$$I(n)=\int_0^{\frac{\pi}{2}}\cos^{n}(t)\,dt=\int_0^{\frac{\pi}{2}}\cos(t)\cos^{n-1}(t)\,dt$$
then use the Pythagorean identity on $\sin^2(t)$; you should end up with
$$I(n)=\frac{n-1}{n}\int_0^{\frac{\pi}{2}}\cos^{n-2}(t)\,dt=\frac{n-1}{n}I(n-2)$$
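
A quick numerical check of this reduction formula (a sketch using scipy.integrate.quad; the closed-form Wallis formulas in the final comment are the standard consequence, included for reference):

```python
import numpy as np
from math import pi
from scipy.integrate import quad

def I(n):
    """Numerical value of the integral of cos(t)**n over [0, pi/2]."""
    return quad(lambda t: np.cos(t) ** n, 0, pi / 2)[0]

# Check the reduction formula I(n) = (n-1)/n * I(n-2).
for n in range(2, 8):
    print(n, round(I(n), 8), round((n - 1) / n * I(n - 2), 8))

# Iterating the recursion down to I(0) = pi/2 or I(1) = 1 gives the standard
# Wallis formulas: I(n) = (n-1)!!/n!! * pi/2 for even n, and (n-1)!!/n!! for odd n.
```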

Well, my teacher gave us the formula below without proof. If that alone satisfies you, well and good; if not, I'm going to have to ask fellow members to help. [The image of the formula is not available.]


Server Bug Fix: Why does L’Hospital’s rule require the limit to exist? About the proof.


Proof of L’Hospital’s rule (only special case this time):

Let's assume that $$ f(a)=g(a)=0. $$

Using the MVT, we get $$ \frac{f(x)}{g(x)}=\frac{f(x)-f(a)}{g(x)-g(a)}=\frac{f'(\zeta)}{g'(\zeta)}, \quad\text{where } \zeta \in (a, x) \text{ or } \zeta \in (x, a), $$
and thus
$$
\lim_{x \rightarrow a} \frac{f(x)}{g(x)}=\lim_{x \rightarrow a} \frac{f'(\zeta)}{g'(\zeta)}=\lim_{\zeta \rightarrow a} \frac{f'(\zeta)}{g'(\zeta)}=\lim_{x \rightarrow a} \frac{f'(x)}{g'(x)}.
$$

In the literature, it's always assumed that the limit $$ \lim_{x \rightarrow a} \frac{f'(x)}{g'(x)} $$
exists. Why is that, and how is it shown in the proof? According to the proof above, shouldn't the limit of $f/g$ always be equal to the limit of $f'/g'$? If $\lim f'/g'$ is not defined, then $\lim f/g$ shouldn't be defined either.

Obviously this isn’t true, because there are examples of functions f and g s.t. lim f/g exists even though lim f’/g’ does not exist.

L'Hôpital's rule is an "if" statement, not an "if and only if" statement. It tells us that if

$\displaystyle \lim_{x \rightarrow a} \frac{f'(x)}{g'(x)}$

exists then

$\displaystyle \lim_{x \rightarrow a} \frac{f(x)}{g(x)} = \lim_{x \rightarrow a} \frac{f'(x)}{g'(x)}$

but it says nothing about the case when $\displaystyle \lim_{x \rightarrow a} \frac{f'(x)}{g'(x)}$ does not exist. In that case $\displaystyle \lim_{x \rightarrow a} \frac{f(x)}{g(x)}$ may or may not exist; as you say, there are examples of each possibility.

If $\displaystyle \lim_{x \rightarrow a} \frac{f'(x)}{g'(x)}$ does not exist then, although we may know that

$\displaystyle \frac{f(x)}{g(x)} = \frac{f'(\zeta_1)}{g'(\zeta_2)}$

for some $\zeta_1, \zeta_2 \in (x,a)$, the right hand side does not tend to a specific value as $x \rightarrow a$, so we cannot conclude anything about the behaviour of the left hand side.

L’Hospital’s rule “requires” the limit to exist, because if it does not, you are stuck and the rule is useless. In particular, the functions $f,g$ might be non-differentiable.

I think the confusion comes from the particular version of L'Hospital's rule one is using. Here is a version of L'Hospital's rule that explicitly requires the existence of the limit of the ratio of the derivatives. I will also provide an example where this fails. I hope this helps.

Theorem: Suppose $f,g$ are differentiable functions in an interval $(a,b)$. If

  1. $\lim_{x\rightarrow a+}f(x)=0=\lim_{x\rightarrow a+}g(x)$,
  2. $g'(x)\neq0$ in $(a,b)$, and
  3. $\lim_{x\rightarrow a+}\frac{f'(x)}{g'(x)}$ exists and has value $L$ (here $L$ is either a real number, $\infty$ or $-\infty$)

then $\lim_{x\rightarrow a+}\frac{f(x)}{g(x)}$ exists and equals $L$.

Notes:

  • A similar result holds for $x\rightarrow b-$.
  • Similar versions exist when, in (1), $0$ is replaced by $\pm\infty$.

Here is a short proof for the case where $a$ and $L$ are finite. By condition (1) we can extend $f$ and $g$ to $[a,b)$ by setting $f(a)=0=g(a)$. Given $\varepsilon>0$ there is $x_\varepsilon\in (a,b)$ such that
$$
\Big|\frac{f'(x)}{g'(x)}-L\Big|<\varepsilon, \qquad\text{for all}\quad a<x\leq x_\varepsilon
$$

By the mean value theorem (the generalized version), for each $a<x<x_\varepsilon$,
$$
\frac{f(x)}{g(x)}=\frac{f(x)-f(a)}{g(x)-g(a)}=\frac{f'(c_x)}{g'(c_x)}\qquad\text{for some}\quad a<c_x<x.
$$

Hence, for all $a<x<x_\varepsilon$,
$$
\Big|\frac{f(x)}{g(x)} -L\Big|=\Big|\frac{f'(c_x)}{g'(c_x)} -L\Big|<\varepsilon
$$

The case where $L$ is not finite is handled similarly. When $a=-\infty$ ($b=\infty$), a slight modification of the proof above works.


Observations:

  • Here, the value of $g'(a)$ (in the case where $f$ and $g$ can be extended continuously to $[a,b)$ and the right derivative $g'(a+)$ exists) is irrelevant. It may not even be defined.

  • L'Hospital's theorem (or the version I am presenting) is a double-edged sword. First, it is important to remember the assumptions (1), (2) and (3).

  • It is condition (3) that is sort of clairvoyant. In most situations we have (1) and (2) but no idea about (3). So we may be tempted to assume that it holds and proceed to apply the theorem.

  • By doing so, we replace the original problem, that of $f/g$, by that of $f'/g'$. Often we find that the pair $f',g'$ satisfies conditions (1) and (2) of the theorem, so we apply the theorem to $f'/g'$, and so on. If this ends at some point, fantastic! We have our limit. But there is no guarantee or way of knowing that this will be the case.


Some examples

  1. Here L'Hospital's rule is not applicable, as (3) does not hold:
    $$\lim_{x\rightarrow\infty}\frac{x-\sin x}{x+\sin x}=\lim_{x\rightarrow\infty}\frac{1-\frac{\sin x}{x}}{1+\frac{\sin x}{x}}=1$$
    however
    $$\lim_{x\rightarrow\infty}\frac{\Big(x-\sin x\Big)'}{\Big(x+\sin x\Big)'}=\lim_{x\rightarrow\infty}\frac{1-\cos x}{1+\cos x}$$
    does not exist, as one can see by looking at the sequences $y_n=2n\pi$ and $z_n=\frac{\pi}{2}+2\pi n$.

  2. Here L'Hospital's rule has us going in an infinite loop (it gives us back the problem we started with):
    $$\lim_{x\rightarrow\infty}\frac{x}{\sqrt{1+x^2}}=\lim_{x\rightarrow\infty}\frac{1}{\sqrt{1+\frac{1}{x^2}}}=1$$
    but
    $$\lim_{x\rightarrow\infty}\frac{\big(x\big)'}{\big(\sqrt{1+x^2}\big)'}=\lim_{x\rightarrow\infty}\frac{\sqrt{1+x^2}}{x}=\lim_{x\rightarrow\infty}\frac{1}{\frac{x}{\sqrt{1+x^2}}}$$
    So in a way you go back to your original problem.

  3. Here assumption (3) of L'Hospital's rule does not hold, but one may not see that:
    $$ \lim_{x\rightarrow0}\frac{x^2\sin(x^{-1})}{\sin x}=\lim_{x\rightarrow0}\frac{x\sin(x^{-1})}{\frac{\sin x}{x}}=0$$
    since $|x\sin(x^{-1})|\leq|x|\xrightarrow{x\rightarrow0}0$ and $\lim_{x\rightarrow0}\frac{\sin x}{x}=1$. However

$$
\lim_{x\rightarrow0}\frac{\big(x^2\sin(x^{-1})\big)'}{(\sin x)'}=
\lim_{x\rightarrow0}\frac{2x\sin(x^{-1}) -\cos(x^{-1})}{\cos x}
$$

But $\lim_{x\rightarrow0}\cos(x^{-1})$ does not exist, as one can check by looking at the sequences $y_n=\frac{1}{2n\pi}$ and $x_n=\frac{1}{\frac{\pi}{2}+2\pi n}$.
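
A numerical look at Example 1 (a small Python sketch, with sample points chosen for illustration): the original ratio settles near $1$, while the ratio of the derivatives takes different values along different sequences tending to infinity.

```python
import math

ratio = lambda x: (x - math.sin(x)) / (x + math.sin(x))       # original quotient
dratio = lambda x: (1 - math.cos(x)) / (1 + math.cos(x))      # quotient of derivatives

# The original ratio tends to 1 ...
print([round(ratio(x), 4) for x in (10.0, 100.0, 1000.0, 10000.0)])

# ... but the derivative ratio is 0 along x = 2*n*pi and 1 along x = pi/2 + 2*n*pi,
# so it has no limit at infinity: condition (3) fails and the rule does not apply.
print([round(dratio(2 * n * math.pi), 4) for n in (1, 5, 50)])
print([round(dratio(math.pi / 2 + 2 * n * math.pi), 4) for n in (1, 5, 50)])
```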


Server Bug Fix: Can White defend and possibly win?


[FEN "1nkr2nr/pppp2pp/2b2p2/3q4/3P4/1NQ5/P1P2PPP/R1B1R1K1 w KQkq - 0 1"]

Can White possibly win in this position and avoid a queen mate on g2?

Avoiding the queen mate on g2 is easy:

  • f3, Qg3, Qh3+ are decent options
  • Re4, Kf1 also prevent immediate mate but are not good

Winning this position is objectively speaking impossible. Black is up a piece and white does not have sufficient compensation for it. Most players would resign if playing somebody of 1800 strength or even less.

If black is a beginner and/or short on time white could try to continue the game. The usual plan in lost positions with material down is to keep the position as complicated as possible and to avoid trades. You want to activate your pieces and start to attack something, specifically the enemy king.

In this regard one possible plan could be:

  1. Qg3 (protects g2, attacks g7 and eyes c7), also makes space for the c pawn to move (clearance),
  2. Bf4 develops the bishop a tempo by double-attacking c7. Also makes space for a rook to move to c1 (clearance)
  3. Rac1 and 4. c4 or even immediately 3. c4 with the idea that after Qxc4 you play Rac1 and have a half-open file with attacking chances on the king

Once you get in c4 you can follow up with d5 which limits the range of the only active black minor piece.

Of course black also makes moves in-between and could spoil this plan.

The following example continuation is not best play, but it is about the best White could hope for over the next couple of moves:

[FEN "1nkr2nr/ppp3pp/2b2p2/3q4/3P4/1NQ5/P1P2PPP/R1B1R1K1 w - - 0 1"]

1. Qg3 g6 2. Bf4 Na6 3. Rac1 Kb8 4. c4 Qf7
5. d5 Ba4 6. Nd4

The final position is still lost for white, but at least white got the more active pieces and more space than black. There could be some attacking potential along the b file as well, now that the bishop left c6.

You can defend with Qg3 and Qh3 but the position is clearly disadvantageous for White, as they are one piece down

Anything is possible.

It depends on how good the opponent is and how bad White is.

White is down a pawn and a piece and should lose.


Math Genius: Variant of Picard-Lindelöf theorem


Question

Let $I=[0,a]$ and define the norm $\|f\|_{\lambda}=\sup_I |e^{-\lambda x}f(x)|$ for $f\in C(I)$. Let $\phi:\;\mathbb{R}^2\to\mathbb{R}$ satisfy $|\phi(x,u)-\phi(y,v)|\leq\rho |u-v|$ for all $x,y,u,v\in\mathbb{R}$ and some $\rho >0$. Define $\tau:\;f\mapsto \int_0^x \phi(t,f(t))\;dt$.

I need to find a $\lambda$ such that $\tau$ is a contraction under the norm $\|\cdot\|_{\lambda}$.

Thoughts

I am not too sure how to do this; my first line of thought was:

$$\begin{aligned}\|\tau (f)-\tau (g)\|_{\lambda} &=\sup_I\Big| e^{-\lambda x} \int_0^x \phi(t,f(t))-\phi(t,g(t))\;dt\Big| \\ &\leq \sup_I e^{-\lambda x} \int_0^x |\phi(t,f(t))-\phi(t,g(t))|\;dt\\ &\leq\sup_I \rho\, e^{-\lambda x} \int_0^x |f(t)-g(t)|\;dt \end{aligned}$$

But I can't see how to get $\cdots \leq \alpha\sup_I |e^{-\lambda x}(f(x)-g(x))|$ for some $\alpha<1$ and some $\lambda$ from this. Any help would be appreciated.

In general you can prove that if $w : I := [a,b] \longmapsto \mathbb{R}$ is strictly positive and continuous, it defines a norm equivalent to $\|f\|_{\infty}$ on $C^{0}(I)$, defined as $\|f\|_{w} := \sup\limits_{t \in I} \left\lbrace w(t) |f(t)|\right\rbrace$.

Defining $w(t) := e^{-2L |t-t_{0}|}$, we get the norm $\|f\|_{w} := \sup\limits_{t \in I} \left\lbrace e^{-2L |t-t_{0}|} |f(t)|\right\rbrace$.

Let's prove that with this choice, $T$ is a contraction on $I = [t_{0}-\delta,t_{0}+\delta]$:

$$e^{-2L|t-t_{0}|} \Big|\int_{t_{0}}^{t}[f(s,v(s))-f(s,u(s))]\,ds\Big| \leq e^{-2L|t-t_{0}|} \Big|\int_{t_{0}}^{t}|f(s,v(s))-f(s,u(s))|\,ds\Big| \leq$$

$$e^{-2L|t-t_{0}|} \Big|\int_{t_{0}}^{t} L |v(s)-u(s)|\,ds\Big| \leq L\,e^{-2L|t-t_{0}|} \Big|\int_{t_{0}}^{t} \|v-u\|_{w}\,e^{2L|s-t_{0}|} \,ds\Big|$$

$$ = \frac{\|v-u\|_{w}}{2}\left(1-e^{-2L|t-t_{0}|}\right) \leq \frac{1}{2}\|v-u\|_{w}$$

So, taking the supremum for $t in I$ we get exactly

$$\|T(v)-T(u)\|_{w} \leq \frac{1}{2}\|v-u\|_{w}$$

In other words, $T$ is a contraction. $\hspace{0.2cm} \Box$

In your case $I$ becomes $[0,a]$, $T$ becomes $\tau$, $L$ becomes $\rho$, and $\lambda$ is the constant chosen in the definition of the weighted norm (so, with $t_0 = 0$, the choice $\lambda = 2\rho$ works).
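
To see the contraction numerically, here is a sketch with an example $\phi(t,u) = \rho\sin u + t$ (which has Lipschitz constant $\rho$ in $u$) and $\lambda = 2\rho$ as above; the trapezoid-rule discretization is just for illustration. Successive distances between Picard iterates shrink in the weighted norm by a factor of about $1/2$ or better.

```python
import numpy as np

a, rho = 2.0, 3.0                  # interval [0, a] and Lipschitz constant of phi in u
lam = 2 * rho                      # weight parameter, mirroring the answer above
xs = np.linspace(0.0, a, 4001)
w = np.exp(-lam * xs)

def phi(t, u):
    return rho * np.sin(u) + t     # |phi(t, u) - phi(t, v)| <= rho * |u - v|

def tau(f):
    """tau(f)(x) = integral_0^x phi(t, f(t)) dt, via a cumulative trapezoid rule."""
    vals = phi(xs, f)
    dx = xs[1] - xs[0]
    return np.concatenate(([0.0], np.cumsum((vals[1:] + vals[:-1]) / 2 * dx)))

def wnorm(f):
    return np.max(np.abs(w * f))   # the weighted sup-norm ||f||_lambda

f, g = np.zeros_like(xs), np.cos(xs)   # two different starting functions
for k in range(6):
    f_new, g_new = tau(f), tau(g)
    print(k, round(wnorm(f_new - g_new) / wnorm(f - g), 4))   # ratios stay around 0.5 or less
    f, g = f_new, g_new
```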

Well, the supremum splits under multiplication. Therefore, if $f$ and $g$ are bounded on $I$, you can pull the supremum of the difference between $f$ and $g$ out of the integral, because that supremum is a constant on $I$; after doing that, you can put the two suprema together, bound the remaining integral by the length of the interval, and choose the $\delta$ you need.

Maybe not the cleanest argument, but I do not see anything wrong with it. Still, I am a person who can make mistakes, so if I made some, please feel free to make all the corrections needed…
