Math Genius: What are dual vectors and covectors?

Original Source Link

I tried to learn about dual vectors online but could not quite understand them. I know they can be understood using a change of basis, so below is an example of a change of basis; kindly help me with it.
Let $V$ be $\Bbb{R}^3$ with the basis
$\begin{Bmatrix}
\begin{bmatrix}1\\1\\0\end{bmatrix},&
\begin{bmatrix}0\\2\\0\end{bmatrix},&
\begin{bmatrix}1\\0\\1\end{bmatrix}
\end{Bmatrix}$ and let $U$ be another copy of $\Bbb{R}^3$ with the basis
$\begin{Bmatrix}
\begin{bmatrix}5\\0\\3\end{bmatrix},&
\begin{bmatrix}2\\3\\4\end{bmatrix},&
\begin{bmatrix}1\\6\\2\end{bmatrix}
\end{Bmatrix}$. The column $\begin{bmatrix}3\\5\\7\end{bmatrix}$ gives the coordinates of a vector with respect to $U$'s basis, and we want that vector's coefficients with respect to $V$'s basis. The approach would be:

$$c_1\left[\begin{matrix}1\\1\\0\end{matrix}\right]+
c_2\left[\begin{matrix}0\\2\\0\end{matrix}\right]+
c_3\left[\begin{matrix}1\\0\\1\end{matrix}\right]
=
3\left[\begin{matrix}5\\0\\3\end{matrix}\right]+
5\left[\begin{matrix}2\\3\\4\end{matrix}\right]+
7\left[\begin{matrix}1\\6\\2\end{matrix}\right]
$$

where the $c_i$ are the coefficients with respect to $V$'s basis.

$$\left[\begin{matrix}1 & 0 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 1\end{matrix}\right]
\left[\begin{matrix}c_1\\c_2\\c_3\end{matrix}\right]=
\left[\begin{matrix}5 & 2 & 1\\ 0 & 3 & 6 \\ 3 & 4 & 2\end{matrix}\right]
\left[\begin{matrix}3\\5\\7\end{matrix}\right]
$$

$$
\left[\begin{matrix}c_1\\c_2\\c_3\end{matrix}\right]=
\left[\begin{matrix}1 & 0 & 1 \\ 1 & 2 & 0 \\ 0 & 0 & 1\end{matrix}\right]^{-1}\left[\begin{matrix}5 & 2 & 1\\ 0 & 3 & 6 \\ 3 & 4 & 2\end{matrix}\right]
\left[\begin{matrix}3\\5\\7\end{matrix}\right]= \left[\begin{matrix}-11\\34\\43\end{matrix}\right]
$$
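
To double-check the arithmetic, here is a minimal NumPy sketch of the same computation (the matrices `A` and `B` are simply the two basis matrices above, with the basis vectors as columns):

```python
import numpy as np

# Columns of A are V's basis vectors; columns of B are U's basis vectors.
A = np.array([[1, 0, 1],
              [1, 2, 0],
              [0, 0, 1]], dtype=float)
B = np.array([[5, 2, 1],
              [0, 3, 6],
              [3, 4, 2]], dtype=float)

x_U = np.array([3, 5, 7], dtype=float)   # coordinates in U's basis

# Solve A c = B x_U instead of forming the inverse explicitly.
c = np.linalg.solve(A, B @ x_U)
print(c)   # [-11.  34.  43.]
```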

Now, from the above example, I want to know:

  1. What is a dual vector?
  2. What is a covector?
  3. What is the dual space?

Let me know if I have understood it completely wrong. Thanks

Given a vector space $V$ over a field $K$, the dual space of $V$, denoted $V^\ast$, is the set of all linear maps $\varphi:V\to K$. For example, consider $V=\mathbb{R}\times\mathbb{R}$ with addition $+:V\times V\to V$ defined as:
$$\begin{bmatrix}a \\ b\end{bmatrix}+\begin{bmatrix}c \\ d\end{bmatrix}=\begin{bmatrix}a+c \\ b+d\end{bmatrix}$$ and scalar multiplication $\cdot:K\times V\to V$ defined as
$$k\cdot\begin{bmatrix}a \\ b\end{bmatrix}=\begin{bmatrix}k\cdot a \\ k\cdot b\end{bmatrix}$$
The $+$ and $\cdot$ inside the brackets represent field addition and multiplication, but I'll just use the same symbols because I'm lazy. Define a linear map $\varphi:V\to K$ as $\varphi\!\left(\begin{bmatrix}a\\b\end{bmatrix}\right)=a+b$. One can show that this function adds linearly:
$$\varphi\!\left(\begin{bmatrix}a\\b\end{bmatrix}\right)+\varphi\!\left(\begin{bmatrix}c\\d\end{bmatrix}\right)=a+b+c+d=a+c+b+d=\varphi\!\left(\begin{bmatrix}a+c\\b+d\end{bmatrix}\right)$$
and that it scales linearly:
$$k\cdot\varphi\!\left(\begin{bmatrix}a\\b\end{bmatrix}\right)=k\cdot\left(a+b\right)=k\cdot a+k\cdot b=\varphi\!\left(\begin{bmatrix}k\cdot a\\k\cdot b\end{bmatrix}\right).$$ Since it is linear, $\varphi$ is said to be a covector. One can show that it is equivalent to the "row vector" $\begin{bmatrix}1 & 1\end{bmatrix}$ acting by matrix multiplication: $\begin{bmatrix}1 & 1\end{bmatrix}\begin{bmatrix}a\\b\end{bmatrix}=a+b.$ When this collection of maps is equipped with pointwise addition and scalar multiplication, the set itself also becomes a vector space over $K$, called the dual space, whose elements are the dual vectors; "covector" and "dual vector" are two names for the same thing.
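
As an illustrative sketch (not part of the original answer), the covector above is just the row vector $[1,\,1]$, and its linearity can be checked numerically:

```python
import numpy as np

phi = np.array([[1.0, 1.0]])          # the covector as a 1x2 row vector
v = np.array([[2.0], [3.0]])          # column vectors in V
w = np.array([[-1.0], [4.0]])

print(phi @ v)                        # [[5.]]  -> a + b
# Additivity and homogeneity of the linear map:
print(np.allclose(phi @ (v + w), phi @ v + phi @ w))   # True
print(np.allclose(phi @ (7 * v), 7 * (phi @ v)))       # True
```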

Tagged : / /

Math Genius: Find normal vector of a 3d vector

Original Source Link

I need to find a normal vector to the direction vector of the following line (given in vector form), because I need to find a plane that is orthogonal to this line:

$(x,y,z)=(1,0,0)+k(1,2,3)$

I understand that two perpendicular vectors have a dot product of $0$; however, I am not sure what to do in 3D space.

In 2D space, if I have an equation $(x,y) = (p_1,p_2) + k(a,b)$, I know that a normal vector would be $(-b,a)$.

Furthermore, if I have an equation of a plane, $2x+3y+4z=0$, in which $(2,3,4)$ is the normal vector, how can I obtain a direction vector for this plane?

Thanks!

In $3$ dimensions, there are infinitely many vectors perpendicular to a given vector.

As you said, $(x,y,z)\perp(1,2,3)\iff x+2y+3z=0$.

One solution is $(x,y,z)=(1,1,-1)$ by inspection.

One way to find a vector perpendicular to a given vector in $3$ dimensions is to take the cross-product with another (non-collinear) vector.

For example, $(1,0,0)\times(1,2,3)=(0,-3,2)$ is perpendicular to both $(1,0,0)$ and $(1,2,3)$, as you can verify by showing their dot products are $0$.

Now that we have two vectors perpendicular to $(1,2,3)$, any linear combination of those two vectors $\alpha(1,1,-1)+\beta(0,-3,2)$ with $\alpha,\beta\in\mathbb R$ will also be perpendicular to $(1,2,3)$.
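
Here is a quick NumPy check of the cross-product approach described above (the vector $(1,0,0)$ is just an arbitrary non-collinear choice):

```python
import numpy as np

d = np.array([1.0, 2.0, 3.0])        # direction vector of the line
a = np.array([1.0, 0.0, 0.0])        # any vector not collinear with d

n1 = np.cross(a, d)                  # (0, -3, 2), perpendicular to both a and d
n2 = np.array([1.0, 1.0, -1.0])      # the solution found by inspection

print(n1, np.dot(n1, d), np.dot(n2, d))   # [ 0. -3.  2.] 0.0 0.0
# Any combination alpha*n2 + beta*n1 is also perpendicular to d:
print(np.dot(2 * n2 - 5 * n1, d))         # 0.0
```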

The condition you need is that the dot product is $0$, as you said. Therefore, for any point $(x,y,z)$, you have
$$
(x,y,z)\cdot (1,2,3)=0\Leftrightarrow x+2y+3z=0.
$$

Naturally, there are infinitely many solutions to this equation (the solution describes a plane). One, for example, is the vector $(2,-1,0)$.

Tagged : / /

Math Genius: In a topological vector space $E$ a set different from $\emptyset$ and from $E$ cannot be both open and closed.

Original Source Link

I want to prove that in a topological vector space $E$, a set different from $\emptyset$ and from $E$ cannot be both open and closed; in other words, if a subset $M\subset E$ is open and closed, then $M=\emptyset$ or $M=E$.

I can prove this when $M \subset E$ is a linear subspace of $E$ (in that case, $M=E$). But in our case, $M$ is a general set (closed and open).

Any topological vector space $X$ is path connected: given $x$ and $y$, the map $f:[0,1] \to X$ defined by $f(t)=tx+(1-t)y$ is a path from $y$ to $x$. Any path-connected space is connected. If $E$ is open and closed (and neither empty nor all of $X$), then $X=E \cup E^{c}$ gives a disconnection of $X$.

Direct proof without using path-connectedness:

Suppose such a set $E$ exists. Pick $x \in E$ and $y \in E^{c}$. Let $f(t)=tx+(1-t)y$. Consider $c=\sup \{t: f(t) \in E^{c}\}$. There exist $t_n$ increasing to $c$ such that $f(t_n) \in E^{c}$ for all $n$. Since $f$ is continuous and $E^{c}$ is closed, we get $f(c) \in E^{c}$. Similarly, taking a sequence $t_n$ decreasing to $c$ (so that $f(t_n) \in E$ by the definition of $c$) and using that $E$ is closed, we get $f(c) \in E$. [Note that $c \neq 1$: otherwise the same closedness argument would give $f(1) \in E^{c}$, contradicting $f(1)=x \in E$.] Thus $f(c) \in E \cap E^{c}=\emptyset$, a contradiction.

Tagged : / /

Math Genius: Given a vector and angle find new vector

Original Source Link

Given a vector and an angle, how can I find a vector such that the angle between the two vectors is exactly the given angle?

Edit:

We are in $n$-dimensional space, and the new vector should have a fixed, given length.

Assuming you are in $\mathbb R^2$, take the rotation matrix
$$\begin{bmatrix}
\cos \theta & -\sin \theta \\
\sin \theta & \cos \theta \\
\end{bmatrix}$$
where $\theta$ is your angle, and multiply it with your vector.

For a vector $v\in\mathbb R^n$ with $n>2$, you should first decide in which plane you want to apply the rotation. Find an orthonormal basis $(b_i)_{1\le i\le n}$ of $\mathbb R^n$ such that $b_1$ and $b_2$ span your rotation plane and $v\in\operatorname{span}(b_1,b_2)$. Now you can translate the rotation from above into that plane. Actually, you don't have to care about $b_i$ for $i>2$, since nothing changes there.
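
As a minimal sketch (not part of the original answer) of the $\mathbb R^2$ rotation-matrix approach, with an arbitrary example angle and vector:

```python
import numpy as np

theta = np.pi / 3                     # the given angle
v = np.array([2.0, 1.0])              # the given vector

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
w = R @ v                             # rotated vector, same length as v

# The angle between v and w is theta:
cos_angle = np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))
print(np.isclose(np.arccos(cos_angle), theta))   # True
```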

Find another vector that is not linearly dependent on the first (easy if you have more than one dimension), use it to construct a vector orthogonal to the first, and finally combine the first vector and the orthogonal vector to construct your final vector.

Let's say your original vector is $u$. Select one of the standard basis vectors $e_j$ (you may have to select carefully if $u$ is parallel to some of the axes). Now $v = e_j - (e_j\cdot u)\,u/u^2$ is perpendicular to $u$:

$v\cdot u = e_j\cdot u - (e_j\cdot u)\,u^2/u^2 = 0$

Rescale $v$ so that it has the same length as $u$ (replace $v$ by $|u|\,v/|v|$); this matters, because otherwise the angle below will not come out exactly right. Then we just combine $u$ and $v$:

$w = u\cos(\alpha) + v\sin(\alpha)$

This $w$ has the same length as $u$ and makes angle $\alpha$ with it; rescale $w$ at the end if you need a different length.

In the special case of two dimensions, one can immediately select $v = (-u_y, u_x)$ (which already has the same length as $u$), and the construction reduces to applying the rotation matrix

$$\begin{bmatrix}
\cos \alpha & -\sin \alpha \\
\sin \alpha & \cos \alpha \\
\end{bmatrix}$$
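
Here is a minimal NumPy sketch of the general $n$-dimensional construction above, including the rescaling of $v$ and of the final vector to a prescribed length; the helper name `vector_at_angle` and the example inputs are my own:

```python
import numpy as np

def vector_at_angle(u, alpha, length):
    """Return a vector making angle `alpha` with `u`, with the given length."""
    # Pick a standard basis vector e_j that is not parallel to u.
    j = np.argmin(np.abs(u))
    e = np.zeros_like(u, dtype=float)
    e[j] = 1.0
    # Project out the component along u, then rescale so |v| = |u|.
    v = e - (e @ u) * u / (u @ u)
    v *= np.linalg.norm(u) / np.linalg.norm(v)
    w = u * np.cos(alpha) + v * np.sin(alpha)
    return w * length / np.linalg.norm(w)

u = np.array([1.0, 2.0, 3.0, 4.0])
w = vector_at_angle(u, np.pi / 4, length=2.0)
angle = np.arccos(u @ w / (np.linalg.norm(u) * np.linalg.norm(w)))
print(np.isclose(angle, np.pi / 4), np.isclose(np.linalg.norm(w), 2.0))  # True True
```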

Tagged :

Math Genius: If $A$ is a real or complex algebra and $a\in A$ is such that $ab=0$ for all $b\in A$, then $a=0$?

Original Source Link

Let $A$ be a (not necessarily unital) algebra over $\mathbb{R}$ or $\mathbb{C}$. If $a\in A$ is an element such that $ab=0$ for all $b\in A$ (or equivalently, $aA=\{0\}$), can we then conclude that $a=0$?

At first sight, it looks like a trivial statement. However, I am not able to prove or disprove it.

It is clearly true for unital algebras (take $b=1$). I was also able to prove that this statement is true for (complex) C*-algebras (a certain class of algebras):

If $A$ is a C*-algebra, then $A$ admits at least one approximate unit $(u_{\lambda})_{\lambda\in\Lambda}$. By assumption, we have $au_{\lambda}=0$ for all $\lambda\in\Lambda$. Taking the limit on both sides yields $a=0$.

Any suggestions would be greatly appreciated. It feels like I’m missing something trivial…

We cannot conclude this. Indeed, on any vector space $V$ over any field, one can define a multiplication by setting $vw=0$ for all $v,w\in V$; then every element $a$, zero or not, satisfies $ab=0$ for all $b$.
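
As a toy sketch of this zero-multiplication counterexample (the `ZeroAlg` class below is purely illustrative, representing $\mathbb{R}^2$ with the zero product):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ZeroAlg:
    """R^2 with the trivial (zero) multiplication: every product is 0."""
    x: float
    y: float

    def __add__(self, other):
        return ZeroAlg(self.x + other.x, self.y + other.y)

    def scale(self, k):
        return ZeroAlg(k * self.x, k * self.y)

    def __mul__(self, other):
        return ZeroAlg(0.0, 0.0)      # vw = 0 for all v, w

a = ZeroAlg(1.0, 2.0)                 # a nonzero element
b = ZeroAlg(-3.0, 5.0)
print(a * b, b * a)                   # both ZeroAlg(x=0.0, y=0.0), yet a != 0
```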

Of course, if your algebra $A$ is unital, with unit $e$, then the conclusion does hold. Indeed, $a=ae=0$.

Moreover, there is a more elementary proof that this is true for $C^*$-algebras

Let $A$ be a $C^*$-algebra, and suppose $a\in A$ satisfies $ab=0$ for all $b\in A$. Then, taking $b=a^{*}$ and using the $C^*$-identity,
$$0=\|aa^*\|=\|a\|^2.$$
Thus $\|a\|=0$, and therefore $a=0$.

Hopefully, I am not saying something stupid.

Pick your favourite algebra $A$.

Let $a \notin A$ be any element, and define
$$B= \mathbb{F} a \oplus A$$
where $\mathbb{F}$ is your field.

Now, $B$ becomes an algebra, under the obvious addition and multiplication defined as
$$(\alpha a+b)(\beta a +c)=bc$$

And clearly $aB=\{0\}$, even though $a \neq 0$ in $B$.

Tagged : / / / /

Math Genius: Variant of Picard-Lindelöf theorem

Original Source Link

Question

Let $I=[0,a]$ and define the norm $\|f\|_{\lambda}=\sup_I |e^{-\lambda x}f(x)|$ for $f\in C(I)$. Let $\phi:\mathbb{R}^2\to\mathbb{R}$ satisfy $|\phi(x,u)-\phi(y,v)|\leq\rho |u-v|$ for all $x,y,u,v\in\mathbb{R}$ and some $\rho >0$. Define $\tau$ by $(\tau f)(x)=\int_0^x \phi(t,f(t))\;dt$.

I need to find a $\lambda$ such that $\tau$ is a contraction with respect to the norm $\|\cdot\|_{\lambda}$.

Thoughts

I am not too sure how to do this; my first line of thought was:

$$\begin{aligned}\|\tau (f)-\tau (g)\|_{\lambda} &=\sup_I\Big| e^{-\lambda x} \int_0^x \phi(t,f(t))-\phi(t,g(t))\;dt\Big| \\ &\leq \sup_I e^{-\lambda x} \int_0^x |\phi(t,f(t))-\phi(t,g(t))|\;dt\\ &\leq\sup_I \rho e^{-\lambda x} \int_0^x |f(t)-g(t)|\;dt \end{aligned}$$

But I can't see how to get $\cdots \leq \alpha\sup_I |e^{-\lambda x}(f(x)-g(x))|$ for some $\alpha<1$ and some $\lambda$ from this. Any help would be appreciated.

In general, you can prove that if $w : I := [a,b] \to \mathbb{R}$ is strictly positive and continuous, it defines a norm equivalent to $\|f\|_{\infty}$ on $C^{0}(I)$, namely $\|f\|_{w} := \sup\limits_{t \in I} \left\lbrace w(t)\, |f(t)|\right\rbrace$.

Defining $w(t) := e^{-2L |t-t_{0}|}$, we define the norm $\|f\|_{w} := \sup\limits_{t \in I} \left\lbrace e^{-2L |t-t_{0}|}\, |f(t)|\right\rbrace$.

Let's prove that with this choice, $T$ is a contraction on $I = [t_{0}-\delta,t_{0}+\delta]$:

$$e^{-2L|t-t_{0}|} \left|\int_{t_{0}}^{t}[f(s,v(s))-f(s,u(s))]\,ds\right| \leq e^{-2L|t-t_{0}|} \left|\int_{t_{0}}^{t}|f(s,v(s))-f(s,u(s))|\,ds\right| \leq$$

$$e^{-2L|t-t_{0}|} \left|\int_{t_{0}}^{t} L\, |v(s)-u(s)|\,ds\right| \leq Le^{-2L|t-t_{0}|} \left|\int_{t_{0}}^{t} \|v-u\|_{w}\,e^{2L|s-t_{0}|}\, ds\right|$$

$$ = \frac{\|v-u\|_{w}}{2}\left(1-e^{-2L|t-t_{0}|}\right) \leq \frac{1}{2}\|v-u\|_{w}$$

So, taking the supremum over $t \in I$, we get exactly

$$\|T(v)-T(u)\|_{w} \leq \frac{1}{2}\|v-u\|_{w}$$

In other words, $T$ is a contraction. $\hspace{0.2cm} \Box$

In your case, $I$ becomes $[0,a]$, $T$ becomes $\tau$, $L$ becomes $\rho$, and $\lambda$ is the constant chosen in the definition of the weighted norm; in particular, $\lambda=2\rho$ works and gives the contraction constant $\frac{1}{2}$.
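
A small numerical sketch (purely illustrative, with an arbitrary choice of $\phi$, $f$, and $g$) of the fact that $\tau$ contracts in the weighted norm with $\lambda = 2\rho$:

```python
import numpy as np

a, rho = 2.0, 3.0
lam = 2 * rho                               # lambda = 2*rho as in the argument above
x = np.linspace(0.0, a, 4001)
dx = x[1] - x[0]

def phi(t, u):
    return rho * np.sin(u)                  # Lipschitz in u with constant rho

def tau(f):
    # (tau f)(x) = integral_0^x phi(t, f(t)) dt, via a cumulative trapezoid rule
    integrand = phi(x, f)
    return np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * dx)))

def weighted_norm(f):
    return np.max(np.abs(np.exp(-lam * x) * f))

f = np.cos(5 * x)
g = x**2 - 1.0

ratio = weighted_norm(tau(f) - tau(g)) / weighted_norm(f - g)
print(ratio, ratio < 0.5 + 1e-9)            # ratio stays below 1/2, so tau contracts
```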

Alternatively: if $f$ and $g$ are bounded on $I$, you can pull $\sup_I |f-g|$ out of the integral, since that supremum is a constant with respect to the integration variable. After doing that, the remaining integral is bounded by the length of the interval, and you can choose that length (the $\delta$) small enough to get a contraction constant below $1$.

Maybe not the cleanest approach, and I may well have made a mistake; if I did, please feel free to make any corrections needed.

Tagged : / / /

Math Genius: Spanning of a Vector Space

Original Source Link

Prove or disprove: if a set of $n$ vectors in the vector space $V$ is linearly independent, then a set of $n-1$ vectors cannot span $V$.

I believe the statement to be false. Can’t there be a line, plane, or hyperplane which spans a vector space? Each having their own set of linearly independent vectors?

A set of $n-1$ vectors can span at most an $(n-1)$-dimensional space. So if your space has dimension $n$ (which is the case when it contains $n$ linearly independent vectors), then $n-1$ vectors cannot span it.

A vector space $V$ of dimension $n$ has, by definition, a basis of $n$ linearly independent vectors which span $V$. It is well known that if a vector space has a basis of $n$ vectors, it cannot have a basis with a different number of vectors. Thus, if you have $n-1$ linearly independent vectors, they cannot span $V$; otherwise $V$ would have a basis with $n-1$ elements, which is not possible.

No. If you have an $n$-dimensional space and you have $n$ linearly independent vectors, then they must form a basis for the space.

A set of vectors spanning a space is a basis iff it has the minimum number of vectors needed to span the space.
So if you remove a vector from your basis, the remaining vectors no longer form a basis for $\mathbb R^n$; instead they form a basis for an $(n-1)$-dimensional subspace.

You can prove this more rigorously by writing any $x \in V$ as a linear combination of the vectors in your linearly independent list and showing that, if you remove one vector from the list, you can no longer do this: the removed vector itself cannot be written as a linear combination of the other elements.
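
As a quick numerical illustration of the dimension-counting argument (the vectors below are arbitrary examples), one can compare matrix ranks:

```python
import numpy as np

# Three linearly independent vectors in R^3 (as rows)...
basis = np.array([[1.0, 0.0, 1.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(basis))        # 3 -> they span R^3

# ...but any two of them span only a 2-dimensional subspace of R^3.
two_vectors = basis[:2]
print(np.linalg.matrix_rank(two_vectors))  # 2 -> they cannot span R^3
```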

Tagged : /

Math Genius: Equations of the same plane

Original Source Link

Are
\begin{equation*}
-x-4y+3z=-9~\text{and}~x+4y-3z=6
\end{equation*}
equations of the same plane? I graphed them and they look the same, but I am not sure. Thanks

Two equations $$ax+by+cz = d$$ and $$a'x+b'y+c'z=d'$$ represent the same plane if there exists $\lambda \neq 0$ such that $(a',b',c',d') = \lambda(a,b,c,d)$. In other words, if you can rescale one equation to get the other one. That is not the case here: multiplying the first equation by $-1$ gives $x+4y-3z=9$, not $x+4y-3z=6$, so the two planes are parallel but distinct.
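
As a small illustrative check (not part of the original answer), one can test this proportionality condition numerically by stacking the coefficient vectors and looking at the rank:

```python
import numpy as np

p1 = np.array([-1.0, -4.0,  3.0, -9.0])   # coefficients (a, b, c, d)
p2 = np.array([ 1.0,  4.0, -3.0,  6.0])   # coefficients (a', b', c', d')

# The planes coincide iff one coefficient vector is a nonzero multiple of the
# other, i.e. iff the stacked 2x4 matrix has rank 1.
print(np.linalg.matrix_rank(np.vstack([p1, p2])))         # 2 -> different planes

# The normal vectors alone ARE proportional, so the planes are parallel:
print(np.linalg.matrix_rank(np.vstack([p1[:3], p2[:3]])))  # 1
```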

Tagged : /