Math Genius: Proving that every finitely generated vector space has a basis.

I have tried to research this problem and to come up with a proof myself, but unfortunately I can't think of anything that works. I am having trouble with parts (2) and (4) of this problem. I would really appreciate any help or clarification on how to begin.

Question:
Prove that every finitely generated vector space has a basis. In fact, every vector space has a basis, but the proof of that is beyond the scope of this course. Let $V$ be a non-zero finitely generated vector space.

(1) Let $w, v_1,\ldots,v_n$ be in $V$. Prove that $w \in \operatorname{Span}(v_1,\ldots, v_n)$ if and only if $\operatorname{Span}(w, v_1,\ldots, v_n) = \operatorname{Span}(v_1,\ldots, v_n)$.

(2) Prove, using part (1), that there exists a finite set $X$ with nonzero members such that $V = \operatorname{Span} X$.

(3) Suppose $V$ is spanned by a finite set $X$ of cardinality $n$. By part (2) we can assume $0 \notin X$. Our claim is that $V$ has a basis. We will use induction on $n$. Prove that the induction base holds; that is, if $n = 1$, then $V$ has a basis.

(4) Induction hypothesis: suppose that $V$ has a basis whenever it is spanned by a set of cardinality $n$. Now suppose $V$ is spanned by a set of cardinality $n + 1$. Prove that $V$ has a basis.

(2) We are assuming that $V$ is a non-zero finitely generated vector space. What this means is that $V \neq \{0\}$ and that there is a finite set $X = \{v_1,\ldots,v_n\}$ which spans $V$. If $0 \notin X$, you're done. Otherwise, if, say, $v_1 = 0$, then $v_1 \in \operatorname{span}(\{v_2,\ldots,v_n\})$, so by (1), $V = \operatorname{span}(\{v_2,\ldots,v_n\})$. Repeating this for each zero vector leaves a set of non-zero vectors which still spans $V$; it is non-empty because $V \neq \{0\}$.

(4) Suppose that $V$ is spanned by $\{v_1, v_2,\ldots,v_{n+1}\}$. There are two possibilities:

  • the set $\{v_1, v_2,\ldots,v_{n+1}\}$ is linearly dependent. Then one of the vectors belongs to the span of the others. You can assume without loss of generality that $v_{n+1} \in \operatorname{span}(\{v_1, v_2,\ldots,v_n\})$. Therefore, by (1), $V = \operatorname{span}(\{v_1, v_2,\ldots,v_n\})$, the span of a set of cardinality $n$. But then, by the induction hypothesis, $V$ has a basis.
  • the set $\{v_1, v_2,\ldots,v_{n+1}\}$ is linearly independent. But then it is itself a basis.
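The argument in (2) and (4) is effectively an algorithm: drop zero vectors, then, while the set is dependent, discard a vector lying in the span of the others. Below is a minimal numerical sketch of that pruning in NumPy (my own illustration, not part of the original exercise; the function name `prune_to_basis` and the sample vectors are made up, and floating-point rank serves as the independence test):

```python
import numpy as np

def prune_to_basis(vectors, tol=1e-10):
    """Greedily discard vectors lying in the span of the others.

    Mirrors the induction in (4): if the set is linearly dependent,
    some vector is in the span of the rest, and removing it leaves
    the span unchanged by part (1).
    """
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    vecs = [v for v in vecs if np.linalg.norm(v) > tol]  # part (2): drop zero vectors
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            rest = vecs[:i] + vecs[i + 1:]
            # vecs[i] is redundant iff appending it to `rest` does not raise the rank
            if rest and (np.linalg.matrix_rank(np.column_stack(rest + [vecs[i]]), tol=tol)
                         == np.linalg.matrix_rank(np.column_stack(rest), tol=tol)):
                del vecs[i]
                changed = True
                break
    return vecs

# Four vectors spanning a 2-dimensional subspace of R^3:
S = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 0, 0]]
print(len(prune_to_basis(S)))  # 2: a basis of Span(S) extracted from S
```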

Math Genius: Show that if $S$ is a finite spanning set of $n$ distinct vectors of a vector space $V$, then $V$ is finite dimensional and $\dim(V) \leq n$.

I am grappling with the following problem

Let $S$ be a finite spanning set of $n$ vectors of a vector space $V$. Then $V$ is finite dimensional and $\dim(V) \leq n$.

I have attempted to show that this holds, and this is what I have so far:

If $S = \emptyset$, then $\operatorname{Span}(S) = V = \{0\}$, so the result is clearly true when $S$ is empty. Suppose now that $S$ is non-empty. We will show algorithmically that there is a linearly independent subset of $S$ that spans $V$. If $S$ is linearly independent we are done. Otherwise we can find a vector $s_1 \in S$ such that $\operatorname{Span}(S - \{s_1\}) = V$. If $S - \{s_1\}$ is linearly independent we are done. Otherwise again we may find a vector $s_2 \in S - \{s_1\}$ such that $\operatorname{Span}(S - \{s_1,s_2\}) = V$. We continue in this manner until we finally have a linearly independent subset of $S$.

I'm not sure how to conclude that my process will end, or whether it is even correct. I'm fairly new to writing proofs, so I will appreciate any help, suggestions, and corrections.

My line of thought for my original solution was: if $S$ is already linearly independent, then we don't need to show anything. If that weren't true, then we can find a vector in $S$ such that if we removed it, the smaller set would still span $V$, just as $S$ does. Then we apply the procedure again to the smaller set, stopping if the set is linearly independent; otherwise we continue to remove vectors, obtaining ever smaller sets that still span $V$. I assumed that since $S$ is finite my process would end, and if it does end then we have a finite basis for $V$.

EDIT 1: I thank @Matias Heikkilä and @mathcounterexamples.net for their suggestion to reformulate my proof as an induction proof. What I have managed to produce is the following:

Problem: Let $V$ be a vector space and $S = \{s_1, s_2, \cdots, s_n\}$, where all the vectors in $S$ are distinct vectors in $V$ and $n \geq 1$, such that $\operatorname{Span}(S) = V$. Then $S$ contains a finite basis of $V$.

We proceed by induction on $n$.

Base step ($n=1$): If $n = 1$, then $S = \{s_1\}$. If $S$ is linearly independent, then $S$ forms a finite basis of $V$. If on the other hand $S$ is linearly dependent, then $s_1 = 0$, so $V = \operatorname{Span}(S) = \{0\}$, and $S - \{s_1\} = \emptyset$ is a finite, linearly independent subset of $S$ that spans $V = \{0\}$. In either case we have a finite subset of $S$ that forms a basis of $V$. Thus our proposition holds for the base case.

Inductive step: Suppose inductively that for a fixed $n > 1$ we have proved the result for $n - 1$. We wish to show that the result holds for $n$; that is, the set $S = \{s_1, s_2, \cdots, s_n\}$ contains a subset $B$ that forms a basis of $V$. If $S$ is linearly independent, then we set $B = S$ and we are done. If on the other hand $S$ is linearly dependent, then we can find a vector $v_1 \in S$ such that $\operatorname{Span}(S - \{v_1\}) = \operatorname{Span}(S) = V$. We note that $S - \{v_1\}$ is a set of $n - 1$ elements which spans $V$, so we may let $S' = S - \{v_1\}$ and apply our induction hypothesis to $S'$. Whence there is a finite subset $B$ of $S'$ which forms a basis of $V$. But $B$ is then also a finite subset of $S$. Thus $S$ contains a finite subset that forms a basis of $V$. This completes the inductive step.

Conclusion: By the principle of mathematical induction, the proposition is true for all $n \geq 1$. $\Box$

I'm still unsure of the validity of my original proof and of the inductive proof above.
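As a numerical sanity check of the conclusion (not a proof), the dimension of the span of $n$ vectors never exceeds $n$; here `numpy.linalg.matrix_rank` computes the dimension of the column span. This is my own illustration with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
for trial in range(100):
    n = int(rng.integers(1, 8))          # size of the spanning set S
    k = int(rng.integers(1, 8))          # ambient dimension
    S = rng.normal(size=(k, n))          # n vectors in R^k, stored as columns
    dim_span = np.linalg.matrix_rank(S)  # dim Span(S)
    assert dim_span <= n                 # dim(V) <= |S|, as the problem claims
print("all trials consistent with dim(V) <= n")
```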


Math Genius: Relation between number of components in vectors and the dimension of their span.

We have $n$ vectors, each with $k$ components. Let $V$ be their span and $r$ the dimension of $V$. Then $r \leq k$.

I encountered this in a paper I am reading. I am not able to see why this is true (probably due to my not-so-sound linear algebra background).

Can anyone help me in understanding why the argument is true? Thanks.

Link to paper: https://www.cs.tau.ac.il/~shpilka/publications/RazShpilka_PIT.pdf

Let $\Bbb F$ be the underlying field. A vector with $k$ components is a vector in $\Bbb F^k$, which is a vector space of dimension $k$. If $V$ is the span of $n$ vectors in $\Bbb F^k$, then it is a subspace of $\Bbb F^k$; in particular, its dimension is less than or equal to the dimension of $\Bbb F^k$, that is, $k$.

Denote the $n$ vectors by $v_1, \cdots, v_n \in W$ (where $W$ is the vector space you're working with). To say that each vector has $k$ components is the same as saying that $W$ is $k$-dimensional; that is, there exist $k$ linearly independent vectors $e_1, \cdots, e_k$ whose span equals $W$, no collection of fewer than $k$ vectors can span $W$, and any collection of more than $k$ vectors is linearly dependent. So the result you want follows from the very definition of dimension and basis.
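A quick empirical illustration of $r \leq k$ (my own example, not from the paper): stack the $n$ vectors as the rows of an $n \times k$ matrix; the dimension of their span is the rank of that matrix, which can never exceed $k$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 10, 4                        # 10 vectors, each with 4 components
vectors = rng.normal(size=(n, k))   # one vector per row, living in R^4
r = np.linalg.matrix_rank(vectors)  # r = dim of the span of the rows
print(r, r <= k)                    # "4 True": r <= min(n, k) <= k
```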


Math Genius: Is the intersection between two $n$-spheres an $(n-1)$-sphere?

Is it true that the intersection of two $n$-spheres in $\mathbb{R}^n$ is an $(n-1)$-sphere when it is not empty or a single point? I have tried to prove it, but my only idea is to work with the equations, and apparently that is not a good idea.

We pick one center at the origin for the sphere $S_0$ (radius $a$) and the other at the point $(1,0,0,\ldots,0)$ (radius $b$) for the sphere $S_1$, so that the two equations are respectively
$$x_1^2+x_2^2+\cdots + x_n^2=a^2,\\
(x_1-1)^2+x_2^2+ \cdots +x_n^2=b^2.$$
Subtracting these gives $2x_1-1=a^2-b^2$, so that any point in the intersection $S_0 \cap S_1$ must lie in the hyperplane $x_1=c$, where $c=(a^2-b^2+1)/2$. Within this hyperplane one takes the point $P=(c,0,\cdots,0)$ as the "origin". Then the intersection, lying in $x_1=c$, is a sphere centered at $P$ in that hyperplane, with equation
$$x_2^2+x_3^2+\cdots + x_n^2=r^2,$$
where $r^2$ may be found from either the equation for $S_0$ or the one for $S_1$. Hopefully these match, namely $r^2=a^2-c^2=b^2-(c-1)^2$. These do indeed match (which one would expect by symmetry), and the expression for $r^2$ may be put in the form
$$r^2=\frac14\left[(a+b)^2-1\right]\left[1-(a-b)^2\right].$$
We are assuming $a,b>0$; if the first factor here is negative, it corresponds to $a+b<1$, so the two radii don't add up to enough for the spheres to meet. (Each lies outside the other.) In cases I've tried, if the first factor is positive and the second is negative, it corresponds to the case where one sphere is completely inside the other.
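A quick symbolic check of the last identity (my own verification, using sympy): with $c=(a^2-b^2+1)/2$, both $a^2-c^2$ and $b^2-(c-1)^2$ reduce to the factored form above.

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
c = (a**2 - b**2 + 1) / 2                    # the hyperplane x_1 = c

r2_from_S0 = a**2 - c**2                     # r^2 from the equation of S_0
r2_from_S1 = b**2 - (c - 1)**2               # r^2 from the equation of S_1
factored = sp.Rational(1, 4) * ((a + b)**2 - 1) * (1 - (a - b)**2)

print(sp.simplify(r2_from_S0 - r2_from_S1))  # 0: the two expressions agree
print(sp.simplify(r2_from_S0 - factored))    # 0: and equal the factored form
```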

For generally located centers $C_1, C_2$, I think one can rescale so that the distance between $C_1$ and $C_2$ is one and apply the preceding, at least to get the center and radius. The center is then $C_1+t(C_2-C_1)$ in vector notation, where $t$ is the value of $c$ obtained in the previous case (that $c$, and the radius, should be computed from the rescaled versions of the initial radii of the spheres centered at $C_1$ and $C_2$). In this general case the resulting sphere lies in a hyperplane perpendicular to the line joining the two centers, so it may not be so easy to visualize.


Math Genius: Prove that there's no ordered basis $E$ in which $T{x \choose y}={0 \choose y}$ can be represented as ${1\ 2 \choose 2\ 4}$

Let $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ be such that:

$$T\begin{pmatrix} x \\ y\end{pmatrix} = \begin{pmatrix}0\\ y\end{pmatrix}$$

Show that there's no ordered basis $E$ such that:

$$[T]_E = \begin{pmatrix}
1 & 2\\
2 & 4
\end{pmatrix}$$

I didn't get how I should prove it. I tried some approaches but ended up with nothing. I guess I'm missing something.

The trace is invariant under change of basis. In the canonical basis, $T$ has matrix $\begin{pmatrix}0 & 0 \\ 0 & 1 \end{pmatrix}$, and then $\operatorname{trace} T = 1$. The second matrix has trace $5 \neq 1$. Therefore, there is no basis in which this is the matrix of $T$.
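A numerical illustration of the trace argument (my own snippet): conjugating the canonical matrix of $T$ by a random invertible change of basis leaves the trace at $1$, while the candidate matrix has trace $5$.

```python
import numpy as np

T = np.array([[0.0, 0.0],
              [0.0, 1.0]])    # matrix of T in the canonical basis
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # the candidate matrix [T]_E

rng = np.random.default_rng(2)
P = rng.normal(size=(2, 2))   # a random (almost surely invertible) basis change
T_new = np.linalg.inv(P) @ T @ P

print(np.trace(T), np.trace(T_new))  # 1.0 1.0 (up to rounding)
print(np.trace(M))                   # 5.0, so M cannot represent T in any basis
```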

With

$T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ y \end{pmatrix}, \tag 1$

we have

$T \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \tag 2$

and

$T \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \tag 3$

which shows the eigenvalues of $T$ are $0$ and $1$; on the other hand, the eigenvalues of the matrix

$[T]_E = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \tag 4$

are the roots of its characteristic polynomial

$\det ([T]_E - \lambda I) = \det\begin{pmatrix} 1 - \lambda & 2 \\ 2 & 4 - \lambda \end{pmatrix} = (1 - \lambda)(4 - \lambda) - 4 = 4 - 5\lambda + \lambda^2 - 4 = \lambda^2 - 5 \lambda, \tag 5$

which are $0$ and $5$; since the eigenvalues of the transformation $T$ as in (1) are $0$ and $1$, it cannot be represented by (4) in any basis $E$.
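The same conclusion checked numerically (my own snippet): similar matrices must share eigenvalues, and the two spectra here differ.

```python
import numpy as np

T = np.array([[0.0, 0.0],
              [0.0, 1.0]])   # canonical matrix of T: eigenvalues {0, 1}
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # candidate matrix: eigenvalues {0, 5}

print(sorted(np.linalg.eigvals(T).real))  # [0.0, 1.0]
print(sorted(np.linalg.eigvals(M).real))  # [0.0, 5.0]
# Different spectra, so M is not similar to the matrix of T in any basis.
```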

Math Genius: Is range of A equal to range of AB?

I have a simple question.

Actually, I was trying to solve the question 'Is the range of $A$ equal to the range of $AA^TA$?'

But it seems like a more general question to ask is 'Is the range of $A$ equal to the range of $AB$?'

My first impression was that the range of $A$ is the same as the range of $AB$, because no matter which vectors come after, the matrix $A$ linearly transforms them into the column space of $A$.

But, it looks wrong.

Can you help me to understand it?

And how can I prove that the range of $A$ is equal to the range of $AA^TA$?

I am studying power iteration in randomized SVD, and it is said there that they are the same, but I cannot see why.

https://dl.acm.org/doi/pdf/10.1145/3004053?download=true (in page 4)

I guess something like '$\operatorname{Null}(A)$ is equal to $\operatorname{Null}(A^TA)$' would be helpful, but it is hard to apply here.

Thank you very much.

In general, you can say that
$$\operatorname{Range}(AB) \subseteq \operatorname{Range}(A)$$
because if $v\in \operatorname{Range}(AB)$, then
$$ABx=v\implies A(Bx) = v\implies v\in \operatorname{Range}(A).$$
The opposite inclusion is generally false since, as Arthur suggested, if $B=0$, then $AB=0$, which usually has a very different range from that of $A$.
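A small numerical check of this containment (my own snippet): the column space of $AB$ lies inside that of $A$ exactly when appending the columns of $AB$ to $A$ does not increase the rank.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 3))
B = rng.normal(size=(3, 3))

rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.hstack([A, A @ B]))
print(rank_A == rank_aug)  # True: Range(AB) is contained in Range(A)

# With B = 0 the reverse inclusion fails: Range(AB) = {0} but Range(A) != {0}.
print(np.linalg.matrix_rank(A @ np.zeros((3, 3))))  # 0
```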


For your case, here's a simple explanation using the SVD. If $A=U\Sigma V^T$ is the SVD, then (assuming $A$ real) $AA^TA = U\Sigma^3 V^T$. From the outer-product expansion
$$
A =\sum_i \sigma_i u_iv_i^T
$$

you can see that the range of $A$ is the span of $u_1,\dots,u_r$, where $r$ is the rank of $A$, or equivalently the number of non-zero singular values.
Since
$$
AA^TA = U\Sigma^3 V^T = \sum_i \sigma_i^3 u_iv_i^T
$$

you see that $AA^TA$ has the same rank as $A$, and the range of $AA^TA$ is still the span of $u_1,\dots,u_r$.

As to the general question, $\operatorname{Range}(AB)\subseteq \operatorname{Range}(A)$ always, and $\operatorname{Range}(AB)=\operatorname{Range}(A)$ if $B$ is square and of full rank:
$$\operatorname{Range}(A)=\operatorname{Range}(ABB^{-1})\subseteq \operatorname{Range}(AB)\subseteq \operatorname{Range}(A).$$
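A numerical check of the specific claim $\operatorname{Range}(AA^TA)=\operatorname{Range}(A)$ (my own snippet): equal ranks of $A$, $AA^TA$, and the stacked matrix $[A \mid AA^TA]$ mean the two column spaces coincide.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 3)) @ rng.normal(size=(3, 4))  # a 6x4 matrix of rank 3
AAtA = A @ A.T @ A

r_A = np.linalg.matrix_rank(A)
r_AAtA = np.linalg.matrix_rank(AAtA)
r_both = np.linalg.matrix_rank(np.hstack([A, AAtA]))

print(r_A, r_AAtA, r_both)  # 3 3 3: same rank, same column space
```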


Math Genius: Prove a unique real matrix exists to denote complex numbers in the set of Cauchy–Riemann matrices

One model for the complex numbers $mathbb{C}$ uses the set of Cauchy–Riemann matrices

$CR:=\left\{\begin{pmatrix}a&b\\c&d\end{pmatrix} : a,b,c,d \in \mathbb{R},\ a=d,\ b+c=0\right\}$, with matrix addition and matrix multiplication corresponding to $+$ and $\times$ in $\mathbb{C}$. We say that a matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix} \in CR$ is real if $b=c=0$ and imaginary if $a=d=0$; we write $i:=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$.

$(a)$ Show that for every imaginary $z \in CR$ there is a unique real matrix $u \in CR$ such that $z = u \cdot i$.

$(b)$ Show that for every $z \in CR$ there are unique real matrices $u, v \in CR$ such that $z = u + v \cdot i$.

I believe that for both parts of the question I am missing a key aspect of proving uniqueness. Any help would be appreciated.

Since $i^2=-I_2$, for (a) the required $u$ is
$$-zi=-\begin{pmatrix}
0 & b_{z}\\
-b_{z} & 0
\end{pmatrix}\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}=\begin{pmatrix}
-b_{z} & 0\\
0 & -b_{z}
\end{pmatrix}.$$
For (b) note that
$$u+vi=\begin{pmatrix}
a_{u} & 0\\
0 & a_{u}
\end{pmatrix}+\begin{pmatrix}
a_{v} & 0\\
0 & a_{v}
\end{pmatrix}\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}=\begin{pmatrix}
a_{u} & -a_{v}\\
a_{v} & a_{u}
\end{pmatrix},$$
which agrees with $\begin{pmatrix}
a_{z} & b_{z}\\
-b_{z} & a_{z}
\end{pmatrix}$
iff $a_{u}=a_{z},\ a_{v}=-b_{z}$.
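A numerical sanity check of both parts (my own snippet; the helper `real_cr` is illustrative): take an imaginary $z$, confirm that $u=-zi$ is real with $ui=z$, then decompose a general CR matrix as $u+vi$.

```python
import numpy as np

i = np.array([[0.0, -1.0],
              [1.0,  0.0]])           # the CR model of the imaginary unit

def real_cr(a):
    """The real CR matrix diag(a, a)."""
    return a * np.eye(2)

b_z = 3.0
z = np.array([[0.0,  b_z],
              [-b_z, 0.0]])           # an imaginary CR matrix

# Part (a): u = -z i is real and satisfies u i = z.
u = -z @ i
print(np.allclose(u, real_cr(-b_z)), np.allclose(u @ i, z))  # True True

# Part (b): a general CR matrix is u + v i with the real matrices read off above.
a_z = 2.0
w = np.array([[a_z,  b_z],
              [-b_z, a_z]])
print(np.allclose(w, real_cr(a_z) + real_cr(-b_z) @ i))      # True
```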

Some intuition on CR matrices may be gained via inspection of the following simple matrix equations (1)-(3), easily verified:

$\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix}; \tag 1$

$\begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix}\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -a & 0 \\ 0 & -a \end{bmatrix}; \tag 2$

$\begin{bmatrix} 0 & -a \\ a & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}; \tag 3$

we have

$i = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}; \tag 4$

$i^2 = -I; \tag 5$

$(-i)i = -i^2 = I; \tag 6$

$i^{-1} = -i. \tag 7$
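These identities are easy to confirm mechanically; a throwaway check of (1)-(3) and (5)-(7), my own snippet:

```python
import numpy as np

a = 2.5
i = np.array([[0.0, -1.0], [1.0, 0.0]])
real_a = np.array([[a, 0.0], [0.0, a]])    # a real CR matrix
imag_a = np.array([[0.0, -a], [a, 0.0]])   # an imaginary CR matrix

assert np.allclose(real_a @ i, imag_a)     # (1)
assert np.allclose(imag_a @ i, -real_a)    # (2)
assert np.allclose(imag_a @ (-i), real_a)  # (3)
assert np.allclose(i @ i, -np.eye(2))      # (5)
assert np.allclose((-i) @ i, np.eye(2))    # (6)
assert np.allclose(np.linalg.inv(i), -i)   # (7)
print("identities verified")
```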

Note that (1) shows that for every imaginary $z in CR$ there is a real $u in CR$ such that

$ui = z; \tag 8$

if

$yi = z, \tag 9$

then in light of (5),

$-y = -yI = yi^2 = zi, \tag{10}$

or

$y = -zi, \tag{11}$

which shows that there is precisely one $u$ satisfying (8); thus is item (a) resolved.

We next observe that the only $w \in CR$ which is both real and imaginary is $0$, for if $w$ is real then

$w = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} \tag{12}$

for some $a \in \Bbb R$, and since $w$ is also imaginary, we have

$w = \begin{bmatrix} 0 & b \\ -b & 0 \end{bmatrix} \tag{13}$

for some $b \in \Bbb R$; equating these two forms of $w$ yields

$\begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} = \begin{bmatrix} 0 & b \\ -b & 0 \end{bmatrix}; \tag{14}$

comparing entries of these two matrices indicates that

$a = b = 0, \tag{15}$

whence

$w = 0, \tag{16}$

as asserted.

Now any $z \in CR$ may be written

$z = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} \tag{17}$

for some $a, b \in \Bbb R$; we have

$z = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} + \begin{bmatrix} 0 & -b \\ b & 0 \end{bmatrix} = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} + \begin{bmatrix} b & 0 \\ 0 & b \end{bmatrix}\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}; \tag{18}$

we denote the real matrices occurring in this equation by $u$ and $v$:

$u = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}, \tag{19}$

$v = \begin{bmatrix} b & 0 \\ 0 & b \end{bmatrix}, \tag{20}$

then using (4) we have

$z = u + vi; \tag{21}$

if there were two other real matrices $u'$ and $v'$ such that

$z = u' + v'i, \tag{22}$

then

$u' + v'i = u + vi, \tag{23}$

or

$u' - u = vi - v'i = (v - v')i; \tag{24}$

since $u' - u$ and $v - v'$ are both real, $(v - v')i$ is imaginary in light of (1), and thus by what we have just seen in (12)-(16) it follows that

$u' - u = (v - v')i = 0, \tag{25}$

whence

$u' = u, \tag{26}$

and again by virtue of (5),

$v - v' = (v - v')I = -(v - v')i^2 = (-(v - v')i)i = 0, \tag{27}$

and thus

$v = v' \tag{28}$

as well. Thus the uniqueness of $u$, $v$ as in (21) is established, and we have dispensed with item (b).


Math Genius: When does a d-dimensional array with given “weighted” marginals exist (“weighted transportation polytopes”)?

Suppose I have two vectors $p,q \in \mathbb{R}^n_{>0}$. It is somewhat well known that one can find a matrix $M \in \mathbb{R}^{n\times n}_{\geq 0}$ (a transportation plan) that has $p$ and $q$ as marginals, that is,
$$\sum_i M_{i,j}=p_j \qquad \sum_j M_{i,j}=q_i,$$
if and only if $\sum_i p_i= \sum_i q_i$.

Now suppose that in addition to $M \in \mathbb{R}^{n\times n}$ one can choose $\lambda_1,\lambda_2 \in \mathbb{R}^n_{>0}$ ("transportation efficiencies") in such a way that
$$\sum_i M_{i,j}\lambda_{1j}=p_j \qquad \sum_j M_{i,j}\lambda_{2i}=q_i.$$
One can then always take $M$ to be the identity.
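A quick check of that last remark (my own snippet): with $M=I$, the weighted marginal conditions force $\lambda_1=p$ and $\lambda_2=q$, which are admissible since $p,q>0$.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
p = rng.uniform(0.5, 2.0, size=n)  # strictly positive target marginals
q = rng.uniform(0.5, 2.0, size=n)

M = np.eye(n)                      # take the transportation plan to be the identity
lam1, lam2 = p, q                  # the efficiencies forced by M = I

# sum_i M[i,j] * lam1[j] = p[j]  and  sum_j M[i,j] * lam2[i] = q[i]
print(np.allclose(M.sum(axis=0) * lam1, p),
      np.allclose(M.sum(axis=1) * lam2, q))  # True True
```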

Question: While the former problem is not well understood beyond two dimensions, I am wondering what can be said about the latter problem in higher dimensions (I couldn't find a reference; maybe it's trivial, but I don't see it). So, for example, in 3 dimensions we have 2D marginals $p,q,r \in \mathbb{R}^{n\times n}_{>0}$ and want to find an array $M \in \mathbb{R}^{n\times n\times n}_{\geq 0}$ and weights $\lambda_1,\lambda_2,\lambda_3 \in \mathbb{R}^n_{>0}$ such that

$$\sum_i M_{i,j,k}\lambda_1=p_{j,k} \qquad \sum_j M_{i,j,k}\lambda_2=q_{i,k} \qquad \sum_k M_{i,j,k}\lambda_3=r_{i,j}.$$

How must the marginals $p,q,r$ be chosen so that I can find a corresponding array $M$ and weights $\lambda_i$?
