## Math Genius: Reference about infinite dimensional matrix

I would like to learn about infinite-dimensional matrices, i.e. matrices with infinitely many rows and columns. I have found some papers on infinite matrices, but there does not seem to be a systematic treatment of them. Which book should I read?


## Math Genius: Find rigid body transform $T_b$ given transforms $T_a, T_c$, where $T_c = T_b^{-1}T_aT_b$, i.e. find the change of basis.

Let $$\mathbf{T}_a, \mathbf{T}_b, \mathbf{T}_c \in SE(3)$$ be rigid body transforms, and

$$\mathbf{T}_c = \mathbf{T}_b^{-1}\mathbf{T}_a\mathbf{T}_b.$$

We wish to find $$\mathbf{T}_b$$ given $$\mathbf{T}_a, \mathbf{T}_c$$.

I think this is equivalent to finding a change of basis between $$\mathbf{T}_a$$ and $$\mathbf{T}_c$$. My approach so far has been to decompose the problem into rotational and translational constraints as follows.

Let $$\mathbf{R}_a \in SO(3)$$ be the rotational component of $$\mathbf{T}_a$$, and $$\mathbf{t}_a$$ the translational component. Then we have:

$$\begin{bmatrix}\mathbf{R}_c & \mathbf{t}_c\\ \mathbf{0} & 1\end{bmatrix} = \begin{bmatrix}\mathbf{R}^{\top}_b & -\mathbf{R}^{\top}_b\mathbf{t}_b\\ \mathbf{0} & 1\end{bmatrix} \begin{bmatrix}\mathbf{R}_a & \mathbf{t}_a\\ \mathbf{0} & 1\end{bmatrix}\begin{bmatrix}\mathbf{R}_b & \mathbf{t}_b\\ \mathbf{0} & 1\end{bmatrix}$$

Expanding this equation and collecting terms gives the following set of matrix equations:

$$(1)\ \mathbf{R}_c = \mathbf{R}^{\top}_b\mathbf{R}_a\mathbf{R}_b\\ (2)\ \mathbf{t}_c = \mathbf{R}^{\top}_b\mathbf{R}_a\mathbf{t}_b + \mathbf{R}^{\top}_b\mathbf{t}_a - \mathbf{R}^{\top}_b\mathbf{t}_b$$

In general, (1) has infinitely many solutions, which form a circle. We can find them as follows:

Let $$\mathbf{c}$$ be the rotation axis of $$\mathbf{R}_c$$, i.e. $$\mathbf{R}_c\mathbf{c} = \mathbf{c}$$. Then:

$$\mathbf{R}_c\mathbf{c} = \mathbf{R}^{\top}_b\mathbf{R}_a\mathbf{R}_b\mathbf{c} \\ \mathbf{c} = \mathbf{R}^{\top}_b\mathbf{R}_a\mathbf{R}_b\mathbf{c} \\ \mathbf{R}_b\mathbf{c} = \mathbf{R}_a\mathbf{R}_b\mathbf{c}$$

This means $$\mathbf{R}_b\mathbf{c}$$ is parallel to the rotation axis $$\mathbf{a}$$ of $$\mathbf{R}_a$$. Hence $$\mathbf{R}_b$$ can be taken as any rotation taking $$\mathbf{c}$$ to $$\mathbf{a}$$, i.e. a rotation aligning the rotation axes.

As stated, there are infinitely many such rotations. We may choose $$\mathbf{R}_b$$ to be, for example, the rotation in the plane spanned by $$\mathbf{a}$$ and $$\mathbf{c}$$ (i.e. about the axis $$\mathbf{a}\times\mathbf{c}$$) by the angle $$\arccos\left(\frac{\mathbf{a}\cdot\mathbf{c}}{|\mathbf{a}||\mathbf{c}|}\right)$$. There are a few degenerate cases, for example if $$\mathbf{R}_c = \mathbf{I}$$ or $$\mathbf{R}_a = \mathbf{I}$$, or if $$\mathbf{R}_c = \mathbf{R}_a$$. I dealt with those here.

This takes care of (1).

Substituting our choice for $$\mathbf{R}_b$$ into (2), and solving for $$\mathbf{t}_b$$, we find:

$$\left(\mathbf{R}_a - \mathbf{I}_{3\times3}\right)\mathbf{t}_b = \mathbf{R}_b\mathbf{t}_c - \mathbf{t}_a$$

where $$\mathbf{I}_{3\times3}$$ is the $$3\times 3$$ identity matrix. The matrix $$\mathbf{R}_a - \mathbf{I}_{3\times3}$$ is singular, since $$\mathbf{R}_a$$ always has $$1$$ as an eigenvalue. The resulting system therefore has either zero or infinitely many solutions for any choice of $$\mathbf{R}_b$$.

When does this system have solutions at all? What are the additional constraints? Is there a neater way of approaching this problem? What additional constraints are necessary to give a unique solution?
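The rotational constraint (1) is easy to sanity-check numerically. The following Python/NumPy sketch (the Rodrigues helper, axes, and angle are my own choices, not from the question) builds $$\mathbf{R}_a$$ and $$\mathbf{R}_c$$ as conjugate rotations with the same angle and verifies that a rotation aligning the axes satisfies (1):

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about a unit axis by the given angle (Rodrigues' formula)."""
    x, y, z = axis
    K = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])  # cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def normalize(v):
    return v / np.linalg.norm(v)

theta = 0.7                                  # conjugate rotations share the same angle
a = normalize(np.array([1.0, 2.0, 3.0]))     # rotation axis of R_a
c = normalize(np.array([0.0, 1.0, 1.0]))     # rotation axis of R_c
R_a = rodrigues(a, theta)
R_c = rodrigues(c, theta)

# R_b aligns the axes (takes c to a): rotate about c x a by the angle between them
R_b = rodrigues(normalize(np.cross(c, a)), np.arccos(np.clip(c @ a, -1.0, 1.0)))

assert np.allclose(R_b @ c, a)               # R_b c is parallel to the axis of R_a
assert np.allclose(R_b.T @ R_a @ R_b, R_c)   # constraint (1) is satisfied
```

A check like this also makes the degenerate cases visible: when the two axes are parallel, `np.cross(c, a)` vanishes and the axis-alignment choice breaks down.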

## Math Genius: Derivative of squared norm of component of a matrix perpendicular to identity matrix, with respect to the original matrix

Let $$J\in\mathbb{R}^{n\times n}$$.

What is the derivative (with respect to $$J$$) of the squared norm of the component of $$J$$ that is orthogonal to $$I$$ (the identity matrix)?

### Attempt

$$J$$’s projection onto $$I$$ is $$\frac{\langle J,I\rangle_F}{n}I=\frac{\operatorname{Tr}(J)}{n}I$$

where $$\langle A,B\rangle_F=\operatorname{Tr}(A^TB)$$ denotes the Frobenius inner product (dot product for matrices) and $$\operatorname{Tr}(A)$$ denotes the trace of $$A$$.

So the orthogonal component is $$J-\frac{\operatorname{Tr}(J)}{n}I$$

So we seek

$$\frac{\partial}{\partial J}\left\|J-\frac{\operatorname{Tr}(J)}{n}I\right\|_F^2$$
$$=\frac{\partial}{\partial J}\operatorname{Tr}\left(\left(J-\frac{\operatorname{Tr}(J)}{n}I\right)^T\left(J-\frac{\operatorname{Tr}(J)}{n}I\right)\right)$$
$$=\frac{\partial}{\partial J}\operatorname{Tr}\left(J^TJ-\frac{\operatorname{Tr}(J)}{n}(J^T+J)+\frac{\operatorname{Tr}^2(J)}{n^2}I\right)$$

How should I proceed (if this is correct so far)?

This question is (relatively) easily answered using the chain rule for the total derivative. Let $$f(X) = \|X\|_F^2$$, and let $$g(X) = X - \frac{\operatorname{tr}(X)}{n}I$$. We note that $$g$$ is linear, so its derivative is given by $$dg(X)(H) = g(H)$$. On the other hand, we have
$$f(X + H) = \operatorname{tr}[(X + H)^T(X + H)] \\ = \operatorname{tr}(X^TX) + 2\operatorname{tr}(X^TH) + \operatorname{tr}(H^TH)\\ = f(X) + 2\operatorname{tr}(X^TH) + o(\|H\|_F).$$
Conclude that $$df(X)(H) = 2\operatorname{tr}(X^TH)$$.

With the chain rule, we have
$$d[f \circ g](X)(H) = [df(g(X)) \circ dg(X)](H) = df(g(X))(g(H)) \\ = 2\operatorname{tr}(g(X)^Tg(H)) = 2\operatorname{tr}(X^Tg(H)) = 2\operatorname{tr}\left(X^T\left[H - \frac{\operatorname{tr}(H)}{n}I\right]\right)\\ = 2\operatorname{tr}\left(X^TH\right) - \frac{2}{n} \operatorname{tr}(X)\operatorname{tr}\left(H\right),$$
where $$\operatorname{tr}(g(X)^Tg(H)) = \operatorname{tr}(X^Tg(H))$$ because $$g$$ is self-adjoint and idempotent.
To convert this to the more conventional “denominator layout” format, we can use the connection between the notations explained here to find that $$h(J) = (f \circ g)(J)$$ satisfies
$$\frac{dh}{dJ} = 2J - \frac{2}{n} \operatorname{tr}(J)I = 2g(J).$$

In continuum mechanics, they have a name for this:

it’s called the isotropic-deviatoric decomposition.
$${\rm iso}(A) = \left[\frac{{\rm Tr}(A)}{{\rm Tr}(I)}\right]I, \qquad {\rm dev}(A) = A - {\rm iso}(A)$$
The operations are idempotent and orthogonal
$$\eqalign{ {\rm iso}({\rm iso}(A)) &= {\rm iso}(A) \\ {\rm iso}({\rm dev}(A)) &= {\rm dev}({\rm iso}(A)) = 0 \\ {\rm dev}({\rm dev}(A)) &= {\rm dev}(A) \\ }$$
and behave like the Sym-Skew operators with respect to the inner product
$$\eqalign{ A:B &= {\rm Tr}\big(A^TB\big) \quad \text{(Frobenius product)} \\ 0 &= {\rm iso}(A):{\rm dev}(B) \\ A:{\rm iso}(B) &= {\rm iso}(A):{\rm iso}(B) = {\rm iso}(A):B \\ A:{\rm dev}(B) &= {\rm dev}(A):{\rm dev}(B) = {\rm dev}(A):B \\ }$$
Write the current problem in terms of these operators.

Then calculate the differential and gradient.
$$\eqalign{ X &= {\rm dev}(J) \\ \phi &= X:X \\ d\phi &= 2X:dX \\ &= 2X:{\rm dev}(dJ) \\ &= 2\,{\rm dev}(X):dJ \\ &= 2X:dJ \\ \frac{\partial\phi}{\partial J} &= 2X = 2\,{\rm dev}(J) \\ }$$

With $$S(X) = \|X\|^2 = \langle X, X \rangle$$ we have
$$DS(X) H = 2 \langle X, H\rangle.$$

Since $$\phi(J)= J - \frac{\operatorname{tr} J}{n} I$$ is linear, we see that $$D \phi(J)H = \phi(H)$$.

The chain rule gives
$$D (S\circ \phi) (J)H = D S(\phi(J))\, D \phi(J)H = 2 \langle \phi(J), \phi(H)\rangle.$$

Unwinding (& rewinding) gives
$$D (S\circ \phi) (J)H = \left\langle 2J - \frac{2\operatorname{tr} J}{n} I , H \right\rangle = \langle 2\phi(J), H\rangle.$$
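All of the answers above arrive at the same gradient, $$2\,{\rm dev}(J)$$. A finite-difference check in Python/NumPy (the dimension, seed, and tolerance are my own choices) confirms it numerically:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
J = rng.standard_normal((n, n))

def dev(X):
    """Component of X orthogonal to the identity (deviatoric part)."""
    return X - (np.trace(X) / n) * np.eye(n)

def phi(X):
    """Squared Frobenius norm of the deviatoric part."""
    return np.sum(dev(X) ** 2)

grad = 2 * dev(J)  # closed-form gradient from the answers above

# compare against central finite differences, entry by entry
eps = 1e-6
fd = np.zeros_like(J)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        fd[i, j] = (phi(J + E) - phi(J - E)) / (2 * eps)

assert np.allclose(grad, fd, atol=1e-6)
```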


## Math Genius: What properties does $A\in M_n(\Bbb F),\ n\geqslant2$ have if $\operatorname{rank}(A)=\det(A)$?

What properties does $$A\in M_n(\Bbb F),\ n\geqslant2$$ have if $$\operatorname{rank}(A)=\det(A)$$?

My work:

$$A$$ is either regular (invertible) or the zero matrix.

For the non-trivial case, let $$C\in M_n,\ n\geqslant2$$ be regular with $$\det(C)=n$$ and
$$d(n):=\{k\in\Bbb N: k\mid n\}$$

Depending on whether $$n$$ is prime or composite, we multiplied some number ($$\leqslant d(k)$$) of columns (rows) by some $$k$$.

To visualise (the simplest Laplace expansion, or the evaluation of the determinant of a diagonal matrix), let: $$B=\begin{bmatrix}I_{n-1}&0\\0&n\end{bmatrix}$$
$$\prod_{i=1}^n a_{ii}=\det(B)=n\cdot\det(I_{n-1})\in \Bbb N$$

For odd $$n$$ it would work even with $$-I_{n-1}$$.

Is $$B\sim C$$ s.t. the equivalence is realized by an arbitrary number of elementary transformations of the $$3^{\text{rd}}$$ type and/or an even number of the first two types? Do the matrices then satisfy $$B\in[C]$$, so that $$|B|=|C|,\ B,C\in M_n(\Bbb Z)$$?

Such a matrix can be built simply in the following way.

Let $$A$$ be any nonsingular square $$n \times n$$ matrix over any field $$\Bbb F$$ extending the rationals ($$\Bbb Q \subseteq \Bbb F$$).

We have $$\det A \neq 0$$, while the rank of $$A$$ is $$n$$.
Let $$a_1, \dots , a_n \in \Bbb F$$ be such that
$$a_1 \cdots a_n = n (\det A)^{-1}$$
Then if you multiply, for each $$j \in \{ 1, \dots , n \}$$, the $$j$$-th row/column of $$A$$ by $$a_j$$, you get a new matrix $$\tilde{A}$$ whose determinant is $$n = r(\tilde{A})$$.

In particular, the matrix need not have integer entries, nor integer eigenvalues. In other words, it could be (almost) anything.
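The scaling construction is easy to verify numerically. The sketch below (Python/NumPy, over the reals; it scales a single row rather than distributing the factors $$a_1,\dots,a_n$$, which is a simplification of the construction above) produces a matrix with $$\operatorname{rank} = \det = n$$:

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))       # generically nonsingular, so rank(A) = n
assert np.linalg.matrix_rank(A) == n

# scale one row so the determinant becomes n; the rank is unchanged
A_tilde = A.copy()
A_tilde[0] *= n / np.linalg.det(A)

assert np.isclose(np.linalg.det(A_tilde), n)
assert np.linalg.matrix_rank(A_tilde) == n   # rank(A~) = det(A~) = n
```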


## Server Bug Fix: Is the rank of a matrix equal to the number of non-zero eigenvalues?

I have studied before that the rank of a matrix equals the number of non-zero eigenvalues. But recently I came across a problem where I don't think that holds. I know I am going wrong somewhere.

$$A= \begin{bmatrix} 0 & 4 & 0 \\ 0 & 0 & 4\\ 0 & 0 & 0 \\ \end{bmatrix}$$
The rank of this matrix is 2, so there should be 2 non-zero eigenvalues. But I only get $$0$$ as an eigenvalue ($$\lambda$$) from $$\det(A-\lambda I)=0$$.

Can anybody explain? Thanks

rank of a matrix = number of non-zero eigenvalues

is not true, as you have witnessed.

Consider that $$A^3=0$$, so if $$A$$ has an eigenvalue $$\lambda$$ and $$v\neq0$$ is a corresponding eigenvector, then
$$0=A^3v=\lambda^3v$$
meaning $$\lambda^3=0$$, so $$\lambda$$ must be $$0$$.

The rank is, however, equal to the dimension of the image, which is to say, the size of the largest possible set of linearly independent vectors of the form $$Av$$.

It is also the case that nilpotency (or more specifically the fact that the image may contain elements of the kernel) is in some sense the only thing that can go wrong with your statement.

Unfortunately, the answer is no in general, though the claim does hold for diagonalizable matrices. Not all matrices are diagonalizable, including the matrix in your example. If your matrix is $$n \times n$$, then diagonalizability is equivalent to having a set of $$n$$ linearly independent eigenvectors, and the eigenvectors corresponding to non-zero eigenvalues form a basis for the range of the matrix; hence the rank equals the number of non-zero eigenvalues, counted with multiplicity.

However, if you look at $$A^T A$$, then you can use the eigenvalues of that matrix to obtain the rank, regardless of what $$A$$ is. This is because $$A^T A$$ is symmetric, and thus must be diagonalizable, and furthermore one can show that $$\mathrm{rank}(A^T A) = \mathrm{rank}(A)$$.
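Both answers can be checked directly on the matrix from the question; a short NumPy sketch (the zero tolerance is my own choice):

```python
import numpy as np

A = np.array([[0.0, 4.0, 0.0],
              [0.0, 0.0, 4.0],
              [0.0, 0.0, 0.0]])

assert np.linalg.matrix_rank(A) == 2                 # rank is 2 ...
assert np.all(np.abs(np.linalg.eigvals(A)) < 1e-9)   # ... yet every eigenvalue is 0

# A^T A is symmetric, hence diagonalizable, and rank(A^T A) = rank(A)
w = np.linalg.eigvalsh(A.T @ A)
assert np.sum(w > 1e-9) == 2                         # 2 non-zero eigenvalues give the rank
```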

## Math Genius: Counting a walk $i \rightarrow k \rightarrow l \rightarrow i \rightarrow k \rightarrow j \rightarrow l \rightarrow j$ in a graph

This paper gives a procedure for counting redundant paths (which I will refer to as walks) in a graph using its adjacency matrix. As an exercise, I want to count only the walks of the form $$i \rightarrow k \rightarrow l \rightarrow i \rightarrow k \rightarrow j \rightarrow l \rightarrow j$$ from node $$i$$ to node $$j$$, with $$i \neq j \neq k \neq l$$. Also see this post.

Let $$A$$ be the adjacency matrix. The notation I use below is: “$$\cdot$$” for the usual matrix multiplication (when applied to matrices, otherwise the ordinary product of numbers), “$$\odot$$” for the element-wise matrix product, “$$\text{diag}(A)$$” for the matrix with the same principal diagonal as $$A$$ and zeros elsewhere, and $$S = A \odot A^T$$.

The idea is to find a matrix expression giving the desired walk. I will include 3 examples from the paper. These seem straightforward, but I do not know how to infer a general rule.

(First example)

$$i \rightarrow \color{red}j \rightarrow k \rightarrow l \rightarrow \color{red}j \tag{1}$$

In this case, the desired matrix has as its $$(i, j)$$ entry the product:
$$a_{ij} \cdot a_{jk} \cdot a_{kl} \cdot a_{lj}$$

Here, the matrix with entries $$a_{jk} \cdot a_{kl} \cdot a_{lj}$$ is $$\text{diag}(A^3)$$ and $$a_{ij}$$ is given just by $$A$$. Overall, the expression for this walk is $$A \cdot \text{diag}(A^3)$$.

(Second example)

$$\color{blue}i \rightarrow \color{red}j \rightarrow k \rightarrow \color{blue}i \rightarrow \color{red}j \tag{2}$$

In this case, the desired matrix has as its $$(i, j)$$ entry the product:
$$a_{ij} \cdot a_{jk} \cdot a_{ki} \cdot a_{ij}$$

One thing to note is that $$a_{ij}$$ is included 2 times and $$a_{ij} \cdot a_{ij} = a_{ij}$$ (the entries are 0 or 1), so the expression simplifies to:
$$a_{jk} \cdot a_{ki} \cdot a_{ij}$$

Here, $$a_{jk} \cdot a_{ki}$$ is the $$(i, j)$$ entry of $$(A^2)^T$$, so overall the desired matrix is given by $$A \odot (A^2)^T$$.

(Third example)
$$\color{blue}i \rightarrow \color{orange}k \rightarrow \color{blue}i \rightarrow \color{orange}k \rightarrow j \tag{3}$$

Here, the $$(i, j)$$ entry of the matrix is given by:
$$a_{ik} \cdot a_{ki} \cdot a_{ik} \cdot a_{kj}$$

One $$a_{ik}$$ can again be removed, leaving:
$$a_{ik} \cdot a_{ki} \cdot a_{kj}$$

The argument here is that $$a_{ik} \cdot a_{ki}$$ is the $$(i, k)$$ entry of $$S = A \odot A^T$$ and $$a_{kj}$$ is the $$(k, j)$$ entry of $$A$$, so the summation is over $$k$$, giving $$S \cdot A$$.

(Problem at hand)

However, I can’t seem to make any progress on $$\color{blue}i \rightarrow \color{orange}k \rightarrow l \rightarrow \color{blue}i \rightarrow \color{orange}k \rightarrow \color{red}j \rightarrow l \rightarrow \color{red}j$$. The $$(i, j)$$ entry is given by:

$$a_{ik} \cdot a_{kl} \cdot a_{li} \cdot a_{ik} \cdot a_{kj} \cdot a_{jl} \cdot a_{lj}.$$

I do not fully understand the rules. Here, it would seem that the summation has to be done over two different indices, $$k$$ and $$l$$. Another source of confusion is whether it is allowed to operate on factors which are not adjacent, like those at positions 1 ($$a_{ik}$$) and 5 ($$a_{kj}$$) above. Since they are factors of a product, their position shouldn’t matter.
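One way to build confidence in these rules is to verify an example by brute force on a small random graph. The sketch below (graph size and density are my own choices; it checks the first example only, without the distinctness constraints) compares $$A \cdot \text{diag}(A^3)$$ against an explicit sum over $$k, l$$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = (rng.random((n, n)) < 0.5).astype(int)  # random 0/1 adjacency matrix
np.fill_diagonal(A, 0)                      # no self-loops

# example (1): walks i -> j -> k -> l -> j, claimed to be A . diag(A^3)
A3 = np.linalg.matrix_power(A, 3)
expr = A @ np.diag(np.diag(A3))

# brute-force sum over all k, l of a_ij * a_jk * a_kl * a_lj
brute = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                brute[i, j] += A[i, j] * A[j, k] * A[k, l] * A[l, j]

assert np.array_equal(expr, brute)
```

The same brute-force template, with one nested loop per free index, can be pointed at any candidate matrix expression for the seven-step walk.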

## Math Genius: trace and derivative: understanding $\text{tr}\left(e_j e_i^T B^T B \right) = \langle B e_i e_j^T, B \rangle$

where $$B$$ is an $$n \times n$$ matrix.

Can someone help me understand the above equality? What kind of inner product are we using here?

Also, why do we have $$\frac{d}{dB}\left(\langle B e_i e_j^T, B \rangle \right) = B e_i e_j^T$$ ?

The Frobenius product (aka inner product) can be written in terms of the trace
$$A:B={\rm Tr}\big(AB^T\big)$$
The trace’s cyclic property means that such products can be rearranged in various ways, e.g.
$$\eqalign{ A:B &= A^T:B^T = B:A \\ A:BC &= AC^T:B = B^TA:C \\ }$$
Write the function in terms of the Frobenius product. Then calculate its differential and gradient.
$$\eqalign{ \phi &= {\rm Tr}\left(e_je_i^TB^TB\right) = e_je_i^T:B^TB \\ d\phi &= e_je_i^T:(dB^T\,B+B^T\,dB) \\ &= (e_je_i^T)B^T:dB^T + B(e_je_i^T):dB \\ &= B(e_je_i^T)^T:dB + B(e_je_i^T):dB \\ &= B(e_ie_j^T+e_je_i^T):dB \\ \frac{\partial\phi}{\partial B} &= B\left(e_ie_j^T+e_je_i^T\right) \\ }$$
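A finite-difference check of the gradient $$B(e_ie_j^T+e_je_i^T)$$ in Python/NumPy (the dimension, indices, and tolerance are my own choices):

```python
import numpy as np

n, i, j = 4, 1, 2
rng = np.random.default_rng(3)
B = rng.standard_normal((n, n))
ei = np.eye(n)[:, [i]]   # standard basis vectors as n x 1 columns
ej = np.eye(n)[:, [j]]

def phi(B):
    """phi = tr(e_j e_i^T B^T B)."""
    return np.trace(ej @ ei.T @ B.T @ B)

grad = B @ (ei @ ej.T + ej @ ei.T)  # gradient derived above

# central finite differences, entry by entry
eps = 1e-6
fd = np.zeros_like(B)
for r in range(n):
    for c in range(n):
        E = np.zeros((n, n))
        E[r, c] = eps
        fd[r, c] = (phi(B + E) - phi(B - E)) / (2 * eps)

assert np.allclose(grad, fd, atol=1e-6)
```

Note that the numerical gradient confirms the symmetrized form, not the $$B e_i e_j^T$$ stated in the question.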


## Math Genius: General form of $SU(2)$ matrix

I am pretty sure this is a duplicate, but on the other hand it’s hard to find by title.

I am having a hard time showing that every matrix in $$SU(2)$$ is of the form $$\begin{pmatrix}z & -\overline w \\ w & \overline z\end{pmatrix}$$ for some $$z, w \in \mathbb{C}$$ such that $$|z|^2 + |w|^2 =1$$. In most of the sources (including answers here) it is stated as something well known. It is indeed well known, but I have never seen a proof and couldn’t come up with one.

Let $$\begin{bmatrix}x_1 & x_2\\x_3 & x_4\end{bmatrix}\in SU(2)$$; then $$x_1 x_4-x_3 x_2=1$$ because the determinant is $$1$$ (this you did not mention), and $$\begin{bmatrix}\overline{x_1} & \overline{x_3}\\\overline{x_2} & \overline{x_4}\end{bmatrix}=\begin{bmatrix}x_4 & -x_2\\-x_3 & x_1\end{bmatrix},$$ so $$x_1=\overline{x_4}$$ and $$\overline{x_2}=-x_3$$.
Notice that the other $$2$$ equations are just manipulations of these two. Check that the left-hand side is the adjoint of the matrix and the right-hand side is its inverse.

Note that $$\text{SU}(2):=\{A\in M(2,\mathbb{C}): \langle Ax,Ay\rangle=\langle x,y\rangle\ \forall x,y\in\mathbb{C}^2,\ \text{det}(A)=1\}$$ where $$\langle x,y\rangle:=x_1\bar{y_1}+x_2\bar{y_2}$$. Then $$\text{SU}(2)=\{A\in M(2,\mathbb{C}): AA^*=A^*A=I_2,\ \text{det}(A)=1\}$$ where $$A^*=\overline{(A^{t})}$$.

So for every $$A=\begin{pmatrix}a & b \\ c & d\end{pmatrix}\in\text{SU}(2)$$ it is true that $$A^{-1}=A^{*}\ \text{and}\ \text{det}(A)=1$$:

$$A^{-1}=\begin{pmatrix}d & -b \\ -c & a\end{pmatrix}$$ and $$A^{*}=\begin{pmatrix}\overline{a} & \overline{c} \\ \overline{b} & \overline{d}\end{pmatrix}$$, hence $$A^{-1}=A^*$$ implies that $$d=\overline{a}\ \text{and}\ b=-\overline{c}$$. Then $$A=\begin{pmatrix}a & -\overline{c} \\ c & \overline{a}\end{pmatrix}.$$
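Both arguments can be checked numerically: generate a random element of $$SU(2)$$ and confirm it has the claimed form. A Python/NumPy sketch (the QR-based sampling and phase rescaling are my own devices, not from the answers):

```python
import numpy as np

rng = np.random.default_rng(4)

# sample a random unitary via QR, then rescale so the determinant is 1
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Q, _ = np.linalg.qr(M)                 # Q is unitary
Q = Q / np.sqrt(np.linalg.det(Q))      # det(Q/s) = det(Q)/s^2 = 1; Q stays unitary

assert np.allclose(Q.conj().T @ Q, np.eye(2))
assert np.isclose(np.linalg.det(Q), 1.0)

# the claimed form: rows (z, -conj(w)) and (w, conj(z)) with |z|^2 + |w|^2 = 1
z, minus_wbar = Q[0]
w, zbar = Q[1]
assert np.isclose(zbar, np.conj(z))
assert np.isclose(minus_wbar, -np.conj(w))
assert np.isclose(abs(z) ** 2 + abs(w) ** 2, 1.0)
```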


## Math Genius: Is there a topic, branch, operation in math to systematically express an equation as a matrix?

I’m trying to find a topic of study that deals with expressing equations as matrix determinants. Let me explain.

Let’s say I have the equation:

x - 2y + 2z = 0

I can write the above equation in matrix form as:

$$\det \begin{bmatrix} x & 0 & 2 \\ z & 1 & 0 \\ y & 1 & 1 \end{bmatrix} = 0$$

Is there any way to create this matrix other than by trial and error? Is there a name for this operation? Maybe I’m cracking my head on something that Wolfram could answer if I typed the correct command. Or maybe someone has written code that does this and I just don’t know what to look for.

Specifically, I need to create 3×3 matrices where each variable is in a different row. I can also create a matrix and then manipulate it, e.g. by dividing a column by a variable.

Just to put it in context, I use this to create matrices and later use them as parametric equations to plot a nomogram. If I have an equation with 3 variables and create a matrix where each variable is in a separate row, I can create a nomogram with 3 axes, where I choose 2 variables, connect them with a straight line, and the place where the line crosses the third axis is the solution of the equation.

Obviously not every equation can be turned into such a matrix, but doing it by trial and error is extremely exhausting and I can never be sure I have exhausted the possibilities.
(Actually I’m trying to create a matrix for a specific equation and thought about posting it here, but since I’m not even sure this topic exists, I decided to ask this question first.)

Generalization

Yes.

A zero determinant expresses the idea of linear dependence; a non-zero determinant, independence.

For instance

$$D=\left| \begin{array}{ccc} x&0&2\\ y&1&0\\ z&1&1 \end{array} \right|=0$$

means that $$\vec X= \begin{pmatrix} x\\ y\\ z \end{pmatrix},\quad \vec A= \begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix},\quad \vec B= \begin{pmatrix} 2\\ 0\\ 1 \end{pmatrix}$$

are dependent (the family is not free).

So if you have $$x+ay+bz=0 \tag{1}$$

you can take the two vectors $$\vec{A}= \begin{pmatrix} a\\ -1\\ 0 \end{pmatrix}$$ and $$\vec{B}= \begin{pmatrix} b\\ 0\\ -1 \end{pmatrix}$$, whose coordinates satisfy equation $$(1)$$.

Vectors $$\vec A$$ and $$\vec B$$ are chosen to belong to the plane and to be independent (this is necessary). If $$\vec X$$ belongs to the plane, then the determinant of $$(\vec X,\vec A,\vec B)$$ must be zero, because the three vectors are linearly dependent (three vectors lying in the same 2-dimensional plane, a hyperplane of 3D space).

Expand the determinant; it will give you an equation of the same plane.

N-th dimension

You can generalize to hyperplanes in $$n$$ dimensions:

Given a hyperplane $$H$$ of $$\Bbb R^n$$ defined by the equation

$$x_1+a_2 x_2+a_3 x_3+\cdots+a_{n} x_{n}=0$$

Taking the vectors, for $$i = 2, \dots, n$$,

$$\vec{X_i}= \begin{pmatrix} a_i\\ 0\\ \vdots\\ -1\\ \vdots\\ 0 \end{pmatrix} \quad (\text{with } {-1} \text{ in the } i\text{-th row}),$$

you get:

$$\det(\vec X,\vec X_2,\dots,\vec X_n)=0,$$

which gives an equation of the same hyperplane.
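For the general plane $$x+ay+bz=0$$, the construction above can be verified numerically; a short NumPy sketch (the random coefficients and test point are my own choices):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = rng.standard_normal(2)       # plane: x + a*y + b*z = 0
x, y, z = rng.standard_normal(3)    # an arbitrary test point

# columns: X = (x, y, z), A = (a, -1, 0), B = (b, 0, -1)
M = np.array([[x,    a,    b],
              [y, -1.0,  0.0],
              [z,  0.0, -1.0]])

# expanding det(X, A, B) recovers the left-hand side of the plane equation
assert np.isclose(np.linalg.det(M), x + a * y + b * z)
```

So the determinant vanishes exactly when $$(x, y, z)$$ satisfies the plane equation, which is what the nomogram construction needs.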

If anything, you have to make this a very precise request, because you can always rewrite an equation $$P(\underline{x})=0$$ trivially as
$$\det\begin{pmatrix} P(\underline{x}) & 0 \\ 0 & 1 \end{pmatrix}=0$$
(not to mention the fact that “anything” is the determinant of a $$1\times 1$$ matrix whose only entry is that “anything”).
