Andrew Thangaraj
Aug-Nov 2020
\(T:V\to V\) an operator, \(v\in V\), \(v\ne 0\), \(F=\mathbb{C}\) and dim \(V=n\)
\(T\) can be applied repeatedly to \(v\)
\(v\), \(Tv\), \(\ldots\), \(T^nv\): \(n+1\) vectors
Linearly dependent in \(V\), since dim \(V=n\)
\(a_0v+a_1Tv+\cdots+a_nT^nv=0\), \(a_i\in \mathbb{C}\), not all zero
Let \(m=\) max \(i\) s.t. \(a_i\ne 0\); \(m\ge 1\), since \(m=0\) would give \(a_0v=0\) with \(v\ne0\), forcing \(a_0=0\)
\((a_0+a_1T+\cdots+a_mT^m)v=0\), \(v\ne0\), \(a_m\ne 0\)
Let \(a_0+a_1x+\cdots+a_mx^m=a_m(x-\lambda_1)\cdots(x-\lambda_m)\) (factorization into linear factors over \(\mathbb{C}\))
Operator algebra: operators can be added and multiplied (composed)
\(a_0+a_1T+\cdots+a_mT^m=a_m(T-\lambda_1I)\cdots(T-\lambda_mI)\)
\(a_m(T-\lambda_1I)\cdots(T-\lambda_mI)v=0\), \(v\ne 0\)
Since \((T-\lambda_1I)\cdots(T-\lambda_mI)v=0\) and \(v\ne 0\), there exists at least one \(i\) s.t. \(T-\lambda_iI\) is non-invertible
Proof
Contradiction: if \(T-\lambda_iI\) is invertible for every \(i\), then the product is invertible and \(v=0\)
\(T-\lambda_iI\): non-invertible implies \(\lambda_i\) is an eigenvalue
This proves existence of one eigenvalue for an operator (\(F=\mathbb{C}\))
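As an illustration (a NumPy sketch, not part of the lecture): apply \(T\) repeatedly to a random \(v\), extract a dependence relation, factor the resulting polynomial, and check that some \(T-\lambda_iI\) is singular.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # operator on C^n
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # nonzero vector

# Columns: v, Tv, T^2 v, ..., T^n v -- n+1 vectors in an n-dimensional space
cols = [v]
for _ in range(n):
    cols.append(T @ cols[-1])
K = np.column_stack(cols)

# A nonzero a with K a = 0 gives a_0 v + a_1 T v + ... + a_n T^n v = 0
_, _, Vh = np.linalg.svd(K)
a = Vh[-1].conj()                                       # null vector of K (rank K <= n)

m = max(i for i in range(n + 1) if abs(a[i]) > 1e-10)   # largest i with a_i != 0
roots = np.roots(a[:m + 1][::-1])                       # lambda_1, ..., lambda_m

# At least one T - lambda_i I is non-invertible: its smallest singular value is ~ 0
smallest = [np.linalg.svd(T - lam * np.eye(n), compute_uv=False)[-1] for lam in roots]
print(min(smallest))   # ~ 0, so some lambda_i is an eigenvalue of T
```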
There are at most dim \(V\) distinct eigenvalues for an operator
Proof
Eigenvectors corresponding to distinct eigenvalues are linearly independent, and \(V\) has at most dim \(V\) linearly independent vectors
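A quick numerical check of the key fact (a NumPy sketch, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))       # a generic matrix: its eigenvalues are distinct
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

# Eigenvectors for the 5 distinct eigenvalues are linearly independent:
print(np.linalg.matrix_rank(eigvecs))   # 5 = dim V, so at most dim V of them
```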
Eigenvalues of diagonal and triangular matrices
Diagonal elements of a diagonal matrix are eigenvalues of the corresponding operator
Proof
\(T-\lambda I\): diagonal with a zero diagonal entry, hence non-invertible, if \(\lambda=\) a diagonal element
Diagonal elements of a triangular matrix are eigenvalues of the corresponding operator
Proof
\(T-\lambda I\): triangular with a zero diagonal entry, hence non-invertible, if \(\lambda=\) a diagonal element
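A short NumPy check (an assumed illustration, not from the lecture) that the eigenvalues of an upper-triangular matrix are its diagonal entries:

```python
import numpy as np

A = np.triu(np.arange(1.0, 17.0).reshape(4, 4))   # upper-triangular 4x4 matrix
print(np.sort(np.linalg.eigvals(A)))              # [ 1.  6. 11. 16.]
print(np.sort(np.diag(A)))                        # same values: the diagonal entries
```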
Diagonalization
If \(T\) has \(n=\) dim \(V\) distinct eigenvalues, the eigenvectors form a basis for \(V\)
Proof
Eigenvectors corresponding to distinct eigenvalues are linearly independent
\(n\) linearly independent eigenvectors form a basis
In the above basis of eigenvectors, the matrix of \(T\) is diagonal
Proof
\(v_i\): eigenvector, \(Tv_i=\lambda_i v_i\), \(i=1,\ldots,n\)
\(\{v_1,\ldots,v_n\}\): basis
\(Tv_i=\lambda_iv_i\): its coordinate vector in this basis has \(\lambda_i\) in the \(i\)-th coordinate and zeros elsewhere
Colab notebook for eigenvalues and eigenvectors
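A minimal sketch of the kind of computation such a notebook might contain (assumed NumPy code, not the actual notebook): with \(n\) distinct eigenvalues, the eigenvector basis diagonalizes the matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # matrix of T, F = C
eigvals, P = np.linalg.eig(A)       # columns of P: eigenvectors v_1, ..., v_n

D = np.linalg.inv(P) @ A @ P        # matrix of T in the eigenvector basis
print(np.round(D, 6))               # diagonal, with the eigenvalues on the diagonal
print(np.round(eigvals, 6))
```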
Examples: Repeated Eigenvalues
\(\begin{bmatrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}\)
Eigenvalues: \(1, 1, 1\)
Eigenvectors: \((1,0,0)\), \((0,1,0)\), \((0,0,1)\)
\(\begin{bmatrix} 1&2&0\\ 0&1&0\\ 0&0&1 \end{bmatrix}\)
Eigenvalues: \(1, 1, 1\)
Eigenvectors: \((1,0,0)\), \((0,0,1)\)
\(\begin{bmatrix} 1&2&0\\ 0&1&3\\ 0&0&1 \end{bmatrix}\)
Eigenvalues: \(1, 1, 1\)
Eigenvectors: \((1,0,0)\)
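A NumPy check of the three examples (an illustration, not from the lecture): all eigenvalues equal \(1\), but the eigenspace null\((A-I)\) has dimension 3, 2, 1 respectively.

```python
import numpy as np

examples = [
    np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float),
    np.array([[1, 2, 0], [0, 1, 0], [0, 0, 1]], dtype=float),
    np.array([[1, 2, 0], [0, 1, 3], [0, 0, 1]], dtype=float),
]
for A in examples:
    n = A.shape[0]
    eigenspace_dim = n - np.linalg.matrix_rank(A - np.eye(n))   # dim null(A - I)
    print(np.linalg.eigvals(A), eigenspace_dim)                 # eigenvalues 1,1,1; dims 3, 2, 1
```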
Towards “simple” matrices
Eigenvalues are useful in obtaining simple matrices
Basis of eigenvectors results in a diagonal matrix
When eigenvalues are repeated, there may not be enough eigenvectors
What can be said, in general, about the “simplest” matrix for an operator?