Andrew Thangaraj
Aug-Nov 2020
\(V\): inner product space over \(F= \mathbb{R}\) or \(\mathbb{C}\)
An operator \(T:V\to V\) is said to be positive if \(T\) is self-adjoint and \[\langle Tv,v\rangle\ge 0,\quad v\in V\]
Necessary condition for being positive
\(\langle Tv,v\rangle\in\mathbb{R},\quad v\in V\)
\(F=\mathbb{C}\): above implies \(T\) is self-adjoint
\(F=\mathbb{R}\): \(T\) self-adjoint is not implied
\(A\): \(n\times n\) matrix, possibly with complex entries
\(A\) is positive if \(A=A^H\) and \(x^HAx\ge0\), \(x\in F^n\)
Notation: \(x^H\) denotes conjugate-transpose
Written as \(A\succeq0\)
\(A=\begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix}\): \(A\succeq0\) if and only if \(\lambda_1,\lambda_2\ge0\)
\(x^HAx=\begin{bmatrix}\overline{x_1}&\overline{x_2}\end{bmatrix}\begin{bmatrix}\lambda_1x_1\\\lambda_2x_2\end{bmatrix}=\lambda_1\lvert x_1\rvert^2+\lambda_2\lvert x_2\rvert^2\)
\(A=\begin{bmatrix}2&i\\-i&2\end{bmatrix}\): \(A\succeq0\) (exercise)
\(x^HAx=2\lvert x_1\rvert^2+i\overline{x_1}x_2-ix_1\overline{x_2}+2\lvert x_2\rvert^2\)
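A quick numerical check of this exercise (a sketch in numpy; `eigvalsh` applies since \(A\) is Hermitian):

```python
import numpy as np

A = np.array([[2, 1j], [-1j, 2]])

assert np.allclose(A, A.conj().T)      # A = A^H (Hermitian)
print(np.linalg.eigvalsh(A))           # [1. 3.] -> both >= 0, so A is psd
```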
\(A=\begin{bmatrix}1&0\\0&-1\end{bmatrix}\): not positive
In real spaces, there are some interesting examples.
\(A=\begin{bmatrix}1&1\\-1&1\end{bmatrix}\)
Not symmetric
\(x^TAx=x_1^2+x_2^2\ge0\)
\(x^TAx=x^T\dfrac{A+A^T}{2}x\)
So, w.r.t. positivity, it is sufficient to consider symmetric matrices.
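A small numerical sketch of this point, using the example above: the quadratic form cannot distinguish \(A\) from its symmetric part.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [-1.0, 1.0]])  # the non-symmetric example above
S = (A + A.T) / 2                        # symmetric part; here S = I

for _ in range(3):
    x = rng.standard_normal(2)
    # the antisymmetric part of A contributes nothing to x^T A x
    assert np.isclose(x @ A @ x, x @ S @ x)
```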
\(U\subseteq V\) subspace, \(P_U\): orthogonal projection onto \(U\)
\(P_U\) is positive.
Proof
Write \(v=u+w\) with \(u\in U\) and \(w\in U^{\perp}\); then \(P_Uv=u\)
\(\langle v,u\rangle=\langle u,u\rangle+\langle w,u\rangle=\langle u,u\rangle\ge0\)
So, \(\langle v,P_Uv\rangle=\langle P_Uv,P_Uv\rangle\ge0\)
Is \(P_U\) self-adjoint?
\(0=\langle P_Ux,y-P_Uy\rangle=\langle P_Ux,y\rangle - \langle P_Ux,P_Uy\rangle\) (since \(P_Ux\in U\) and \(y-P_Uy\in U^{\perp}\))
\(0=\langle x-P_Ux,P_Uy\rangle=\langle x,P_Uy\rangle - \langle P_Ux,P_Uy\rangle\) (similarly)
So, \(\langle P_Ux,y\rangle=\langle x,P_Uy\rangle\)
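A sketch verifying these properties for a concrete projection (the vector \(u\) below is a hypothetical choice; \(P_U=uu^T/u^Tu\) projects onto \(\operatorname{span}\{u\}\)):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
P = np.outer(u, u) / (u @ u)    # orthogonal projection onto span{u}

assert np.allclose(P @ P, P)    # idempotent
assert np.allclose(P, P.T)      # self-adjoint
print(np.linalg.eigvalsh(P))    # eigenvalues 0, 0, 1 -> P is psd
```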
\(R\) is said to be a square root of \(T\) if \(T=R^2\).
A square root \(R\) is said to be positive if \(R\succeq0\).
Examples
\(A=\begin{bmatrix}0&0&1\\ 0&0&0\\ 0&0&0\end{bmatrix}\), \(B=\begin{bmatrix}0&1&0\\ 0&0&1\\ 0&0&0\end{bmatrix}\), \(B^2=A\)
\(P=\begin{bmatrix}0&0\\ \alpha&1\end{bmatrix}\), \(P^2=P\) (is \(P\) an orthogonal projection?)
\(A=\begin{bmatrix}4&0\\ 0&9\end{bmatrix}\), \(B=\begin{bmatrix}2&0\\ 0&3\end{bmatrix}\), \(B\): positive square root of \(A\)
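These examples can be checked numerically (a sketch; `np.diag` builds the matrices above):

```python
import numpy as np

# the diagonal example: B is the positive square root of A
A = np.diag([4.0, 9.0])
B = np.diag([2.0, 3.0])
assert np.allclose(B @ B, A)

# the nilpotent example: a square root need not be symmetric or positive
A2 = np.zeros((3, 3)); A2[0, 2] = 1.0   # single 1 in the corner
B2 = np.diag([1.0, 1.0], k=1)           # ones on the superdiagonal
assert np.allclose(B2 @ B2, A2)
```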
\(T:V\to W\), \(T^*:W\to V\)
\(TT^*\) and \(T^*T\) are positive operators.
Proof
\(T^*T\) and \(TT^*\) are clearly self-adjoint
\(\langle T^*Tv,v\rangle=\langle Tv,Tv\rangle=\lVert Tv\rVert^2\ge0\), \(v\in V\)
\(\langle TT^*w,w\rangle=\langle T^*w,T^*w\rangle=\lVert T^*w\rVert^2\ge0\), \(w\in W\)
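A numerical sketch with a hypothetical random \(T\) (rectangular, so \(T^*T\) and \(TT^*\) have different sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
# T: random complex 4x3 matrix, so T^*T is 3x3 and TT^* is 4x4
T = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

for M in (T.conj().T @ T, T @ T.conj().T):
    assert np.allclose(M, M.conj().T)   # self-adjoint
    print(np.linalg.eigvalsh(M).min())  # >= 0 (tiny negatives are roundoff)
```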
The following are equivalent:
- \(T\) is positive
- \(T\) is self-adjoint with non-negative eigenvalues
- \(T\) has a positive square root
- \(T\) has a self-adjoint square root
- There is an operator \(R\) such that \(T=RR^*\)
Proof
(1) implies (2)
\(\lambda\): eigenvalue of \(T\) with eigenvector \(v\)
\(0\le\langle Tv,v\rangle=\langle\lambda v,v\rangle=\lambda\langle v,v\rangle\), so \(\lambda\ge0\) since \(\langle v,v\rangle>0\)
(2) implies (3), (4), (5)
\(\{e_1,\ldots,e_n\}\): orthonormal eigenvector basis for \(T\), by the spectral theorem (coordinates in standard basis)
\(T=\lambda_1e_1e_1^H+\cdots+\lambda_ne_ne_n^H\), \(\lambda_i\ge0\)
Let \(R=\sqrt{\lambda_1}e_1e_1^H+\cdots+\sqrt{\lambda_n}e_ne_n^H\)
Verify \(T=R^2\) and \(R=R^*\); also \(R\succeq0\) since its eigenvalues \(\sqrt{\lambda_i}\ge0\). This gives (3) and (4), and \(T=R^2=RR^*\) gives (5).
(5) implies (1)
\((RR^*)^*=RR^*\), and \(\langle RR^*v,v\rangle=\lVert R^*v\rVert^2\ge0\) as shown above
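A sketch of the construction in (2) \(\Rightarrow\) (3) for a hypothetical positive \(T\) (eigenvalues are clipped at 0 to guard against roundoff):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
T = M.T @ M                       # a positive matrix (T = M^T M)

lam, E = np.linalg.eigh(T)        # columns of E: orthonormal eigenvectors
lam = np.maximum(lam, 0.0)        # clip tiny negative roundoff
R = sum(l**0.5 * np.outer(e, e) for l, e in zip(lam, E.T))

assert np.allclose(R @ R, T)      # T = R^2
assert np.allclose(R, R.T)        # R = R^* (real case)
```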
Condition (with \(A=A^H\)) | Notation | Terminology |
---|---|---|
\(x^HAx>0\) for all \(x\ne0\) | \(A\succ0\) | positive definite (pd) |
\(x^HAx\ge0\) for all \(x\) | \(A\succeq0\) | positive semidefinite (psd) |
\(x^HAx<0\) for all \(x\ne0\) | \(A\prec0\) | negative definite (nd) |
\(x^HAx\le0\) for all \(x\) | \(A\preceq0\) | negative semidefinite (nsd) |
\(A\succ B\) if \(A-B\succ0\)
(and similarly for other relations)
Why only a partial order? There are matrices that are neither positive nor negative.
Example: \(\begin{bmatrix}1&0\\0&-1\end{bmatrix}\)
So, \(A\) and \(B\) may not be comparable using \(\succ\) or \(\prec\)
Positive: short for positive semidefinite
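A sketch that sorts a self-adjoint matrix into the table's categories by its eigenvalues (the `classify` helper and its tolerance are hypothetical choices); the example above comes out indefinite:

```python
import numpy as np

def classify(A, tol=1e-10):
    """Sort a self-adjoint A into pd / psd / nd / nsd / indefinite."""
    lam = np.linalg.eigvalsh(A)
    if lam.min() > tol:
        return "pd"
    if lam.min() > -tol:
        return "psd"
    if lam.max() < -tol:
        return "nd"
    if lam.max() < tol:
        return "nsd"
    return "indefinite"

print(classify(np.diag([1.0, -1.0])))  # indefinite: comparable to neither side of 0
```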