Special Matrices
Antisymmetric
see skew-symmetric.
A is upper bidiagonal if a(i,j)=0 unless i=j
or i=j-1.
A is lower bidiagonal if a(i,j)=0 unless i=j or
i=j+1.
A bidiagonal matrix is also tridiagonal, triangular and Hessenberg.
A[n#n] is bisymmetric if it is
symmetric about both main diagonals, i.e. if
A=AT=JAJ where J is the exchange matrix.
WARNING: The term persymmetric is
sometimes used instead of bisymmetric. Also bisymmetric is sometimes used to
mean centrosymmetric and sometimes to mean
symmetric and perskewsymmetric.
- A bisymmetric matrix is symmetric, persymmetric and centrosymmetric. Any two of these four
properties imply the other two.
- More generally, symmetry, persymmetry and centrosymmetry can each come in
four flavours: symmetric, skew-symmetric, hermitian and skew-hermitian. Any
pair of symmetries implies the third and the total number of skew and hermitian
flavourings will be even. For example, if A is skew-hermitian and
perskew-symmetric, then it will also be centrohermitian.
- If A[2m#2m] is bisymmetric
- A=[S PT; P JSJ] for
some symmetric S[m#m] and persymmetric
P[m#m].
- A is orthogonally
similar to [S-JP 0; 0 S+JP]
- A has a set of 2m orthonormal eigenvectors consisting of
m skew-symmetric vectors of the form [u; -Ju]/k and
m symmetric vectors of the form [v; Jv]/k where
u and v are eigenvectors of S-JP and
S+JP respectively and k=sqrt(2).
- If A has distinct eigenvalues and rank(P)=1 then if the
eigenvalues are arranged in descending order, the corresponding eigenvectors
will be alternately symmetric and skew-symmetric with the first one being
symmetric or skew-symmetric according to whether the non-zero eigenvalue of
P is positive or negative.
- If A[2m+1#2m+1] is bisymmetric
- A=[S x PT;
xT y xTJ; P
Jx JSJ] for some symmetric S[m#m] and
persymmetric P[m#m].
- A is orthogonally
similar to [S-JP 0; 0 y
kxT; 0 kx S+JP]
where k=sqrt(2).
- A has a set of 2m+1 orthonormal eigenvectors consisting of
m skew-symmetric vectors of the form [u; 0; -Ju]/k
and m+1 symmetric vectors of the form [v; kw;
Jv]/k where u and [v; w] are
eigenvectors of S-JP and [S+JP kx;
kxT y] respectively and
k=sqrt(2).
- If A has distinct eigenvalues and P=0 then if the
eigenvalues are arranged in descending order, the corresponding eigenvectors
will be alternately symmetric and skew-symmetric with the first one being
symmetric.
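The even-order decomposition above is easy to check numerically. A NumPy sketch, not part of the manual itself; the size m=3 and the random construction of S and P are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
J = np.fliplr(np.eye(m))                          # exchange matrix
B, Q = rng.standard_normal((m, m)), rng.standard_normal((m, m))
S = B + B.T                                       # symmetric S
P = (Q + J @ Q.T @ J) / 2                         # persymmetric P (P = J P^T J)

A = np.block([[S, P.T], [P, J @ S @ J]])          # bisymmetric by construction
J2 = np.fliplr(np.eye(2 * m))
assert np.allclose(A, A.T) and np.allclose(A, J2 @ A @ J2)

# eig(A) should be eig(S-JP) together with eig(S+JP)
lhs = np.sort(np.linalg.eigvalsh(A))
rhs = np.sort(np.r_[np.linalg.eigvalsh(S - J @ P), np.linalg.eigvalsh(S + J @ P)])
print(np.allclose(lhs, rhs))                      # True
```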
A is block diagonal if it has the form [A 0 ...
0; 0 B ... 0;...;0 0 ... Z] where A,
B, ..., Z are matrices (not necessarily square).
- A matrix is block diagonal iff it is the direct sum of two or more smaller
matrices.
A[m#n] is centrohermitian if it is
rotationally hermitian symmetric about its centre, i.e. if
AT=JAHJ where J is the
exchange matrix.
- Centrohermitian matrices are closed under addition, multiplication and (if
non-singular) inversion.
A[m#n] is centrosymmetric (also
called perplectic) if it is rotationally symmetric about its centre,
i.e. if A=JAJ where J is the exchange matrix. It is centrohermitian if
AT=JAHJ and
centroskew-symmetric if A= -JAJ.
- Centrosymmetric matrices are closed under addition, multiplication
and (if non-singular) inversion.
A circulant matrix, A[n#n], is a
Toeplitz matrix in which ai,j is a
function of {(i-j) modulo n}. In other words each column of
A is equal to the previous column rotated downwards by one element.
WARNING: The term circular is sometimes used
instead of circulant.
- Circulant matrices are closed under addition, multiplication and
(if non-singular) inversion.
- A circulant matrix, A[n#n] , may be
expressed uniquely as a polynomial in C, the cyclic permutation matrix, as A =
Sumi=0:n-1{ ai,1
Ci} = Sumi=0:n-1{
a1,i
C-i}
- All circulant matrices have the same eigenvectors. If
A[n#n] is a circulant matrix, the normalized
eigenvectors of A are the columns of n-½
F., the discrete Fourier Transform matrix. The
corresponding eigenvalues are the discrete fourier transform of the first row
of A given by
FATe1 =
(FACe1)C
= nF-1Ae1 where
e1 is the first column of I.
- F-1AF = n-1
FHAF=DIAG(FATe1)
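A NumPy sketch of the eigenvector property (n=5 and the test circulant are arbitrary choices, not from the manual):

```python
import numpy as np

n = 5
a = np.arange(1.0, n + 1)                                   # first column of A
A = np.array([np.roll(a, k) for k in range(n)]).T           # circulant: each column is the
                                                            # previous one rotated down
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)

lam = F @ A[0, :]                                           # F A^T e_1: DFT of first row
print(np.allclose(np.linalg.inv(F) @ A @ F, np.diag(lam)))  # F^-1 A F = DIAG(F A^T e_1)
print(np.allclose(lam, np.fft.fft(A[0, :])))                # same sign convention as np.fft
```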
A Circular matrix, A[n#n], is one
for which AAC = I.
WARNING: The term circular is sometimes used for a circulant matrix.
- A matrix A is circular iff A=exp (j B) where
j = sqrt(-1), B is real and exp() is the matrix exponential
function.
- If A = B + jC where B and C are
real and j = sqrt(-1) then A is circular iff BC=CB
and also BB + CC = I.
If p(x) is a polynomial of the form a(0) + a(1)*x +
a(2)*x2 + ... +
a(n)*xn then the polynomial's companion
matrix is n#n and equals [0 I; -a(0:n-1)/a(n)] where
I is n-1#n-1. For n=1, the companion matrix is
[-a(0)/a(1)].
The rows and columns are sometimes given in reverse order
[-a(n-1:0)/a(n) ; I 0].
- The characteristic and minimal polynomials of a companion matrix both equal
p(x).
- The eigenvalues of a companion matrix equal the roots of p(x).
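For example, a sketch in NumPy (the cubic p(x) = 6 - 5x - 2x2 + x3 is an arbitrary test case):

```python
import numpy as np

a = np.array([6.0, -5.0, -2.0, 1.0])       # coefficients a(0)..a(n) of p(x)
n = len(a) - 1
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)                 # the [0 I] block
C[-1, :] = -a[:n] / a[n]                   # bottom row -a(0:n-1)/a(n)

print(np.sort(np.linalg.eigvals(C).real))  # [-2.  1.  3.]
print(np.sort(np.roots(a[::-1]).real))     # same roots (np.roots wants descending order)
```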
A matrix is complex if it has complex elements.
Complex to Real Isomorphism
We can associate a complex matrix C[m#n]
with a corresponding real matrix R[2m#2n]
by replacing each complex element, z, of C by a 2#2 real matrix
[zR -zI; zI
zR]=|z|×[cos(t) -sin(t);
sin(t) cos(t)] where t=arg(z). We will write
C <=> R for this mapping below.
- This mapping preserves the operations +,-,*,/ and, for square matrices,
inversion. It does not however preserve • (Hadamard) or ⊗ (Kronecker) products.
- If C <=> R
- R = C ⊗ [()R -()I;
()I ()R] where the operators
()R and ()I take the real and imaginary
parts respectively.
- C = (I[m#m] ⊗ [1 j])
R (I[n#n] ⊗ [1; 0]) = (I
⊗ [1 0]) R (I ⊗ [1; -j])=½(I
⊗ [1 j]) R (I ⊗ [1; -j]) where
j=sqrt(-1).
- CR = (I[m#m]
⊗ [1 0]) R (I[n#n] ⊗ [1;
0]) = (I[m#m] ⊗ [0 1]) R
(I[n#n] ⊗ [0; 1])
- det(R)=|det(C)|2
- tr(R)=2 tr(C)
- R is orthogonal iff C is unitary.
- R is symmetric iff C is hermitian.
- R is positive definite symmetric iff C is positive definite
hermitian.
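A sketch of the mapping in NumPy; the helper name c2r and the test sizes are assumptions for illustration:

```python
import numpy as np

def c2r(C):
    """Map each element z of C to the 2x2 block [Re(z) -Im(z); Im(z) Re(z)]."""
    m, n = C.shape
    blk = lambda z: np.array([[z.real, -z.imag], [z.imag, z.real]])
    return np.block([[blk(C[i, j]) for j in range(n)] for i in range(m)])

rng = np.random.default_rng(1)
C = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
D = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

print(np.allclose(c2r(C @ D), c2r(C) @ c2r(D)))     # multiplication is preserved
print(np.isclose(np.linalg.det(c2r(C @ D)),
                 abs(np.linalg.det(C @ D)) ** 2))   # det(R) = |det(C)|^2
```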
Vector mapping: Under the isomorphism a complex vector maps to a real
matrix: z[n] <=>
Y[2n#2]. We can also define a simpler mapping,
<->, from a vector to a vector as z[n]
<-> x[2n] = z ⊗
[()R; ()I] = Y [1; 0]
In the results below, we assume z[n] <->
x[2n] , w[n] <->
u[2n] and C <=> R:
- If wHCz is known to be real, then
wHCz = uTRx
- If C is hermitian, then,
zHCz =
xTRx
- zHz = xTx
To relate the matrix and vector mappings, <-> and <=>, we
define the following two block-diagonal matrices: E =
I[n#n] ⊗ [0 1; 1 0] and N =
I[n#n] ⊗ [1 0; 0 -1]. We now have the
following properties (assuming z[n] <->
x[2n] and C <=>
R):
- E2=N2=I
- ET=E,
NT=N
- EN=-NE
- ENEN=NENE=-I
-
xTENx=xTNEx=0
- ENRNE = NEREN = R
- ENREN = NERNE = -R
- CH <=> RT
- CT <=>
NRTN
- CC <=> NRN
- jC <=> ENR = REN = -NER = -RNE
- z <-> x and z <=> [x ENx]
- zC <-> Nx and
zC<=> [Nx Ex]
- zH <-> xT and
zH <=> YT =
[xT; xTNE]
- zT <-> xTN and zT
<=> [xTN;
xTE]
A matrix A is convergent if Ak tends to
0 as k tends to infinity.
- A is convergent iff all its eigenvalues have modulus < 1.
- A is convergent iff there exists a positive definite X such that
X-AHXA is positive definite (Stein's theorem)
- If Sk is defined as
I+A+A2+ … +Ak,
then A is convergent iff Sk converges as
k tends to infinity. If it does converge its limit is
(I-A)-1.
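A sketch of the last property (the 4#4 size and the scaling to spectral radius 0.9 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))      # force spectral radius 0.9 < 1

S, term = np.eye(4), np.eye(4)
for _ in range(500):                           # S_k = I + A + A^2 + ... + A^k
    term = term @ A
    S += term

print(np.allclose(S, np.linalg.inv(np.eye(4) - A)))   # limit is (I-A)^-1
```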
See also: Stability
The n#n cyclic permutation matrix (or cyclic shift
matrix), C, is equal to
[0n-1T 1;
In-1#n-1
0n-1]. Its elements are given by
ci,j = δi,1+(j mod
n) where δi,j is the
Kronecker delta.
- C is a toeplitz, circulant, permutation matrix.
- Cx is the same as x but with the last
element moved to the top and all other elements shifted down by one
position.
- C-1 = CT =
Cn-1
- Cn = I
A matrix, A, is fully decomposable (or reducible) if there exists a permutation matrix P such
that PTAP is of the form [B C; 0
D] where B and D are square.
A matrix, A, is partly-decomposable if there exist
permutation matrices P and Q such that
PTAQ is of the form [B C; 0 D]
where B and D are square.
A matrix that is not even partly-decomposable is
fully-indecomposable.
A matrix, X:n#n, is defective if it does not have
n linearly independent eigenvectors, otherwise it is simple.
An n*n square matrix is derogatory if its minimal
polynomial is of lower order than n.
A is diagonal if a(i,j)=0 unless i=j.
- Diagonal matrices are closed under addition, multiplication and (where
possible) inversion.
- The determinant of a square diagonal matrix is the product of its diagonal
elements.
- If D is diagonal, DA multiplies each row of A by a
constant while BD multiplies each column of B by a constant.
- If D is diagonal then XDXT =
sumi(di ×
xixiT) and
XDXH = sumi(di
×
xixiH)
[1.15]
- If D is diagonal then tr(XDXT) =
sumi(di ×
xiTxi) and
tr(XDXH) = sumi(di
×
xiHxi) =
sumi(di ×
|xi|2) [1.16]
- If D is diagonal then AD = DA iff
ai,j=0 whenever di,i !=
dj,j. [1.12]
- If D = DIAG(c1I1,
c2I2, ...,
cMIM) where the
ck are distinct scalars and the
Ik are identity matrices, then AD = DA
iff A = DIAG(A1, A2, ...,
AM) where each Ak is the same
size as the corresponding Ik. [1.13]
The functions DIAG(x) and diag(X) respectively
convert a vector into a diagonal matrix and the diagonal of a square matrix into a
vector. The function sum(X) sums the rows of X to produce a vector. In the expression below, • denotes elementwise
(a.k.a. Hadamard) multiplication.
- diag(DIAG(x)) = x
- xT(diag(Y)) =
tr(DIAG(x)Y)
- DIAG(x) DIAG(y) =
DIAG(x •
y)
- diag(XY) = sum(X •
YT)
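These identities translate directly into NumPy, where np.diag plays both the DIAG and diag roles (a quick check with arbitrary 4#4 test data):

```python
import numpy as np

rng = np.random.default_rng(3)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
x, y = rng.standard_normal(4), rng.standard_normal(4)

print(np.allclose(np.diag(X @ Y), np.sum(X * Y.T, axis=1)))   # diag(XY) = sum(X . Y^T)
print(np.isclose(x @ np.diag(Y), np.trace(np.diag(x) @ Y)))   # x^T diag(Y) = tr(DIAG(x) Y)
print(np.allclose(np.diag(x) @ np.diag(y), np.diag(x * y)))   # DIAG(x)DIAG(y) = DIAG(x.y)
```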
Diagonalizable or Diagonable or
Simple or Non-Defective
A matrix, X, is diagonalizable (or, equivalently, simple or diagonable or non-defective)
if it is similar to a diagonal matrix otherwise it is defective.
- If X is diagonalizable, it may be
written X=EDE-1 where D
is a diagonal matrix of eigenvalues and the columns of E are
the corresponding eigenvectors.
- [X, Y diagonalizable]:
The diagonalizable matrices, X and Y, commute, i.e. XY=YX, iff they
can be decomposed as X=EDE-1 and
Y=EGE-1 where D and G are
diagonal and the columns of E form a common set of eigenvectors.
- The following are equivalent:
- X is diagonalizable
- The Jordan form of X is diagonal.
- For each eigenvalue of X, the geometric and algebraic multiplicities are equal.
- X has
n linearly independent eigenvectors.
A square matrix An#n is diagonally
dominant if the absolute value of each diagonal element is greater than
the sum of absolute values of the non-diagonal elements in its row. That is if
for each i we have |ai,i| > sumj
!= i(|ai,j|) or equivalently
abs(diag(A)) > ½ABS(A)
1n#1.
- [Real]: If the diagonal elements of a square
matrix A are all >0 and if A and AT
are both diagonally dominant then A is positive definite.
- If A is diagonally dominant and irreducible
then
- A is non singular
- If diag(A) > 0 then all eigenvalues of A have strictly positive
real parts.
The discrete fourier transform matrix,
F[n#n], has
fp,q = exp(-2jπ(p-1)
(q-1) n-1).
- Fx is the discrete fourier transform (DFT) of x.
- F is a symmetric, Vandermonde matrix.
- F-1 =
n-1FH=n-1FC
- If y = Fx then yHy = n
xHx. This is Parseval's theorem.
- |det(F)| = n½n.
- tr(F) = n½, 0, -jn½ or (1-j)n½ according to whether n mod 4 = 1, 2, 3 or 0 (a quadratic Gauss sum).
- FC =
DIAG(Fe2) F where
C is the cyclic permutation matrix
and e2 is the second column of
I.
- If A[n#n] is a circulant matrix, the normalized eigenvectors of A are
the columns of n-½ F. The corresponding
eigenvalues are the discrete fourier transform of the first row of A
given by FATe1
=
(FACe1)C
= nF-1Ae1
where e1 is the first column of
I.
- [n=2k]:
F[n#n] = GP where:
- P is a symmetric permutation matrix with P =
prodr=1:k(Ek-r
⊗ [ Er-1 ⊗ [1 0] ;
Er-1 ⊗ [0 1] ] ) where
Es is a 2s#2s
identity matrix and ⊗ denotes the Kronecker product. If x=0:n-1 then
Px consists of the same numbers but arranged in bit-reversed order (e.g.
for n=8, Px = [0; 4; 2; 6; 1; 5; 3; 7] ).
- G = prodr=1:k(Er-1
⊗ [ [1 1] ⊗ Ek-r ; [1 -1]
⊗ Wk-r ]T) where the diagonal
"twiddle factor" matrix is Ws =
DIAG(exp(-2-s j pi
(0:2s-1))).
- Calculation of Fx as GPx is the "decimation-in-time" FFT
(Fast Fourier Transform) while Fx =
FTx =
PGTx is the "decimation-in-frequency"
FFT. In each case only O(n log2n) non-trivial
arithmetic operations are required because most of the non-zero elements of the
factors of G are equal to ±1.
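A sketch following these formulas for n=8, not from the manual: the products are taken with the r=1 factor leftmost (matching the usual decimation-in-time ordering), and the lambdas E and W are assumed helpers for Es and Ws:

```python
import numpy as np

k = 3
n = 2 ** k
E = lambda s: np.eye(2 ** s)                   # E_s: the 2^s # 2^s identity
W = lambda s: np.diag(np.exp(-2.0 ** -s * 1j * np.pi * np.arange(2 ** s)))

P, G = np.eye(n), np.eye(n)
for r in range(1, k + 1):
    P = P @ np.kron(E(k - r), np.vstack([np.kron(E(r - 1), [[1, 0]]),
                                         np.kron(E(r - 1), [[0, 1]])]))
    G = G @ np.kron(E(r - 1), np.vstack([np.kron([[1, 1]], E(k - r)),
                                         np.kron([[1, -1]], W(k - r))]).T)

F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
print(np.allclose(G @ P, F))                   # F = GP
print((P @ np.arange(n)).astype(int))          # [0 4 2 6 1 5 3 7] (bit-reversed)
```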
A real non-negative square matrix A is doubly-stochastic
if its rows and columns all sum to 1.
See under stochastic for properties.
An essential matrix, E, is the product E=US of a
3#3 orthogonal matrix, U, and a 3#3 skew-symmetric matrix, S = SKEW(s). In 3-D euclidean space, a translation+rotation
transformation is associated with an essential matrix.
- If E=U SKEW(s) is an
essential matrix then
- E=SKEW(Us) U
- ETE =
(sTs) I -
ssT
- EET =
(sTs) I -
UssTU
- tr(ETE) =
tr(EET) =
2sTs
- If E is an essential matrix then so are ET,
kE and WEV where k is a non-zero scalar and
W and V are orthogonal.
- E is an essential matrix iff rank(E)=2 and
EETE =
½tr(EET)E. This defines a set of nine
homogeneous cubic equations.
- E is an essential matrix iff its singular
values are k, k and 0 for some k>0.
- If the singular value decomposition of
E is E = Q DIAG([k; k; 0])
RT, then we can write E = US where
U=Q [0 1 0; -1 0 0; 0 0
1] RT and S = R [0 -k 0;
k 0 0; 0 0 0] RT = SKEW(R [0; 0; k]).
- If E is an essential matrix then A = kE for
some k iff Ex × Ax = 0 for all x
where × denotes the vector cross product.
The exchange matrix J[n#n] is equal to
[en en-1 …
e2 e1] where
ei is the ith column
of I. It is equal to I but with the columns in reverse
order.
- J is Hankel, Orthogonal, Symmetric, Permutation, Doubly
Stochastic.
- J2 = I
- JAT, JAJ and
ATJ are versions of the matrix A that
have been rotated anti-clockwise by 90, 180 and 270 degrees
- JA, JATJ, AJ and
AT are versions of the matrix A that have
been reflected in lines at 0, 45, 90 and 135 degrees to the horizontal measured
anti-clockwise.
- det(Jn#n) =
(-1)n(n-1)/2 i.e. it equals +1 if n mod 4 equals 0 or
1 and -1 if n mod 4 equals 2 or 3
[Real]: A Givens Reflection
is an n#n matrix of the form PT[Q 0 ; 0
I]P where P is any permutation matrix and
Q is a matrix of the form [cos(x) sin(x); sin(x)
-cos(x)].
- A Givens reflection is symmetric and orthogonal.
- The determinant of a Givens reflection = -1.
- [2*2]: A 2#2 matrix is a Givens reflection iff it is a Householder matrix.
[Real]: A Givens Rotation is
an n#n matrix of the form PT[Q 0 ; 0 I]P
where P is a permutation matrix and Q
is a matrix of the form [cos(x) sin(x); -sin(x)
cos(x)].
An n*n Hadamard matrix has orthogonal columns whose
elements are all equal to +1 or -1.
- Hadamard matrices exist only for n=1, n=2 or n a multiple of
4.
- If A is an n*n Hadamard matrix then
ATA = n*I. Thus
A/sqrt(n) is orthogonal.
- If A is an n*n Hadamard matrix then |det(A)| =
nn/2.
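For example, the Sylvester construction H -> [H H; H -H] (one standard way of generating Hadamard matrices of order 2k, not described above) gives a quick NumPy check:

```python
import numpy as np

H = np.array([[1.0]])
for _ in range(3):                       # orders 2, 4, 8
    H = np.block([[H, H], [H, -H]])

n = H.shape[0]
print(np.allclose(H.T @ H, n * np.eye(n)))               # A^T A = nI
print(np.isclose(abs(np.linalg.det(H)), n ** (n / 2)))   # |det(A)| = n^(n/2)
```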
A real 2n*2n matrix, A, is Hamiltonian if
KA is symmetric where K = [0 I; -I
0].
See also: symplectic
A Hankel matrix has constant anti-diagonals. In other words
a(i,j) depends only on (i+j).
- A Hankel matrix is symmetric.
- [A: Hankel] If J is the exchange matrix, then JAJ is Hankel; JA and
AJ are Toeplitz.
- [A, B: Hankel] A+B and A-B
are Hankel.
A square matrix A is Hermitian if A =
AH, that is
A(i,j)=conj(A(j,i)).
For real matrices, Hermitian and symmetric
are equivalent. Except where stated, the following properties apply to real
symmetric matrices as well.
- [Complex]: A is Hermitian iff
xHAx is real for all (complex) x.
- The following are equivalent
- A is Hermitian and +ve
semidefinite
- A=BHB for some B
- A=C2 for some Hermitian C.
- Any matrix A has a unique decomposition A = B +
jC where B and C are Hermitian: B =
(A+AH)/2 and
C=(A-AH)/2j
- Hermitian matrices are closed under addition, multiplication by a scalar,
raising to an integer power, and (if non-singular) inversion.
- Hermitian matrices are normal with real eigenvalues,
that is A = UDUH for some unitary U
and real diagonal D.
- A is Hermitian iff
xHAy=xHAHy
for all x and y.
- If A and B are hermitian then so are AB+BA and
j(AB-BA) where j =sqrt(-1).
- For any complex a with |a|=1, there is a 1-to-1
correspondence between the unitary matrices, U,
not having a as an eigenvalue and hermitian matrices, H, given by
U=a(jH-I)(jH+I)-1
and
H=j(U+aI)(U-aI)-1 where
j =sqrt(-1). These are Cayley's formulae.
- Taking a=-1 gives
U=(I-jH)(I+jH)-1=(I+jH)-1(I-jH)
and
H=j(U-I)(U+I)-1=j(U+I)-1(U-I).
See also: Definiteness, Loewner partial order
A Hessenberg matrix is like a triangular matrix except that the
elements adjacent to the main diagonal can be non-zero.
A is upper Hessenberg if A(i,j)=0 whenever i>j+1.
It is like an upper triangular matrix except for the
elements immediately below the main diagonal.
A is lower Hessenberg if a(i,j)=0 whenever
i<j-1. It is like a lower triangular matrix
except for the elements immediately above the main diagonal.
A Hilbert matrix is a square Hankel matrix with elements
a(i,j)=1/(i+j-1).
If we define an equivalence relation in
which X ~ Y iff X = cY for some non-zero
scalar c, then the equivalence classes are called homogeneous
matrices and homogeneous vectors.
- Multiplication: If X ~ A and Y ~ B, then
XY ~ AB
- Addition: If X ~ A and Y ~ B then it is
not generally true that X+Y ~ A+B
- The projective space
RPn, consists of all non-zero homogeneous vectors from
Rn+1.
A Householder matrix (also called Householder
reflection or transformation) is a matrix of the form
(I-2vvH) for some vector v with
||v||=1.
Multiplying a vector by a Householder transformation reflects it in the
hyperplane that is orthogonal to v.
Householder matrices are important because they can be chosen to annihilate
any contiguous block of elements in any chosen vector.
- A Householder matrix is symmetric and orthogonal.
- Given a vector x, we can choose a Householder matrix P such
that Px=[-k 0 0 ... 0]H where
k=sgn(x(1))*||x||. To do so, we choose v =
(x + ke1)/||x +
ke1|| where e1 is the first column
of the identity matrix. The first row of P equals
-k-1xT and the remaining rows form
an orthonormal basis for the null space
of xT.
- [2*2]: A 2*2 matrix is Householder iff it is a Givens Reflection.
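A sketch of the annihilation property above (the test vector x is arbitrary):

```python
import numpy as np

x = np.array([3.0, 1.0, -2.0, 4.0])
k = np.sign(x[0]) * np.linalg.norm(x)          # k = sgn(x(1)) ||x||
e1 = np.eye(len(x))[:, 0]
v = (x + k * e1) / np.linalg.norm(x + k * e1)
P = np.eye(len(x)) - 2 * np.outer(v, v)        # Householder matrix I - 2vv^T

print(np.allclose(P @ x, np.r_[-k, np.zeros(len(x) - 1)]))      # Px = [-k 0 ... 0]^T
print(np.allclose(P, P.T), np.allclose(P @ P, np.eye(len(x))))  # symmetric, orthogonal
```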
The hypercompanion matrix of the polynomial
p(x)=(x-a)n is an n#n
upper bidiagonal matrix, H, that is zero
except for the value a along the main diagonal and the value 1 on the
diagonal immediately above it. That is, hi,j = a if
j=i, 1 if j=i+1 and 0 otherwise.
If the real polynomial
p(x)=(x2-ax-b)n with
a2+4b<0 (i.e. the quadratic term has no real
factors) then its Real hypercompanion matrix is a 2n#2n
tridiagonal matrix that is zero except for a
at even positions along the main diagonal, b at odd positions along the
sub-diagonal and 1 at all positions along the super-diagonal. Thus for odd
i, hi,j = 1 if j=i+1 and 0 otherwise while
for even i, hi,j = 1 if j=i+1, a if
j=i and b if j=i-1.
A matrix P is idempotent if P2 =
P. An idempotent matrix that is also hermitian
is called a projection matrix.
WARNING: Some people call any idempotent matrix a projection matrix
and call it an orthogonal projection matrix if it is also hermitian.
- The following conditions are equivalent
- P is idempotent
- P is similar to a diagonal matrix each of whose diagonal elements equals 0 or
1.
- 2P-I is involutary.
- If P is idempotent, then:
- rank(P)=tr(P).
- The eigenvalues of P are all either 0 or 1. The geometric multiplicity of the eigenvalue 1 is
rank(P).
- PH, I-P and I-PH
are all idempotent.
- P(I-P) = (I-P)P = 0.
- Px=x iff x lies in the range of P.
- The null space of P equals the range of I-P. In other words
Px=0 iff x lies in the range of I-P.
- P is its own generalized
inverse, P#.
- [A: n#n, F,G: n#r] If
A=FGH where F and G are of full
rank, then A is idempotent iff GHF =
I.
The identity matrix, I, has a(i,i)=1 for all i and
a(i,j)=0 for all i != j.
A non-negative matrix T is impotent if
min(diag(Tn)) = 0 for all integers n>0 [see
potency].
An incidence matrix is one whose elements all equal 1 or 0.
An Integral matrix is one whose elements are all integers.
Involutary (also written
Involutory)
An Involutary matrix is one whose square equals the identity.
- A is involutary iff ½(A+I) is idempotent.
- A[2#2] is involutary iff A = +-I or else
A = [a b; (1-a2)/b
-a] for some real or complex a and b.
Irreducible
see under Reducible
Jacobi
see under Tridiagonal
A matrix, A, is monotone iff A-1
is non-negative, i.e. all its entries are >=0.
In computer science a matrix is monotone if its entries are monotonically
non-decreasing as you move away from the main diagonal along either a row or
column.
A matrix A is nilpotent to index k if
Ak = 0 but Ak-1 !=
0.
- The determinant of a nilpotent matrix is 0.
- The eigenvalues of a nilpotent matrix
are all 0.
- If A is nilpotent to index k, its minimal polynomial is
tk.
Non-negative
see under positive
A square matrix A is normal if
AHA = AAH
- An#n is normal iff it is unitarily similar to a diagonal matrix, i.e.
A = UDUH for some unitary U and diagonal D.
- The following types of matrix are normal: diagonal,
hermitian, skew-hermitian and unitary.
- A normal matrix is hermitian iff its eigenvalues
are all real.
- A normal matrix is skew-hermitian iff its
eigenvalues all have zero real parts.
- A normal matrix is unitary iff its eigenvalues all
have an absolute value of 1.
- For any Xm#n,
XHX and XXH are
normal.
- The singular values of a normal matrix are the absolute values of the
eigenvalues.
- [A: normal] The eigenvalues of
AH are the conjugates of the eigenvalues of A
and have the same eigenvectors.
- Normal matrices are closed under raising to an integer power and (if
non-singular) inversion.
- If A and B are normal and AB=BA then AB
is normal.
A real square matrix Q is orthogonal if QTQ =
I. It is a proper orthogonal matrix if det(Q)=1 and an
improper orthogonal matrix if det(Q)=-1.
For real matrices, orthogonal and unitary mean
the same thing. Most properties are listed under unitary.
Geometrically: Orthogonal matrices in 2 and 3 dimensions correspond to
rotations and reflections.
- The determinant of an orthogonal matrix equals +-1 according to whether it
is proper or improper.
- Q is a proper orthogonal matrix iff Q = exp(K) or
K=ln(Q) for some real skew-symmetric K.
- A 2#2 orthogonal matrix is either a Givens
rotation or a Givens reflection according to
whether it is proper or improper.
- A 3#3 orthogonal matrix is either a rotation matrix
or else a rotation matrix plus a reflection in the plane of the rotation
according to whether it is proper or improper.
- For a=+1 or a=-1, there is a 1-to-1 correspondence between
real skew-symmetric matrices, K, and
orthogonal matrices, Q, not having a as an eigenvalue given by
Q=a(K-I)(K+I)-1 and
K=(aI+Q)(aI-Q)-1. These
are Cayley's formulae.
- For a=-1 this gives
Q=(I-K)(I+K)-1 and
K=(I-Q)(I+Q)-1. Note that
(I+K) is always non-singular.
A square matrix P is a permutation matrix if its columns
are a permutation of the columns of I.
- A permutation matrix is orthogonal and doubly stochastic.
- The set of permutation matrices is closed under multiplication and
inversion.
- P is a permutation matrix iff each row and each column contains a
single 1 with all other elements equal to 0.
A matrix A[n#n] is persymmetric
if it is symmetric about its anti-diagonal, i.e. if
A=JATJ where J is the exchange matrix. It is perhermitian if
A=JAHJ and perskewsymmetric if
A= -JATJ.
WARNING: The term persymmetric is sometimes used for a bisymmetric matrix.
- If A is persymmetric then so is Ak for any
positive or, providing A is non-singular, negative k.
- A Toeplitz matrix is persymmetric.
A polynomial matrix of order p is one whose elements are
polynomials of a single variable x. Thus
A=A(0)+A(1)x+...+A(p)xp
where the A(i) are constant matrices and A(p) is
not all zero.
See also regular.
A real matrix is positive if all its elements are strictly >
0.
A real matrix is non-negative if all its elements are >= 0.
- [Perron's theorem] If An#n is
positive with spectral radius
r, then the real positive value r is an eigenvalue with the
following properties:
- the eigenvector, x, satisfying Ax = rx
can be chosen to have strictly positive real elements.
- the eigenvector, y, satisfying
ATy = ry can be
chosen to have strictly positive real elements.
- all other eigenvalues have magnitude strictly less than r and
their corresponding eigenvectors cannot be chosen to have all elements strictly
positive and real.
- The rank-1 idempotent matrix, T =
xyT/xTy, is the
projection onto the eigenspace spanned by x. The
limit, limm->inf(r-1A)m
= T =
xyT/xTy.
- [Perron-Frobenius theorem] If An#n is
irreducible and non-negative with spectral radius r, then the real
positive value r is an eigenvalue with the following properties:
- the eigenvector, x, satisfying Ax = rx
can be chosen to have strictly positive real elements.
- the eigenvector, y, satisfying
ATy = ry can be
chosen to have strictly positive real elements.
- the eigenvectors associated with any other eigenvalue cannot be chosen to
have all elements strictly positive and real.
- If there are h eigenvalues of magnitude r, then these
eigenvalues are simple and are given by
r exp(2jπk/h) for k=0, 1,
…, h-1. h is the period.
Positive Definite
see under definiteness
If k is the eigenvalue of a
matrix An#n having the largest absolute value,
then A is primitive if the absolute values of all other
eigenvalues are < |k|.
- If An#n is non-negative then A is primitive iff
Am is positive for some
m>0.
- If An#n is non-negative and primitive then
limm->inf(r-1A)m
= xyT where r is the spectral radius of A and x and
y are positive eigenvectors satisfying Ax =
rx, ATy = ry
and xTy = 1.
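A sketch of the limit property (a random strictly positive 4#4 matrix, which is automatically primitive; the size and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((4, 4)) + 0.1                 # strictly positive entries
r = max(abs(np.linalg.eigvals(A)))           # spectral radius (the Perron root)

M = np.linalg.matrix_power(A / r, 200)       # (r^-1 A)^m for large m
print(np.allclose(M, M @ M))                 # the limit x y^T is idempotent
print(np.allclose(A @ M, r * M))             # its columns are Perron eigenvectors
print(np.linalg.matrix_rank(M))              # 1
```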
A projection matrix (or orthogonal projection matrix) is a square matrix
that is hermitian and idempotent: i.e.
PH=P2=P.
WARNING: Some people call any idempotent
matrix a projection matrix and call it an orthogonal projection
matrix if it is also hermitian.
- If P is a projection matrix then P is positive semi-definite.
- I-P is a projection matrix iff P is a projection matrix.
-
X(XHX)#XH
is a projection whose range is the subspace spanned by the columns of X.
- If X has full column rank, we can equivalently write
X(XHX)-1XH
- xxH/xHx is a
projection onto the 1-dimensional subspace spanned by x.
- If P and Q are projection matrices, then the following are
equivalent:
- P-Q is a projection matrix
- P-Q is positive
semidefinite
- ||Px|| >= ||Qx|| for all x.
- PQ=Q
- QP=Q
- [A: idempotent] A is a projection
matrix iff ||Ax|| <= ||x|| for all x.
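A sketch of the basic construction (a random 5#2 real X, which has full column rank with probability 1):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 2))
P = X @ np.linalg.inv(X.T @ X) @ X.T          # X (X^H X)^-1 X^H in the real case

print(np.allclose(P, P.T), np.allclose(P @ P, P))   # hermitian and idempotent
print(np.allclose(P @ X, X))                        # P acts as identity on range(X)
print(np.allclose((np.eye(5) - P) @ P, 0))          # I-P is the complementary projection
```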
Quaternions are a generalization of complex numbers. A
quaternion consists of a real component and three independent imaginary
components and is written as r+xi+yj+zk where
i2=j2=k2=ijk=-1.
It is approximately true that whereas the polar decomposition of a complex
number has a magnitude and 2-dimensional rotation, that of a quaternion has a
magnitude and a 3-dimensional rotation (see below). Quaternions form a
division ring rather than a field because although every non-zero
quaternion has a multiplicative inverse, multiplication is not in general commutative
(e.g. ij=-ji=k). Quaternions are widely used to represent
three-dimensional rotations in computer graphics and computer vision as an
alternative to orthogonal matrices with the following advantages: (a) more
compact, (b) possible to interpolate, (c) does not suffer from "gimbal lock",
(d) easy to correct for drift due to rounding errors.
We can represent a quaternion either as a real 4-vector
qR=[r x y z]T or a complex
2-vector qC=[r+jy
x+jz]T. This gives r+xi+yj+zk = [1 i j
k]qR = [1
i]qC. We can also represent it as a real 4#4
matrix QR=[r -x -y -z; x r -z y; y z
r -x; z -y x r] or a complex 2#2 matrix
QC=[r+jy -x+jz; x+jz r-jy]. Both the
real and the complex quaternion matrices obey the same arithmetic rules as
quaternions, i.e. the quaternion matrix representing the result of applying +,
-, * and / operations to quaternions is the same as the result of applying the
same operations to the corresponding quaternion matrices. Note that
qR=QR[1 0 0
0]T and
qC=QC[1
0]T; we can also define the inverse functions
QR=QUATR(qR)
and
QC=QUATC(qC).
Note that the real and complex representations given above are not the only
possible choices.
In the following,
PR=QUATR(pR),
QR=QUATR(qR),
K=DIAG([-1 1 1 1]) and qR=[r x y
z]T=[r; w].
PC,pC,QC
and qC are the corresponding complex quantities; the
subscripts R and C are omitted below for results that apply to
both real and complex representations.
- The magnitude of the quaternion is
m=|q|=sqrt(r2+x2+y2+z2).
A unit quaternion has m = 1.
- det(QR)=m4.
det(QC)=qHq=m2
- Any quaternion may be written as m times a unit quaternion.
-
Q-1=(qHq)-1QH
is the reciprocal of the quaternion.
- QH is the conjugate of the quaternion;
this corresponds to reversing the signs of x, y and
z.
- PQ=QUAT(Pq) and
P+Q=QUAT(p+q). This illustrates that we may
often use the quaternion vectors rather than the matrices when performing
arithmetic with a resultant saving in computation.
-
PRqR=KQRTKpR.
Note however that KQRTK is not a
quaternion matrix unless Q is a multiple of I (i.e. the
corresponding quaternion is purely real).
-
(QRK)2=(KQR)2
-
(PRQRK)2=(PRK)2(QRK)2
- QR=rI+[0
-wT; w SKEW(w)]
- [|q|=1]
(QRK)2=(KQR)2=[1
0; 0 S] where S is a 3#3 rotation matrix corresponding to an angle of
2cos-1(r) about an axis whose unit vector is
w/sqrt(1-r2).
- Every 3#3 rotation matrix corresponds to a unit quaternion matrix
that is unique except for its sign, i.e. +Q and -Q correspond to
the same rotation matrix. Thus the decomposition of a quaternion into a
magnitude and 3-dimensional rotation is only invertible to within a sign
ambiguity.
- [|p|=|q|=1] If
(PRK)2=[1 0; 0
R] and (QRK)2=[1 0;
0 S], then
(PRK)2(QRK)2=(PRQRK)2=[1
0; 0 RS]. This shows that multiplying unit quaternions is
equivalent to multiplying rotation matrices but may be more efficient
computationally if it is possible to use quaternion vectors rather than
matrices for intermediate results.
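A sketch of the real 4#4 representation and the rotation property (quat_r is an assumed helper implementing QUATR as defined above; the test quaternions are random):

```python
import numpy as np

def quat_r(q):
    """QUATR(q) for q = [r x y z], following the definition above."""
    r, x, y, z = q
    return np.array([[r, -x, -y, -z],
                     [x,  r, -z,  y],
                     [y,  z,  r, -x],
                     [z, -y,  x,  r]])

rng = np.random.default_rng(6)
p, q = rng.standard_normal(4), rng.standard_normal(4)
print(np.allclose(quat_r(p) @ quat_r(q), quat_r(quat_r(p) @ q)))   # PQ = QUAT(Pq)

u = q / np.linalg.norm(q)                       # a unit quaternion
K = np.diag([-1.0, 1.0, 1.0, 1.0])
M = (quat_r(u) @ K) @ (quat_r(u) @ K)           # should equal [1 0; 0 S]
S = M[1:, 1:]
print(np.allclose(M[0], [1, 0, 0, 0]))                                   # top row [1 0 0 0]
print(np.allclose(S @ S.T, np.eye(3)), np.isclose(np.linalg.det(S), 1))  # S is a rotation
```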
A non-zero matrix A is a rank-one matrix iff it can be
decomposed as A=xyT.
- If A=xyT is a rank-one matrix then
- If A=pqT then
p=kx and q=y/k for some
scalar k. That is, the decomposition is unique to within a scalar
multiple.
- If A=xyT is a square rank-one matrix then
- A has a single non-zero eigenvalue equal to
xTy=yTx.
The associated right and left eigenvectors are respectively x and
y.
- Frobenius Norm:
||A||F2=tr(AHA)=xHx×yHy
- Pseudoinverse:
A+=AH/
||A||F2
=AH/tr(AHA)=AH/(xHx×yHy)
where ||A||F is the Frobenius Norm.
A matrix An#n is reducible (or fully decomposable) if there exists a permutation matrix P such that
PTAP is of the form [B C; 0 D]
where B and D are square. As a special case
01#1 is regarded as reducible. A matrix that is not
reducible is irreducible.
WARNING: The term reducible is sometimes used to mean one that
has more than one block in its Jordan Normal
Form.
- An irreducible matrix has at least one non-zero off-diagonal element
in each row and column.
- An#n is irreducible iff (I +
ABS(A))n-1 is positive.
A polynomial matrix, A, of order p
is regular if det(A) is not identically zero.
- An n#n square polynomial matrix, A(x), of order
p is regular if det(A) is a polynomial in x of degree
n*p.
[Real]: A Rotation matrix,
R, is an n*n matrix of the form R=U[Q 0 ; 0
I]UT where U is any orthogonal matrix and Q is a matrix of the form
[cos(x) -sin(x); sin(x) cos(x)]. Multiplying a
vector by R rotates it by an angle x in the plane containing
u and v, the first two columns of U. The direction of
rotation is such that if x=90 degrees, u will be rotated to
v.
- A Rotation matrix is orthogonal with a
determinant of +1.
- All but two of the eigenvalues of R equal unity and the remaining
two are exp(jx) and exp(-jx) where j is the square root of
-1. The corresponding unit-norm eigenvectors are [u v][1
-j]T/sqrt(2) and [u v][1
+j]T/sqrt(2).
-
R=I+(cos(x)-1)(uuT+vvT)+sin(x)(vuT-uvT)
where u and v are the first two columns of U.
- If x=90 degrees then
R=I-uuT-vvT+vuT-uvT
.
- If x=180 degrees then
R=I-2uuT-2vvT
- If x=270 degrees then
R=I-uuT-vvT-vuT+uvT
- [3#3] R =
wwT+cos(x)(I-wwT)+sin(x)SKEW(w)
=
I+sin(x)SKEW(w)+(1-cos(x))SKEW(w)2
where the unit vector w = u × v is the axis of
rotation. [See skew-symmetric for the definition
and properties of SKEW()].
- tr(R) = 2 cos(x) + 1
- Every 3#3 orthogonal matrix is either a rotation
matrix or else a rotation matrix plus a reflection in the plane of the rotation
according to whether its determinant is +1 or -1.
- The product of two 3#3 rotation matrices is a rotation matrix.
- A 3#3 rotation matrix may be expressed as the product of three rotations
about the x, y and z axes respectively. The corresponding
rotation angles are the Euler angles. The order in which the rotations
are performed is significant and is not standardised. Using Euler angles is
often a bad idea because their relation to the rotation axis direction is not
continuous.
- R=(I-K)(I+K)-1 where
K=-tan(x/2)*SKEW(w) except when x=180 degrees. This
is the Cayley transform.
- If x=90 degrees then
R=wwT+SKEW(w)
=(I+SKEW(w))(I-SKEW(w))-1
- If x=180 degrees then
R=2wwT-I
- If x=270 degrees then
R=wwT-SKEW(w)=(I-SKEW(w))(I+SKEW(w))-1
- ADJ(R-I)=2(1-cos(x))wwT where
ADJ() denotes the adjoint. All
columns of this rank-1 matrix are multiples of w.
- Every 3#3 rotation matrix corresponds to a quaternion matrix that is unique except for its sign.
A shift matrix, or lower shift matrix, Z,
is a matrix with ones below the main diagonal and zeros elsewhere.
ZT has ones above the main diagonal and zeros
elsewhere and is an upper shift matrix.
- ZA, ZTA, AZ,
AZT, ZAZT are equal to the
matrix A shifted one position down, up, left, right, and down along the
main diagonal respectively.
- Zn#n is nilpotent.
A signature matrix is a diagonal matrix
whose diagonal entries are all +1 or -1.
An n*n square matrix is simple (or, equivalently,
diagonalizable or diagonable or
non-defective) if all its eigenvalues are regular, otherwise it is
defective.
A matrix is singular if it has no inverse.
- A matrix A is singular iff det(A)=0.
A square matrix K is Skew-Hermitian (or
antihermitian) if K = -KH, that is
a(i,j)=-conj(a(j,i)).
For real matrices, Skew-Hermitian and skew-symmetric are equivalent. The following properties
apply also to real skew-symmetric matrices.
- S is Hermitian iff jS is
skew-Hermitian where j = sqrt(-1)
- K is skew-Hermitian iff xHKy =
-xHKHy for all x
and y.
- Skew-Hermitian matrices are closed under addition, multiplication by a
scalar, raising to an odd power and (if non-singular) inversion.
- Skew-Hermitian matrices are normal.
- If K is skew-hermitian, then K2 is hermitian.
- The eigenvalues of a skew-Hermitian matrix are either 0 or pure
imaginary.
- Any matrix A has a unique decomposition A = S +
K where S is Hermitian and K is
skew-hermitian.
- K is skew-hermitian iff K=ln(U) or
U=exp(K) for some unitary
U.
- For any complex a with |a|=1, there is a 1-to-1
correspondence between the unitary matrices, U,
not having a as an eigenvalue and skew-hermitian matrices, K,
given by U=a(K-I)(I+K)-1
and
K=(aI+U)(aI-U)-1. These
are Caley's formulae.
- Taking a=-1 gives
U=(I-K)(I+K)-1 and
K=(I-U)(I+U)-1.
A square matrix K is skew-symmetric (or
antisymmetric) if K = -KT, that is
a(i,j)=-a(j,i)
For real matrices, skew-symmetric and Skew-Hermitian are equivalent. Most properties are listed
under skew-Hermitian .
- Skew-symmetry is preserved by congruence.
- The diagonal elements of a skew-symmetric matrix are all 0. [1.10]
- The rank of a real or complex skew-symmetric matrix is even. [1.11]
- [Real] The non-zero eigenvalues of a
real skew-symmetric matrix are all purely imaginary and occur in complex
conjugate pairs.
- [Real] If K is skew-symmetric, then I - K is
non-singular
- [Real] If A is skew-symmetric,
then xTAx = 0 for all real x.
- [Real] If a=+1 or a=-1,
there is a 1-to-1 correspondence between real skew-symmetric matrices,
K, and those orthogonal matrices, Q,
not having a as an eigenvalue given by
Q=a(K-I)(K+I)-1 and
K=(aI+Q)(aI-Q)-1
. These are Cayley's formulae.
- K is real skew-symmetric iff K=ln(Q) or Q =
exp(K) for some real proper orthogonal matrix
Q.
- [Real 3#3] All 3#3 skew-symmetric
matrices have the form SKEW(a)
= [0 -a3 a2; a3 0
-a1; -a2 a1 0] for some
vector a.
- SKEW(ka) = k SKEW(a) for any
scalar k
- The vector cross product is given by a × b =
SKEW(a) b = -SKEW(b) a
- SKEW(a) b = 0 iff a = kb for some
scalar k
-
SKEW(a)2n=(-aTa)n-1aaT+(-aTa)nI=(-aTa)n-1(aaT-(aTa)I)
for integer n>=1
-
SKEW(a)2n+1=(-aTa)nSKEW(a)
for integer n>=0
- The eigenvalues of SKEW(a) are 0 and
+-sqrt(-aTa)
- The eigenvector associated with 0 is ka
- [Real a=[p; q; r]]: Eigenvalues are 0 and +-j|a| where
j is sqrt(-1). Unless q=r=0 a suitable pair of eigenvectors are
[-q2-r2 jr-pq
pr-jq]T and
[-q2-r2 -jr-pq
pr+jq]T.
- The singular values of SKEW(a)
are |a|, |a| and 0.
- If z=a/|a| and w=[z2
z3]T, then a singular value decomposition is
SKEW(a)=USVT where
U=[zT;
w I+(z1-1)-1wwT]J,
S=DIAG(|a|, |a|, 0) and
V=U [0 1 0; -1 0 0; 0 0 1]
where J is the exchange matrix (i.e. I
with the column order reversed). All other decompositions may be obtained by
postmultiplying both U and V by
DIAG(Q[2#2], 1) for some orthogonal Q and/or negating the final column of one
or both of U and V.
- SKEW(a)T SKEW(a) =
SKEW(a) SKEW(a)T =
|a|2 I - aaT
- tr( SKEW(a)T
SKEW(a))=2aTa
- det([a b c]) = aT
SKEW(b) c = bT
SKEW(c) a = cT
SKEW(a) b, this is the scalar triple product.
- aT SKEW(b) a =
aT SKEW(a) b =
bT SKEW(a) a = 0 for all a
and b
- SKEW(a)SKEW(b) =
baT-(bTa)I
- SKEW(a)SKEW(b) c =
(aTc)b -
(aTb)c, this is the vector triple
product.
- For any a and B[3#3],
- BTSKEW(Ba)B = det(B) *
SKEW(a)
- SKEW(Ba)B = ADJ(B)T
SKEW(a) where ADJ(B) denotes the adjoint matrix.
- [det(B)!=0]: SKEW(Ba) =
det(B) *
B-TSKEW(a)B-1
- SKEW(SKEW(a)b) = baT -
abT
- [U orthogonal] The product E =
U SKEW(a) = SKEW(Ua) U is an essential matrix
- ETE =
(aTa) I -
aaT
- tr(ETE) =
2aTa.
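Several of the SKEW identities above are one-liners to verify in NumPy (skew is an assumed helper; a and b are arbitrary test vectors):

```python
import numpy as np

def skew(a):
    """SKEW(a) = [0 -a3 a2; a3 0 -a1; -a2 a1 0]."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(7)
a, b = rng.standard_normal(3), rng.standard_normal(3)

print(np.allclose(skew(a) @ b, np.cross(a, b)))          # a x b = SKEW(a) b
print(np.allclose(skew(a) @ skew(b),
                  np.outer(b, a) - (b @ a) * np.eye(3)))  # SKEW(a)SKEW(b) = ba^T - (b^Ta)I
s = np.linalg.svd(skew(a), compute_uv=False)
print(np.allclose(s, [np.linalg.norm(a)] * 2 + [0.0]))   # singular values |a|, |a|, 0
```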
A matrix is sparse if it has relatively few non-zero
elements.
A Stability or Stable matrix is one whose eigenvalues
all have strictly negative real parts.
A semi-stable matrix is one whose eigenvalues all have non-positive
real parts.
See also: Convergent
A real non-negative square matrix A is
stochastic if all its rows sum to 1. If all its columns also sum to 1 it is Doubly Stochastic.
- All eigenvalues of A have modulus <= 1.
- 1 is an eigenvalue with eigenvector [1 1 ... 1]T
Sub-stochastic
A real non-negative square matrix A is
sub-stochastic if all its rows sum to <=1.
A is subunitary if ||AAHx|| =
||AHx|| for all x. A is also called a
partial isometry.
The following are equivalent:
- A is subunitary
- AHA is a projection matrix
- AAHA = A
- A+ = AH
- A is subunitary iff AH is subunitary iff
A+ is subunitary.
- If A is subunitary and non-singular then A is unitary.
A square matrix A is symmetric if A =
AT, that is a(i,j) = a(j,i).
Most properties of real symmetric matrices are listed under Hermitian .
- [Real]: If A is real,
symmetric, then A=0 iff xTAx = 0 for all
real x.
- [Real]: A real symmetric matrix is
orthogonally similar to a
diagonal matrix.
- [Real, 2#2] A=[a b; b
d]=RDRT where D is diagonal and
R=[cos(t) -sin(t); sin(t) cos(t)] and
t=½tan-1(2b/(a-d)).
- A is symmetric iff it is congruent to a diagonal matrix.
- Any square matrix may be uniquely decomposed as the sum of a symmetric matrix and a skew-symmetric matrix.
- Any symmetric matrix A can be expressed as
A=UDUT where
U is unitary and D is
real, non-negative and diagonal with its diagonal elements arranged in
non-increasing order (i.e. di,i <= dj,j
for i < j). This is the Takagi decomposition and is a special case of the
singular value decomposition.
See also Hankel.
A real matrix, A, is symmetrizable if
ATM = MA for some positive definite M.
A matrix, A[2n#2n], is
symplectic if AHKA=K where K
is the antisymmetric orthogonal matrix [0 I; -I
0].
- A is symplectic iff
A-1=KTAHK
- If a symplectic matrix A=[P Q; R S] where
P,Q,R,S are all n#n, then
A-1=[SH -QH
; -RH PH]
- The set of symplectic matrices of size 2n#2n is closed under
multiplication and inversion and so forms a multiplicative group.
- A is symplectic iff it preserves the symplectic form
xHKy, that is
(Ax)HK(Ay) =
xHKy for all x and y. This
is analogous to the way that a unitary matrix, U, preserves the inner
product:
(Ux)H(Uy)=xHy.
See also: hamiltonian
A toeplitz matrix, A, has constant diagonals. In other
words ai,j depends only on i-j.
We define
A=TOE(b[m+n-1])[m#n]
to be the m#n matrix with ai,j =
bi-j+n. Thus, b is the column
vector formed by starting at the top right element of A, going backwards
along the top row of A and then down the left column of A.
In the topics below, J is the exchange
matrix.
- A toeplitz matrix is persymmetric and so, if it
exists, is its inverse. A symmetric toeplitz matrix is
bisymmetric.
- If A and B are toeplitz, then so are A+B and
A-B. Note that AB and A-1 are not necessarily
toeplitz.
- If A is toeplitz, then AT,
AH and JAJ are Toeplitz while JA,
ATJ, AJ and
JAT are Hankel.
- If A[n#n] is toeplitz, then
JATJ=(JAJ)T=A while
JA=ATJ and
AJ=JAT are Hankel.
- TOE(a+b) = TOE(a) +
TOE(b)
-
TOE(b[m+n-1])[m#n]=TOE(Jb)[n#m]T
-
TOE(b[2n-1])[n#n]=TOE(Jb)[n#n]T
- If the lower triangular matrices
A[n#n]=TOE([0[n-1];
p[n]]) and
B[n#n]=TOE([0[n-1];
q[n]]) then:
- Aq = Bp =
conv(p,q)1:n
- AB = BA = TOE([0[n-1];
Aq]) = TOE([0[n-1]; Bp]) =
TOE([0[n-1];
conv(p,q)1:n])
- A-1 and B-1 are toeplitz
lower triangular if they exist.
- If the upper triangular matrices
A[n#n]=TOE([
p[n]; 0[n-1]]) and
B[n#n]=TOE([
q[n]; 0[n-1]]) then:
- Aq = Bp =
conv(p,q)n:2n-1
- AB = BA = TOE([Aq;
0[n-1]]) = TOE([Bp;
0[n-1]]) =
TOE([conv(p,q)n:2n-1;
0[n-1]])
- A-1 and B-1 are toeplitz
upper triangular if they exist.
- The product
TOE(a)[m#r]TOE(b)[r#n]
is toeplitz iff
ar+1:r+m-1b1:n-1T
=
a1:m-1br+1:r+n-1T
[1.21]. This m-1#n-1
rank-one matrix identity is equivalent to requiring one
of the following conditions:
- Both
ar+1:r+m-1=ka1:m-1
and
br+1:r+n-1=kb1:n-1
for the same scalar k. Note that a1:m-1 and
ar+1:r+m-1 will overlap if
m>r+1 and similarly for b if
n>r+1.
- For TOE(a) to be square and symmetric,
a1:m-1 must be either symmetric or antisymmetric with
k=+1 or -1 respectively (a similar condition applies to
TOE(b)).
- Either ar+1:r+m-1= 0 or
b1:n-1 = 0 and also either
a1:m-1= 0 or
br+1:r+n-1= 0 . If
m=r=n then this condition is equivalent to requiring
that A and B are either both upper triangular or both lower
triangular or else one of them is diagonal.
Some special cases of this are:
-
TOE(a)[m#r]TOE(b)[r#n]
is toeplitz if ar+1:r+m-1 =
a1:m-1 and br+1:r+n-1=
b1:n-1. Note that this does not make the matrices
symmetrical even for square matrices because a1:m-1
goes backwards along the top row of the matrix.
- TOE([0[m-1];
a[r]])[m#r]TOE([0[n-1];
b[r]])[r#n] =
TOE([0[n+m-r-1];
conv(a,b)1:r])
- TOE([a[r];
0[m-1]])[m#r]TOE([b[r];
0[n-1]])[r#n] =
TOE([ conv(a,b)r:2r-1;
0[n+m-r-1]])
- If A=TOE(b)[m#n] then
JAJ=TOE(Jb)[m#n]
- TOE([0[n-p];
a[m];
0[q-m]])[q-p+1#n]
b[n] =
TOE([0[m-p];
b[n];
0[q-n]])[q-p+1#m]
a[m] =
conv(a,b)p:q provided that
p<=m,n<=q and
conv(a,b)i is taken to be 0 for i
outside the range 1 to m+n-1.
-
TOE(a[m])[m-n+1#n]
b[n] =
conv(a,b)n:m
- TOE([0[n-p];
a[n]])[n-p+1#n]
b[n] =
TOE([0[n-p];
b[n]])[n-p+1#n]
a[n] =
conv(a,b)p:n
- TOE([0[n-1];
a[n]])[n#n]
b[n] =
TOE([0[n-1];
b[n]])[n#n]
a[n] =
conv(a,b)1:n
- TOE([a[n];
0[q-n]])[q-n+1#n]
b[n] =
TOE([b[n];
0[q-n]])[q-n+1#n]
a[n] =
conv(a,b)n:q
- TOE([a[n];
0[n-1]])[n#n]
b[n] =
TOE([b[n];
0[n-1]])[n#n]
a[n] =
conv(a,b)n:2n-1
- TOE([0[n-1];
a[m];
0[n-1]])[m+n-1#n]
b[n] =
TOE([0[m-1];
b[n]
;0[m-1]])[m+n-1#m]
a[m] =
conv(a,b)
- A symmetric toeplitz matrix is of the form
S[n#n] =
TOE([Ja[n];
0[n-1]]+[0[n-1];
a[n]])
- JSJ = S
- Sb = (TOE([b[n];
0[n-1]])[n#n]J+TOE([0[n-1];
b[n]])[n#n])a . The
matrix on the right is the sum of a lower triangular toeplitz and an upper
triangular hankel matrix.
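A sketch of the TOE() notation and one of the convolution identities above (toe is an assumed helper following the index formula a(i,j) = b(i-j+n); the test vectors are random):

```python
import numpy as np

def toe(b, m, n):
    """TOE(b)[m#n]: a(i,j) = b(i-j+n) with 1-based indices, so A[0,n-1] = b[0]."""
    b = np.asarray(b, dtype=float)
    assert b.size == m + n - 1
    return np.array([[b[i - j + n - 1] for j in range(n)] for i in range(m)])

rng = np.random.default_rng(8)
n = 4
p, q = rng.standard_normal(n), rng.standard_normal(n)

A = toe(np.r_[np.zeros(n - 1), p], n, n)           # lower triangular toeplitz TOE([0; p])
B = toe(np.r_[np.zeros(n - 1), q], n, n)
print(np.allclose(A @ q, np.convolve(p, q)[:n]))   # Aq = conv(p,q)_{1:n}
print(np.allclose(A @ B, B @ A))                   # AB = BA
```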
A is upper triangular if a(i,j)=0 whenever
i>j.
A is lower triangular if a(i,j)=0
whenever i<j.
A is triangular iff it is either upper or lower
triangular.
A triangular matrix A is strictly triangular if its diagonal
elements all equal 0.
A triangular matrix A is unit triangular if its diagonal
elements all equal 1.
- [Real]: An orthogonal triangular matrix must be diagonal
- [n*n]: The determinant of a triangular matrix is the product of its
diagonal elements.
- If A is unit triangular then inv(A) exists and is unit
triangular.
- A strictly triangular matrix is nilpotent .
- The set of upper triangular matrices are closed under multiplication and
addition and (where possible) inversion.
- The set of lower triangular matrices are closed under multiplication and
addition and (where possible) inversion.
A is tridiagonal or Jacobi if A(i,j)=0 whenever
|i-j|>1. In other words its non-zero elements lie either on or immediately
adjacent to the main diagonal.
- A is tridiagonal iff it is both upper and lower Hessenberg.
A complex square matrix A is unitary if
AHA = I. A is also sometimes
called an isometry.
A real unitary matrix is called orthogonal. The
following properties apply to orthogonal matrices as well as to unitary
matrices.
- Unitary matrices are closed under multiplication, raising to an integer
power and inversion
- U is unitary iff UH is unitary.
- Unitary matrices are normal.
- U is unitary iff ||Ux|| = ||x|| for all x.
- The eigenvalues of a unitary matrix all have an absolute value of 1.
- The determinant of a unitary matrix has an absolute value of 1.
- A matrix is unitary iff its columns form an orthonormal basis.
- U is unitary iff U=exp(K) or K=ln(U)
for some skew-hermitian K.
- For any complex a with |a|=1, there is a 1-to-1
correspondence between the unitary matrices, U, not having a as
an eigenvalue and skew-hermitian matrices,
K, given by
U=a(K-I)(I+K)-1 and
K=(aI+U)(aI-U)-1. These
are Cayley's formulae.
- Taking a=-1 gives
U=(I-K)(I+K)-1 and
K=(I-U)(I+U)-1.
A Vandermonde matrix, V[n#n], has the
form [1 x
x•2 …
x•n-1] for some column vector x. (where
x•2 denotes elementwise squaring). A general
element is given by v(i,j) =
(xi)j-1. All elements of the first column
of the matrix equal 1. Vandermonde matrices arise in connection with fitting
polynomials to data.
WARNING: Some authors define a Vandermonde matrix to be either the
transpose or the horizontally flipped version of the above definition.
The vectorized transpose matrix, TVEC(m,n), is
the mn#mn permutation matrix
whose i,jth element is 1 if
j=1+m(i-1)-(mn-1)floor((i-1)/n) or 0
otherwise.
For clarity, we write Tm,n =
TVEC(m,n) in this section.
- [A[m#n]]
(AT): = Tm,nA:
[see vectorization, R.6]
- Tm,n is a permutation
matrix and is therefore orthogonal.
- T1,n = Tn,1 =
I
- Tn,m =
Tm,nT =
Tm,n-1
- [A[m#n],
B[p#q]] B ⊗ A =
Tp,m (A ⊗
B) Tn,q
- [A[m#n],
B[p#q]]
(A ⊗ B)
Tn,q =
Tm,p (B ⊗
A)
- [a[n],
B[p#q]] (a ⊗
B) = Tn,p (B ⊗
a)
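A sketch of TVEC built directly from its defining action (tvec is an assumed helper; vec is column-major, matching the ':' notation):

```python
import numpy as np

def tvec(m, n):
    """TVEC(m,n): the permutation T with T vec(A) = vec(A^T) for A[m#n]."""
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # A[i,j] sits at i+j*m in vec(A) and at j+i*n in vec(A^T)
            T[j + i * n, i + j * m] = 1
    return T

rng = np.random.default_rng(9)
m, n, p, q = 2, 3, 4, 2
A, B = rng.standard_normal((m, n)), rng.standard_normal((p, q))

print(np.allclose(tvec(m, n) @ A.flatten('F'), A.T.flatten('F')))  # (A^T): = T_{m,n} A:
print(np.allclose(np.kron(B, A),
                  tvec(p, m) @ np.kron(A, B) @ tvec(n, q)))        # B⊗A = T_{p,m}(A⊗B)T_{n,q}
```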
The zero matrix, 0, has a(i,j)=0 for all i,j.
- [Complex]: A=0 iff xHAx = 0
for all x .
- [Real]: If A is symmetric, then A=0 iff
xTAx = 0 for all x.
- [Real]: A=0 iff xTAy = 0
for all x and y.
- A=0 iff AHA = 0