Special Matrices
Antisymmetric
see skewsymmetric.
A is upper bidiagonal if a(i,j)=0 unless i=j
or i=j−1.
A is lower bidiagonal if a(i,j)=0 unless i=j or
i=j+1
A bidiagonal matrix is also tridiagonal, triangular and Hessenberg.
A_{[n#n]} is bisymmetric if it is
symmetric about both main diagonals, i.e. if
A=A^{T}=JAJ where J is the exchange matrix.
WARNING: The term persymmetric is
sometimes used instead of bisymmetric. Also bisymmetric is sometimes used to
mean centrosymmetric and sometimes to mean
symmetric and perskewsymmetric.
 A bisymmetric matrix is symmetric, persymmetric and centrosymmetric. Any two of these three
properties implies the third.
 More generally, symmetry, persymmetry and centrosymmetry can each come in
four flavours: symmetric, skewsymmetric, hermitian and skewhermitian. Any
pair of symmetries implies the third and the total number of skew and hermitian
flavourings will be even. For example, if A is skewhermitian and
perskewsymmetric, then it will also be centrohermitian.
 If A_{[2m#2m]} is bisymmetric
 A=[S P^{T}; P JSJ] for
some symmetric S_{[m#m]} and persymmetric
P_{[m#m]}.
 A is orthogonally
similar to [S−JP 0; 0 S+JP]
 A has a set of 2m orthonormal eigenvectors consisting of
m skewsymmetric vectors of the form [u; −Ju]/k and
m symmetric vectors of the form [v; Jv]/k where
u and v are eigenvectors of S−JP and
S+JP respectively and k=sqrt(2).
 If A has distinct eigenvalues and rank(P)=1 then if the
eigenvalues are arranged in descending order, the corresponding eigenvectors
will be alternately symmetric and skewsymmetric with the first one being
symmetric or skewsymmetric according to whether the nonzero eigenvalue of
P is positive or negative.
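The 2m#2m block decomposition above is easy to check numerically. The sketch below (NumPy, with randomly generated S and P) builds a bisymmetric A and confirms that its eigenvalues are those of S−JP together with those of S+JP:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
J = np.eye(m)[::-1]                                   # exchange matrix

S = rng.standard_normal((m, m)); S = S + S.T          # symmetric
P = rng.standard_normal((m, m)); P = (P + J @ P.T @ J) / 2   # persymmetric: P = J P^T J
A = np.block([[S, P.T], [P, J @ S @ J]])              # bisymmetric by construction

J2 = np.eye(2 * m)[::-1]
assert np.allclose(A, A.T) and np.allclose(A, J2 @ A @ J2)

# eigenvalues of A = eigenvalues of S-JP together with those of S+JP
lam = np.sort(np.linalg.eigvalsh(A))
lam2 = np.sort(np.concatenate([np.linalg.eigvalsh(S - J @ P),
                               np.linalg.eigvalsh(S + J @ P)]))
assert np.allclose(lam, lam2)
```

Note that S−JP and S+JP are themselves symmetric (since P = JP^{T}J implies JP = (JP)^{T}), which is why eigvalsh applies.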
 If A_{[2m+1#2m+1]} is bisymmetric
 A=[S x P^{T};
x^{T} y x^{T}J; P
Jx JSJ] for some symmetric S_{[m#m]} and
persymmetric P_{[m#m]}.
 A is orthogonally
similar to [S−JP 0 0; 0 y
kx^{T}; 0 kx S+JP]
where k=sqrt(2).
 A has a set of 2m+1 orthonormal eigenvectors consisting of
m skewsymmetric vectors of the form [u; 0; −Ju]/k
and m+1 symmetric vectors of the form [v; kw;
Jv]/k where u and [v; w] are
eigenvectors of S−JP and [S+JP kx;
kx^{T} y] respectively and
k=sqrt(2).
 If A has distinct eigenvalues and P=0 then if the
eigenvalues are arranged in descending order, the corresponding eigenvectors
will be alternately symmetric and skewsymmetric with the first one being
symmetric.
A is block diagonal if it has the form [A 0 ...
0; 0 B ... 0;...;0 0 ... Z] where A,
B, ..., Z are matrices (not necessarily square).
 A matrix is block diagonal iff it is the direct sum of two or more smaller
matrices.
A_{[m#n]} is centrohermitian if it is
rotationally hermitian symmetric about its centre, i.e. if
A^{T}=JA^{H}J where J is the
exchange matrix.
 Centrohermitian matrices are closed under addition, multiplication and (if
nonsingular) inversion.
A_{[m#n]} is centrosymmetric (also
called perplectic) if it is rotationally symmetric about its centre,
i.e. if A=JAJ where J is the exchange matrix. It is centrohermitian if
A^{T}=JA^{H}J and
centroskewsymmetric if A=−JAJ.
 Centrosymmetric matrices are closed under addition, multiplication
and (if nonsingular) inversion.
A circulant matrix, A_{[n#n]}, is a
Toeplitz matrix in which a_{i,j} is a
function of {(i−j) modulo n}. In other words each column of
A is equal to the previous column rotated downwards by one element.
WARNING: The term circular is sometimes used
instead of circulant.
 Circulant matrices are closed under addition, multiplication and
(if nonsingular) inversion.
 A circulant matrix, A_{[n#n]}, may be
expressed uniquely as a polynomial in C, the cyclic permutation matrix, as A =
Sum_{i=0:n−1}{ a_{i+1,1}
C^{i}} = Sum_{i=0:n−1}{
a_{1,i+1}
C^{−i}}
 All circulant matrices have the same eigenvectors. If
A_{[n#n]} is a circulant matrix, the normalized
eigenvectors of A are the columns of n^{−½}
F, the discrete Fourier Transform matrix. The
corresponding eigenvalues are the discrete Fourier transform of the first row
of A given by
FA^{T}e_{1} =
(FA^{C}e_{1})^{C}
= nF^{−1}Ae_{1} where
e_{1} is the first column of I.
 F^{−1}AF = n^{−1}
F^{H}AF=DIAG(FA^{T}e_{1})
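The eigenstructure of a circulant matrix can be confirmed directly with NumPy's FFT (whose sign convention matches the F defined below under discrete Fourier transform matrix); this sketch builds a circulant from its first column and checks each eigenpair:

```python
import numpy as np

n = 6
c = np.arange(1.0, n + 1)                                  # first column of A
A = np.stack([np.roll(c, k) for k in range(n)], axis=1)    # each column = previous rotated down
lam = np.fft.fft(c)                                        # eigenvalues: DFT of the first column

for k in range(n):
    v = np.exp(2j * np.pi * k * np.arange(n) / n)          # unnormalized eigenvector (column of F^C)
    assert np.allclose(A @ v, lam[k] * v)
```

Here the eigenvector for eigenvalue fft(c)[k] is the k-th column of the conjugate DFT matrix; normalizing by sqrt(n) gives the orthonormal set quoted above.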
A Circular matrix, A_{[n#n]}, is one
for which AA^{C} = I.
WARNING: The term circular is sometimes used for a circulant matrix.
 A matrix A is circular iff A=exp (j B) where
j = sqrt(−1), B is real and exp() is the matrix exponential
function.
 If A = B + jC where B and C are
real and j = sqrt(−1) then A is circular iff BC=CB
and also BB + CC = I.
If p(x) is a polynomial of the form a(0) + a(1)*x +
a(2)*x^{2} + ... +
a(n)*x^{n} then the polynomial's companion
matrix is n#n and equals [0 I; −a(0:n−1)/a(n)] where
I is n−1#n−1. For n=1, the companion matrix is
[−a(0)/a(1)].
The rows and columns are sometimes given in reverse order
[−a(n−1:0)/a(n) ; I 0].
 The characteristic and minimal polynomials of a companion matrix both equal
p(x).
 The eigenvalues of a companion matrix equal the roots of p(x).
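As a quick numerical illustration of the last property, the sketch below builds the companion matrix of a hypothetical cubic and recovers its roots as eigenvalues:

```python
import numpy as np

# p(x) = 6 - 11x + 6x^2 - x^3 = -(x-1)(x-2)(x-3)
a = np.array([6.0, -11.0, 6.0, -1.0])      # coefficients a(0)..a(n)
n = len(a) - 1
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)                 # the [0 I; ...] part
C[-1, :] = -a[:-1] / a[-1]                 # bottom row: -a(0:n-1)/a(n)

roots = np.sort(np.linalg.eigvals(C).real)
assert np.allclose(roots, [1.0, 2.0, 3.0])
```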
A matrix is complex if it has complex elements.
Complex to Real Isomorphism
We can associate a complex matrix C_{[m#n]}
with a corresponding real matrix R_{[2m#2n]}
by replacing each complex element, z, of C by a 2#2 real matrix
[z^{R} −z^{I}; z^{I}
z^{R}]=|z|×[cos(t) −sin(t);
sin(t) cos(t)] where t=arg(z). We will write
C <=> R for this mapping below.
 This mapping preserves the operations +,−,*,/ and, for square matrices,
inversion. It does not however preserve • (Hadamard) or ⊗ (Kronecker) products.
 If C <=> R
 R = C ⊗ [()^{R} −()^{I};
()^{I} ()^{R}] where the operators
()^{R} and ()^{I} take the real and imaginary
parts respectively.
 C = (I_{[m#m]} ⊗ [1 j])
R (I_{[n#n]} ⊗ [1; 0]) = (I
⊗ [1 0]) R (I ⊗ [1; −j])=½(I
⊗ [1 j]) R (I ⊗ [1; −j]) where
j=sqrt(−1).
 C^{R} = (I_{[m#m]}
⊗ [1 0]) R (I_{[n#n]} ⊗ [1;
0]) = (I_{[m#m]} ⊗ [0 1]) R
(I_{[n#n]} ⊗ [0; 1])
 det(R)=|det(C)|^{2}
 tr(R)=2 tr(C)^{R}, i.e. twice the real part of tr(C)
 R is orthogonal iff C is unitary.
 R is symmetric iff C is hermitian.
 R is positive definite symmetric iff C is positive definite
hermitian.
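The element-wise 2#2 replacement can be written compactly with Kronecker products; this sketch implements the mapping and spot-checks that products, inverses and determinants behave as stated:

```python
import numpy as np

def to_real(C):
    """Replace each element z of C by the 2#2 block [z^R -z^I; z^I z^R]."""
    return np.kron(C.real, np.eye(2)) + np.kron(C.imag, np.array([[0.0, -1.0], [1.0, 0.0]]))

rng = np.random.default_rng(1)
C1 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C2 = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.allclose(to_real(C1 @ C2), to_real(C1) @ to_real(C2))             # * preserved
assert np.allclose(to_real(np.linalg.inv(C1)), np.linalg.inv(to_real(C1)))  # inversion preserved
assert np.isclose(np.linalg.det(to_real(C1)), abs(np.linalg.det(C1)) ** 2)  # det(R) = |det(C)|^2
```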
Vector mapping: Under the isomorphism a complex vector maps to a real
matrix: z_{[n]} <=>
Y_{[2n#2]}. We can also define a simpler mapping,
<>, from a vector to a vector as z_{[n]}
<> x_{[2n]} = z ⊗
[()^{R}; ()^{I}] = Y [1; 0]
In the results below, we assume z_{[n]} <>
x_{[2n]} , w_{[n]} <>
u_{[2n]} and C <=> R:
 If w^{H}Cz is known to be real, then
w^{H}Cz = u^{T}Rx
 If C is hermitian, then,
z^{H}Cz =
x^{T}Rx
 z^{H}z = x^{T}x
To relate the matrix and vector mappings, <> and <=>, we
define the following two block-diagonal matrices: E =
I_{[n#n]} ⊗ [0 1; 1 0] and N =
I_{[n#n]} ⊗ [1 0; 0 −1]. We now have the
following properties (assuming z_{[n]} <>
x_{[2n]} and C <=>
R):
 E^{2}=N^{2}=I
 E^{T}=E,
N^{T}=N
 EN=−NE
 ENEN=NENE=−I

x^{T}ENx=x^{T}NEx=0
 ENRNE = NEREN = R
 ENREN = NERNE = −R
 C^{H} <=> R^{T}
 C^{T} <=>
NR^{T}N
 C^{C} <=> NRN
 jC <=> ENR = REN = −NER = −RNE
 z <> x and z <=> [x ENx]
 z^{C} <> Nx and
z^{C}<=> [Nx Ex]
 z^{H} <> x^{T} and
z^{H} <=> Y^{T} =
[x^{T}; x^{T}NE]
 z^{T} <> x^{T}N and z^{T}
<=> [x^{T}N;
x^{T}E]
A matrix A is convergent if A^{k} tends to
0 as k tends to infinity.
 A is convergent iff all its eigenvalues have modulus < 1.
 A is convergent iff there exists a positive definite X such that
X−A^{H}XA is positive definite (Stein's theorem)
 If S_{k} is defined as
I+A+A^{2}+ … +A^{k},
then A is convergent iff S_{k} converges as
k tends to infinity. If it does converge its limit is
(I−A)^{−1}.
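The Neumann-series limit is easy to observe numerically; this sketch uses a hypothetical 2#2 matrix with spectral radius below 1:

```python
import numpy as np

A = np.array([[0.5, 0.2], [0.1, 0.3]])
assert max(abs(np.linalg.eigvals(A))) < 1          # all eigenvalues inside the unit circle

S = np.eye(2)
term = np.eye(2)
for _ in range(200):                               # S_k = I + A + A^2 + ... + A^k
    term = term @ A
    S = S + term

assert np.allclose(S, np.linalg.inv(np.eye(2) - A))   # limit is (I-A)^{-1}
```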
See also: Stability
The n#n cyclic permutation matrix (or cyclic shift
matrix), C, is equal to
[0_{n−1}^{T} 1;
I_{n−1#n−1}
0_{n−1}]. Its elements are given by
c_{i,j} = δ_{i,1+(j mod
n)} where δ_{i,j} is the
Kronecker delta.
 C is a Toeplitz, circulant, permutation matrix.
 Cx is the same as x but with the last
element moved to the top and all other elements shifted down by one
position.
 C^{−1} = C^{T} =
C^{n−1}
 C^{n} = I
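These properties can be checked in a few lines; the sketch below builds C explicitly and verifies the shift action and the power identities:

```python
import numpy as np

n = 5
C = np.zeros((n, n))
C[0, -1] = 1.0
C[1:, :-1] = np.eye(n - 1)                 # C = [0^T 1; I 0]

x = np.arange(1.0, n + 1)
assert np.allclose(C @ x, np.roll(x, 1))   # last element moves to the top
assert np.allclose(np.linalg.matrix_power(C, n), np.eye(n))        # C^n = I
assert np.allclose(C.T, np.linalg.matrix_power(C, n - 1))          # C^T = C^{n-1} = C^{-1}
```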
A matrix, A, is fully decomposable (or reducible) if there exists a permutation matrix P such
that P^{T}AP is of the form [B C; 0
D] where B and D are square.
A matrix, A, is partly decomposable if there exist
permutation matrices P and Q such that
P^{T}AQ is of the form [B C; 0 D]
where B and D are square.
A matrix that is not even partly decomposable is
fully indecomposable.
A matrix, X:n#n, is defective if it does not have
n linearly independent eigenvectors, otherwise it is simple.
An n*n square matrix is derogatory if its minimal
polynomial is of lower order than n.
A is diagonal if a(i,j)=0 unless i=j.
 Diagonal matrices are closed under addition, multiplication and (where
possible) inversion.
 The determinant of a square diagonal matrix is the product of its diagonal
elements.
 If D is diagonal, DA multiplies each row of A by a
constant while BD multiplies each column of B by a constant.
 If D is diagonal then XDX^{T} =
sum_{i}(d_{i} ×
x_{i}x_{i}^{T}) and
XDX^{H} = sum_{i}(d_{i}
×
x_{i}x_{i}^{H})
[1.15]
 If D is diagonal then tr(XDX^{T}) =
sum_{i}(d_{i} ×
x_{i}^{T}x_{i}) and
tr(XDX^{H}) = sum_{i}(d_{i}
×
x_{i}^{H}x_{i}) =
sum_{i}(d_{i} ×
||x_{i}||^{2}) [1.16]
 If D is diagonal then AD = DA iff
a_{i,j}=0 whenever d_{i,i} !=
d_{j,j}. [1.12]
 If D = DIAG(c_{1}I_{1},
c_{2}I_{2}, ...,
c_{M}I_{M}) where the
c_{k} are distinct scalars and the
I_{k} are identity matrices, then AD = DA
iff A = DIAG(A_{1}, A_{2}, ...,
A_{M}) where each A_{k} is the same
size as the corresponding I_{k}. [1.13]
The functions DIAG(x) and diag(X) respectively
convert a vector into a diagonal matrix and the diagonal of a square matrix into a
vector. The function sum(X) sums the rows of X to produce a vector. In the expression below, • denotes elementwise
(a.k.a. Hadamard) multiplication.
 diag(DIAG(x)) = x
 x^{T}(diag(Y)) =
tr(DIAG(x)Y)
 DIAG(x) DIAG(y) =
DIAG(x •
y)
 diag(XY) = sum(X •
Y^{T})
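In NumPy a single function, np.diag, plays both roles (DIAG on a vector, diag on a matrix), so the four identities above can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4); y = rng.standard_normal(4)
X = rng.standard_normal((4, 4)); Y = rng.standard_normal((4, 4))

assert np.allclose(np.diag(np.diag(x)), x)                    # diag(DIAG(x)) = x
assert np.isclose(x @ np.diag(Y), np.trace(np.diag(x) @ Y))   # x^T diag(Y) = tr(DIAG(x)Y)
assert np.allclose(np.diag(x) @ np.diag(y), np.diag(x * y))   # DIAG(x)DIAG(y) = DIAG(x . y)
assert np.allclose(np.diag(X @ Y), (X * Y.T).sum(axis=1))     # diag(XY) = sum(X . Y^T)
```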
Diagonalizable or Diagonable or
Simple or NonDefective
A matrix, X, is diagonalizable (or, equivalently, simple or diagonable or nondefective)
if it is similar to a diagonal matrix otherwise it is defective.
 If X is diagonalizable, it may be
written X=EDE^{−1} where D
is a diagonal matrix of eigenvalues and the columns of E are
the corresponding eigenvectors.
 [X, Y diagonalizable]:
The diagonalizable matrices, X and Y, commute, i.e. XY=YX, iff they
can be decomposed as X=EDE^{−1} and
Y=EGE^{−1} where D and G are
diagonal and the columns of E form a common set of eigenvectors.
 The following are equivalent:
 X is diagonalizable
 The Jordan form of X is diagonal.
 For each eigenvalue of X, the geometric and algebraic multiplicities are equal.
 X has
n linearly independent eigenvectors.
A square matrix A_{n#n} is diagonally
dominant if the absolute value of each diagonal element is greater than
the sum of absolute values of the non-diagonal elements in its row. That is if
for each i we have |a_{i,i}| > sum_{j
!= i}(|a_{i,j}|) or equivalently
abs(diag(A)) > ½ABS(A)
1_{n#1}.
 [Real]: If the diagonal elements of a square
matrix A are all >0 and if A and A^{T}
are both diagonally dominant then A is positive definite.
 If A is diagonally dominant and irreducible
then
 A is non singular
 If diag(A) > 0 then all eigenvalues of A have strictly positive
real parts.
The discrete Fourier transform matrix,
F_{[n#n]}, has
f_{p,q} = exp(−2jπ(p−1)
(q−1) n^{−1}).
 Fx is the discrete Fourier transform (DFT) of x.
 F is a symmetric, Vandermonde matrix.
 F^{−1} =
n^{−1}F^{H}=n^{−1}F^{C}
 If y = Fx then y^{H}y = n
x^{H}x. This is Parseval's theorem.
 |det(F)| = n^{½n}.
 tr(F) = 0 iff n mod 4 = 2.
 FC =
DIAG(Fe_{2}) F where
C is the cyclic permutation matrix
and e_{2} is the second column of
I.
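The defining formula and the properties above line up with NumPy's FFT convention; this sketch builds F explicitly and verifies the inverse, Parseval's theorem, and the commutation rule with the cyclic permutation matrix:

```python
import numpy as np

n = 8
idx = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(idx, idx) / n)     # f_{p,q} = exp(-2j.pi.(p-1)(q-1)/n)
x = np.random.default_rng(3).standard_normal(n)

assert np.allclose(F @ x, np.fft.fft(x))             # Fx is the DFT of x
assert np.allclose(np.linalg.inv(F), F.conj().T / n) # F^{-1} = n^{-1} F^H
y = F @ x
assert np.isclose(y.conj() @ y, n * (x @ x))         # Parseval's theorem
C = np.roll(np.eye(n), 1, axis=0)                    # cyclic permutation matrix
assert np.allclose(F @ C, np.diag(F[:, 1]) @ F)      # FC = DIAG(F e_2) F
```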
 If A_{[n#n]} is a circulant matrix, the normalized eigenvectors of A are
the columns of n^{−½} F. The corresponding
eigenvalues are the discrete Fourier transform of the first row of A
given by FA^{T}e_{1}
=
(FA^{C}e_{1})^{C}
= nF^{−1}Ae_{1}
where e_{1} is the first column of
I.
 [n=2^{k}]:
F_{[n#n]} = GP where:
 P is a symmetric permutation matrix with P =
prod_{r=1:k}(E_{k−r}
⊗ [ E_{r−1} ⊗ [1 0] ;
E_{r−1} ⊗ [0 1] ] ) where
E_{s} is a 2^{s}#2^{s}
identity matrix and ⊗ denotes the Kronecker product. If x=0:n−1 then
Px consists of the same numbers but arranged in bit-reversed order (e.g.
for n=8, Px = [0; 4; 2; 6; 1; 5; 3; 7] ).
 G = prod_{r=1:k}(E_{r−1}
⊗ [ [1 1] ⊗ E_{k−r} ; [1 −1]
⊗ W_{k−r} ]^{T}) where the diagonal
"twiddle factor" matrix is W_{s} =
DIAG(exp(−2^{−s} jπ
(0:2^{s}−1))).
 Calculation of Fx as GPx is the "decimation-in-time" FFT
(Fast Fourier Transform) while Fx =
F^{T}x =
PG^{T}x is the "decimation-in-frequency"
FFT. In each case only O(n log_{2}n) non-trivial
arithmetic operations are required because most of the nonzero elements of the
factors of G are equal to ±1.
A real nonnegative square matrix A is doubly stochastic
if its rows and columns all sum to 1.
See under stochastic for properties.
An essential matrix, E, is the product E=US of a
3#3 orthogonal matrix, U, and a 3#3 skewsymmetric matrix, S = SKEW(s). In 3D euclidean space, a translation+rotation
transformation is associated with an essential matrix.
 If E=U SKEW(s) is an
essential matrix then
 E=SKEW(Us) U
 E^{T}E =
(s^{T}s) I −
ss^{T}
 EE^{T} =
(s^{T}s) I −
Uss^{T}U^{T}
 tr(E^{T}E) =
tr(EE^{T}) =
2s^{T}s
 If E is an essential matrix then so are E^{T},
kE and WEV where k is a nonzero scalar and
W and V are orthogonal.
 E is an essential matrix iff rank(E)=2 and
EE^{T}E =
½tr(EE^{T})E. This defines a set of nine
homogeneous cubic equations.
 E is an essential matrix iff its singular
values are k, k and 0 for some k>0.
 If the singular value decomposition of
E is E = Q DIAG([k; k; 0])
R^{T}, then we can write E = US where
U=Q [0 1 0; −1 0 0; 0 0
1] R^{T} and S = R [0 −k 0;
k 0 0; 0 0 0] R^{T} = SKEW(R [0; 0; k]).
 If E is an essential matrix then A = kE for
some k iff Ex × Ax = 0 for all x
where × denotes the vector cross product.
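The rank and singular-value characterizations are straightforward to verify; this sketch uses a hypothetical rotation U and translation s to form E=U SKEW(s):

```python
import numpy as np

def skew(s):
    """SKEW(s) x = s cross x."""
    return np.array([[0, -s[2], s[1]], [s[2], 0, -s[0]], [-s[1], s[0], 0]])

t = np.deg2rad(30.0)
U = np.array([[np.cos(t), -np.sin(t), 0.0],    # an orthogonal (rotation) matrix
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
s = np.array([1.0, -2.0, 0.5])
E = U @ skew(s)                                # an essential matrix

k = np.linalg.norm(s)
sv = np.linalg.svd(E, compute_uv=False)
assert np.allclose(sv, [k, k, 0.0])                              # singular values k, k, 0
assert np.allclose(E @ E.T @ E, 0.5 * np.trace(E @ E.T) * E)     # cubic constraint
assert np.allclose(E.T @ E, (s @ s) * np.eye(3) - np.outer(s, s))
```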
The exchange matrix J_{[n#n]} is equal to
[e_{n} e_{n−1} …
e_{2} e_{1}] where
e_{i} is the i^{th} column
of I. It is equal to I but with the columns in reverse
order.
 J is Hankel, Orthogonal, Symmetric, Permutation, Doubly
Stochastic.
 J^{2} = I
 JA^{T}, JAJ and
A^{T}J are versions of the matrix A that
have been rotated anticlockwise by 90, 180 and 270 degrees
 JA, JA^{T}J, AJ and
A^{T} are versions of the matrix A that have
been reflected in lines at 0, 45, 90 and 135 degrees to the horizontal measured
anticlockwise.
 det(J_{n#n}) =
(−1)^{n(n−1)/2} i.e. it equals +1 if n mod 4 equals 0 or
1 and −1 if n mod 4 equals 2 or 3
[Real]: A Givens Reflection
is an n#n matrix of the form P^{T}[Q 0 ; 0
I]P where P is any permutation matrix and
Q is a matrix of the form [cos(x) sin(x); sin(x)
−cos(x)].
 A Givens reflection is symmetric and orthogonal.
 The determinant of a Givens reflection = −1.
 [2*2]: A 2#2 matrix is a Givens reflection iff it is a Householder matrix.
[Real]: A Givens Rotation is
an n#n matrix of the form P^{T}[Q 0 ; 0 I]P
where P is a permutation matrix and Q
is a matrix of the form [cos(x) −sin(x); sin(x)
cos(x)].
An n*n Hadamard matrix has orthogonal columns whose
elements are all equal to +1 or −1.
 Hadamard matrices exist only if n=1, n=2 or n is a multiple of
4.
 If A is an n*n Hadamard matrix then
A^{T}A = n*I. Thus
A/sqrt(n) is orthogonal.
 If A is an n*n Hadamard matrix then det(A) =
±n^{n/2}.
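Hadamard matrices whose size is a power of 2 can be generated by the Sylvester construction, doubling the size at each step; the sketch below builds an 8#8 example and checks the properties above:

```python
import numpy as np

H = np.array([[1.0]])
for _ in range(3):                         # Sylvester construction: H -> [H H; H -H]
    H = np.block([[H, H], [H, -H]])
n = H.shape[0]                             # n = 8

assert np.all(np.abs(H) == 1.0)                              # all elements +-1
assert np.allclose(H.T @ H, n * np.eye(n))                   # A^T A = nI
assert np.isclose(abs(np.linalg.det(H)), n ** (n / 2.0))     # |det| = n^{n/2}
```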
A real 2n*2n matrix, A, is Hamiltonian if
KA is symmetric where K = [0 I; −I
0].
See also: symplectic
A Hankel matrix has constant antidiagonals. In other words
a(i,j) depends only on (i+j).
 A Hankel matrix is symmetric.
 [A:Hankel] If J is the exchange matrix, then JAJ is Hankel; JA and
AJ are Toeplitz.
 [A, B: Hankel] A+B and A−B
are Hankel.
A square matrix A is Hermitian if A =
A^{H}, that is
A(i,j)=conj(A(j,i))
For real matrices, Hermitian and symmetric
are equivalent. Except where stated, the following properties apply to real
symmetric matrices as well.
 [Complex]: A is Hermitian iff
x^{H}Ax is real for all (complex) x.
 The following are equivalent
 A is Hermitian and +ve
semidefinite
 A=B^{H}B for some B
 A=C^{2} for some Hermitian C.
 Any matrix A has a unique decomposition A = B +
jC where B and C are Hermitian: B =
(A+A^{H})/2 and
C=(A−A^{H})/2j
 Hermitian matrices are closed under addition, multiplication by a scalar,
raising to an integer power, and (if nonsingular) inversion.
 Hermitian matrices are normal with real eigenvalues,
that is A = UDU^{H} for some unitary U
and real diagonal D.
 A is Hermitian iff
x^{H}Ay=x^{H}A^{H}y
for all x and y.
 If A and B are hermitian then so are AB+BA and
j(AB−BA) where j =sqrt(−1).
 For any complex a with |a|=1, there is a 1-to-1
correspondence between the unitary matrices, U,
not having a as an eigenvalue and hermitian matrices, H, given by
U=a(jH−I)(jH+I)^{−1}
and
H=j(U+aI)(U−aI)^{−1} where
j =sqrt(−1). These are Cayley's formulae.
 Taking a=−1 gives
U=(I−jH)(I+jH)^{−1}=(I+jH)^{−1}(I−jH)
and
H=j(U−I)(U+I)^{−1}=j(U+I)^{−1}(U−I).
See also: Definiteness, Loewner partial order
A Hessenberg matrix is like a triangular matrix except that the
elements adjacent to the main diagonal can be nonzero.
A is upper Hessenberg if A(i,j)=0 whenever i>j+1.
It is like an upper triangular matrix except for the
elements immediately below the main diagonal.
A is lower Hessenberg if a(i,j)=0 whenever
i<j−1. It is like a lower triangular matrix
except for the elements immediately above the main diagonal.
A Hilbert matrix is a square Hankel matrix with elements
a(i,j)=1/(i+j−1).
If we define an equivalence relation in
which X ~ Y iff X = cY for some nonzero
scalar c, then the equivalence classes are called homogeneous
matrices and homogeneous vectors.
 Multiplication: If X ~ A and Y ~ B, then
XY ~ AB
 Addition: If X ~ A and Y ~ B then it is
not generally true that X+Y ~ A+B
 The projective space
RP^{n}, consists of all nonzero homogeneous vectors from
R^{n+1}.
A Householder matrix (also called Householder
reflection or transformation) is a matrix of the form
(I−2vv^{H}) for some vector v with
||v||=1.
Multiplying a vector by a Householder transformation reflects it in the
hyperplane that is orthogonal to v.
Householder matrices are important because they can be chosen to annihilate
any contiguous block of elements in any chosen vector.
 A Householder matrix is symmetric and orthogonal.
 Given a vector x, we can choose a Householder matrix P such
that Px=[−k 0 0 ... 0]^{H} where
k=sgn(x(1))*||x||. To do so, we choose v =
(x + ke_{1})/||x +
ke_{1}|| where e_{1} is the first column
of the identity matrix. The first row of P equals
−k^{−1}x^{T} and the remaining rows form
an orthonormal basis for the null space
of x^{T}.
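The annihilation recipe above is the standard first step of a QR factorization; this sketch applies it to a hypothetical real vector and checks the stated properties of P:

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
n = len(x)
k = np.sign(x[0]) * np.linalg.norm(x)
e1 = np.eye(n)[:, 0]
v = x + k * e1
v = v / np.linalg.norm(v)
P = np.eye(n) - 2.0 * np.outer(v, v)

assert np.allclose(P, P.T) and np.allclose(P @ P.T, np.eye(n))  # symmetric, orthogonal
assert np.allclose(P @ x, -k * e1)                              # all but the first element annihilated
assert np.allclose(P[0, :], -x / k)                             # first row = -x^T / k
```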
 [2*2]: A 2*2 matrix is Householder iff it is a Givens Reflection.
The hypercompanion matrix of the polynomial
p(x)=(xa)^{n} is an n#n
upper bidiagonal matrix, H, that is zero
except for the value a along the main diagonal and the value 1 on the
diagonal immediately above it. That is, h_{i,j} = a if
j=i, 1 if j=i+1 and 0 otherwise.
If the real polynomial
p(x)=(x^{2}axb)^{n} with
a^{2}+4b<0 (i.e. the quadratic term has no real
factors) then its Real hypercompanion matrix is a 2n#2n
tridiagonal matrix that is zero except for a
at even positions along the main diagonal, b at odd positions along the
subdiagonal and 1 at all positions along the superdiagonal. Thus for odd
i, h_{i,j} = 1 if j=i+1 and 0 otherwise while
for even i, h_{i,j} = 1 if j=i+1, a if
j=i and b if j=i1.
A matrix P is idempotent if P^{2} =
P. An idempotent matrix that is also hermitian
is called a projection matrix.
WARNING: Some people call any idempotent matrix a projection matrix
and call it an orthogonal projection matrix if it is also hermitian.
 The following conditions are equivalent
 P is idempotent
 P is similar to a diagonal matrix each of whose diagonal elements equals 0 or
1.
 2PI is involutary.
 If P is idempotent, then:
 rank(P)=tr(P).
 The eigenvalues of P are all either 0 or 1. The geometric multiplicity of the eigenvalue 1 is
rank(P).
 P^{H}, I−P and I−P^{H}
are all idempotent.
 P(I−P) = (I−P)P = 0.
 Px=x iff x lies in the range of P.
 The null space of P equals the range of I−P. In other words
Px=0 iff x lies in the range of I−P.
 P is its own generalized
inverse, P^{#}.
 [A: n#n, F,G: n#r] If
A=FG^{H} where F and G are of full
rank, then A is idempotent iff G^{H}F =
I.
The identity matrix , I, has a(i,i)=1 for all i and
a(i,j)=0 for all i !=j
A nonnegative matrix T is impotent if
min(diag(T^{n})) = 0 for all integers n>0 [see
potency].
An incidence matrix is one whose elements all equal 1 or 0.
An Integral matrix is one whose elements are all integers.
Involutary (also written
Involutory)
An Involutary matrix is one whose square equals the identity.
 A is involutary iff ½(A+I) is idempotent.
 A_{[2#2]} is involutary iff A = ±I or else
A = [a b; (1−a^{2})/b
−a] for some real or complex a and b.
Irreducible
see under Reducible
Jacobi
see under Tridiagonal
A matrix, A, is monotone iff A^{−1}
is nonnegative, i.e. all its entries are >=0.
In computer science a matrix is monotone if its entries are monotonically
nondecreasing as you move away from the main diagonal along either a row or
column.
A matrix A is nilpotent to index k if
A^{k} = 0 but A^{k−1} !=
0.
 The determinant of a nilpotent matrix is 0.
 The eigenvalues of a nilpotent matrix
are all 0.
 If A is nilpotent to index k, its minimal polynomial is
t^{k}.
Nonnegative
see under positive
A square matrix A is normal if
A^{H}A = AA^{H}
 A_{n#n} is normal iff any of the following
equivalent conditions is true
 A is unitarily
similar to a diagonal matrix.
 A has an orthonormal set of n eigenvectors
 eig(A)^{H}eig(A) =
||A||_{F}^{2} where
||A||_{F} is the Frobenius
norm.
 The following types of matrix are normal: diagonal,
hermitian, skewhermitian and unitary.
 A normal matrix is hermitian iff its eigenvalues
are all real.
 A normal matrix is skewhermitian iff its
eigenvalues all have zero real parts.
 A normal matrix is unitary iff its eigenvalues all
have an absolute value of 1.
 For any X_{m#n},
X^{H}X and XX^{H} are
normal.
 The singular values of a normal matrix are the absolute values of the
eigenvalues.
 [A: normal] The eigenvalues of
A^{H} are the conjugates of the eigenvalues of A
and have the same eigenvectors.
 Normal matrices are closed under raising to an integer power and (if
nonsingular) inversion.
 If A and B are normal and AB=BA then AB
is normal.
A real square matrix Q is orthogonal if Q'Q =
I. It is a proper orthogonal matrix if det(Q)=1 and an
improper orthogonal matrix if det(Q)=−1.
For real matrices, orthogonal and unitary mean
the same thing. Most properties are listed under unitary.
Geometrically: Orthogonal matrices in 2 and 3 dimensions correspond to
rotations and reflections.
 The determinant of an orthogonal matrix equals ±1 according to whether it
is proper or improper.
 Q is a proper orthogonal matrix iff Q = exp(K) or
K=ln(Q) for some real skewsymmetric K.
 A 2#2 orthogonal matrix is either a Givens
rotation or a Givens reflection according to
whether it is proper or improper.
 A 3#3 orthogonal matrix is either a rotation matrix
or else a rotation matrix plus a reflection in the plane of the rotation
according to whether it is proper or improper.
 For a=+1 or a=−1, there is a 1-to-1 correspondence between
real skewsymmetric matrices, K, and
orthogonal matrices, Q, not having a as an eigenvalue given by
Q=a(K−I)(K+I)^{−1} and
K=(aI+Q)(aI−Q)^{−1}. These
are Cayley's formulae.
 For a=−1 this gives
Q=(I−K)(I+K)^{−1} and
K=(I−Q)(I+Q)^{−1}. Note that
(I+K) is always nonsingular.
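The a=−1 Cayley formula gives a convenient way to generate random proper orthogonal matrices; this sketch builds one from a random skewsymmetric K and recovers K again:

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
K = M - M.T                                # real skewsymmetric
I = np.eye(4)

Q = (I - K) @ np.linalg.inv(I + K)         # Cayley transform (the a = -1 case)
assert np.allclose(Q.T @ Q, I)             # orthogonal
assert np.isclose(np.linalg.det(Q), 1.0)   # proper
assert np.allclose((I - Q) @ np.linalg.inv(I + Q), K)   # inverse transform recovers K
```

(I+K) is nonsingular because the eigenvalues of a real skewsymmetric K are purely imaginary, so −1 is never an eigenvalue.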
A square matrix P is a permutation matrix if its columns
are a permutation of the columns of I.
 A permutation matrix is orthogonal and doubly stochastic.
 The set of permutation matrices is closed under multiplication and
inversion.
 P is a permutation matrix iff each row and each column contains a
single 1 with all other elements equal to 0.
A matrix A_{[n#n]} is persymmetric
if it is symmetric about its antidiagonal, i.e. if
A=JA^{T}J where J is the exchange matrix. It is perhermitian if
A=JA^{H}J and perskewsymmetric if
A=−JA^{T}J.
WARNING: The term persymmetric is sometimes used for a bisymmetric matrix.
 If A is persymmetric then so is A^{k} for any
positive or, providing A is nonsingular, negative k.
 A Toeplitz matrix is persymmetric.
A polynomial matrix of order p is one whose elements are
polynomials of a single variable x. Thus
A=A(0)+A(1)x+...+A(p)x^{p}
where the A(i) are constant matrices and A(p) is
not all zero.
See also regular.
A real matrix is positive if all its elements are strictly >
0.
A real matrix is nonnegative if all its elements are >= 0.
 [Perron's theorem] If A_{n#n} is
positive with spectral radius
r, then the real positive value r is an eigenvalue with the
following properties:
 the eigenvector, x, satisfying Ax = rx
can be chosen to have strictly positive real elements.
 the eigenvector, y, satisfying
A^{T}y = ry can be
chosen to have strictly positive real elements.
 all other eigenvalues have magnitude strictly less than r and
their corresponding eigenvectors cannot be chosen to have all elements strictly
positive and real.
 The rank-1 idempotent matrix, T =
xy^{T}/x^{T}y, is the
projection onto the eigenspace spanned by x. The
limit, lim_{m−>inf}(r^{−1}A)^{m}
= T =
xy^{T}/x^{T}y.
 [PerronFrobenius theorem] If A_{n#n} is
irreducible and nonnegative with spectral radius r, then the real
positive value r is an eigenvalue with the following properties:
 the eigenvector, x, satisfying Ax = rx
can be chosen to have strictly positive real elements.
 the eigenvector, y, satisfying
A^{T}y = ry can be
chosen to have strictly positive real elements.
 the eigenvectors associated with any other eigenvalue cannot be chosen to
have all elements strictly positive and real.
 If there are h eigenvalues of magnitude r, then these
eigenvalues are simple and are given by
r exp(2jπk/h) for k=0, 1,
…, h1. h is the period.
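The Perron-Frobenius conclusions can be observed on a small hypothetical example; the sketch below checks the positive eigenvector pair and the power limit:

```python
import numpy as np

A = np.array([[0.0, 2.0], [3.0, 1.0]])     # nonnegative and irreducible
w, V = np.linalg.eig(A)
i = int(np.argmax(np.abs(w)))
r = w[i].real                              # spectral radius: here r = 3
x = V[:, i].real
x = x / x.sum()                            # right Perron vector, strictly positive
assert np.isclose(r, 3.0) and np.all(x > 0)
assert np.allclose(A @ x, r * x)

y = np.ones(2)                             # left eigenvector A^T y = r y, scaled so x . y = 1
assert np.allclose(A.T @ y, r * y) and np.isclose(x @ y, 1.0)
assert np.allclose(np.linalg.matrix_power(A / r, 60), np.outer(x, y))   # (r^{-1}A)^m -> x y^T
```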
Positive Definite
see under definiteness
If k is the eigenvalue of a
matrix A_{n#n} having the largest absolute value,
then A is primitive if the absolute values of all other
eigenvalues are < |k|.
 If A_{n#n} is nonnegative then A is primitive iff
A^{m} is positive for some
m>0.
 If A_{n#n} is nonnegative and primitive then
lim_{m−>inf}(r^{−1}A)^{m}
= xy^{T} where r is the spectral radius of A and x and
y are positive eigenvectors satisfying Ax =
rx, A^{T}y = ry
and x^{T}y = 1.
A projection matrix (or orthogonal projection matrix) is a square matrix
that is hermitian and idempotent: i.e.
P^{H}=P^{2}=P.
WARNING: Some people call any idempotent
matrix a projection matrix and call it an orthogonal projection
matrix if it is also hermitian.
 If P is a projection matrix then P is positive semidefinite.
 IP is a projection matrix iff P is a projection matrix.

X(X^{H}X)^{#}X^{H}
is a projection whose range is the subspace spanned by the columns of X.
 If X has full column rank, we can equivalently write
X(X^{H}X)^{1}X^{H}
 xx^{H}/x^{H}x is a
projection onto the 1dimensional subspace spanned by x.
 If P and Q are projection matrices, then the following are
equivalent:
 P−Q is a projection matrix
 P−Q is positive
semidefinite
 ||Px|| >= ||Qx|| for all x.
 PQ=Q
 QP=Q
 [A: idempotent] A is a projection
matrix iff ||Ax|| <= ||x|| for all x.
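The full-column-rank construction quoted above is easy to exercise numerically; this sketch projects onto the column space of a random 5#2 matrix:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 2))                 # full column rank with probability 1
P = X @ np.linalg.inv(X.T @ X) @ X.T            # projection onto the column space of X

assert np.allclose(P, P.conj().T) and np.allclose(P @ P, P)   # hermitian and idempotent
assert np.isclose(np.trace(P), 2.0)                           # rank(P) = tr(P) = 2
assert np.allclose(P @ X, X)                                  # Px = x on the range of P
assert np.all(np.linalg.eigvalsh(P) > -1e-12)                 # positive semidefinite
```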
Quaternions are a generalization of complex numbers. A
quaternion consists of a real component and three independent imaginary
components and is written as r+xi+yj+zk where
i^{2}=j^{2}=k^{2}=ijk=−1.
It is approximately true that whereas the polar decomposition of a complex
number has a magnitude and 2-dimensional rotation, that of a quaternion has a
magnitude and a 3-dimensional rotation (see below). Quaternions form a
division ring rather than a field because although every nonzero
quaternion has a multiplicative inverse, multiplication is not in general commutative
(e.g. ij=−ji=k). Quaternions are widely used to represent
three-dimensional rotations in computer graphics and computer vision as an
alternative to orthogonal matrices with the following advantages: (a) more
compact, (b) possible to interpolate, (c) does not suffer from "gimbal lock",
(d) easy to correct for drift due to rounding errors.
We can represent a quaternion either as a real 4-vector
q_{R}=[r x y z]^{T} or a complex
2-vector q_{C}=[r+jy
x+jz]^{T}. This gives r+xi+yj+zk = [1 i j
k]q_{R} = [1
i]q_{C}. We can also represent it as a real 4#4
matrix Q_{R}=[r −x −y −z; x r −z y; y z
r −x; z −y x r] or a complex 2#2 matrix
Q_{C}=[r+jy −x+jz; x+jz r−jy]. Both the
real and the complex quaternion matrices obey the same arithmetic rules as
quaternions, i.e. the quaternion matrix representing the result of applying +,
−, * and / operations to quaternions is the same as the result of applying the
same operations to the corresponding quaternion matrices. Note that
q_{R}=Q_{R}[1 0 0
0]^{T} and
q_{C}=Q_{C}[1
0]^{T}; we can also define the inverse functions
Q_{R}=QUAT_{R}(q_{R})
and
Q_{C}=QUAT_{C}(q_{C}).
Note that the real and complex representations given above are not the only
possible choices.
In the following,
P_{R}=QUAT_{R}(p_{R}),
Q_{R}=QUAT_{R}(q_{R}),
K=DIAG([1 −1 −1 −1]) and q_{R}=[r x y
z]^{T}=[r; w].
P_{C},p_{C},Q_{C}
and q_{C} are the corresponding complex quantities; the
subscripts R and C are omitted below for results that apply to
both real and complex representations.
 The magnitude of the quaternion is
m=||q||=sqrt(r^{2}+x^{2}+y^{2}+z^{2}).
A unit quaternion has m = 1.
 det(Q_{R})=m^{4}.
det(Q_{C})=q^{H}q=m^{2}
 Any quaternion may be written as m times a unit quaternion.

Q^{−1}=(q^{H}q)^{−1}Q^{H}
is the reciprocal of the quaternion.
 Q^{H} is the conjugate of the quaternion;
this corresponds to reversing the signs of x, y and
z.
 PQ=QUAT(Pq) and
P+Q=QUAT(p+q). This illustrates that we may
often use the quaternion vectors rather than the matrices when performing
arithmetic with a resultant saving in computation.

P_{R}q_{R}=KQ_{R}^{T}Kp_{R}.
Note however that KQ_{R}^{T}K is not a
quaternion matrix unless Q is a multiple of I (i.e. the
corresponding quaternion is purely real).

(Q_{R}K)^{2}=(KQ_{R})^{2}

(P_{R}Q_{R}K)^{2}=(P_{R}K)^{2}(Q_{R}K)^{2}
 Q_{R}=rI+[0
−w^{T}; w SKEW(w)]
 [||q||=1]
(Q_{R}K)^{2}=(KQ_{R})^{2}=[1
0; 0 S] where S is a 3#3 rotation matrix corresponding to an angle of
2cos^{−1}(r) about an axis whose unit vector is
w/sqrt(1−r^{2}).
 Every 3#3 rotation matrix corresponds to a unit quaternion matrix
that is unique except for its sign, i.e. +Q and −Q correspond to
the same rotation matrix. Thus the decomposition of a quaternion into a
magnitude and 3-dimensional rotation is only invertible to within a sign
ambiguity.
 [||p||=||q||=1] If
(P_{R}K)^{2}=[1 0; 0
R] and (Q_{R}K)^{2}=[1 0;
0 S], then
(P_{R}K)^{2}(Q_{R}K)^{2}=(P_{R}Q_{R}K)^{2}=[1
0; 0 RS]. This shows that multiplying unit quaternions is
equivalent to multiplying rotation matrices but may be more efficient
computationally if it is possible to use quaternion vectors rather than
matrices for intermediate results.
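The real 4#4 representation and the product rule PQ=QUAT(Pq) can be checked directly; the sign pattern below matches the QUAT_R definition given above:

```python
import numpy as np

def quat_r(q):
    """QUAT_R: real 4#4 matrix of the quaternion q = [r, x, y, z]."""
    r, x, y, z = q
    return np.array([[r, -x, -y, -z],
                     [x,  r, -z,  y],
                     [y,  z,  r, -x],
                     [z, -y,  x,  r]])

p = np.array([1.0, 2.0, 3.0, 4.0])
q = np.array([0.5, -1.0, 2.0, 0.25])

# PQ = QUAT(Pq): quaternion vectors suffice when computing products
assert np.allclose(quat_r(p) @ quat_r(q), quat_r(quat_r(p) @ q))
# q_R = Q_R [1 0 0 0]^T and det(Q_R) = m^4
assert np.allclose(quat_r(q)[:, 0], q)
m = np.linalg.norm(q)
assert np.isclose(np.linalg.det(quat_r(q)), m ** 4)
```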
A nonzero matrix A is a rankone matrix iff it can be
decomposed as A=xy^{T}.
 If A=xy^{T} is a rankone matrix then
 If A=pq^{T} then
p=kx and q=y/k for some
scalar k. That is, the decomposition is unique to within a scalar
multiple.
 If A=xy^{T} is a square rankone matrix then
 A has a single nonzero eigenvalue equal to
x^{T}y=y^{T}x.
The associated right and left eigenvectors are respectively x and
y.
 Frobenius Norm:
||A||_{F}^{2}=tr(A^{H}A)=x^{H}x×y^{H}y
 Pseudoinverse:
A^{+}=A^{H}/
||A||_{F}^{2}
=A^{H}/tr(A^{H}A)=A^{H}/(x^{H}x×y^{H}y)
where ||A||_{F} is the Frobenius Norm.
A matrix A_{n#n} is reducible (or fully decomposable) if there exists a permutation matrix P such that
P^{T}AP is of the form [B C; 0 D]
where B and D are square. As a special case
0_{1#1} is regarded as reducible. A matrix that is not
reducible is irreducible.
WARNING: The term reducible is sometimes used to mean one that
has more than one block in its Jordan Normal
Form.
 An irreducible matrix has at least one nonzero offdiagonal element
in each row and column.
 A_{n#n} is irreducible iff (I +
ABS(A))^{n−1} is positive.
A polynomial matrix, A, of order p
is regular if det(A) is not identically zero.
 An n#n square polynomial matrix, A(x), of order
p is regular iff det(A) is a polynomial in x of degree
n*p.
[Real]: A Rotation matrix,
R, is an n#n matrix of the form R=U[Q 0 ; 0
I]U^{T} where U is any orthogonal matrix and Q is a matrix of the form
[cos(x) -sin(x); sin(x) cos(x)]. Multiplying a
vector by R rotates it by an angle x in the plane containing
u and v, the first two columns of U. The direction of
rotation is such that if x=90 degrees, u will be rotated to
v.
 A Rotation matrix is orthogonal with a
determinant of +1.
 All but two of the eigenvalues of R equal unity and the remaining
two are exp(jx) and exp(-jx) where j is the square root of
-1. The corresponding unit modulus eigenvectors are [u v][1
-j]^{T}/sqrt(2) and [u v][1
+j]^{T}/sqrt(2).

R=I+(cos(x)-1)(uu^{T}+vv^{T})+sin(x)(vu^{T}-uv^{T})
where u and v are the first two columns of U.
 If x=90 degrees then
R=I-uu^{T}-vv^{T}+vu^{T}-uv^{T}
.
 If x=180 degrees then
R=I-2uu^{T}-2vv^{T}
 If x=270 degrees then
R=I-uu^{T}-vv^{T}-vu^{T}+uv^{T}
 [3#3] R =
ww^{T}+cos(x)(I-ww^{T})+sin(x)SKEW(w)
=
I+sin(x)SKEW(w)+(1-cos(x))SKEW(w)^{2}
where the unit vector w = u × v is the axis of
rotation. [See skewsymmetric for the definition
and properties of SKEW()].
 tr(R) = 2 cos(x) + 1
 Every 3#3 orthogonal matrix is either a rotation
matrix or else a rotation matrix plus a reflection in the plane of the rotation
according to whether its determinant is +1 or -1.
 The product of two 3#3 rotation matrices is a rotation matrix.
 A 3#3 rotation matrix may be expressed as the product of three rotations
about the x, y and z axes respectively. The corresponding
rotation angles are the Euler angles. The order in which the rotations
are performed is significant and is not standardised. Using Euler angles is
often a bad idea because their relation to the rotation axis direction is not
continuous.
 R=(I-K)(I+K)^{-1} where
K=-tan(x/2)*SKEW(w) except when x=180 degrees. This
is the Cayley transform.
 If x=90 degrees then
R=ww^{T}+SKEW(w)
=(I+SKEW(w))(I-SKEW(w))^{-1}
 If x=180 degrees then
R=2ww^{T}-I
 If x=270 degrees then
R=ww^{T}-SKEW(w)=(I-SKEW(w))(I+SKEW(w))^{-1}
 ADJ(R-I)=2(1-cos(x))ww^{T} where
ADJ() denotes the adjoint. All
columns of this rank1 matrix are multiples of w.
 Every 3#3 rotation matrix corresponds to a quaternion matrix that is unique except for its sign.
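The axis-angle (Rodrigues) form of a 3#3 rotation above can be checked numerically; a short numpy sketch (the helper names skew and rot are illustrative):

```python
import numpy as np

def skew(w):
    """SKEW(w), so that skew(w) @ b == np.cross(w, b)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rot(w, x):
    """Rotation by angle x about unit axis w (the Rodrigues form above)."""
    K = skew(w)
    return np.eye(3) + np.sin(x) * K + (1 - np.cos(x)) * (K @ K)

w = np.array([1.0, 2.0, 2.0]) / 3.0      # a unit axis vector
x = 0.7
R = rot(w, x)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
assert np.isclose(np.trace(R), 2 * np.cos(x) + 1)   # tr(R) = 2cos(x) + 1
assert np.allclose(R @ w, w)                        # the axis is left fixed
```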
A shift matrix, or lower shift matrix, Z,
is a matrix with ones below the main diagonal and zeros elsewhere.
Z^{T} has ones above the main diagonal and zeros
elsewhere and is an upper shift matrix.
 ZA, Z^{T}A, AZ,
AZ^{T}, ZAZ^{T} are equal to the
matrix A shifted one position down, up left, right, and down along the
main diagonal respectively.
 Z_{n#n} is nilpotent.
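A quick numpy illustration of the shift matrix's action and nilpotency (the sample A is arbitrary):

```python
import numpy as np

n = 4
Z = np.eye(n, k=-1)                 # lower shift matrix: ones below the diagonal
A = np.arange(16.0).reshape(n, n)

# Z A shifts A down one row (the top row becomes zero)
assert np.allclose((Z @ A)[1:], A[:-1]) and np.allclose((Z @ A)[0], 0)
# A Z^T shifts A one column to the right
assert np.allclose((A @ Z.T)[:, 1:], A[:, :-1])
assert np.allclose(np.linalg.matrix_power(Z, n), 0)   # Z is nilpotent
```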
A signature matrix is a diagonal matrix
whose diagonal entries are all +1 or 1.
An n#n square matrix is simple (or, equivalently,
diagonalizable or
nondefective) if all its eigenvalues are regular, otherwise it is
defective.
A matrix is singular if it has no inverse.
 A matrix A is singular iff det(A)=0.
A square matrix K is SkewHermitian (or
antihermitian) if K = -K^{H}, that is
a(i,j)=-conj(a(j,i))
For real matrices, SkewHermitian and skewsymmetric are equivalent. The following properties
apply also to real skewsymmetric matrices.
 S is Hermitian iff jS is
skewHermitian where j = sqrt(-1)
 K is skewHermitian iff x^{H}Ky =
-x^{H}K^{H}y for all x
and y.
 SkewHermitian matrices are closed under addition, multiplication by a
scalar, raising to an odd power and (if nonsingular) inversion.
 SkewHermitian matrices are normal.
 If K is skewhermitian, then K^{2} is hermitian.
 The eigenvalues of a skewHermitian matrix are either 0 or pure
imaginary.
 Any matrix A has a unique decomposition A = S +
K where S is Hermitian and K is
skewhermitian.
 K is skewhermitian iff K=ln(U) or
U=exp(K) for some unitary U
.
 For any complex a with |a|=1, there is a 1to1
correspondence between the unitary matrices, U,
not having a as an eigenvalue and skewhermitian matrices, K,
given by U=a(K-I)(I+K)^{-1}
and
K=(aI+U)(aI-U)^{-1}. These
are Cayley's formulae.
 Taking a=-1 gives
U=(I-K)(I+K)^{-1} and
K=(I-U)(I+U)^{-1}.
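The Hermitian/skew-Hermitian decomposition and the a=-1 Cayley formulae can be verified numerically; a numpy sketch with an arbitrary random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# unique decomposition A = S + K, with S Hermitian and K skew-Hermitian
S = (A + A.conj().T) / 2
K = (A - A.conj().T) / 2
assert np.allclose(S, S.conj().T) and np.allclose(K, -K.conj().T)
assert np.allclose(S + K, A)

# Cayley formula with a = -1: U = (I-K)(I+K)^{-1} is unitary ...
I = np.eye(3)
U = (I - K) @ np.linalg.inv(I + K)
assert np.allclose(U.conj().T @ U, I)
# ... and the inverse map recovers K: K = (I-U)(I+U)^{-1}
assert np.allclose((I - U) @ np.linalg.inv(I + U), K)
```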
A square matrix K is skewsymmetric (or
antisymmetric) if K = -K^{T}, that is
a(i,j)=-a(j,i)
For real matrices, skewsymmetric and SkewHermitian are equivalent. Most properties are listed
under skewHermitian .
 Skewsymmetry is preserved by congruence.
 The diagonal elements of a skewsymmetric matrix are all 0. [1.10]
 The rank of a real or complex skewsymmetric matrix is even. [1.11]
 [Real] The nonzero eigenvalues of a
real skewsymmetric matrix are all purely imaginary and occur in complex
conjugate pairs.
 [Real] If K is skewsymmetric, then I - K is
nonsingular
 [Real] If A is skewsymmetric,
then x^{T}Ax = 0 for all real x.
 [Real] If a=+1 or a=-1,
there is a 1to1 correspondence between real skewsymmetric matrices,
K, and those orthogonal matrices, Q,
not having a as an eigenvalue given by
Q=a(K-I)(K+I)^{-1} and
K=(aI+Q)(aI-Q)^{-1}
. These are Cayley's formulae.
 K is real skewsymmetric iff K=ln(Q) or Q =
exp(K) for some real proper orthogonal matrix
Q.
 [Real 3#3] All 3#3 skewsymmetric
matrices have the form SKEW(a)
= [0 -a_{3} a_{2}; a_{3} 0
-a_{1}; -a_{2} a_{1} 0] for some
vector a.
 SKEW(ka) = k SKEW(a) for any
scalar k
 The vector cross product is given by a × b =
SKEW(a) b = -SKEW(b) a
 SKEW(a) b = 0 iff a = kb for some
scalar k

SKEW(a)^{2n}=(-1)^{n-1}(a^{T}a)^{n-1}aa^{T}+(-1)^{n}(a^{T}a)^{n}I=(-1)^{n-1}(a^{T}a)^{n-1}(aa^{T}-(a^{T}a)I)
for integer n>=1

SKEW(a)^{2}=aa^{T}-(a^{T}a)I

SKEW(a)^{2n+1}=(-1)^{n}(a^{T}a)^{n}SKEW(a)
for integer n>=0

SKEW(a)^{3}=-(a^{T}a)SKEW(a)
 The eigenvalues of SKEW(a) are 0 and
±sqrt(-a^{T}a)
 The eigenvector associated with 0 is ka
 [Real a]: Eigenvalues are 0 and ±js where
j is sqrt(-1), s=||a|| and a=[p q r]^{T}. Unless q=r=0, a
suitable pair of eigenvectors are
[q^{2}+r^{2}; -pq-jrs; -pr+jqs]^{T} and
[q^{2}+r^{2}; -pq+jrs; -pr-jqs]^{T}.
 The singular values of SKEW(a)
are ||a||, ||a|| and 0.
 If z=a/||a|| and w=[z_{2}
z_{3}]^{T}, then a singular value decomposition is
SKEW(a)=USV^{T} where
U=[z^{T};
w I+(z_{1}-1)^{-1}ww^{T}]J,
S=DIAG(||a||, ||a||, 0) and
V=U [0 1 0; -1 0 0; 0 0 1]
where J is the exchange matrix (i.e. I
with the column order reversed). All other decompositions may be obtained by
postmultiplying both U and V by
DIAG(Q_{[2#2]}, 1) for some orthogonal Q and/or negating the final column of one
or both of U and V.
 SKEW(a)^{T} SKEW(a) =
SKEW(a) SKEW(a)^{T} =
(a^{T}a) I - aa^{T}
 tr( SKEW(a)^{T}
SKEW(a))=2a^{T}a
 det([a b c]) = a^{T}
SKEW(b) c = b^{T}
SKEW(c) a = c^{T}
SKEW(a) b, this is the scalar triple product.
 a^{T} SKEW(b) a =
a^{T} SKEW(a) b =
b^{T} SKEW(a) a = 0 for all a
and b
 SKEW(a)SKEW(b) =
ba^{T}-(b^{T}a)I
 SKEW(a)SKEW(b) c =
(a^{T}c)b -
(a^{T}b)c, this is the vector triple
product.
 For any a and B_{[3#3]},
 B^{T}SKEW(Ba)B = det(B) *
SKEW(a)
 SKEW(Ba)B = ADJ(B)^{T}
SKEW(a) where ADJ(B) denotes the adjoint matrix.
 [det(B)!=0]: SKEW(Ba) =
det(B) *
B^{-T}SKEW(a)B^{-1}
 SKEW(SKEW(a)b) = ba^{T} -
ab^{T}
 [U orthogonal] The product E =
U SKEW(a) = SKEW(Ua) U is an essential matrix
 E^{T}E =
(a^{T}a) I -
aa^{T}
 tr(E^{T}E) =
2a^{T}a.
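Several of the SKEW() identities above can be spot-checked with numpy (the vectors a, b and matrix B are arbitrary):

```python
import numpy as np

def skew(a):
    """SKEW(a), so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -a[2], a[1]],
                     [a[2], 0, -a[0]],
                     [-a[1], a[0], 0.0]])

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.0, 2.0])
B = np.array([[2.0, 1, 0], [0, 1, 1], [1, 0, 3]])

assert np.allclose(skew(a) @ b, np.cross(a, b))                 # a × b
assert np.allclose(skew(a) @ skew(a),
                   np.outer(a, a) - (a @ a) * np.eye(3))        # SKEW(a)^2
assert np.allclose(skew(a) @ skew(b),
                   np.outer(b, a) - (b @ a) * np.eye(3))
assert np.allclose(B.T @ skew(B @ a) @ B, np.linalg.det(B) * skew(a))
assert np.allclose(skew(skew(a) @ b), np.outer(b, a) - np.outer(a, b))
```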
A matrix is sparse if it has relatively few nonzero
elements.
A Stability or Stable matrix is one whose eigenvalues
all have strictly negative real parts.
A semistable matrix is one whose eigenvalues all have nonpositive
real parts.
See also: Convergent
A real nonnegative square matrix A is
stochastic if all its rows sum to 1. If all its columns also sum to 1 it is Doubly Stochastic.
 All eigenvalues of A have absolute value <= 1.
 1 is an eigenvalue with eigenvector [1 1 ... 1]^{T}
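A small numpy illustration of these two properties (the sample matrix is arbitrary):

```python
import numpy as np

# a row-stochastic matrix: nonnegative entries, rows summing to 1
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
assert np.allclose(A.sum(axis=1), 1.0)

lam = np.linalg.eigvals(A)
assert np.all(np.abs(lam) <= 1 + 1e-12)          # all |eigenvalues| <= 1
assert np.isclose(np.max(lam.real), 1.0)         # 1 is an eigenvalue
assert np.allclose(A @ np.ones(3), np.ones(3))   # with eigenvector [1 1 1]^T
```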
Substochastic
A real nonnegative square matrix A is
substochastic if all its rows sum to <=1.
A is subunitary if ||AA^{H}x|| =
||A^{H}x|| for all x. A is also called a
partial isometry.
The following are equivalent:
 A is subunitary
 A^{H}A is a projection matrix
 AA^{H}A = A
 A^{+} = A^{H}
 A is subunitary iff A^{H} is subunitary iff
A^{+} is subunitary.
 If A is subunitary and nonsingular then A is unitary.
A square matrix A is symmetric if A =
A^{T}, that is a(i,j) = a(j,i).
Most properties of real symmetric matrices are listed under Hermitian .
 [Real]: If A is real,
symmetric, then A=0 iff x^{T}Ax = 0 for all
real x.
 [Real]: A real symmetric matrix is
orthogonally similar to a
diagonal matrix.
 [Real, 2#2] A=[a b; b
d]=RDR^{T} where D is diagonal and
R=[cos(t) -sin(t); sin(t) cos(t)] and
t=½tan^{-1}(2b/(a-d)).
 A is symmetric iff it is congruent to a diagonal matrix.
 Any square matrix may be uniquely decomposed as the sum of a symmetric matrix and a skewsymmetric matrix.
 Any symmetric matrix A can be expressed as
A=UDU^{T} where
U is unitary and D is
real, nonnegative and diagonal with its diagonal elements arranged in
nonincreasing order (i.e. d_{i,i} >= d_{j,j}
for i < j). This is the Takagi decomposition and is a special case of the
singular value decomposition.
See also Hankel.
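The 2#2 diagonalisation above can be checked numerically (sample entries are arbitrary; arctan2 is used so that the a=d case is also handled):

```python
import numpy as np

# the 2#2 case: A = [a b; b d] = R D R^T with a plane rotation R
a, b, d = 3.0, 1.0, 1.5
A = np.array([[a, b], [b, d]])
t = 0.5 * np.arctan2(2 * b, a - d)       # t = (1/2) tan^{-1}(2b/(a-d))
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
D = R.T @ A @ R
assert np.allclose(D - np.diag(np.diag(D)), 0)   # D is diagonal
assert np.allclose(R @ D @ R.T, A)               # A = R D R^T
```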
A real matrix, A, is symmetrizable if
A^{T}M = MA for some positive definite M.
A matrix, A_{[2n#2n]}, is
symplectic if A^{H}KA=K where K
is the antisymmetric orthogonal matrix [0 I; -I
0].
 A is symplectic iff
A^{-1}=K^{T}A^{H}K
 If a symplectic matrix A=[P Q; R S] where
P,Q,R,S are all n#n, then
A^{-1}=[S^{H} -Q^{H}
; -R^{H} P^{H}]
 The set of symplectic matrices of size 2n#2n is closed under
multiplication and inversion and so forms a multiplicative group.
 A is symplectic iff it preserves the symplectic form
x^{H}Ky, that is
(Ax)^{H}K(Ay) =
x^{H}Ky for all x and y. This
is analogous to the way that a unitary matrix, U, preserves the inner
product:
(Ux)^{H}(Uy)=x^{H}y.
See also: hamiltonian
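Both the defining relation and the block inverse formula can be checked numerically; the numpy sketch below builds a simple symplectic matrix [I X; 0 I] with X Hermitian (an illustrative choice, not the general case):

```python
import numpy as np

n = 2
K = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# A = [I X; 0 I] with X Hermitian is symplectic
X = np.array([[1.0, 2.0], [2.0, -1.0]])
A = np.block([[np.eye(n), X], [np.zeros((n, n)), np.eye(n)]])
assert np.allclose(A.conj().T @ K @ A, K)        # A^H K A = K

# block inverse formula: A = [P Q; R S]  =>  A^{-1} = [S^H -Q^H; -R^H P^H]
P, Q = A[:n, :n], A[:n, n:]
R, S = A[n:, :n], A[n:, n:]
Ainv = np.block([[S.conj().T, -Q.conj().T], [-R.conj().T, P.conj().T]])
assert np.allclose(A @ Ainv, np.eye(2 * n))
```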
A toeplitz matrix, A, has constant diagonals. In other
words a_{i,j} depends only on i-j.
We define
A=TOE(b_{[m+n-1]})_{[m#n]}
to be the m#n matrix with a_{i,j} =
b_{i-j+n}. Thus, b is the column
vector formed by starting at the top right element of A, going backwards
along the top row of A and then down the left column of A.
In the topics below, J is the exchange
matrix.
 A toeplitz matrix is persymmetric and so, if it
exists, is its inverse. A symmetric toeplitz matrix is
bisymmetric.
 If A and B are toeplitz, then so are A+B and
AB. Note that AB and A^{1} are not necessarily
toeplitz.
 If A is toeplitz, then A^{T},
A^{H} and JAJ are Toeplitz while JA,
A^{T}J, AJ and
JA^{T} are Hankel.
 If A_{[n#n]} is toeplitz, then
JA^{T}J=(JAJ)^{T}=A while
JA=A^{T}J and
AJ=JA^{T} are Hankel.
 TOE(a+b) = TOE(a) +
TOE(b)

TOE(b_{[m+n-1]})_{[m#n]}=TOE(Jb)_{[n#m]}^{T}

TOE(b_{[2n-1]})_{[n#n]}=TOE(Jb)_{[n#n]}^{T}
 If the lower triangular matrices
A_{[n#n]}=TOE([0_{[n-1]};
p_{[n]}]) and
B_{[n#n]}=TOE([0_{[n-1]};
q_{[n]}]) then:
 Aq = Bp =
conv(p,q)_{1:n}
 AB = BA = TOE([0_{[n-1]};
Aq]) = TOE([0_{[n-1]}; Bp]) =
TOE([0_{[n-1]};
conv(p,q)_{1:n}])
 A^{-1} and B^{-1} are toeplitz
lower triangular if they exist.
 If the upper triangular matrices
A_{[n#n]}=TOE([
p_{[n]}; 0_{[n-1]}]) and
B_{[n#n]}=TOE([
q_{[n]}; 0_{[n-1]}]) then:
 Aq = Bp =
conv(p,q)_{n:2n-1}
 AB = BA = TOE([Aq;
0_{[n-1]}]) = TOE([Bp;
0_{[n-1]}]) =
TOE([conv(p,q)_{n:2n-1};
0_{[n-1]}])
 A^{-1} and B^{-1} are toeplitz
upper triangular if they exist.
 The product
TOE(a)_{[m#r]}TOE(b)_{[r#n]}
is toeplitz iff
a_{r+1:r+m-1}b_{1:n-1}^{T}
=
a_{1:m-1}b_{r+1:r+n-1}^{T}
[1.21]. This m-1#n-1
rankone matrix identity is equivalent to requiring one
of the following conditions:
 Both
a_{r+1:r+m-1}=ka_{1:m-1}
and
b_{r+1:r+n-1}=kb_{1:n-1}
for the same scalar k. Note that a_{1:m-1} and
a_{r+1:r+m-1} will overlap if
m>r+1 and similarly for b if
n>r+1.

 For TOE(a) to be square and symmetric,
a_{1:m-1} must be either symmetric or antisymmetric with
k=+1 or -1 respectively (a similar condition applies to
TOE(b)).
 Either a_{r+1:r+m-1}= 0 or
b_{1:n-1} = 0 and also either
a_{1:m-1}= 0 or
b_{r+1:r+n-1}= 0 . If
m=r=n then this condition is equivalent to requiring
that A and B are either both upper triangular or both lower
triangular or else one of them is diagonal.
Some special cases of this are:

TOE(a)_{[m#r]}TOE(b)_{[r#n]}
is toeplitz if a_{r+1:r+m-1} =
a_{1:m-1} and b_{r+1:r+n-1}=
b_{1:n-1}. Note that this does not make the matrices
symmetrical even for square matrices because a_{1:m-1}
goes backwards along the top row of the matrix.
 TOE([0_{[m-1]};
a_{[r]}])_{[m#r]}TOE([0_{[n-1]};
b_{[r]}])_{[r#n]} =
TOE([0_{[n+m-r-1]};
conv(a,b)_{1:r}])
 TOE([a_{[r]};
0_{[m-1]}])_{[m#r]}TOE([b_{[r]};
0_{[n-1]}])_{[r#n]} =
TOE([conv(a,b)_{r:2r-1};
0_{[n+m-r-1]}])
 If A=TOE(b)_{[m#n]} then
JAJ=TOE(Jb)_{[m#n]}
 TOE([0_{[n-p]};
a_{[m]};
0_{[q-m]}])_{[q-p+1#n]}
b_{[n]} =
TOE([0_{[m-p]};
b_{[n]};
0_{[q-n]}])_{[q-p+1#m]}
a_{[m]} =
conv(a,b)_{p:q} provided that
p<=m,n<=q and
conv(a,b)_{i} is taken to be 0 for i
outside the range 1 to m+n-1.

TOE(a_{[m]})_{[m-n+1#n]}
b_{[n]} =
conv(a,b)_{n:m}
 TOE([0_{[n-p]};
a_{[n]}])_{[n-p+1#n]}
b_{[n]} =
TOE([0_{[n-p]};
b_{[n]}])_{[n-p+1#n]}
a_{[n]} =
conv(a,b)_{p:n}
 TOE([0_{[n-1]};
a_{[n]}])_{[n#n]}
b_{[n]} =
TOE([0_{[n-1]};
b_{[n]}])_{[n#n]}
a_{[n]} =
conv(a,b)_{1:n}
 TOE([a_{[n]};
0_{[q-n]}])_{[q-n+1#n]}
b_{[n]} =
TOE([b_{[n]};
0_{[q-n]}])_{[q-n+1#n]}
a_{[n]} =
conv(a,b)_{n:q}
 TOE([a_{[n]};
0_{[n-1]}])_{[n#n]}
b_{[n]} =
TOE([b_{[n]};
0_{[n-1]}])_{[n#n]}
a_{[n]} =
conv(a,b)_{n:2n-1}
 TOE([0_{[n-1]};
a_{[m]};
0_{[n-1]}])_{[m+n-1#n]}
b_{[n]} =
TOE([0_{[m-1]};
b_{[n]};
0_{[m-1]}])_{[m+n-1#m]}
a_{[m]} =
conv(a,b)
 A symmetric toeplitz matrix is of the form
S_{[n#n]} =
TOE([Ja_{[n]};
0_{[n-1]}]+[0_{[n-1]};
a_{[n]}])
 JSJ = S
 Sb = (TOE([b_{[n]};
0_{[n-1]}])_{[n#n]}J+TOE([0_{[n-1]};
b_{[n]}])_{[n#n]})a . The
matrix on the right is the sum of a lower triangular toeplitz and an upper
triangular hankel matrix.
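The TOE() indexing and the lower-triangular product/convolution identity above can be checked with numpy (the helper toe and the sample vectors are illustrative):

```python
import numpy as np

def toe(b, m, n):
    """Illustrative TOE(b)[m#n]: a(i,j) = b_(i-j+n) in the 1-based
    notation above; b must have length m+n-1."""
    assert len(b) == m + n - 1
    i, j = np.indices((m, n))
    return b[i - j + n - 1]              # 0-based version of b_{i-j+n}

p = np.array([1.0, 2.0, 3.0])
q = np.array([4.0, 5.0, 6.0])
n = 3
# lower triangular toeplitz A = TOE([0_{n-1}; p])
A = toe(np.concatenate([np.zeros(n - 1), p]), n, n)
assert np.allclose(A, np.tril(A))
# A q = conv(p,q)_{1:n}, as stated above
assert np.allclose(A @ q, np.convolve(p, q)[:n])
# a toeplitz matrix is persymmetric: J A^T J = A
J = np.fliplr(np.eye(n))
assert np.allclose(J @ A.T @ J, A)
```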
A is upper triangular if a(i,j)=0 whenever
i>j.
A is lower triangular if a(i,j)=0
whenever i<j.
A is triangular iff it is either upper or lower
triangular.
A triangular matrix A is strictly triangular if its diagonal
elements all equal 0.
A triangular matrix A is unit triangular if its diagonal
elements all equal 1.
 [Real]: An orthogonal triangular matrix must be diagonal
 [n*n]: The determinant of a triangular matrix is the product of its
diagonal elements.
 If A is unit triangular then inv(A) exists and is unit
triangular.
 A strictly triangular matrix is nilpotent .
 The set of upper triangular matrices is closed under multiplication and
addition and (where possible) inversion.
 The set of lower triangular matrices is closed under multiplication and
addition and (where possible) inversion.
A is tridiagonal or Jacobi if A(i,j)=0 whenever
|i-j|>1. In other words its nonzero elements lie either on or immediately
adjacent to the main diagonal.
 A is tridiagonal iff it is both upper and lower Hessenberg.
A complex square matrix A is unitary if
A^{H}A = I. A is also sometimes
called an isometry.
A real unitary matrix is called orthogonal .The
following properties apply to orthogonal matrices as well as to unitary
matrices.
 Unitary matrices are closed under multiplication, raising to an integer
power and inversion
 U is unitary iff U^{H} is unitary.
 Unitary matrices are normal.
 U is unitary iff ||Ux|| = ||x|| for all x.
 The eigenvalues of a unitary matrix all have an absolute value of 1.
 The determinant of a unitary matrix has an absolute value of 1.
 A matrix is unitary iff its columns form an orthonormal basis.
 U is unitary iff U=exp(K) or K=ln(U)
for some skewhermitian K.
 For any complex a with |a|=1, there is a 1to1
correspondence between the unitary matrices, U, not having a as
an eigenvalue and skewhermitian matrices,
K, given by
U=a(K-I)(I+K)^{-1} and
K=(aI+U)(aI-U)^{-1}. These
are Cayley's formulae.
 Taking a=-1 gives
U=(I-K)(I+K)^{-1} and
K=(I-U)(I+U)^{-1}.
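A numpy spot-check of these properties, using the Q factor of a QR decomposition as a convenient source of unitary matrices (an illustrative construction):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(M)                   # the Q factor of a QR decomposition is unitary

assert np.allclose(U.conj().T @ U, np.eye(4))           # U^H U = I
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)   # unit-modulus eigenvalues
assert np.isclose(abs(np.linalg.det(U)), 1.0)           # |det(U)| = 1
# norm preservation: ||Ux|| = ||x||
x = rng.standard_normal(4)
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
```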
A Vandermonde matrix, V_{[n#n]}, has the
form [1 x
x^{•2} …
x^{•(n-1)}] for some column vector x (where
x^{•2} denotes elementwise squaring). A general
element is given by v(i,j) =
(x_{i})^{j-1}. All elements of the first column
of the matrix equal 1. Vandermonde matrices arise in connection with fitting
polynomials to data.
WARNING: Some authors define a Vandermonde matrix to be either the
transpose or the horizontally flipped version of the above definition.
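A short numpy illustration; note that np.vander defaults to the flipped convention mentioned in the warning, so increasing=True is needed to match the definition above (the sample data are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(x, increasing=True)        # v(i,j) = x_i^(j-1)
assert np.allclose(V[:, 0], 1.0)         # first column is all ones
assert np.allclose(V[2], [1, 3, 9, 27])  # row for x_3 = 3

# polynomial fitting: solve V c = y for the coefficients of the
# interpolating polynomial y = c0 + c1 x + c2 x^2 + c3 x^3
y = np.array([1.0, 3.0, 7.0, 13.0])
c = np.linalg.solve(V, y)
assert np.allclose(np.polyval(c[::-1], x), y)   # polyval wants highest power first
```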
The vectorized transpose matrix, TVEC(m,n), is
the mn#mn permutation matrix
whose i,j^{th} element is 1 if
j=1+m(i-1)-(mn-1)floor((i-1)/n) or 0
otherwise.
For clarity, we write T_{m,n} =
TVEC(m,n) in this section.
 [A_{[m#n]}]
(A^{T}): = T_{m,n}A:
[see vectorization, R.6]
 T_{m,n} is a permutation
matrix and is therefore orthogonal.
 T_{1,n} = T_{n,1} =
I
 T_{n,m} =
T_{m,n}^{T} =
T_{m,n}^{-1}
 [A_{[m#n]},
B_{[p#q]}] B ⊗ A =
T_{p,m} (A ⊗
B) T_{n,q}
 [A_{[m#n]},
B_{[p#q]}]
(A ⊗ B)
T_{n,q} =
T_{m,p} (B ⊗
A)
 [a_{[n]},
B_{[p#q]}] (a ⊗
B) = T_{n,p} (B ⊗
a)
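The index formula for TVEC(m,n) and the Kronecker identity above can be checked directly (the helper tvec is illustrative):

```python
import numpy as np

def tvec(m, n):
    """TVEC(m,n): permutation with T @ vec(A) == vec(A.T) for A m#n,
    using the index formula above (vec = column-major stacking)."""
    T = np.zeros((m * n, m * n))
    for i in range(1, m * n + 1):
        j = 1 + m * (i - 1) - (m * n - 1) * ((i - 1) // n)
        T[i - 1, j - 1] = 1
    return T

m, n = 2, 3
A = np.arange(6.0).reshape(m, n)
vec = lambda X: X.reshape(-1, order='F')     # column-major vectorization
T = tvec(m, n)
assert np.allclose(T @ vec(A), vec(A.T))     # (A^T): = T_{m,n} A:
assert np.allclose(T.T @ T, np.eye(m * n))   # T is a permutation, hence orthogonal

# B (x) A = T_{p,m} (A (x) B) T_{n,q}
B = np.array([[1.0, 2.0], [3.0, 4.0]])       # p = q = 2
assert np.allclose(np.kron(B, A),
                   tvec(2, m) @ np.kron(A, B) @ tvec(n, 2))
```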
The zero matrix, 0, has a(i,j)=0 for all i,j
 [Complex]: A=0 iff x^{H}Ax = 0
for all x .
 [Real]: If A is symmetric, then A=0 iff
x^{T}Ax = 0 for all x.
 [Real]: A=0 iff x^{T}Ay = 0
for all x and y.
 A=0 iff A^{H}A = 0
This page is part of The Matrix Reference
Manual. Copyright © 1998-2022 Mike Brookes, Imperial
College, London, UK. See the file gfl.html for copying
instructions. Please send any comments or suggestions to "mike.brookes" at
"imperial.ac.uk".
Updated: $Id: special.html 11291 2021-01-05 18:26:10Z dmb $