
Square matrices **A** and **B** are congruent if there
exists a non-singular **X** such that **B**=
**X**^{T}**AX** . *Congruence* is an equivalence
relation.

For *Hermitian congruence*, see Conjunctivity.

Congruence implies equivalence.

- Congruence preserves symmetry, skew-symmetry and definiteness.
- **A** is congruent to a diagonal matrix iff it is Hermitian.
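As an informal numerical check (not part of the original manual; variable names such as `sym_preserved` are ours), the NumPy sketch below forms a congruent **B** = **X**^{T}**AX** and confirms that symmetry and positive definiteness survive:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric positive definite matrix: M M^T + I is always SPD.
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)

# A (generically) non-singular X gives the congruent matrix B = X^T A X.
X = rng.standard_normal((4, 4))
B = X.T @ A @ X

sym_preserved = np.allclose(B, B.T)                       # symmetry survives congruence
def_preserved = bool(np.all(np.linalg.eigvalsh(B) > 0))   # definiteness survives congruence
```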

Square matrices **A** and **B** are conjunctive or
*Hermitian congruent* or *star-congruent* if there exists a non-singular **X** such that
**B** = **X**^{H}**AX**. *Conjunctivity* is an
equivalence relation.

- If **A** is Hermitian, it is conjunctive to a diagonal matrix of the form **D** = **DIAG**(**I**_{p#p}, -**I**_{n#n}, **0**_{z#z}). **D** is the inertia matrix of **A** and the inertia of **A** is the scalar triple (*p*, *n*, *z*).
- Two Hermitian matrices are conjunctive iff they have the same inertia.

- If **A** is skew-Hermitian, it is conjunctive to a matrix of the form **DIAG**(*j***I**, -*j***I**, **0**).
- **A** is conjunctive to **I** iff it is positive definite Hermitian, in which case **A** = **U**^{H}**IU** for some non-singular upper triangular **U**.
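Sylvester's law of inertia above can be spot-checked numerically. The sketch below (ours, not from the manual; the helper `inertia` is a hypothetical name) computes the inertia from the eigenvalue signs and confirms it is unchanged by conjunctivity:

```python
import numpy as np

def inertia(A, tol=1e-8):
    """Return (p, n, z): the counts of positive, negative and zero eigenvalues."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(1)
A = np.diag([3.0, 1.0, -2.0, 0.0])   # Hermitian, inertia (2, 1, 1)
X = rng.standard_normal((4, 4))      # generically non-singular
B = X.conj().T @ A @ X               # conjunctive to A

same_inertia = inertia(B) == inertia(A) == (2, 1, 1)
```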

Two matrices are *conjugate* iff they are similar.

The *Direct sum* of matrices **A**, **B**,
... is written **A**⊕**B**⊕... =
**DIAG**(**A**, **B**, ...).

Two *m#n* matrices, **A** and **B**, are equivalent
iff there exists a non-singular *m#m* matrix, **M**, and a non-singular
*n#n* matrix, **N**, with **B**=**MAN**.
*Equivalence* is an equivalence relation.

- **A** and **B** are equivalent iff they have the same rank.
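A quick NumPy illustration of the rank criterion (ours, not part of the original page): multiplying a rank-1 matrix by generically non-singular **M** and **N** leaves the rank unchanged.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # a 2x3 matrix of rank 1
M = rng.standard_normal((2, 2))       # generically non-singular
N = rng.standard_normal((3, 3))
B = M @ A @ N                         # B is equivalent to A

ranks_match = (np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 1)
```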

The *Hadamard product* of two *m*#*n* matrices **A** and
**B**, written in this website **A** • **B**, is formed by
elementwise multiplication of the two matrices, which must therefore be the
same size.

- If **A** and **B** are +ve definite then **A** • **B** is +ve definite.
- If **A** and **B** are +ve semi-definite then **A** • **B** is +ve semi-definite and rank(**A** • **B**) <= rank(**A**) rank(**B**).
- **A** • **B** = **B** • **A**
- **A**^{T} • **B**^{T} = (**A** • **B**)^{T}
- (**a** • **b**)(**c** • **d**)^{T} = **ac**^{T} • **bd**^{T} = **ad**^{T} • **bc**^{T}
- **a** • **b** = **DIAG**(**a**)**b**
- **DIAG**(**a** • **b**) = **DIAG**(**a**)**DIAG**(**b**)
- **X** • **ab**^{T} = **DIAG**(**a**)**X** **DIAG**(**b**)
- (**X** • **ab**^{T})(**Y** • **cd**^{T}) = **ad**^{T} • (**X** **DIAG**(**b** • **c**)**Y**)
- (**X** • **ab**^{T})**y** = **a** • (**X**(**b** • **y**))
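Several of these Hadamard identities can be verified directly with NumPy, whose `*` operator is the elementwise product (this sketch is ours, not from the manual):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, y = rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(4)
X = rng.standard_normal((4, 4))

# a . b = DIAG(a) b
id1 = np.allclose(a * b, np.diag(a) @ b)
# X . ab^T = DIAG(a) X DIAG(b)
id2 = np.allclose(X * np.outer(a, b), np.diag(a) @ X @ np.diag(b))
# (X . ab^T) y = a . (X (b . y))
id3 = np.allclose((X * np.outer(a, b)) @ y, a * (X @ (b * y)))
```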

The *Kronecker product* of **A**_{[m#n]} and
**B**_{[p#q]}, written **A** ⊗ **B** or
**KRON**(**A**,**B**), is equal to the *mp#nq* matrix
[*a*(1,1)**B** ... *a*(1,*n*)**B** ; ... ;
*a*(*m*,1)**B** ... *a*(*m*,*n*)**B** ]. It
is also known as the *direct product* or *tensor product* of **A**
and **B**. The Kronecker product operation is often denoted by a × sign
enclosed in a circle, which we approximate with ⊗. Note that in general
**A** ⊗ **B** != **B** ⊗ **A**. In the expressions
below a **:** suffix denotes vectorization.
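The block structure of the definition is easy to see with NumPy's `np.kron` (an illustrative sketch of ours, not part of the manual):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2, dtype=int)

K = np.kron(A, B)    # 4x4: block (i,j) is a(i,j)*B

top_left_ok = np.array_equal(K[:2, :2], A[0, 0] * B)   # first block is a(1,1)*B
not_commutative = not np.array_equal(np.kron(A, B), np.kron(B, A))
```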

- Associative: **A** ⊗ **B** ⊗ **C** = **A** ⊗ (**B** ⊗ **C**) = (**A** ⊗ **B**) ⊗ **C**
- Distributive: **A** ⊗ (**B** + **C**) = **A** ⊗ **B** + **A** ⊗ **C**
- **Not** commutative: **A** ⊗ **B** = **B** ⊗ **A** iff **A** = *c***B** for some scalar *c*
- det(**A**_{[m#m]} ⊗ **B**_{[n#n]}) = det(**A**)^{n} det(**B**)^{m}
- tr(**A**_{[n#n]} ⊗ **B**_{[n#n]}) = tr(**A**) tr(**B**)
- **A**^{T} ⊗ **B**^{T} = (**A** ⊗ **B**)^{T}
- **A**^{H} ⊗ **B**^{H} = (**A** ⊗ **B**)^{H}
- **A**^{-1} ⊗ **B**^{-1} = (**A** ⊗ **B**)^{-1}
- **A** ⊗ **B** is singular iff **A** or **B** is singular.

- **A** ⊗ **B** = **I** iff **A** = *c***I** and **B** = *c*^{-1}**I** for some scalar *c*.
- **I**_{[m#m]} ⊗ **I**_{[n#n]} = **I**_{[mn#mn]}

- **A** ⊗ **B** is orthogonal iff *c***A** and *c*^{-1}**B** are orthogonal for some scalar *c*.
- **A** ⊗ **B** is diagonal iff **A** and **B** are diagonal.
- If **Aa** = *p***a** and **Bb** = *q***b** then (**A** ⊗ **B**)(**a** ⊗ **b**) = *pq*(**a** ⊗ **b**). The algebraic multiplicity of the eigenvalue *pq* is the product of the corresponding multiplicities of *p* and *q*.
- If **Aa** = *p***a** and **Bb** = *q***b** then (**A** ⊗ **I** + **I** ⊗ **B**)(**a** ⊗ **b**) = (*p* + *q*)(**a** ⊗ **b**)
- (**ba**^{T})**:** = (**a**^{T} ⊗ **b**)**:** = **a** ⊗ **b** where the **:** suffix denotes vectorization.
- **a** ⊗ (**BC**) = (**a** ⊗ **B**)**C**
- **a**^{T} ⊗ (**BC**) = **B**(**a**^{T} ⊗ **C**)
- (**AB**) ⊗ **c** = (**A** ⊗ **c**)**B**
- (**AB**) ⊗ **c**^{T} = **A**(**B** ⊗ **c**^{T})

- **a** ⊗ **b**^{T} = **ab**^{T}
- **a**^{T} ⊗ **b** = **ba**^{T}

- **a**^{T} ⊗ **BC**^{T} ⊗ **d** = (**B** ⊗ **d**)(**a** ⊗ **C**)^{T}
- **a** ⊗ **BC**^{T} ⊗ **d**^{T} = (**a** ⊗ **B**)(**C** ⊗ **d**)^{T}
- (**a** ⊗ **b**)(**c** ⊗ **d**)^{T} = (**ba**^{T})**:**(**dc**^{T})**:**^{T} = **a** ⊗ **bc**^{T} ⊗ **d**^{T} = **c**^{T} ⊗ **ad**^{T} ⊗ **b** = **ac**^{T} ⊗ **bd**^{T}
- **AB** ⊗ **CD** = (**A** ⊗ **C**)(**B** ⊗ **D**)
- **A**_{[m#n]} ⊗ **B**_{[p#q]} = (**A** ⊗ **I**_{[p#p]})(**I**_{[n#n]} ⊗ **B**) = (**I**_{[m#m]} ⊗ **B**)(**A** ⊗ **I**_{[q#q]})
- **a**_{[m]} ⊗ **B**_{[p#q]} = (**a** ⊗ **I**_{[p#p]})**B**
- **A**_{[m#n]} ⊗ **b**_{[p]} = (**I**_{[m#m]} ⊗ **b**)**A**
- **a**_{[m]} ⊗ **b**_{[p]} = (**a** ⊗ **I**_{[p#p]})**b** = (**I**_{[m#m]} ⊗ **b**)**a**

- **I**_{[n#n]} ⊗ **AB** = (**I**_{[n#n]} ⊗ **A**)(**I**_{[n#n]} ⊗ **B**)
- **AB** ⊗ **I**_{[n#n]} = (**A** ⊗ **I**_{[n#n]})(**B** ⊗ **I**_{[n#n]})
- **ab**^{H} ⊗ **cd**^{H} = (**a** ⊗ **c**)(**b** ⊗ **d**)^{H} = (**ca**^{T})**:**(**db**^{T})**:**^{H}
- **a**^{H}**b** **c**^{H}**d** = **a**^{H}**b** ⊗ **c**^{H}**d** = (**a** ⊗ **c**)^{H}(**b** ⊗ **d**) = (**ca**^{T})**:**^{H}(**db**^{T})**:**
- (**A** ⊗ **B**)^{H}(**A** ⊗ **B**) = **A**^{H}**A** ⊗ **B**^{H}**B**

- (**ABC**)**:** = (**C**^{T} ⊗ **A**)**B:**
- (**AB**)**:** = (**I** ⊗ **A**)**B:** = (**B**^{T} ⊗ **I**)**A:** = (**B**^{T} ⊗ **A**)**I:**
- (**Abc**^{T})**:** = (**c** ⊗ **A**)**b** = **c** ⊗ **Ab**
- **ABc** = (**c**^{T} ⊗ **A**)**B:**
- **a**^{T}**Bc** = (**c** ⊗ **a**)^{T}**B:** = (**ac**^{T})**:**^{T}**B:** = (**a** ⊗ **c**)^{T}**B**^{T}**:** = **B:**^{T}(**c** ⊗ **a**) = **B:**^{T}(**ac**^{T})**:**

- (**ABC**)**:**^{T} = **B:**^{T}(**C** ⊗ **A**^{T})
- (**AB**)**:**^{T} = **B:**^{T}(**I** ⊗ **A**^{T}) = **A:**^{T}(**B** ⊗ **I**) = **I:**^{T}(**B** ⊗ **A**^{T})
- (**Abc**^{T})**:**^{T} = **b**^{T}(**c**^{T} ⊗ **A**^{T}) = **c**^{T} ⊗ **b**^{T}**A**^{T}
- **a**^{T}**B**^{T}**C** = **B:**^{T}(**a** ⊗ **C**)

- ((**ABC**)^{T})**:** = (**C**^{T}**B**^{T}**A**^{T})**:** = (**A** ⊗ **C**^{T})**B**^{T}**:**
- **ABc** = (**A** ⊗ **c**^{T})**B**^{T}**:**
- **B**_{[m#n]}**c** = (**I**_{[m#m]} ⊗ **c**^{T})**B**^{T}**:**
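The vectorization identities above are easy to spot-check in NumPy, remembering that **:** stacks *columns*, i.e. `order='F'` (this sketch is ours; the helper `vec` is a hypothetical name):

```python
import numpy as np

def vec(M):
    """Column-stacking vectorization (the ':' suffix used on this page)."""
    return M.reshape(-1, order='F')

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))
c = rng.standard_normal(2)

# (ABC): = (C^T kron A) B:
id1 = np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
# ABc = (A kron c^T) (B^T):
id2 = np.allclose(A @ B @ c, np.kron(A, c) @ vec(B.T))
```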

- If **Y** = **AXB** + **CXD** + ... then **Y:** = (**B**^{T} ⊗ **A** + **D**^{T} ⊗ **C** + ...)**X:**; however this is a slow and often ill-conditioned way of solving such equations for **X**.
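As a small illustration (ours, not from the manual), the Kronecker form turns the matrix equation into a single linear solve; for random coefficients the stacked operator is generically invertible, though, as noted above, this approach scales badly:

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order='F')

rng = np.random.default_rng(5)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
X_true = rng.standard_normal((n, n))
Y = A @ X_true @ B + C @ X_true @ D

# Y: = (B^T kron A + D^T kron C) X:, solved directly (slow for large n)
K = np.kron(B.T, A) + np.kron(D.T, C)
X = np.linalg.solve(K, vec(Y)).reshape((n, n), order='F')

recovered = np.allclose(X, X_true)
```

For the plain Sylvester equation **AX** + **XD** = **Y**, dedicated solvers such as the Bartels–Stewart algorithm are preferable to this Kronecker construction.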

In the identities below, **I**_{n} =
**I**_{[n#n]} and **T**_{m,n} =
**TVEC**(*m,n*) [see vectorized
transpose]

- **B**_{[p#q]} ⊗ **A**_{[m#n]} = **T**_{p,m}(**A** ⊗ **B**)**T**_{n,q}
- (**A**_{[m#n]} ⊗ **B**_{[p#q]})**T**_{n,q} = **T**_{m,p}(**B** ⊗ **A**)
- **a**_{[m]} ⊗ **B**_{[p#q]} = (**a** ⊗ **I**_{p})**B** = **T**_{m,p}(**B** ⊗ **a**)
- **A**_{[m#n]} ⊗ **b**_{[p]} = (**I**_{m} ⊗ **b**)**A** = **T**_{m,p}(**b** ⊗ **A**)
- **a**_{[m]} ⊗ **b**_{[p]} = (**a** ⊗ **I**_{p})**b** = (**I**_{m} ⊗ **b**)**a**

- (**A** ⊗ **b**)**:** = **A:** ⊗ **b**
- (**a**_{[m]} ⊗ **B**_{[p#q]})**:** = (**T**_{q,m} ⊗ **I**_{p})(**a** ⊗ **B:**) = (**I**_{q} ⊗ **a** ⊗ **I**_{p})**B:**
- (**A** ⊗ **B**)**:** = (**I**_{n} ⊗ **T**_{q,m} ⊗ **I**_{p})(**A:** ⊗ **B:**) = (**I**_{n} ⊗ **T**_{q,m} ⊗ **I**_{p})(**A** ⊗ **B:**)**:** = (**T**_{n,q} ⊗ **I**_{mp})(**A:** ⊗ **B**)**:** = (**I**_{nq} ⊗ **T**_{m,p})(**B:** ⊗ **A**)**:**
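The vectorized-transpose matrix **T**_{m,n} is a permutation matrix, and the first identity above can be checked numerically; in this sketch (ours, not from the manual) `tvec` builds it directly from its defining property:

```python
import numpy as np

def vec(M):
    return M.reshape(-1, order='F')

def tvec(m, n):
    """TVEC(m,n): the permutation with TVEC(m,n) @ vec(A_[m#n]) = vec(A^T)."""
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            T[i * n + j, j * m + i] = 1.0
    return T

rng = np.random.default_rng(6)
m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))

transpose_ok = np.allclose(tvec(m, n) @ vec(A), vec(A.T))
# B kron A = T_{p,m} (A kron B) T_{n,q}
swap_ok = np.allclose(np.kron(B, A),
                      tvec(p, m) @ np.kron(A, B) @ tvec(n, q))
```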

The *Kronecker sum* of two square matrices,
**A**_{[m#m]} and **B**_{[n#n]}, is equal
to (**A** ⊗ **I**_{n}) +
(**I**_{m} ⊗ **B**). It is sometimes written
**A**⊕**B** but in these pages, this notation
is reserved for the direct sum.
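A useful consequence of the eigenvector identity given earlier is that the eigenvalues of the Kronecker sum are all pairwise sums *p* + *q*. The NumPy sketch below (ours, using symmetric matrices so the eigenvalues are real) verifies this:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 4
A = rng.standard_normal((m, m)); A = A + A.T   # symmetric, so real eigenvalues
B = rng.standard_normal((n, n)); B = B + B.T

ksum = np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)   # Kronecker sum

# Eigenvalues of the Kronecker sum are all pairwise sums p + q
expected = np.sort((np.linalg.eigvalsh(A)[:, None]
                    + np.linalg.eigvalsh(B)[None, :]).ravel())
eig_sum_ok = np.allclose(np.linalg.eigvalsh(ksum), expected)
```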

We can define a partial order on the set of Hermitian matrices by writing
**A**>=**B** iff **A**-**B** is positive semidefinite and **A**>**B** iff
**A**-**B** is positive
definite.

- The partial order is:
  - *reflexive*: **A** >= **A** for all **A**.
  - *antisymmetric*: **A** >= **B** and **B** >= **A** are both true iff **A** = **B**.
  - *transitive*: if **A** >= **B** and **B** >= **C** then **A** >= **C**.

- Any pair of Hermitian matrices, **A** and **B**, satisfy precisely one of the following:
  - None of the relations **A** < **B**, **A** <= **B**, **A** = **B**, **A** >= **B**, **A** > **B** is true.
  - **A** < **B** and **A** <= **B** only are true.
  - **A** <= **B** only is true.
  - **A** = **B**, **A** <= **B** and **A** >= **B** only are true.
  - **A** >= **B** only is true.
  - **A** > **B** and **A** >= **B** only are true.

- **A** >= **B** iff **x**^{H}**Ax** >= **x**^{H}**Bx** for all **x**, where >= has its normal scalar meaning (likewise for >).
- **A** >= **B** iff **D**^{H}**AD** >= **D**^{H}**BD** for any, not necessarily square, **D** (not true for >).
- **A** > **B** iff **D**^{H}**AD** > **D**^{H}**BD** for any non-singular **D**.
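The order can be tested numerically by checking whether **A** - **B** is positive semidefinite; the sketch below (ours; the helper `loewner_geq` is a hypothetical name) also exhibits an incomparable pair, showing why the order is only partial:

```python
import numpy as np

def loewner_geq(A, B, tol=1e-9):
    """A >= B in this partial order iff A - B is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(A - B) >= -tol))

A = np.diag([3.0, 2.0])
B = np.eye(2)
C = np.diag([2.0, 0.5])

a_geq_b = loewner_geq(A, B)                                         # A - B = diag(2, 1) is PSD
incomparable = (not loewner_geq(B, C)) and (not loewner_geq(C, B))  # neither B >= C nor C >= B
```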

Real square matrices **A** and **B** are orthogonally
similar if there exists an orthogonal **Q** such that **B**=
**Q**^{T}**AQ** .

Orthogonal similarity implies both similarity and congruence.

See also: Unitary similarity

Square matrices **A** and **B** are similar (also called
*conjugate*) if there exists a non-singular **X** such that
**B**=**X**^{-1}**AX** . Similarity is an equivalence relation,
i.e. it is reflexive, symmetric and transitive.

Similar matrices represent the same linear transformation in a different basis. Similarity implies equivalence.

- Similar matrices have the same trace, determinant, rank, nullity, eigenvalues, characteristic polynomial and minimum polynomial.
- Two square matrices are similar iff their Jordan forms contain the same hypercompanion blocks (possibly in a different order).
- Two square matrices, **A** and **B**, are similar iff there exist matrices **X** and **Y** (not both singular) such that **A** = **XY** and **B** = **YX**.
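The **XY**/**YX** construction gives a quick numerical check (ours, not from the manual) of the shared invariants listed above:

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((3, 3))   # generically non-singular
Y = rng.standard_normal((3, 3))
A, B = X @ Y, Y @ X               # similar: B = X^{-1} A X

same_trace = np.isclose(np.trace(A), np.trace(B))
same_det = np.isclose(np.linalg.det(A), np.linalg.det(B))
same_charpoly = np.allclose(np.poly(A), np.poly(B))  # same characteristic polynomial
```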

Square matrices **A** and **B** are unitarily similar if
there exists a unitary **Q** such that **B**=
**Q**^{H}**AQ** . Unitary similarity is an equivalence
relation and implies both similarity and
conjunctivity.

- If two unitarily similar matrices, **A** and **B**, are both real then they are orthogonally similar, which also implies congruence.
- Unitarily similar matrices have the same Frobenius norm.
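The Frobenius-norm invariance is easy to verify numerically; this sketch (ours, not from the manual) draws a random unitary **Q** from a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# A random unitary Q from the QR factorization of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
B = Q.conj().T @ A @ Q            # unitarily similar to A

same_fro = np.isclose(np.linalg.norm(A, 'fro'), np.linalg.norm(B, 'fro'))
```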

This page is part of The Matrix Reference Manual. Copyright © 1998-2022 Mike Brookes, Imperial College, London, UK. See the file gfl.html for copying instructions. Please send any comments or suggestions to "mike.brookes" at "imperial.ac.uk".

Updated: $Id: relation.html 11291 2021-01-05 18:26:10Z dmb $