Linear Algebra Review Appendix A.2 (Duda et al.) CS 479/679 Pattern Recognition Dr. George Bebis
n-dimensional vector • An n-dimensional vector v is denoted as a column vector: v = (x1, x2, ..., xn)^T • Its transpose v^T is the row vector: v^T = (x1, x2, ..., xn)
Inner (or dot) product • Given v^T = (x1, x2, ..., xn) and w^T = (y1, y2, ..., yn), their dot product is defined as: v · w = v^T w = Σ_{i=1}^{n} xi yi (a scalar)
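As a quick numerical check of this definition, here is a minimal NumPy sketch (the vectors v and w are arbitrary illustrations, not from the slides):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, -1.0, 2.0])

# Dot product as a sum of elementwise products: sum_i x_i * y_i
manual = np.sum(v * w)

# Same scalar via NumPy's built-in inner product
assert np.isclose(manual, np.dot(v, w))
print(manual)  # 8.0
```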
Orthogonal / Orthonormal vectors • A set of vectors x1, x2, ..., xn is orthogonal if xi^T xj = 0 for all i ≠ j • A set of vectors x1, x2, ..., xn is orthonormal if xi^T xj = 0 for i ≠ j and xi^T xi = 1 (i.e., pairwise orthogonal unit vectors)
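A sketch of how orthonormality is typically verified numerically: stacking the vectors as columns of a matrix Q, the set is orthonormal exactly when Q^T Q = I (the rotation matrix below is an assumed example):

```python
import numpy as np

# Columns of Q are the candidate vectors (a 2-D rotation, an assumed example)
theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthonormal set: pairwise dot products are 0 and each vector has length 1,
# which is exactly the statement Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(2))
```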
Linear combinations • A vector v is a linear combination of the vectors v1, ..., vk if: v = c1 v1 + c2 v2 + ... + ck vk, where c1, ..., ck are constants. Example: any vector in R^3 can be expressed as a linear combination of the unit vectors i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
Space spanning • A set of vectors S = {v1, v2, ..., vk} spans a space W if every vector v in W can be written as a linear combination of the vectors in S. Example: the unit vectors i, j, and k span R^3
Linear dependence • A set of vectors v1, ..., vk is linearly dependent if at least one of them (say, vj) can be written as a linear combination of the rest: vj = Σ_{i≠j} ci vi (i.e., vj does not appear on the right side of the equation)
Linear independence • A set of vectors v1, ..., vk is linearly independent if no vector vj can be represented as a linear combination of the remaining vectors, i.e.: c1 v1 + c2 v2 + ... + ck vk = 0 only if c1 = c2 = ... = ck = 0. Example: (1, 0) and (0, 1) are linearly independent, since c1(1, 0) + c2(0, 1) = (0, 0) forces c1 = c2 = 0
Vector basis • A set of vectors v1, ..., vk forms a basis of a vector space W if: (1) v1, ..., vk span W, and (2) v1, ..., vk are linearly independent. Some standard bases: R^2: {(1, 0), (0, 1)}; R^3: {(1, 0, 0), (0, 1, 0), (0, 0, 1)}; R^n: the unit vectors e1, ..., en (ei has a 1 in position i and 0 elsewhere)
Orthogonal vector basis • Basis vectors need not be orthogonal. • Any set of basis vectors v1, ..., vk can be transformed into an orthogonal basis using the Gram-Schmidt orthogonalization algorithm (see the sketch below). • Normalizing the basis vectors to unit length yields an orthonormal basis. • Orthonormal bases are more useful in practice since they simplify calculations.
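A minimal NumPy sketch of Gram-Schmidt; this is the modified variant (projections are subtracted from the running residual), chosen here for numerical stability, and the helper name gram_schmidt and the input vectors are illustrative:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an
    orthonormal basis of their span (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        # Remove the component along each previously accepted basis vector
        for u in basis:
            w -= np.dot(w, u) * u
        # Normalize to unit length (assumes the inputs are independent)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

B = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([0.0, 1.0, 1.0])])
assert np.allclose(B @ B.T, np.eye(3))  # rows are orthonormal
```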
Vector Expansion/Projection • Suppose v1, v2, ..., vn is an orthogonal basis of W; then any v ∈ W can be represented in this basis as: v = Σ_{i=1}^{n} xi vi (vector expansion or projection) • The coefficients xi of the expansion can be computed as: xi = (v · vi) / (vi · vi) (coefficients of expansion or projection) Note: if the basis is orthonormal, then vi · vi = 1, so xi = v · vi
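A small NumPy illustration of the expansion, using an assumed orthogonal (but not orthonormal) basis of R^2:

```python
import numpy as np

# An orthogonal basis of R^2 and a vector to expand (illustrative values)
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
v = np.array([3.0, 1.0])

# Expansion coefficients: x_i = (v . v_i) / (v_i . v_i)
x1 = np.dot(v, v1) / np.dot(v1, v1)   # 2.0
x2 = np.dot(v, v2) / np.dot(v2, v2)   # 1.0

# Reconstructing v from its expansion recovers the original vector
assert np.allclose(x1 * v1 + x2 * v2, v)
```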
Vector basis (cont’d) • Why do we care about sets of basis vectors? – Given a set of basis vectors, each vector can be represented (i.e., projected) uniquely in that basis. • Do vector spaces have a unique vector basis? – No; simply rotate the basis vectors to obtain a new basis! – Some sets of basis vectors are preferred over others, though. – We will see this when we discuss Principal Components Analysis (PCA).
Matrix Operations • Matrix addition/subtraction – Add/subtract corresponding elements. – The matrices must be the same size. • Matrix multiplication – An m×n matrix A can multiply a q×p matrix B only under the condition n = q; the product AB is then m×p, with (AB)ij = Σ_{k=1}^{n} aik bkj
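A shape-checking sketch of the multiplication rule (example matrices assumed for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # 3x2
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])  # 2x3

C = A @ B                        # inner dimensions match (2 = 2), so C is 3x3
assert C.shape == (3, 3)

# Each entry is a dot product of a row of A with a column of B
assert np.isclose(C[0, 2], np.dot(A[0, :], B[:, 2]))  # 1*2 + 2*1 = 4
```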
Diagonal Matrices • A diagonal matrix has non-zero entries only on its main diagonal: A = diag(a11, a22, ..., ann) • Special case: the identity matrix I = diag(1, 1, ..., 1)
Matrix Transpose • The transpose A^T of an m×n matrix A is the n×m matrix obtained by interchanging rows and columns: (A^T)ij = aji • Useful property: (AB)^T = B^T A^T
Symmetric Matrices • A (square) matrix A is symmetric if A = A^T, i.e., aij = aji for all i, j. Example: [[1, 2], [2, 3]]
Determinants • 2×2: det [[a11, a12], [a21, a22]] = a11 a22 − a12 a21 • 3×3 (expanded along the 1st column): det A = a11 |A11| − a21 |A21| + a31 |A31|, where |Aij| is the determinant of the 2×2 sub-matrix left after deleting row i and column j • n×n (expanded along the kth column): det A = Σ_{i=1}^{n} (−1)^{i+k} aik |Aik| • Properties: det(AB) = det(A) det(B), det(A^T) = det(A), and det(A) = 0 iff A is singular
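A quick numerical check of the 2×2 formula and the product rule det(AB) = det(A) det(B), with assumed example matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [4.0, 2.0]])

# 2x2 determinant: ad - bc
assert np.isclose(np.linalg.det(A), 2*3 - 1*1)   # 5
# Product rule: det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```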
Matrix Inverse • The inverse of a matrix A, denoted A^{-1}, has the property: A A^{-1} = A^{-1} A = I • A^{-1} exists only if det(A) ≠ 0 • Definitions – Singular matrix: A^{-1} does not exist – Ill-conditioned matrix: A is “close” to being singular
Matrix Inverse (cont’d) • Properties of the inverse: (A^{-1})^{-1} = A, (AB)^{-1} = B^{-1} A^{-1}, (A^T)^{-1} = (A^{-1})^T
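A sketch verifying the defining property and the (AB)^{-1} = B^{-1} A^{-1} rule, plus a peek at ill-conditioning via the condition number (all matrices are illustrative):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^-1 = I

B = np.array([[1.0, 2.0],
              [3.0, 5.0]])
# (AB)^-1 = B^-1 A^-1 (note the reversed order)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# An ill-conditioned matrix has a very large condition number
C = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-10]])
print(np.linalg.cond(C))   # ~4e10: "close" to singular
```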
Matrix trace • The trace of an n×n matrix A is the sum of its diagonal elements: tr(A) = Σ_{i=1}^{n} aii • Properties: tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA), tr(A) = tr(A^T)
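A numerical check of the trace properties on random example matrices:

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)

# Trace = sum of diagonal elements
assert np.isclose(np.trace(A), np.sum(np.diag(A)))
# tr(AB) = tr(BA), even though AB != BA in general
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```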
Rank of matrix • Defined as the size of the largest square sub-matrix of A that has a non-zero determinant. Example: a 4×4 matrix whose largest sub-matrix with non-zero determinant is 3×3 has rank 3
Rank of matrix (cont’d) • Alternatively, the rank can be defined as the maximum number of linearly independent columns (or rows) of A. Example: if one column of a 4×4 matrix is a linear combination of the others, the rank is at most 3, i.e., the rank is not 4!
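A sketch of the “linearly dependent columns lower the rank” idea, using an assumed 3×3 example whose third column is the sum of the first two:

```python
import numpy as np

# Third column = first column + second column, so the columns are dependent
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

print(np.linalg.matrix_rank(A))           # 2, not 3
assert np.isclose(np.linalg.det(A), 0.0)  # no 3x3 sub-matrix with det != 0
```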
Rank of matrix (cont’d) • Useful properties: – rank(A) ≤ min(m, n) for an m×n matrix A – rank(A) = rank(A^T) – An n×n matrix A is invertible iff rank(A) = n (equivalently, det(A) ≠ 0) – rank(AB) ≤ min(rank(A), rank(B))
Eigenvalues and Eigenvectors • The vector v is an eigenvector of matrix A, and λ is the corresponding eigenvalue of A, if: A v = λ v (assume v is non-zero) Geometric interpretation: the linear transformation defined by A does not change the direction of an eigenvector v; it only scales it by λ.
Computing λ and v • To compute the eigenvalues λ of a matrix A, find the roots of the characteristic polynomial: det(A − λI) = 0 Example: for A = [[2, 1], [1, 2]], det(A − λI) = (2 − λ)^2 − 1 = 0 gives λ1 = 3, λ2 = 1 • The eigenvectors can then be computed by solving (A − λI) v = 0 for each λ
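A sketch of both routes in NumPy: rooting the characteristic polynomial, and calling the eigen-solver directly (the matrix A is the illustrative example used above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Route 1: roots of the characteristic polynomial det(A - lambda I) = 0
coeffs = np.poly(A)        # characteristic polynomial coefficients
print(np.roots(coeffs))    # [3. 1.]

# Route 2: direct computation; columns of V are the eigenvectors
lam, V = np.linalg.eig(A)
for i in range(len(lam)):
    # Verify the defining property A v = lambda v
    assert np.allclose(A @ V[:, i], lam[i] * V[:, i])
```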
Properties of λ and v • Eigenvalues and eigenvectors are only defined for square matrices. • Eigenvectors are not unique (e.g., if v is an eigenvector, so is kv for any k ≠ 0). • Suppose λ1, λ2, ..., λn are the eigenvalues of A; then: λ1 + λ2 + ... + λn = tr(A) and λ1 λ2 ... λn = det(A)
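The last two properties, sum(λi) = tr(A) and prod(λi) = det(A), are easy to verify numerically (example matrix assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = np.linalg.eigvals(A)   # eigenvalues are 5 and 2

# Sum of eigenvalues = trace; product of eigenvalues = determinant
assert np.isclose(np.sum(lam), np.trace(A))
assert np.isclose(np.prod(lam), np.linalg.det(A))
```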
Matrix diagonalization • Given an n×n matrix A, find P such that: P^{-1} A P = Λ, where Λ is diagonal • Solution: set P = [v1 v2 ... vn], where v1, v2, ..., vn are the eigenvectors of A; then P^{-1} A P = Λ = diag(λ1, λ2, ..., λn), the eigenvalues of A
Matrix diagonalization (cont’d) Example: for A = [[2, 1], [1, 2]] with eigenvectors v1 = (1, 1)^T and v2 = (1, −1)^T, P = [[1, 1], [1, −1]] gives P^{-1} A P = Λ = diag(3, 1)
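The same diagonalization carried out in NumPy (continuing the illustrative matrix from the eigenvalue example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
Lam = np.linalg.inv(P) @ A @ P   # P^-1 A P

# The result is the diagonal matrix of eigenvalues
assert np.allclose(Lam, np.diag(lam))
```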
Matrix diagonalization (cont’d) • If A is diagonalizable, then the corresponding eigenvectors v1, v2, ..., vn form a basis in R^n • If A is also symmetric, its eigenvalues are real and the corresponding eigenvectors are orthogonal.
Are all n×n matrices diagonalizable? • An n×n matrix A is diagonalizable iff rank(P) = n, where P^{-1} A P = Λ – i.e., iff A has n linearly independent eigenvectors. • Theorem: if the eigenvalues of A are all distinct, then the corresponding eigenvectors are linearly independent (i.e., A is diagonalizable).
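A sketch of a matrix that fails the test: the 2×2 Jordan block below (a standard textbook example) has a repeated eigenvalue and only one independent eigenvector, so rank(P) < n:

```python
import numpy as np

# A defective matrix: a 2x2 Jordan block with repeated eigenvalue 1
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

lam, P = np.linalg.eig(A)
print(lam)                       # [1. 1.]  (repeated eigenvalue)
# The two returned eigenvector columns are (numerically) parallel,
# so P is rank-deficient and A is not diagonalizable
print(np.linalg.matrix_rank(P))  # 1 < 2
```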
Matrix decomposition • If A is diagonalizable, then A can be decomposed as follows: A = P Λ P^{-1}
Matrix decomposition (cont’d) • Matrix decomposition is simplified in the case of symmetric matrices (i.e., orthogonal eigenvectors): with the eigenvectors normalized to unit length, P^{-1} = P^T, so A = P Λ P^T = Σ_{i=1}^{n} λi vi vi^T
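A sketch of the symmetric case in NumPy, using np.linalg.eigh (which returns real eigenvalues and orthonormal eigenvectors for symmetric input); the matrix is the running illustrative example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, P = np.linalg.eigh(A)   # symmetric eigen-solver

assert np.allclose(P.T @ P, np.eye(2))         # P^-1 = P^T
assert np.allclose(P @ np.diag(lam) @ P.T, A)  # A = P Lambda P^T

# Equivalent outer-product form: A = sum_i lambda_i v_i v_i^T
S = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(2))
assert np.allclose(S, A)
```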