Linear Algebra and Matrices. Methods for Dummies, 20th October 2010. Melaine Boly, Christian Lambert
Overview
• Definitions: scalars, vectors and matrices
• Vector and matrix calculations
• Identity, inverse matrices & determinants
• Eigenvectors & dot products
• Relevance to SPM and terminology
Linear Algebra & Matrices, MfD 2010
Part I: Matrix Basics
Scalar
• A quantity (variable) described by a single real number, e.g. the intensity of each voxel in an MRI scan
Vector
• Not a physics vector (magnitude, direction); here, simply a column of numbers
Matrices
• Rectangular display of vectors in rows and columns
• Can inform about the same vector intensity at different times, or different voxels at the same time
• A vector is just an n x 1 matrix
Matrices
• Matrix location/size defined as rows x columns (R x C)
• d_ij : element in the ith row, jth column
• Examples: square (3 x 3), rectangular (3 x 2), 3-dimensional (3 x 5)
Matrices in MATLAB
• Define a matrix: X = [1 4 7; 2 5 8; 3 6 9] (the ; ends a row)
• Reference values as X(row, column); the colon : refers to all of a row or column, and the comma separates row and column indices
  – 3rd row: X(3, :)
  – 2nd element of 3rd column: X(2, 3)
  – Elements 2 & 3 of column 2: X([2 3], 2)
• Special matrices: all zeros, size 3 x 1: zeros(3, 1); all ones, size 2 x 2: ones(2, 2)
Transposition
• Transposing swaps rows and columns: each column becomes a row, and each row becomes a column
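The slides use MATLAB, where transposition is written A' or transpose(A). A plain-Python sketch of the same idea (the transpose helper and variable names are illustrative, not from the slides):

```python
def transpose(A):
    # Rows become columns and columns become rows
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]           # a 2 x 3 matrix
At = transpose(A)         # a 3 x 2 matrix: [[1, 4], [2, 5], [3, 6]]
```

Transposing twice returns the original matrix.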
Matrix Calculations
Addition
– Commutative: A+B = B+A
– Associative: (A+B)+C = A+(B+C)
Subtraction
– Performed by adding a negative matrix
Scalar multiplication
• Scalar * matrix = scalar multiplication: every element of the matrix is multiplied by the scalar
Matrix Multiplication
• "When A is an m x n matrix and B is a k x l matrix, AB is only possible if n = k. The result will be an m x l matrix."
• Simply put, you can ONLY perform A*B if: number of columns in A = number of rows in B
• Hint: if you see this message in MATLAB:
  ??? Error using ==> mtimes
  Inner matrix dimensions must agree
  then the number of columns in A is not equal to the number of rows in B
Matrix multiplication
• Multiplication method: each element of the product is the sum over the products of the respective row of A and column of B
• MATLAB does all this for you, simply type: C = A * B
• Hint: you can work out the size of the output (e.g. 2 x 2); in MATLAB, pre-allocating a matrix of this size (e.g. C = zeros(2, 2)) makes the calculation quicker
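In MATLAB this is just C = A * B. A hand-rolled Python sketch of the row-by-column rule (the matmul helper is mine, for illustration):

```python
def matmul(A, B):
    # Element (i, j) is the sum over k of A[i][k] * B[k][j]
    assert len(A[0]) == len(B), "columns in A must equal rows in B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = matmul(A, B)   # [[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]] = [[19, 22], [43, 50]]
```

Computing matmul(B, A) instead gives a different result, illustrating that matrix multiplication is not commutative.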
Matrix multiplication
• Matrix multiplication is NOT commutative, i.e. the order matters: AB ≠ BA
• Matrix multiplication IS associative: A(BC) = (AB)C
• Matrix multiplication IS distributive: A(B+C) = AB+AC and (A+B)C = AC+BC
Identity matrix
• A special matrix which plays the same role as the number 1 in scalar multiplication
• For any n x n matrix A, we have A In = In A = A
• For any n x m matrix A, we have In A = A and A Im = A (so two identity matrices of different sizes may be involved)
• If the answer is always A, why use an identity matrix? Matrices cannot be divided, so many problems must be solved using the inverse, and the identity is important in those calculations.
Identity matrix
Worked example: A I3 = A for a 3 x 3 matrix:
[1 2 3; 4 5 6; 7 8 9] x [1 0 0; 0 1 0; 0 0 1] = [1+0+0 0+2+0 0+0+3; 4+0+0 0+5+0 0+0+6; 7+0+0 0+8+0 0+0+9]
• In MATLAB: eye(r, c) produces an r x c identity matrix
Part II: More Advanced Matrix Techniques
Vector components & orthonormal base
• A given vector can be summarised by its components (a, b), but only relative to a particular base (set of axes); the vector itself is independent of the choice of base.
• Example: a and b are the components of the vector in the given base (the axes chosen for expressing coordinates in vector space).
• Orthonormal base: a set of vectors, perpendicular to each other and all with norm (length) = 1, chosen to express the components of the others.
Linear combination & dimensionality
• Vector space: the space defined by a set of vectors. It contains them and all the vectors that can be obtained by multiplying them by real numbers and adding the results (linear combinations).
• A matrix A (m x n) can itself be decomposed into as many vectors as it has columns (or rows): each column of the matrix can be represented as a vector. The set of n column vectors defines a vector space proper to matrix A.
• Conversely, A can be viewed as a matrix representation of this set of vectors, expressing their components in a given base.
Linear dependency and rank
• If a linear relationship exists between the rows or columns of a matrix, then the rank of the matrix (the number of dimensions of its vector space) will be less than its number of columns/rows: the matrix is said to be rank-deficient.
• Example: when plotting the vectors, we see that x1 and x2 are superimposed. Looking closer, one can be expressed as a linear combination of the other: x2 = 2 x1. The rank of the matrix is therefore 1, and the vector space it defines has only one dimension.
Linear dependency and rank
• The rank of a matrix is the dimensionality of the vector space it defines: the number of its vectors that are linearly independent of each other.
• Linearly independent vectors each define one more dimension in space, compared to the space defined by the other vectors; they cannot be expressed as a linear combination of the others.
• Note: linearly independent vectors are not necessarily orthogonal (perpendicular). Example: take 3 linearly independent vectors x1, x2 and x3. Vectors x1 and x2 define a plane (x, y), and x3 has an additional non-zero component along the z axis, but x3 need not be perpendicular to x1 or x2.
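The slides do not give an algorithm for computing rank; one standard approach (my choice here, not prescribed by the slides) is Gaussian elimination, counting the non-zero pivots left after elimination. A Python sketch:

```python
def rank(A, tol=1e-9):
    # Rank via Gaussian elimination: count the pivot rows remaining
    M = [row[:] for row in A]          # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a row at or below r with a non-zero entry in column c
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # eliminate column c from the rows below the pivot
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [M[i][j] - f * M[r][j] for j in range(cols)]
        r += 1
    return r

X = [[1, 2],
     [2, 4]]       # second column = 2 * first column, so the matrix is rank-deficient
print(rank(X))     # 1
```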
Eigenvalues and eigenvectors
• One can represent the vectors of matrix X (the eigenvectors of A) as a set of orthogonal (perpendicular) vectors, representing the different dimensions of the original matrix A.
• The amplitude of matrix A along these different dimensions is given by the eigenvalues corresponding to the different eigenvectors of A (the vectors composing X).
• Note: if a matrix is rank-deficient, at least one of its eigenvalues is zero.
• For A': u1, u2 = eigenvectors; k1, k2 = eigenvalues
• In Principal Component Analysis (PCA), the matrix is decomposed into eigenvectors and eigenvalues, AND rotated to a new coordinate system such that the greatest variance of any projection of the data lies on the first coordinate (the first principal component), the second greatest variance on the second coordinate, and so on.
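The slides do not show how eigenvectors are computed; as an illustrative sketch (my choice of method), power iteration finds the dominant eigenvector by repeated multiplication, and the Rayleigh quotient then gives the corresponding eigenvalue:

```python
import math

def power_iteration(A, steps=100):
    # Repeatedly multiply a vector by A and renormalise; it converges
    # towards the eigenvector with the largest-magnitude eigenvalue
    v = [1.0] * len(A)
    for _ in range(steps):
        w = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient v . (A v) gives the eigenvalue
    Av = [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]
    lam = sum(v[i] * Av[i] for i in range(len(v)))
    return lam, v

A = [[2.0, 0.0],
     [0.0, 1.0]]
lam, v = power_iteration(A)   # dominant eigenvalue 2, eigenvector along the x axis
```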
Vector Products
Two vectors:
• Inner product XTY is a scalar: (1 x n)(n x 1)
• Outer product XYT is a matrix: (n x 1)(1 x n)
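The two products side by side, sketched in plain Python (the example vectors are mine):

```python
x = [1, 2, 3]
y = [4, 5, 6]

# Inner product x'y: (1 x n)(n x 1) -> a single scalar
inner = sum(a * b for a, b in zip(x, y))     # 1*4 + 2*5 + 3*6 = 32

# Outer product xy': (n x 1)(1 x n) -> an n x n matrix
outer = [[a * b for b in y] for a in x]
```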
Scalar product of vectors
• Calculating the scalar product of two vectors is equivalent to projecting one vector onto the other. One can indeed show that x1 . x2 = ||x1|| . ||x2|| . cos θ, where θ is the angle separating the two vectors when they share the same origin.
• Consequently, if two vectors are orthogonal, their scalar product is zero: the projection of one onto the other is zero.
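The relation x1 . x2 = ||x1|| ||x2|| cos θ can be inverted to recover the angle; a Python sketch (helper names are mine):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    # From x . y = |x| |y| cos(theta)
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

x1 = [1, 0]
x2 = [0, 2]
# Orthogonal vectors: dot product is 0, so the angle is pi/2
```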
Determinants
• The determinant of a matrix is a scalar number representing certain intrinsic properties of that matrix. It is written det A or |A|.
• Its definition is a necessary detour before tackling the operation corresponding to matrix division: the calculation of the inverse.
Determinants
• For a 1 x 1 matrix: det(A) = a11
• For a 2 x 2 matrix: det(A) = a11 a22 - a12 a21
• For a 3 x 3 matrix:
  det(A) = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a11 a23 a32 - a12 a21 a33 - a13 a22 a31
         = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31)
• The determinant of a matrix can be calculated by multiplying each element of one of its rows by the determinant of the sub-matrix formed by the elements that remain when the row and column containing that element are removed. Each product is given the sign (-1)^(i+j).
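The cofactor expansion just described translates directly into a recursive function; a Python sketch (the det helper is illustrative):

```python
def det(A):
    # Determinant by cofactor expansion along the first row
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # sub-matrix with row 0 and column j removed
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 2], [3, 4]]
print(det(A))   # 1*4 - 2*3 = -2
```

A rank-deficient matrix such as [[1, 2], [2, 4]] gives determinant 0, as the later slides discuss.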
Determinants
• Determinants can only be found for square matrices.
• For a 2 x 2 matrix A = [a b; c d], det(A) = ad - bc.
• In MATLAB: det(A)
• The determinant gives an idea of the 'volume' occupied by the matrix in vector space.
• A matrix A has an inverse matrix A-1 if and only if det(A) ≠ 0.
Determinants
• The determinant of a matrix is zero if and only if a linear relationship exists between the rows or the columns of the matrix, i.e. if the matrix is rank-deficient.
• Equivalently, the rank of a matrix A can be defined as the size of the largest square sub-matrix of A that has a non-zero determinant.
• Example: here x1 and x2 are superimposed in space, because one can be expressed as a linear combination of the other: x2 = 2 x1. The determinant of the matrix X is therefore zero. The largest square sub-matrix with a non-zero determinant is 1 x 1, so the rank of the matrix is 1.
Determinants
• In a vector space of n dimensions, there can be no more than n linearly independent vectors.
• If 3 vectors (2 x 1) x'1, x'2, x'3 are represented by a matrix X', then graphically x'3 can be expressed as a linear combination of x'1 and x'2. X' is therefore rank-deficient: the largest square sub-matrix with a non-zero determinant is 2 x 2, so the rank of the matrix is 2.
Determinants
• The notions of determinant, rank and linear dependency are closely linked.
• Take a set of vectors x1, x2, …, xn, all with the same number of elements: these vectors are linearly dependent if one can find a set of scalars c1, c2, …, cn, not all zero, such that: c1 x1 + c2 x2 + … + cn xn = 0
• A set of vectors is linearly dependent if one of them can be expressed as a linear combination of the others. They then define fewer dimensions in space than the total number of vectors in the set; the resulting matrix is rank-deficient and its determinant is zero.
• Similarly, if all the elements of a row or column are zero, the determinant of the matrix is zero; and if a matrix has two equal rows or columns, its determinant is also zero.
Matrix inverse
• Definition: a matrix A is called nonsingular or invertible if there exists a matrix B such that AB = I. Worked example:
[1 1; -1 2] x [2/3 -1/3; 1/3 1/3] = [(2+1)/3 (-1+1)/3; (-2+2)/3 (1+2)/3] = [1 0; 0 1]
• Notation: a common notation for the inverse of a matrix A is A-1, so here B = A-1.
• The inverse matrix is unique when it exists. If A is invertible, then A-1 is also invertible, and (AT)-1 = (A-1)T.
• In MATLAB: A-1 = inv(A)
• Matrix division: A/B = A*B-1
Matrix inverse
• For a 2 x 2 square matrix A = [a b; c d], the inverse is: A-1 = 1/(ad-bc) [d -b; -c a]
• For a matrix to be invertible, its determinant has to be non-zero (it has to be square and of full rank). A matrix that is not invertible is said to be singular; reciprocally, a matrix that is invertible is said to be non-singular.
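The 2 x 2 formula in code, applied to the worked example from the previous slide (the inv2 helper is illustrative; in MATLAB one would just call inv(A)):

```python
def inv2(A):
    # Inverse of a 2x2 matrix [[a, b], [c, d]]: (1/(ad-bc)) * [[d, -b], [-c, a]]
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: determinant is zero")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [-1, 2]]     # the worked example: det = 1*2 - 1*(-1) = 3
Ainv = inv2(A)            # [[2/3, -1/3], [1/3, 1/3]]
```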
Pseudoinverse
• In SPM, design matrices are not square (more rows than columns, especially for fMRI). The system is said to be overdetermined: in general no exact solution exists.
• SPM uses a mathematical trick called the pseudoinverse, an approximation used in overdetermined systems, where the solution is constrained to be the one that minimises the sum of squared errors.
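For a tall design matrix X with linearly independent columns, the pseudoinverse can be written (X'X)^-1 X', and the least-squares estimate is β = pinv(X) Y. (MATLAB's pinv actually works via the SVD; the normal-equations form below is an equivalent sketch for full-rank X, and all helper names and data are illustrative.)

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def pinv(X):
    # (X'X)^-1 X' for a tall matrix with 2 independent columns
    Xt = transpose(X)
    return matmul(inv2(matmul(Xt, X)), Xt)

# Overdetermined system: 4 scans, 2 columns (intercept + one regressor)
X = [[1.0, 0.0],
     [1.0, 1.0],
     [1.0, 2.0],
     [1.0, 3.0]]
Y = [[1.0], [3.0], [5.0], [7.0]]    # exactly Y = 1 + 2 * regressor
beta = matmul(pinv(X), Y)           # least-squares estimates of the betas
```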
Part III: How are matrices relevant to fMRI data?
The SPM pipeline: image time-series → realignment → smoothing (spatial filter) → general linear model (with design matrix) → parameter estimates → statistical parametric map → statistical inference (RFT) → p < 0.05. Normalisation uses an anatomical reference.
Voxel-wise time series analysis: from a single-voxel BOLD signal time series, through model specification, parameter estimation, hypothesis and statistic, to the SPM.
How are matrices relevant to fMRI data?
The GLM equation: Y = X · β + ε
where Y is the data vector (one value per scan, N scans in total), X is the design matrix, β the parameter vector and ε the error vector.
How are matrices relevant to fMRI data? Y = X · β + ε
Response variable Y
• A single voxel sampled at successive time points, e.g. the BOLD signal (MR intensity) at a particular voxel, after preprocessing. Each voxel is considered an independent observation.
How are matrices relevant to fMRI data? Y = X · β + ε
Explanatory variables: the design matrix X
– These are assumed to be measured without error
– May be continuous
– May be dummy variables, indicating levels of an experimental factor
Solving the equation for β tells us how much of the BOLD signal is explained by X.
In Practice
• Estimate the MAGNITUDE of signal changes
• MR INTENSITY levels for each voxel at various time points
• The relationship between the experiment and voxel changes is established
• Calculation and notation require linear algebra and matrix manipulations
Summary
• SPM builds up data as a matrix.
• Manipulation of matrices enables unknown values to be calculated.
Y = X · β + ε
Observed = Predictors × Parameters + Error
BOLD = Design Matrix × Betas + Error
References
• SPM course: http://www.fil.ion.ucl.ac.uk/spm/course/
• Web guides:
  – http://mathworld.wolfram.com/LinearAlgebra.html
  – http://www.maths.surrey.ac.uk/explore/emmaspages/option1.html
  – http://www.inf.ed.ac.uk/teaching/courses/fmcs1/ (Formal Modelling in Cognitive Science course)
  – http://www.wikipedia.org
• Previous MfD slides
ANY QUESTIONS ?