Prototype-based models in machine learning
Michael Biehl
Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen
www.cs.rug.nl/biehl
Review: WIREs Cognitive Science (2016)
Brain Inspired Computing - Brain.Comp, Cetraro, June 2017
Overview
1. Introduction / Motivation: prototypes, exemplars; neural activation / learning
2. Unsupervised Learning: Vector Quantization (VQ); competitive learning in VQ and Neural Gas; Kohonen's Self-Organizing Map (SOM)
3. Supervised Learning: Learning Vector Quantization (LVQ); adaptive distances and relevance learning (examples: three bio-medical applications)
4. Summary
1. Introduction
Prototypes, exemplars: representation of information in terms of typical representatives (e.g. of a class of objects); a much debated concept in cognitive psychology.
Neural activation / learning: an external stimulus is presented to a network of neurons; the response is determined by the weights (expected inputs) of the best matching unit (and its neighbors); learning → an even stronger response to the same stimulus in the future; the weights represent different expected stimuli (prototypes).
Even independently of the above, prototypes provide an attractive framework for machine-learning-based data analysis:
- the trained system is parameterized in the feature space (data)
- facilitates discussion with domain experts
- transparent (white box), provides insight into the applied criteria (classification, regression, clustering, etc.)
- easy to implement, computationally efficient
- versatile, successfully applied in many different application areas
2. Unsupervised Learning
Some potential aims:
Dimension reduction: compression; visualization for human insight; principal / independent component analysis.
Exploration / structure detection: clustering; similarities / dissimilarities; source identification; density estimation; neighborhood relations, topology.
Pre-processing for further analysis: supervised learning, e.g. classification, regression, prediction.
Vector Quantization (VQ)
Vector quantization: identify (few) typical representatives of the data which capture its essential features.
VQ system: a set of prototypes w_k representing a set of feature vectors x^μ.
Assignment to prototypes is based on a dissimilarity / distance measure: given a vector x^μ, determine the winner w* = argmin_k d(w_k, x^μ) and assign x^μ to prototype w*.
One popular example: the (squared) Euclidean distance d(w, x) = (x − w)·(x − w) = Σ_j (x_j − w_j)².
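A minimal NumPy sketch of the winner assignment (function names and the toy vectors are illustrative, not from the slides):

```python
import numpy as np

def squared_euclidean(w, x):
    """Squared Euclidean distance d(w, x) = (x - w) . (x - w)."""
    diff = x - w
    return float(np.dot(diff, diff))

def winner(prototypes, x):
    """Index of the best matching prototype w* for feature vector x."""
    dists = [squared_euclidean(w, x) for w in prototypes]
    return int(np.argmin(dists))
```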
Competitive learning
Initialization: randomized w_k, e.g. in randomly selected data points.
Random, sequential (repeated) presentation of the data; the winner takes it all:
w* ← w* + η (x^μ − w*), with learning rate η (< 1), the step size of the update.
Comparison: K-means updates all prototypes and considers all data at a time; it corresponds to EM for Gaussian mixtures in the limit of zero width. Competitive VQ updates only the winner, with random sequential presentation of single examples (stochastic gradient descent).
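The winner-takes-all step can be sketched as follows (a single stochastic update; η and the toy data below are illustrative):

```python
import numpy as np

def competitive_vq_step(prototypes, x, eta=0.05):
    """Winner-takes-all update: move only the winner towards the example x."""
    dists = np.sum((prototypes - x) ** 2, axis=1)
    k = int(np.argmin(dists))                    # best matching unit
    prototypes[k] += eta * (x - prototypes[k])   # w* <- w* + eta (x - w*)
    return k
```

Iterating this over a random sequence of examples realizes the stochastic gradient descent mentioned above.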
Quantization error
Competitive VQ (and K-means) aim at optimizing a cost function, here based on the Euclidean distance: assign each data point to its closest prototype and sum the corresponding (squared) distances:
H_VQ = Σ_μ min_k d(w_k, x^μ).
The quantization error measures the quality of the representation and defines a (one) criterion to evaluate and compare the quality of different prototype configurations.
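The quantization error H_VQ can be computed directly (a small NumPy sketch; the broadcasting over a toy data set is illustrative):

```python
import numpy as np

def quantization_error(prototypes, data):
    """H_VQ: sum of squared distances between each point and its winner."""
    # pairwise squared distances, shape (n_data, n_prototypes)
    d2 = np.sum((data[:, None, :] - prototypes[None, :, :]) ** 2, axis=2)
    return float(np.sum(np.min(d2, axis=1)))
```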
VQ and clustering
Remark 1: VQ ≠ clustering. Minimizing the quantization error yields, in general, a representation of the observations in feature space; only in the ideal scenario of well-separated, spherical clusters do the two coincide. Small clusters are irrelevant with respect to the quantization error, which is sensitive to cluster shape and to coordinate transformations (even linear ones).
VQ and clustering
Remark 2: clustering is an ill-defined problem. Is it “obviously three clusters”, or “well, maybe only two?” Our criterion: lower H_VQ → “better clustering”.
VQ and clustering
K = 60: H_VQ = 0 → “the best clustering”? K = 1: the simplest clustering…
H_VQ (and similar criteria) only allow comparison of VQ configurations with the same K! More generally, one needs a heuristic compromise between “error” and “simplicity”.
Competitive learning
Practical issues of VQ training: the outcome depends on the initial prototypes; “dead units” may never win and thus never get updated. Solution: rank-based updates (winner, second, third, …). More generally: local minima of the quantization error lead to an initialization-dependent outcome of training.
Neural Gas (NG)
Many prototypes (a “gas”) represent the density of the observed data [Martinetz, Berkovich, Schulten, IEEE Trans. Neural Netw. 1993].
Introduce rank-based neighborhood cooperativeness: upon presentation of x^μ,
• determine the rank r_k of each prototype with respect to its distance from x^μ
• update all prototypes: w_k ← w_k + η h_λ(r_k) (x^μ − w_k), with neighborhood function h_λ(r) = exp(−r/λ) and rank-based range λ
• potentially anneal λ from large to smaller values.
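A sketch of one NG update, assuming the exponential neighborhood function h_λ(r) = exp(−r/λ) stated above (function name and parameters are illustrative):

```python
import numpy as np

def neural_gas_step(prototypes, x, eta=0.05, lam=1.0):
    """Rank-based update: every prototype moves, damped by its distance rank."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    ranks = np.argsort(np.argsort(d2))   # 0 for the winner, 1 for the second, ...
    h = np.exp(-ranks / lam)             # neighborhood function h_lambda(rank)
    prototypes += eta * h[:, None] * (x - prototypes)
```

Annealing λ → 0 recovers plain winner-takes-all competitive learning.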
Self-Organizing Map (SOM)
[T. Kohonen, Self-Organizing Maps, Springer (2nd edition 1997)]
Neighborhood cooperativeness on a predefined low-dimensional lattice A of neurons, i.e. prototypes. Upon presentation of x^μ:
- determine the winner (best matching unit) at position s in the lattice
- update the winner and its neighborhood: w_r ← w_r + η h_ρ(r, s) (x^μ − w_r), where the range ρ is defined w.r.t. distances in the lattice A.
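A sketch of one SOM step, assuming a Gaussian neighborhood over lattice distances (the grid layout and parameter values are illustrative assumptions):

```python
import numpy as np

def som_step(prototypes, grid_pos, x, eta=0.1, rho=1.0):
    """SOM update: the winner and its lattice neighbors move towards x.
    grid_pos holds each prototype's position in the low-dim. lattice A."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    s = int(np.argmin(d2))                           # best matching unit
    lattice_d2 = np.sum((grid_pos - grid_pos[s]) ** 2, axis=1)
    h = np.exp(-lattice_d2 / (2 * rho ** 2))         # neighborhood in the lattice
    prototypes += eta * h[:, None] * (x - prototypes)
    return s
```

Note that the neighborhood is evaluated in the lattice A, not in feature space; this is what produces the topology-preserving map.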
Self-Organizing Map
The lattice deforms, reflecting the density of the observations. The SOM provides a topology-preserving low-dimensional representation, e.g. for inspection and visualization of structured datasets.
Self-Organizing Map
Illustration: the Iris flower data set [Fisher, 1936]: 4 numerical features representing Iris flowers from 3 different species. A SOM (4 × 6 prototypes in a 2-dim. grid) is trained on 150 samples (without class label information). Component planes: 4 arrays representing the prototype values.
Self-Organizing Map
U-Matrix: elements U_r = average distance d(w_r, w_s) over the nearest-neighbor sites s. The U-Matrix reflects the cluster structure: larger U at cluster borders.
Post-labelling: assign each prototype to the majority class of the data it wins (Setosa, Versicolor, Virginica; some units remain undefined).
Here: Setosa is well separated from Virginica / Versicolor.
Vector Quantization
Remarks:
- the approaches were not presented in historical order
- many extensions of the basic concept exist, e.g. the Generative Topographic Map (GTM), a probabilistic formulation of the mapping to a low-dim. lattice [Bishop, Svensen, Williams, 1998]
- SOM and NG have been formulated for specific types of data: time series, “non-vectorial” relational data, graphs and trees.
3. Supervised Learning
Potential aims:
- classification: assign observations (data) to categories or classes, as inferred from labeled training data
- regression: assign a continuous target value to an observation, likewise inferred from labeled training data
- prediction: predict the evolution of a time series (sequence), inferred from observations of its history.
Distance-based classification
Assignment of data (objects, observations, …) to one or several classes (categories, labels), crisp or soft, based on comparison with reference data (samples, prototypes) in terms of a distance measure (dissimilarity, metric).
Representation of the data (a key step!):
- collection of qualitative / quantitative descriptors
- vectors of numerical features
- sequences, graphs, functional data
- relational data, e.g. in terms of pairwise (dis-)similarities.
K-NN classifier
A simple distance-based classifier:
- store a set of labeled examples
- classify a query according to the label of its nearest neighbor (or the majority among its K nearest neighbors)
- local decision boundary according to (e.g.) Euclidean distances
- piece-wise linear class borders, parameterized by all examples.
Conceptually simple, no training required, one parameter (K); but expensive in storage and computation, sensitive to “outliers”, and can result in overly complex decision boundaries.
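The majority-vote rule can be sketched in a few lines (toy data and names are illustrative):

```python
import numpy as np
from collections import Counter

def knn_classify(train_x, train_y, query, k=3):
    """Label a query by majority vote among its k nearest training examples."""
    d2 = np.sum((train_x - query) ** 2, axis=1)   # squared Euclidean distances
    nearest = np.argsort(d2)[:k]                  # indices of the k nearest
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Note that every stored example parameterizes the decision boundary, which is the source of both the storage cost and the outlier sensitivity mentioned above.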
Prototype-based classification
A prototype-based classifier [Kohonen 1990, 1997]:
- represent the data by one or several prototypes per class
- classify a query according to the label of the nearest prototype (or alternative schemes)
- local decision boundaries according to (e.g.) Euclidean distances
- piece-wise linear class borders, parameterized by the prototypes.
Less sensitive to outliers, lower storage needs, little computational effort in the working phase; but a training phase is required in order to place the prototypes, and there is a model selection problem (number of prototypes per class, etc.).
Nearest Prototype Classifier
A set of prototypes w_k carrying class labels c(w_k). The nearest prototype classifier (NPC), based on a dissimilarity / distance measure: given x, determine the winner w* = argmin_k d(w_k, x) and assign x to the class c(w*).
Reasonable requirements: d(w, x) ≥ 0 and d(w, w) = 0. The most prominent example: the (squared) Euclidean distance.
Learning Vector Quantization
N-dimensional data, feature vectors x^μ.
∙ identification of prototype vectors from labeled example data
∙ distance-based classification (e.g. Euclidean)
Heuristic scheme: LVQ1 [Kohonen, 1990, 1997]
• initialize prototype vectors for the different classes
• present a single example
• identify the winner (closest prototype)
• move the winner closer towards the data point (same class) or away from it (different class).
Learning Vector Quantization
∙ distance-based classification [here: Euclidean distances]
∙ tessellation of the feature space [piece-wise linear]
∙ aim: discrimination of classes (≠ vector quantization or density estimation)
∙ generalization ability: correct classification of new data.
LVQ1
Iterative training procedure [Kohonen, 1990, 1997]: randomized initial prototypes, e.g. close to the class-conditional means; sequential presentation of labelled examples; the winner takes it all. LVQ1 update step:
w* ← w* ± η (x^μ − w*), with + if the class labels agree and − if they differ; η is the learning rate.
Many heuristic variants / modifications exist:
- learning rate schedules η(t)
- updating more than one prototype per step.
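One LVQ1 step can be sketched as follows (the attraction/repulsion sign implements the update rule above; names and toy values are illustrative):

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, eta=0.05):
    """LVQ1: move the winner towards x if the labels agree, away otherwise."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    k = int(np.argmin(d2))                       # winner
    sign = 1.0 if proto_labels[k] == y else -1.0
    prototypes[k] += sign * eta * (x - prototypes[k])
    return k
```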
LVQ1-like update for a generalized distance measure d(w, x):
w* ← w* ∓ η ∂d(w*, x^μ)/∂w*, with the sign chosen such that the update decreases (increases) the distance if the classes coincide (are different).
Generalized LVQ
One example of cost-function-based training: GLVQ [Sato & Yamada, 1995]. Minimize
E = Σ_μ Φ( (d_J − d_K) / (d_J + d_K) ),
with the two winning prototypes: w_J, the closest prototype with the same class label as x^μ (distance d_J), and w_K, the closest prototype with a different label (distance d_K).
A linear Φ favors large-margin separation of the classes; a sigmoidal Φ (linear for small arguments) makes E approximate the number of misclassifications.
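The cost function, with Φ taken as the identity (one of the choices mentioned above), can be sketched as:

```python
import numpy as np

def glvq_cost(prototypes, proto_labels, data, labels):
    """GLVQ cost E = sum_mu Phi((dJ - dK)/(dJ + dK)), with Phi = identity.
    dJ: distance to the closest correct prototype, dK: closest wrong one."""
    E = 0.0
    for x, y in zip(data, labels):
        d2 = np.sum((prototypes - x) ** 2, axis=1)
        dJ = np.min(d2[proto_labels == y])
        dK = np.min(d2[proto_labels != y])
        E += (dJ - dK) / (dJ + dK)   # in (-1, 1); negative if classified correctly
    return E
```

Each term lies in (−1, 1) and is negative exactly when the example is classified correctly, which is why minimizing E favors correct, large-margin classification.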
GLVQ training = optimization with respect to the prototype positions, e.g. by single example presentation: for a stochastic sequence of examples, update the two winning prototypes per step, based on a non-negative, differentiable distance. For the (squared) Euclidean distance, the update moves the prototypes towards / away from the sample,
w_J ← w_J + η · [2 d_K / (d_J + d_K)²] · 2 (x^μ − w_J),
w_K ← w_K − η · [2 d_J / (d_J + d_K)²] · 2 (x^μ − w_K),
with the prefactors resulting from the derivative of E.
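A sketch of one stochastic gradient step (Φ = identity, squared Euclidean distance; the prefactors 4 d_K/(d_J+d_K)² and 4 d_J/(d_J+d_K)² combine the derivative of the cost term with that of the distance):

```python
import numpy as np

def glvq_step(prototypes, proto_labels, x, y, eta=0.05):
    """Stochastic gradient step on E: attract the closest correct prototype
    w_J, repel the closest wrong prototype w_K."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    correct = np.where(proto_labels == y)[0]
    wrong = np.where(proto_labels != y)[0]
    J = correct[np.argmin(d2[correct])]
    K = wrong[np.argmin(d2[wrong])]
    dJ, dK = d2[J], d2[K]
    denom = (dJ + dK) ** 2
    prototypes[J] += eta * (4 * dK / denom) * (x - prototypes[J])  # attraction
    prototypes[K] -= eta * (4 * dJ / denom) * (x - prototypes[K])  # repulsion
```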
Prototype / distance-based classifiers
+ intuitive interpretation: prototypes are defined in feature space
+ natural for multi-class problems
+ flexible, easy to implement
+ frequently applied in a variety of practical problems
- often based on purely heuristic arguments, or on cost functions with an unclear relation to the classification error
- model / parameter selection (number of prototypes, learning rate, …)
Important issue: which is the ‘right’ distance measure? Features may scale differently, be of completely different nature, or be highly correlated / dependent… is the simple Euclidean distance appropriate?
Distance measures
Fixed distance measures:
- select a distance measure according to prior knowledge
- or make a data-driven choice in a preprocessing step
- determine prototypes for the given distance
- compare the performance of various measures.
Example: divergence-based LVQ.
Relevance Matrix LVQ [Bojer et al., 2001] [Hammer et al., 2002] [Schneider et al., 2009]
Generalized quadratic distance in LVQ:
d_Λ(w, x) = (x − w)ᵀ Λ (x − w), with Λ = Ωᵀ Ω, normalization Σ_j Λ_jj = 1.
Variants: one global, several local, or class-wise relevance matrices → piecewise quadratic decision boundaries; diagonal matrices: single feature weights; rectangular Ω: discriminative low-dim. representation, e.g. for visualization [Bunte et al., 2012]. Possible constraints: rank control, sparsity, …
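The parameterization Λ = ΩᵀΩ and the trace normalization can be sketched as (function names are illustrative):

```python
import numpy as np

def relevance_distance(omega, w, x):
    """Generalized quadratic distance d(w, x) = (x - w)^T Lambda (x - w)
    with Lambda = Omega^T Omega, which guarantees d >= 0."""
    diff = omega @ (x - w)   # distance measured in the Omega-transformed space
    return float(np.dot(diff, diff))

def normalize_relevances(omega):
    """Rescale Omega so that the diagonal relevances sum to 1 (Tr(Lambda) = 1)."""
    lam = omega.T @ omega
    return omega / np.sqrt(np.trace(lam))
```

Writing Λ = ΩᵀΩ guarantees positive semi-definiteness, so d_Λ is a valid (pseudo-)distance for any real matrix Ω.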
Generalized Relevance Matrix LVQ
Optimization of the prototypes and the distance measure: Generalized Matrix LVQ (GMLVQ) performs gradient descent of the GLVQ cost function with respect to both the prototype positions and the matrix Ω.
Heuristic interpretation
d_Λ(w, x) = [Ω (x − w)]² is the standard Euclidean distance for the linearly transformed features Ωx. The diagonal element Λ_jj summarizes the contribution of the original dimension j, i.e. the relevance of original feature j for the classification.
The interpretation implicitly assumes that the features have equal order of magnitude, e.g. after a z-score transformation x_j → (x_j − μ_j)/σ_j (with averages μ_j, σ_j over the data set).
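The z-score transformation mentioned above is a one-liner per feature:

```python
import numpy as np

def z_score(data):
    """Transform each feature to zero mean and unit variance over the data set,
    so that the relevances Lambda_jj are comparable across features."""
    mu = data.mean(axis=0)
    sigma = data.std(axis=0)
    return (data - mu) / sigma
```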
Relevance Matrix LVQ
The Iris flower data revisited (supervised analysis by GMLVQ): resulting relevance matrix and GMLVQ prototypes.
Relevance Matrix LVQ
Empirical observation / theory: the relevance matrix becomes singular, dominated by very few eigenvectors. This prevents over-fitting in high-dimensional feature spaces and facilitates discriminative visualization of datasets. It confirms: Setosa is well separated from Virginica / Versicolor.
A multi-class example
Classification of coffee samples based on hyperspectral data (256-dim. feature vectors) [U. Seiffert et al., IFF Magdeburg]. Visualization: projection of data and prototypes onto the first and second eigenvectors of the relevance matrix.
Relevance Matrix LVQ
Optimization of prototype positions and distance measure(s) in one training process (≠ pre-processing).
Motivation:
- improved performance: weighting of features and pairs of features
- simplified classification schemes: elimination of non-informative, noisy features; discriminative low-dimensional representation
- insight into the data / classification problem: identification of the most discriminative features; intrinsic low-dim. representation, visualization.
Related schemes
Linear Discriminant Analysis (LDA): one prototype per class plus a global matrix, but a different objective function!
Relevance learning in related supervised schemes: RBF networks [Backhaus et al., 2012], Neighborhood Component Analysis [Goldberger et al., 2005], Large Margin Nearest Neighbor [Weinberger et al., 2006, 2010], and many more.
Relevance LVQ variants: local, rectangular, structured, restricted relevance matrices; relevance matrices for visualization, functional data, texture recognition, etc.; relevance learning in Robust Soft LVQ, Supervised NG, etc.; combination of distances for mixed data.
Links
Matlab code: Relevance and Matrix adaptation in Learning Vector Quantization (GRLVQ, GMLVQ and LiRaM LVQ): http://matlabserver.cs.rug.nl/gmlvqweb/
A no-nonsense beginners’ tool for GMLVQ: http://www.cs.rug.nl/~biehl/gmlvq (see also: Tutorial, Thursday 9:30)
Pre- and re-prints etc.: http://www.cs.rug.nl/~biehl/
Questions?