Dimensionality Reduction and Metric Learning
STAT 946, Ali Ghodsi, Spring 2009

Knowledge is power
1113152312050407073009301 615230518512351481245284 6547832594759089614824800 6547328409832570943854365 8857843750439670134768659 3598534976943764309670431 6043764560582114362459587 9857348957069576532853784 7078748290698576058635487 9408739826476458404943265 2437346853186438764573467 3474734645064482543763846 5484573648324532986493264 9324693425045032504327575 6873659843556986985698696

Two Problems
• Classical statistics: infer information from small data sets (not enough data).
• Machine learning: infer information from large data sets (too much data).

Other Names for ML
• Data mining, applied statistics
• Adaptive (stochastic) signal processing
• Probabilistic planning or reasoning
All of these are closely related to the second problem.

Applications
Machine learning is most useful when the structure of the task is not well understood but can be characterized by a dataset with strong statistical regularity.
• Search and recommendation (e.g. Google, Amazon)
• Automatic speech recognition and speaker verification
• Text parsing
• Face identification
• Tracking objects in video
• Financial prediction, fraud detection (e.g. credit cards)
• Medical diagnosis

Tasks
• Supervised learning: given examples of inputs and corresponding desired outputs, predict outputs on future inputs. E.g.: classification, regression.
• Unsupervised learning: given only inputs, automatically discover representations, features, structure, etc. E.g.: clustering, dimensionality reduction, feature extraction.

Dimensionality Reduction
• Dimensionality: the number of measurements available for each item in a data set.
• The dimensionality of real-world items is very high. For example, the dimensionality of a 600 by 600 image is 360,000.
• The key to analyzing data is comparing these measurements to find relationships among this plethora of data points. Usually these measurements are highly redundant, and relationships among data points are predictable.
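
To make the arithmetic concrete, a minimal NumPy sketch (the image here is a synthetic stand-in for a real photograph):

```python
import numpy as np

# A 600-by-600 grayscale image: one measurement per pixel.
image = np.random.rand(600, 600)  # synthetic stand-in

# Flattening makes the image a single point in a
# 360,000-dimensional space, since 600 * 600 = 360,000.
x = image.reshape(-1)
print(x.shape)  # (360000,)
```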

Dimensionality Reduction
• Knowing the value of a pixel in an image, it is easy to predict the value of nearby pixels, since they tend to be similar.
• Knowing that the word "corporation" occurs often in articles about economics, but not very often in articles about art and poetry, it is easy to predict that it will not occur very often in articles about love.
• Although there are many measurements per item, far fewer of them are likely to vary independently. A representation that keeps only the directions likely to vary allows humans to quickly and easily recognize changes in high-dimensional data.
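
A minimal sketch of the redundancy claim, assuming synthetic data: 100 measurements per item that are noisy mixtures of only 3 hidden signals, so nearly all of the variance concentrates in 3 directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 items, 100 highly redundant measurements: each column is a
# noisy mixture of just 3 underlying hidden signals.
hidden = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 100))
X = hidden @ mixing + 0.01 * rng.standard_normal((500, 100))

# Singular values of the centered data show how much variance
# each orthogonal direction carries.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)
print(explained[:5].round(3))  # almost all variance in the first 3
```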

Data Representation

Data Representation
[Figure: an image shown as a grid of pixel intensities, with values such as 1, 0, and 0.5 arranged in a matrix.]

[Figure: 103 images of 23 by 28 pixels (644 values each) stacked into a 644 by 103 matrix, projected through a 644 by 2 map onto a 2 by 103 matrix of low-dimensional coordinates; each image becomes a 2 by 1 point such as (-2.19, -3.19) or (-0.02, 1.02).]
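
A minimal sketch of a projection with these shapes, assuming PCA on synthetic data (the figure itself does not say which method produced the map):

```python
import numpy as np

rng = np.random.default_rng(0)

# 103 images, each 23 x 28 = 644 pixels, stored as columns of X.
X = rng.random((644, 103))

# Center, then take the top-2 left singular vectors as the 644-by-2 map.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = U[:, :2]    # 644 x 2 projection
Y = P.T @ Xc    # 2 x 103 low-dimensional coordinates
print(Y[:, 0])  # one image's 2-by-1 coordinates
```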

Arranging words: each word was initially represented by a high-dimensional vector that counted the number of times it appeared in different encyclopedia articles. Words with similar contexts are collocated.
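
A minimal sketch of this construction, with toy sentences standing in for encyclopedia articles and a truncated SVD (an assumption; the slide does not name the embedding method) placing words with similar contexts near each other:

```python
import numpy as np

# Toy stand-ins for encyclopedia articles.
articles = [
    "stocks markets corporation economy trade",
    "corporation economy markets finance stocks",
    "poetry love art painting verse",
    "art verse love poetry painting",
]
vocab = sorted({w for a in articles for w in a.split()})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-article count matrix: one high-dimensional row per word.
C = np.zeros((len(vocab), len(articles)))
for j, a in enumerate(articles):
    for w in a.split():
        C[index[w], j] += 1

# Truncated SVD embeds each word in 2-D; words that occur in
# similar articles receive nearby coordinates.
U, s, Vt = np.linalg.svd(C, full_matrices=False)
coords = U[:, :2] * s[:2]
for w in ("corporation", "stocks", "poetry", "love"):
    print(w, coords[index[w]].round(2))
```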

Different Features

Glasses vs. No Glasses

Beard vs. No Beard

Beard Distinction

Glasses Distinction

Multiple-Attribute Metric

Embedding of sparse music similarity graph (Platt, 2004)

Reinforcement learning (Mahadevan and Maggioni, 2005)

Semi-supervised learning: use a graph-based discretization of the manifold to infer missing labels; build classifiers from the bottom eigenvectors of the graph Laplacian. (Belkin & Niyogi, 2004; Zien et al., Eds., 2005)
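
A minimal sketch of the Laplacian-eigenvector idea, assuming a simple Gaussian-kernel graph on synthetic 1-D data (not the authors' exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two noisy clusters along a line (the "manifold").
X = np.concatenate([rng.normal(0.0, 0.3, 20), rng.normal(5.0, 0.3, 20)])

# Similarity graph from a Gaussian kernel, no self-edges.
W = np.exp(-(X[:, None] - X[None, :]) ** 2)
np.fill_diagonal(W, 0.0)

# Graph Laplacian L = D - W. Its bottom eigenvectors vary smoothly
# over the graph, so they serve as features for a classifier that
# spreads a few known labels across the manifold.
D = np.diag(W.sum(axis=1))
L = D - W
eigvals, eigvecs = np.linalg.eigh(L)
features = eigvecs[:, :3]  # bottom eigenvectors (first is ~constant)
print(features[:3].round(3))
```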

Learning correspondences: how can we learn manifold structure that is shared across multiple data sets? c et al, 2003, 2005

Mapping and robot localization (Bowling, Ghodsi & Wilkinson, 2005; Ham, Lin & D. D. Lee, 2005)

The Big Picture

Manifold and Hidden Variables

Reading
• Journals: Neural Computation, JMLR, Machine Learning, IEEE PAMI
• Conferences: NIPS, UAI, ICML, AISTATS, IJCAI, IJCNN
• Vision: CVPR, ECCV, SIGGRAPH
• Speech: Eurospeech, ICSLP, ICASSP
• Online: CiteSeer, Google
• Books:
  – Elements of Statistical Learning, Hastie, Tibshirani, Friedman
  – Learning from Data, Cherkassky, Mulier
  – Machine Learning, Mitchell
  – Neural Networks for Pattern Recognition, Bishop
  – Introduction to Graphical Models, Jordan et al.