Semi-Supervised Learning

Outline
- Fully supervised learning (traditional classification)
- Semi-supervised learning (or classification)
  - LU learning: learning with a small set of Labeled examples and a large set of Unlabeled examples
  - PU learning: learning with Positive and Unlabeled examples (no labeled negative examples)
  - One-class learning

LU Learning: Learning from a Small Labeled Set and a Large Unlabeled Set

Unlabeled Data
- One of the bottlenecks of classification is the labeling of a large set of examples (data records or text documents).
  - Often done manually
  - Time consuming
- Can we label only a small number of examples and make use of a large number of unlabeled examples to learn?
- Possible in many cases.

Why are unlabeled data useful?
- Unlabeled data are usually plentiful; labeled data are expensive.
- Unlabeled data provide information about the joint probability distribution over words and word collocations (in texts).
- We will use text classification to study this problem.

How to use unlabeled data
- One way is to use the EM algorithm.
  - EM: Expectation-Maximization
- The EM algorithm is a popular iterative algorithm for maximum likelihood estimation in problems with missing data: it maximizes a likelihood function so that, under the assumed statistical model, the observed data are most probable.
- The EM algorithm consists of two steps:
  - Expectation step: fill in the missing data (probabilistically).
  - Maximization step: calculate a new maximum a posteriori estimate for the parameters.

Filling missing values probabilistically

Pr(T|d)   Pr(F|d)
1         0
1         0
1         0
0         1
0         1
0         1
0.5       0.5
0.2       0.8
...

Incorporating unlabeled data with EM (Nigam et al, 2000)
- Basic EM
- Augmented EM with weighted unlabeled data
- Augmented EM with multiple mixture components per class

Algorithm Outline
1. Train a classifier with only the labeled documents.
2. Use it to probabilistically classify the unlabeled documents.
3. Use ALL the documents to train a new classifier.
4. Iterate steps 2 and 3 to convergence.
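
For illustration, a minimal sketch of this loop using scikit-learn's MultinomialNB (not the original course code): X_l, y_l are the labeled bag-of-words matrix and labels, X_u the unlabeled matrix, all assumed sparse; soft labels enter the M step via sample_weight, and the loop runs a fixed number of iterations rather than testing convergence.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.naive_bayes import MultinomialNB

def em_nb(X_l, y_l, X_u, n_iters=10):
    """EM with naive Bayes: train on the labeled docs, then alternate
    E-step (soft-label the unlabeled docs) and M-step (retrain on all)."""
    clf = MultinomialNB(alpha=1.0)              # alpha=1.0: Laplace smoothing
    clf.fit(X_l, y_l)                           # step 1: labeled docs only
    classes = clf.classes_
    for _ in range(n_iters):
        # E-step: probabilistically classify the unlabeled documents
        post = clf.predict_proba(X_u)           # columns follow clf.classes_
        # M-step: retrain on ALL documents; each unlabeled doc contributes
        # to every class, weighted by its posterior probability
        X_all = vstack([X_l] + [X_u] * len(classes))
        y_all = np.concatenate([y_l] + [np.full(X_u.shape[0], c) for c in classes])
        w_all = np.concatenate([np.ones(X_l.shape[0])] +
                               [post[:, j] for j in range(len(classes))])
        clf = MultinomialNB(alpha=1.0)
        clf.fit(X_all, y_all, sample_weight=w_all)
    return clf
```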

Basic Algorithm (figure: the formal statement of the EM procedure, Fig. 5.1)

Basic EM: E Step & M Step
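
Written out in the naive Bayes mixture notation of (Nigam et al, 2000), with vocabulary V, classes c_j, documents d_i, and N_{ti} the count of word w_t in d_i, the two steps take the following standard form (with Laplace smoothing):

```latex
% E step: probabilistically label each document d_i
\Pr(c_j \mid d_i;\hat\theta)
  = \frac{\Pr(c_j;\hat\theta)\prod_{k}\Pr(w_{d_i,k}\mid c_j;\hat\theta)}
         {\sum_{r}\Pr(c_r;\hat\theta)\prod_{k}\Pr(w_{d_i,k}\mid c_r;\hat\theta)}

% M step: re-estimate the parameters from all documents
\Pr(w_t \mid c_j;\hat\theta)
  = \frac{1+\sum_{i} N_{ti}\,\Pr(c_j\mid d_i)}
         {|V|+\sum_{s=1}^{|V|}\sum_{i} N_{si}\,\Pr(c_j\mid d_i)},
\qquad
\Pr(c_j;\hat\theta) = \frac{1+\sum_{i}\Pr(c_j\mid d_i)}{|C|+|D|}
```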

Experimental Evaluation
- Newsgroup postings: 20 newsgroups, 1,000 documents per group.
- Web page classification: student, faculty, course, project; 4,199 web pages.
- Reuters newswire articles: 12,902 articles, 10 main topic categories.

20 Newsgroups (results figures)

The problem
- It has been shown that the EM algorithm in Fig. 5.1 works well if two assumptions are satisfied:
  - The data (or the text documents) are generated by a mixture model.
  - There is a one-to-one correspondence between mixture components and document classes.
- The two mixture model assumptions, however, can cause major problems when they do not hold.
- In practice, it may happen that a class (or topic) contains a number of sub-classes (or sub-topics).
  - For example, the class Sports may contain documents about different sub-classes of sports: Baseball, Basketball, Tennis, and Softball.

When the generative model isn't suitable
- Weight labeled and unlabeled data (Nigam et al, 2000).
- Multiple mixture components per class (M-EM): e.g., a class may consist of a number of sub-topics or clusters.
- Results of an example using the 20 newsgroups data:
  - 40 labeled, 2,360 unlabeled, 1,600 test documents
  - Accuracy: NB 68%, EM 59.6%
- Solutions:
  - M-EM (Nigam et al, 2000): cross-validation on the training data to determine the number of components.
  - Partitioned-EM (Cong et al, 2004): uses hierarchical clustering; it does significantly better than M-EM.

Weighting the influence of unlabeled examples by a factor
- The M step is modified so that unlabeled documents are down-weighted (see the sketch below).
- The prior probabilities also need to be weighted.
- This approach, however, is not very effective in practice.
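
A sketch of the weighted M step in the style of (Nigam et al, 2000), assuming a weight λ ∈ [0, 1] with Λ(i) = λ for unlabeled documents and Λ(i) = 1 for labeled ones (notation reconstructed under these assumptions; details may differ slightly):

```latex
% Lambda(i) = lambda for unlabeled documents, 1 for labeled ones
\Pr(w_t \mid c_j;\hat\theta)
  = \frac{1+\sum_{i}\Lambda(i)\,N_{ti}\,\Pr(c_j\mid d_i)}
         {|V|+\sum_{s=1}^{|V|}\sum_{i}\Lambda(i)\,N_{si}\,\Pr(c_j\mid d_i)},
\qquad
\Pr(c_j;\hat\theta)
  = \frac{1+\sum_{i}\Lambda(i)\,\Pr(c_j\mid d_i)}{|C|+|D_L|+\lambda\,|D_U|}
```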

Another approach: Co-training
- Again, learning with a small labeled set and a large unlabeled set.
- The attributes describing each example or instance can be partitioned into two subsets, each of which is sufficient for learning the target function f.
  - E.g., hyperlinks (anchor text) and page contents in Web page classification.
- Two classifiers (f1 and f2) can be learned from the two separate portions of the same data.

Co-training Algorithm [Blum and Mitchell, 1998]
- Key idea: f1 adds examples to the labeled set that is used for learning f2 based on the x2 view, and vice versa (see the sketch below).
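
A minimal sketch of the co-training loop, assuming two feature views X1 and X2 of the same documents, index lists labeled/unlabeled, binary labels in {0, 1}, and naive Bayes as the base learner; the per-round counts p and n are illustrative:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def co_train(X1, X2, y, labeled, unlabeled, rounds=30, p=1, n=3):
    """Co-training [Blum & Mitchell, 1998]: the classifier for each view
    labels its most confident unlabeled examples (p positive, n negative
    per round); these are added to the shared labeled set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    y = np.asarray(y).copy()
    f1 = f2 = None
    for _ in range(rounds):
        f1 = MultinomialNB().fit(X1[labeled], y[labeled])
        f2 = MultinomialNB().fit(X2[labeled], y[labeled])
        for f, X in ((f1, X1), (f2, X2)):
            if not unlabeled:
                return f1, f2
            proba = f.predict_proba(X[unlabeled])
            picks = {}                      # position in `unlabeled` -> class
            for cls, k in ((1, p), (0, n)):
                col = list(f.classes_).index(cls)
                for pos in np.argsort(-proba[:, col])[:k]:
                    picks.setdefault(int(pos), cls)
            # move the chosen examples from the unlabeled to the labeled set
            for pos in sorted(picks, reverse=True):
                idx = unlabeled.pop(pos)
                y[idx] = picks[pos]
                labeled.append(idx)
    return f1, f2
```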

Co-training: Experimental Results
- Begin with 12 labeled web pages (academic course pages).
- Provide 1,000 additional unlabeled web pages.
- Average error learning from labeled data only: 11.1%. Average error with co-training: 5.0%.
  - Using naive Bayes, the two classifiers' probabilities are multiplied in testing.

                       Page-based classifier   Link-based classifier   Combined classifier
  Supervised training  12.9                    12.4                    11.1
  Co-training          6.2                     11.6                    5.0

Summary of LU learning
- Using unlabeled data can improve the accuracy of a classifier when the data fit the generative model.
- Partitioned-EM and the EM classifier based on a multiple-mixture-components model (M-EM) are more suitable for real data, where a class often contains multiple mixture components.
- Co-training is another effective technique when redundantly sufficient features are available.

PU Learning: Learning from Positive and Unlabeled Examples

Learning from Positive & Unlabeled Data
- Positive examples: one has a set P of examples of a class.
- Unlabeled set: one also has a set U of unlabeled (or mixed) examples, with instances from P's class and also instances not from it (negative examples).
- Build a classifier: build a classifier to classify the examples in U and/or future (test) data.
- Key feature of the problem: no labeled negative training data.
- We call this problem PU learning.

Applications of the problem
- With the growing volume of online texts available through the Web and digital libraries, one often wants to find documents related to one's work or interests.
- For example, given the ICML proceedings,
  - find all machine learning papers from AAAI and IJCAI, which are AI conferences,
  - with no labeling of negative examples from each of these collections.

Direct Marketing
- A company has a database with details of its customers (positive examples), but no information about those who are not its customers, i.e., no negative examples.
- It wants to find people who are similar to its customers for marketing.
- It can buy a database with details of other people (unlabeled examples), some of whom may be potential customers: hidden positive examples.

Are Unlabeled Examples Helpful?
- (Figure: positive examples scattered in the region where x1 < 0 and x2 > 0.)
- The target function is known to be either x1 < 0 or x2 > 0. Which one is it?

Are Unlabeled Examples Helpful? (cont.)
- (Figure: the same positive examples, with unlabeled examples u now filling the surrounding regions.)
- The target function is known to be either x1 < 0 or x2 > 0. Which one is it?
- The function is "not learnable" with only positive examples. However, the addition of unlabeled examples makes it learnable.

Theoretical foundations
- (X, Y): X is the input vector, Y ∈ {1, -1} is the class label.
- f: the classification function.
- We rewrite the probability of error:
    Pr[f(X) ≠ Y] = Pr[f(X) = 1 and Y = -1] + Pr[f(X) = -1 and Y = 1]    (1)
- We have:
    Pr[f(X) = 1 and Y = -1] = Pr[f(X) = 1] - Pr[f(X) = 1 and Y = 1]
                            = Pr[f(X) = 1] - (Pr[Y = 1] - Pr[f(X) = -1 and Y = 1])
- Plugging this into (1), we obtain:
    Pr[f(X) ≠ Y] = Pr[f(X) = 1] - Pr[Y = 1] + 2 Pr[f(X) = -1 | Y = 1] Pr[Y = 1]    (2)

Theoretical foundations (cont.)
- Pr[f(X) ≠ Y] = Pr[f(X) = 1] - Pr[Y = 1] + 2 Pr[f(X) = -1 | Y = 1] Pr[Y = 1]    (2)
- Note that Pr[Y = 1] is constant.
- If we can hold Pr[f(X) = -1 | Y = 1] small, then learning is approximately the same as minimizing Pr[f(X) = 1].
- Holding Pr[f(X) = -1 | Y = 1] small while minimizing Pr[f(X) = 1] is approximately the same as
  - minimizing Pr_U[f(X) = 1]
  - while holding Pr_P[f(X) = 1] ≥ r, where r is the recall Pr[f(X) = 1 | Y = 1]; this is the same as Pr_P[f(X) = -1] ≤ 1 - r,
  provided the set of positive examples P and the set of unlabeled examples U are large enough.
- Theorems 1 and 2 in [Liu et al, 2002] state this formally for the noiseless case and the noisy case.

Put it simply
- It is a constrained optimization problem: a reasonably good generalization (learning) result can be achieved
  - if the algorithm tries to minimize the number of unlabeled examples labeled as positive,
  - subject to the constraint that the fraction of errors on the positive examples is no more than 1 - r.

An illustration
- (Figure: candidate linear separators 1, 2, 3, and 4.) Assume a linear classifier; line 2 is the best solution.

The existing 2-step strategy
- Step 1: Identify a set of reliable negative documents from the unlabeled set.
  - S-EM [Liu et al, 2002] uses a Spy technique.
  - PEBL [Yu et al, 2002] uses a 1-DNF technique.
  - Roc-SVM [Li & Liu, 2003] uses the Rocchio algorithm.
  - ...
- Step 2: Build a sequence of classifiers by iteratively applying a classification algorithm and then selecting a good classifier.
  - S-EM uses the Expectation-Maximization (EM) algorithm, with an error-based classifier selection mechanism.
  - PEBL uses SVM and returns the classifier at convergence, i.e., no classifier selection.
  - Roc-SVM uses SVM with a heuristic method for selecting the final classifier.

The two steps (diagram)
- Step 1: Using only P, identify a set of reliable negatives RN within U; let Q = U - RN.
- Step 2: Use P, RN, and Q to build the final classifier iteratively, or use only P and RN to build a classifier.

Illustration of step 1: find reliable negatives. (figure)

Illustration of step 2: iteration 1. (figure)

Illustration of step 2: iteration 2 (1). (figure)

Illustration of step 2 (contd): iteration 2 (2). (figure)

Illustration of step 2: finally, iteration n. (figure)

Step 1: The Spy technique
- Sample a certain percentage of the positive examples and put them into the unlabeled set to act as "spies".
- Run a classification algorithm assuming all unlabeled examples are negative;
  - we will then know the behavior of the actual positive examples in the unlabeled set through the "spies".
- We can thus extract reliable negative examples from the unlabeled set more accurately (see the sketch below).
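
A sketch of the spy step, assuming sparse document matrices P and U and a naive Bayes classifier; the 15% spy fraction and the small-noise quantile threshold are illustrative choices, not necessarily those of S-EM:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.naive_bayes import MultinomialNB

def spy_reliable_negatives(P, U, spy_frac=0.15, noise=0.05, seed=0):
    """Spy step (sketch): plant spies from P in U, train treating all of
    U as negative, and keep as reliable negatives the unlabeled docs
    scored below (almost) every spy."""
    rng = np.random.default_rng(seed)
    spies = rng.choice(P.shape[0], max(1, int(spy_frac * P.shape[0])),
                       replace=False)
    mask = np.zeros(P.shape[0], dtype=bool)
    mask[spies] = True
    P_rest, S = P[~mask], P[mask]

    X = vstack([P_rest, U, S])
    y = np.r_[np.ones(P_rest.shape[0]),           # positives
              np.zeros(U.shape[0] + S.shape[0])]  # "negatives": U plus spies
    clf = MultinomialNB().fit(X, y)

    pos = list(clf.classes_).index(1)
    t = np.quantile(clf.predict_proba(S)[:, pos], noise)  # noise-tolerant threshold
    return np.where(clf.predict_proba(U)[:, pos] < t)[0]  # indices into U
```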

Step 1: Other methods
- 1-DNF method:
  - Find the set of words W that occur in the positive documents more frequently than in the unlabeled set.
  - Extract those documents from the unlabeled set that do not contain any word in W. These documents form the reliable negative documents.
- The Rocchio method from information retrieval.
- The naive Bayes method.
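
A sketch of the 1-DNF heuristic on sparse term-count matrices P and U; "more frequently" is taken here as higher document frequency, which may differ from the original PEBL details:

```python
import numpy as np

def one_dnf_reliable_negatives(P, U):
    """1-DNF heuristic (sketch): W = words whose document frequency is
    higher among positives than among unlabeled docs; reliable negatives
    are the unlabeled docs containing no word from W."""
    df_pos = np.asarray((P > 0).mean(axis=0)).ravel()   # doc frequency in P
    df_unl = np.asarray((U > 0).mean(axis=0)).ravel()   # doc frequency in U
    W = np.where(df_pos > df_unl)[0]                    # positive-indicative words
    hits = np.asarray((U[:, W] > 0).sum(axis=1)).ravel()
    return np.where(hits == 0)[0]      # docs with no positive-indicative word
```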

Step 2: Running EM or SVM iteratively
(1) Run a classification algorithm iteratively:
  - Run EM using P, RN, and Q until it converges, or
  - Run SVM iteratively using P, RN, and Q until no document in Q can be classified as negative (RN and Q are updated in each iteration), or
  - ...
(2) Select a good classifier from the sequence (classifier selection).
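
A sketch of the iterative SVM variant, with P, RN, and Q as feature matrices; classifier selection is omitted:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.svm import LinearSVC

def iterative_svm(P, RN, Q, max_iters=30):
    """Step 2 with SVM (sketch): train on P vs. RN, move the documents of
    Q that the classifier labels negative into RN, and repeat until no
    document in Q is classified as negative."""
    clf = None
    for _ in range(max_iters):
        X = vstack([P, RN])
        y = np.r_[np.ones(P.shape[0]), -np.ones(RN.shape[0])]
        clf = LinearSVC(C=1.0).fit(X, y)
        if Q.shape[0] == 0:
            break
        neg = clf.predict(Q) == -1
        if not neg.any():
            break                        # convergence: Q has no negatives left
        RN = vstack([RN, Q[neg]])        # grow the reliable-negative set
        Q = Q[~neg]
    return clf
```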

Do they follow the theory?
- Yes, as heuristic methods:
  - Step 1 tries to find some initial reliable negative examples from the unlabeled set.
  - Step 2 tries to identify more and more negative examples iteratively.
- The two steps together form an iterative strategy of increasing the number of unlabeled examples that are classified as negative while maintaining the positive examples correctly classified.

Can SVM be applied directly?
- Can we use SVM to deal directly with the problem of learning with positive and unlabeled examples, without using two steps?
- Yes, with a little re-formulation.

Recall: Support Vector Machines
- Support vector machines (SVMs) are linear functions of the form f(x) = w^T x + b, where w is the weight vector and x is the input vector.
- Let the set of training examples be {(x1, y1), (x2, y2), ..., (xn, yn)}, where xi is an input vector and yi ∈ {1, -1} is its class label.
- To find the linear function:
    Minimize:    (1/2) w^T w
    Subject to:  yi (w^T xi + b) ≥ 1,  i = 1, 2, ..., n

Recall: Soft-margin SVM
- To deal with cases where there may be no separating hyperplane due to noisy labels of both positive and negative training examples, the soft-margin SVM was proposed:
    Minimize:    (1/2) w^T w + C Σ_{i=1..n} ξi
    Subject to:  yi (w^T xi + b) ≥ 1 - ξi,  ξi ≥ 0,  i = 1, 2, ..., n
  where C ≥ 0 is a parameter that controls the amount of training error allowed.

Biased SVM (noiseless case)
- Assume that the first k-1 examples are positive examples (labeled 1), while the rest are unlabeled examples, which we label negative (-1).
- We regard the unlabeled data as negative, accepting that this introduces many errors (noise):
    Minimize:    (1/2) w^T w + C Σ_{i=k..n} ξi
    Subject to:  w^T xi + b ≥ 1,            i = 1, 2, ..., k-1
                 -(w^T xi + b) ≥ 1 - ξi,    ξi ≥ 0,  i = k, k+1, ..., n

Biased SVM (noisy case)
- If we also allow the positive set to have some noisy negative examples, then we have:
    Minimize:    (1/2) w^T w + C+ Σ_{i=1..k-1} ξi + C- Σ_{i=k..n} ξi
    Subject to:  yi (w^T xi + b) ≥ 1 - ξi,  ξi ≥ 0,  i = 1, 2, ..., n
- This turns out to be the same as the asymmetric-cost SVM for dealing with unbalanced data; of course, we have a different motivation.
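
Since the noisy-case formulation coincides with the asymmetric-cost SVM, it can be sketched directly with scikit-learn's per-class weights; the particular c_pos and c_neg values are illustrative, and in practice they are chosen on a validation set using the criterion described next:

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.svm import LinearSVC

def biased_svm(P, U, c_pos=10.0, c_neg=0.5):
    """Biased SVM (sketch): treat all of U as negative, but penalize
    errors on the positives (C+) much more than on the 'negatives' (C-)."""
    X = vstack([P, U])
    y = np.r_[np.ones(P.shape[0]), -np.ones(U.shape[0])]
    # class_weight scales C per class: effective C+ = C * c_pos, C- = C * c_neg
    return LinearSVC(C=1.0, class_weight={1: c_pos, -1: c_neg}).fit(X, y)
```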

Estimating performance
- We need to estimate classification performance in order to select the parameters.
- Since learning from positive and unlabeled examples often arises in retrieval situations, we use the F-score as the classification performance measure: F = 2pr / (p + r) (p: precision, r: recall).
- To get a high F-score, both precision and recall have to be high.
- However, without labeled negative examples, we do not know how to estimate the F-score.

A performance criterion
- Performance criterion: pr / Pr[Y = 1]. It can be estimated directly from the validation set as r² / Pr[f(X) = 1].
  - Recall: r = Pr[f(X) = 1 | Y = 1]
  - Precision: p = Pr[Y = 1 | f(X) = 1]
- To see this, note that
    Pr[f(X) = 1 | Y = 1] Pr[Y = 1] = Pr[Y = 1 | f(X) = 1] Pr[f(X) = 1],
  i.e., r Pr[Y = 1] = p Pr[f(X) = 1]; multiplying both sides by r gives
    r² / Pr[f(X) = 1] = pr / Pr[Y = 1].
- Its behavior is similar to that of the F-score (= 2pr / (p + r)).

A performance criterion (cont.)
- r² / Pr[f(X) = 1]:
  - r can be estimated from the positive examples in the validation set.
  - Pr[f(X) = 1] can be obtained using the full validation set.
- This criterion actually reflects the theory very well.
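
A small helper for estimating the criterion, assuming a validation split with held-out positives val_P and an unlabeled validation part val_U (hypothetical names):

```python
import numpy as np

def pu_score(clf, val_P, val_U):
    """Estimate r^2 / Pr[f(X) = 1]: r from the held-out positives,
    Pr[f(X) = 1] from the full validation set."""
    r = np.mean(clf.predict(val_P) == 1)                 # recall estimate
    pred_all = np.r_[clf.predict(val_P), clf.predict(val_U)]
    p_f1 = np.mean(pred_all == 1)                        # Pr[f(X) = 1] estimate
    return r * r / p_f1 if p_f1 > 0 else 0.0

# e.g. pick the (c_pos, c_neg) pair that maximizes pu_score on validation data
```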

Empirical Evaluation
- Two-step strategy: we implemented a benchmark system, called LPU, which is available at http://www.cs.uic.edu/~liub/LPUdownload.html
  - Step 1: Spy; 1-DNF; Rocchio; naive Bayes (NB)
  - Step 2:
    - EM with classifier selection
    - SVM: run SVM once
    - SVM-I: run SVM iteratively and use the classifier at convergence
    - SVM-IS: run SVM iteratively with classifier selection
- Biased SVM (we used the SVMlight package).

Results of Biased SVM (results figure)

Summary of PU learning
- Gave an overview of the theory on learning with positive and unlabeled examples.
- Described the existing two-step strategy for learning.
- Presented a more principled approach to solve the problem, based on a biased SVM formulation.
- Presented a performance measure, pr / Pr[Y = 1], that can be estimated from data.
- Experimental results using text classification show the superior classification power of Biased-SVM.
- Many new deep learning-based techniques have been proposed recently.

One-class learning
- Another semi-supervised learning paradigm is learning with a single class:
  - no negative data or unlabeled data;
  - usually used for anomaly or novelty detection.
- Traditional techniques are based on SVM, e.g., one-class SVM (OCSVM) and support vector data description (SVDD); a small sketch follows this list.
- New techniques are all based on deep learning.
  - My group has a paper on the topic this year.
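
For illustration, a minimal OCSVM sketch with scikit-learn, assuming training data from the single (normal) class only; the ν and kernel settings are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# train on examples of the single (normal) class only
X_train = np.random.RandomState(0).randn(200, 2)
ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X_train)

# +1 = looks like the training class, -1 = anomaly / novelty
X_test = np.array([[0.1, -0.2], [6.0, 6.0]])
print(ocsvm.predict(X_test))        # e.g. [ 1 -1 ]
```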