Online Algorithms
Lecturer: Yishay Mansour
Elad Walach, Alex Roitenberg

  • Slides: 53

Introduction. Up until now, our algorithms started with the whole input and worked with it. Suppose instead that the input arrives a little at a time, and we need an instant response.

Oranges example

Introduction

Linear Separators

Linear separator

Perceptron

The perceptron algorithm
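The algorithm can be sketched in a few lines. This is a minimal sketch of the standard mistake-driven perceptron, assuming labels in {-1, +1}, a zero initial weight vector, and the update w ← w + y·x applied only on mistakes:

```python
# Minimal perceptron sketch (assumed details: labels in {-1, +1},
# zero initial weights, update w <- w + y*x only on mistakes).
def perceptron(stream):
    w = None
    mistakes = 0
    for x, y in stream:
        if w is None:
            w = [0.0] * len(x)
        score = sum(wi * xi for wi, xi in zip(w, x))
        # Predict sign(w . x); a mistake is y * (w . x) <= 0.
        if y * score <= 0:
            mistakes += 1
            w = [wi + y * xi for wi, xi in zip(w, x)]
    return w, mistakes
```

The weights change only when the current hypothesis errs, which is what the mistake bound analysis counts.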

Mistake Bound Theorem

Mistake Bound Proof

Proof Cont.

Proof Cont. From Claim 1: after M mistakes, w·w* ≥ Mγ. From Claim 2: ‖w‖² ≤ M. Also: w·w* ≤ ‖w‖·‖w*‖ = ‖w‖ by Cauchy–Schwarz. Since ‖w‖ ≤ √M, combining: Mγ ≤ √M, so M ≤ 1/γ².

The world is not perfect What if there is no perfect separator?

The world is not perfect

Perceptron for maximizing margins

Perceptron Algorithm (maximizing margin)

Mistake Bound Theorem: the proof is similar to the perceptron proof. Claim 1 remains the same; we only have to bound the growth of ‖w‖ (Claim 2).

Mistake bound proof

Proof Cont.

The mistake bound model

CON Algorithm

The bounds of CON
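CON can be sketched over a finite hypothesis class (an illustrative sketch, not the lecture's exact pseudocode; hypotheses are assumed to be functions from examples to {0, 1}, with the target in the class): predict with any hypothesis consistent with all past examples. Each mistake eliminates at least the current hypothesis, so there are at most |C| − 1 mistakes.

```python
def con(hypotheses, stream):
    # CON sketch: predict with an arbitrary hypothesis consistent with
    # all past examples; a mistake rules out the current hypothesis.
    # Assumes the target concept is in `hypotheses`.
    consistent = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        h = consistent[0]  # any consistent hypothesis
        if h(x) != y:
            mistakes += 1
        # Keep only hypotheses that agree with the revealed label.
        consistent = [g for g in consistent if g(x) == y]
    return mistakes
```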

HAL – halving algorithm
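HAL can be sketched the same way (again an illustrative sketch, assuming a finite class that contains the target): predict by majority vote of the still-consistent hypotheses. Every mistake eliminates at least half of them, so the number of mistakes is at most log₂ |C|.

```python
def halving(hypotheses, stream):
    # HAL sketch: predict the majority vote of all consistent hypotheses;
    # each mistake halves (at least) the version space, so mistakes <= log2|C|.
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```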

Mistake Bound model and PAC. The mistake bound model generates strong online algorithms. In the past we have seen PAC. The restrictions of the mistake bound model are much harsher than those of PAC. If we know that A learns C in the mistake bound model, can A learn C in the PAC model?

Mistake Bound model and PAC

Conservative equivalent of Mistake Bound Algorithm
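The construction can be sketched as a wrapper (the predict/update interface below is an assumption for illustration): the wrapped learner sees only the examples on which it errs, so its hypothesis changes only on mistakes, and its mistake bound is unchanged.

```python
class Conservative:
    # Sketch of the conservative transformation: wrap any online learner
    # exposing predict(x) and update(x, y) (an assumed interface) so that
    # the underlying learner is updated only when it makes a mistake.
    def __init__(self, learner):
        self.learner = learner

    def predict(self, x):
        return self.learner.predict(x)

    def observe(self, x, y):
        if self.learner.predict(x) != y:
            self.learner.update(x, y)  # update only on a mistake
```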

Building Apac inconsistent …

Building Apac

Disjunction of Conjunctions

Disjunction of Conjunctions. We have proven that every algorithm in the mistake bound model can be converted to a PAC algorithm. Let's look at some algorithms in the mistake bound model.

Disjunction Learning

Example

Mistake Bound Analysis

Mistake analysis proof. At the first mistake we eliminate n literals. At any further mistake we eliminate at least 1 literal. L0 has 2n literals, so we can make at most n+1 mistakes.
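The elimination algorithm this analysis refers to can be sketched as follows (assuming boolean example vectors; the literal (i, True) stands for x_i and (i, False) for its negation):

```python
def learn_disjunction(n, stream):
    # Start with all 2n literals: (i, True) means x_i, (i, False) means not x_i.
    literals = {(i, b) for i in range(n) for b in (True, False)}
    mistakes = 0
    for x, y in stream:
        # Hypothesis: predict 1 iff some remaining literal is satisfied.
        pred = 1 if any((x[i] == 1) == b for (i, b) in literals) else 0
        if pred != y:
            mistakes += 1
        if y == 0:
            # Negative example: eliminate every literal it satisfies.
            literals = {(i, b) for (i, b) in literals if (x[i] == 1) != b}
    return literals, mistakes
```

Target literals are never satisfied by a negative example, so they are never eliminated; each false positive removes at least one literal, matching the n+1 bound above.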

k-DNF

k-DNF classification
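k-DNF reduces to disjunction learning over expanded features (a standard reduction, sketched here): create one new variable per conjunction of at most k literals — there are O(n^k) of them — and run the disjunction learner on the expanded vectors.

```python
from itertools import combinations, product

def expand_k_conjunctions(x, k):
    # Map a boolean vector x to one feature per conjunction of <= k literals.
    n = len(x)
    feats = []
    for size in range(1, k + 1):
        for idxs in combinations(range(n), size):
            for signs in product((True, False), repeat=size):
                # The conjunction holds iff every chosen literal is satisfied.
                feats.append(1 if all((x[i] == 1) == b
                                      for i, b in zip(idxs, signs)) else 0)
    return feats
```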

Winnow
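A sketch of Winnow for monotone disjunctions over n variables (the variant assumed here: weights start at 1, predict 1 iff the active weight sum is at least n, double active weights on a false negative, halve them on a false positive):

```python
def winnow(n, stream):
    # Winnow sketch (assumed variant: threshold n, promotion x2, demotion /2).
    w = [1.0] * n
    mistakes = 0
    for x, y in stream:
        pred = 1 if sum(wi for wi, xi in zip(w, x) if xi) >= n else 0
        if pred != y:
            mistakes += 1
            if y == 1:   # false negative: promote active weights
                w = [wi * 2 if xi else wi for wi, xi in zip(w, x)]
            else:        # false positive: demote active weights
                w = [wi / 2 if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

The multiplicative updates are what the mistake bound analysis below exploits: the mistake count depends only logarithmically on the number of variables.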

Mistake bound analysis

Winnow Proof: Definitions

Winnow Proof: Positive Mistakes

Winnow Proof: Negative Mistakes

Winnow Proof: Cont.

What should you know? I

What should you know? II

Questions?