Heuristic Optimization Methods Greedy algorithms, Approximation algorithms, and GRASP

Agenda
• Greedy Algorithms – A class of heuristics
• Approximation Algorithms – Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
• GRASP – Greedy Randomized Adaptive Search Procedure
• Other
  – Squeaky Wheel
  – Ruin and Recreate
  – Very Large Neighborhood Search

Greedy Algorithms
• We have previously studied Local Search Algorithms, which can produce heuristic solutions to difficult optimization problems
• Another way of producing heuristic solutions is to apply Greedy Algorithms
• The idea of a Greedy Algorithm is to construct a solution from scratch, choosing at each step the item bringing the "best" immediate reward

Greedy Example (1)
• 0-1 Knapsack Problem:
  – Maximize: 12x1 + 8x2 + 17x3 + 11x4 + 6x5 + 2x6 + 2x7
  – Subject to: 4x1 + 3x2 + 7x3 + 5x4 + 3x5 + 2x6 + 3x7 ≤ 9
  – With x binary
• Notice that the variables are ordered such that cj/aj ≥ cj+1/aj+1
  – Item j gives more "bang per buck" than item j+1

Greedy Example (2)
• The greedy solution considers each item in turn, starting with the variable that gives the most "bang per buck", and puts it in the knapsack if there is enough room:
  – x1 = 1 (enough space, and best remaining item)
  – x2 = 1 (enough space, and best remaining item)
  – x3 = x4 = x5 = 0 (not enough space for any of them)
  – x6 = 1 (enough space, and best remaining item)
  – x7 = 0 (not enough space)
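The run above can be sketched in Python; a minimal sketch, with items given as (value, weight) pairs already sorted by value/weight ratio as on the previous slide (the function name is illustrative):

```python
# Items from the 0-1 knapsack instance, as (value, weight),
# already sorted by decreasing value/weight ratio.
items = [(12, 4), (8, 3), (17, 7), (11, 5), (6, 3), (2, 2), (2, 3)]
capacity = 9

def greedy_knapsack(items, capacity):
    """Consider each item in ratio order; take it if it still fits."""
    chosen, remaining = [], capacity
    for j, (value, weight) in enumerate(items, start=1):
        if weight <= remaining:
            chosen.append(j)
            remaining -= weight
    return chosen

solution = greedy_knapsack(items, capacity)
print(solution)                                # [1, 2, 6]
print(sum(items[j - 1][0] for j in solution))  # 22
```

The greedy value here is 22, using all 9 units of capacity.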

Formalized Greedy Algorithm (1)
• Let us assume that we can write our combinatorial optimization problem as follows:
  – (The formulation on this slide did not survive the export)
• For example, the 0-1 Knapsack Problem
  – (S will be the set of items not in the knapsack)

Formalized Greedy Algorithm (2)
(The greedy pseudocode on this slide did not survive the export.)
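Since the pseudocode did not survive the export, here is a minimal generic greedy sketch; `feasible` and `score` are illustrative placeholder names for the problem-specific parts:

```python
def greedy(elements, feasible, score):
    """Generic greedy construction: repeatedly take the best-scoring
    remaining element and add it if the solution stays feasible."""
    solution, candidates = [], set(elements)
    while candidates:
        best = max(candidates, key=score)   # "best" immediate reward
        candidates.remove(best)
        if feasible(solution + [best]):
            solution.append(best)
    return solution
```

For the knapsack example, `score` would be the value/weight ratio and `feasible` a check of the capacity constraint.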

Adapting Greedy Algorithms
• Greedy Algorithms have to be adapted to the particular problem structure
  – Just like Local Search Algorithms
• For a given problem there can be many different Greedy Algorithms
  – TSP: "nearest neighbor" (always travel to the closest unvisited city), "pure greedy" (select the shortest edges first)
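The "nearest neighbor" rule for the TSP can be sketched as follows, assuming Euclidean coordinates (the instance in the usage line is made up):

```python
import math

def nearest_neighbor_tour(points, start=0):
    """'Nearest neighbor' greedy for the TSP: from the current city,
    always travel to the closest unvisited city."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(points[last], points[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour  # returning to the start city closes the tour

print(nearest_neighbor_tour([(0, 0), (0, 1), (5, 0), (1, 1)]))  # [0, 1, 3, 2]
```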

Approximation Algorithms
• We remember three classes of algorithms:
  – Exact (returns the optimal solution)
  – Approximation (returns a solution within a certain distance of the optimal value)
  – Heuristic (returns a hopefully good solution, but with no guarantees)
• For Approximation Algorithms, we need some kind of proof that the algorithm returns a value within some bound
• We will look at an example of a Greedy Algorithm that is also an Approximation Algorithm

Approximation: Example (1)
• We consider the Integer Knapsack Problem
  – Same as the 0-1 Knapsack Problem, but we can select any number of copies of each item (that is, an unlimited number of each item is available)

Approximation: Example (2)
• We can assume that
  – aj ≤ b for all items j
  – c1/a1 ≥ cj/aj for all items j (that is, the first item is the one that gives the most "bang per buck")
• We will show that a greedy solution to this problem gives a value that is at least half of the optimal value

Approximation: Example (3)
• The first step of a Greedy Algorithm will create the following solution:
  – x1 = ⌊b/a1⌋ (as many copies of the best item as fit)
• Some of the other variables may become non-zero as well (if a1 leaves a gap, some smaller items may fill it)

Approximation: Example (4)
• Now, the Linear Programming Relaxation of the problem has the following solution:
  – x1 = b/a1
  – xj = 0 for all j = 2, ..., n
• We let the value of the greedy heuristic be zH
• We let the value of the LP relaxation be zLP
• We want to show that zH/z > ½, where z is the optimal value

Approximation: Example (5)
• The proof goes as follows:
  – zH ≥ c1⌊b/a1⌋ (the greedy solution contains at least the first step)
  – z ≤ zLP = c1(b/a1) = c1(⌊b/a1⌋ + f), where, for some 0 ≤ f < 1, f is the fractional part of b/a1
  – Since a1 ≤ b, we have ⌊b/a1⌋ ≥ 1 > f, and therefore zH/z ≥ ⌊b/a1⌋ / (⌊b/a1⌋ + f) > ½
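The bound can be checked numerically; a small sketch, with a made-up instance whose items are already sorted by value/weight ratio:

```python
def integer_knapsack_bounds(values, weights, b):
    """Greedy value z_H (items in ratio order, as many copies as fit)
    and LP-relaxation value z_LP = c1*b/a1 for the integer knapsack.
    Assumes item 1 has the best value/weight ratio and all weights <= b."""
    remaining, z_h = b, 0
    for c, a in zip(values, weights):
        copies = remaining // a   # take as many copies as still fit
        z_h += copies * c
        remaining -= copies * a
    z_lp = values[0] * b / weights[0]  # upper bound on the optimal value
    return z_h, z_lp

z_h, z_lp = integer_knapsack_bounds([12, 8, 2], [5, 3, 1], 7)
print(z_h, z_lp, z_h / z_lp)  # 16 16.8 0.952...
```

Here the greedy value is well above half of the LP bound, as the proof guarantees.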

Approximation: Summary
• It is important to note that the analysis depends on finding
  – A lower bound on the optimal value
  – An upper bound on the optimal value
• The practical importance of such analysis might not be too high
  – The bounds are usually not very good, and alternative heuristics will often work much better

GRASP
• Greedy Randomized Adaptive Search Procedures
• A Metaheuristic that is based on Greedy Algorithms
  – A constructive approach
  – A multi-start approach
  – Includes (optionally) a local search to improve the constructed solutions

Spelling out GRASP
• Greedy: Select the best choice (or among the best choices)
• Randomized: Use some probabilistic selection to prevent the same solution from being constructed every time
• Adaptive: Change the evaluation of choices after making each decision
• Search Procedure: It is a heuristic algorithm for examining the solution space

Two Phases of GRASP
• GRASP is an iterative process, in which each iteration has two phases
• Construction
  – Build a feasible solution (from scratch) in the same way as a Greedy Algorithm, but with some randomization
• Improvement
  – Improve the solution by using some Local Search (Best/First Improvement)
• The best overall solution is retained
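The two-phase loop can be sketched as follows; `construct`, `local_search`, and `objective` are placeholders for the problem-specific parts:

```python
import random

def grasp(construct, local_search, objective, iterations=100, seed=0):
    """Skeleton of GRASP: repeat (randomized greedy construction,
    then local search) and keep the best solution found."""
    rng = random.Random(seed)
    best, best_value = None, float("-inf")
    for _ in range(iterations):
        solution = construct(rng)          # phase 1: greedy + randomization
        solution = local_search(solution)  # phase 2: improvement
        value = objective(solution)
        if value > best_value:             # retain the best overall solution
            best, best_value = solution, value
    return best, best_value
```

Because each iteration starts from scratch, the iterations are independent, which also makes GRASP easy to parallelize.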

The Constructive Phase (1)
(The construction-phase pseudocode on this slide did not survive the export.)

The Constructive Phase (2)
• Each step is both Greedy and Randomized
• First, we build a Restricted Candidate List (RCL)
  – The RCL contains the best elements that we can add to the solution
• Then we randomly select one of the elements in the Restricted Candidate List
• We then need to re-evaluate the remaining elements (their evaluation should change as a result of the recent change in the partial solution), and repeat
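These steps can be sketched as follows; the names are illustrative, and the RCL here uses a value-based rule where α = 0 is purely greedy and α = 1 purely random:

```python
import random

def grasp_construct(elements, feasible, score, alpha, rng):
    """One GRASP construction: score the candidates, build a value-based
    RCL, pick a random RCL member, then re-evaluate and repeat."""
    solution, candidates = [], set(elements)
    while candidates:
        # Re-evaluate all remaining candidates against the partial solution.
        scored = {c: score(c, solution) for c in candidates}
        best, worst = max(scored.values()), min(scored.values())
        threshold = best - alpha * (best - worst)
        rcl = [c for c in candidates if scored[c] >= threshold]
        choice = rng.choice(rcl)           # randomized selection
        candidates.remove(choice)
        if feasible(solution + [choice]):
            solution.append(choice)
    return solution
```

With α = 0 the RCL contains only the best-scoring candidates, so the construction degenerates to the plain greedy algorithm.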

The Restricted Candidate List (1)
• Assume we have evaluated all the possible elements that can be added to the solution
• There are two ways of generating a restricted list
  – Based on rank
  – Based on value
• In each case, we introduce a parameter α that controls how large the RCL will be
  – Include the (1 − α)·100% of elements with the highest rank
  – Include all elements that have a value within α·100% of the best element
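Both rules can be sketched directly (hypothetical helper names; maximization with non-negative scores is assumed for the value-based rule):

```python
def rcl_by_value(candidates, score, alpha):
    """Value-based RCL: keep candidates scoring within alpha*100 %
    of the best score. alpha = 0 keeps only the best (greedy),
    alpha = 1 keeps everything (random)."""
    best = max(score(c) for c in candidates)
    return [c for c in candidates if score(c) >= (1 - alpha) * best]

def rcl_by_rank(candidates, score, alpha):
    """Rank-based RCL: keep the top (1 - alpha)*100 % of candidates.
    Note the opposite direction: alpha = 1 is (almost) purely greedy,
    alpha = 0 purely random."""
    ranked = sorted(candidates, key=score, reverse=True)
    keep = max(1, round((1 - alpha) * len(ranked)))
    return ranked[:keep]
```

For example, `rcl_by_value([1, 2, 3, 4, 5], lambda x: x, 0.2)` keeps the candidates scoring at least 80% of the best, i.e. `[4, 5]`.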

The Restricted Candidate List (2)
• In general:
  – A small RCL leads to a small variance in the values of the constructed solutions
  – A large RCL leads to worse average solution values, but a larger variance
• With the rank-based rule, high values of α (α = 1) result in a purely greedy construction, and low values (α = 0) result in a purely random construction
  – Note that the value-based rule runs in the opposite direction: there α = 0 is purely greedy and α = 1 purely random

The Restricted Candidate List (3)
(The illustration on this slide did not survive the export.)

The Restricted Candidate List (4)
• The role of α is thus critical
• Usually, a good choice is to modify the value of α during the search
  – Randomly
  – Based on results
• The approach where α is adjusted based on previous results is called "Reactive GRASP"
  – The probability distribution over the values of α changes based on the performance of each value of α
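The Reactive GRASP idea of biasing the choice of α towards values that have performed well can be sketched as (names and the proportional-weighting scheme are illustrative):

```python
import random

def reactive_alpha(alphas, avg_quality, rng):
    """Reactive GRASP sketch: draw an alpha with probability proportional
    to the average solution quality that alpha has produced so far.
    `avg_quality` maps each alpha to its running average (>= 0)."""
    total = sum(avg_quality[a] for a in alphas)
    weights = [avg_quality[a] / total for a in alphas]
    return rng.choices(alphas, weights=weights, k=1)[0]
```

After each GRASP iteration, the running average for the α that was used would be updated, so good values of α are sampled more and more often.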

Effect of α on Local Search
(The illustration on this slide did not survive the export.)

GRASP vs. Other Methods (1)
• GRASP is the first purely constructive method that we have seen
• However, GRASP can be compared to Local Search based methods in some respects
• That is, a GRASP can sometimes be interpreted as a Local Search where the entire solution is destroyed (emptied) whenever a local optimum is reached
  – The construction reaches a "local optimum" when no more elements can be added

GRASP vs. Other Methods (2)
• In this sense, we can classify GRASP as
  – Memoryless (not using adaptive memory)
  – Randomized (not systematic)
  – Operating on one solution (not a population)
• Potential improvements of GRASP would involve adding some memory
  – Many improvements have been suggested, but not too many have been implemented/tested
  – There is still room for research in this area

Squeaky Wheel Optimization
• "If it's not broken, don't fix it."
• Often used in constructive metaheuristics
  – Inspect the constructed (complete) solution
  – If it has any flaws, focus on fixing these in the next constructive run
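One common way to "focus on the flaws" is to raise the construction priority of badly placed elements; a minimal sketch with illustrative names (`construct` builds a solution from the priorities, `find_flaws` returns the badly placed elements):

```python
def squeaky_wheel(construct, find_flaws, priorities, rounds=10):
    """Squeaky-wheel sketch: after each construction, raise the priority
    of the flawed elements so the next greedy run handles them earlier."""
    solution = None
    for _ in range(rounds):
        solution = construct(priorities)
        for element in find_flaws(solution):
            priorities[element] += 1  # the squeaky wheel gets the grease
    return solution
```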

Ruin and Recreate
• Also called Very Large Neighborhood Search
• Given a solution, destroy part of it
  – Randomly
  – Geographically
  – Along other dimensions
• Rebuild greedily
  – Can also use GRASP-like ideas
• Can be interspersed with local search (metaheuristics)
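A single ruin-and-recreate step with random destruction can be sketched as follows; `rebuild` stands in for the problem-specific greedy (or GRASP-like) reconstruction:

```python
import random

def ruin_and_recreate(solution, destroy_fraction, rebuild, rng):
    """One ruin-and-recreate step: remove a random fraction of the
    solution's elements, then rebuild via the problem-specific `rebuild`."""
    keep = max(0, len(solution) - int(destroy_fraction * len(solution)))
    partial = rng.sample(solution, keep)  # random "ruin"
    return rebuild(partial)               # greedy "recreate"
```

Destroying "geographically" or along other dimensions would replace the random `rng.sample` with a targeted selection of related elements.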

Summary of Today's Lecture
• Greedy Algorithms – A class of heuristics
• Approximation Algorithms – Do not prove optimality, but return a solution that is guaranteed to be within a certain distance of the optimal value
• GRASP – Greedy Randomized Adaptive Search Procedure
• Other
  – Squeaky Wheel
  – Ruin and Recreate
  – Very Large Neighborhood Search