Multi-Objective Nonlinear Optimization via Parameterization and Inverse Function Approximation
Multi-Objective Nonlinear Optimization via Parameterization and Inverse Function Approximation
M.A.Sc. Thesis Defense, Industrial Systems Engineering, University of Regina
Mariano Arriaga Marín
May 23, 2003
Thesis Contributions
- Novel technique for attaining the global solution of nonlinear optimization problems
- Novel technique for multi-objective nonlinear optimization (MONLO)
- Artificial Neural Network (ANN) implementation
- Methods tested on:
  – highly nonlinear optimization problems,
  – MONLO problems, and
  – a practical scheduling problem
Current Global Optimization Techniques
- Common techniques:
  – Multistart
  – Clustering methods
  – Genetic algorithms
  – Simulated annealing
  – Tabu search
Multiple Objective Optimization
- Current MONLO procedure: divide the problem into two parts
  1. Reduce the multi-objective problem to a single-objective problem
  2. Solve it with a nonlinear optimization technique
Multi-Objective to Single Objective
- Common techniques:
  – Weighting method
  – ε-constraint method
  – Interactive surrogate worth trade-off
  – Lexicographic ordering
  – Goal programming
- Problems:
  – These methods introduce extra parameters whose values can be difficult to determine.
  – Determining those values becomes harder as the number of objective functions increases.
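As an illustration of the weighting method and its parameter-sensitivity problem, a minimal sketch (the two objectives and the weight choices are hypothetical, chosen only for this example):

```python
import numpy as np

# Two hypothetical conflicting objectives, used only for illustration.
def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

def weighted_sum(x, w1, w2):
    # The weighting method collapses both objectives into one scalar;
    # picking w1 and w2 is exactly the "extra parameter" problem noted above.
    return w1 * f1(x) + w2 * f2(x)

# Crude grid minimization of the scalarized objective for two weight choices:
# different weights land on different points of the trade-off curve.
xs = np.linspace(-2.0, 2.0, 4001)
x_a = xs[np.argmin(weighted_sum(xs, 0.9, 0.1))]  # favours f1
x_b = xs[np.argmin(weighted_sum(xs, 0.1, 0.9))]  # favours f2
```

Each weight pair yields a single compromise solution, so exploring the whole trade-off curve requires re-solving for many weight settings.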
Proposed Optimization Algorithm
- Min F(x) = {f1(x), …, fm(x)}, where x ∈ ℝⁿ and fi(x) ∈ ℝ, i = 1, …, m
- Optimization of:
  – nonlinear functions
  – multiple objectives
- Avoids local minima and inflection points
General Idea
- Set an initial value x0 and calculate f(x0)
- Decrease the value of the function via a parameter
- Calculate the corresponding inverse f⁻¹
- Note: the algorithm does not necessarily follow the function
General Idea (cont.)
- When the algorithm reaches a local minimum:
  – it looks for a lower value of the function
  – if such a value exists, the algorithm "jumps" to it and continues
- This process continues until the algorithm reaches the global minimum.
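A loose one-dimensional sketch of this idea, assuming a made-up multimodal test function and using a dense grid as a stand-in for the inverse computation (an illustration of the level-lowering and "jump" behaviour, not the thesis algorithm itself):

```python
import numpy as np

def f(x):
    return 0.1 * x**2 + np.sin(3.0 * x)  # multimodal test function (assumed)

xs = np.linspace(-6.0, 6.0, 120001)  # dense grid acting as a crude inverse of f
x = 5.0                              # initial point
level = f(x)
step = 0.02                          # parameter that lowers the function level
while True:
    target = level - step
    # Preimages of the target level: grid points where f is close to it.
    candidates = xs[np.abs(f(xs) - target) < 1e-3]
    if candidates.size == 0:
        break                        # no lower value exists anywhere: stop
    # Prefer a nearby preimage; once the current valley bottoms out, the
    # nearest preimage lies in another valley and the iterate "jumps" there.
    x = candidates[np.argmin(np.abs(candidates - x))]
    level = target
# x now sits near the global minimizer of f (around x = -0.51 for this f)
```

Starting at x = 5, the iterate escapes the shallower local minima because a lower function level always has a preimage somewhere else on the curve.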
Inverse Function Approximation
- Applies to continuous functions
- Full theoretical justification¹

¹ Mayorga, R. V. and Carrera, J. (2002), "A Radial Basis Function Network Approach for the Computation of Inverse Time Variant Functions", IASTED Int. Conf. on Artificial Intelligence & Soft Computing, Banff, Canada. (To appear in the International Journal of Neural Systems.)
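A minimal sketch of the flavour of inverse-function approximation with radial basis functions, using a plain Gaussian RBF least-squares fit (the target function, centers, and width are illustrative assumptions, not the cited network):

```python
import numpy as np

def fit_rbf(inputs, targets, centers, width):
    # Design matrix of Gaussian basis functions, solved by least squares.
    phi = np.exp(-((inputs[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    weights, *_ = np.linalg.lstsq(phi, targets, rcond=None)
    return weights

def eval_rbf(weights, centers, width, inputs):
    phi = np.exp(-((inputs[:, None] - centers[None, :]) ** 2) / (2 * width**2))
    return phi @ weights

# Learn the inverse of f(x) = x**3 on a branch where it is one-to-one by
# training on (f(x), x) pairs, so the network represents x = f^{-1}(y).
x_train = np.linspace(0.5, 2.0, 200)
y_train = x_train**3
centers = np.linspace(0.0, 8.0, 25)  # centers spread over the range of f
w = fit_rbf(y_train, x_train, centers, width=0.5)

x_hat = eval_rbf(w, centers, 0.5, np.array([1.0]))[0]  # close to f^{-1}(1) = 1
```

Swapping inputs and targets is what turns a standard function approximator into an inverse-function approximator on a one-to-one branch.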
Global Optimization Example
- Consider the function: (equation and plot shown as a figure on the slide)
Initial Model – Part 1
- Initial point on the "front" side of the curve (1)
- Gets out of two local minima (2 & 3)
- Converges to the global minimum (4)
- v = 0 and dZ⁻¹ = 0
Initial Model – Part 2
- Initial point on the "back" side of the curve
- Gets stuck at an inflection point
- Does not reach the global minimum
- v = 0 and dZ⁻¹ = 0
Model with Vector v and dZ⁻¹
- Initial point on the "back" side of the curve
- Goes around the curve (null-space vector)
- Converges to the global minimum
Artificial Neural Network Model
- Initial point on the "back" side of the curve
- J(x) and v are calculated with ANNs
- Follows almost the same trajectory as the previous model
- Converges to the global minimum
The Griewank Function – Example
- Consider the Griewank function: (equation and plot shown as a figure on the slide)
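The Griewank function has the standard form f(x) = 1 + Σᵢ xᵢ²/4000 − Πᵢ cos(xᵢ/√i): a quadratic bowl overlaid with an oscillatory product term that creates many regularly spaced local minima, with the global minimum f(0) = 0. A direct implementation:

```python
import numpy as np

def griewank(x):
    # Standard Griewank benchmark: quadratic bowl plus an oscillatory product
    # term that creates a large number of regularly spaced local minima.
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

value_at_origin = griewank([0.0, 0.0])  # global minimum: 0.0
```

The abundance of local minima is what makes it a common stress test for global optimization methods.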
Griewank Function Optimization (comparison plots on the slide)
- Initial model (dZ⁻¹ = 0 and v = 0)
- Model with dZ⁻¹ and v
- Model using ANNs
Multi-Objective Nonlinear Example
- I-Beam design problem²: determine the best trade-off dimensions
- Minimize conflicting objectives:
  – cross-sectional area
  – static deflection

² Osyczka, A. (1984), Multicriterion Optimization in Engineering with FORTRAN Programs. Ellis Horwood Limited.
What if both objectives are solved separately?
- Minimizing cross-sectional area alone increases static deflection
- Minimizing static deflection alone increases cross-sectional area
I-Beam Results (plot legend)
- Feasible solutions
- Strong Pareto solutions
- Weak Pareto solutions
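The strong Pareto solutions can be extracted from a pool of candidate designs with a simple dominance filter; a sketch for two minimized objectives (the (area, deflection) pairs below are made-up values, not the thesis results):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (both objectives are minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cross-sectional area, static deflection) candidates.
designs = [(300.0, 0.06), (250.0, 0.09), (400.0, 0.05), (260.0, 0.07), (320.0, 0.08)]
front = pareto_front(designs)  # (320.0, 0.08) is dominated by (300.0, 0.06)
```

Every point kept by the filter represents a trade-off no other candidate improves upon in both objectives at once.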
I-Beam Results (cont.)
- Results:
  – The proposed approach achieves results very similar to state-of-the-art genetic algorithms (GAs)
  – It gives a diverse set of strong Pareto solutions
  – The result of the ANN implementation varies by 0.88%
- Computational time³:
  – Compared to a standard floating-point GA⁴, computational time decreases by 83% (from 15.2 s to 2.56 s)

³ Experiments performed on a Sun Ultra 4 digital computer. GA: 100 individuals and 50 generations.
⁴ Passino, K. (1998), Genetic Algorithms Code, September 21, http://eewww.eng.ohio-state.edu/~passino/ICbook/ic_code.html (accessed February 2003).
Multi-Objective Optimization: Just-In-Time Scheduling Problem
- Consider 5 products manufactured on 2 production lines
- Minimize:
  – cost
  – line unbalance
  – plant unbalance
- Variables:
  – production rate
  – level loading
  – production time
- Subject to production constraints
Scheduling Problem – Optimization Results
- Minimize: cost
Multi-Objective Optimization Example
- Minimize:
  – cost
  – line unbalance (production-rate variance per line)
Multi-Objective Optimization Example (cont.)
- Minimize:
  – cost
  – line unbalance (production-rate variance per line)
  – plant unbalance (distribute production across both production lines)
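The two unbalance objectives can be given simple concrete forms, assuming line unbalance is the production-rate variance within each line and plant unbalance is the squared gap between the two lines' totals (both forms and the rate values are assumptions for illustration):

```python
import numpy as np

def line_unbalance(rates_per_line):
    # Sum of the per-line variances of the production rates.
    return sum(float(np.var(rates)) for rates in rates_per_line)

def plant_unbalance(rates_per_line):
    # Squared difference between the two lines' total production.
    totals = [float(np.sum(rates)) for rates in rates_per_line]
    return (totals[0] - totals[1]) ** 2

# 5 products split across 2 lines (hypothetical production rates).
lines = [np.array([10.0, 12.0, 11.0]), np.array([16.0, 17.0])]
lu = line_unbalance(lines)   # 2/3 + 1/4
pu = plant_unbalance(lines)  # both lines total 33, so 0.0
```

Expressing each objective as a smooth function of the decision variables is what lets a single multi-objective optimizer treat cost and both unbalance measures together.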
Conclusion
- Novel global optimization method
- It avoids local minima and inflection points
- The algorithm leads to convexities via a null-space vector v
- It can also be used for constrained nonlinear optimization
Conclusion (cont.)
- Novel deterministic MONLO method
- Starts from a single point instead of a population
- Computational time:
  – For the I-Beam example, computational time is 83% less than genetic algorithms
  – The ANN implementation reduces the number of calculations needed to compute the inverse function
  – For the scheduling example, the ANN implementation reduces computational time by 70%
Publications
- 3rd ANIROB/IEEE-RAS International Symposium of Robotics and Automation, Toluca, Mexico, Sept 1–4, 2002
- Three journal papers submitted:
  – Journal of Engineering Applications of Artificial Intelligence
  – International Journal of Neural Systems
  – Journal of Intelligent Manufacturing
- One paper to be published as a chapter in a book on Intelligent Systems (editor: Dr. Alexander M. Meystel)