Repetition: Optimal operation
• A typical dynamic optimization problem
• Implementation: “Open-loop” solutions not robust to disturbances or model errors
• Want to introduce feedback
Repetition: Implementation of optimal operation
• Paradigm 1: On-line optimizing control where measurements are used to update model and states
• Paradigm 2: “Self-optimizing” control scheme found by exploiting properties of the solution in control structure design
– Usually feedback solutions
– Use off-line analysis/optimization to find “properties of the solution”
– “Self-optimizing” = “inherent optimal operation”
Repetition: Implementation, Paradigm 1
• Paradigm 1: On-line optimizing control
• Measurements are primarily used to update the model
• The optimization problem is re-solved on-line to compute new inputs
• Example: Conventional MPC
• This is the “obvious” approach (for someone who does not know control)
Example, Paradigm 1: Marathon runner
• Even getting a reasonable model requires more than 10 Ph.D.’s … and the model has to be fitted to each individual …
• Clearly impractical!
Repetition: Implementation, Paradigm 2
• Paradigm 2: Precomputed solutions based on off-line optimization
• Find properties of the solution suited for simple and robust on-line implementation
• Examples:
– Marathon runner
– Hierarchical decomposition
– Optimal control
– Explicit MPC
Repetition example, Paradigm 2: Marathon runner
• Select one measurement: c = heart rate
• Simple and robust implementation
• Disturbances are indirectly handled by keeping a constant heart rate
• May have infrequent adjustment of setpoint (heart rate)
Repetition example, Paradigm 2: Optimal operation of chemical plant
• Hierarchical decomposition based on time scale separation
• Self-optimizing control: Acceptable operation (= acceptable loss) achieved using constant setpoints (cs) for the controlled variables c
• No or infrequent on-line optimization
• Controlled variables c are found based on off-line analysis
Example, Paradigm 2: Feedback implementation of optimal control (LQ)
• Optimal solution to an infinite-time dynamic optimization problem
• Originally formulated as an “open-loop” optimization problem (no feedback)
• “By chance” the optimal u can be generated by simple state feedback, u = KLQ x
• KLQ is obtained off-line by solving Riccati equations
• Explicit MPC: Extension using a different KLQ in each constraint region
• Summary: Two MPC paradigms
1. Conventional MPC: On-line optimization
2. Explicit MPC: Off-line calculation of KLQ for each region (must determine the region on-line)
Example, Paradigm 2: Explicit MPC (model predictive control)
• Note: Many regions because of future constraints
A. Bemporad, M. Morari, V. Dua, E. N. Pistikopoulos, “The Explicit Linear Quadratic Regulator for Constrained Systems”, Automatica, vol. 38, no. 1, pp. 3-20 (2002).
Issues, Paradigm 2: Precomputed on-line solutions based on off-line optimization
Issues (expected research results for a specific application):
1. Find the structure of the optimal solution for specific problems
• Typically, identify regions where different sets of constraints are active
2. Find optimal values (or trajectories) for unconstrained variables
3. Find analytical or precomputed solutions suitable for on-line implementation
4. Find good “self-optimizing” variables c to control in each region
• Find good single variables to control
• If not possible: find good variable combinations to control
5. Determine a switching policy between the different regions
Unconstrained variables: What should we control?
• Intuition: “Dominant variables” (Shinnar)
• Is there any systematic procedure?
1. The “easy” Active Constraints (c ≥ cconstraint)
[Figure: J vs c, with optimum Jopt at copt = cconstraint]
“Obvious”: Want to keep (control) c at copt = cconstraint
1. If c = u = manipulated input (MV):
• Implementation trivial: keep u at its constraint (umin or umax)
2. If c = y = output variable (CV):
• Use u to control y at the constraint
• BUT: Need to introduce back-off (safety margin)
Back-off for active output constraints (c ≥ cconstraint)
[Figure: J vs c; backing off from copt gives a loss]
a) If the constraint can be violated dynamically (only the average matters):
• Required back-off = “bias” (steady-state measurement error for c)
b) If the constraint cannot be violated dynamically (“hard constraint”):
• Required back-off = “bias” + maximum dynamic control error
Rule for control of hard output constraints: “Squeeze and shift”!
Reduce the variance (“squeeze”) and “shift” the setpoint cs to reduce the back-off.
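The back-off rules above can be sketched numerically. A minimal sketch, with hypothetical numbers (bias, dynamic error, and constraint value are illustrative, not from the slides):

```python
def required_backoff(bias, max_dynamic_error, hard=True):
    """Back-off needed for an active output constraint.

    bias: steady-state measurement error ("bias") for c
    max_dynamic_error: maximum dynamic control error
    hard: True if the constraint can never be violated dynamically
    """
    return bias + max_dynamic_error if hard else bias

# Hypothetical numbers: bias 0.5, max dynamic error 2.0,
# lower-bound constraint c >= 10
c_constraint = 10.0
backoff_hard = required_backoff(0.5, 2.0, hard=True)   # 2.5
backoff_soft = required_backoff(0.5, 2.0, hard=False)  # 0.5

# The setpoint is "shifted" away from the constraint by the back-off;
# "squeezing" (reducing max_dynamic_error) lets cs move closer to it.
cs_hard = c_constraint + backoff_hard  # 12.5
cs_soft = c_constraint + backoff_soft  # 10.5
print(cs_hard, cs_soft)
```

The example makes the “squeeze and shift” rule concrete: halving the dynamic error would let the setpoint shift 1.0 closer to the constraint.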
Hard Constraints: «SQUEEZE AND SHIFT»
[Figure © Richalet: squeeze the output variance, then shift the setpoint toward the constraint]
2. The difficult Unconstrained Variables
[Figure: flat J vs c with optimum Jopt at copt]
• c: selected controlled variable (unconstrained)
• How to find them (the CVs)?
– Intuition: Dominant variables (Shinnar)
– Rijnsdorp (1990): “… requires good process insight and control structure know-how.”
– Any systematic procedure?
Effect of Setpoint Error
[Figure: J vs c for disturbances d1 and d2; keeping cs = copt(d1) when d = d2 gives a loss]
• The optimum moves because of disturbances d: copt(d)
• A constant-setpoint policy results in loss
Effect of Implementation Error
[Figure: J vs c; operating at cs + n instead of cs gives loss J(n) – Jopt]
• The CV c deviates from cs due to the implementation error n (due to measurement and control error)
• Lack of perfect control results in loss
• Want a FLAT optimum, that is, want Jcc small
• Linearize c = G u, where u are the “base” unconstrained degrees of freedom
• Then Jcc = G-T Juu G-1. So a flat optimum corresponds to small Jcc, which corresponds to a small inverse G-1, which corresponds to a large gain |G|; that is, we want to control sensitive variables c
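The gain argument above can be illustrated with a scalar sketch (my own illustration, not from the slides): for J = (u − d)² with c = G·u, an implementation error n shifts u by n/G, so the loss is (n/G)² and shrinks quadratically as |G| grows.

```python
def loss_from_implementation_error(G, n, d=0.0):
    """Scalar illustration: J = (u - d)^2 with CV c = G*u.

    Controlling c at its optimal value G*d, but with implementation
    error n, gives u = d + n/G, so the loss is (n/G)^2.
    """
    u = d + n / G
    return (u - d) ** 2

# Same implementation error n = 1, two candidate CVs:
print(loss_from_implementation_error(G=0.1, n=1.0))   # ~100  (small gain: bad)
print(loss_from_implementation_error(G=10.0, n=1.0))  # ~0.01 (large gain: good)
```

A 100x larger gain reduces the implementation-error loss by a factor 10 000, which is exactly the Jcc = G-T Juu G-1 scaling in the bullet above.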
Guidelines for CV Selection
• Rule 1: The optimal value of the CV c should be insensitive to disturbances d (minimizes the effect of setpoint error)
• Rule 2: c should be easy to measure and control (small implementation error n)
• Rule 3: c should be sensitive to changes in u (large gain |G| from u to c), or equivalently the optimum Jopt should be flat with respect to c (minimizes the effect of implementation error n)
• Rule 4: For the case of multiple CVs, the selected CVs should not be correlated.
Reference: S. Skogestad, “Plantwide control: The search for the self-optimizing control structure”, Journal of Process Control, 10, 487-507 (2000).
Further Details on Rule 1
• Rule 1: The optimal value of c should be insensitive to disturbances d (minimizes the effect of setpoint error)
[Figure: a good candidate has copt(d1) = copt(d2), so cs = copt(d*) stays optimal as d changes]
• Note that c itself must be affected by d
Further Details on Rule 3
• Rule 3: The objective function J should be flat with respect to c → WANT LARGE GAIN |G| from u to c
• Good: flat optimum, implementation easy
• Bad: sharp optimum, sensitive to implementation error
Mathematical Formulation
• Jopt(d) – truly optimal operation
• Jc(d, n) – operation with c = cs + n
• The loss is given as L(d, n) = Jc(d, n) – Jopt(d)
• CVs can be selected by minimizing the loss over allowable d and n
– Allowable d from process insight: dmin ≤ d ≤ dmax
– Allowable n based on sensor accuracy: nmin ≤ n ≤ nmax
• Usually, the setpoint cs is selected as copt(dnominal)
Worst-Case and Average Loss
• Worst-case loss over the allowable sets of d and n
• Average loss over the allowable sets of d and n (with a uniform distribution, i.e. equal probability of occurrence)
• CVs can be selected by comparing the worst-case or average loss for the different alternatives
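The formulas on this slide are equation images lost in extraction; the standard definitions, using the loss L(d, n) and allowable sets D = {d : dmin ≤ d ≤ dmax} and N = {n : nmin ≤ n ≤ nmax} from the previous slide, are:

```latex
L_{\mathrm{wc}} = \max_{d \in D,\; n \in N} L(d, n),
\qquad
L_{\mathrm{avg}} = \frac{1}{|D|\,|N|} \int_{D} \int_{N} L(d, n)\, \mathrm{d}n\, \mathrm{d}d ,
```

where |D| and |N| denote the sizes of the allowable sets (so the average is taken with a uniform distribution over them).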
Practical Issue: Loss Estimation
• Exact computation of the worst-case or average loss is difficult (obviously!)
[Figure: box of allowable disturbances dmin ≤ d ≤ dmax and errors nmin ≤ n ≤ nmax]
• CVs can be selected by comparing losses at the edges, or at a few randomly selected points, of the allowable sets
Practical Issue: Setpoint Value
• Usually, the setpoint cs is selected as copt(d*), but this may lead to infeasible operation for some disturbances
• In general, the setpoint cs can be chosen freely, e.g. by minimizing the back-off from copt(d*), while ensuring feasibility
Reference: M. S. Govatsmark and S. Skogestad, “Selection of controlled variables and robust setpoints”, Industrial & Engineering Chemistry Research, 44 (7), 2207-2217 (2005).
Toy Example
Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).
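The problem statement on this slide is an image lost in extraction; reconstructed from the cited paper and the worked numbers on the following slides, it is:

```latex
J(u, d) = (u - d)^2, \qquad u_{\mathrm{opt}}(d) = d, \quad J_{\mathrm{opt}}(d) = 0,
```
with candidate measurements
```latex
y_1 = 0.1\,(u - d), \quad
y_2 = 20\,u, \quad
y_3 = 10\,u - 5\,d, \quad
y_4 = u,
```
and allowable disturbance and implementation errors |d| ≤ 1, |n_i| ≤ 1.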
Toy Example: Intuitive Rule 1 (Insensitive Optimal Value)
• Since uopt(d) = d, the setpoint error for y3 is 10u – 5d = 10d – 5d = 5d (loss = 0 if the setpoint for y3 is updated by 5d)

Candidate | Setpoint error
y1 | 0 (best)
y2 | 20d (worst)
y3 | 5d
y4 | d
Toy Example: Intuitive Rule 3 (Large |G|)
• The effect of the implementation error on the loss depends on d. Here, we consider d = 0 (the nominal point).
• For y3: n3 = 10u, or u = 0.1 n3. Thus, J = (u – d)² = (0.1 n3)²

Candidate | Gain G | Effect of implementation error
y1 | 0.1 | (10 n1)² (worst)
y2 | 20 | (0.05 n2)² (best)
y3 | 10 | (0.1 n3)²
y4 | 1 | (n4)²
Toy Example: Loss
• For y3: n3 = 10u – 5d, or u = 0.1 n3 + 0.5 d
• Loss = J – 0 = (u – d)² = (0.1 n3 – 0.5 d)²
• Worst-case loss: (0.1 + 0.5)² (1)² = 0.36

Candidate | Loss | Worst-case loss
y1 | (10 n1)² | 100 (worst)
y2 | (0.05 n2 – d)² | 1.1025
y3 | (0.1 n3 – 0.5 d)² | 0.36 (best)
y4 | (n4 – d)² | 4

Although y1 “seemed” promising without implementation error, y3 has the better self-optimizing properties.
NEED AN IMPROVED SIMPLE RULE!! (Scaled gain |Gs|)
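The table above can be reproduced by brute force, as the earlier "loss estimation" slide suggests. A sketch, assuming the toy-problem definitions reconstructed from the earlier slides (J = (u − d)², y1 = 0.1(u − d), y2 = 20u, y3 = 10u − 5d, y4 = u, with |d| ≤ 1 and |n| ≤ 1, setpoints at the nominal optimum d* = 0):

```python
# Controlling yi = 0 + noise n fixes the input u:
#   y1: 0.1(u - d) = n  ->  u = d + 10n
#   y2: 20u        = n  ->  u = 0.05n
#   y3: 10u - 5d   = n  ->  u = 0.1n + 0.5d
#   y4: u          = n  ->  u = n
u_from = {
    "y1": lambda d, n: d + 10 * n,
    "y2": lambda d, n: 0.05 * n,
    "y3": lambda d, n: 0.1 * n + 0.5 * d,
    "y4": lambda d, n: n,
}

grid = [i / 10 for i in range(-10, 11)]  # d, n sampled on [-1, 1]
worst = {
    name: max((u(d, n) - d) ** 2 for d in grid for n in grid)
    for name, u in u_from.items()
}
print(worst)  # y1: 100, y2: 1.1025, y3: 0.36, y4: 4
```

The grid search confirms y3 as the best candidate and y1 as the worst once implementation error is included.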
Summary
• Most of the DOFs are used for satisfying constraints (active-constraint control)
• For the remaining unconstrained DOFs, CVs can be selected by minimizing the loss
• Loss results from setpoint error (the optimum changes with disturbances) and implementation error (lack of perfect control). Both effects need to be considered simultaneously
• The systematic procedure provides insight about existing practice and can lead to a non-intuitive set of CVs
Lecture 3: Local Analysis
Loss with a given control policy c
• Worst-case loss:
• Average loss:
• Using the general formulation, evaluation of the loss for different candidate measurements is time-consuming, especially when many alternatives are available
• Local methods are used for quick pre-screening of alternatives
Local Methods
• Local (second-order accurate) methods:
– Approximate: maximum gain (minimum singular value) rule
– Exact local method
• Local methods are used for quick pre-screening of alternatives
• Viability of the promising alternatives can then be checked using the general formulation (important!)
Problem Formulation
• Main assumption: the set of active constraints does not change with the disturbances d (inactive constraints remain inactive; active constraints remain active)
Problem Formulation
• Eliminate the internal variables (states) to obtain the remaining unconstrained problem
• In general, x = f(u, d) may be difficult to find analytically. The local analysis can still be carried out with minor modifications.
Local Loss
• Consider a Taylor series expansion of J(u, d) around the moving optimal point uopt(d), when u differs from uopt(d)
• Ignore higher-order terms
Local Loss
• Let G be the (unscaled) steady-state gain matrix: c – copt(d) = G (u – uopt(d))
• The loss can then be expressed in terms of c – copt(d)
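The loss expressions on these two slides are equation images lost in extraction; the standard second-order forms (consistent with Jcc = G-T Juu G-1 on the earlier "implementation error" slide, and with Halvorsen et al., 2003) are:

```latex
L \approx \tfrac{1}{2}\,\big(u - u_{\mathrm{opt}}(d)\big)^{\!T} J_{uu}\, \big(u - u_{\mathrm{opt}}(d)\big),
```
and, substituting \(c - c_{\mathrm{opt}}(d) = G\,(u - u_{\mathrm{opt}}(d))\),
```latex
L \approx \tfrac{1}{2}\,\big(c - c_{\mathrm{opt}}(d)\big)^{\!T} G^{-T} J_{uu}\, G^{-1} \big(c - c_{\mathrm{opt}}(d)\big)
= \tfrac{1}{2}\,\big\| J_{uu}^{1/2}\, G^{-1} \big(c - c_{\mathrm{opt}}(d)\big) \big\|_2^2 .
```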
Maximum Gain Rule (exact version)
• To minimize the loss, we want to maximize the minimum singular value of the scaled gain matrix: select CVs that maximize σmin(Gs)
Maximum gain rule (simplified)
• Let the steady-state gain matrix G be scaled such that:
– the optimal span for every CV is less than 1, or
– a unit deviation in each input has the same effect on J, i.e. Juu is a scalar times a unitary matrix U (NOT ALWAYS POSSIBLE …)
• CVs can then be selected by maximizing the scaled gain
Reference: S. Skogestad and I. Postlethwaite, “Multivariable Feedback Control: Analysis and Design”, 1st edition, John Wiley & Sons, Chichester, UK (1996).
Maximum Gain Rule in words
• Select CVs that maximize σmin(Gs)
• In words: select controlled variables c for which the gain G (the “controllable range”) is large compared to the span of c (the sum of the optimal variation and the control error)
Why is Large Gain Good?
[Figure: J vs u; with a large gain G, the implementation error n maps to a small variation of u]
• With a large gain G, even a large implementation error n requires only a small deviation of u from uopt(d), leading to a small loss
Procedure
1. From a (nonlinear) model, compute the optimal inputs and outputs for different disturbances
2. For each candidate output, compute the variation in its optimal value, vi = (yi,opt,max – yi,opt,min)/2
3. Scale the candidate outputs such that, for each output, the sum of the magnitudes of vi and the implementation error n is the same (e.g. 1); this makes S1 = I
4. If possible, scale the inputs such that a unit deviation from the optimal value has the same effect on J (this makes Juu = I); otherwise keep Juu
5. Select as controlled variables those outputs which have a large scaled gain
Toy Example Revisited
Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).
Toy Example: Maximum Gain Rule

Candidate | G | Δyopt(d) | span(y) | Gs = G/(span × √Juu)
y1 | 0.1 | 0 | 0 + 1 = 1 | 0.1/(1 × √2) = 0.07
y2 | 20 | 20d | 20 + 1 = 21 | 20/(21 × √2) = 0.67
y3 | 10 | 5d | 5 + 1 = 6 | 10/(6 × √2) = 1.18
y4 | 1 | d | 1 + 1 = 2 | 1/(2 × √2) = 0.35

y3 is the best candidate for self-optimizing control (same result as the nonlinear analysis)
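The scaled-gain column can be computed directly. A sketch using the table's own numbers (Juu = 2 since J = (u − d)²; span = |optimal variation| + |implementation error|, both bounded by 1 here):

```python
import math

Juu = 2.0
n = 1.0  # implementation error magnitude
candidates = {        # name: (gain G, optimal variation |dy_opt| for |d| <= 1)
    "y1": (0.1, 0.0),
    "y2": (20.0, 20.0),
    "y3": (10.0, 5.0),
    "y4": (1.0, 1.0),
}

# Scaled gain Gs = |G| / (span * sqrt(Juu)); pick the candidate with the
# largest Gs (maximum gain rule).
scaled_gain = {
    name: abs(G) / ((dy_opt + n) * math.sqrt(Juu))
    for name, (G, dy_opt) in candidates.items()
}
for name, Gs in sorted(scaled_gain.items(), key=lambda kv: -kv[1]):
    print(f"{name}: Gs = {Gs:.2f}")
# y3: 1.18, y2: 0.67, y4: 0.35, y1: 0.07
```

The ranking (y3 > y2 > y4 > y1) matches both the table and the brute-force losses on the earlier slide.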
Unconstrained degrees of freedom: How to find “self-optimizing” variable combinations in a systematic manner?
• The ideal “self-optimizing” variable is the gradient Ju (first-order optimality condition; ref: Bonvin and coworkers); optimal setpoint = 0
• BUT: The gradient cannot be measured in practice
• Possible approach: Estimate the gradient Ju based on measurements y
• Alternative approach used here: Find the optimal linear measurement combination which, when kept constant (± n), minimizes the effect of d on the loss: Loss = J(u, d) – J(uopt, d), where u is the input that gives c = constant ± n
• Candidate measurements y: include also the inputs u
Unconstrained degrees of freedom: B. Optimal measurement combination H
Unconstrained degrees of freedom: B. Optimal measurement combination
B1. Nullspace method for n = 0 (Alstad and Skogestad, 2007)
• Basis: Want the optimal value of c to be independent of disturbances
• Find the optimal solution as a function of d: uopt(d), yopt(d)
• Linearize this relationship: Δyopt = F Δd
• Want HF = 0. To achieve this for all values of d, find an H that satisfies HF = 0 (requires enough measurements: ny ≥ nu + nd)
• Amazingly simple! [Cartoon: Sigurd is told how easy it is to find H]
• Optimal when we disregard the implementation error n
V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
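A minimal nullspace-method sketch for the toy problem (J = (u − d)², definitions reconstructed from the earlier slides), using the two measurements y2 = 20u and y3 = 10u − 5d. Their optimal values are y2,opt = 20d and y3,opt = 5d, so F = [20, 5]ᵀ, and with ny = nu + nd = 2 any nonzero row H with HF = 0 works:

```python
F = (20.0, 5.0)          # optimal sensitivities of (y2, y3) to d
H = (F[1], -F[0])        # (5, -20): by construction H @ F = 0

def c(u, d):
    """Combined CV c = H [y2, y3]^T = 5*y2 - 20*y3 = -100*(u - d)."""
    y2, y3 = 20 * u, 10 * u - 5 * d
    return H[0] * y2 + H[1] * y3

# Keeping c = 0 by feedback recovers u = d = u_opt(d), i.e. zero loss
# for every disturbance (when implementation error n is disregarded):
for d in (-1.0, 0.3, 1.0):
    u = d                          # the input that drives c to its setpoint 0
    assert abs(c(u, d)) < 1e-12
    print(d, (u - d) ** 2)         # loss = 0
```

Since c ∝ −(u − d) = −(u − uopt), holding c constant at 0 is literally holding the gradient direction at its optimum, which is why the disturbance loss vanishes.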
Unconstrained degrees of freedom: B. Optimal measurement combination
B2. Combined disturbances and implementation errors (“exact local method”)
• Loss L = J(u, d) – Jopt(d). Keep c = Hy constant, where y = Gy u + Gyd d + ny
• Theorem 1. Worst-case loss for a given H (Halvorsen et al., 2003): applies to any H (selection or combination)
• Optimization problem for the optimal combination: minimize the worst-case loss over H
I. J. Halvorsen, S. Skogestad, J. C. Morud and V. Alstad, “Optimal selection of controlled variables”, Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
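The worst-case loss of Theorem 1 can be sketched for single measurements in the scalar-u toy problem. An illustrative sketch (my assumptions: Juu = 2, scaling weights Wd = Wn = 1, loss row M = √Juu / Gi · [Fi, 1], worst case taken over the combined 2-norm ball ||(d′, n′)||₂ ≤ 1 as in Halvorsen et al., 2003):

```python
import math

Juu = 2.0
candidates = {"y1": (0.1, 0.0), "y2": (20.0, 20.0),
              "y3": (10.0, 5.0), "y4": (1.0, 1.0)}   # name: (gain Gi, sensitivity Fi)

def worst_case_loss(G, F):
    """0.5 * sigma_max(M)^2; for a scalar u, M is a row vector, so
    sigma_max(M) is just its Euclidean norm."""
    M = (math.sqrt(Juu) / G * F, math.sqrt(Juu) / G * 1.0)
    return 0.5 * sum(m * m for m in M)

for name, (G, F) in candidates.items():
    print(name, worst_case_loss(G, F))
# y1: 100.0, y2: 1.0025, y3: 0.26, y4: 2.0
```

The ranking again singles out y3. Note these values are slightly below the box-bound losses tabulated earlier, because here d and n are bounded jointly in 2-norm rather than independently.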
Unconstrained degrees of freedom: B. Optimal measurement combination
B2. Exact local method for combined disturbances and implementation errors
• Theorem 2. Explicit formula for the optimal H (Alstad et al., 2008), where F = dyopt/dd is the optimal sensitivity matrix
• Theorem 3 (Kariwala et al., 2008): optimal H for average loss minimization
V. Alstad, S. Skogestad and E. S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 18, in press (2008).
V. Kariwala, Y. Cao and S. Janardhanan, “Local self-optimizing control with average loss minimization”, Ind. Eng. Chem. Res., in press (2008).
Toy Example
Toy Example: Measurement combinations
Toy Example
B1. Nullspace method (no noise)
• Loss caused by measurement error only
• Recall the ranking of the single measurements: y3 > y2 > y4 > y1
B2. Exact local method (with noise)
B2. Exact local method, 2 measurements
• Combined loss for disturbances and measurement errors
B2. Exact local method, all 4 measurements
• Stop here on 02 Sep. 2008
Example: CO2 refrigeration cycle
• Unconstrained DOF (u)
• Control what? c = ? (e.g. the high pressure ph)
CO2 cycle: Maximum gain rule
CO2 refrigeration cycle
Step 1. One (remaining) degree of freedom (u = z)
Step 2. Objective function: J = Ws (compressor work)
Step 3. Optimize operation for disturbances (d1 = TC, d2 = TH, d3 = UA)
• Optimum always unconstrained
Step 4. Implementation of optimal operation
• No good single measurements (all give large losses): ph, Th, z, …
• Nullspace method: need to combine nu + nd = 1 + 3 = 4 measurements to have zero disturbance loss
• Simpler: try combining two measurements. Exact local method:
– c = h1 ph + h2 Th = ph + k Th; k = -8.53 bar/K
• Nonlinear evaluation of the loss: OK!
Refrigeration cycle: Proposed control structure
• Control c = “temperature-corrected high pressure”
Summary: Procedure for selecting controlled variables
1. Define the economics and the operational constraints
2. Identify degrees of freedom and important disturbances
3. Optimize for the various disturbances
4. Identify regions of active constraints (off-line calculations)
For each active-constraint region, do steps 5-6:
5. Identify “self-optimizing” controlled variables for the remaining degrees of freedom
6. Identify switching policies between the regions
Example switching policies – 10 km race
1. “Startup”: given speed, or follow a “hare”
2. When heart beat > max or pain > max: switch to a slower speed
3. When close to the finish: switch to maximum power
Another example: regions for an LNG plant (see Chapter 7 in the thesis by J. B. Jensen, 2008)
Current research 1 (Sridharakumar Narasimhan and Henrik Manum): Conditions for switching between regions of active constraints
Idea: Within each region it is optimal to
1. Control the active constraints at ca = ca,constraint
2. Control the self-optimizing variables at cso = cso,optimal
• Define ci in each region i
• Keep track of ci (active constraints and “self-optimizing” variables) in all regions i
• Switch to region i when an element of ci changes sign
• Research issue: can we get lost?
Current research 2 (Håkon Dahl-Olsen): Extension to dynamic systems
• Basis, from dynamic optimization: the Hamiltonian should be minimized along the trajectory
• Generalize the steady-state local methods:
– Generalize the maximum gain rule
– Generalize the nullspace method (n = 0)
– Generalize the “exact local method”
Current research 3 (Sridharakumar Narasimhan and Johannes Jäschke): Extension of the noise-free case (nullspace method) to nonlinear systems
• Idea: The ideal self-optimizing variable is the gradient Ju; optimal setpoint = 0
• For certain problems (e.g. polynomial):
– Find an analytic expression for Ju in terms of u and d
– Derive Ju as a function of the measurements y (eliminate the disturbances d)
Current research 4 (Henrik Manum and Sridharakumar Narasimhan): Self-optimizing control and explicit MPC
• Our results on optimal measurement combinations (keep c = Hy constant):
– Nullspace method for n = 0 (Alstad and Skogestad, 2007)
– Explicit expression (“exact local method”) for n ≠ 0 (Alstad et al., 2008)
• Observation 1: Both results are exact for quadratic optimization problems
• Observation 2: MPC can be written as a quadratic optimization problem, and the optimal solution is to keep c = u – Kx constant
• There must be some link!
Quadratic optimization problems
• Noise-free case (n = 0)
• Reformulation of the nullspace method of Alstad and Skogestad (2007):
– Can add the linear constraints c = Hy to the quadratic problem with no loss
– Need ny ≥ nu + nd. H is unique if ny = nu + nd (i.e. nym = nd)
– H may be computed from the nullspace method
V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
Quadratic optimization problems
• With noise / implementation error (n ≠ 0)
• Reformulation of the exact local method of Alstad et al. (2008):
– Can add the linear constraints c = Hy with minimum loss
– Have an explicit expression for H from the “exact local method”
V. Alstad, S. Skogestad and E. S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 18, in press (2008).
Optimal control / Explicit MPC
• Treat the initial state x0 as the disturbance d. Discrete-time constrained MPC problem:
• In each active-constraint region this becomes an unconstrained quadratic optimization problem ⇒ can use the above results to find the linear constraints
1. State feedback with no noise (LQ problem)
• Measurements: y = [u x]
• Linear constraints: c = Hy = u – Kx
• nx = nd: No loss (solution unchanged) by keeping c = 0, so u = Kx is optimal!
• Can find the optimal feedback K from the “nullspace method”:
– Same result as solving the Riccati equations
• NEW INSIGHT FOR EXPLICIT MPC: Use the change in sign of c for neighboring regions to decide when to switch regions
H. Manum, S. Narasimhan and S. Skogestad, “A new approach to explicit MPC using self-optimizing control”, ACC, Seattle, June 2008.
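The claim that the open-loop QP solution coincides with Riccati state feedback can be checked on a scalar system. A sketch with hypothetical numbers (x⁺ = a·x + b·u, cost Σ q·xk² + r·uk² plus terminal cost pN·xN²; the initial state x0 plays the role of the "disturbance"):

```python
a, b, q, r, pN, N = 1.0, 1.0, 1.0, 1.0, 1.0, 3

# (1) Backward Riccati recursion for u = -K*x
P = pN
for _ in range(N):
    K = a * b * P / (r + b * b * P)                      # gain using next-step P
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
K0 = K  # feedback gain at time 0

# (2) Direct QP: minimize J over (u0..u_{N-1}) for a given x0 by
# solving grad J = 0 with Gauss-Jordan elimination (pure Python).
def solve(A, rhs):
    n = len(A)
    A = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        for i in range(n):
            if i != col:
                f = A[i][col] / A[col][col]
                A[i] = [x - f * y for x, y in zip(A[i], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

x0 = 1.7
# Prediction: x_k = a^k x0 + S[k].u, with S[k][j] = b * a^(k-1-j) for j < k
S = [[b * a ** (k - 1 - j) if j < k else 0.0 for j in range(N)]
     for k in range(N + 1)]
w = [q] * N + [pN]                       # weights on x_0^2 .. x_N^2
# J(u) = const + g.u + 0.5 u.Hm.u  ->  solve Hm u = -g
Hm = [[2 * (r if i == j else 0.0)
       + 2 * sum(w[k] * S[k][i] * S[k][j] for k in range(N + 1))
       for j in range(N)] for i in range(N)]
g = [2 * sum(w[k] * S[k][i] * a ** k * x0 for k in range(N + 1))
     for i in range(N)]
u = solve(Hm, [-gi for gi in g])

print(u[0] / x0, -K0)   # both ~ -0.6154: the QP's u0 is linear in x0, u0 = -K0*x0
```

The QP never "knows" about feedback, yet its first move is exactly −K0·x0 for any x0, which is the self-optimizing c = u − Kx = 0 constraint being lossless in the unconstrained region.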
Explicit MPC, state feedback: second-order system
[Figures: phase-plane trajectory and time response, time in s]
Optimal control / Explicit MPC
2. Output feedback (not all states measured), no noise
• Option 1: State estimator
• Option 2: Direct use of measurements for feedback
– “Measurements”: y = [u ym]
– Linear constraints: c = Hy = u – K ym
• No loss (solution unchanged) by keeping c = 0 (constant), so u = K ym is optimal, provided we have enough independent measurements: ny ≥ nu + nd ⇒ nym ≥ nd
• Can find the optimal feedback K from the “self-optimizing nullspace method”
• Can also add previous measurements, but get some loss due to causality (cannot affect past outputs)
H. Manum, S. Narasimhan and S. Skogestad, “Explicit MPC with output feedback using self-optimizing control”, IFAC World Congress, Seoul, July 2008.
Explicit MPC, output feedback: second-order system
[Figure: time response compared with state feedback, time in s]
Optimal control / Explicit MPC
2. Output feedback – further extensions
• Explicit expressions for certain fixed-order optimal controllers
• Example: Can find the optimal multivariable PID controller by using as “measurements”:
– the current output (P)
– the sum of outputs (I)
– the change in output (D)
Optimal control / Explicit MPC
3. Further extension: Output feedback with noise
• Option 1: State estimator
• Option 2: Direct use of measurements for feedback
– “Measurements”: y = [u ym]
– Linear constraints: c = Hy = u – K ym
• The loss from using this feedback law (adding these constraints) is minimized by computing the feedback K using the “exact local method”
H. Manum, S. Narasimhan and S. Skogestad, “Explicit MPC with output feedback using self-optimizing control”, IFAC World Congress, Seoul, July 2008.
Conclusion
• Simple control policies are always preferred in practice (if they exist and can be found)
• Paradigm 2: Use off-line optimization and analysis to find simple near-optimal control policies suitable for on-line implementation
• Current research: several interesting extensions
– Optimal region switching
– Dynamic optimization
– Explicit MPC
• Acknowledgements: Sridharakumar Narasimhan, Henrik Manum, Håkon Dahl-Olsen, Vinay Kariwala
EXAMPLE: Recycle plant (Luyben, Yu, etc.)
[Flowsheet: feed of A; recycle of unreacted A (+ some B); product (98.5% B)]
Given feedrate F0 and column pressure:
• Dynamic DOFs: Nm = 5
• Column levels: N0y = 2
• Steady-state DOFs: N0 = 5 – 2 = 3
Recycle plant: Optimal operation
• 1 remaining unconstrained degree of freedom
Control of recycle plant: Conventional structure (“two-point”: xD)
[Flowsheet with level (LC) and composition (XC) loops on xD and xB]
• Control the active constraints (Mr = max and xB = 0.015) + xD
Luyben rule
• Luyben rule (to avoid snowballing): “Fix a stream in the recycle loop” (F or D)
Luyben rule: D constant
[Flowsheet with D fixed; level (LC) and composition (XC) loops]
• Luyben rule (to avoid snowballing): “Fix a stream in the recycle loop” (F or D)
A. Maximum gain rule: Steady-state gain
• Conventional structure: looks good
• Luyben rule: not promising economically
How did we find the gains in the table?
1. Find the nominal optimum
2. Find the (unscaled) gain G0 from the input to the candidate outputs: c = G0 u
• In this case there is only a single unconstrained input (DOF); choose u = L
• Obtain the gain G0 numerically by making a small perturbation in u = L while adjusting the other inputs such that the active constraints are constant (bottom composition fixed in this case). IMPORTANT!
3. Find the span for each candidate variable:
• For each disturbance di, make a typical change and reoptimize to obtain the optimal ranges Δcopt(di)
• For each candidate output, obtain (estimate) the control error (noise) n
• The expected variation for c is then: span(c) = Σi |Δcopt(di)| + |n|
4. Obtain the scaled gain, G = |G0| / span(c)
Note: The absolute value (the vector 1-norm) is used here to “sum up” and get the overall span. Alternatively, the 2-norm could be used, which can be viewed as putting less emphasis on the worst case. As an example, assume that the only contribution to the span is the implementation/measurement error, and that the variable we are controlling (c) is the average of 5 measurements of the same y, i.e. c = Σ yi/5, where each yi has a measurement error of 1, i.e. nyi = 1. Then with the absolute value (1-norm), the contribution to the span from the implementation (measurement) error is span = Σ |nyi|/5 = 5 × (1/5) = 1, whereas with the 2-norm, span = sqrt(5 × (1/5)²) = 0.447. The latter is more reasonable, since we expect the overall measurement error to be reduced when taking the average of many measurements. In any case, the choice of norm is an engineering decision, so there is not really one that is “right” and one that is “wrong”. We often use the 2-norm for mathematical convenience, but there are also physical justifications (as just given!).
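The 1-norm vs 2-norm span comparison in the note above can be written out directly. A small sketch of that exact example (c = average of 5 measurements, each with error 1):

```python
import math

N = 5
weights = [1.0 / N] * N   # c = sum(yi) / N
errors = [1.0] * N        # each yi has measurement error 1

# 1-norm span: add up the absolute contributions of each measurement error
span_1norm = sum(abs(w * e) for w, e in zip(weights, errors))              # 1.0

# 2-norm span: root-sum-square, crediting error averaging across sensors
span_2norm = math.sqrt(sum((w * e) ** 2 for w, e in zip(weights, errors)))  # ~0.447

print(span_1norm, span_2norm)
```

As the note says, the 2-norm span shrinks with the number of averaged sensors (here by a factor √5), while the 1-norm span does not.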
B. “Brute force” loss evaluation: Disturbance in F0
[Figure: loss curves for the Luyben rule and the conventional structure]
• Loss with nominally optimal setpoints for Mr, xB and c
B. “Brute force” loss evaluation: Implementation error
[Figure: loss curves for the Luyben rule]
• Loss with nominally optimal setpoints for Mr, xB and c
C. Optimal measurement combination
• 1 unconstrained variable (#c = 1)
• 1 (important) disturbance: F0 (#d = 1)
• “Optimal” combination requires 2 “measurements” (#y = #u + #d = 2)
– For example, c = h1 L + h2 F
• BUT: Not much to be gained compared to control of a single variable (e.g. L/F or xD)
Conclusion: Control of recycle plant
• Active constraint: Mr = Mr,max
• Active constraint: xB = xB,min (assumption: minimize energy V)
• Self-optimizing: L/F constant (easier than “two-point” control)
Recycle systems: Do not recommend Luyben’s rule of fixing a flow in each recycle loop (even to avoid “snowballing”)
Example: Cooling Cycle
• Heat is supplied to the surroundings at high temperature TH and removed at low temperature TC
• J = Ws (work supplied)
• DOF = u (valve opening)
• Main disturbance: d = TH
• What should we control?
Reference: S. Skogestad and I. Postlethwaite, “Multivariable Feedback Control: Analysis and Design”, 2nd edition, John Wiley & Sons, Chichester, UK (2005).
Example: Cooling Cycle
• The levels Mh and Ml are promising CVs (often used in practice)
• Setpoint error computed using the nonlinear model for d = 0.1 °C
• Note: implementation errors are not considered
• Confirmation using the nonlinear model: a 10 °C disturbance in TH increases the work Ws by 10% when c = u (valve opening), and by only 0.003% when c = Mh (condenser level)
Limitations of the Maximum Gain Rule
• It may not be possible to scale Juu as a scalar times a unitary matrix. This minor limitation can easily be overcome by redefining the scaled gain.
• More seriously, the maximum gain rule assumes that the worst-case setpoint errors Δci,opt(d) for each CV can appear together. In general, the Δci,opt(d) are correlated. This limitation makes the maximum gain rule approximate and can sometimes lead to a sub-optimal set of CVs.
• To overcome this limitation, the exact local methods are used.
Summary
• Evaluation of candidates can be time-consuming using the general nonlinear formulation
– Pre-screen using local methods
– Final verification of a few promising alternatives by evaluating the actual loss
• The maximum gain rule is extremely simple and insightful, but “may” lead to a non-optimal set of CVs
• Exact local methods (worst-case or average-case) are more accurate
• Lower loss: use measurement combinations as CVs
Self-Optimizing Control
• Tracking the optimal operating point as disturbances change:
– Paradigm 1: Estimate the disturbances and re-optimize to update the degrees of freedom
– Paradigm 2: Update the degrees of freedom indirectly, by holding the controlled variables at their setpoints using feedback controllers
• Self-optimizing control: acceptable loss (in comparison with truly optimal operation) with a feedback-based operational strategy
Selection of Controlled Variables
• Control the active constraints (with back-off)
• For the remaining DOFs, look for self-optimizing CVs
• Intuitively, select CVs c with:
– small optimal variation in the setpoints and small implementation error
– large gain from u to c
• More generally, compare the worst-case or average loss for the allowable disturbances and implementation errors
Acknowledgements
• Co-workers:
– Vinay Kariwala (NTNU + NTU, Singapore)
– Ivar J. Halvorsen (NTNU, Norway)
– Marius S. Govatsmark (NTNU, Norway)
– Vidar Alstad (NTNU, Norway)
– Antonio C. B. de Araujo (NTNU, Norway)
– Sridharakumar Narasimhan (NTNU, Norway)
– Henrik Manum (NTNU, Norway)
– Håkon Dahl-Olsen (NTNU, Norway)
– Sivaramakrishnan Janardhanan (NTU, Singapore)
– Yi Cao (Cranfield University, UK)