Classification of Discrete Event Simulation Models and Output

Classification of Discrete Event Simulation Models and Output Data: Creating a Sufficient Model Set.
Katy Hoad (kathryn.hoad@wbs.ac.uk), Stewart Robinson, Ruth Davies, Mark Elder
www.wbs.ac.uk/go/autosimoa
Funded by EPSRC and SIMUL8 Corporation

AIM: Provide a representative and sufficient set of models / data output for use in discrete event simulation research.

MODEL CLASSIFICATION: Creating a Standard Set of Models/Outputs
Outline:
§ Motivation
§ Identification of model/output characteristics
§ Creation of a classification system

Motivation
Want to create an automated Analyser to advise the user on:
• Warm-up length
• Run-length
• Number of replications
Flow: Simulation model → Output data → Analyser (warm-up analysis; use replications or long-run?; replications analysis; run-length analysis) → Recommendation, where possible; otherwise obtain more output data and repeat.

Motivation
• Needed to test output analysis methods to find the most effective, and to test the created algorithms for effectiveness and robustness.
• Required a set of models / output data that sufficiently covered the different types of possible models/output.
• Could not find a general set in the public domain.

Identification of model/output characteristics
How do you define a sufficient and representative set of models/output?
AIM: To define a set of characteristics that classify/describe a model and its output.
Ø Searched the literature.
Ø Collected and studied ‘real’ models/output.

Categorising Output Data Sets by Shape & Characteristics
[Figure: example output data sets (Group A … Group N) categorised by shape and characteristics: terminating vs. non-terminating; transient vs. steady-state; in/out-of-control; cycling/seasonality; auto-correlation; normality.]

2 main categories or groups:
Ø Transient (including out-of-control trend)
Ø Steady-state (including steady-state cycle)
9 other characteristics of models / output were chosen to categorize the models / output within these two main groups.

Model characteristics:
• Deterministic or Stochastic (random)
• Significant predetermined model changes (by time)
• Dynamic internal changes, i.e. ‘feedback’
Output data characteristics:
§ Empty-to-empty pattern
§ Initial transient (warm-up)
§ Out-of-control trend (ρ ≥ 1)
§ Cycle
§ Auto-correlation
§ Statistical distribution
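
For illustration only, here is a minimal sketch of how one model/output combination could be recorded against these characteristics; the field names and the example values are my own, not the project's classification-table schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelOutputClassification:
    # Model characteristics
    stochastic: bool                 # Deterministic (False) or Stochastic (True)
    predetermined_changes: bool      # significant predetermined model changes (by time)
    internal_feedback: bool          # dynamic internal changes, i.e. 'feedback'
    # Output data characteristics
    group: str                       # 'transient' or 'steady-state'
    empty_to_empty: bool
    initial_transient: bool          # warm-up present
    out_of_control: bool             # out-of-control trend (rho >= 1)
    cycle: bool
    auto_correlation: Optional[str]  # e.g. 'AR(1)', 'ARMA(5,5)' or None
    distribution: Optional[str]      # e.g. 'Normal', 'Gamma' or None if unknown

# Purely illustrative entry; the values are arbitrary, not a real classification
example = ModelOutputClassification(
    stochastic=True, predetermined_changes=False, internal_feedback=True,
    group="steady-state", empty_to_empty=False, initial_transient=True,
    out_of_control=False, cycle=False, auto_correlation="AR(1)",
    distribution="Normal",
)
```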

Looked at over 50 real models, defined as discrete event simulation models of real existing / future systems. For example (model: output / response):
• Call Centre: % of calls answered within 30 seconds
• Production Line in Manufacturing Plant: Through-put
• Fast Food Store: Average queuing time
• Hospital: Average number in system
Ø Justification of selection of model output: picked the most likely output result for each model, using already-programmed results collection where feasible.

Further Analysis
Each real model was statistically analysed as follows:
1. Steady State: Subtract the mean from the output data. Test the residuals for auto-correlation and normality.
2. Steady State Cycle: Run the model for many cycles. Take the mean of each cycle to create a new time series. Subtract the mean from this new series. Test the residuals for auto-correlation and normality.
3. Transient: Test the output data for auto-correlation. Run many replications (1000). Take the mean of each replication to create a new (non auto-correlated) data set. Test which type of statistical distribution best fits this new data set.
4. Out-Of-Control: Plot the data.
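
A minimal sketch of the residual tests in steps 1 and 2, assuming a recent SciPy/statsmodels; the Ljung-Box and Shapiro-Wilk tests stand in for whichever auto-correlation and normality tests the project actually used:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def test_steady_state_residuals(series, lags=20):
    """Step 1: subtract the mean, then test the residuals for auto-correlation and normality."""
    residuals = np.asarray(series, dtype=float)
    residuals = residuals - residuals.mean()

    # Auto-correlation: Ljung-Box test over the first `lags` lags
    lb = acorr_ljungbox(residuals, lags=[lags])
    autocorrelated = float(lb["lb_pvalue"].iloc[0]) < 0.05

    # Normality: Shapiro-Wilk test on the residuals
    _, p_normal = stats.shapiro(residuals)
    return {"autocorrelated": autocorrelated, "normal": p_normal >= 0.05}

def test_cycle_residuals(series, cycle_length):
    """Step 2: collapse each cycle to its mean, then reuse the same residual tests."""
    series = np.asarray(series, dtype=float)
    n_cycles = len(series) // cycle_length
    cycle_means = series[: n_cycles * cycle_length].reshape(n_cycles, cycle_length).mean(axis=1)
    return test_steady_state_residuals(cycle_means, lags=min(10, n_cycles - 1))
```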

Analysis Results
Steady State data:
• Auto-correlation: AR(1), AR(2), some AR(3+), some ARMA(n, n) and some with no auto-correlation.
• Distributions: Normal and non-normal.
Transient data:
• Auto-correlation: AR(1), AR(2), some AR(3+), some ARMA(n, n) and some with no auto-correlation.
• Distributions found to be a ‘good’ fit to the various transient data outputs: Normal, Beta, Pearson 5, LogNormal, Weibull, Gamma, Pearson 6, Erlang, Chi-squared, Bi-modal.
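
For the transient case, the distribution fit in step 3 above can be automated by fitting candidate families and ranking them with a goodness-of-fit statistic. The sketch below uses the Kolmogorov-Smirnov statistic over some of the families listed above; both the candidate list and the K-S criterion are assumptions, not necessarily what this work used (Pearson 5/6 are omitted because SciPy has no like-named distributions):

```python
import numpy as np
from scipy import stats

# A subset of the candidate families listed in the results above
CANDIDATES = {
    "Normal": stats.norm,
    "LogNormal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "Gamma": stats.gamma,
    "Beta": stats.beta,
}

def best_fitting_distribution(replication_means):
    """Fit each candidate by maximum likelihood, rank by K-S statistic (smaller = better fit)."""
    data = np.asarray(replication_means, dtype=float)
    ranked = []
    for name, dist in CANDIDATES.items():
        params = dist.fit(data)                          # maximum-likelihood fit
        ks_stat, _ = stats.kstest(data, dist.cdf, args=params)
        ranked.append((ks_stat, name, params))
    ranked.sort(key=lambda r: r[0])
    return ranked[0]                                     # (ks statistic, family name, parameters)
```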

Classification Tables
Ø MODEL SUMMARY_Steady State.xls
Ø MODEL SUMMARY_Transient.xls
Ø AIM: Collect ‘real’ models to cover the range of classifications of models (an on-going process). Create artificial models to cover the range of classifications of output data.

Sample of artificial models from the literature: steady-state outputs with or without a warm-up period.
• Cash et al. 1992: AR(1); M/M/1; Markov Chain.
• Robinson 2007: AR(1); M/M/1.
• Goldsman et al. 1994: AR(1); M/M/1.
• White, Cobb & Spratt 2000: AR(2).
• Ockerman & Goldsman 1997: Random Walk; AR(1); MA(1).
• Kelton & Law 1983: M/M/1 (FIFO); M/M/1 (LIFO); M/M/1 (SIRO); M/M/1 (initialized with 10 customers); E4/M/1; M/H2/1; M/M/2; M/M/4; M/M/1/M/1.
• Hsieh et al. 2004: M/M/1/199; M/G/1/199; M/M/1/19; number-in-stock process of a single-item inventory management system.

3 main methods for creating artificial models / output data sets:
1. Create simple simulation models where the theoretical value of some output / response is known.
   Ø e.g. Model: M/M/1. Output: mean waiting time.
2. Create simple simulation models where the value of some output / response is estimated but the model characteristics can be controlled.
   Ø e.g. Model: single-item inventory management system. Output: number-in-stock.
3. Create data sets from known equations which closely resemble real model output, with a known value for some specific output / response.
   Ø e.g. AR(1) with Normal(0, 1) errors. Output: mean.
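
A minimal sketch of methods 1 and 3: an AR(1) generator with Normal(0, 1) errors whose true mean is known by construction, and the standard M/M/1 result for the mean waiting time in the queue. The function names and parameter choices are my own illustration:

```python
import numpy as np

def ar1_series(n, phi=0.8, mu=10.0, seed=None):
    """Method 3: AR(1) data with Normal(0, 1) errors; the true steady-state mean is mu."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, 1.0)
    return x

def mm1_mean_queue_wait(arrival_rate, service_rate):
    """Method 1: theoretical mean waiting time in queue for a stable M/M/1 queue,
    Wq = lambda / (mu * (mu - lambda))."""
    assert arrival_rate < service_rate, "queue must be stable (lambda < mu)"
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

# Example: a data set whose true mean is exactly 10, and a queue whose true mean
# waiting time is exactly 4.0 (lambda = 0.8, mu = 1.0)
data = ar1_series(5000, phi=0.8, mu=10.0, seed=42)
print(round(data.mean(), 2), mm1_mean_queue_wait(0.8, 1.0))
```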

Our Project: Replications and Warm-up Method Testing
• Replication method testing
  – Data sets of replicated mean values from transient output: left- and right-skewed, Normal and Bi-modal.
  – Real models.
• Warm-up method testing
  – Steady-state functions: AR(1), AR(2), AR(4), MA(2), ARMA(5, 5), no auto-correlation.
  – Initialisation bias functions: severity, length, shape.
  – Real models.
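
The exact initialisation-bias functions are not specified here, so the sketch below simply superimposes a hypothetical exponentially decaying bias on a stand-in steady-state series; severity, length and shape map onto the bias height, the number of biased observations and the decay curve:

```python
import numpy as np

def add_initialisation_bias(steady_state, severity=5.0, length=500, shape=3.0):
    """Superimpose a decaying initialisation bias on a steady-state series.

    severity -- height of the bias at time 0 (in output units)
    length   -- number of observations the bias affects
    shape    -- decay speed over that region (larger = steeper decay)
    """
    x = np.asarray(steady_state, dtype=float).copy()
    n = min(length, len(x))
    t = np.arange(n) / max(n - 1, 1)          # 0 .. 1 across the biased region
    x[:n] += severity * np.exp(-shape * t)    # hypothetical exponential decay shape
    return x

# Example: bias a stand-in steady-state series (white noise around a mean of 10)
rng = np.random.default_rng(7)
steady = 10.0 + rng.normal(0.0, 1.0, size=5000)
biased = add_initialisation_bias(steady, severity=8.0, length=400, shape=4.0)
```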

SUMMARY
• Produced a classification of model and output data types for the purpose of aiding research into simulation output analysis.
• Currently using artificial models that broadly cover each output type in the classification tables in our research into output analysis methods.
www.wbs.ac.uk/go/autosimoa

DISCUSSION: YOUR COMMENTS APPRECIATED
Ø Using our chosen classification criteria, we have classified a complete set of possible models / output: but are these criteria sufficient?
Ø Main model/output types missing from our collection:
• Transient with warm-up.
• Deterministic transient.
• Cycle with warm-up.
Are these missing model criteria feasible?

ACKNOWLEDGMENTS
This work is part of the Automating Simulation Output Analysis (AutoSimOA) project, funded by the UK Engineering and Physical Sciences Research Council (EPSRC, grant EP/D033640/1). The work is being carried out in collaboration with SIMUL8 Corporation, who are also providing sponsorship for the project.
Stewart Robinson, Katy Hoad, Ruth Davies
INFORMS, November 2007
www.wbs.ac.uk/go/autosimoa