www.poulinhugin.com Page 1
Overview • Brief Project History • Hugin Expert A/S and Bayesian Technology Discussion • Poulin Automation Tool Discussion Page 2
Hugin Software? • Product maturity and optimisation produce the world’s fastest Bayesian inference engine • State-of-the-art capabilities based on internal and external research and development • Practical experience and theoretical excellence combined form the basis of further product refinement • High-performance and mission critical systems in numerous areas are constructed using Hugin software Page 3
Hugin Expert A/S? • The market leader for more than a decade • Highly skilled researchers and developers • Has strategic cooperation internationally • Has a clear strategy for maintaining its leadership as tool and technology provider • Part of the world’s largest Bayesian research groups • Has experience from numerous, large-scale, international R&D projects Page 4
Client List USA Hewlett-Packard Intel Corporation Dynasty Dr. Red Duke Xerox Lockheed Martin NASA/Johnson Space Center Boeing Computer Service USDA Forest Service Information Extraction & Transport Inc. Pacific Sierra Research Price Waterhouse Swiss Bank Corporation Bellcore ISX Corporation Lam Research Corporation Orincon Corporation Integrate IT Charles River Analytics Northrop Grumman CHI Systems Inc Voyan Technology Los Alamos National Laboratory Rockwell Science Center Citibank Perkin Elmer Corporation Inscom Honeywell Software Initiative Aragon Consulting Group Raytheon Systems Company Kana Communications Sandia National Laboratories GE Global Research Westhollow Technology Center Great Britain Rolls-Royce Aerospace Group Philips Research Laboratories USB AG Motorola Defence Research Agency Nuclear Electric Plc Marconi Simulation Lucas Engineering & Systems Ltd Lloyd's Register BT Laboratories Brown & Root Limited Silsoe Research Institute Aon Risk Consultants Railtrack Shell Global Solutions Germany Siemens AG Volkswagen AG DaimlerChrysler AG GSF Medis Reutlingen Kinderklinik France PGCC Technologie Protectic Objectif Technologies Usinor Canada Decision Support Technologies Italy ENEA CRE Casaccia CSELT Israel IBM Haifa Research Laboratory Australia Department of Defence, DSTO National Australian Bank Netherlands Shell International E&P Japan Sumitomo Metal Industries Dentsu Inc. Scandinavia Defence Research Agency Danish Defence Research Establishm. Aalborg Portland Danish Agricultural Advisory Center COWI FLS Automation Judex Datasystemer AON Denmark ABB Nykredit Swedpower South Africa CSIR Page 5
Hugin Expert Bayesian Software Page 6
Bayes’ Theorem • Rev. Thomas Bayes (1702-1761), an 18th-century priest from England • The theorem, as generalized by Laplace, is the basic starting point for inference problems using probability theory as logic – assigns degrees of belief to propositions Page 7
Bayesian Technology • Probabilistic graphical models • Model-based approach to decision support • Compact and intuitive graphical representation • Sound & coherent handling of uncertainty • Reasoning and decision making under uncertainty • Bayesian networks and influence diagrams Page 8
A Bayesian Network • A Bayesian network consists of: • A set of nodes and a set of directed edges between nodes • The nodes together with the directed edges form a directed acyclic graph (DAG) • Each node has a finite set of states • Attached to each node X with parents pa(X) there is a conditional probability table P(X | pa(X)) • A knowledge representation for reasoning under uncertainty Page 9
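The definition above can be illustrated with a minimal sketch (plain Python, not Hugin code): a two-node network Rain → WetGrass, the two conditional probability tables, and inference by summing over the joint distribution. All numbers are invented for illustration.

```python
# Minimal two-node Bayesian network: Rain -> WetGrass.
# Prior for Rain and the CPT P(WetGrass | Rain); toy numbers.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def posterior_rain(wet_observed):
    """P(Rain | WetGrass = wet_observed) by summing the joint distribution."""
    joint = {r: p_rain[r] * p_wet_given_rain[r][wet_observed]
             for r in (True, False)}
    z = sum(joint.values())          # normalization constant P(WetGrass)
    return {r: joint[r] / z for r in joint}

print(posterior_rain(True))  # Rain becomes more probable once the grass is wet
```

Real Hugin networks contain many nodes, where this brute-force enumeration is replaced by junction-tree propagation; the idea of combining CPTs and normalizing is the same.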
Bayesian Expert Systems Generative distribution • Induce structure of the graphical representation • Fusion of data & expert knowledge • Estimate parameters • Fusion of data & expert knowledge Page 10
Implementation • Cause and effect relations represented in an acyclic, directed graph • Strengths of relations are encoded using probabilities • Compute probabilities of events given observations on other events • Fusion of data and domain knowledge • Analyse results using techniques like conflict & sensitivity analysis Page 11
Example: Car Won’t Start Page 12
Technology Summary • A compact and intuitive graphical representation of causal relations • Coherent and mathematically sound handling of uncertainty and decisions • Construction and adaptation of Bayesian networks based on data sets • Efficient solution of queries against the Bayesian network • Analysis tools such as • Data conflict, Explanation, Sensitivity, Value of information analysis Page 13
What Does This Do For You? • Reasoning and decision making under uncertainty supporting • Diagnosis • Prediction • Process analysis and supervision • Filtering & classification • Control • Troubleshooting • Predictive maintenance • … Page 14
Bayesian Applications • Medicine – forensic identification, diagnosis of muscle and nerve diseases, antibiotic treatment, diabetes advisory system, triage (AskRed.com) • Software – software debugging, printer troubleshooting, safety and risk evaluation of complex systems, help facilities in Microsoft Office products • Information Processing – information filtering, display of information for time-critical decisions, fault analysis in aircraft control • Industry – diagnosis and repair of on-board unmanned underwater vehicles, prediction of parts demand, control of centrifugal pumps, process control in wastewater purification • Economy – prediction of default, credit application evaluation, portfolio risk and return analysis • Military – NATO Airborne Early Warning & Control Program, situation assessment • Agriculture – blood typing and parentage verification of cattle, replacement of milk cattle, mildew management in winter wheat Page 15
Hugin Products • General purpose decision support • Hugin Explorer • Hugin graphical user interface • Hugin Developer • Hugin graphical user interface • Hugin decision engine • APIs (C, C++, Java) and ActiveX server • Troubleshooting • Hugin Advisor • A suite of tools for troubleshooting • Data mining • Hugin Clementine Link Page 16
POULIN HUGIN Automation Tool v0.1 Page 17
Vision • To create an application that provides automation for the Hugin Decision Engine • Focus on main Bayesian inference capabilities • Build an automation-capable command-line tool • Build a data parser for formatting of structured/unstructured data • Divide the problem space and build a meta-database • Integrate with the Hugin GUI for human-based knowledge discovery Page 18
Methodology • The Naive Bayes Model • Structure, variables and states, • Discretization using Principle of Maximum Entropy • Parameter estimation using EM • The Tree Augmented Naive Bayes Model • Interdependence relations between information variables based on mutual information • (extra step compared to NBM) • Model update by adding new nodes as in NBM • Value of information (variables and cases) • Evidence sensitivity analysis (what-if) Page 19
Functionality Analysis Model Data Command-line interface • Data preparation • Model construction - build Naive Bayes Model or Tree Augmented NBM • Model update - add additional information variables • Inference - compute probability of target given evidence • What-if analysis - robustness of probabilities • Value of Information analysis • Which case is most informative • Which observation is most informative Page 20
Features • An application for construction of a Naive Bayes Model • Updating a Naive Bayes Model • Construction of a Tree Augmented Naive Bayes Model • Inference based on a case • What-if sensitivity analysis (a single piece of evidence) • Value-of-information analysis (cases and observations) • Error handling and tracing have been kept at a minimum. • Implemented in C++ using Hugin Decision Engine 6.3 • Runs on Windows 2k, Red Hat Linux, Sun Solaris. • Program documentation in HTML. Page 21
Tools • The POULIN-HUGIN package consists of a set of tools for • Data preparation: • dat2hcs, class2net, class2dat, weather2dat, pull, struct2dat, ustruct2hcs • Model construction & update: • ph • Inference: • ph • Analysis: • ph Page 22
Data Sample • Source data are from the Global Summary of the Day (GSOD) database archived by the National Climatic Data Center (NCDC). • Used average daily temperature (of 24 hourly temperature readings) in 145 US cities measured from January 1, 1995 to December 29, 2003. • Data of 3,255 cases split into subsets for learning, updating, cases, and case files. • learning: 2,000 cases • update: 1,000 cases • cases: 10 cases • case files: 245 cases • 2,698 missing values out of 471,975 entries, i.e. about 0.57% missing values. Page 23
Discretization • Measures on average daily temperatures are continuous by nature. • Continuous variables can be represented as discrete variables through discretization. • Determining intervals: how many, width, equally sized, …? • We discretize using the principle of maximum entropy, but can easily make equidistant (uniform) discretization. Page 24
Discretization • Entropy can be considered as a measure of information. Obtain uninformative distribution under current information. • Principle of Maximum Entropy • By choosing to use the distribution with the maximum entropy allowed by our information, the argument goes, we are choosing the most uninformative distribution possible. To choose a distribution with lower entropy would be to assume information we do not possess; to choose one with a higher entropy would violate the constraints of the information we do possess. Thus the maximum entropy distribution is the only reasonable distribution. • Discretize variables to have uniform distribution based on data. Page 25
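The principle above has a simple practical reading: interval boundaries placed so that each interval holds (roughly) the same number of data points give the discretized variable a near-uniform, i.e. maximum-entropy, distribution. A minimal sketch (not the P-H implementation; the temperature values are invented):

```python
# Maximum-entropy (equal-frequency) discretization sketch:
# choose k-1 cut points so each of the k intervals gets n/k data points.

def max_entropy_bins(values, k):
    """Return k-1 cut points splitting the sorted values into equal-count bins."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // k] for i in range(1, k)]

# Invented average daily temperatures (degrees F).
temps = [31, 35, 40, 44, 52, 58, 63, 70, 77, 85, 88, 90]
print(max_entropy_bins(temps, 3))  # two cut points -> three equal-count bins
```

Equidistant (uniform-width) discretization would instead split the range [min, max] into k equal-width intervals, regardless of where the data mass lies.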
Model Specification • A Bayesian network consists of a • qualitative part, the graph structure (DAG G). • quantitative part, the conditional probability distributions (P). • Model specification consists of two parts. • A Bayesian network N is minimal if and only if, for every node X and for every parent Y, X is not independent of Y given the other parents of X Page 26
Naive Bayes Model • A well-suited model for classification tasks and tasks of the following type • An exhaustive set of mutually exclusive hypotheses h_1, …, h_n are of interest • Measures on indicators I_1, …, I_n to predict h_i • The Naive Bayes Model • h_1, …, h_n are represented as states of a hypothesis variable H • Information variables I_1, …, I_n are children of H • The fundamental assumption is that I_1, …, I_n are pairwise independent when H is known. • Computationally and representationally a very efficient model that provides good results in many cases. Page 27
Naive Bayes Model • The Naive Bayes Model in more detail • Let the possible hypotheses be collected into one hypothesis variable H with prior P(H). • For each information variable I, acquire P(I | H) = L(H | I). • For any set of observations i_1, …, i_n calculate the posterior P(H | i_1, …, i_n) = μ P(H) ∏_j L(H | i_j), where μ = 1 / P(i_1, …, i_n) is a normalization constant. • The conclusion may be misleading as the independence assumption may not hold Page 28
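The posterior computation above can be sketched in a few lines (illustrative only; the toy prior and likelihoods are invented, not taken from the weather data):

```python
# Naive Bayes posterior: P(H | obs) proportional to P(H) * prod_i P(I_i | H).

def nbm_posterior(prior, likelihoods, observations):
    """prior: {h: P(h)}; likelihoods: {var: {h: {obs: P(obs | h)}}}."""
    score = dict(prior)                       # start from the prior P(H)
    for var, obs in observations.items():
        for h in score:
            score[h] *= likelihoods[var][h][obs]   # multiply in each likelihood
    z = sum(score.values())                   # normalization constant 1/mu
    return {h: score[h] / z for h in score}

prior = {"cold": 0.5, "warm": 0.5}
lik = {"wind": {"cold": {"high": 0.7, "low": 0.3},
                "warm": {"high": 0.2, "low": 0.8}}}
print(nbm_posterior(prior, lik, {"wind": "high"}))
```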
NBM Model Construction ph -nbm <data> <target> <states> <iterations> [-verbose] • This command builds a NBM model from the data contained in <data> with <target> as the hypothesis variable. • All variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in the file "nbm.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -nbm model.dat MDWASHDC 2 1 Monday, May 11th, 2004 Page 29 anders@hugin.com
Binary NBM Model Construction ph -boolnbm <data> <target> <states> <iterations> [-verbose] • This command builds a Boolean NBM model from the data contained in <data> with <target> as the hypothesis variable. • All variables will be Boolean indicating the presence of a word (the word represented by a variable is equal to the label of the variable). • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in the file "boolnbm.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -boolnbm model.dat MDWASHDC 2 1 Page 30
Tree-Augmented NBM Model • Let M be a Naive Bayes Model with hypothesis H and information variables I = {I_1, …, I_n} • We can use I(I_i; I_j | H) to measure the conditional dependency between two information variables I_i, I_j conditional on H. • After computing I(I_i; I_j | H) for all pairs I_i, I_j, we use Kruskal's algorithm to find a maximum weight spanning tree T on I. • The edges of T are directed such that no variable has more than two parents (H and one other I). • Complexity of inference becomes polynomial in the number of information variables. Page 31
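The spanning-tree step can be sketched as follows (this is not the Hugin implementation; the conditional mutual information weights I(I_i; I_j | H) are assumed to be precomputed, and the numbers are invented):

```python
# Kruskal's algorithm for a maximum-weight spanning tree over the
# information variables, with edges weighted by conditional mutual information.

def max_spanning_tree(nodes, weighted_edges):
    """weighted_edges: list of (weight, u, v); returns the tree edges."""
    parent = {n: n for n in nodes}        # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(weighted_edges, reverse=True):  # heaviest first
        ru, rv = find(u), find(v)
        if ru != rv:                       # joining two components keeps it a tree
            parent[ru] = rv
            tree.append((u, v))
    return tree

cmi = [(0.9, "I1", "I2"), (0.4, "I2", "I3"), (0.1, "I1", "I3")]
print(max_spanning_tree(["I1", "I2", "I3"], cmi))
```

In the TAN model the resulting tree edges are then directed away from a root so that each information variable gets at most one information-variable parent in addition to H.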
TAN Model Construction ph -tan <data> <target> <states> <iterations> [-verbose] • This command builds a Tree-Augmented Naive Bayes model (TAN) from the data contained in <data> with <target> as the hypothesis variable. • All variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in the file "tan.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -tan model.dat MDWASHDC 2 1 Page 32
Binary TAN Model Construction ph -booltan <data> <target> <states> <iterations> [-verbose] • This command builds a Tree-Augmented Boolean Naive Bayes model (TAN) from the data contained in <data> with <target> as the hypothesis variable. • All variables will be Boolean indicating the presence of a word (the word represented by a variable is equal to the label of the variable). • As many as <iterations> iterations of the EM algorithm will be performed. • The model constructed is saved in the file "booltan.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -booltan model.dat MDWASHDC 2 1 Page 33
Model Updates ph -update <data> <model> <target> <states> <iterations> [-verbose] • This command updates a model with data contained in <data>. <target> is the hypothesis variable of the model stored in <model>. • Variables in the data not represented in the original model will be added to the model as children of the hypothesis variable (no structure between information variables is added). The data file should contain measures on all variables (old and new). • All new variables will have a maximum of <states> states. • As many as <iterations> iterations of the EM algorithm will be performed. • The updated model is saved in the file "update.net", which can be loaded into the Hugin Graphical User Interface for inspection. • Example: ph -update update.dat model.net MDWASHDC 2 1 Page 34
Parameter Estimation • Parameter learning is identification of the CPTs of the Bayesian network. • Sources: theoretical considerations, a database of cases, subjective estimates. • The CPTs are constructed based on a database of cases D = {c_1, …, c_N}. • There may be missing values in some of the cases, indicated by N/A. • The CPTs are learned by maximum likelihood estimation: P(X = x | pa(X) = z) = n(X = x, pa(X) = z) / n(pa(X) = z), where n(Y = y) is the (expected) number of cases for which Y = y. Page 35
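With complete data, the maximum likelihood estimate above reduces to simple counting. A sketch (the toy cases are invented; with missing values the EM algorithm replaces these counts by expected counts):

```python
# Maximum likelihood CPT estimation from complete data:
# P(X=x | pa(X)=z) = n(X=x, pa(X)=z) / n(pa(X)=z).

from collections import Counter

def ml_cpt(cases, child, parent):
    """Estimate P(child | parent) from a list of case dicts (no missing values)."""
    pair = Counter((c[parent], c[child]) for c in cases)  # n(parent, child)
    pa = Counter(c[parent] for c in cases)                # n(parent)
    return {(z, x): n / pa[z] for (z, x), n in pair.items()}

cases = [{"Rain": "yes", "Wet": "yes"},
         {"Rain": "yes", "Wet": "yes"},
         {"Rain": "yes", "Wet": "no"},
         {"Rain": "no",  "Wet": "no"}]
print(ml_cpt(cases, "Wet", "Rain"))
```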
Parameter Estimation Page 36
Parameter Estimation • Prior (domain expert) knowledge can be exploited. • Experience is the number of times pa(X_i) = j has been observed. • The experience count is a positive number (> 0). • Also used to turn learning on/off. • Prior knowledge is used both to speed up and guide learning in search of a global optimum • Expected counts are used when values are missing. • Including parameters not appearing in the data. • The EM algorithm is an iterative procedure using the current estimate of the parameters as the true values. In the first iteration the initial content is used. Page 37
Inference ph -inference <model> <target> <case> [-verbose] • This command performs inference in <model>, which has <target> as hypothesis variable. The posterior distribution of <target> is displayed for the case stored in the file <case>. • Example: ph -inference nbm.net MDWASHDC case.hcs Page 38
VOI in Bayesian Networks • How do we perform value of information analysis without specifying utilities? • The reason for acquiring more information is to decrease the uncertainty about the hypothesis. • The entropy is a measure of how much probability mass is scattered around on the states (the degree of chaos). • Thus, H(T) = -Σ_t P(T = t) log P(T = t). • Entropy is a measure of randomness. The more random a variable is, the higher entropy its distribution will have. Page 39
Value of Information • If the entropy is to be used as a value function, then • We want to minimize the entropy Page 40
Variables Value of Information • What is the expected most informative observation? • A measure of the reduction of the entropy of T given X is the information gain I(T; X) = H(T) - H(T | X). • The conditional entropy is H(T | X) = Σ_x P(X = x) H(T | X = x). • Let T be the target; we select the X with maximum information gain Page 41
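The information gain computation above can be sketched directly from the formulas (the toy distributions are invented for illustration):

```python
# Entropy-based value of information:
# I(T; X) = H(T) - H(T | X), with H(T | X) = sum_x P(x) * H(T | X = x).

import math

def entropy(dist):
    """Shannon entropy in bits of a distribution {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def information_gain(p_t, p_x, p_t_given_x):
    h_t = entropy(p_t)
    h_t_x = sum(p_x[x] * entropy(p_t_given_x[x]) for x in p_x)
    return h_t - h_t_x

p_t = {"hot": 0.5, "cold": 0.5}
p_x = {"high": 0.5, "low": 0.5}
p_t_given_x = {"high": {"hot": 0.9, "cold": 0.1},
               "low":  {"hot": 0.1, "cold": 0.9}}
print(information_gain(p_t, p_x, p_t_given_x))  # positive: X is informative
```

The variable with the largest gain is the one whose observation is expected to reduce the uncertainty about the target the most.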
Variables Value of Information • Assume we are interested in B, i.e. B is the target. • We are interested in observing the variable Y with most information on B. • Thus, for each unobserved Y we compute the information gain I(B; Y) = H(B) - H(B | Y). • We select to observe the Y with maximum I(B; Y). Page 42
Variables VOI Command Line ph -voivariables <model> <target> <case> [-verbose] • This command performs a value-of-information analysis on each non-observed variable given the observations in <case> relative to <target>. That is, for each unobserved variable in <case>, a measure of how well the variable predicts <target> is displayed. • Example: ph -voivariables nbm.net MDWASHDC case_2.hcs Page 43
Case Value of Information • Assume T is the target of interest and assume we have a database of cases D = {c_1, …, c_N}. • The uncertainty in T can be measured as H(T): • A high value of H(T) indicates high uncertainty • A low value of H(T) indicates low uncertainty • For a binary hypothesis, the entropy peaks at the uniform distribution. • We compute H(T | c) for all cases c. • The case c producing the lowest value of H(T | c) is considered most informative w.r.t. T. Page 44
Case VOI Command Line ph -voicase <model> <target> <case> [-verbose] • This command performs a value-of-information analysis on the case stored in <case> relative to <target>. That is, a measure of how well the case predicts <target> is displayed. • Example: ph -voicase tan.net MDWASHDC case_2.dat Page 45
Evidence Sensitivity Analysis • Let ε = {ε_1, …, ε_n} be a set of observations and assume a single hypothesis h is of interest. • What if the observation ε_i had not been made, but instead ε_i'? • Involves computing P(h | ε \ {ε_i} ∪ {ε_i'}) and comparing the results. • This kind of analysis will help you determine if a subset of evidence acts for or against a hypothesis. Page 46
What-If Analysis • What happens to the temperature in Washington, DC if the temperature in Austin, TX changes? • Assume evidence ε = {ε_1, …, ε_n} and let ε_i be the measured temperature in Austin, TX • We compute P(T = t | ε \ {ε_i} ∪ {ε_i'}) for all alternative observations ε_i' • Myopic what-if analysis: change the finding on one information variable and monitor the change in the probability of T Page 47
What-If Analysis: Cases ph -whatif <model> <target> <case> • This command performs a what-if analysis on each instantiated variable in the case file <case> relative to <target>. That is, the posterior distribution of each hypothesis (each state of the target variable) is displayed for each possible value of the observed variables. • Example: ph -whatif model.net MDWASHDC case.hcs Page 48
What-If Analysis: Variables ph -whatif <model> <target> <case> <variable> • This command performs a what-if analysis on <variable> relative to <target>. That is, the posterior distribution of each hypothesis (each state of the target variable) is displayed for each possible value of the indicated variable. • Example: ph -whatif model.net MDWASHDC case.hcs TXAUSTIN Page 49
Help ph -help • This command displays a simple help message. Page 50
Create Weather Data File weather2dat <output> <input> … • This command will create a HUGIN data file from the input files specified as arguments. • Example: weather2dat model.dat MDWASHDC.txt MDBALTIM.txt LAVIENTIN.txt Page 51
Pull a Web Page pull <url> <output> • This command will save the web page specified by <url> to a file named <output> • Example: pull http://www.hugin.com hugin.html Page 52
Parse Structured HTML struct2dat <html> [<output>] • This command will parse <html> and output the content of any tables specified in <html>. • The content is either output to standard output or stored in a file named <output> • Example: struct2dat page.html model.dat Page 53
Parse Unstructured HTML ustruct2dat <html> [<output>] • This command will parse <html>, removing all HTML tags • The content is either output to standard output or stored in a file named <output> • Example: ustruct2dat page.html model.dat Page 54
Classification to Data File class2dat <model> <target> <classification> <output> • This command will create a HUGIN data set based on the variables stored in <model> and the data stored in <classification>. • The resulting data set will be stored in a file named <output> • Example: class2dat variables.net classification.txt model.dat Page 55
Feature Selection • The identification of predictor variables proceeds by statistical tests for independence • We test the strength of the dependence between the classification variable and each potential predictor variable • Chi-square test between class variable X and predictor Y: • Hypothesis: X and Y are independent (Y is not a predictor) • Compute the test statistic • If the statistic is sufficiently small, the hypothesis is not rejected (⇒ Y is not included in the model) • The hypothesis is rejected if the probability of obtaining a larger statistic is less than 5% (significance level) Page 56
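The chi-square statistic referred to above can be computed from a contingency table of (class, word-present) counts. A sketch with invented counts (this is not the class2net implementation; comparing the statistic to the chi-square distribution at the 5% level is the remaining step):

```python
# Pearson's chi-square statistic for independence between a class
# variable X (rows) and a candidate predictor Y (columns).

def chi_square(table):
    """table[i][j] = count of (X = i, Y = j); returns the test statistic."""
    rows = [sum(r) for r in table]             # row marginals n(X = i)
    cols = [sum(c) for c in zip(*table)]       # column marginals n(Y = j)
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total           # expected count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Invented counts: rows = class in/out, columns = word present/absent.
print(chi_square([[30, 10], [10, 30]]))
```

A large statistic means the observed counts deviate strongly from the independence assumption, so Y is kept as a predictor.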
Identify Predictor Variables class2net <target> <classification> <output> • This command will identify a set of predictor variables based on the data stored in <classification>. • The resulting set of predictor variables is saved to a HUGIN network file named <output>. • Each variable is Boolean indicating whether or not the word represented by the variable is present. • The predictor variables are identified based on statistical tests for pair-wise independence between <target> and each potential predictor variable (currently the significance level is 5%). • Example: class2net classification.txt variables.net Page 57
Parse Unstructured HTML ustruct2hcs <model> <target> <unstruct> … • This command will parse a sequence of unstructured HTML files, creating one HUGIN case file for each HTML file • The command identifies whether or not each of the variables (except <target>) in the model stored in <model> is present in the HTML file • The case file is either output to standard output or stored in a file named <output> • Example: ustruct2hcs boolnbm.net class page1.html page2.html Page 58
Extract Case From Data File dat2hcs <model> <target> <data> <index> [-verbose] • This command will save the case with <index> in the data file named <data> to a file named "case_" + <index> + ".hcs" • Example: dat2hcs nbm.net MDWASHDC model.dat 2 Page 59
Contact Information www.poulinhugin.com Anders L. Madsen Hugin Expert A/S Gasværksvej 5 9000 Aalborg Denmark www.hugin.com Phone: +45 96 55 07 90 Fax: +45 96 55 07 99 Chris Poulin Poulin Holdings LLC P.O. Box 969 Portsmouth, NH 03802 US www.poulinholdings.com Phone: +1 617 755 9049 Fax: +1 207 351 2509 Page 60