COMP 578 Data Warehouse Architecture And Design
Keith C. C. Chan
Department of Computing, The Hong Kong Polytechnic University
Winter 2001

An Overview
• The basic architectures used most often with data warehouses are:
  – First, a generic two-level physical architecture for entry-level data warehouses;
  – Second, an expanded three-level architecture that is increasingly used in more complex environments;
  – Third, the three-level data architecture that is associated with the three-level physical architecture.

A Generic Two-Level Architecture
(Diagram: data from operational DBs and other sources, together with metadata, flows through Extract, Transform, Load, and Refresh into the Data Warehouse, which serves end users' decision support applications.)

A Generic Two-Level Architecture
• 4 basic steps to building this architecture:
  – Data are extracted from the various source system files and databases.
  – Data from the various source systems are transformed and integrated before being loaded into the DW.
  – The DW contains both detailed and summary data and is a read-only DB organized for decision support.
  – Users access the DW by means of a variety of query languages and analytical tools.

A Generic Three-Level Architecture
(Diagram: data from operational DBs and other sources flows through Extract, Transform, Load, and Refresh, under a Monitor & Integrator, into the Data Warehouse; Selection & Aggregation then fill the Data Marts, which users access through an OLAP server; metadata supports the whole pipeline.)

A Generic Three-Level Architecture
• The two-level architecture represents the earliest DW applications but is still widely used today.
• It works well in SMEs with a limited number of HW and SW platforms and a relatively homogeneous computing environment.
• Larger companies, however, face problems in maintaining data quality and managing the data extraction process.
• The three-level architecture comprises:
  – Operational systems and data
  – Enterprise data warehouse
  – Data marts

Enterprise data warehouse
• The EDW is a centralized, integrated DW.
  – It serves as a control point for assuring the quality and integrity of data before making it available.
  – It provides a historical record of the business for time-sensitive data.
  – It is the single source of all data but is not typically accessed directly by end users; it is too large and complex for users to navigate.
• Users access data derived from the data warehouse and stored in data marts.
• Users may access the data indirectly through the process of drill-down.

Data Mart
• It is a DW that is limited in scope.
• Its contents are obtained by selecting and summarizing data from the enterprise DW.
• Each data mart is customized for the decision-support applications of a particular end-user group.
• An organization may have a marketing data mart, a finance data mart, and so on.

A Three Layer Data Architecture
• Corresponds to the three-layer DW architecture.
• Operational data are stored in the various operational systems throughout the organization.
• Reconciled data are the type of data stored in the enterprise DW.
• Derived data are the data stored in each of the data marts.

A Three Layer Data Architecture (2)
• Reconciled data:
  – Detailed, historical data intended to be the single, authoritative source for all decision support applications.
  – Not intended to be accessed directly by end users.
• Derived data:
  – Data that have been selected, formatted, and aggregated for end-user decision support applications.
• Two components play critical roles in the data architecture: the enterprise data model and meta-data.

Role of Enterprise Data Model
• The reconciled data layer is linked to the EDM.
• The EDM represents a total picture of the data required by an organization.
• If the reconciled data layer is to be the single, authoritative source for all data required for decision support, it must conform to the design specified in the EDM.
• An organization needs to develop an EDM before it can design a DW, to ensure that the DW meets user needs.

Role of Meta-data
• Operational meta-data
  – Typically exist in a number of different formats and are often of poor quality.
• EDW meta-data
  – Derived from (or at least consistent with) the enterprise data model.
  – Describe the reconciled data layer as well as the rules for transforming operational data to reconciled data.
• Data mart meta-data
  – Describe the derived data layer and the rules for transforming reconciled data to derived data.

Data Characteristics

Example of DBMS Log Entry
(Figure: a DBMS log entry showing the before image and after image of a bank account record.)

Status vs. Event Data
• Consider a log entry recorded by a DBMS when processing a business transaction for a banking application.
  – The before image and after image represent the status of the bank account before and after the withdrawal.
  – A transaction is a business activity that causes one or more business events to occur at a database level.
  – An event is a database action (create, update, or delete) that results from a transaction.
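
As a sketch only, such a log entry can be pictured as two rows in a relational table; the table and column names below are hypothetical, chosen for illustration:

    -- Hypothetical layout of a DBMS log entry: the before image and
    -- after image record the account status on either side of the event.
    CREATE TABLE ACCOUNT_LOG (
      Log_id      INTEGER,        -- sequence number of the log entry
      Image_type  CHAR(6),        -- 'BEFORE' or 'AFTER'
      Account_no  CHAR(10),
      Balance     DECIMAL(12,2),
      Create_date DATE            -- date the event was logged
    );

    -- A $100 withdrawal (one update event) produces both images:
    INSERT INTO ACCOUNT_LOG VALUES (1001, 'BEFORE', 'A-12345', 500.00, DATE '2000-12-31');
    INSERT INTO ACCOUNT_LOG VALUES (1001, 'AFTER',  'A-12345', 400.00, DATE '2000-12-31');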

Status vs. Event Data (2)
• Both types of data can be stored in a DB.
• Most of the data stored in a DW are status data.
• Event data may be stored in a DB for a short time but are then deleted or archived to save storage space.
• Both status and event data are typically stored in database logs for backup and recovery.
• The DB log plays an important role in filling the DW.

Transient vs. Periodic data
• In a DW, it is necessary to maintain a record of when events occurred in the past.
  – E.g., to compare sales on the same date or during the same period in different years.
• Most operational systems are based on the use of transient data.
• Transient data are data in which changes to existing records are written over previous records.

Transient vs. Periodic data (2)
• Records are deleted without preserving the previous contents of those records.
• In a DB log, both images are normally preserved.
• Periodic data are data that are never physically altered or deleted once added to the store.
• Each record contains a timestamp that indicates the date (and time if needed) when the most recent update event occurred.
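
The contrast is easy to see in SQL; the CUSTOMER and CUSTOMER_HISTORY tables below are hypothetical:

    -- Transient data: the change overwrites the previous record contents.
    UPDATE CUSTOMER
       SET Address = '12 New Street'
     WHERE Cust_id = 'C001';

    -- Periodic data: a new timestamped record is appended; the previous
    -- record is never physically altered or deleted.
    INSERT INTO CUSTOMER_HISTORY (Cust_id, Address, Effective_date)
    VALUES ('C001', '12 New Street', DATE '2000-12-31');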

Transient Operational Data
(Figure: example of transient operational data, in which updates overwrite previous record contents.)

The Reconciled Data Layer

An Overview
• We use the term reconciled data to refer to the data layer associated with the EDW.
• The term was used by IBM in a 1993 paper describing DW architectures.
• It describes the nature of the data that should appear in the EDW and the way they are derived.

Examples of Heterogeneous Data
(Figure: examples of heterogeneous data drawn from different operational systems.)

Characteristics of reconciled data
• Intended to provide a single, authoritative source for data that support decision making.
• Ideally normalized, this data layer is detailed, historical, comprehensive, and quality-controlled.
  – Detailed.
    • Rather than summarized.
    • Provides maximum flexibility for various user communities to structure the data to best suit their needs.
  – Historical.
    • The data are periodic (or point-in-time) to provide a historical perspective.

Characteristics of reconciled data (2)
  – Normalized.
    • Third normal form or higher.
    • Normalized data provide greater integrity and flexibility of use.
    • De-normalization is not necessary to improve performance, since reconciled data are usually accessed periodically using batch processes.
  – Comprehensive.
    • Reconciled data reflect an enterprise-wide perspective, whose design conforms to the enterprise data model.
  – Quality controlled.
    • Reconciled data must be of unquestioned quality and integrity, since they are summarized into the data marts and used for decision making.

Characteristics of reconciled data (3)
• The characteristics of reconciled data are quite different from those of the typical operational data from which they are derived.
• Operational data are typically detailed, but they differ strongly in the other four dimensions:
  – Operational data are transient, rather than historical.
  – Operational data may have never been normalized or may have been denormalized for performance reasons.
  – Rather than being comprehensive, operational data are generally restricted in scope to a particular application.
  – Operational data are often of poor quality, with numerous types of inconsistencies and errors.

The Data Reconciliation Process
• The process is responsible for transforming operational data to reconciled data.
• Because of the sharp differences between the two, this is the most difficult and technically challenging part of building a DW.
• Sophisticated software products are required to assist with this activity.

The Data Reconciliation Process (2)
• Data reconciliation occurs in two stages during the process of filling the EDW:
  – An initial load when the EDW is first created.
  – Subsequent updates to keep the EDW current and/or to expand it.
• Data reconciliation can be visualized as a process consisting of four steps: capture, scrub, transform, and load and index.
• These steps can be combined.

Steps in Data Reconciliation
(Figure: the capture, scrub, transform, and load and index steps.)

Data Capture
• Extracting the relevant data from the source files and DBs to fill the EDW is called capture.
• Often only a subset of the source data is captured, based on an extensive analysis of both the source and target systems.
• Capture is best performed by a team directed by data administration and composed of both end users and data warehouse professionals.
• Two generic types of data capture are static capture and incremental capture.

Static & Incremental Capture
• Static capture is used to fill the DW initially.
  – Captures a snapshot of the required source data at a point in time.
• Incremental capture is used for ongoing warehouse maintenance.
  – Captures only the changes that have occurred in the source data since the last capture.
  – The most common method is log capture. The DB log contains after images that record the most recent changes to database records. Only after images that are logged after the last capture are selected from the log.
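
A minimal sketch of log capture in SQL, assuming the after images are held in a table named ACCOUNT_HISTORY (the same table used in the selection example later) and that the last capture ran on 2000-12-31:

    -- Incremental capture: select only the after images logged
    -- since the last capture.
    SELECT *
      FROM ACCOUNT_HISTORY
     WHERE Create_date > DATE '2000-12-31';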

Data Cleansing
• Data in operational systems are often of poor quality.
• Typical errors and inconsistencies:
  – Misspelled names and addresses.
  – Impossible or erroneous dates of birth.
  – Fields used for purposes for which they were never intended.
  – Mismatched addresses and area codes.
  – Missing data.

Data Cleansing (2)
• The quality of the source data is improved through a technique called data scrubbing (or data cleansing).
• Data cleansing uses pattern recognition and other AI techniques to upgrade the quality of raw data before transforming and moving the data to the data warehouse.
• TQM focuses on defect prevention, rather than defect correction.
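
Simple scrubbing rules can even be written in SQL; the sketch below standardizes inconsistent gender codes in a hypothetical CUSTOMER table (production cleansing tools apply far more sophisticated pattern recognition):

    -- Standardize inconsistent codes before transformation; values not
    -- covered by a rule are flagged for manual review.
    UPDATE CUSTOMER
       SET Gender = CASE
                      WHEN UPPER(Gender) IN ('M', 'MALE')   THEN 'M'
                      WHEN UPPER(Gender) IN ('F', 'FEMALE') THEN 'F'
                      ELSE 'U'  -- unknown: route to manual review
                    END;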

Loading and Indexing
• The last step in filling the EDW is to load the selected data into the target data warehouse and to create the necessary indexes.
• The two basic modes for loading data to the target EDW are refresh and update.

Loading in Refresh mode
• Fills a DW by employing bulk rewriting of the target data at periodic intervals.
• The target data are written initially to fill the warehouse.
• Then, at periodic intervals, the warehouse is rewritten, replacing the previous contents.
• Refresh mode is generally used to fill the warehouse when it is first created.
• Refresh mode is used in conjunction with static data capture.
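
In SQL terms, refresh mode amounts to periodically rewriting the target table in bulk; the table names here are hypothetical:

    -- Refresh mode: replace the previous contents of the warehouse
    -- table with a fresh static snapshot of the source.
    DELETE FROM DW_ACCOUNT;

    INSERT INTO DW_ACCOUNT (Account_no, Balance, Snapshot_date)
    SELECT Account_no, Balance, CURRENT_DATE
      FROM ACCOUNT;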

Loading in Update mode
• Only changes in the source data are written to the data warehouse.
• To support the periodic nature of warehouse data, these new records are usually written to the DW without overwriting or deleting previous records.
• It is generally used for ongoing maintenance of the target warehouse.
• It is used in conjunction with incremental data capture.
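
Update mode, by contrast, appends only the captured changes and preserves the existing warehouse records; again a hypothetical sketch:

    -- Update mode: append the changes captured since the last load;
    -- previous warehouse records are neither overwritten nor deleted.
    INSERT INTO DW_ACCOUNT (Account_no, Balance, Snapshot_date)
    SELECT Account_no, Balance, Create_date
      FROM ACCOUNT_HISTORY
     WHERE Create_date > DATE '2000-12-31';  -- date of the last load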

Indexing
• With both modes, it is necessary to create and maintain the indexes that are used to manage the warehouse data.
• A type of indexing called bit-mapped indexing is often used in a data warehouse environment.
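
As one concrete example of the syntax (Oracle-style; other DBMSs expose the feature differently), bit-mapped indexes are typically created on low-cardinality columns, such as the State column of a Store dimension:

    -- Oracle-style bitmap index on a low-cardinality dimension column.
    CREATE BITMAP INDEX store_state_idx
        ON STORE (State);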

Data Transformation
• The most important step in the data reconciliation process.
• Converts data from the format of the source operational systems to the format of the EDW.
• Accepts data from the data capture component (after data scrubbing), maps the data to the format of the reconciled data layer, and then passes them to the load and index component.

Data Transformation (2)
• Data transformation may range from a simple change in data format or representation to a highly complex exercise in data integration.
• Three examples that illustrate this range:
  – A salesperson requires a download of customer data from a mainframe DB to her laptop computer.
  – A manufacturing company has product data stored in three different systems: a manufacturing system, a marketing system, and an engineering application.
  – A large health care organization manages a geographically dispersed group of hospitals, clinics, and health care centers.

Data Transformation (3)
• Case 1:
  – Mapping data from EBCDIC to ASCII.
  – Performed with commercial off-the-shelf software.
• Case 2:
  – The company needs to develop a consolidated view of these product data.
  – Data transformation involves several different functions, including resolving different key structures, converting to a common set of codes, and integrating data from different sources.
  – These functions are quite straightforward, and most of the necessary software can be generated using a standard commercial software package with a graphical interface.

Data Transformation (4)
• Case 3:
  – Because many of the units have been acquired over time through acquisitions, the data are heterogeneous and uncoordinated.
  – For a number of important reasons, the organization needs to develop a DW to provide a single corporate view of the enterprise.
  – This effort will require the full range of transformation functions described below, including some customized software development.

Data Transformation (5)
• It is important to understand the distinction between the functions of data scrubbing and of data transformation.
• The goal of data scrubbing is to correct errors in data values in the source data, whereas the goal of data transformation is to convert the data format from the source to the target system.
• It is essential to scrub the data before they are transformed; otherwise, any errors in the data will remain after transformation.

Data transformation functions
• Classified into two categories:
  – Record-level functions.
  – Field-level functions.
• In most DW applications, a combination of some or even all of these functions is required.

Record-level functions
• Operate on a set of records, such as a file or table.
• The most important are selection, joining, normalization, and aggregation.

Selection
• The process of partitioning data according to predefined criteria.
• Used to extract the relevant data from the source systems that will be used to fill the DW.
• When the source data are relational, SQL SELECT statements can be used for selection.
• Suppose that the after images for the banking application are stored in a table named ACCOUNT_HISTORY; those created since the last capture are selected with:

    SELECT *
      FROM ACCOUNT_HISTORY
     WHERE Create_date > DATE '2000-12-31';

Joining
• Joining combines data from various sources into a single table or view.
• Joining data allows consolidation of data from various sources.
• E.g., an insurance company may have client data spread throughout several different files and databases.
• When the source data are relational, SQL statements can be used to perform a join operation, as in the sketch below.
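
A sketch of such a consolidation join, with hypothetical table and column names, and assuming the client numbers in the two sources have already been reconciled:

    -- Consolidate client data held in two source systems.
    SELECT p.Client_no, p.Client_name, p.Address,
           c.Claim_no, c.Claim_amount
      FROM POLICY_CLIENT p
      JOIN CLAIM_CLIENT  c ON p.Client_no = c.Client_no;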

Joining (2)
• Joining is often complicated by factors such as:
  – The source data are not relational, in which case SQL statements cannot be used and procedural language statements must be coded.
  – Even for relational data, primary keys for the tables to be joined must be reconciled before a SQL join can be performed.
  – Source data may contain errors, which makes join operations hazardous.

Normalization
• It is the process of decomposing relations with anomalies to produce smaller, well-structured relations.
• As indicated earlier, source data in operational systems are often denormalized (or simply not normalized).
• The data must therefore be normalized as part of data transformation.
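
A minimal illustration with hypothetical names: a denormalized source table ORDER_FLAT, which repeats customer details on every order row, is decomposed into two well-structured relations during transformation:

    -- Decompose the denormalized source into CUSTOMER and ORDERS.
    INSERT INTO CUSTOMER (Cust_id, Cust_name, Cust_city)
    SELECT DISTINCT Cust_id, Cust_name, Cust_city
      FROM ORDER_FLAT;

    INSERT INTO ORDERS (Order_no, Order_date, Cust_id)
    SELECT Order_no, Order_date, Cust_id
      FROM ORDER_FLAT;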

Aggregation
• The process of transforming data from a detailed level to a summary level.
• For example, in a retail business, individual sales transactions can be summarized to produce total sales by store, product, date, and so on.
• Since the EDW contains only detailed data, aggregation is not normally associated with this component.
• Aggregation is an important function in filling the data marts, as explained below.
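
In SQL, aggregation is a GROUP BY over the detailed data; the retail example above might be sketched as follows, with hypothetical table and column names:

    -- Summarize individual sales transactions into total sales
    -- by store, product, and date.
    SELECT Store_id, Product_no, Sales_date,
           SUM(Dollar_sales) AS Total_dollar_sales
      FROM SALES_TRANSACTION
     GROUP BY Store_id, Product_no, Sales_date;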

Field-level functions
• A field-level function converts data from a given format in a source record to a different format in a target record.
• Field-level functions are of two types: single-field and multi-field.
• A single-field transformation converts data from a single source field to a single target field.
• An example of a single-field transformation is converting a measurement from imperial to metric representation.

Single Field-level functions
• Two basic methods for single-field transformation:
  – An algorithmic transformation is performed using a formula or logical expression.
    • E.g., a conversion from Fahrenheit to Celsius temperature using a formula.
  – When a simple algorithm does not apply, a lookup table can be used instead.
    • E.g., a table that converts state codes to state names (this type of conversion is common in DW applications).
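
Both methods are easy to express in SQL; the table and column names below are illustrative only:

    -- Algorithmic transformation: Fahrenheit to Celsius by formula.
    SELECT Reading_id,
           (Temp_f - 32) * 5.0 / 9.0 AS Temp_c
      FROM SOURCE_READING;

    -- Lookup-table transformation: state code to state name.
    SELECT s.Cust_id, t.State_name
      FROM SOURCE_CUSTOMER s
      JOIN STATE_LOOKUP    t ON s.State_code = t.State_code;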

Multi-Field-level functions
• A multi-field transformation converts data from one or more source fields to one or more target fields.
• Two multi-field transformations:
  – Many-to-one transformation.
    • A combination of employee name and telephone number is used as the primary key in the source.
    • In creating a target record, the combination is mapped to a unique employee identification (EMP_ID).
    • A lookup table would be created to support this transformation.

Multi-Field-level functions (2)
  – One-to-many transformation.
    • In the source record, a product code has been used to encode the combination of brand name and product name.
    • In the target record, it is desired to display the full text describing product and brand names.
    • A lookup table would be employed for this purpose.
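
Both multi-field cases reduce to lookup-table joins; a hypothetical sketch of each:

    -- Many-to-one: map (employee name, telephone) in the source
    -- to a unique EMP_ID in the target via a lookup table.
    SELECT l.Emp_id, s.Emp_name, s.Telephone
      FROM SOURCE_EMPLOYEE s
      JOIN EMP_LOOKUP      l
        ON s.Emp_name  = l.Emp_name
       AND s.Telephone = l.Telephone;

    -- One-to-many: expand a product code into brand name and
    -- product name for display in the target record.
    SELECT s.Order_no, p.Brand_name, p.Product_name
      FROM SOURCE_ORDER   s
      JOIN PRODUCT_LOOKUP p ON s.Product_code = p.Product_code;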

Tools to Support Data Reconciliation
• Data reconciliation is an extremely complex and challenging process.
• A variety of closely integrated software applications must be developed (or acquired) to support this process.
• A number of powerful software tools are available to assist organizations in developing these applications: data quality, data conversion, and data cleansing tools.

Data quality tools
• Used to assess data quality in existing systems and compare it to DW requirements.
• Used during the early stages of DW development.
• Examples:
  – Analyse (QDB Solutions, Inc.)
    • Assesses data quality and related business rules.
    • Makes recommendations for cleansing and organizing data prior to extraction and transformation.
  – WizRules (WizSoft, Inc.)
    • A rules discovery tool that searches through records in existing tables and discovers the rules associated with the data.
    • The product identifies records that deviate from the established rules.

Data conversion tools
• Perform: extract, transform, load, and index.
• Examples:
  – Extract (Evolutionary Technologies International)
  – InfoSuite (Platinum Technology, Inc.)
  – Passport (Carleton Corp.)
  – Prism (Prism Solutions, Inc.)
• These are program-generation tools.
  – They accept as input the schemas of the source and target files, plus the business rules used for data transformation (e.g., formulas, algorithms, and lookup tables).
  – The tools generate the program code necessary to perform the transformation functions on an ongoing basis.

Data cleansing tools
• One tool: Integrity (Vality Technology Inc.).
• Designed to perform:
  – Data quality analysis.
  – Data cleansing.
  – Data reengineering.
    • Discovering business rules and relationships among entities.

The Derived Data Layer
• The data layer associated with data marts.
• The layer with which users normally interact for their decision-support applications.
• The issues:
  – What are the characteristics of the derived data layer?
  – How is it derived from the reconciled data layer?
• We also introduce the star schema (or dimensional model):
  – The data model most commonly used today to implement this data layer.

Characteristics of Derived Data
• The source of derived data is the reconciled data.
• Selected, formatted, and aggregated for end-user decision support applications.
• Generally optimized for particular user groups.
  – E.g., departments, work groups, or individuals.
• A common mode of operation:
  – Select relevant data from the enterprise DW on a daily basis.
  – Format and aggregate those data as needed.
  – Load and index those data in the target data marts.

Characteristics of Derived Data (2)
• Objectives sought with derived data:
  – Provide ease of use for decision support applications.
  – Provide fast response for user queries.
  – Customize data for particular target user groups.
  – Support ad-hoc queries and data mining applications.
• To satisfy these needs, we usually find the following characteristics in derived data:
  – Both detailed data and aggregate data are present.
    • Detailed data are often (but not always) periodic; that is, they provide a historical record.
    • Aggregate data are formatted to respond quickly to predetermined (or common) queries.

Characteristics of Derived Data (3)
  – Data are distributed to departmental servers.
  – The data model most commonly used is a relational-like model, the star schema.
  – Proprietary models are also sometimes used.

The Star Schema
• A star schema is a simple database design:
  – Particularly suited to ad-hoc queries.
  – Dimensional data (describing how data are commonly aggregated).
  – Fact or event data (describing individual business transactions).
  – Another name used is the dimensional model.
  – Not suited to on-line transaction processing, and therefore not generally used in operational systems.

Components of The Star Schema
(Figure: a central fact table surrounded by dimension tables.)

Fact Tables and Dimension Tables
• A star schema consists of two types of tables:
  – Fact tables
    • Contain factual or quantitative data about a business.
    • E.g., units sold, orders booked, and so on.
  – Dimension tables
    • Hold descriptive data about the business.
    • E.g., Product, Customer, and Period.
• The simplest star schema consists of one fact table surrounded by several dimension tables.

Fact Tables and Dimension Tables (2)
  – Each dimension table has a one-to-many relationship to the central fact table.
  – Each dimension table generally has a simple primary key, as well as several non-key attributes; the primary key is a foreign key in the fact table.
  – The primary key of the fact table is a composite key consisting of the concatenation of all the foreign keys.
  – The relationship between each dimension table and the fact table:
    • Provides a join path that allows users to query the database easily.
    • Queries take the form of SQL statements, for either predefined or ad-hoc queries.

Fact Tables and Dimension Tables (3)
• The star schema is not a new model.
• It is a particular implementation of the relational data model.
• The fact table plays the role of an associative entity that links the instances of the various dimensions.

Multiple Fact Tables
• For performance or other reasons, more than one fact table may be defined in a given star schema.
  – E.g., various users require different levels of aggregation (i.e., a different table grain).
• Performance can be improved by defining a different fact table for each level of aggregation.
• Data mart designers must decide whether the increased storage requirements are justified by the prospective performance improvement.

Example of Star Schema
(Diagram: a Sales fact table with foreign keys Date, Product, Store, and Customer and measurements unit_sales, dollar_sales, and yen_sales, joined to four dimension tables: Date (Date, Day, Month, Year), Store (StoreID, City, State, Country, Region), Product (ProductNo, ProdName, ProdDesc, Category, QOH), and Customer (CustId, CustName, CustCity, CustCountry).)
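
Declared in SQL, the schema above might look as follows; the column types are assumptions, and the DATE_DIM, PRODUCT, and CUSTOMER dimension tables are assumed to be declared along the same lines as STORE:

    -- One dimension table: simple primary key plus descriptive attributes.
    CREATE TABLE STORE (
      Store_id CHAR(6) PRIMARY KEY,
      City     VARCHAR(30),
      State    VARCHAR(30),
      Country  VARCHAR(30),
      Region   VARCHAR(30)
    );

    -- Fact table: each dimension key is a foreign key, and the primary
    -- key is the concatenation of all the foreign keys.
    CREATE TABLE SALES_FACT (
      Sales_date   DATE     REFERENCES DATE_DIM (Sales_date),
      Product_no   CHAR(10) REFERENCES PRODUCT (Product_no),
      Store_id     CHAR(6)  REFERENCES STORE (Store_id),
      Cust_id      CHAR(10) REFERENCES CUSTOMER (Cust_id),
      Unit_sales   INTEGER,
      Dollar_sales DECIMAL(12,2),
      Yen_sales    DECIMAL(12,2),
      PRIMARY KEY (Sales_date, Product_no, Store_id, Cust_id)
    );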

Example of Star Schema with Data
(Figure: the star schema above populated with sample data.)

Example of Snowflake Schema
(Diagram: the same Sales fact table, but with the Date dimension normalized into Date, Month, and Year tables and the Store dimension normalized into Store, City, State, and Country/Region tables; the Product and Customer dimensions are unchanged.)

Star Schema with Two Fact Tables
(Figure: a star schema in which two fact tables share the dimension tables.)

Snowflake Schema
• Sometimes a dimension in a star schema forms a natural hierarchy.
  – E.g., a dimension named Market has a geographic hierarchy:
    • Several markets within a state.
    • Several states within a region.
    • Several regions within a country.

Snowflake Schema (2)
• When a dimension participates in a hierarchy, the designer has two basic choices:
  – Include all of the information for the hierarchy in a single table.
    • I.e., the table is de-normalized (not in third normal form).
  – Normalize the tables.
    • This results in an expanded schema: the snowflake schema.
• A snowflake schema is an expanded version of a star schema in which all of the tables are fully normalized.
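
Normalizing the Store dimension from the earlier star schema example yields the snowflake form sketched below; the names and types are illustrative:

    -- Snowflake form: the geographic hierarchy inside the Store
    -- dimension is decomposed into normalized tables.
    CREATE TABLE STATE (
      State   VARCHAR(30) PRIMARY KEY,
      Country VARCHAR(30),
      Region  VARCHAR(30)
    );

    CREATE TABLE CITY (
      City  VARCHAR(30) PRIMARY KEY,
      State VARCHAR(30) REFERENCES STATE (State)
    );

    CREATE TABLE STORE (
      Store_id CHAR(6)     PRIMARY KEY,
      City     VARCHAR(30) REFERENCES CITY (City)
    );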

Snowflake Schema (3)
• Should dimensions with hierarchies be decomposed?
  – The advantages are reduced storage space and improved flexibility.
  – The major disadvantage is poor performance, particularly when browsing the database.

Proprietary Databases
• A number of proprietary multidimensional databases exist.
  – Data are first summarized or aggregated.
  – They are then stored in the multidimensional database for predetermined queries or other analytical operations.
  – Multidimensional databases usually store data in some form of array, similar to that used in the star schema.
  – The advantage is very fast performance when used for the type of queries for which the database was optimized.
  – The disadvantage is that such a database is limited in the size it can accommodate, and it is not as flexible as a star schema.

Independent vs. Dependent Data Marts
• A dependent data mart is one filled exclusively from the EDW and its reconciled data layer.
• Is it possible or desirable to build data marts independent of an enterprise DW?
• An independent data mart is one filled with data extracted from the operational environment, without the benefit of a reconciled data layer.

Independent vs. Dependent Data Marts (2)
• Independent data marts may have an initial appeal but suffer from:
  – No assurance that data from different data marts are semantically consistent, since each set is derived from different sources.
  – No capability to drill down to greater detail in the EDW.
  – Increased data redundancy, because the same data are often stored in the various data marts.
  – Lack of integration of data from an enterprise perspective, since that is the responsibility of the EDW.
  – Cross-platform joins that may be required to create an independent data mart and are difficult to perform.
  – Different user requirements for the currency of data in data marts, which might lower the quality of the data that is made available.