ECE 453 CS 447 SE 465 Software Testing

ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance
Instructor: Kostas Kontogiannis

Overview
• System Testing
  – General - Introduction
  – Threads
  – Basic Concepts for Requirements Specification
  – Finding Threads
  – Structural Strategies for Thread Testing
  – Functional Strategies for Thread Testing
  – System Testing Guidelines
Ref: “Software Testing: A Craftsman's Approach”, 2nd edition, Paul C. Jorgensen

State Testing Impact on Faults
• Can be used to reveal faults that can manifest themselves as:
  – Wrong number of states
  – Wrong transition for a given state-input combination
  – Wrong output for a given transition
  – Pairs of states or sets of states that are made equivalent
  – States or sets of states that have become dead
  – States or sets of states that have become unreachable

State Testing Process
• Identify “interesting” states and flag them as “initial” states
• Define a set of covering input sequences that, starting from an initial state, return to that initial state
• For each step in each input sequence, define the expected next state, the expected transition, and the expected output
• A set of tests then consists of three sets of sequences:
  – Input sequences
  – Corresponding transitions or next-state names or IDs
  – Output sequences
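
As a concrete illustration of this process, the sketch below drives a small, hypothetical Mealy-style FSM (the transition table, state names, and event names are all invented for the example) with one covering input sequence and checks the expected next state and output at each step.

```python
# Minimal sketch of the state testing process above. The FSM is a hypothetical
# transition table mapping (state, input) -> (next_state, output).
transitions = {
    ("IDLE", "insert_card"): ("WAIT_PIN", "prompt_pin"),
    ("WAIT_PIN", "valid_pin"): ("MENU", "show_menu"),
    ("MENU", "cancel"): ("IDLE", "eject_card"),
}

def run_sequence(initial_state, inputs, expected_states, expected_outputs):
    """Drive the FSM with one covering input sequence and compare each
    observed next state and output against the expected ones."""
    state = initial_state
    for inp, exp_state, exp_out in zip(inputs, expected_states, expected_outputs):
        state, output = transitions[(state, inp)]
        assert state == exp_state, f"wrong next state after {inp}: {state}"
        assert output == exp_out, f"wrong output after {inp}: {output}"
    assert state == initial_state, "sequence did not return to the initial state"

# One test case = three parallel sequences, as described on the slide.
run_sequence(
    "IDLE",
    inputs=["insert_card", "valid_pin", "cancel"],
    expected_states=["WAIT_PIN", "MENU", "IDLE"],
    expected_outputs=["prompt_pin", "show_menu", "eject_card"],
)
print("covering sequence passed")
```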

State Testing Application Scenarios
1. Any processing where the output is based on the occurrence of one or more sequences of events
2. Most protocols: between systems, between humans and machines, between components of a system
3. Device drivers with complicated retry and recovery procedures, if the action depends on the state
4. Long-running transaction flows
5. High-level control functions within an operating system
6. The behavior of a system with respect to resource management or utilization levels
7. A set of menus and the ways one can go from one to another
8. Whenever a feature is directly implemented as one or more state-transition tables

Functional Strategies for Thread Testing
• If no behavioral model [i.e., FSMs] exists for a system, then there are two choices:
  – develop a behavioral model, or
  – resort to system-level analogs of functional testing.
• To identify functional test cases, we used information from the input and output domains [spaces] as well as the function itself.
• Functional threads are described in terms of coverage metrics that are derived from:
  – events, ports, and data.

Event-Based Thread Testing
• Considering the space of port input events, the following port input thread coverage metrics are of interest to attain levels of system testing (see the sketch below):
  – PI 1: each port input event occurs
  – PI 2: common sequences of port input events occur
  – PI 3: each port input event occurs in every “relevant” data context
  – PI 4: for a given context, all “inappropriate” input events occur
  – PI 5: for a given context, all possible input events occur
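
As a rough, hypothetical illustration of checking the first two metrics, the sketch below models each system thread as a list of port input events (all event names and sequences are invented) and tests whether PI 1 and PI 2 are satisfied.

```python
# Sketch of checking PI 1 and PI 2 over a hypothetical set of system threads,
# each modelled as a list of port input events.
threads = [
    ["insert_card", "valid_pin", "withdraw", "take_cash"],
    ["insert_card", "invalid_pin", "valid_pin", "balance"],
    ["insert_card", "cancel"],
]

all_port_inputs = {"insert_card", "valid_pin", "invalid_pin",
                   "withdraw", "take_cash", "balance", "cancel"}

# Common (expected-use) sequences a tester would want to see exercised.
common_sequences = [
    ("insert_card", "valid_pin"),   # normal login
    ("invalid_pin", "valid_pin"),   # retry after a bad PIN
]

covered_inputs = {event for thread in threads for event in thread}
pi1_ok = all_port_inputs <= covered_inputs        # PI 1: every input event occurs

def contains(thread, seq):
    """True if the sequence appears contiguously somewhere in the thread."""
    return any(tuple(thread[i:i + len(seq)]) == seq
               for i in range(len(thread) - len(seq) + 1))

pi2_ok = all(any(contains(t, seq) for t in threads)  # PI 2: common sequences occur
             for seq in common_sequences)

print("PI 1 satisfied:", pi1_ok)
print("PI 2 satisfied:", pi2_ok)
```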

Event-Based Thread Testing - Notes
• PI 1 is the bare minimum and is adequate for most systems.
• PI 2 is the most common and corresponds to the intuitive view of the system, since it deals with normal or expected use - hard to quantify.
• A view of a ‘context’ [for PI 3 - PI 5] is that of event quiescence.
  – For example, PI 3 deals with context-sensitive port input events:
    • physical events that have logical meanings determined by the context within which they occur.
  – PI 3 is driven by an event in all of its contexts.

Event-Based Thread Testing - Notes
• PI 4 and PI 5 both start with a context and then seek a variety of events.
  – PI 4 is often used by a tester in an attempt to “break” a system: supply inappropriate inputs just to see what happens.
    • This is partially a specification problem - the difference between prescribed behavior [things that should happen] and proscribed behavior [things that should not happen].
  – Most requirements specifications have difficulty with proscribed behavior;
    • it is usually the testers who find it.

Coverage Metrics for Port Output Events
• PO 1: each port output event occurs.
  – PO 1 is an acceptable minimum, especially when a system has a rich variety of output messages for error conditions.
• PO 2: each port output event occurs for each cause (see the sketch below).
  – PO 2 is good but hard to quantify.
    • Basically it refers to threads that interact with respect to a port output event.
    • Usually a given port output event has a small number of causes.
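
One way to keep track of PO 2 is to list, for each port output event, its known causes and record which (event, cause) pairs the executed threads have produced; the sketch below does this with invented event and cause names.

```python
# Sketch of PO 2 coverage: every port output event should be produced by some
# thread for each of its known causes. Event and cause names are hypothetical.
causes_of_output = {
    "eject_card": {"cancel_pressed", "pin_retries_exhausted", "session_complete"},
    "error_beep": {"invalid_key", "insufficient_funds"},
}

# Each executed thread records which (output_event, cause) pairs it produced.
observed_pairs = {
    ("eject_card", "cancel_pressed"),
    ("eject_card", "session_complete"),
    ("error_beep", "invalid_key"),
}

missing = {(out, cause)
           for out, causes in causes_of_output.items()
           for cause in causes
           if (out, cause) not in observed_pairs}

print("PO 2 gaps (output event, uncovered cause):", missing or "none")
```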

Port-Based Thread Testing
• Complements event-based testing:
  1. determine, for each port, what events can occur at that port;
  2. find threads that exercise input ports and output ports with respect to the event list for each port.
• Not all systems will have this characteristic, i.e., an event that occurs at more than one port.

Port-Based Thread Testing
• Useful for systems in which the port devices come from external suppliers.
• From an ER diagram, any N:N relationships between ports and events should be exercised in both directions.
  – Event-based testing covers the 1:N relationship from events to ports,
  – and port-based testing covers the 1:N relationship from ports to events.
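
Assuming the ports-events relationship is available as a simple list of (port, event) pairs (the pairs below are invented), here is a sketch of building both 1:N views so that each direction can be exercised.

```python
# Sketch of exercising the ports <-> events relationship in both directions,
# starting from a hypothetical list of (port, event) pairs taken from an ER model.
from collections import defaultdict

port_event_pairs = [
    ("keypad", "digit_pressed"), ("keypad", "cancel_pressed"),
    ("card_slot", "card_inserted"), ("card_slot", "card_ejected"),
    ("screen", "prompt_shown"), ("cash_dispenser", "cash_dispensed"),
]

events_at_port = defaultdict(set)   # port-based view: 1:N from ports to events
ports_of_event = defaultdict(set)   # event-based view: 1:N from events to ports
for port, event in port_event_pairs:
    events_at_port[port].add(event)
    ports_of_event[event].add(port)

# Threads are then chosen so that, for every port, each event in
# events_at_port[port] occurs, and for every event, each port in
# ports_of_event[event] is exercised.
for port, events in events_at_port.items():
    print(f"port {port!r} must see events: {sorted(events)}")
```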

Data-Based Thread Testing
• Port-based and event-based testing work best in event-driven (reactive) systems.
  – Reactive systems are typically long running and maintain an ongoing relationship with their environment.
  – They very seldom have an interesting data model, so data threads are not very useful for them.
• Non-reactive systems are typically “static” and transformational (rather than reactive):
  – they essentially support operations on a ‘database’, and the ER model is dominant.

Data-Based Thread Testing
• Define sets of threads in terms of data-based coverage metrics.
  – The information in relationships is the source of many system threads, whereas the threads in the entities are usually handled at the unit level.
• We can define the following metrics:
  – DM 1: exercise the cardinality of every relationship.
  – DM 2: exercise the participation of every relationship.
  – DM 3: exercise the functional dependencies among relationships.

Data-Based Thread Testing
• DM 2:
  – ensure whether or not every instance of an entity participates in a relationship.
  – Some modeling techniques express participation as numerical limits (e.g., OMT: “at least one and at most 12”).
    • When available, this information leads to boundary value system threads (see the sketch below).
• DM 3:
  – it may be possible to determine explicit logical connections among relationships => functional dependencies, as in relational databases.
  – These are reduced when the database is normalized, but they still exist and lead to interesting test threads.
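
The sketch below shows one way DM 2 participation limits could be turned into boundary-value system threads; the relationship name and its (1, 12) limits are hypothetical, echoing the OMT example above.

```python
# Sketch of turning DM 2 participation limits into boundary-value system threads.
participation_limits = {
    # relationship: (min, max) number of 'account' entities per 'customer'
    "customer_holds_account": (1, 12),
}

def boundary_counts(lo, hi):
    """Classic boundary values: min, min+1, a nominal value, max-1, max."""
    nominal = (lo + hi) // 2
    return sorted({lo, lo + 1, nominal, hi - 1, hi})

for rel, (lo, hi) in participation_limits.items():
    for count in boundary_counts(lo, hi):
        print(f"thread: exercise {rel} with {count} participating instance(s)")
    # Robustness variants just outside the limits (expected to be rejected):
    print(f"thread: attempt {rel} with {lo - 1} and {hi + 1} instances")
```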

Pseudo-Structural System Testing
• At the system level, we can use graph-based metrics as a cross-check on the functional threads.
  – The claim is for pseudo-structural testing, since the node and edge coverage metrics are defined in terms of a control model of the system and are not directly derived from the system implementation.
• Behavioral models are only an approximation of the system’s reality:
  – this is why it is possible to decompose the models into several levels of detail.
  – A true structural model’s sheer size and complexity would make it too cumbersome to use.

Pseudo-Structural System Testing
• The weakness of pseudo-structural metrics is that the underlying model may be a poor choice.
• Decision tables and FSMs are good choices for ASF testing. For example:
  – If an ASF is described using a decision table, conditions usually include port input events, and actions are port output events.
  – It is possible to devise test cases that cover every condition, every action, or, most completely, every rule.
  – As with FSMs, test cases can cover every state, every transition, or every path.
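
A minimal sketch of rule coverage for such a decision-table ASF, using an invented table in which the conditions are port input events and the actions are port output events:

```python
# Hypothetical decision table for an ASF: each rule maps a tuple of condition
# values to a set of port output actions. None marks a "don't care" entry.
rules = {
    # (card_inserted, pin_valid, cancel_pressed) -> actions
    (True,  True,  False): {"show_menu"},
    (True,  False, False): {"prompt_pin_again"},
    (True,  None,  True):  {"eject_card"},
}

executed_rules = set()

def run_asf(card_inserted, pin_valid, cancel_pressed):
    """Find the first matching rule, record it for coverage, return its actions."""
    observed = (card_inserted, pin_valid, cancel_pressed)
    for conditions, actions in rules.items():
        if all(c is None or c == v for c, v in zip(conditions, observed)):
            executed_rules.add(conditions)
            return actions
    raise ValueError("no rule matches this condition combination")

# Two test cases; full rule coverage would also need one with cancel_pressed=True.
run_asf(card_inserted=True, pin_valid=True, cancel_pressed=False)
run_asf(card_inserted=True, pin_valid=False, cancel_pressed=False)
print(f"rule coverage: {len(executed_rules)}/{len(rules)} rules executed")
```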

Operational Profiles
• Many aspects of testing can be related to the old 80/20 rule:
  – for a system with many threads, 80% of the execution traverses only 20% of the threads.
• The basic concept in testing:
  – execute test cases such that, when a failure occurs, the presence of a fault is revealed.
• The distribution of faults is only indirectly related to the reliability of the system.

Operational Profiles
• Simple view of reliability:
  – The probability that no failures occur during a specific time interval.
  – If faults are in less-traveled threads of the system,
    • then reliability will appear higher than if the same number of faults were in the high-traffic areas.
• Operational profiles:
  – Determine the execution frequencies of the various threads, then select threads accordingly.
  – Operational profiles maximize the probability of finding faults by inducing failures in the most frequently traversed threads.
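
A toy sketch of profile-driven thread selection, assuming a hypothetical operational profile of thread frequencies (all thread names and probabilities are invented):

```python
# Sketch of selecting test threads in proportion to how often they are
# traversed in operation, per a hypothetical operational profile.
import random

operational_profile = {
    "withdraw_cash":   0.55,
    "check_balance":   0.30,
    "change_pin":      0.10,
    "print_statement": 0.05,
}

threads, weights = zip(*operational_profile.items())

random.seed(0)  # reproducible selection for the example
selected = random.choices(threads, weights=weights, k=10)
print("threads to execute next:", selected)
```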

System Testing Categories
• Objective of system testing:
  – To verify whether the implementation (or system) conforms to the requirements as specified by the customer(s).
  – To verify whether the system meets a wide range of unspecified expectations.
• System testing is performed:
  – After constructing a reasonably stable system in an emulated environment.
  – In the real environment (if the real environment is not accessible, system testing is done using models in an emulated environment).
• It is important to categorize the kinds of system tests for a number of reasons:
  – Systematically focus on different aspects of a system while evaluating its quality.
  – Test engineers can prioritize their activities.
  – Planning based on test categorization has the advantage of obtaining a balanced view of testing.
• Testing categories (11 categories)

Basic System Tests
• These provide evidence that the system can be installed, configured, and brought to an operational state.
• Basic tests are performed to ensure that commonly used functions, not all of which may directly relate to user-level functions, work to our satisfaction.
• The following are the major categories of subsystems whose adequate testing is called basic test:
  a. Boot tests: verify that the system can boot up its software image from the supported options (ROM, PCMCIA, ...).
  b. Upgrade/downgrade tests: verify that the software image can be upgraded or downgraded (rolled back) in a graceful manner.
  c. Light emitting diode (LED) tests: designed to ensure that the visual operational status of the system is correct.
  d. Diagnostic tests: designed to ensure that the hardware components of the system are functioning as desired (power-on self test (POST), memory, address and data buses, peripheral devices).
  e. CLI (command line interface) tests: ensure that the system can be configured, that user commands are properly interpreted, and that error messages are correct.

Functionality Tests
• Verify the system as thoroughly as possible over the full range of requirements.
• Logging and tracing tests (implicit functionality).
• GUI tests (Icon, Menu bar, Dialog box, Scroll bar).
• Security tests: verify that the system meets the requirements for detecting security breaches and protecting from such breaches (Unauthorized access, Illegal file access, Virus).

Robustness Tests
• Verify how gracefully the system behaves in error situations, or how it handles a change in its operational environment.
• Different kinds of tests are:
  – Boundary value tests (valid and invalid inputs)
  – Recovery from power failure
  – On-line insertion and removal (OIR): the system recovers after an OIR event
  – Availability of redundant modules
  – Degraded node test (a portion of the system fails)

Interoperability and Performance Tests
• Interoperability tests (3rd-party products): verify that the system can interoperate with 3rd-party products.
• Performance tests: determine how actual system performance compares to predicted performance.
  – Response time
  – Execution time
  – Throughput
  – Resource utilization
  – Traffic volume
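
As a toy illustration of two of these metrics, the sketch below times a stand-in operation (everything here is hypothetical, not a real benchmark harness) to report mean response time and throughput.

```python
# Rough sketch of measuring response time and throughput for a stand-in operation.
import time

def operation_under_test():
    # Stand-in for a real system request; here it just burns a little CPU.
    sum(i * i for i in range(10_000))

N = 200
latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    operation_under_test()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"mean response time: {sum(latencies) / N * 1e3:.2f} ms")
print(f"throughput: {N / elapsed:.1f} operations/s")
```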

Scalability Tests
• Verify that the system can scale up to its engineering limits:
  – Data storage limitations (counters and buffers)
  – Speed limitations (CPU)
  – Communication limitations
  – Resource intensive

Stress Tests
• Stress tests (push the system over the edge to break it): evaluate the behavior of a software component when the offered load is in excess of its designed capacity.
• Push the system “over the edge” and observe that the recovery mechanism works.
• Stress tests bring out the following kinds of problems:
  – Memory leaks
  – Buffer allocation problems

Load and Regression Tests
• Load and stability tests: verify that the system can operate at a large scale for a long time (months).
• Regression tests: ensure that nothing that worked before has been broken after a fix.
  – Five possibilities can result from an attempt to fix a bug, among them:
    • fix the bug reported
    • fail to fix the bug, but break something else
    • fix this bug and also fix some unknown bugs

Documentation Tests
• Documentation tests: a review of the technical accuracy and readability of user manuals, including
  – tutorials
  – on-line help
• There are three kinds of documentation tests:
  – Read test (clarity, organization, flow, and accuracy)
  – Hands-on test (evaluate usefulness)
  – Functional test (verify the document)

Conformance to Regulatory Bodies
• Conformance to regulatory bodies:
  – Identify unsafe consequences.
  – The Federal Communications Commission (FCC) and the Canadian Standards Association (CSA) certify a product's safety.