Software Verification and Validation Lecture No. 1
Module 1 Introduction
Software Verification and Validation Agenda 1. Introduction 2. Motivation for software testing 3. Sources of problems 4. Working definitions of reliability and software testing 5. What is a software fault, error, bug, failure, or debugging?
Software Verification and Validation Agenda 6. Software testing and the software lifecycle 7. Software testing myths 8. Goals and limitations of testing
Module 2 Motivation for Software Testing
Motivation for Software Testing 1. Software today: § Software systems are becoming increasingly complex. § Software size is growing. § Time to market is shrinking. § There is increasing emphasis on the GUI component. § Systems are becoming defective • How many defects? • What kind?
Motivation for Software Testing 2. Several reasons contribute to these defects, e.g., § Poor requirements elicitation: Erroneous, incomplete, or inconsistent requirements. § Inadequate design: Fundamental design flaws in the software.
Motivation for Software Testing 2. Several reasons contribute to these defects, e.g., § Improper implementation: Mistakes in chip fabrication, wiring, programming faults, malicious code. § Defective support systems: Poor programming languages, faulty compilers and debuggers, misleading development tools.
Motivation for Software Testing 2. Several reasons contribute to these defects, e.g., § Inadequate testing of software: Incomplete testing, poor verification, mistakes in debugging.
Motivation for Software Testing 2. Several reasons contribute to these defects, e.g., § Evolution: Sloppy redevelopment or maintenance, introduction of new flaws while fixing old flaws, incremental escalation to inordinate complexity.
Motivation for Software Testing 3. Defective software contributes to several issues; examples include: § Faulty communications: Loss or corruption of communication media, non-delivery of data. § Space applications: Lost lives, launch delays. § Defense systems: Misidentification of friend or foe.
Motivation for Software Testing 3. Defective software contributes to several issues; examples include: § Transportation: Deaths, delays, sudden acceleration, inability to brake. § Safety-critical applications: Death, injuries. § Health care applications: Death, injuries, power outages, long-term health hazards (radiation).
Motivation for Software Testing § Money management: Fraud, violation of privacy, shutdown of stock exchanges and banks, negative interest rates. § Control of elections: Wrong results (intentional or unintentional).
Motivation for Software Testing § Control of Jails: Technology-aided escape attempts and successes, accidental release of inmates, failures in software controlled locks. § Law Enforcement: False arrests and imprisonments.
Motivation for Software Testing 1. We consider some examples of software failures that resulted, or could have resulted, in human and/or financial losses. 2. Examples of human losses: § In Texas, 1986, a man received between 16,500 and 25,000 rads in less than 1 second, over an area of about 1 cm².
Motivation for Software Testing § He lost his left arm and died of complications 5 months later. § In Texas, 1986, a man received at least 4,000 rads in the right temporal lobe of his brain. § The patient eventually died as a result of the overdose.
Motivation for Software Testing 3. Examples of financial losses: § A group of hacker-thieves hijacked the Bangladesh Bank system to steal funds. § The group successfully transferred $81 million in four transactions before making a spelling error that tipped off the bank, causing another $870 million in transfers to be canceled.
Motivation for Software Testing § NASA Mars Polar Lander, 1999 § On December 3, 1999, NASA's Mars Polar Lander disappeared during its landing attempt on the Martian surface. § A Failure Review Board investigated the failure and determined that the most likely cause of the malfunction was the unexpected setting of a single data bit. § The problem wasn't caught by internal tests.
Motivation for Software Testing § Malaysia Airlines jetliner, August 2005 § As a Malaysia Airlines jetliner cruised from Perth, Australia, to Kuala Lumpur, Malaysia, its autopilot system malfunctioned. § The captain disconnected the autopilot, eventually regained control, and manually flew the 177 passengers safely back to Australia. § Investigators discovered that a defective software program had provided incorrect data about the aircraft's speed and acceleration, confusing the flight computers. § There are countless such examples…
Module 3 Sources of problems
Sources of Problems 1. The software does not do something that the specification says it should do. 2. The software does something that the specification says it should not do. 3. The software does something that the specification does not mention.
Sources of Problems 4. The software does not do something that the product specification does not mention but should. 5. The software is difficult to understand, hard to use, slow… 6. Failures result from: 1. Lack of logic 2. Inadequate testing of the software under test (SUT) 3. Unanticipated use of the application
Sources of Problems 1. We need to spend time and financial resources to fix these errors or bugs, and the cost grows as follows: § The cost to fix a bug increases exponentially (10x) • i.e., it increases tenfold with each later phase § E.g., a bug found during specification costs $1 to fix § … if found in design, the cost is $10 § … if found in code, the cost is $100 § … if found in released software, the cost is $1,000
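The tenfold escalation above can be sketched numerically. This is a minimal illustration of the slide's rule of thumb; the function name and the $1 base cost at the specification phase follow the slide, but the code itself is not part of the lecture material.

```python
# Illustrative sketch of the "cost to fix grows 10x per phase" rule of thumb.
PHASES = ["specification", "design", "code", "released software"]

def fix_cost(phase: str, base_cost: float = 1.0) -> float:
    """Cost to fix a bug found in `phase`, assuming a tenfold
    increase at each later phase ($1 base cost at specification)."""
    return base_cost * 10 ** PHASES.index(phase)

for p in PHASES:
    print(f"{p:>17}: ${fix_cost(p):,.0f}")
# → specification $1, design $10, code $100, released software $1,000
```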
Module 4 Working definition of software reliability and software testing
Definition: Software Reliability 1. Is bug-free software possible? 1. We have human factors 2. Specification/implementation mismatches 3. Discussed in detail under failure reasons 2. We release software that is full of errors, even after doing sufficient testing
Definition: Software Reliability 3. No software would ever be released by its developers if they were asked to certify that the software is free of errors. 4. Software reliability is one of the important factors of software quality.
Definition: Software Reliability 5. Other factors are understandability, completeness, portability, consistency, maintainability, usability, and efficiency. 6. These quality factors are known as non-functional requirements for a software system.
Definition: Software Reliability 1. Software reliability is defined as: "The probability of failure-free operation for a specified time in a specified environment"
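The probabilistic definition above is commonly made concrete with the exponential reliability model, R(t) = e^(−λt), where λ is the failure rate. The model is a standard one from reliability engineering, but it and the example failure rate below are illustrative additions, not from the slides.

```python
import math

def reliability(t_hours: float, failure_rate: float) -> float:
    """Probability of failure-free operation for t_hours,
    under the (assumed) exponential model R(t) = exp(-lambda * t)."""
    return math.exp(-failure_rate * t_hours)

# Example: on average one failure per 1,000 hours of operation.
lam = 1 / 1000
print(round(reliability(100, lam), 3))  # probability of surviving 100 h → 0.905
```

Note how the definition's two parameters show up directly: the "specified time" is `t_hours`, and the "specified environment" is captured (crudely) by the failure rate measured in that environment.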
Definition: Software Testing 1. The goal of software testing is: 1. … to find bugs 2. … as early in the software development process as possible 3. … and make sure they get fixed. 2. We define software testing as: [Reference Book] "Testing is the process of demonstrating that errors are not present" OR "The purpose of testing is to show that a program performs its intended functions correctly" OR "Testing is the process of establishing confidence that a program does what it is supposed to do" 3. Another definition [Myers, 04]: "Testing is the process of executing a program with the intent of finding faults"
Module 5: What is a software fault, error, bug, failure or debugging
Fault, error, bug, failure … 1. Some definitions: § Error: A measure of the difference between the actual and the ideal. § Fault: A condition that causes a system to fail in performing its required function. § Failure: Inability of a system or component to perform a required function according to its specifications. § Debugging: Activity by which faults are identified and rectified.
Fault, error, bug, failure … 1. Faults have different levels of severity: § Critical. A core functionality of the system fails or the system doesn’t work at all. § Major. The defect impacts basic functionality and the system is unable to function properly. § Moderate. The defect causes the system to generate false, inconsistent, or incomplete results. § Minor. The defect impacts the business but only in very few cases. § Cosmetic. The defect is only related to the interface and appearance of the application. 2. While testing, we attribute different outcomes to different severities
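The severity scale above can be modeled as an ordered enumeration, so that recorded faults can be compared and triaged. This is a sketch: the level names follow the slide, but the `Severity` class and the release-blocking rule are hypothetical.

```python
from enum import IntEnum

class Severity(IntEnum):
    """Fault severity levels from the lecture, most to least severe."""
    CRITICAL = 5   # core functionality fails or system doesn't work at all
    MAJOR = 4      # basic functionality impacted; system can't function properly
    MODERATE = 3   # false, inconsistent, or incomplete results
    MINOR = 2      # impacts the business, but only in very few cases
    COSMETIC = 1   # interface and appearance only

# Hypothetical triage rule: anything MODERATE or worse blocks a release.
blocking = [s.name for s in Severity if s >= Severity.MODERATE]
print(blocking)  # → ['CRITICAL', 'MAJOR', 'MODERATE']
```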
Fault, error, bug, failure … 1. Test case: Inputs to test the program and the predicted outcomes (according to the specification). Test cases are formal procedures: § inputs are prepared § outcomes are predicted § tests are documented § commands are executed § results are observed and evaluated
Fault, error, bug, failure … 1. All of these steps are subject to mistakes. When does a test "succeed"? When does it "fail"? 2. Test suite: A collection of test cases. 3. Testing oracle: A program, process, or body of data that helps us determine whether the program produced the correct outcome. § Oracles are often a set of input/expected-output pairs.
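An oracle given as a set of input/expected-output pairs can be sketched as follows. The function under test and the specific pairs are made up for illustration; the point is that the suite compares each actual outcome against the oracle's prediction.

```python
def program_under_test(x: int) -> int:
    """Hypothetical system under test: should return the absolute value."""
    return x if x >= 0 else -x

# Oracle: input/expected-output pairs derived from the specification.
oracle = {0: 0, 5: 5, -5: 5, -1: 1}

def run_suite(sut, oracle) -> list:
    """Execute every test case and report those whose actual
    outcome differs from the outcome the oracle predicts."""
    return [(inp, expected, sut(inp))
            for inp, expected in oracle.items()
            if sut(inp) != expected]

failures = run_suite(program_under_test, oracle)
print("all tests passed" if not failures else f"failures: {failures}")
```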
Fault, error, bug, failure … 1. Test data: Inputs which have been devised to test the system. 2. Test cases: Inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification. 3. Outcome: What we expect to happen as a result of the test. In practice, outcome and output may not be the same. § For example, the fact that the screen did not change as a result of a test is a tangible outcome although there is no output. 4. In testing we are concerned with outcomes, not just outputs. 5. If the predicted and actual outcomes match, can we say that the test has passed?
Fault, error, bug, failure … 1. Expected outcome: The expectation that we associate with the response of a particular test execution. 2. Sometimes, specifying the expected outcome for a given test case can be a tricky business!
Fault, error, bug, failure … § For some applications we might not know what the outcome should be. § For other applications the developer might have a misconception. § Finally, the program may produce too much output to analyze in a reasonable amount of time.
Fault, error, bug, failure … § In general, this is a fragile part of the testing activity and can be very time-consuming. § In practice, this is an area with a lot of hand-waving. § When possible, automation should be considered as a way of specifying the expected outcome and comparing it to the actual outcome.
Module 6 Software testing and Software development lifecycle
Software Testing - development lifecycle [diagram: software development lifecycle]
Software Testing - development lifecycle § Code and Fix § Waterfall § Spiral §…
Software Testing - development lifecycle 1. Software testing is a critical element of software quality assurance and represents the ultimate review of: • specification • design • coding 2. Software life-cycle models (e.g., waterfall) frequently include software testing as a separate phase that follows implementation! 3. Contrary to such life-cycle models, testing is an activity that must be carried out throughout the life-cycle. 4. It is not enough to test the end product of each phase; ideally, testing occurs during each phase. 5. This gives rise to the concept of verification and validation
Software Testing - development lifecycle [diagram: software development lifecycle]
Module 7 Software testing myths
Software testing myths 1. If we were really good at programming, there would be no bugs to catch; there are bugs because we are bad at what we do. 2. Testing implies an admission of failure. 3. The tedium of testing is a punishment for our mistakes.
Software testing myths 4. All we need to do is: § concentrate § use structured programming § use OO methods § use a good programming language §…
Software testing myths 1. Human beings make mistakes, especially when asked to create complex artifacts such as software systems. 2. Studies show that even good programs have 1–3 bugs per 100 lines of code.
Software testing myths 1. Software testing: § A successful test is a test that discovers one or more faults. § It is the only validation technique for non-functional requirements. § It should be used in conjunction with static verification.
Software testing myths 2. Defect testing: § The objective of defect testing is to discover defects in programs. § A successful defect test is a test which causes a program to behave in an anomalous way. § Tests show the presence, not the absence, of defects.
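The idea that a successful defect test provokes anomalous behavior can be illustrated with a deliberately buggy function. Everything here is invented for illustration: the function omits the 400-year leap rule, and only a well-chosen boundary value exposes it.

```python
def is_leap_year(year: int) -> bool:
    """Deliberately buggy implementation: forgets the 400-year rule
    (years divisible by 400 ARE leap years)."""
    return year % 4 == 0 and year % 100 != 0

# These tests pass, so they reveal no defect:
assert is_leap_year(2024) is True
assert is_leap_year(1900) is False

# A "successful" defect test: 2000 is divisible by 400, so the
# specification says True, but the buggy code answers False.
print(is_leap_year(2000))  # → False (anomalous behavior exposed)
```

Note the asymmetry the slide states: the two passing assertions show nothing about absence of defects; only the test that triggers the anomaly tells us something definite.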
Module 8 Goals and Limitations of Testing
Goals and Limitations of Testing 1. Discover and prevent bugs. 2. The act of designing tests is one of the best bug preventers known. (Test, then code philosophy) 3. The thinking that must be done to create a useful test can discover and eliminate bugs in all stages of software development.
Goals and Limitations of Testing 4. However, bugs will always slip by, as even our test designs will sometimes be buggy. 5. Testing is the most widely used activity for ensuring that software systems satisfy the specified requirements. 6. It consumes substantial project resources; some estimates put it at ~50% of development costs.
Goals and Limitations of Testing • Testing cannot occur until after the code is written. • The problem is big! • Perhaps the least understood major SE activity. • Exhaustive testing is not practical even for the simplest programs. WHY?
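The "WHY?" can be answered with a back-of-the-envelope count of the input space; the numbers below (32-bit inputs, a million tests per second) are illustrative assumptions, not from the slides.

```python
# Why exhaustive testing is impractical: input spaces explode combinatorially.
cases_one_int = 2 ** 32       # one 32-bit integer parameter
cases_two_ints = 2 ** 64      # two independent 32-bit parameters

TESTS_PER_SECOND = 1_000_000  # optimistic assumed test-execution rate
seconds_per_year = 60 * 60 * 24 * 365

years_two_ints = cases_two_ints / (TESTS_PER_SECOND * seconds_per_year)
print(f"{cases_one_int:,} cases for one int input")
print(f"~{years_two_ints:,.0f} years to cover two int inputs")
```

Even at a million tests per second, covering two 32-bit inputs would take on the order of half a million years, and this ignores sequences of inputs and internal state, which grow the space further still.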
Goals and Limitations of Testing • Even if we "exhaustively" test all execution paths of a program, we cannot guarantee its correctness. • The best we can do is increase our confidence! • "Testing can show the presence of bugs, not their absence."
Goals and Limitations of Testing 1. Testers do not have immunity to bugs. 2. Slight modifications made after a program has been tested invalidate some (or even all) of our previous testing effort. 3. Automation is critically important. 4. Unfortunately, there are only a few good tools, and in general, effective use of these good tools is very limited.