Test Plan: Introduction

o Primary focus: developer testing
  – Implementation phase
  – Release testing
  – Maintenance and enhancement
o Secondary focus: formal system verification
  – Addressed within current test plan via examples
  – Assumed to be primarily independent

Tests to be Performed

o Bottom-up integration testing
  – Build a module, build a test to simulate how it is used (see the sketch below)
o Black-box
  – Based on the specification and 'educated guess' stress tests
o White-box
  – Based on code inspection
o Platform testing
  – Establish baseline capability of hardware/OS, detect differences
o Performance
  – Per module, and peer-to-peer distributed costs
o Test coverage
  – Coverage level dependent on resources, time, and cost of failure
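
As an illustration of the bottom-up approach, the sketch below builds a small stand-in module and a test that simulates how a higher-level component would use it. EventQueue and its interface are hypothetical names chosen for the example, not part of the actual code base.

    // Bottom-up integration sketch: build the module, then build a test that
    // simulates how the surrounding system will use it.
    // EventQueue and its interface are hypothetical, for illustration only.
    #include <cassert>
    #include <functional>
    #include <queue>
    #include <vector>

    class EventQueue {                      // module under test (stand-in)
    public:
        void post(int timestamp) { q_.push(timestamp); }
        int  next()              { int t = q_.top(); q_.pop(); return t; }
        bool empty() const       { return q_.empty(); }
    private:
        std::priority_queue<int, std::vector<int>, std::greater<int>> q_;
    };

    int main() {
        EventQueue q;
        // Simulate the usage pattern of the higher-level consumer:
        // events arrive out of order, caller expects time-ordered delivery.
        q.post(30); q.post(10); q.post(20);
        assert(q.next() == 10);
        assert(q.next() == 20);
        assert(q.next() == 30);
        assert(q.empty());
        return 0;                           // non-zero exit flags a failure
    }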

Allocation of Testing Responsibilities

o Assumption: development team performs
  – Internal (module-level) performance
  – Sample system performance (limited example federation)
  – Full white-box
  – Limited black-box (basic system functionality)
o Assumption: external group performs
  – Independent, detailed system verification (spec compliance)
  – Standardized performance tests
o Test coverage
  – Coverage will be measured, analyzed for relative effectiveness
  – Coverage levels TBD

Testing Philosophy

o Testing is not just pre-release! Continual process within the development phase
o Catch defects as early as possible
o Make sure defects stay fixed
o Track the cause of defects: repair the problem, do not keep repatching the tire
o Need support at both the design level and the implementation level to accomplish these goals

Continual Testing Process

o Tests created during development
o Central code repository: modules, test suites tied together
  – Tests are treated as live code to be maintained, not 'one-offs'
  – Test documentation: how to run, what is tested, and why (see the sketch below)
o Revision control on both modules and tests
o Modular development
  – System broken down into hierarchies of testable components
o Automated, incremental integration testing
  – As code is developed, it is tested standalone, then incrementally within the confines of the system
o Continual feedback into development, maintenance cycle
  – Weekly postings of performance results, current defect status
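
A minimal sketch of the "tests as live code" convention, assuming a hypothetical time-management module: the header records how to run the test, what it covers, and why, and the same file runs standalone first and later under the integration harness. File, module, and function names are illustrative only.

    // ------------------------------------------------------------------------
    // test_time_mgmt_basic.cpp        (file and module names are hypothetical)
    // How to run : build via the module's makefile test target, then execute
    //              directly; exit code 0 = pass, non-zero = fail.
    // What tested: basic time-advance cycle of the time-management module.
    // Why        : this path is exercised on every tick, so a regression here
    //              blocks the weekly baseline.
    // ------------------------------------------------------------------------
    #include <cassert>

    // Stand-in for the real module interface; kept under revision control
    // alongside the test so the two evolve together.
    double requestTimeAdvance(double current, double step) { return current + step; }

    int main() {
        double t = 0.0;
        for (int i = 0; i < 100; ++i)
            t = requestTimeAdvance(t, 0.5);   // run standalone first ...
        assert(t == 50.0);
        return 0;  // ... then re-run incrementally inside the integration harness
    }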

Design Support

o Standard testing and debugging methods on each module / class (peek, dump, exercise, performance; see the sketch below)
o Self-checking code (pre- and post-condition asserts on parameters, valid-state calls)
o Debug levels, controlled via runtime flags
o Centralized logging mechanisms to collect distributed traces
o Logs used in both developer testing and sample user traces from the field
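
A sketch of how these design-support hooks might look on one class, assuming a hypothetical ConnectionTable module; the RTI_DEBUG_LEVEL flag name and the internals are illustrative assumptions, with only the peek/dump/exercise pattern taken from the plan itself.

    // Hypothetical class showing the per-module design-support pattern.
    #include <cassert>
    #include <cstdio>
    #include <cstdlib>

    static int debugLevel() {                     // debug level via runtime flag
        const char* v = std::getenv("RTI_DEBUG_LEVEL");  // variable name assumed
        return v ? std::atoi(v) : 0;
    }

    class ConnectionTable {
    public:
        void add(int id) {
            assert(id >= 0);                      // pre-condition parameter assert
            if (count_ < kMax) ids_[count_++] = id;
            assert(validState());                 // post-condition / valid-state call
        }
        int  peek() const { return count_; }      // cheap, side-effect-free probe
        void dump() const {                       // full state dump (would feed the
            if (debugLevel() > 0)                 // centralized logger in the real system)
                for (int i = 0; i < count_; ++i)
                    std::printf("conn[%d]=%d\n", i, ids_[i]);
        }
        static bool exercise() {                  // built-in self-test hook
            ConnectionTable t; t.add(1); return t.peek() == 1;
        }
    private:
        bool validState() const { return count_ >= 0 && count_ <= kMax; }
        static const int kMax = 64;
        int ids_[kMax];
        int count_ = 0;
    };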

Development Tool Support

o Shadow development trees for automated and individual testing (ClearCase 'views')
o Common testing tools and a common testing approach to simplify the testing process and to allow automated testing
o Examples:
  – Standard test harnesses, method of invocation (see the sketch below)
  – Standard testing directories, makefile commands per module
  – Standard set of test-record-evaluate tools
  – Central I/O mechanisms to collect distributed test results
  – Standard system-level drivers
  – Sequential test harnesses and emulated distributed harnesses to provide determinism during development
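
A sketch of a shared test harness giving every module test the same method of invocation; the registration scheme, the --verbose flag, and all function names are assumptions for illustration, not the project's actual tools.

    // Common harness: each module registers its cases, the harness supplies the
    // standard driver, so "make test" invokes every module the same way and the
    // exit code feeds the automated weekly run and the record-evaluate tools.
    #include <cstdio>
    #include <cstring>
    #include <vector>

    typedef bool (*TestFn)();
    struct TestCase { const char* name; TestFn fn; };

    int runAll(const std::vector<TestCase>& cases, int argc, char** argv) {
        bool verbose = (argc > 1 && std::strcmp(argv[1], "--verbose") == 0);
        int failures = 0;
        for (size_t i = 0; i < cases.size(); ++i) {
            bool ok = cases[i].fn();
            if (!ok) ++failures;
            if (verbose || !ok)
                std::printf("%-30s %s\n", cases[i].name, ok ? "PASS" : "FAIL");
        }
        return failures;
    }

    // Example module test file using the harness:
    static bool handshakeTest() { return true; }   // placeholder check

    int main(int argc, char** argv) {
        std::vector<TestCase> cases;
        TestCase t = { "handshakeTest", handshakeTest };
        cases.push_back(t);
        return runAll(cases, argc, argv);
    }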

Levels of Testing per Module

o Basic: used in initial development, later in porting
  – Minimal level of functionality to establish module operation
  – Simplicity is critical during development and porting
o Detailed: used to verify module correctness before checking into the current baseline
  – Does the module meet its interface contract?
  – Tests for common coding errors
o Regression: replicates previous use which caused a failure (see the sketch below)
  – Used to verify the defect has been corrected
  – Used to ensure the defect does not recur over time
o Performance: tracks CPU usage per module and peer-to-peer performance across distributed components
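
A sketch of a regression-level test, assuming a hypothetical defect report and module: it replays the exact input that previously failed so the fix is verified and the defect cannot silently return.

    // Regression sketch: re-create the sequence that exposed an earlier defect.
    // Defect number, module, and call sequence are hypothetical.
    #include <cassert>
    #include <string>

    // Stand-in for the routine that previously mis-handled an empty name.
    std::string normalizeFederateName(const std::string& name) {
        if (name.empty()) return "<unnamed>";   // behavior introduced by the fix
        return name;
    }

    int main() {
        // Hypothetical defect DR-0142: an empty federate name crashed the join
        // path. Replay the failing input and pin down the corrected behavior.
        assert(normalizeFederateName("") == "<unnamed>");
        assert(normalizeFederateName("radar-01") == "radar-01");
        return 0;   // kept in the suite permanently so the defect cannot recur
    }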

Complex Functional Areas: Testing Examples

o Memory: continual test for memory leaks and overwrites
  – Automated use of Purify within weekly test suites across all modules, all levels of the system
o Threads: platform-specific performance and implementation variances must be established per platform
  – Standard set of tests which mimic system use of threads
o Causality: for complex areas of the system (such as zero-lookahead ordering) it is difficult to establish correctness across all use cases (dropped packets, simultaneous time stamps with variable arrival times, cancelled events, …)
  – Detailed test scripts, executed in deterministic test harnesses with varying error conditions (see the sketch below)
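
As a sketch of the causality testing idea, the example below scripts message arrival deterministically and injects two of the error conditions listed above (a dropped packet and simultaneous time stamps); the Msg structure, ordering rule, and scenario are illustrative assumptions, not the system's actual implementation.

    // Deterministic causality test: the harness scripts arrival order and error
    // conditions instead of relying on real network timing.
    #include <algorithm>
    #include <cassert>
    #include <vector>

    struct Msg { double timestamp; int sender; bool dropped; };

    // Deliverable messages must come out in timestamp order, with a fixed
    // sender-id tie-break so simultaneous time stamps are still deterministic.
    std::vector<Msg> deliverInOrder(const std::vector<Msg>& script) {
        std::vector<Msg> out;
        for (size_t i = 0; i < script.size(); ++i)
            if (!script[i].dropped) out.push_back(script[i]);
        std::stable_sort(out.begin(), out.end(),
            [](const Msg& a, const Msg& b) {
                if (a.timestamp != b.timestamp) return a.timestamp < b.timestamp;
                return a.sender < b.sender;     // tie-break for simultaneous stamps
            });
        return out;
    }

    int main() {
        // Scripted error conditions: one dropped packet, two simultaneous stamps
        // arriving in the "wrong" order.
        std::vector<Msg> script = {
            {2.0, 7, false}, {1.0, 3, true /*dropped*/}, {2.0, 4, false}, {3.0, 1, false}
        };
        std::vector<Msg> seen = deliverInOrder(script);
        assert(seen.size() == 3);
        assert(seen[0].timestamp == 2.0 && seen[0].sender == 4);
        assert(seen[1].timestamp == 2.0 && seen[1].sender == 7);
        assert(seen[2].timestamp == 3.0);
        return 0;
    }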

Periodic Testing Activities

o Code walkthroughs: encompass both system code and associated test suites. Testing focus:
  – Do the test suites sufficiently stress the module?
  – Do the test suites still accurately represent the expected use of the module?
  – Has the underlying platform changed, and has performance changed accordingly?
o Change Control Board: formal tracking process and tools to establish, record, and monitor the status of functional change requests, defect priority, and status data