Critical Systems
Ian Sommerville, Software Engineering, 8th edition (2006), Chapter 3

What is a critical system?
• Most software failures cause inconvenience but no serious, long-term damage.
• Some failures, however, can result in economic losses, physical damage or threats to human life.
• Critical systems are technical or socio-technical systems that people or businesses depend on. If critical systems fail to deliver their services as expected, serious problems and significant losses may result.

Critical Systems
• Safety-critical systems: failure results in loss of life, injury or damage to the environment (e.g. a chemical plant protection system).
• Mission-critical systems: failure results in the failure of some goal-directed activity (e.g. a navigational system for a spacecraft).
• Business-critical systems: failure results in high economic losses (e.g. a customer accounting system in a bank).

System dependability
• Dependability is the most important emergent property of a critical system.
• The dependability of a system reflects the user’s degree of trust in that system.
• Usefulness and trustworthiness are not the same thing: a system does not have to be trusted to be useful.

Importance of dependability
There are several reasons why dependability is the most important emergent property for a critical system:
• Systems that are not dependable, i.e. unreliable, unsafe or insecure, may be rejected by their users.
• The costs of system failure may be very high.
• Untrustworthy systems may cause information loss, with a high consequent recovery cost.

Development methods for critical systems
• Because failures in critical systems are very costly, such systems are usually developed using well-tried techniques.
• Although a small number of control systems may be completely automatic, most critical systems are socio-technical systems in which people monitor and control the operation.
• Examples of development methods:
  • Formal methods of software development
  • Static analysis
  • External quality assurance

Socio-technical critical systems
There are three system components where critical system failures may occur:
• Hardware failure: hardware fails because of design and manufacturing errors, or because components have reached the end of their natural life.
• Software failure: software fails due to errors in its specification, design or implementation.
• Operational failure: human operators make mistakes. As hardware and software have become more reliable, failures in operation are now probably the largest single cause of system failures.

A simple safety-critical system: a software-controlled insulin pump
• Diabetes is a condition in which the human pancreas is unable to produce sufficient quantities of a hormone called insulin. Insulin metabolises glucose in the blood.
• The conventional treatment of diabetes involves regular injections of genetically engineered insulin. Diabetics measure their blood sugar levels using a meter and then calculate the dose of insulin that they should inject.

A simple safety-critical system: a software-controlled insulin pump
• Problem: the insulin required does not depend only on the blood glucose level; it is also a function of the time when the last insulin injection was taken. Getting the dose wrong can lead to very low or very high levels of blood sugar:
  • Low blood sugar -> temporary brain malfunction -> unconsciousness and death
  • High blood sugar -> eye damage, kidney damage and heart problems
• A software-controlled insulin delivery system might work by using a microsensor embedded in the patient to measure some blood parameter that is proportional to the sugar level. This reading is sent to the pump controller, which computes the sugar level and the amount of insulin that is needed; it then sends signals to a miniaturised pump to deliver the insulin.
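The delivery system just described is a closed sense-compute-actuate loop. A minimal sketch of that loop follows, assuming hypothetical component interfaces (read_blood_sensor, compute_insulin_dose, deliver_insulin) and an assumed 10-minute sampling period; none of these names or values come from the slides.

```python
# Minimal sketch of the insulin pump's sense-compute-actuate loop.
# read_blood_sensor, compute_insulin_dose and deliver_insulin are hypothetical
# stand-ins for the sensor, controller and pump described on the slide.
import time

def control_loop(read_blood_sensor, compute_insulin_dose, deliver_insulin,
                 period_seconds=600):
    previous_reading = None
    while True:
        reading = read_blood_sensor()          # blood parameter ~ sugar level
        dose = compute_insulin_dose(reading, previous_reading)
        if dose > 0:
            deliver_insulin(dose)              # signal the miniaturised pump
        previous_reading = reading
        time.sleep(period_seconds)             # assumed 10-minute sampling period
```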

Insulin pump organisation and components [figure]

Insulin pump data-flow [figure]

Dependability requirements
There are two high-level dependability requirements for this insulin pump system:
• The system shall be available to deliver insulin when required to do so.
• The system shall perform reliably and deliver the correct amount of insulin to counteract the current level of blood sugar.
Failure of the system could cause excessive doses of insulin to be delivered, which could threaten the life of the user. It is particularly important that overdoses of insulin should not occur.
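The requirement that overdoses must not occur is typically enforced by a defensive check between the dose computation and the pump command. A minimal sketch follows, assuming invented limits (MAX_SINGLE_DOSE, MAX_DAILY_DOSE); a real pump's limits would come from clinical requirements, not from this text.

```python
# Hypothetical safety interlock on the computed dose (illustrative only).
MAX_SINGLE_DOSE = 4    # assumed upper bound on one delivery (units)
MAX_DAILY_DOSE = 25    # assumed upper bound on cumulative daily delivery

def safe_dose(computed_dose, delivered_today):
    """Clamp the computed dose so that neither the single-dose nor the
    cumulative daily limit can be exceeded, whatever the controller computed."""
    dose = min(computed_dose, MAX_SINGLE_DOSE)
    dose = min(dose, MAX_DAILY_DOSE - delivered_today)
    return max(dose, 0)    # never command a negative dose
```

The point of the design is that the check is independent of the dose computation: even if the controller software contains a fault, the interlock bounds the harm.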

Dependability
• The dependability of a system equates to its trustworthiness.
• Trustworthiness is the degree of user confidence that the system will operate as they expect and will not “fail” in normal use. It is not measurable on an absolute scale; instead we use informal levels such as “not dependable”, “very dependable” and “ultra-dependable”.
• For instance, the presentation processor PowerPoint is not a very dependable system, so I frequently save my work. However, it is very usable.
• A dependable system is a system that is trusted by its users.
• The principal dimensions of dependability are:
  • Availability
  • Reliability
  • Safety
  • Security

Dimensions of dependability [figure]

Other dependability properties
• Repairability: reflects the extent to which the system can be repaired in the event of a failure.
• Maintainability: reflects the extent to which the system can be adapted to new requirements.
• Survivability: reflects the extent to which the system can deliver services whilst under hostile attack.
• Error tolerance: reflects the extent to which user input errors can be avoided and tolerated.

Dependability vs. performance
• Untrustworthy systems may be rejected by their users.
• System failure costs may be very high.
• It is very difficult to tune systems to make them more dependable.
• It may be possible to compensate for poor performance.
• Untrustworthy systems may cause loss of valuable information.

Dependability costs
• Dependability costs tend to increase exponentially as increasing levels of dependability are required.
• There are two reasons for this:
  • The use of more expensive development techniques and hardware.
  • The increased testing and system validation that is required to convince the system client.

Costs of increasing dependability [figure]

Dependability economics
• Because of the very high costs of achieving dependability, it may be more cost-effective to accept untrustworthy systems and pay for failure costs.
• However, this depends on social and political factors: a reputation for products that can’t be trusted may lose future business.
• It also depends on the system type: for business systems in particular, modest levels of dependability may be adequate.

Availability and reliability
• Availability and reliability are closely related, but a reliable system cannot be assumed to be available, or vice versa.
• An example of a system where availability is more critical than reliability is a telephone exchange switch:
  • Users expect a dial tone when they pick up a phone, so the system has high availability requirements.
  • If a system fault causes a connection to fail, it is often recoverable. The repair can be done very quickly, and the phone user may not even notice that a failure has occurred.
• Availability does not simply depend on the system itself, but also on the time needed to repair faults:
  • System A fails once per year; system B fails once per month. A is clearly more reliable than B.
  • System A takes three days to restart after a failure; system B takes 10 minutes. B is more available than A.
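The A/B comparison can be made concrete with the conventional availability formula, availability = MTBF / (MTBF + MTTR), where MTBF is the mean time between failures and MTTR the mean time to repair. The formula is standard practice rather than something stated on the slide; the failure and repair figures are the slide's own.

```python
# Availability = MTBF / (MTBF + MTTR), using the slide's figures.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# System A: fails once per year (~8760 h), takes 3 days (72 h) to restart.
a = availability(8760, 72)        # ~0.992 -> unavailable ~71 h/year
# System B: fails once per month (~730 h), takes 10 minutes (1/6 h) to restart.
b = availability(730, 1 / 6)      # ~0.9998 -> unavailable ~2 h/year

print(f"A: {a:.4f}, B: {b:.4f}")  # B is less reliable but more available
```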

Availability and reliability
• If you measure system reliability in one environment, you can’t assume that the reliability will be the same in another environment where the system is used in a different way (e.g. a word processor used in a post office vs. in a university).
• Human perceptions and patterns of use are also significant (e.g. failures of a car’s windscreen wipers matter far more in heavy rain).

Reliability terminology [figure]

Reliability achievement
Approaches that are used to improve the reliability of a system:
• Fault avoidance: development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults (e.g. avoiding error-prone programming language constructs, such as pointers).
• Fault detection and removal: verification and validation techniques are used that increase the probability of detecting and correcting errors before the system goes into service (e.g. systematic system testing and debugging).
• Fault tolerance: run-time techniques are used to ensure that system faults do not result in system errors and/or that system errors do not lead to system failures (e.g. incorporation of self-checking facilities).
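To make the self-checking idea in the fault-tolerance bullet concrete, the hedged sketch below validates a computed result against plausible bounds at run time and falls back to a safe alternative when the check fails. The bounds and function names are invented for illustration.

```python
# Hypothetical run-time self-check (fault tolerance, illustrative only).
# A fault in compute_value may produce an erroneous result; the check stops
# that error from propagating into a system failure.
PLAUSIBLE_MIN, PLAUSIBLE_MAX = 0.0, 100.0   # assumed physical bounds

def checked_compute(compute_value, safe_fallback, inputs):
    result = compute_value(inputs)
    if not (PLAUSIBLE_MIN <= result <= PLAUSIBLE_MAX):
        # Fault detected at run time: contain it rather than fail.
        return safe_fallback(inputs)
    return result
```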

Input/output mapping [figure]

Reliability perception [figure]

Reliability improvement
• Removing X% of the faults in a system will not necessarily improve the reliability by X%. A study at IBM showed that removing 60% of product defects resulted in only a 3% improvement in reliability.
• Program defects may be in rarely executed sections of the code and so may never be encountered by users. Removing these does not affect the perceived reliability.
• A program with known faults may therefore still be seen as reliable by its users.
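The gap between faults removed and reliability perceived falls out of a simple model: the failure rate a user sees is the sum of the execution probabilities of the paths that still contain faults. The toy numbers below are invented, chosen so that removing two of three faults (67%) improves the perceived failure rate by only about 3%, echoing the IBM figure above.

```python
# Toy model: perceived failure rate = sum of the execution probabilities of
# the code paths that still contain faults. All numbers are invented.
faults = {            # faulty path -> probability a run exercises that path
    "startup":  0.001,
    "rare_io":  0.0005,
    "hot_loop": 0.05,
}

def perceived_failure_rate(remaining_faults):
    return sum(remaining_faults.values())

print(perceived_failure_rate(faults))               # 0.0515
# Remove 2 of 3 faults, but only the rarely executed ones:
print(perceived_failure_rate({"hot_loop": 0.05}))   # 0.05 -> only ~3% better
```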

Safety
• Safety is a property of a system that reflects the system’s ability to operate, normally or abnormally, without danger of causing human injury or death and without damage to the system’s environment (e.g. control and monitoring systems in chemical and pharmaceutical plants, and automobile control systems).
• It is increasingly important to consider software safety as more and more devices incorporate software-based control systems.
• Safety requirements are exclusive requirements, i.e. they exclude undesirable situations rather than specify required system services.

Safety criticality
There are two classes of safety-critical software:
• Primary safety-critical systems: software that is embedded as a controller in a system. Its malfunctioning can cause a hardware malfunction that directly threatens people.
• Secondary safety-critical systems: systems that can indirectly cause injuries (e.g. computer-aided engineering design systems).

Safety and reliability
• Safety and reliability are related but distinct. In general, reliability and availability are necessary but not sufficient conditions for system safety.
• Reliability is concerned with conformance to a given specification and delivery of service.
• Safety is concerned with ensuring that the system cannot cause damage, irrespective of whether or not it conforms to its specification.

Unsafe reliable systems
• Specification errors:
  • If the system specification is incorrect, the system can behave as specified but still cause an accident.
  • A high percentage of system malfunctions are the result of specification rather than design errors.
• Hardware failures generating spurious inputs: these are hard to anticipate in the specification.
• Context-sensitive commands, i.e. issuing the right command at the wrong time:
  • Inputs that are not individually incorrect can lead to a system malfunction.
  • This is often the result of operator error.
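A common defence against context-sensitive commands is to make the operating state explicit and reject commands that are individually valid but unsafe in the current state. The sketch below is hypothetical; the states, commands and landing-gear example are invented, not taken from the slides.

```python
# Hypothetical guard against context-sensitive commands (illustrative only).
# "retract_gear" is a perfectly valid command -- but not while on the ground.
ALLOWED = {
    "on_ground": {"extend_gear", "brake"},
    "airborne":  {"retract_gear", "extend_gear"},
}

def execute(command, state, actuators):
    if command not in ALLOWED[state]:
        raise ValueError(f"{command!r} rejected in state {state!r}")
    actuators[command]()    # only state-appropriate commands reach the hardware
```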

Safety terminology [figure]

Security
• The security of a system is a system property that reflects the system’s ability to protect itself from accidental or deliberate external attack (e.g. viruses, unauthorised use of system services, unauthorised modification of data).
• Security is becoming increasingly important as systems are networked, so that external access to the system through the Internet is possible.
• Security is an essential pre-requisite for availability, reliability and safety.

Fundamental security
• If a system is a networked system and is insecure, then statements about its reliability and its safety are unreliable.
• Those statements depend on the executing system and the developed system being the same. However, intrusion can change the executing system and/or its data.
• Therefore, the reliability and safety assurance is no longer valid.
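One standard way to re-establish the link between the verified system and the executing system is an integrity check: compare a cryptographic hash of the deployed artifact with the hash recorded at verification time. A minimal sketch, assuming a SHA-256 digest was recorded when the system was validated; the path and digest below are placeholders.

```python
# Minimal integrity check (illustrative only): verify that the executing
# artifact is byte-identical to the one that was verified.
import hashlib

VERIFIED_SHA256 = "..."   # digest recorded at verification time (placeholder)

def matches_verified_build(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == VERIFIED_SHA256
```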

Security terminology [figure]

Damage from insecurity
There are three types of damage that may be caused through external attack:
• Denial of service: the system is forced into a state where normal services are unavailable.
• Corruption of programs or data: the programs or data in the system may be modified in an unauthorised way.
• Disclosure of confidential information: confidential information that is managed by the system may be exposed to people who are not authorised to read or use it.

Key points
• A critical system is a system where failure can lead to high economic loss, physical damage or threats to life.
• The dependability of a system reflects the user’s trust in that system.
• The availability of a system is the probability that it will be available to deliver services when requested.
• The reliability of a system is the probability that system services will be delivered as specified.
• Reliability and availability are generally seen as necessary but not sufficient conditions for safety and security.
• Reliability is related to the probability of an error occurring in operational use. A system with known faults may still be reliable.
• Safety is a system attribute that reflects the system’s ability to operate without threatening people or the environment.
• Security is a system attribute that reflects the system’s ability to protect itself from external attack.
• Dependability improvement requires a socio-technical approach to design, in which you consider the humans as well as the hardware and software.