Safety-critical system

A safety-critical system [1] or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:

  • death or serious injury to people
  • loss or severe damage to equipment / property
  • environmental harm

A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people and/or environment involved. [2] Safety-related systems are those that do not have full responsibility for controlling the risk of death, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive (HSE) in the United Kingdom. [3]

Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10^9) hours of operation. [4] [5] Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based.
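To illustrate how fault tree analysis combines component failure probabilities into a probability for a top-level hazard, here is a minimal sketch. The component names and per-hour probabilities are invented for illustration and are not real reliability data:

```python
# Minimal fault-tree sketch: combine independent component failure
# probabilities with AND/OR gates. All names and numbers here are
# invented for illustration, not real reliability figures.

def and_gate(*probs):
    """Top event occurs only if ALL inputs fail (independent events)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """Top event occurs if ANY input fails (independent events)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical per-hour failure probabilities.
primary_pump = 1e-6
backup_pump = 1e-4
alarm = 1e-5

# Pumping is lost only if both pumps fail (AND gate); the hazard
# requires pumping lost AND the alarm silent (another AND gate).
pumping_lost = and_gate(primary_pump, backup_pump)
hazard = and_gate(pumping_lost, alarm)
print(f"P(hazard per hour) = {hazard:.3e}")
```

Under the independence assumption the hazard probability is simply the product 1e-6 × 1e-4 × 1e-5 = 1e-15 per hour; real probabilistic risk assessments must also account for common-cause failures, which this sketch ignores.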

Reliability regimes

Several reliability schemes for safety-critical systems exist:

  • Fail-operational systems continue to operate when their control systems fail. Examples of these include elevators , the gas thermostats in most home furnaces, and passively safe nuclear reactors . Fail-operational mode is sometimes unsafe. Nuclear weapons launch-on-loss-of-communications was rejected as a control system for the US nuclear forces because it is fail-operational: a loss of communications would cause launch, so this mode of operation was considered too risky. This is contrasted with the fail-deadly behavior of the Perimeter system built during the Soviet era. [6]
  • Fail-soft systems are able to continue operating on an interim basis with reduced efficiency in case of failure. [7] Most spare tires are an example of this: they usually come with certain restrictions (e.g. a speed restriction) and lead to lower fuel economy. Another example is the “Safe Mode” found in most Windows operating systems.
  • Fail-safe systems become safe when they cannot operate. Many medical systems fall into this category. For example, an infusion pump can fail, and as long as it alerts the nurse and ceases pumping, it won’t threaten the loss of life because its safety interval is long enough to permit a human response. In a similar vein, an industrial or domestic burner controller can fail, but must fail in a safe mode (i.e. turn combustion off when it detects faults). Famously, nuclear weapon systems that launch only on command are fail-safe, because if the communications systems fail, launch cannot be commanded. Railway signaling is designed to be fail-safe.
  • Fail-secure systems maintain maximum security when they cannot operate. For example, while fail-safe electronic doors unlock during power failures, fail-secure ones lock, keeping an area secure.
  • Fail-passive systems continue to operate in the event of a system failure. An example includes an aircraft autopilot: in the event of a failure, the aircraft would remain in a controllable state and allow the pilot to take over, complete the journey and perform a safe landing.
  • Fault-tolerant systems avoid service failure when faults are introduced to the system. An example may include control systems for ordinary nuclear reactors. The normal method of fault tolerance is to continually self-test the parts of a system and switch on hot spares for failing subsystems. As long as faulty subsystems are repaired or replaced at normal maintenance intervals, these systems are considered safe. The computers, power supplies and control terminals used by human operators must all be duplicated in these systems in some fashion.
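The fault-tolerant pattern described above, continual self-testing with a switch to hot spares and a fail-safe fallback, can be sketched as follows. The unit names and the trivial health check are illustrative assumptions, not part of any real control system:

```python
# Sketch of a fault-tolerant hot-spare arrangement: the controller
# continually self-tests the active unit and switches to a spare on
# failure; if no healthy unit remains, it falls back to a fail-safe
# shutdown. Unit names and the health check are illustrative only.

class Unit:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def self_test(self):
        # A real system would run built-in diagnostics here.
        return self.healthy

class Controller:
    def __init__(self, units):
        self.units = list(units)

    def step(self):
        """Return the name of the unit in service, or a fail-safe state."""
        for unit in self.units:
            if unit.self_test():
                return unit.name
        # No healthy unit left: enter the fail-safe regime.
        return "FAIL-SAFE SHUTDOWN"

primary = Unit("primary")
spare = Unit("hot-spare")
ctrl = Controller([primary, spare])

print(ctrl.step())        # primary passes its self-test and serves
primary.healthy = False   # inject a fault detected by the self-test
print(ctrl.step())        # controller switches to the hot spare
spare.healthy = False
print(ctrl.step())        # no healthy unit: fail-safe shutdown
```

Note how the last branch combines two of the regimes above: the system is fault-tolerant while spares remain, and fail-safe once they are exhausted.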

Software engineering for safety-critical systems

Software engineering for safety-critical systems is particularly difficult. There are three aspects which can be applied to aid the engineering of software for life-critical systems. First is process engineering and management. Second is selecting the appropriate tools and environment for the system; this allows the system developer to effectively test the system by emulation and observe its effectiveness. Third is addressing legal and regulatory requirements, such as FAA requirements for aviation. By setting a standard to which the system is required to conform, designers are forced to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for the automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system and a compiler, and then generate the system’s code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements. [8] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.
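One concrete tactic behind the "code, inspect, test, verify" approach is to encode a safety requirement as an executable check that tests and verification tools can exercise directly. A minimal, hypothetical sketch (the rate limits, names and exception type are invented for illustration, not taken from any real device standard):

```python
# Hypothetical sketch: a safety requirement ("infusion rate must stay
# within a prescribed range") encoded as an executable check. The
# limits and names are invented for illustration only.

MIN_RATE_ML_PER_HR = 0.1
MAX_RATE_ML_PER_HR = 999.0

class SafetyViolation(Exception):
    """Raised when a command would violate a safety requirement."""
    pass

def set_infusion_rate(rate_ml_per_hr):
    """Accept a rate command only if it satisfies the safety requirement."""
    if not (MIN_RATE_ML_PER_HR <= rate_ml_per_hr <= MAX_RATE_ML_PER_HR):
        # Fail safe: reject the command instead of pumping out of range.
        raise SafetyViolation(f"rate {rate_ml_per_hr} ml/h out of bounds")
    return rate_ml_per_hr

print(set_infusion_rate(50.0))   # in range: accepted
try:
    set_infusion_rate(5000.0)    # out of range: rejected
except SafetyViolation as err:
    print("rejected:", err)
```

Because the requirement lives in code rather than only in a document, it can be unit-tested, traced to the specification, and, with formal-methods tooling, proven to hold on every path that commands the pump.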

Examples of safety-critical systems


  • Circuit breaker
  • Emergency services dispatch systems
  • Electricity generation , transmission and distribution
  • Fire alarm
  • Fire sprinkler
  • Fuse (electrical)
  • Fuse (hydraulic)
  • Telecommunications
  • Burner control systems

Medicine [9]

The technology requirements can go beyond avoidance of failure, and can even facilitate medical intensive care (which deals with healing patients), as well as life support (which is for stabilizing patients).

  • Heart-lung machines
  • Mechanical ventilation systems
  • Infusion pumps and insulin pumps
  • Radiation therapy machines
  • Robotic surgery machines
  • Defibrillator machines
  • Dialysis machines
  • Devices that electronically monitor vital functions (electrography, especially electrocardiography, ECG or EKG, and electroencephalography, EEG)
  • Medical imaging devices ( X ray , computerized tomography – CT or CAT, different magnetic resonance imaging – MRI – techniques, positron emission tomography – PET)
  • Even healthcare information systems have significant safety implications [10]

Nuclear engineering [11]

  • Nuclear reactor control systems

Recreation

  • Amusement rides
  • Climbing equipment
  • Parachutes
  • Scuba equipment


Railway [12]

  • Railway signaling and control systems
  • Platform detection to control train doors [13]
  • Automatic train stop [13]

Automotive [14]

  • Airbag systems
  • Braking systems
  • Seat belts
  • Power steering systems
  • Advanced driver-assistance systems
  • Electronic throttle control
  • Battery management system for hybrid and electric vehicles
  • Electric park brake
  • Shift by wire systems
  • Drive by wire systems
  • Park by wire

Aviation [15]

  • Air traffic control systems
  • Avionics , particularly fly-by-wire systems
  • RAIM Radio Navigation
  • Engine control systems
  • Aircrew life support systems
  • Flight planning to determine fuel requirements

Spaceflight [16]

  • Human spaceflight vehicles
  • Rocket range launch safety systems
  • Launch vehicle safety
  • Crew rescue systems
  • Crew transfer systems

See also

  • Safety-related system
  • Safety-Critical Systems Club
  • Mission critical
  • Reliability theory
  • Reliable system design
  • Redundancy (engineering)
  • Factor of safety
  • Nuclear reactor
  • Biomedical engineering
  • SAPHIRE (risk analysis software)
  • Formal methods
  • Therac-25
  • Zonal Safety Analysis


References

  1. ^ “Safety-critical system”. Retrieved 15 April 2017.
  2. ^ “FAQ – Edition 2.0: E) Key concepts”. IEC 61508 – Functional Safety. International Electrotechnical Commission. Retrieved 23 October 2016.
  3. ^ “Part 1: Key guidance”. Managing competence for safety-related systems (PDF). UK: Health and Safety Executive. 2007. Retrieved 23 October 2016.
  4. ^ AC 25.1309-1A
  5. ^ Bowen, Jonathan P. (April 2000). “The Ethics of Safety-Critical Systems”. Communications of the ACM. 43 (4). pp. 91–97. doi:10.1145/332051.332078.
  6. ^ “Inside the Apocalyptic Soviet Doomsday Machine”. WIRED.
  7. ^ “Fail-soft definition”.
  8. ^ Bowen, Jonathan P.; Stavridou, Victoria (July 1993). “Safety-critical systems, formal methods and standards”. Software Engineering Journal. 8 (4). IEE/BCS. pp. 189–209. doi:10.1049/sej.1993.0025.
  9. ^ “Medical Device Safety System Design: A Systematic Approach”.
  10. ^ Anderson, R. J.; Smith, M. F. (eds.). Confidentiality, Privacy and Safety of Healthcare Systems. Special edition of Health Informatics Journal, December 1998.
  11. ^ “Safety of Nuclear Reactors”.
  12. ^ “Safety-Critical Systems in Rail Transportation” (PDF). Retrieved 2016-10-23.
  13. ^ [1]
  14. ^ “Safety-Critical Automotive Systems”.
  15. ^ Rierson, Leanna. Developing Safety-Critical Software: A Practical Guide for Aviation Software and DO-178C Compliance. ISBN 978-1-4398-1368-3.
  16. ^ “NASA Procedures and Guidelines: NPG: 8705.2” (PDF). June 19, 2003. Retrieved 2016-10-23.
