bits wide, instead of the usual 32. This caused a message database to become corrupted. The programmers and operators, working under tremendous pressure to solve the problem quickly, accidentally copied the corrupt copy of the database over the backup, instead of the other way around.”
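
The failure mode in this fragment, values written into fields narrower than their type, is easy to reproduce in miniature. The following C sketch is purely hypothetical (the incident's actual code and data layout are not public): storing a 32-bit value in a 24-bit field silently drops the high-order bits, corrupting any value of 2^24 or more.

<code c>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t message_id = 20000000;           /* needs 25 bits to represent */
    uint32_t stored = message_id & 0xFFFFFF;  /* a 24-bit field keeps only the low 24 bits */
    printf("original: %" PRIu32 "\n", message_id);
    printf("stored:   %" PRIu32 "\n", stored);  /* prints 3222784: silent corruption */
    return 0;
}
</code>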
  
==== The all-time most expensive single-character error? ====
  
 "Date: Sat 10 Sep 88 15:22:49-PDT\\ "Date: Sat 10 Sep 88 15:22:49-PDT\\
From: Gary Kremen\\
Subject: Soviets See Little Hope of Controlling Spacecraft\\
According to today's (Saturday, September 10, 1988) New York Times, the Soviets lost their Phobos I spacecraft after it tumbled in orbit and the solar cells lost power. The tumbling was caused when a ground controller gave it an improper command. This has to be one of the most expensive system mistakes ever. [estimated at $100,000,000]
  
Date: Tue, 13 Sep 88 15:20:18 PDT\\
From: Peter G. Neumann\\
Subject: Soviet Mars Probe\\
The Soviet Mars probe was mistakenly ordered to ‘commit suicide’ when ground control beamed up a 20 to 30 page message in which a single character was inadvertently omitted. The change in program was required because the Phobos 1 control had been transferred from a command center in the Crimea to a new facility near Moscow. ‘The [changes] would not have been required if the controller had been working the computer in Crimea.’ The commands caused the spacecraft's solar panels to point the wrong way, which would prevent the batteries from staying charged, ultimately causing the spacecraft to run out of power."

==== The Ariane 5 failure ====
  
Quoted from the report of the Ariane Inquiry Board:
  
“On 4 June 1996, the maiden flight of the Ariane 5 launcher ended in a failure. Only about 40 seconds after initiation of the flight sequence, at an altitude of about 3700 m, the launcher

 The AT&T system failure was triggered by a system upgrade. This is a very common hazard. It is quite difficult to establish through prior testing that an upgrade to a complex systems will not have an unexpected and possibly disastrous effect—as in a The AT&T system failure was triggered by a system upgrade. This is a very common hazard. It is quite difficult to establish through prior testing that an upgrade to a complex systems will not have an unexpected and possibly disastrous effect—as in a
 recent incident at the Canadian Bank of Commerce, which for two days could not record deposits or payments through its automatic teller machines because, as a newspaper report had it, "attempts by the bank to improve one of their computer systems caused problems with another system, throwing everything out of whack." recent incident at the Canadian Bank of Commerce, which for two days could not record deposits or payments through its automatic teller machines because, as a newspaper report had it, "attempts by the bank to improve one of their computer systems caused problems with another system, throwing everything out of whack."
  
==== Testing is not proof ====
  
 "Date: Fri, 18 Jul 1997 14:10:53 -0400 (EDT)\\ "Date: Fri, 18 Jul 1997 14:10:53 -0400 (EDT)\\
-From: "Daniel P. B. Smith" <dpbsmith@world.std.com>\\+From: "Daniel P. B. Smith"\\
Subject: Unique definition of "proof of correctness"\\
*Computer Design*, July 1997, p. 38, describes the IDT-C6, a

Techniques that are lacking, notes Price, include the use of formal methodologies for designing and writing software."
  
==== Ambulance dispatch system failure ====
  
In November 1992, the computerized dispatch system for the London (England) Ambulance Service broke down suddenly and “locked up altogether”, leaving calls for assistance that had already been stored in the system inaccessible, and forcing a taxing and cumbersome return to a manual, paper-based dispatch system. Luckily, the breakdown occurred in the early morning hours, so there were no severe consequences for patients.

gradually used up all available memory thus causing the system to crash. This programming error should not have occurred and was caused by carelessness and lack of quality assurance of program code changes. Given the nature of the fault it is unlikely that it would have been detected through conventional programmer or user testing."
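
The defect class described in the report, a slow leak that surfaces only after hours of operation, can be illustrated in a few lines. The C sketch below is hypothetical (the dispatch system's source was never published, and all names are invented): a long-running loop allocates a record for each incident and never releases it, so memory consumption grows without bound until allocation fails.

<code c>
#include <stdlib.h>

/* Hypothetical sketch of the defect class, not the actual dispatch
   code: each incoming incident allocates a record that is never freed. */
struct incident {
    int  id;
    char details[512];
};

int main(void) {
    for (int id = 0; ; id++) {
        struct incident *rec = malloc(sizeof *rec);
        if (rec == NULL)
            abort();          /* all memory exhausted: the "lock-up" */
        rec->id = id;
        /* ... assign an ambulance, log the call ... */
        /* Missing: free(rec); -- each iteration leaks the record. */
    }
}
</code>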
  
==== Software that kills ====

Leveson and Turner warn against simply blaming tragedies such as the Therac-25 accidents on software errors alone—they argue for a deeper analysis of “organizational, managerial, technical” and even “political, or sociological factors” on the grounds that, as we have already observed, there will always be another bug; so focusing only on eliminating individual software errors may blind us to considering the larger environment which tolerates and unintentionally furthers the occurrence of such errors with their drastic consequences.
  
True enough, but one of the underlying causes of unsafe software may be the general reluctance to apply formal methods and verification techniques, usually on the grounds that they are too difficult to automate and hence cannot be applied to “real code”. One purpose of this text is to argue by example against the validity of this excuse—to show that through the use of appropriate tools, portions of code can be verified automatically or semi-automatically, or can be generated automatically from logical specifications. Just as we would expect any author who uses a word-processor to use a spelling checker to check for at least those spelling errors which the dictionary can catch, so we should — in the future, if not today — expect programmers to routinely check or generate their code with software verification tools to catch and eliminate at least those errors which the tools can prevent. Even then programmers will not succeed in stamping out that vexatious and stubborn “last bug”, but at least its available hiding places will grow smaller, and both money and lives may be saved.
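
As a concrete taste of what tool-checked code can look like, here is a minimal sketch using ACSL, the contract-annotation language of the Frama-C verifier (one possible tool among several; nothing in the argument depends on this particular choice). A deductive-verification plug-in can prove, rather than merely test, that the function satisfies its contract for every input allowed by the precondition.

<code c>
#include <limits.h>

/* A minimal sketch of a machine-checkable specification, written as an
   ACSL contract.  A verifier can prove the contract holds for ALL
   inputs meeting the precondition, which no finite amount of testing
   can establish. */

/*@ requires x > INT_MIN;
    ensures  \result >= 0;
    ensures  \result == x || \result == -x;
*/
int abs_value(int x) {
    return (x < 0) ? -x : x;
}
</code>

With Frama-C's WP plug-in, for instance, ''frama-c -wp abs_value.c'' should discharge the resulting proof obligations automatically.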