Extreme Risk

What an Officer of the Soviet Strategic Missile Forces Taught Us About Extreme Risk

Martin Bartels

9 August 2020

 

Almost nobody noticed what happened on 26 September 1983

In 1983 the Cold War had reached a new peak. The great nuclear powers had found no way out of the logic of mutual threat, which promised the complete destruction of far more than the two countries themselves. Both superpowers followed the “MAD” principle (“Mutually Assured Destruction”).

https://www.britannica.com/topic/mutual-assured-destruction

That meant that each of them had the capability to obliterate the other and could still launch a powerful second strike even after the destruction of its own missile bases. This equilibrium guaranteed security only as long as nothing unforeseen upset the balance of power. The Soviet Union relied on “Oko”, a satellite system designed to detect the launch of ballistic missiles from US territory. On 26 September 1983 Lieutenant Colonel Stanislav Yevgrafovich Petrov

https://commons.wikimedia.org/wiki/File:Stanislaw-jewgrafowitsch-petrow-2016.jpg?uselang=ru

was the duty officer of the Serpukhov-15 bunker command centre.

https://en.wikipedia.org/wiki/Serpukhov-15

Shortly after midnight the Oko system reported the launch of first one and then four more intercontinental ballistic missiles from bases in Montana.

Once a nuclear attack was detected, the Soviet leadership had 28 minutes to decide on a counterattack. The logical consequence of a confirmed nuclear attack against the USSR would have been the launch of the country's entire land-based arsenal against the United States and NATO members, which would have immediately triggered lethal NATO counter-strikes from submarines.

The officer suspected a false alarm, but he did not have enough information to be certain. He decided to deviate from military protocol: he reported a failure of the warning system to the Soviet leadership, presenting it as a fact rather than as his professional assessment.

His report ensured that there was no “retaliation”. As it turned out, the report was also correct in substance: the warning system had misinterpreted sunlight reflecting off clouds near Malmstrom Air Force Base as missile launches.

https://www.malmstrom.af.mil/

The incident became known to the public only in 1998. Stanislav Petrov was surprised by the numerous honours and expressions of deep gratitude from all over the world. He lived modestly in Fryazino (Moscow oblast) until his death on 19 May 2017.

https://www.youtube.com/watch?v=quM5obcn8R0

 

Petrov’s criteria

Several interviews conducted many years later clearly show the considerations that guided Stanislav Petrov during the decisive 28 minutes:

- He was aware that an exchange of nuclear strikes between the US and the USSR would quickly wipe out vast parts of the planet and contaminate the rest.

- The firing of only 5 missiles from Montana seemed implausible to him.

- He had doubts about the reliability of the Oko system.

- It was clear to him that reporting uncertainty about the warning system's alarm (i.e. the truth) would irrevocably push the leadership towards an unpredictable decision. At the top stood Yuri Andropov, who was already terminally ill.

- It was clear to him that by giving a report which was, strictly speaking, incorrect at that moment, because it stated his assessment as a fact, he alone would predetermine the military decision-making process, and there would be no nuclear strike by the USSR.

The statements Stanislav Petrov made years later are available on the Internet. They show a professional with a strong sense of responsibility whose human integrity was beyond any doubt.

However, they do not reveal the extreme tension this man must have endured during those 28 minutes while he somehow continued to make rational decisions.

https://www.youtube.com/watch?v=quM5obcn8R0&t=219s

https://ru.wikipedia.org/wiki/%D0%9F%D0%B5%D1%82%D1%80%D0%BE%D0%B2,_%D0%A1%D1%82%D0%B0%D0%BD%D0%B8%D1%81%D0%BB%D0%B0%D0%B2_%D0%95%D0%B2%D0%B3%D1%80%D0%B0%D1%84%D0%BE%D0%B2%D0%B8%D1%87

 

Heroism

Heroes appear in critical situations such as wars or natural disasters. They fight for ideals, right or wrong, often for years. Some sacrifice their lives and are revered afterwards. Sometimes their deeds are re-evaluated post mortem because the ideals of societies change.

A decision-making process of 28 minutes that saves humanity does not fit the classic model. Stanislav Petrov saw himself as a rational decision-maker, not as a hero. In his mind, the need to avert danger from humanity took precedence over his military duty and led him to make a report that contradicted it. His transgression saved us all.

Stanislav Petrov’s action has not been re-evaluated, and it will not be in the future.

 

A look into the uncertainty abyss

The Oko risk was unique in its magnitude. However, there are other hazardous situations that can also reach a huge scale and that involve a conflict between processes established with great authority on the basis of comprehensive risk analysis and factors that were not previously taken into account. These unforeseen factors can be particularly stupid.

An example is the incident at the Forsmark Nuclear Power Plant (Sweden) on 25 July 2006, where a short circuit in the electrical system, too simple to be included in the planning of risk scenarios, triggered an initially uncontrollable cascade of failures. The engineers were able to prevent a meltdown, but only with difficulty.

Once again, the rescuers were professionals who, despite the carefully formulated set of rules worked out for all eventualities, brought the situation under control and prevented a meltdown just in time.

https://en.wikipedia.org/wiki/Forsmark_Nuclear_Power_Plant

History shows that in many incidents the causes of the greatest dangers arise at a surprisingly low technical level.

https://www.history.com/news/historys-worst-nuclear-disasters

https://bellona.org/news/nuclear-issues/accidents-and-incidents/2011-11-vacuum-cleaner-that-sparked-fire-at-swedish-nuclear-reactor-may-lead-to-plants-closure

However, the small number of documented cases does not allow the conclusion that such “absurd causes” follow any statistical regularity.

 

Does human discretion make the world safer?

Let us start with the simple side of this question: when there is a series of numbers so long and convincing that even the most vigilant statistician would classify it as solid, we can compare the degrees of certainty under different sets of circumstances.

A contemporary example is the question of the safety of autonomous driving: for almost every country we have plenty of data on the frequency and causes of car accidents under existing circumstances (car types, technical equipment of the cars, road conditions, weather, road signs, intoxication, the driver's health . . .). These figures make it possible to devise measures that address the factors identified as dangerous and thereby reduce the number of accidents. This works well.

The idea of letting go of the steering wheel and putting a computer in charge of the car may make some of us shiver. However, as soon as enough statistical data is available on the new technical environment (e.g. the failure rate of fast Internet connections), we approach the point at which autonomous driving becomes statistically safer than steering by licensed human drivers. Carefully compiled statistics make it possible to deal with autonomous driving in a responsible manner. The pragmatic outcome may well be that we hand the steering wheel over to the computer and intervene only in certain exceptional situations.
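
The kind of comparison described here can be made concrete with a minimal statistical sketch. The numbers below are entirely hypothetical, and the Poisson model with exact confidence intervals is only one reasonable way to frame the question of when an autonomous fleet can be called statistically safer than human drivers.

```python
# Hedged illustration with hypothetical numbers: comparing crash rates of a
# human-driven and an autonomously driven fleet per million kilometres,
# using a Poisson model with exact (chi-square based) confidence intervals.
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure: float, conf: float = 0.95):
    """Exact confidence interval for a Poisson rate (events per unit of exposure)."""
    alpha = 1.0 - conf
    lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / (2 * exposure)
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * exposure)
    return lower, upper

# Hypothetical observations: crashes and exposure in millions of vehicle-kilometres.
human_crashes, human_km = 4200, 1000.0   # 4.2 crashes per million km, huge exposure
auto_crashes, auto_km = 5, 1.5           # 3.3 crashes per million km, little exposure

h_lo, h_hi = poisson_rate_ci(human_crashes, human_km)
a_lo, a_hi = poisson_rate_ci(auto_crashes, auto_km)

print(f"human:      {human_crashes / human_km:.2f} per million km, 95% CI [{h_lo:.2f}, {h_hi:.2f}]")
print(f"autonomous: {auto_crashes / auto_km:.2f} per million km, 95% CI [{a_lo:.2f}, {a_hi:.2f}]")

# Only when the autonomous fleet's upper bound drops below the human fleet's
# lower bound does this simple criterion call it statistically safer.
if a_hi < h_lo:
    print("Autonomous driving is statistically safer under this model.")
else:
    print("Not enough evidence yet; more kilometres of exposure are needed.")
```

With the small hypothetical exposure above, the interval for the autonomous fleet is far too wide to support any conclusion; only large amounts of real driving data move the comparison onto solid statistical ground.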

At the other end of the spectrum lie the enormous risks. Here there can never be enough statistical data to reduce the uncertainty to the point where we can really sleep easily. Certainly, it improves safety to analyse the known technical correlations and to document presumably correct procedures in such a way that the technicians in charge are less exposed to errors.
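
One hedged way to see why the data can never be sufficient is the so-called rule of three: if a catastrophic event has never occurred in n independent opportunities, the 95% upper confidence bound on its probability per opportunity is still roughly 3/n. The operating history in the sketch below is invented for illustration.

```python
# Hedged illustration: the "rule of three" for events that have never occurred.
# Zero catastrophes in n independent opportunities still leaves a 95% upper
# confidence bound of roughly 3/n on the per-opportunity probability.
def upper_bound_95(n_opportunities: int) -> float:
    """Approximate 95% upper bound on an event probability after n event-free trials."""
    return 3.0 / n_opportunities

# Hypothetical record: 40 years of daily operation without a catastrophic failure.
n_days = 40 * 365
bound = upper_bound_95(n_days)
print(f"After {n_days} event-free days, the catastrophe probability per day")
print(f"can still be as high as {bound:.5f} (95% bound),")
print(f"i.e. up to roughly one catastrophe every {1 / bound:.0f} days.")
```

Even a flawless operating record of several decades therefore cannot shrink the statistical bound on a catastrophe to something negligible; for risks of this magnitude, the comfort offered by the data is largely illusory.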

On the other hand, such rulebooks can also pose an additional danger: we humans have a strange tendency to trust rules too easily and too willingly. Even if (or precisely because) they are very complicated and sometimes incomprehensible, we give too much credit to those who wrote them, and we easily develop an unfounded feeling of security.

The belief in the wisdom and protective power of sophisticated rulebooks and hierarchies is a refutable belief. It has been refuted time and again. And it will be refuted again.

People with sound knowledge and experience, a solid ethical awareness and robust nerves, who can quickly identify measurement errors and implausible technical sequences of events, and who understand the relative value of predefined processes, must have the authority to intervene. This increases safety considerably but does not guarantee it.

 

Better stay away completely

We must not rely on the hope that in the end there will always be a Stanislav Petrov ready to intervene. From a functional point of view, hope is the expectation of something desirable but unlikely. Therefore, hope has no role to play in risk management. If a risk can only be mastered through luck or through the exceptional skills of specially qualified and ethically strong human beings, we fare best if we do not take it at all.