How does mutually assured destruction work?

"America needs a dead hand" for nuclear deterrence

Scientists associated with the US Air Force are proposing an autonomous AI system because new weapons such as hypersonic missiles no longer leave a window for human decision-making

Curtis McGiffin, Vice Dean of the Air Force Institute of Technology, and Adam Lowther of the Louisiana Tech Research Institute, which is also affiliated with the US Air Force, have made an idiosyncratic proposal in an article. Under the title "America needs a 'dead hand'", they write that in the new arms race the USA must develop an "automated strategic response system based on artificial intelligence". The American NC3 system (nuclear command, control, and communications) largely dates from the Cold War, when there was still sufficient advance warning time, even if it had most recently shrunk to 15 minutes for missiles launched from submarines.

Hypersonic missiles, stealth cruise missiles, and AI would shrink this time span to such an extent that the US president could no longer make sensible decisions, or that any "human in the loop" would only impede the required reaction speed. Specifically named are new Russian weapons under development such as the Kalibr-M and Kh-102 cruise missiles, the nuclear-powered underwater drone Poseidon, and the maneuverable hypersonic glide vehicle Avangard (Object 4202), which could render the American NC3 system ineffective (dangerous arms race with hypersonic missiles).

The Soviet "Dead Hand" for the balance of horror

It is thus a matter of adopting, on the basis of autonomous systems, the Soviet concept of the "dead hand" from the Cold War. The USA had also developed NC3 systems during the Cold War that were based on AI, but which allegedly could not trigger an attack automatically. With this concept, the Soviet Union wanted a system to ensure that if the Soviet leadership were taken out by a first strike, a nuclear counterattack would still be launched automatically, in order to maintain mutually assured destruction (MAD).

With the "dead hand" or "perimeter", the Soviet leadership reacted to the development of precision missiles that could enable a decapitation attack on the political and military leadership or interrupt their communication with the strategic forces in order to restore the balance of terror. The adversary - the USA - could assume that it would be able to eliminate the leadership, but it would trigger a counterattack. The system is said to have been operational in 1985. How far the Soviet Union fully implemented the system and whether Russia continued it is not known.

In any case, it also serves the balance of terror, which has been shaken ever since the installation of the American missile defense shield and the subsequent Russian rearmament, up to the development of hypersonic missiles. In 2011 the commander-in-chief of the Russian Strategic Missile Forces, Sergei Karakaev, assured that the system was still active. Viktor Yesin, commander of Russia's strategic missile forces in the 1990s, confirmed once again last year, in an interview with the weekly magazine Zvezda, that the system was working and had been "modernized". He also warned that if the US were to deploy missiles in Europe, Russia could abandon its doctrine of a retaliatory strike and adopt a doctrine of pre-emptive strikes.

At the core of the Russian Perimetr system, which is switched off in peacetime, are command missiles equipped with radio transmitters, kept ready for launch at many sites in hardened and camouflaged silos. Once the alarm has been triggered, they are launched automatically and transmit activation codes directly to all nuclear missiles if no order to call off the retaliatory strike arrives within a certain time. Perimetr, whose facilities are suspected to be in a command bunker, processes a great deal of information from early warning systems, military communications, seismic activity, radiation levels, and so on.

As soon as a nuclear explosion is detected through sudden changes in certain parameters, the system checks four conditions one after another. The first step is to test whether there is still a connection to the General Staff. If it is available, the system stands down automatically. If the connection to the General Staff has failed, connections to other command levels are checked; here, too, the alarm is canceled if a connection exists. If not, the system switches to attack mode, establishes contact with all nuclear attack capacities, and allegedly allows a period of 45 minutes in which the attack, that is, first the launch of the command missiles, could still be called off. After that the command missiles are launched and nothing more can be done. The entire process is supposed to be completed within an hour.
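The sequence described here can be summarized as a simple decision cascade. The following Python sketch is purely illustrative: it only restates the public account above, and the sensor, link, and launch interfaces as well as the timing values are hypothetical placeholders, since nothing about the real system's implementation is publicly known.

```python
import time

ABORT_WINDOW_SECONDS = 45 * 60  # alleged 45-minute window in which the strike can still be called off


def perimetr_cycle(sensors, links, launch_command_missiles, abort_requested):
    """One decision cycle after activation in a crisis (illustrative sketch only)."""
    # 1. Has a nuclear detonation been detected (seismic activity, radiation, etc.)?
    if not sensors.nuclear_detonation_detected():
        return "standby"

    # 2. Is the General Staff still reachable? If so, the system stands down.
    if links.general_staff_reachable():
        return "stand down"

    # 3. Are other command levels reachable? A working link also cancels the alarm.
    if links.other_command_levels_reachable():
        return "stand down"

    # 4. Attack mode: allegedly a 45-minute window remains in which the launch
    #    of the command missiles can still be called off.
    deadline = time.monotonic() + ABORT_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if abort_requested():
            return "aborted"
        time.sleep(1)

    # The command missiles fly and broadcast activation codes to the strategic
    # forces; from this point on, nothing more can be done.
    launch_command_missiles()
    return "retaliation launched"
```

In the real system the checks on detonation and communication links would presumably run continuously in parallel; the linear form is only meant to make the described order of the conditions visible.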

"The prophetic images of SF films quickly become reality"

The concept of the "dead hand" still shows a certain restraint, even if it really can run automatically. It would be activated by the military only when a crisis situation arises, and would only carry out its plan if a nuclear attack actually occurred. Lowther and McGiffin are of the opinion that, in order to maintain the deterrent effect, the USA would have to develop an automatic, AI-based system that decides at lightning speed, on the basis of the initial situation and without human involvement, which reaction is appropriate. It could then possibly also react preventively.

The authors concede that this invites comparisons with Dr. Strangelove, the NORAD computer WOPR (War Operation Plan Response) from the film "War Games", or Skynet from the film "Terminator", "but the prophetic images of these SF films quickly become reality". The authors imagine that the president determines reactions to certain situations in advance and that the AI system alone "detects an attack, decides which reaction is appropriate (based on previously decided options) and then steers an American reaction".

It is emphasized that this would then be much more than just an automated system like the "dead hand", because "the system itself will determine the reaction based on its own assessment of the coming threat". We have already described that such autonomous AI systems will ultimately become unavoidable as a result of nuclear armament: Hypersonic weapons force an arms race between the autonomous systems.
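To make the distinction concrete, here is a purely illustrative Python sketch: a classic dead-hand mechanism fires one pre-programmed response once its trigger conditions hold, while the system proposed by Lowther and McGiffin would assess the threat itself and then choose among options approved by the president in advance. The option names, thresholds, and the shape of the threat assessment are invented for this sketch and describe no real system.

```python
def dead_hand_response(attack_confirmed: bool) -> str:
    """Fixed automation: one pre-programmed outcome once the trigger holds."""
    return "full retaliatory strike" if attack_confirmed else "stand down"


def ai_nc3_response(threat: dict, approved_options: set[str]) -> str:
    """Select among pre-approved options based on the system's own assessment."""
    # The human role is exhausted before the crisis: the president approves the
    # option set in advance; during the attack the system decides on its own.
    if threat["confidence"] < 0.9:
        return "hold and gather more sensor data"
    if threat["scale"] == "limited" and "proportional response" in approved_options:
        return "proportional response"
    return "full retaliatory strike"


# Example: a high-confidence but limited attack picks the proportional option.
print(ai_nc3_response({"confidence": 0.95, "scale": "limited"},
                      {"proportional response", "full retaliatory strike"}))
```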

The authors go through three possible options for maintaining deterrence: responding only after an attack, with an enhanced and modified second-strike capability; improving the warning systems so that an attack is detected even before a nuclear weapon is launched, which could lead to a pre-emptive strike (which supposedly contradicts "American values"); or modernizing the nuclear arsenal in order to shorten the adversary's reaction time and bring him to the negotiating table for disarmament talks. But all three options have their disadvantages; above all, China and Russia are continuing their modernization programs anyway, and the "moral dilemmas" are not of the kind that would "keep Americans from sleeping" either.

The AI-based system could at least solve the problem of the shrinking time window. However, the authors concede that AI can cause many problems, and one must also take into account that the AI developers may no longer be able to control their product. Every option carries risks, but the USA should not simply keep replacing old weapon systems with newer ones; rather, nuclear deterrence must be fundamentally rethought and addressed today.

Or maybe it is just about pretending to have an autonomous AI system in order to deter opponents. Who knows whether Perimetr really works. After all, even an AI-based NC3 system would only have simulated data from which to learn and with which to run through scenarios. From the outset it would be encapsulated in a bubble of possibilities, without any empirical experience. Not a good starting point for a "doomsday machine" that could make the world uninhabitable for humans.
