
3/3/21: A.I. WARFARE – HIDING THE POWER BEHIND THE CURTAIN W/ CYRUS A. PARSA

Posted on March 3rd, 2021 by Clyde Lewis

MONOLOGUE WRITTEN BY CLYDE LEWIS

This morning I woke up and heard a sound coming from downstairs. The TV was on, and I contemplated whether or not I had turned it off last night when I went up to bed. I was surprised to find Liam downstairs watching the movie Pacific Rim. Pacific Rim is a great science fiction film in which Kaiju – monsters in the mold of the Japanese mega-monsters – rise up out of an inter-dimensional portal called “the Breach.” In order to fight these creatures, the military creates giant robots called Jaegers that are piloted by soldiers.

I asked him why he was up watching the film, and he said that he was getting ready for the new Godzilla vs. Kong film and contemplating the rumor that Mechagodzilla will show up and teach both Kong and Godzilla a lesson.

People have been rewinding the trailer of the new film and have already speculated that the mechanized version of Godzilla will show up as the ultimate metaphor for future warfare against the real monsters that wait to attack us.

Again, this is a sign that the technocracy is finding its way into everything – if not into genetic studies and biological warfare, then into autonomous, artificially intelligent killing machines.

In recent years, it has sometimes appeared that global politics is simply a choice between rival forms of technocracy. In China, it is a government of engineers backed up by a one-party state.

In the west, it is the rule of economists and central bankers, operating within the constraints of a democratic system. This creates the impression that the real choices are technical judgments about how to run vast, complex economic and social systems.

We must also conclude that major wars are in the planning, and left to the whims of the technocracy, the new battlefield will look more like the mecha wars seen in the science fiction films of yesterday. In the present, it is not out of the question to ask who or what you would rather have attacking a target – a human or an advanced Artificial Intelligence.

If diseases that resemble technology, like COVID-19, appear out of nowhere and the answer is to manipulate DNA to prevent them from killing you, you can imagine what the technocracy has created to use in the next war.

Now, it must be clearly stated that what we perceive as the dystopian future of The Fourth Industrial Revolution—AI, blockchain, digitalization, financialization, green capitalism and so on—can’t be separated from the invisible hand of the technocratic mob. It cannot be allowed to be defined by capitalist institutions as a “legitimate political topic” instead of what it really is.

It is all para-political. It is all beyond the scope of politics as usual because we have a government now that cares more about the opinion of scientific experts than it does the substantive processes of running a government for the people by the people.

It is inevitable that the science we are told to worship is the very science that creates machines that can kill us all. For the faceless scientists who pull the strings, it does not matter; as we have discussed before, science has pushed us close to extinction many times in history, and it will do so again as it focuses its attention on future warfare.

Before the atomic bomb was detonated in New Mexico, Enrico Fermi offered wagers on “whether or not the bomb would ignite the atmosphere and cause a chain reaction, and if so, whether it would merely destroy New Mexico, destroy the world, or even send a huge wave into space, creating turbulence and maybe taking out other planets.”

Fermi theorized that an explosion of great magnitude would signal possible alien races and that they could very well visit this planet to at least make note of the area where the bomb was first detonated.

Although many scientists would like to claim that the Fermi story is a myth, there was a definite concern that a thermonuclear reaction might trigger the fusion of nitrogen nuclei in the atmosphere causing a cascade effect.

Edward Teller, quite notably the father of the hydrogen bomb, raised the question:

“In exploding a nuclear fission weapon, was there a chance that the temperature of the blast could fuse together nuclei of light elements in the atmosphere, releasing further huge amounts of atomic energy (the reaction which would be used in later, larger nuclear weapons)? If so, a run-away chain reaction might occur, through which the entire atmosphere of planet Earth could be engulfed in a nuclear fusion explosion.”

Albert Einstein wrote in the Bulletin of the Atomic Scientists:

“The idea of achieving security through national armament is, at the present state of military technique, a disastrous illusion. On the part of the United States this illusion has been particularly fostered by the fact that this country succeeded first in producing an atomic bomb. The belief seemed to prevail that in the end it were possible to achieve decisive military superiority.

In this way, any potential opponent would be intimidated, and security, so ardently desired by all of us, would be brought to us and all of humanity. The maxim which we have been following during these last five years has been, in short: security through superior military power, whatever the cost.

This mechanistic, technical-military, psychological attitude had inevitable consequences. Every single act in foreign policy is governed exclusively by one viewpoint.”

The ghosts of Einstein and Teller should be haunting us by now, but of course, while Einstein was cautious about the use of such power, Teller was not as ethical.

He would later say that “It is not the scientist’s job to determine whether a hydrogen bomb should be constructed, whether it should be used, or how it should be used.”

Responsibility, however exercised, rested with the American people and their elected officials.

We all know what happened after that - Truman dropped the bombs on Japan.

Today, Elon Musk issued a warning about the new danger for future warfare, and that is Artificial Intelligence – not just any Artificial Intelligence, but autonomous war machines that are programmed to kill without conscience.

Musk was a guest speaker at South by Southwest, where he doubled down on his warning that Artificial Intelligence is far more dangerous than any nuclear warhead.

He stated that there needs to be a regulatory body overseeing the development of super intelligence.

It is not the first time Musk has made frightening predictions about the potential of Artificial Intelligence — he has, for example, called AI vastly more dangerous than North Korea and he has previously called for regulatory oversight.

Some have called his tough talk fear-mongering. Facebook founder Mark Zuckerberg said Musk’s doomsday AI scenarios are unnecessary and “pretty irresponsible.” And Harvard professor Steven Pinker also recently criticized Musk’s tactics.

Musk, however, is resolute, calling those who push against his warnings “fools.”

Furthermore, Musk said:

“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

Based on his knowledge of machine intelligence and its developments, Musk believes there is reason to be worried.

Musk pointed to machine intelligence playing the ancient Chinese strategy game Go to demonstrate the rapid growth in AI’s capabilities. For example, the London-based company DeepMind, which was acquired by Google in 2014, developed an artificial intelligence system, AlphaGo Zero, that learned to play Go without any human intervention – it learned simply from randomized play against itself. The Alphabet-owned company announced this development in a paper published last October.
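For the technically curious, here is a minimal sketch of the self-play idea, assuming nothing about DeepMind’s actual method (which pairs a deep neural network with Monte Carlo tree search). It is a toy program that teaches itself the stick game Nim purely through randomized play against itself:

```python
# A toy sketch of learning from self-play, NOT AlphaGo Zero's architecture.
# The game is misere Nim: 20 sticks, each player takes 1-3 per turn, and
# whoever takes the last stick loses.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(sticks_left, move)] -> learned value estimate
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def pick_move(sticks):
    moves = legal_moves(sticks)
    if random.random() < EPSILON:                    # sometimes play randomly
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])  # otherwise play greedily

for episode in range(50_000):
    sticks, history = 20, []
    while sticks > 0:                  # both "players" share the same table
        move = pick_move(sticks)
        history.append((sticks, move))
        sticks -= move
    reward = -1.0                      # the player who took the last stick lost
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward               # flip perspective for the other player

# With enough episodes, the learner should favor taking 3 from 20, leaving
# the opponent on a losing count (sticks % 4 == 1), with no human guidance.
print(max(legal_moves(20), key=lambda m: Q[(20, m)]))
```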

Musk worries AI’s development will outpace our ability to manage it in a safe way.

“So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.”

To do this, Musk recommended the development of artificial intelligence be regulated.

The application of AI in military systems has plagued the ethicist but excited certain leaders and inventors. Russian President Vladimir Putin has grandiloquently asserted that “it would be impossible to secure the future of our civilization” without a mastery of artificial intelligence, genetics, unmanned weapons systems and hypersonic weapons.

Campaigners against the use of autonomous weapons systems in war have been growing in number. The UN Secretary-General António Guterres is one of them.

“Autonomous machines with the power and discretion to select targets and take lives without human involvement,” he wrote on Twitter in March 2019, “are politically unacceptable, morally repugnant and should be prohibited by international law.”

The International Committee for Robot Arms Control, the Campaign to Stop Killer Robots and Human Rights Watch are also dedicated to banning lethal autonomous weapons systems. Weapons analysts such as Zachary Kallenborn see that absolute position as untenable, preferring a more modest ban on what he calls:

“The highest-risk weapons: drone swarms and autonomous chemical, biological, radiological, and nuclear weapons.”

The critics of such weapons systems were nowhere to be found in the draft report for Congress from the National Security Commission on Artificial Intelligence. The document has more than a touch of the mad scientist in the bloody service of a master.

Eric Schmidt, technical advisor to Alphabet Inc., the parent company of Google, speaks of Artificial Intelligence without any moral restraint:

“The AI promise – that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human – represents a competitive advantage in any field. It will be employed for military ends, by governments and non-state groups.”

In his testimony before the Senate Armed Services Committee on February 23, Schmidt was all about “fundamentals” in keeping the US ascendant. This involved preserving national competitiveness and shaping the military with those fundamentals in mind. But to do so required keeping the eyes of the security establishment wide open for any dangerous competitor.

Adm. Charles Richard, writing in the current issue of the U.S. Naval Institute’s journal Proceedings, offered a blunt and detailed assessment: the luxury of living in a post-Cold War era, when direct armed conflict with a rival nuclear power seemed impossible, is over.

The United States must be ready for a nuclear war with China or Russia and seek new ways to deter both countries’ use of newly acquired advanced AI strategic weapons.

“There is a real possibility that a regional crisis with Russia or China could escalate quickly to a conflict involving nuclear weapons, if they perceived a conventional loss would threaten the regime or state,” the four-star admiral wrote.

The Pentagon must shift from the principal assumption that the use of nuclear weapons is nearly impossible to one in which “nuclear employment is a very real possibility,” he urged in the new essay.

Government and military leaders need to better understand the new dangers of nuclear conflict and fashion new concepts of deterrence and — if needed — nuclear war-fighting strategies.

This includes new technologies such as hypersonic weapons and Artificial Intelligence.

The deployment of advanced strategic forces by China and Russia calls for greater action by the United States to bolster deterrence in the face of new threats. Deterring both nations through crises or ultimately nuclear war is being tested in ways not seen before, Adm. Richard said.

“Until we, as a Defense Department, come to understand, if not accept, what we are facing and what should be done about it, we run the risk of developing plans we cannot execute and procuring capabilities that will not deliver desired outcomes,” Adm. Richard argued. “In the absence of change, we are on the path, once again, to prepare for the conflict we prefer, instead of one we are likely to face.”

The sound of thunder is now being heard - are the warnings being heeded?

We know the technology exists and we know that the war of the future is now being planned in the present, especially with our newly elected Warhawk in power.

In terms of AI, the report states, “only the United States and China” have the necessary “resources, commercial might, talent pool, and innovation ecosystem to lead the world.” Within the next decade, Beijing could even surpass the United States as the world’s AI superpower.

The Armed Services community is now focusing its attention on the use of Artificial Intelligence in a hypothetical world war.

Two days of public discussion saw the panel’s vice chairman, Robert Work, extol the virtues of AI in battle. It was, he said, “a moral imperative to at least pursue this hypothesis,” claiming that “autonomous weapons will not be indiscriminate unless we design them that way.” The devil is in the human, as it has always been.

In a manner reminiscent of the debates about sharing atomic technology in the aftermath of the Second World War, the Committee urges that the US “pursue a comprehensive strategy in close coordination with our allies and partners for artificial intelligence (AI) innovation and adoption that promotes values critical to free and open societies.”

A proposed Emerging Technology Coalition of like-minded powers and partners would focus on the role of “emerging technologies according to democratic norms and values” and “coordinate policies to counter the malign use of these technologies by authoritarian regimes”. Fast forgotten is the fact that distinctions such as authoritarianism and democracy have little meaning at the end of a weapon.

Internal changes that will ruffle a few feathers are also suggested. The US State Department comes in for special mention as needing reform:

“There is currently no clear lead for emerging technology policy or diplomacy within the State Department, which hinders the Department’s ability to make strategic technology decisions.”

Allies and partners were confused when approaching the State Department as to “which senior official would be their primary point of contact” for a range of topics, be they AI, quantum computing, 5G, biotechnology or new emerging technologies.

Overall, the US government comes in for a battering, reproached for operating “at human speed not machine speed.” It was lagging relative to commercial development of AI. It suffered from “technical deficits that range from digital workforce shortages to inadequate acquisition policies, insufficient network architecture, and weak data practices.”

The official Pentagon policy, as it stands, is that autonomous and semi-autonomous weapons systems should be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

In February 2020, the Department of Defense adopted a set of ethical principles regarding the military use of AI, making the Joint Artificial Intelligence Center the focal point.

These include the provision that:

“DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

The “traceable” principle is also shot through with the principle of human control, with personnel needing to “possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities”.

The National Commission pays lip service to such protocols, acknowledging that operators, organizations and “the American people” would not support AI machines that were not “designed with predictability” and “clear principles” in mind. But the warning against being too morally shackled becomes a screech. Risk was “inescapable”, and not using AI “to solve real national security challenges risks putting the United States at a disadvantage”.

Especially when it comes to China.

Artificial Intelligence will certainly have a role in future military applications. It has many application areas where it will enhance productivity, reduce user workload, and operate more quickly than humans. Ongoing research will continue to improve its capability, explainability, and resilience.

Most of what occurs inside an AI system is a black box, and there is very little that a human can do to understand how the system makes its decisions. This is a critical problem for high-risk systems such as those that make engagement decisions or whose output may be used in critical decision-making processes. The ability to audit a system and learn why it made a mistake is legally and morally important. Additionally, the question of how we assess liability in cases where AI is involved remains an open research concern.
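To make the audit problem concrete, here is a minimal sketch, assuming a hypothetical model object that exposes a predict() method: a wrapper that records every input and decision to a log file. Logging is not an explanation of why the black box decided as it did, but it preserves the evidence needed to reconstruct a mistake after the fact:

```python
# A minimal sketch of decision audit logging, assuming a hypothetical model
# object exposing predict(features). This does not explain the model's
# reasoning; it only keeps a traceable record of what it saw and decided.
import json, time, uuid

class AuditedModel:
    def __init__(self, model, log_path="decisions.jsonl"):
        self.model, self.log_path = model, log_path

    def predict(self, features):
        decision = self.model.predict(features)
        record = {
            "id": str(uuid.uuid4()),   # unique handle for later review
            "timestamp": time.time(),
            "input": features,         # the exact input the model saw
            "decision": decision,
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")  # append-only audit trail
        return decision
```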

AI systems also struggle to distinguish between correlation and causation. The infamous example often used to illustrate the difference is the correlation between drowning deaths and ice cream sales.

An AI system fed with statistics about these two items would not know that the two patterns only correlate because both are a function of warmer weather and might conclude that to prevent drowning deaths we should restrict ice cream sales.
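A few lines of code show how easily the trap springs. In this toy simulation, with every number invented for illustration, both series are generated from temperature alone; the raw correlation comes out strongly positive, and it collapses once the confounder is accounted for:

```python
# A toy demonstration of the ice-cream/drowning confounder. All figures are
# invented: both series are driven by temperature, so they correlate strongly
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(5, 35, size=365)                 # daily temp, deg C
ice_cream_sales = 10 * temperature + rng.normal(0, 20, 365)
drownings = 0.3 * temperature + rng.normal(0, 2, 365)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"raw correlation: {r:.2f}")        # high, despite no causal link

# Conditioning on the confounder removes the effect: regress each series on
# temperature and correlate the residuals (a partial correlation).
resid_sales = ice_cream_sales - np.poly1d(
    np.polyfit(temperature, ice_cream_sales, 1))(temperature)
resid_drown = drownings - np.poly1d(
    np.polyfit(temperature, drownings, 1))(temperature)
print(f"partial correlation: {np.corrcoef(resid_sales, resid_drown)[0, 1]:.2f}")
```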

Even without these AI weaknesses, the main area the military should be concerned with at the moment is adversarial attacks. We must assume that potential adversaries will attempt to fool or break any accessible AI systems that we use.

Attempts will be made to fool image-recognition engines and sensors; cyberattacks will try to evade intrusion-detection systems; and logistical systems will be fed altered data to clog the supply lines with false requirements.
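As a concrete illustration of the image-fooling case, here is a minimal sketch of one classic evasion technique, the fast gradient sign method, run against a toy logistic-regression classifier in plain NumPy. Real attacks target deep networks, but the principle is the same: nudge each input feature slightly in the direction that most increases the model’s error:

```python
# A minimal sketch of an adversarial evasion attack (the fast gradient sign
# method) against a toy logistic-regression "image classifier". This is an
# illustration of the principle, not an attack on any real system.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)          # weights of a toy 8x8 "image" classifier
b = 0.0

def predict_prob(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(class = 1)

x = rng.normal(size=64)          # a clean input
y = 1.0                          # its true label

# Gradient of the logistic loss with respect to the *input* is (p - y) * w.
grad_x = (predict_prob(x) - y) * w
epsilon = 0.1                    # small perturbation budget per pixel
x_adv = x + epsilon * np.sign(grad_x)   # FGSM step: follow the loss uphill

print(f"clean  P(y=1): {predict_prob(x):.3f}")
print(f"attack P(y=1): {predict_prob(x_adv):.3f}")  # pushed toward misclassification
```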

It may also be deceptive to call every decision AI-based, as we are all aware that the algorithm programmed into the technological death machine is what will push the system into making nightmarish decisions.

The computer will say ‘NO’, but it will not be the computer’s decision; it will be the decision of the writer of the algorithm.
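A toy example makes the point; every name and threshold below is invented. The machine merely executes a cutoff that a human chose ahead of time:

```python
# A toy illustration of "the computer says no": the machine only executes a
# choice its programmer already made. Every name and number here is invented.
RISK_THRESHOLD = 0.7   # <- the human author's decision, frozen into code

def review_appeal(risk_score: float) -> str:
    # The system "decides", but the cutoff was chosen by whoever wrote it.
    return "DENIED" if risk_score >= RISK_THRESHOLD else "APPROVED"

print(review_appeal(0.71))   # DENIED -- by the author of the 0.7, not the machine
```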

As we know all too well from Google, Facebook and YouTube, algorithms and bots do not know the difference between facts, jokes and satire, and thus they make the decision to tell you that you are in violation of the terms of service – and we needlessly make appeals to an unfeeling system that is always right.

AI is a way of hiding control behind a curtain.