11/27/23: SKYBORG – I THINK THEREFORE I KILL W/ SEAN PATRICK HAZLETT

Posted on November 27th, 2023 by Clyde Lewis

The United States government is on the verge of deploying new artificial intelligence technology (AI) weapons that can make decisions on whether to kill human targets. The frightening lethal autonomous weapons, which are being developed in the United States, China, and Israel, will automatically select humans deemed a “threat” to the system and eliminate them. Some critics have voiced fears that the deployment of AI weapons would entrust machines to make decisions about whether to kill human targets, with no human oversight. This is an absolutely fundamental security issue, a legal issue, and an ethical issue. No matter how epically monstrous our government becomes, and no matter how many mechanized war machines they unleash in the future, our battle does not end with them - it only begins with them. Tonight on Ground Zero, Clyde Lewis talks with US Army veteran, military analyst, and sci-fi writer, Sean Patrick Hazlett about SKYBORG - I THINK THEREFORE I KILL.

SHOW SAMPLE: 

SHOW TRANSCRIPT:

There have been some brilliant science fiction stories that have forewarned us against the idea of leaving responsibility in the hands of thinking artificial intelligence systems.

The 1984 film The Terminator revolves around a cyborg assassin created by an intelligent supercomputer called Skynet.

The cult television series Black Mirror also acquainted us with autonomous AI guard dogs in the episode “Metalhead.”

RoboCop can also be thrown into the mix, as it depicts a future in which autonomous robots are used to weed out undesirables. These robots would be contract police, merciless robotic patrolmen, some of them impatient and known to use lethal force, even if questioned.

If you dig into practically any conspiracy theory, even the most skeptical person may find him or herself seriously wondering if there isn’t “something to it” after all. Could all the apparent connections just be a series of coincidences? Could so much of the information simply be false?

The answer to both questions is yes, but that can be a hard answer to accept. A good conspiracy theory sounds reasonable. It appears to answer a lot of unanswered questions. It can also be exciting, far more interesting than mundane reality. For many, the conspiracy theory merely confirms what they already suspect or believe. And, of course, there is the fact that there have been conspiracies and cover-ups throughout history.

The apparent ubiquitousness of such a massive cover-up does bring its credence into question in the present day. Are we to believe that the government has been this efficient in silencing those who wish to uncover the mystery of autonomous killing machines and artificial general intelligence that now has the power to annihilate human beings?

In recent years, it has sometimes appeared that global politics is simply a choice between rival forms of technocracy. In China, it is a government of engineers backed up by a one-party state.

In the West, it is the rule of economists and central bankers, operating within the constraints of a democratic system. This creates the impression that the real choices are technical judgments about how to run vast, complex economic and social systems.

Now, it must be clearly stated that what we perceive as the dystopian future of The Fourth Industrial Revolution—AI, blockchain, digitalization, financialization, green capitalism and so on—can’t be separated from the invisible hand of the technocratic mob. It cannot be allowed to be defined by capitalist institutions as a “legitimate political topic” instead of what it really is.

It is all para-political. It is all beyond the scope of politics as usual, because we have a government now that cares more about the opinion of scientific experts than it does about the substantive processes of running a government for the people, by the people.

It is inevitable that the science we are told to worship is the very science that creates machines that can kill us all. To the faceless scientists who pull the strings, it does not matter; as we have discussed before, science has pushed us close to extinction many times in history, and it will do so again as it focuses its attention on future warfare.

I was reading over the weekend that, before OpenAI CEO Sam Altman was dismissed for four days, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI.

Prior to his triumphant return, more than 700 employees had threatened to quit and join backer Microsoft in solidarity with their fired leader.

The brief dismissal of OpenAI CEO Sam Altman has turned into a bit of a mystery, raising the possibility that something was found in the algorithms that could destroy human beings.

In the days before Altman was sent off into exile, several staff researchers penned a letter to the board about a significant breakthrough, called Q-Star, that allowed the AI model to “surpass humans in most economically valuable tasks.”

Reuters sources said the AI milestone was one of the significant factors that led to the board’s abrupt firing of Altman last Friday. Another concern was commercializing the advanced AI model without understanding the socio-economic consequences.

This has spawned an outrageous theory that Q-Star was demoed and turned out to be either a machine that showed consciousness or AGI, and that Sam Altman didn’t immediately tell the board and their feelings were hurt.

There has always been that gossip going around about how artificial general intelligence has already happened and that the technocracy is moving to give it a military blessing.

AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, and language understanding, raising concerns about massive job displacement. A recent Goldman Sachs report estimates that the equivalent of 300 million full-time jobs could be exposed to automation because of AI.

The Q-Star breakthrough and the rapid advancement of this technology now make it clearer why the board abruptly fired Altman for his rush to develop this technology without studying how the model threatens humanity.

Altman recently said, “I think this is like, definitely the biggest update for people yet. And maybe the biggest one we’ll have because from here on, like, now people accept that powerful AI is, is gonna happen, and there will be incremental updates… there was like the year the first iPhone came out, and then there was like everyone since.”

The day before Sam was fired, he gave this chilling speech:

“Is this a tool we’ve built or a creature we have built?”

The New York Times reported that the deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality.

Lethal autonomous weapons that can select targets using AI are being developed by countries including the U.S., China, and Israel.

The use of the so-called “killer robots” would mark a disturbing development, say critics, handing life and death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the U.S. is among a group of nations — which also includes Russia, Australia and Israel — who are resisting any such move, favoring a non-binding resolution instead.

The United States government is on the verge of deploying new artificial intelligence technology (AI) weapons that can make decisions on whether to kill human targets.

The frightening lethal autonomous weapons, which are being developed in the United States, China, and Israel, will automatically select humans deemed a “threat” to the system and eliminate them.

Some critics have voiced fears that the deployment of AI weapons would entrust machines to make decisions about whether to kill human targets, with no human oversight.

This is an absolutely fundamental security issue, a legal issue, and an ethical issue.

The Pentagon is reportedly developing a network of hundreds or even thousands of AI-enhanced, autonomous drones that could be rapidly deployed near China in the event of conflict.

These drones would carry surveillance equipment or weapons and would be used to take out or weaken China’s extensive network of anti-ship and anti-aircraft missile systems along its coasts and artificial islands in the South China Sea. This development could potentially be a major shift in military strategy.

Frank Kendall, the US Air Force secretary, said AI drones would need to have the capability to make lethal decisions under human supervision.

“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.

The New Scientist noted that Ukraine used AI-controlled drones in its conflict with Russia in October. However, it’s not known if the drones caused human casualties.

A senior AI scientist at the University of California, Berkeley, Stuart Russell, will screen the video on Monday during an event held by the Stop Killer Robots campaign at the United Nations Convention on Conventional Weapons.

The campaign issued a warning:

“Machines don’t see us as people, just another piece of code to be processed and sorted. From smart homes to the use of robot dogs by police forces, AI technologies and automated decision-making are now playing a significant role in our lives. At the extreme end of the spectrum of automation lie killer robots.”

“Killer robots don’t just appear – we create them,” the campaign added.

“If we allow this dehumanization, we will struggle to protect ourselves from machine decision-making in other areas of our lives. We need to prohibit autonomous weapons systems that would be used against people, to prevent this slide to digital dehumanization.”

According to Russell, creating and deploying autonomous weapons would be disastrous for human security.

If you dig into practically any conspiracy theory, even the most skeptical person may find him or herself seriously wondering if there isn’t “something to it” after all. Could all the apparent connections just be a series of coincidences? Could so much of the information simply be false?

The answer to both questions is yes, but that can be a hard answer to accept. A good conspiracy theory sounds reasonable. It appears to answer a lot of unanswered questions. It can also be exciting, far more interesting than mundane reality. For many, the conspiracy theory merely confirms what they already suspect or believe. And, of course, there is the fact that there have been conspiracies and cover-ups throughout history.

The apparent ubiquitousness of such a massive cover-up does bring its credence into question in the present day. Are we to believe that the government has been this efficient in silencing those who wish to uncover the mystery of artificial general intelligence, and whether or not it is being programmed to annihilate us all?

An Arms Control Association report, entitled Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability, “unpacks the concept of ‘emerging technologies’ and summarizes the debate over their utilization for military purposes and their impact on strategic stability.”

The publication notes that the world’s military powers “have sought to exploit advanced technologies—artificial intelligence, autonomy, cyber, and hypersonic, among others—to gain battlefield advantages” but warns too little has been said about the dangers these weapons represent.

“Some officials and analysts posit that such emerging technologies will revolutionize warfare, making obsolete the weapons and strategies of the past,” the report states.

“Yet, before the major powers move quickly ahead with the weaponization of these technologies, there is a great need for policymakers, defense officials, diplomats, journalists, educators, and members of the public to better understand the unintended and hazardous outcomes of these technologies.”

Lethal autonomous weapons systems—defined by the Campaign to Stop Killer Robots as armaments that operate independent of “meaningful human control”—are being developed by nations including China, Israel, Russia, South Korea, the United Kingdom, and the United States.

The U.S. Air Force’s sci-fi-sounding Skyborg Autonomous Control System, currently under development, is, according to the report, “intended to control multiple drone aircraft simultaneously and allow them to operate in ‘swarms,’ coordinating their actions with one another with minimum oversight by human pilots.”

“Although the rapid deployment of such systems appears highly desirable to many military officials, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of international law, and could weaken barriers that restrain escalation from conventional to nuclear war,” the report notes.

In the latter half of the 20th century, we witnessed numerous nuclear close calls, many based on misinterpretations, limitations, or outright failures of technology. While technologies like artificial intelligence are often touted as immune to human fallibility, the research suggests that such claims and hubris could have deadly and unforeseen consequences.

An increased reliance on AI could lead to new types of catastrophic mistakes. There may be pressure to use it before it is technologically mature; it may be susceptible to adversarial subversion; or adversaries may believe the AI is more capable than it is, leading them to make catastrophic mistakes.

While the Pentagon in 2020 adopted five principles for the “ethical” use of AI, many ethicists argue the only safe course of action is a total ban on lethal autonomous weapons systems.

Hypersonic missiles, which can travel at speeds of Mach 5—five times the speed of sound—or faster, are now part of at least the U.S., Chinese, and Russian arsenals.
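For a rough sense of scale, and assuming the commonly cited sea-level speed of sound of about 343 meters per second (roughly 767 mph), Mach 5 works out to approximately:

$$
5 \times 343\ \text{m/s} \approx 1{,}715\ \text{m/s} \approx 6{,}174\ \text{km/h} \approx 3{,}836\ \text{mph}
$$

That is a bit more than a mile every second, which is part of why these weapons are seen as compressing the decision time available to any defender.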

Last year, Russian officials acknowledged deploying Kinzhal hypersonic missiles three times during the country’s invasion of Ukraine in what is believed to be the first-ever use of such weapons in combat. In recent years, China has tested multiple hypersonic missile variants using specially designed high-altitude balloons.

Countries including Australia, France, India, Japan, Germany, Iran, and North Korea are also developing hypersonic weapons.

The report also warns of the escalation potential of cyberwarfare and automated battlefield decision-making.

As was the case during World Wars I and II, the major powers are rushing ahead with the weaponization of advanced technologies before they have fully considered, let alone attempted to mitigate, the consequences of doing so, including the risk of significant civilian casualties and the accidental or inadvertent escalation of conflict.

America is used to playing the role of the hero in the epic tale of modern Earth. Our nation began with an act of defiance and victory so unexpected and so poetic, that it cemented our cultural identity as freedom fighters for centuries to come.

Over time, our government, turning progressively corrupt, has exploited this cultural identity in order to lure Americans into committing atrocities in the name of our traditional sense of “heroism.” We have, in fact, become the very antagonists we thought we were fighting against; this appears to be the most brutal irony we as Americans have to face.

The other brutal irony is the idea that we no longer need to look into the eye of the enemy. All we have to do is pick that bad guy (otherwise, it’s people) and then target them. Artificial intelligence can do the math, figuring in property damage and loss of life, before it proceeds to vaporize a city.

As humans, we are well aware that false paradigms are used in politics by establishment elites in order to control social discussion and to divide the population against each other.

The Left/Right debate has been and always will be a farce, being that the leadership on both sides of the aisle has identical goals when it comes to the most important aspects of the American structure.

Machines, however, hold no political views and have no idea what life is, nor the impacts of mass casualties.

No matter how epically monstrous our government becomes, and no matter how many mechanized war machines they unleash in the future, our battle does not end with them. It only begins with them.

SHOW GUEST:

Sean Patrick Hazlett is a US Army veteran, speculative fiction writer and editor, and finance executive in the San Francisco Bay area. He holds an AB in history and a BS in electrical engineering from Stanford University and a master’s degree in public policy from the Harvard Kennedy School of Government. As a cavalry officer serving in the elite 11th Armored Cavalry Regiment, he trained various Army and Marine Corps units for war in Iraq and Afghanistan.

He is an active member of the Horror Writers Association and Codex Writers’ Group. Hazlett’s award-winning short story “Adramelech” appeared in the Wall Street Journal best-selling anthology L. Ron Hubbard Presents Writers of the Future Volume 33.