
1/10/23: SMELLS LIKE MEAN CYBORG W/ GENNADY STOLYAROV

Posted on January 10th, 2023 by Clyde Lewis

Last fall, Elon Musk announced that he wanted to perfect and mass-produce Optimus as soon as possible, predicting the “humanoid” would prove to be widely accepted and profitable. The techno-futurist is unlikely to question his commitment to a rapidly expanding world of “artificial intelligence” and how it will contribute to an entirely surveilled, behavior-modified, and jobless humanity, as some people will prefer AI-enabled robots over humans. The Uncanny Valley has been crossed and the populace is getting used to their cyborg overlords. Tonight on Ground Zero, Clyde Lewis talks with transhumanist advocate Gennady Stolyarov about SMELLS LIKE MEAN CYBORG.

SHOW PREVIEW: 

SHOW SAMPLE:

SHOW PODCAST: 

https://aftermath.media/podcast/1-10-23-smells-like-mean-cyborg-w-gennady-stolyarov/

SHOW TRANSCRIPT: 

People were fascinated by our Moonchild show, as the topic of homunculus sorcery eventually dovetailed into the moonchildren who are converted into something more like machines, or made one in spirit with the internet of bodies.

Sorcery, a kind of “pre-science,” consists of specific techniques purportedly capable of attaining certain ends. If one wished to destroy a foe, one employed imitative, homeopathic magic; the infamous voodoo doll is the most well-known example. If the victim shortly thereafter sickened and died, the efficacy of the magical technique was “confirmed.”

We can go back to last fall, when one of the sorcerers of our time, Elon Musk, introduced his new “humanoid” robot named Optimus, which promptly demonstrated to a rapt audience the ability to walk, carry objects, and water plants.

Musk announced, with his typical promoter’s enthusiasm, that he wanted to perfect and mass-produce Optimus as soon as possible, predicting that the “humanoid” would prove even more profitable than his Tesla line of electric cars.

Musk is unlikely to question his commitment to a rapidly expanding world of “artificial intelligence” and how it will contribute to an entirely surveilled, behavior-modified, and jobless humanity.

At the same time, he has acknowledged some unforeseen dangers and the urgent necessity for some regulation of their production and use.  A techno-futurist with an awareness of what is coming, he has joined with such techno-scientific luminaries as the late Stephen Hawking and Bill Gates in warning that AI, if insufficiently contained and regulated, constitutes “the largest existential threat to humanity.”

Since perpetual war by now seems “normalized,” and hardly draws any protest anymore, Pentagon contracts promote the myriad capabilities of robot soldiers on the battlefield.

Musk has thus urgently warned of an imminent threat from “killer robots.” Human Rights Watch has long taken this seriously enough to launch its Campaign to Stop Killer Robots, noting that such machines “would be able to select and engage targets without meaningful human control.”

Self-replicating robots already exist, even “self-reconfigurable modular” ones which can rearrange their design and repair themselves, capacities obviously advantageous under battlefield conditions.

Autonomous drones move through the sky, able to smell out a terrorist and fire at will; we learned this under President Obama.

But as of yet we cannot smell a robot, or something that smells like a mean cyborg. You may remember that last week we reported a gruesome story about a young man in Portland who attacked an elderly man at a train platform.

Koryn Kraemer has been arraigned on a second-degree assault charge stemming from the grisly incident where he bit off the ear and chewed the face off of a 78-year-old man.

Kraemer, who was allegedly drunk and high on fentanyl and marijuana, approached the elderly man and proceeded to gnaw on his face.

Police responded to the scene and pulled Kraemer off the victim. By then, he had chewed off the man’s ear and bitten the skin off his face to the point of exposing his skull.

In a subsequent interview with law enforcement, Kraemer stated that he believed the victim was a ‘robot’ trying to kill him based on how the victim smelled.

Kraemer also said that he spit out the victim’s flesh that he had bitten off, and claimed that police saved his life by separating him from “the robot.”

What I didn’t know was that weeks before, in Indiana, a man allegedly fatally shot his father and dismembered his corpse after believing him to be a robot.

Shawn Hays, 53, of Lawrence County, Indiana, was arrested on December 20 after deputies responded to a welfare check call on his 73-year-old father, Rodney Hays, according to a probable cause affidavit.

The person who called the police informed them that Hays said he had shot and mutilated his father because his father had been turned into a robot.

When deputies arrived at the residence, they reportedly found Hays “hastily attempting to exit the property in a silver Chevrolet pickup.” There was also reportedly a shotgun in the vehicle, which officers managed to remove while distracting him.

When they asked about his father, Hays reportedly told them that it was not actually his father, but rather a robot that resembled his father.

When asked where his father was, Hays gestured toward the house behind the vehicle. The report states that Hays became combative when officers asked him to exit the truck.

During the altercation, Hays reportedly told the officers, “It’s a robot that looks like a human laying over there. I had to shoot at it to destroy it.”

Hays had also reportedly raised concerns among others with Facebook posts about his father’s robotic identity.

Rodney Hays reportedly had been shot in the head and chest. His body, which officers reportedly found on the lawn, had allegedly been partially dismembered and mutilated.

These two stories, within weeks of each other, send out vibes similar to the plot of Invasion of the Body Snatchers, where people in a small California town begin to realize that their loved ones’ personalities have subtly changed in a way that creeps them out.

Even though these grisly stories make us contemplate how drugs can alter our perception, keep in mind that there have been many occasions as of late where the demon known as A.I. has been conjured by the likes of Elon Musk, the very man who warned us of this so-called demon.

Back on December 7th, I did a show about robots and their eventual relationships with humans. It was about the time that Elon Musk announced that he was ready to roll out chips to be placed under the skin in order to implement his Neuralink plan.

Musk gave updates on the company’s wireless brain chip. In addition to forecasting clinical trials, Musk said he plans to get one of the chips himself.

Neuralink says it is developing brain-chip interfaces that could restore a person’s vision, even in those who were born blind, and restore “full body functionality”, including movement and verbal communication, for people with severed spinal cords.

The chip interface that targets the motor cortex could be tested in humans as soon as six months, the company said.

The company does not have permission from the Food and Drug Administration (FDA) to sell the device, but Musk said on Wednesday that most of the FDA paperwork for approval to implant the device into a human being had been submitted.

Neuralink has been testing on animals as it awaits approval for clinical trials.

As we predicted, this was only the beginning, as the marriage of man and machine would become pervasive in 2023 and beyond.

The arrival of the chip was, in my opinion, the arrival of the A.I./surveillance state, where the health police state would be a part of our lives with robot doctors, lawyers, and policemen.

There are thousands of police robots across the country, and those numbers are growing exponentially. It won’t take much in the way of weaponry and programming to convert these robots to killer robots, and it’s coming.

The first time police used a robot as a lethal weapon was in 2016 when it was deployed with an explosive device to kill a sniper who had shot and killed five police officers.

This scenario has been repeatedly trotted out by police forces eager to add killer robots to their arsenal of deadly weapons.

For instance, despite an outcry by civil liberties groups and concerned citizens alike, in an 8-3 vote on Nov. 29, 2022, the San Francisco Board of Supervisors approved a proposal to allow police to arm robots with deadly weapons for use in emergency situations.

This is how the slippery slope begins.

According to the San Francisco Police Department’s draft policy, “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

A last-minute amendment to the SFPD policy limits the decision-making authority for deploying robots as a deadly force option to high-ranking officers, and only after using alternative force or de-escalation tactics, or concluding they would not be able to subdue the suspect through those alternative means.

In other words, the police would have the power to kill with impunity using remote-controlled robots.

It was later in December that the San Francisco supervisors re-evaluated the idea and decided to put the brakes on the project for the time being.

The Board of Supervisors voted unanimously to explicitly ban the use of robots in such a fashion for now. But they sent the issue back to a committee for further discussion and could allow it in limited cases at another time.

These robots, often acquired by local police departments through federal grants and military surplus programs, signal a tipping point in the final shift from a Mayberry style of community policing to a technologically-driven version of law enforcement dominated by artificial intelligence, surveillance, and militarization.

Now, the eerie new capabilities of artificial intelligence are about to show up inside a courtroom — in the form of an AI chatbot lawyer that will soon argue a case in traffic court.

That’s according to Joshua Browder, the founder of a consumer-empowerment startup who conceived of the scheme.

Sometime next month, Browder is planning to send a real defendant into a real court armed with a recording device and a set of earbuds. Browder’s company will feed audio of the proceedings into an AI that will in turn spit out legal arguments; the defendant, he says, has agreed to repeat verbatim the outputs of the chatbot to an unwitting judge.

Browder declined to identify the defendant or the jurisdiction for next month’s court date, citing fears that the judge would catch wind of the planned stunt and block it.

AI is shaking up the tools and rules that determine the balance of power between individuals, on one hand, and governments and corporations, on the other.

AI has already made inroads into the American legal profession, where big firms routinely use it to assist in the task of reviewing troves of documents that can number in the millions during the discovery phase of litigation.

Now, many argue that a flesh-and-blood lawyer would do better than the A.I., yet about a third of the population prefers robot cashiers at the supermarket.

While surveys on automated supermarket checkout are limited and diverse, it’s clear that people are divided in their views toward robot cashiers — about a third prefer robots over humans, for various reasons. Similarly, bank ATMs (robot tellers, in a sense) have been widespread for a half-century, but were preferred by some from the outset — and are preferred over human tellers by a wide margin today.  Perhaps more relevant, a recent survey of New Yorkers showed that while most preferred more thorough traffic enforcement, 59 percent preferred speed cameras — robot traffic cops — over human police officers and 65 percent of Blacks and 74 percent of Latinos preferred robot speed cops over human traffic police.

It’s clear that a substantial element of the public prefers to deal with robots instead of human cashiers, tellers and cops. While some of this has to do with minimizing time consumed, some has to do with the obvious fact that humans are opinionated, while machines often leave the impression they are not.

This phenomenon of some people preferring robots (in this case, an apparently neutral robot over an opinionated human) becomes more important as we enter an era of AI-enabled robots. And it may partly explain why AI-enabled robots enjoy public support despite the warning that unfettered A.I. is dangerous.

Even Henry Kissinger came out of his coffin to say that A.I. needs to be controlled.

If someone suspects that opinionated humans in authority intend to do them or their family harm, then that person will probably prefer an apparently neutral, AI-enabled robot over an obviously bigoted human.

As we recently saw in San Francisco, however, when a robot is equipped to physically harm a human (robots inflict financial, emotional and other harm on humans every day, but this rarely raises a public outcry), an entirely different set of public attitudes emerge.

In this case, local police officials and officers proposed to use armed robots to violently deal with suspects in situations where human police officers and civilians would be in imminent deadly danger. Many human police prefer to deploy robot police over human officers in such situations; nonetheless, the opposition was loud and immediate against “killer robots.”

It would appear that robots are waiting to take over without a fight.

There is a new movie in theaters right now called M3GAN. It has become a viral movie that, according to critics, doesn’t take itself too seriously, until the story takes a dark turn where the laughter ends and the terror that only AI can bring begins.

Although M3gan eventually becomes a movie about technology so successful that it surpasses both its creator’s dreams and her control, it starts off as a reminder that, in the vast majority of cases, the promises that code could take on the functions of humans have either ended in failure or, just as often, a scaling-down of expectations. Instead of knowledgeable clerks or informed critics—or even, like, friends—to recommend what we should watch or listen to or read next, we have algorithms that make such sophisticated inferences as suggesting that having just purchased one car, we might be interested in shopping for another.

We drop our collective jaws at the idea that an A.I. chatbot can write like a human and defend someone in a court of law; never mind if the human the AI is emulating is only as moderately skilled as a 16-year-old jock on the football team.

M3gan presents itself as yet another cautionary tale about the dangers of artificial intelligence: What if we give a machine the power to learn with no moral or ethical guidance, except to protect one creature at the expense of all others?

Well, you can bet that science has an answer to that dilemma.

For some industry leaders, chatbots and image-generators are far from the final robotic frontier. Next up?

Consciousness.

Consciousness is one of the longest-standing, and most divisive, questions in the field of artificial intelligence. And while to some it’s science fiction — and indeed has been the plot of countless sci-fi books, comics, and films — to others, it’s a goal, one that would undoubtedly change human life as we know it for good.

Of course, the biggest issue that the industry runs into with the question of consciousness — you know, other than the technological challenge that it would undoubtedly be — is the fact that, well, the concept itself doesn’t really have a firm definition, in the field or beyond it. Philosophically, consciousness is vague and debatable. And scientifically, efforts to tidily nail consciousness down to specific brain functions or other signifiers tend to fall flat. There are also a number of deep ethical questions that arise with just the concept of machine consciousness, particularly related to machine labor.

Even so, considering that consciousness has no set definition, it’s hard to cosign any particular one.

It is like asking which brain we should put in the monster, Dr. Frankenstein.

It’s also impossible to ignore the fact that humans really, really like to anthropomorphize just about anything we can, from cars to pets; and of course, I name my computers.

Every computer I have ever had has been named after a famous robot. My computer today is named Maria, after the robot in Metropolis. The one before that was named Kryten, after the robot in Red Dwarf. My first computer was named Marvin, from the book The Hitchhiker’s Guide to the Galaxy.

Such a tendency is exceedingly present in the fields of robotics and artificial intelligence, where those building machines constantly project human features, both physical and intellectual, onto the devices that they create.

If you remember, “Terminator 3” is subtitled “Rise of the Machines,” and it is rather striking that those movies don’t have any human bad guys in them; just the opposing terminators. The inventors and developers – who are the real enemy of mankind – remain in the shadows.

In real life, at least so far, they act like they are Satan’s little helpers.

But I can assure you, if a poll were taken, the majority of Americans would overwhelmingly approve of every police measure up to and including killer robots. Many men, if polled, would prefer a robot lover over a real flesh-and-blood human being. The Uncanny Valley has been crossed and people are getting used to their cyborg overlords.

 

SHOW GUEST: GENNADY STOLYAROV

Gennady Stolyarov II has been the Chairman of the U.S. Transhumanist Party since November 2016, aiming to put science, health, and technology at the forefront of American politics. Mr. Stolyarov is an actuary, independent philosophical essayist, science-fiction novelist, poet, amateur mathematician, composer, and Editor-in-Chief of The Rational Argumentator, a magazine championing the principles of reason, rights, and progress. Mr. Stolyarov regularly produces YouTube videos discussing life extension, politics, philosophy, and related subjects and hosts the U.S. Transhumanist Party Virtual Enlightenment Salons, where he invites some of the world’s leading thinkers to engage in wide-ranging, interdisciplinary conversations on these topics. Mr. Stolyarov is also a member of the Board of Directors of the Longevity Escape Velocity Foundation, which seeks to proactively identify and address the most challenging obstacles on the path to the widespread availability of genuinely effective treatments to prevent and reverse human age-related disease.

In December 2013, Mr. Stolyarov published Death is Wrong, an ambitious children’s book on life extension. Death is Wrong can be found on Amazon in paperback and Kindle formats, and can also be freely downloaded in PDF format in the English, Russian, French, Spanish, and Portuguese languages.

Mr. Stolyarov is the Chief Executive and one of the founding members of the Nevada Transhumanist Party, established in August 2015, and the Chairman of the Transhuman Club, the non-political affiliate organization of the U.S. Transhumanist Party.