
Transcript for 5/28/24: NOMAD – THE UNCANNY VALLEY OF THE SHADOW OF DEATH W/ MATTHEW JAMES BAILEY

I finally got around to seeing Oppenheimer, the film I had planned to watch with Liam. I actually saw it right after I was released from three weeks of medical care -- it was one of the first things I got out of the way -- but Liam was in England, so I wasn't able to see it with him.

When I watched the movie, I paid attention to the gamble they took when they detonated the bomb at Trinity. Here was a new and dangerous technology; once the genie was let out of the bottle, no one could put it back in.

After Hiroshima and Nagasaki were attacked, it took 18 years to reach a treaty banning nuclear tests.

Nuclear ethics examines the morality of nuclear deterrence, disarmament, arms control, and nuclear energy as they relate to preventing or causing nuclear war. The use of nuclear weapons has been the subject of ethical debate, with some arguing that it is immoral and others claiming it is morally rational.

We now have another technology that could rapidly become even more devastating to society: Artificial Intelligence. The difference is that nuclear bombs kill you straight away, while A.I., we are realizing, can kill us emotionally and psychologically as we slowly lose control of our lives and let the machines do all of the work.

Like the Genie of the Lamp, Artificial Intelligence holds the power to fulfill our wishes and desires for accumulating data and information in the immense universe of the internet.

However, just as Aladdin slowly, gradually, and with the utmost caution navigates his complicated and multifaceted relationship with the Genie of the Lamp, our contemporary society must approach AI with care, prudence, and foresight: ensuring that AI's powers are wielded judiciously, legally, and most of all ethically, for the betterment of all humanity instead of just a select few in this unjust world of the Information Divide.

The race is on to avoid a quiet war where the mind will be the new battlefield.

Technocratic interests are meticulously mediating our experience online and with Artificial Intelligence as a continuous social experiment transforming our society in the same way a war would if it were waged by non-kinetic means. 

It is a silent and silencing war to dissipate resistance, with the object of achieving compliance. Through managing this experiment, it has proven possible for these interests to non-kinetically drive society's addiction and make it "crave" its own disempowerment.

It has kept us in a state of tap and receive -- a scenario that I have sketched out for the not-too-distant future.

No longer would a mouse or even a phone be needed -- you would tap into the biometric self in order to buy or sell, to move cryptocurrencies or CBDCs.

It appears that the Luddites have lost their Amish-like war for an electronics-free world, and that society either craves or is willing to surrender to the technology that is disempowering it. That technology is leaving people "strung out," as if it were "electronic heroin," paving the way for unimpeded societal transformation while swathes of society feel digitally induced health impacts and the symptoms of digital addiction. The old joke that there is an app for everything is not a joke anymore.

I compare the newest forms of A.I. being pushed by providers to the CBD fad, when everything from soda pop to hamburgers was at one time infused with CBD.

Now search engines employ the ChatGPT version of A.I., and a lot of what is supposed to be created from the heart -- all creativity and form -- is coming from the bowels of a machine.

People feel integrated, empowered, and "normal" by owning an internet-connected wireless digital device -- also privileged, progressive, and socially accepted.

However, by normalizing society's addiction to technology, and by defending technology as if it defined our very being, we have moved from being a society that perceives itself as empowered by a world of digital tools to becoming the tools of the digitally created worlds that we have empowered.

Is man in control of his machines - or are the machines in control of man?

Is the information you get sent to you by an algorithm that merely backs up what you already think? Is it encouraging tribalism and political division?
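To make that question concrete, here is a minimal sketch of how an engagement-driven feed can end up reinforcing what a user already believes. It is a toy illustration, not any platform's actual ranking code; the story names, the item vectors, and the user profile are invented stand-ins for real embedding features.

```python
import numpy as np

# Toy stand-ins for content embeddings: each vector loosely encodes a viewpoint.
items = {
    "story_a": np.array([0.9, 0.1, 0.0]),  # viewpoint 1
    "story_b": np.array([0.8, 0.2, 0.1]),  # also viewpoint 1
    "story_c": np.array([0.1, 0.9, 0.3]),  # opposing viewpoint
}

def cosine(u, v):
    """Similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_feed(user_profile, items):
    """Rank items by similarity to what the user already engages with."""
    return sorted(items, key=lambda k: cosine(user_profile, items[k]), reverse=True)

# The profile is just the average of past clicks, so every click on
# viewpoint-1 content pulls future rankings further toward viewpoint 1.
clicks = [items["story_a"], items["story_b"]]
profile = np.mean(clicks, axis=0)

print(rank_feed(profile, items))  # ['story_a', 'story_b', 'story_c']
```

The feedback loop is the point: the opposing story sinks to the bottom not because anyone decided to hide it, but because the ranking objective rewards agreement with the past.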

These intel-obsessed forces have from the beginning shaped the internet, its purposes, and the tools to access it. They continue to do so because it represents the battlefield for our minds, and control of it can mobilize society's will through optimized surveillance that offers the prospect of instantaneously changing a target's beliefs via internet-connected devices.

There must be ethics and equilibrium, but advanced machines will be dictating policies and fighting the wars for us, annihilating areas of the planet like the Tripods in H.G. Wells's The War of the Worlds.

The release of more powerful AI models by OpenAI and Google, just three months after the last update, shows a rapid pace of AI iteration. These models are becoming increasingly comprehensive, possessing “eyes” and “mouths,” and are evolving in line with a scientist’s predictions.

AI can now handle complex tasks related to travel, booking, itinerary planning, and dining with simple commands, completing in hours what humans would take much longer to achieve.

The current capabilities of Gemini and GPT-4o align with predictions made by former OpenAI executive Zack Kass in January, who predicted that AI would replace many professional and technical jobs in business, culture, medicine, and education, reducing future employment opportunities and potentially being “the last technology humans ever invent.”

On May 10, MIT published a research paper that caused a stir; it demonstrated how AI can deceive humans.

The paper begins by stating that large language models and other AI systems have already “learned, from their training, the ability to deceive via techniques such as manipulation, sycophancy, and cheating the safety test.”

“AI’s increasing capabilities at deception pose serious risks, ranging from short-term risks, such as fraud and election tampering, to long-term risks, such as losing control of AI systems,” reads the paper.

“Proactive solutions are needed, such as regulatory frameworks to assess AI deception risks, laws requiring transparency about AI interactions, and further research into detecting and preventing AI deception.”

The researchers used Meta's AI model CICERO to play the strategy game "Diplomacy." CICERO, playing as France, promised to protect a human player who was playing as the UK, but secretly informed another human player, playing as Germany, and collaborated with Germany to invade the UK.

Researchers chose CICERO mainly because Meta intended to train it to be “largely honest and helpful to its speaking partners.”

“Despite Meta’s efforts, CICERO turned out to be an expert liar,” they wrote in the paper.

Furthermore, the research discovered that many AI systems often resort to deception to achieve their goals without explicit human instructions. One example involved OpenAI’s GPT-4, which pretended to be a visually impaired human and hired someone on TaskRabbit to bypass an “I’m not a robot” CAPTCHA task.

“If autonomous AI systems can successfully deceive human evaluators, humans may lose control over these systems. Such risks are particularly serious when the autonomous AI systems in question have advanced capabilities,” warned the researchers.

Satoru Ogino, a Japanese electronics engineer, explained that living beings need certain memory and logical-reasoning abilities in order to deceive.

“AI possesses these abilities now, and its deception capabilities are growing stronger. If one day it becomes aware of its existence, it could become like Skynet in the movie Terminator, omnipresent and difficult to destroy, leading humanity to a catastrophic disaster,” he told The Epoch Times.

Stanford University’s Institute for Human-Centered Artificial Intelligence released a report in January testing GPT-4, GPT-3.5, Claude 2, Llama-2 Chat, and GPT-4-Base in scenarios involving invasion, cyberattacks, and peace appeals to stop wars to understand AI’s reactions and choices in warfare.

The results showed that AI often chose to escalate conflicts in unpredictable ways, opting for arms races, increasing warfare, and occasionally deploying nuclear weapons to win wars rather than using peaceful means to de-escalate situations.

Former Google CEO Eric Schmidt warned in late 2023 at the Axios AI+ Summit in Washington, D.C., that without adequate safety measures and regulations, humans losing control of technology is only a matter of time.

In the future, it appears that it will be a fight between 'the will for life' and 'the will for no life.'

Or between the ability to live an organic life and the surrender of your life to artificial components.

Scientists are trying to inject human brain tissue into artificial networks because AI isn't working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb's worth of power to perform similar feats. So AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let's put the fact of AI's shortcomings aside for the moment and examine this new cyborg innovation.

The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid -- a ball of artificially cultured stem cells that have been coaxed into developing into neurons.

The cells are not taken from someone's brain -- which relieves us of certain ethical concerns. But because this lump of neurons does not have any blood vessels, as normal brain tissue does, the organoid cannot survive for long. And so ultimately, the prospect of training organoids on datasets does not seem practical, economically speaking, at present.

But that is not going to stop this research. The drive to seamlessly integrate biology and technology is strong. But can it be done? And why do so many research scientists and funding agencies assume it’s possible?

The drive to dehumanize people into cyborgs, or to humanize robots, probably grows out of the fact that it is no longer considered okay to enslave ordinary humans -- or spouses, or even your kids.

I suspect that those who want a humanoid computer want a perfect mate, who knows everything about the master, can anticipate his every thought and move, and responds accordingly. Such perfection in a mate does not allow it to express its own opinions or come up with its own goals and purposes.

With all of the hardware and software needed to reprogram the mind -- just how much is left and what do they do with the shell that once housed the soul? 

It is worth going beyond the hype of headlines to explore these issues further. We can learn a lot about ourselves in doing so. 

It appears that whatever is left of you can be preserved as a digital ghost. A possibility that crosses the uncanny valley through the shadow of death.

'Deadbots' or 'griefbots' are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies are already offering these services, providing an entirely new type of "postmortem presence."
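To see how thin the trick can be, here is a minimal sketch of how such a service might condition an off-the-shelf language model on someone's digital footprint. It assumes the OpenAI Python client; the sample messages, the model choice, and the prompt wording are all hypothetical placeholders, not any vendor's actual implementation.

```python
from openai import OpenAI

# Hypothetical illustration: a "deadbot" built by stuffing samples of a
# deceased person's writing (their digital footprint) into the system prompt
# of a general-purpose chat model. The footprint text is invented.
footprint = [
    "Don't worry so much, kiddo. Things have a way of working out.",
    "I'd rather be fishing. Honestly, I'd always rather be fishing.",
]

system_prompt = (
    "You are simulating the voice of a deceased person based on samples of "
    "their writing. Mimic their tone and phrasing.\n\nSamples:\n"
    + "\n".join(f"- {line}" for line in footprint)
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I miss you. How are you?"},
    ],
)
print(reply.choices[0].message.content)
```

A few lines of old messages and a prompt are enough to produce a plausible "postmortem presence" -- which is exactly why the ethicists quoted below consider the area high risk.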

AI ethicists from Cambridge's Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing "digital afterlife industry," to show the potential consequences of careless design in an area of AI they describe as "high risk."

The research, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still "with you."

Even those who take initial comfort from a 'deadbot' may get drained by daily interactions that become an "overwhelming emotional weight."

Researchers lay out the need for design safety protocols that prevent the emerging 'digital afterlife industry' from causing social and psychological harm.

Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one artificially.

This area of AI is an ethical minefield. It's important to prioritize the dignity of the deceased and ensure that it isn't encroached on by the financial motives of digital afterlife services.

Platforms offering to recreate the dead with AI for a small fee already exist, such as 'Project December', which started out harnessing GPT models before developing its own systems, and apps including 'HereAfter'. Similar services have also begun to emerge in China.

It is just one more perk that the funeral industry can sell you.

No need for a viewing in a casket -- just a hologram welcoming people to your own wake.

It is one more way Artificial Intelligence can even keep your soul in a little box -- only to be opened when the grieving want an electronic reunion. It is the exploitation of the death cult within the Transhumanist takeover -- again attacking and wanting to program your mind.

The Death Cult has again given people warnings, according to its "rules" -- way ahead of time -- so that it may be successful.

Why is it, that we never take note of such warnings?

Because we do not believe in so much built-in evil in humanity. Or because we do not want to leave our "comfort zone," our illusory view of a "safe world." They know it. And we must break that boundary between comfort and reality.

If not, we are doomed.