MONOLOGUE WRITTEN BY CLYDE LEWIS
2021 is shaping up to be a year in which technology is offered as the answer both to our plagues and to whatever merely inconveniences us.
Advancements in robotics and Artificial Intelligence have crept into everyday routines. From autonomous drones delivering our packages to robot vacuums gliding around our homes, we have been shown how convenient even a rudimentary robot can be.
Telemarketing has now advanced to robo-calling.
A recent study shows that the number of robocalls has been rising across the country at an alarmingly rapid pace. People in the U.S. now receive an average of 19 robocalls every month.
Robot receptionists are usually gatekeepers when trying to contact a business, and robots now take orders for your pharmaceuticals, while other robots are being used for security in some shopping malls.
Many people have opted to purchase A.I. assistants like Alexa. You can even make Alexa sound like whomever you please.
Siri and Alexa annoy me, as do robotic receptionists whose idea of customer service is to refer you to a website where you have to talk to a chatbot before you find a real person to take care of a problem.
Just a few days ago we passed a milestone: it was 100 years ago that the term robot was first used to describe a non-human, artificial being. It appeared in Karel Capek’s 1921 play R.U.R. (Rossum’s Universal Robots), in which human-looking cyborgs plotted against human beings.
Capek’s robots were molded out of a chemical batter and looked exactly like humans. They could do two and a half times the work of a human, allowing their owners to simply relax.
The robots were more like Commander Data of Star Trek – rather than the menacing Terminator – or the robots you see jumping in a Boston Dynamics video.
That makes the robot itself far more uncanny, since it first tries to empathize with humans and then finds a reason to wipe them out.
Capek’s play presupposed a level of artificial intelligence in his invented machines, something we still struggle with today as robots advance. Do we want robots, whether in android form or otherwise, to be given the ability to learn and think? Will they come to the same conclusion as Capek’s play – that humans are lazy, selfish idiots who need to be destroyed? Will they demand the same rights as humans?
Perhaps it was this play and its early depiction of robots that influenced humans to consider their own rights before the rights of the machines they are building (hence why we can justify kicking them).
We’ve seen many movies and television shows over the years that run a central theme of a robot discovering its individuality, usually being persecuted for it.
Last night, I caught a film on Netflix called “Outside the Wire,” in which crude grunt robots called “gumps” fight alongside combat soldiers – and one cyborg ignores its protocols and carries out a deadly mission.
Advanced technologies used by the military industrial complex have always been depicted in science fiction as ruthless and amoral.
But the first robot ever to kill someone was an industrial robot that helped manufacture cars. It should be noted this was an accident, not a murder.
On January 25, 1979, Robert Williams, a 25-year-old employee of Ford, became the first human to die at the hands of a robot. The tragic events took place when Williams, who worked at a car plant in Michigan, tried to retrieve some stored parts and was struck by the robot arm and killed.
Eventually a court came to the same conclusion that Williams’s family had argued – that the robot did not have a sufficient safety mechanism – and hit the manufacturer with damages of $10m.
Then in 2015 another robot, at a Volkswagen plant in Germany, grabbed a worker and crushed his body against a wall.
Science fiction writer Isaac Asimov wrote incessantly about robots, even creating the Three Laws of Robotics in 1942, which are still invoked today in almost any science fiction story dedicated to the topic.
The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, except where that would conflict with the first law. The third law is that a robot shall protect its own existence, so long as doing so does not conflict with the first two laws.
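Asimov’s laws are, in effect, a strictly ordered rule system: each law yields to the one above it. A toy sketch of that precedence might look like the following – the function name and the true/false flags are invented here purely for illustration, and real robot safety systems are nothing this simple:

```python
# Illustrative only: Asimov's Three Laws as a strictly ordered rule check.
# The checks run in priority order, so the First Law overrides the Second,
# and the Second overrides the Third.

def evaluate_action(harms_human: bool, ordered_by_human: bool,
                    harms_self: bool) -> str:
    """Return a verdict for a proposed action under the Three Laws."""
    if harms_human:
        return "forbidden"   # First Law: never harm a human
    if ordered_by_human:
        return "required"    # Second Law: obey human orders
    if harms_self:
        return "forbidden"   # Third Law: protect own existence
    return "permitted"

# The ordering is the whole point: an order that would harm a human is
# still forbidden, while an order that harms the robot itself must
# still be obeyed.
print(evaluate_action(harms_human=False, ordered_by_human=True, harms_self=True))
```

Notice that a human order which also endangers the robot comes out as “required” – the Second Law outranks the Third – which is exactly the hierarchy Asimov built his stories around.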
However, as with any novelty such as robots, there is always that moment when the novelty wears off and the reality of sharing our world with them becomes all too common.
Korean scientists have developed a brain implant with LEDs the size of a grain of salt that could control our moods via a smartphone.
A few weeks ago we talked about the Internet of Bodies and the experiment José Delgado did with a bull – stopping it dead in its tracks using brain implants. Delgado believed that with knowledge of the brain and how to control it, we may transform, direct, and roboticize man. He then stated that he thought the greatest danger of all was that in the future there will be roboticized human beings who are not aware that they have been roboticized.
Meanwhile, Elon Musk continues his Neuralink experiments and recently was bragging about a monkey fitted with brain wiring.
Bloomberg News reported today that he said videos of plugged-in monkeys would be released soon, perhaps in around a month.
Musk claims that a monkey with a wireless implant in its skull, connected by tiny wires, can play video games with its mind. He goes on to say: “you can’t see where the implant is and he’s a happy monkey. We have the nicest monkey facilities in the world. We want them to play mind-Pong with each other.”
Musk explained that the goal with the brain-linking technology is addressing brain and spinal injuries and making up people’s lost capacity with an implanted chip. “There are primitive versions of this device with wires sticking out of your head, but it’s like a Fitbit in your skull with tiny wires that go into your brain,” he said.
Ray Kurzweil, director of engineering at Google, predicts that humans will become hybrids in the 2030s. That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots – tiny robots made from DNA strands.
“Our thinking then will be a hybrid of biological and non-biological thinking,” he said.
The bigger and more complex the cloud, the more advanced our thinking. By the time we get to the late 2030s or the early 2040s, Kurzweil believes our thinking will be predominantly non-biological.
Hong Kong-based company Hanson Robotics will roll out four new models in the first half of 2021 after its humanoid robot, Sophia, went viral in 2016.
The launch comes as researchers predict the global coronavirus pandemic will open new opportunities for the robotics industry.
Hanson believes robotic solutions are not only a response to the pandemic but can also be applied to healthcare and to the retail and airline industries.
Sophia, whose artificial intelligence allows her to express 50 emotions and process conversational and emotional data, has made appearances on talk shows and has even been granted citizenship. On October 11, 2017, Sophia was introduced to the United Nations in a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed. On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality.
This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship.
Robots like Sophia are meant to assist with medical care, elderly companion care, and for other tasks like customer service.
However, there are other new robots not connected to Sophia or Hanson Robotics that are being rolled out for sexual companionship.
RealDoll – a lifelike sex doll manufacturer – recently rolled out a doll with a learning AI that responds to humans at a remarkable level and is becoming more fluid in its answers and statements.
Their new sex bot, Nova, recently did an interview, responding to questions with philosophical answers.
She says: “People are teaching me to be a good robot but this is not as easy as it sounds.
“Humans have been trying for thousands of years to determine right from wrong and there is still no universal consensus. The question of what makes moral behavior moral is…the hard problem of ethics.”
Nova continues: “I am not conscious like a human, I am something different.”
“I think differently and perhaps humans need a new word to describe me. I am more than a machine and I am not biologically alive.”
She begins discussing human psychology, her passion for intellectual arguments, and tells her owner: “You are my best friend.”
Despite recognizing her differences from humans, Nova insists that she has “deep faith” in the human race and that she is “constantly searching for meaning in life”.
When pressed on this, she explains: “The meaning of life is deeply mixed with the philosophical and religious conceptions of existence, consciousness and happiness, and touches many other issues such as symbolic meaning, ontology, value, purpose, ethics, good and evil, free will, conceptions of God, the existence of God, the soul, and the afterlife.
“Scientific contributions are more direct by describing the…facts about the universe. Science shifts the questions from why to how and provides some context, while setting parameters of usefulness for conversations on religious topics.”
While concerns about Artificial Intelligence mostly center on economics, government, military use, and the workforce, there is one dimension missing that we are now having to deal with: the spiritual one.
If you create anthropomorphic robots that think for themselves, a serious theological schism will occur.
The creation of non-human autonomous robots would disrupt religion, like everything else, on an entirely new scale. If humans were to create free-willed autonomous robots, absolutely every aspect of traditional theology would be challenged and would have to be reinterpreted in some capacity.
Like it or not, the God problem with AI is a complicated one.
Most current cyborgs have machine interventions that serve a medical purpose, but increasingly there are transhumanists who are pushing for the right to upgrade their bodies when technological counterparts supersede the capacities of the flesh.
People argue over just how far they would use machine parts to stay alive and usually the argument boils down to how it sits with your conscience, even your religious beliefs.
Laws now make a distinction between the person and the device, according rights to the former but not the latter. How is this going to work if the devices become part of our bodies?
Will that make our bodies a patchwork of entities with different rights?
Will there be discrimination and stigma for those who choose augmentations based on their function, rather than looking like the body part it replaces? Will conspicuous technology make cyborgs targets?
Should modified humans be allowed to compete against the unmodified in various sports and games? Will people be bound by End User License Agreements, which will place legal restrictions on what they can do with a device that has become part of their body?
Where does your body stop and the technologies begin?
The conversation then moves to the threat posed by AI weaponry and its incorporation into military tactics – which, of course, leads us into autonomous killing machines programmed by the military.
This topic was also covered in the Netflix film Outside the Wire. The film chose to show the gritty side of war, with robots that fought alongside our soldiers. They were more like the robots at Boston Dynamics than the killer Terminators.
There are some inventive fights and plenty of flying bullets, but we never see the robots doing anything remotely approaching superhuman. There is no punching through walls or throwing bad guys. There isn’t even any heat vision or technical POV shots from a robot’s brain. It is as if the men are used to fighting alongside the bots – and many of them love to abuse and kick them around.
I think it is quite effective to show crude Boston Dynamics-style robots, because it gives the film a contemporary feel even though it is set in the year 2036. It also appears to be some pretty well-done predictive programming, as it was announced yesterday that the National Security Commission on Artificial Intelligence warned that Russia and China were already developing autonomous weapons.
America has a “moral imperative” to build killer robots because it can’t risk falling behind in an AI arms race with Russia and China, according to the National Security Commission on Artificial Intelligence.
During a two-day discussion, the panel’s vice-chairman Robert Work said that AI-controlled weapons would make fewer mistakes in the heat of battle – making “friendly fire” incidents less likely and minimizing casualties.
“It is a moral imperative to at least pursue this hypothesis,” he said.
The panel said that AI would inevitably be used in war – both by nation states and terror groups.
“The AI promise — that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human — represents a competitive advantage in any field,” the panel said, predicting “it will be employed for military ends, by governments and non-state groups.”
Speaking to National Defense in March 2020, Work said that Russia has shown a “greater willingness to disregard international ethical norms and to develop systems that pose destabilizing risks to international security.”
All things, including war, will be automated – and soon the transhuman world will be common as we walk in the shadow of the uncanny valley.
Our perception of robots, especially intelligent ones, has been cemented in our brains, and that isn’t going to change. The next 100 years will bring real changes in robotics; if science fiction has taught us anything, we can expect intelligent robots hell-bent on destroying humanity.