
2/9/23: DIGITAL FARM – D.I.Y. A.I. A.I. “O” W/ MATTHEW JAMES BAILEY

Posted on February 9th, 2023 by Clyde Lewis

Everyone is talking about the latest AI project, ChatGPT, and the responses have ranged from excitement to terror. In fact, it has become such a cultural phenomenon that the site is operating at overcapacity and is difficult to access. While this powerful language technology doesn’t engage in predictive analysis, it has made clear that AI’s ability to gather massive amounts of data can lead to better decision-making. Can we safely assume that ChatGPT will give us sound information and that it won’t somehow create something erroneous? Tonight on Ground Zero, Clyde Lewis talks with Matthew James Bailey, an emissary for ethical AI, about DIGITAL FARM - D.I.Y. A.I. A.I. “O”.

SHOW TRANSCRIPT:

Everyone is talking about the latest AI project, ChatGPT, and the responses have ranged from excitement to terror. In fact, ChatGPT has become such a cultural phenomenon that the site is operating at overcapacity, and you can’t even get on right now.

I tried this morning and I could not get on.

In the meantime, AI is already impacting various industries, but none more visibly or dramatically than the sports business. The reason is that predicting future outcomes is essential to everything in sports.

Think about some of the decisions that need to be made in real time. This type of predictive analysis based on data analytics has been around for a while, introduced by the Oakland Athletics and general manager Billy Beane, who with a payroll of $44 million was able to compete favorably with teams like the Yankees and their $125 million payroll. Beane was famously played by Brad Pitt in the movie “Moneyball,” based on the book of the same name about Beane.

The basic premise of Moneyball was that statistical analysis, such as slugging percentage and on-base percentage, was a more effective way to predict success than the business intuition of baseball insiders, the scouts and managers. The owner of the Oakland Athletics, Lew Wolff, took a big gamble in giving Beane the latitude to test his thesis at a time when it was completely unproven.

All the major sports leagues are incorporating AI into everything they do, particularly from a fan-engagement perspective.

The NFL has already joined with Amazon to gather AI insights. For example, they have launched a tool that combines seven AI models, including a new model that predicts the value of a pass before the ball is thrown, to evaluate quarterback passing performance. The NBA is also incorporating AI into an engagement tool that provides fans with a deep analysis of the performance of teams and players in nearly every conceivable situation.

Predictive programming takes on a whole new meaning when A.I. is involved.

While ChatGPT currently doesn’t engage in predictive analysis, it has made clear that AI’s ability to gather massive amounts of data can lead to better decision-making.

ChatGPT is a powerful language model developed by OpenAI that is capable of generating human-like text.

The viral AI chatbot, launched by OpenAI at the end of 2022, can write anything you want, true or false. Give it a prompt like “Who is Clyde Lewis?”

It actually wrote an amazing bio about me. It even wrote about things I rarely think about, or would not even think about putting in a bio.

It was also quite interesting that it did not just regurgitate what is written on Wikipedia, or on fake Wikipedia sites that say I am a millionaire or that I believe I see demons.

If you recall, I went to war with Wikipedia over lies they allowed to be published about me. I was harassed by a pedophile stalker who hijacked my Wikipedia page and locked it.

In 2014, I was alerted on my Twitter account that my Wikipedia page had been altered anonymously by someone in the House of Representatives. I was never told who altered the page, only that several other people were targeted, including former Vice Presidential candidate Sarah Palin.

When I said I would sue them, they said I had no right to, that since I am a public figure it falls under what is known as fair comment. So I encouraged my audience to vandalize my page, so that they had no choice but to shut it down.

Now if you look up Clyde Lewis on Wikipedia, you read that he is an Olympic swimmer from Australia.

I once had a guest on the show who tried to relate to me by talking about swimming. I thought he was joking, but apparently he used Wikipedia to get information about me.

My fans know that I am most certainly not in that good of shape to be an Olympic swimmer.

I typed Ron Patton into ChatGPT to see if it knew him. It did: it called him a writer, publisher and producer who has been influential in providing documented evidence of mind control activity carried out by the government.

I was blown away.

Then I selected regenerate, and it rewrote the whole thing with different words and included other facts about me and Ground Zero.

It even said that the show has a cult following and has been recognized as the number one parapolitical and paranormal talk show.

I was flattered by an A.I.

Computer science experts, and even the universities themselves, say this technology is only the beginning of a new era of learning.

ChatGPT is a writing tool, and many educators are worried that students will use it to avoid writing essays. Imagine having an AI write your thesis.

But it can be compared to a flight simulator: even if the AI helps you fly, in reality you still have to know how to land the plane in the real world.

ChatGPT was developed on a data set larger than any of its competitors. It works by observing patterns in texts from around the world, including books, articles and web pages, and learning which words are most likely to appear together.

But it’s not a database: it creates new prose because it’s simply looking for learned characteristics. It learns only from what it’s taught, so it can still make mistakes, have gaps in its knowledge and have inbuilt bias.
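ChatGPT’s actual architecture is far more complex than anything that fits on a page, but the core idea of learning which words tend to follow which can be sketched with a toy next-word predictor. Everything here, the corpus included, is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the web-scale text a model trains on.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows each word (a bigram model: vastly simpler than
# a transformer, but the same core idea of learning which words are
# likely to appear together).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat", since it follows "the" more often than any other word
```

Note how the model can only ever echo patterns present in its corpus; ask it about a word it never saw and it has nothing to say, which is exactly the "gaps in its knowledge" problem on a miniature scale.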

So now we know why the people at Davos want control of the narrative. That way, AI programs that analyze data will eventually have the power to rewrite history, and soon people will develop trust in the machine instead of researching matters using libraries and other sources of information.

Here lies the danger in this new toy.

Dr Cheryl Pope, a senior lecturer at the University of Adelaide’s School of Computer and Mathematical Sciences, said ChatGPT was great for writing a first draft, but the need for editing and fact-checking by humans was unlikely to be replaced anytime soon.

If the person asking the A.I. to write a paper knows nothing about the subject, how can they critique it?

How can they find errors? This is where ChatGPT can be broken.

Again, this is a tool, not a replacement.

ChatGPT, I am sure, will be demonized for other reasons. It is certainly going to put a dent in the advertising model of websites that provide information.

We’ve become accustomed to using Google to direct us to relevant web pages. But why search those pages, generating ad revenue for the company in the process, if artificial intelligence can answer our questions straight away?

We use Amazon for a shopping experience that features sponsored product listings. What will happen to them when AI immediately suggests the perfect product?

We use Facebook or Instagram to connect with our communities.

What will happen if AI becomes the one making these connections and generating the majority of posted content? This new generation of AI removes the reasons that users spend time browsing the web. But our time and clicks are monetized, which poses a problem to the very heart of the internet: its advertising model.

Someone is going to lose money, and that is why the A.I. race is on to dominate with this new chat model.

Google offered a glimpse of its new artificial intelligence chatbot search tool on Wednesday at a European presentation that sought to underscore its prowess in both search engine and AI tech, a day after its archrival Microsoft unveiled its own search chatbot aimed at eroding Google’s dominance.

But a highly visible mistake made by Google’s bot, named “Bard,” in the Monday blog post announcing the product, along with the fact that Google hasn’t let outsiders test the tool yet, contributed to concerns among the company’s investors. They sold off shares, leading to a nearly 8 percent drop in the company’s value Wednesday.

The dueling announcements, the hyper-focus on whether Google is slipping behind Microsoft and the twitchy Wall Street reaction show how the battle over AI has become the central obsession of the tech industry.

The chatbot search tools announced by the companies this week are different from regular search engines in that they create longer, contextualized answers in response to queries, cutting out the need for people to click through to a publisher’s website.

We are now witnessing a sort of Arms Race over whose robot chat will be victorious.

As any comic book fan knows: with great power comes great responsibility.  With artificial intelligence tools increasing in sophistication and usefulness, people and industries are eager to deploy them to increase efficiency, save money, and inform human decision-making. But are these tools ready for the real world?

The proliferation of AI raises questions about trust, bias, privacy, and safety, and there are few settled, simple answers.

As AI has been further incorporated into everyday life, more scholars, industries, and ordinary users are examining its effects on society. The academic field of AI ethics has grown over the past five years and involves engineers, social scientists, philosophers, and others.

We live in a society that functions on a high degree of trust. We have a lot of systems that require trustworthiness, and most of them we don’t even think about day to day.

To trust a technology, you need evidence that it works in all kinds of conditions, and that it is accurate.

One of the biggest complaints I have about GPS is that I live in a place where a locked gate has been installed to keep the homeless out.

This gate has now blocked any access to my home, so if I give delivery drivers my exact address, they wind up at the locked gate. Some of them will just give up and say that they cannot deliver to the address.

They are incapable of looking to see if there is a back entrance, and when I give them directions they always say, “Well, that is not what the GPS says.”

I feel that some people could no longer get around at all without GPS, because they trust it so much.

We are allowing the machines to run us, and we are not running the machines. They are supposed to be tools, not the sum of everything we are looking for.

But they have been with us so long that people forget that computers can malfunction and make mistakes, and that there are many trolls out there who intentionally pass around bad information online as clickbait.

If people agree with what they see, and it resonates with their judgment, most people will trust it.

This is where the evil starts.

We already have ways of ensuring trustworthiness in food products and medicine: food that is supposedly healthy is approved by the FDA, or it is blessed by a rabbi as kosher.

However, there are certain things people eat that aren’t healthy, and even though the FDA approves them, they can be detrimental to your health.

Same with medicines. Iatrogenic artifacts have been known to kill patients, and we are now wondering just what effects the COVID vaccine has had on the population.

But people trusted it because they were either coerced or encouraged to believe that it was okay to use.

Today, many products come with safety guarantees, from children’s car seats to batteries. But how are such guarantees established? In the case of AI, engineers can use mathematical proofs to provide assurance. For example, the AI that a drone uses to direct its landing could be mathematically proven to result in a stable landing.

Even advanced civilizations have been known to crash their UFOs, near military bases at times. I am sure their “math is flawless,” but there is one constant in the universe, and that is incompetence.

Take a look at self-driving cars. Roads are full of people and obstacles whose behavior may be difficult to predict. Ensuring the AI system’s responses and “decisions” are safe in any given situation is complex.

A tiny amount of noise—for example, something in an image that is imperceptible to the human eye—can throw off the decision-making of current AI systems.

Though we may call it “smart,” today’s AI cannot think for itself. It will do exactly what it is programmed to do, which makes the instructions engineers give an AI system incredibly important. If you don’t give it a good set of instructions, the AI’s learned behavior can have unintended side effects or consequences.

For example, say you want to train an AI system to recognize birds. You provide it with training data, but the data set only includes images of North American birds in the daytime. What you have actually created is an AI system that recognizes images of North American birds in daylight, rather than all birds under all lighting and weather conditions.
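As a rough sketch of how that kind of coverage gap could be caught before training, here is a toy audit over made-up image metadata; every field and value below is hypothetical:

```python
# Made-up metadata for a hypothetical bird-photo training set.
dataset = [
    {"species": "cardinal", "region": "North America", "lighting": "day"},
    {"species": "blue jay", "region": "North America", "lighting": "day"},
    {"species": "robin",    "region": "North America", "lighting": "day"},
]

# Audit which conditions the data actually covers; any gap found here
# becomes a gap in what the trained model is able to recognize.
regions = {img["region"] for img in dataset}
lighting = {img["lighting"] for img in dataset}

print(regions)   # only North America is represented
print(lighting)  # only daytime images, so night recognition was never learned
```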

Instructions become even more important when AI is used to make decisions about people’s lives, such as when judges make parole decisions on the basis of an AI model that predicts whether someone convicted of a crime is likely to commit another crime.

Instructions are also used to program values such as fairness into AI models. For example, a model could be programmed to have the same error rate across genders. But the people building the model have to choose a definition of fairness; a system cannot be designed to be fair in every conceivable way because it needs to be calibrated to prioritize certain measures of fairness over others in order to output decisions or predictions.
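What “the same error rate across genders” means can be shown as a simple check. The groups, predictions and outcomes below are entirely invented for illustration, and real fairness audits involve many more measures than this one:

```python
# Made-up (group, model_prediction, actual_outcome) records for two groups.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]

def error_rate(group):
    """Fraction of this group's predictions that disagree with the outcome."""
    rows = [(pred, actual) for g, pred, actual in records if g == group]
    wrong = sum(1 for pred, actual in rows if pred != actual)
    return wrong / len(rows)

rates = {g: error_rate(g) for g in ("A", "B")}
print(rates)  # both groups come out at 0.25, so this toy model satisfies error-rate parity
```

Equalizing this one number can still leave other measures unequal (for example, the mix of false positives versus false negatives in each group), which is exactly why a system has to prioritize some definitions of fairness over others.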

Today’s advanced AI systems are not transparent. Classic algorithms are written by humans and are typically designed to be read and understood by others who can read code. A.I. architectures are built to automatically discover useful patterns, and it is difficult, sometimes seemingly impossible, for humans to interpret those patterns. A model may find patterns a human does not understand and then act unpredictably.

Can we safely assume that ChatGPT will give us sound information and that it won’t somehow create something that is erroneous?

I don’t know if we can say for sure in its infancy, but at the moment it is an interesting turn of events in A.I. technology.


 SHOW GUEST: MATTHEW JAMES BAILEY

Matthew James Bailey is an internationally recognized pioneer of global revolutions such as Artificial Intelligence, Smart Cities and The Internet of Things.

Matthew is the author of the playbook for the Age of AI - Inventing World 3.0 - Evolutionary Ethics for Artificial Intelligence™ - https://aiethics.world/the-book. He has been recognized as a Who’s Who in Artificial Intelligence and is a Visiting Scholar to the National Institute of Aerospace and NASA. Matthew is the founder of AIEthics.World - https://aiethics.world - an organization providing leadership training for artificial intelligence and new inventions such as Ethical AI and a new ethical genome for AI.

During his career, Matthew has been privileged to meet famous global leaders such as Steve Wozniak, Sir David Attenborough, John P. Milton and Professor Stephen Hawking. He has advised Fortune 100 companies as well as prime ministers, cabinets and representatives of G7 Countries in technology revolutions.  He has assisted multiple territories and global technology companies to successfully position themselves into the digital future. Matthew is a regular keynote speaker and has spoken on BBC Radio.
