5 MINS READ
If you're new here, we roll with AI.
And tech and stuff like that.
I am sure you'll like it.
In today's issue, we'll do some debunking and annoying fact-checking.
Subscribe to the newsletter if you haven't, and share it with a friend.
Read on.
It is no news that AI is the news.
AlphaGo can learn to play board games on its own, Tesla is building autonomous (self-driving) cars, and the new GraphCast can predict weather accurately.
But what can AI programs not do that we might think they can?
The truth about AI's capabilities is obscured in this sea of discoveries and news.
Can we separate the ruse from reality?
Um, yeah.
Myth #1: AI Can Take Over The World
Wait, what?
Haven't I been hinting at this in previous issues?
And doesn't Elon Musk think so?
OK. Chill.
This one is #1 because it is the most pervasive and tricky.
Let's clarify one thing:
AI models are computer programs.
Yes.
And computer programs are just a sequence of instructions to achieve a task.
The catch is:
The behaviour and purpose of AI models depend on how humans design them.
Big names like Elon Musk, Bill Gates and Stephen Hawking have expressed fears about superintelligences that develop to a point where we can't control them.
This could be the case if the AI model is assigned a malicious purpose.
Things could go awry.
The real problem is:
It is difficult to correctly define what we want AI systems to achieve.
When we try to communicate our human objectives to computer systems, it is easy for them to misunderstand and apply them too literally.
I explored this in our 19th issue, Goals, goals, goals.
Let me give an example here.
Many online video platforms use AI to help make the experience smoother.
In some cases, they want to show the kinds of videos that the user wants to see, so that the user watches each video from start to finish.
So they give an AI that objective.
Problem is, the AI algorithm may start favouring short, sensational videos, or videos that reflect the user's strong opinions.
The user is most likely to watch these kinds of videos from start to finish, but they aren't necessarily the videos the user actually wants to see.
How do you explain to a computer what you mean by "videos the user wants to see"?
See? It's not that simple.
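To make this concrete, here's a toy sketch in Python. Every video, number and the objective itself are invented for illustration; a real recommender system is vastly more complex, but the failure mode is the same:

```python
# Toy sketch of a misspecified objective.
# All videos and scores here are made up for illustration.

videos = [
    {"title": "In-depth documentary", "length_min": 45, "finish_rate": 0.20},
    {"title": "Sensational 30-second clip", "length_min": 0.5, "finish_rate": 0.95},
    {"title": "Balanced news report", "length_min": 10, "finish_rate": 0.40},
]

def objective(video):
    # What we *told* the AI to optimise: the probability that
    # the user watches the video from start to finish.
    return video["finish_rate"]

# The "AI" simply picks whatever scores highest on the stated objective...
best = max(videos, key=objective)
print(best["title"])  # → the short, sensational clip wins

# ...even though "videos the user wants to see" was what we actually meant.
```

The program does exactly what we asked, perfectly. The problem is that what we asked is not what we meant.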
If you have bad AI on the scene, it resulted from one of two basic scenarios:
We actually design it to be bad AI.
We design it to do Task A but don't define things well enough. So in trying to do Task A, it starts doing something else. And that something else is something bad.
What I am saying is,
We can't have AI systems that spontaneously turn evil like in the movie I, Robot (2004).
No. Thank God.
Read more about this: https://emmanuelpaulmaah.substack.com/p/goals-goals-goals-issue-19
Myth #2: AI Might Have Feelings
In our 20th issue, we talked about dating computers.
In the two movies we discussed, Her (2013) and Blade Runner 2049 (2017), the protagonists fell in love with AI programs.
Even today, in the real world, there are people falling in love with AIs.
The AIs are charming and loving and awww.
Amazon Alexa can respond with intonations that express emotions like excitement, disapproval or disappointment.
Microsoft's Bing chatbot was in the news one time for using aggressive words at some users.
It feels like AI has feelings.
But is it really the case?
Even before we started studying AI as a discipline, people had imagined creating emotional creatures.
This idea influenced movies like Pixar's WALL-E (2008).1
However, we're still a long way off.
The good/bad news is that:
AI doesn't have feelings.
They may appear to be gloomy or joyful but these emotions are simulated.
These pseudo-emotions are the work of someone in a lab.
What's important is how we feel and relate to these expressions of emotion.
Like Theodore in Her (2013), who discovers his AI lover also "loves" 600+ other people, will it break our hearts when we find these feelings are not real in the way ours are?
Read AI Girlfriends for an unspoiled review of Her (2013): https://emmanuelpaulmaah.substack.com/p/ai-girlfriends
Myth #3: AI Is Smarter Than Us and Functions Like the Human Brain
This one might be hard for some to swallow.
After all, GPT-4, the latest AI model from OpenAI, can pass several standardized exams.
Without attending the classes!
So what is the guy talking about?
The idea is, AI was inspired by the human brain.
But it is not even a copy or a simulation of the brain.
At least not yet.
Our current AI systems are not that intelligent.
They are very specialised: very effective at specific tasks, and not much else.
“The most intelligent computer systems today have less common sense than your cat” — Yann LeCun, Chief AI Scientist at Meta.
The goal of the field is to use computers to solve problems that would normally require human intelligence.
It might involve "seeing" and interpreting visual data, or "hearing" and recognising language.
There's a field deep within AI called deep learning. In deep learning, researchers use special algorithms called neural networks to attempt to mimic how our brains function.
However, AI still doesn't come close.
Intelligence is measured in different ways and AI just ticks a few boxes.
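To see why "brain-inspired" is not "brain copy", here's what a single artificial "neuron" actually is: a bit of arithmetic. A minimal Python sketch with made-up weights:

```python
import math

# A single artificial "neuron": a weighted sum of its inputs, passed
# through a squashing function. The weights here are arbitrary,
# chosen purely for illustration.

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into the range (0, 1)
    return 1 / (1 + math.exp(-total))

output = neuron([0.5, 0.8], weights=[0.9, -0.3], bias=0.1)
print(round(output, 3))
```

A neural network is millions of these stacked together. Impressive, but it's multiplication and addition, not neurons firing in a skull.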
Talking about specialization,
We can divide AI into narrow AI and general AI.
Narrow AI does a narrowly defined task very well, while general AI would let machines apply knowledge and skills across different tasks and contexts.
We have superhuman narrow AI that outperforms 100% of humans.
No one can predict protein structures better than AlphaFold.
But the most advanced general AI we have are OpenAI's ChatGPT, Meta's LLaMa 2 and Google's Bard, and they perform roughly on par with an unskilled human.
So, in recap:
AI doesn't have free will. Its behaviour and purpose are designed by humans.
AI doesn't experience emotions like we do. All of it is simulated.
AI is brain-inspired. Not a brain copy.
I think that's all. *Curtain closes*
Prompt of the Day: Invitation Email
Copy and paste this into ChatGPT:
We're hosting a [e.g., WEBINAR/VIRTUAL EVENT] next week, and we need to send an invitation email to our email list. Can you help me write an inviting and informative email that includes the [e.g., EVENT DETAILS/SPEAKERS/REGISTRATION LINK]? Our audience is [e.g., KNOWLEDGEABLE/BUSY/ENGAGED], and we want to make the email [e.g., CONCISE/ENTICING/INTERACTIVE]
Tool of the Day
Room AI can help you with interior design.
Did You Know?
OpenAI is seeking extra funding from Microsoft as it pursues the development of a more advanced AI model, GPT-5.
Thanks for reading.
I think I'll have to explain the levels of AI, and better explore narrow and general AI in another issue.
If you think so too, you can tell me in the replies or comments.
I hope you liked this issue.
Thanks for your support.
With love and ink,
Emmanuel.
I started watching WALL-E yesterday and I'll write a review soon