The past few days have been my struggle with near-crippling imposter syndrome.
And the "shiny object" syndrome.
I feel like I am blogging into your mailbox.
I'll probably wrap up this daily newsletter in December.
Dunno.
Bad intro. Let's take it again.
Hello,
Today, I clicked open one of the old eBooks on AI that's been languishing on my phone.
When I first read Max Tegmark's Life 3.0, I was intrigued by his lucid handling of the big questions surrounding AI.
He doesn't throw terms around without expressly defining them. Mind you, my short praise for his work isn't intended to pass for a book review.
Max is a top-tier physicist who, as I've come to see, is keenly interested in AI ethics and alignment.
After going through some pages again, I felt moved to discuss something about AI research that, when misconstrued (as it popularly is), breeds misconceptions.
If we're to engage with the subject of AI with clarity of understanding, we can't just sweep misconceptions under the carpet. We have to sweep them into a dustpan and send them flying out the door.
Earlier today, my street erupted in shouts and whistles of celebration.
Some team had won a match.
Although I know very little about football, I sure understand that the ball doesn't have a (goal) motive for where it's going.
It's just some air-filled bag.
However, that's not the same with the player.
While the ball is shooting through the air, oblivious to any form of motive, the player is making contact with the ball to ensure it enters the net.
And when he finally does? Well, he's scored a goal.
He accomplished the goal.
The catch is,
No serious AI scholar worries about AI becoming evil.
Yes. None.
You know what they all worry about? Goals.
Yes. Goals.
Well, intelligent agents (us, for example) don't just do things aimlessly.
We have motives. We have reasons.
That's why we are intelligent in the first place.
We don't fear that we might build AI that is bad or evil.
We fear that we might build AI that has goals that don't align with ours.
That is, we build AI with the goal of accomplishing a task. But while acting out its task, the AI starts to do nasty things.
Humans do that a lot to start with.
When we cut through forests to say, build roads, we kill many smaller animals and destroy habitats in the process.
We didn't really mean to.
Does anyone go into the forest with an axe just to cut down trees so the birds won't have a nest?
No. We just want a road.
But that giant anthill was bulldozed out of the way.
In trying to accomplish our own goal, we trampled over other creatures.
We fear that is what may happen to us with superintelligent AI.
There are many examples of this scenario in movies and thought experiments, where humans just seem to get in the way of an AI achieving its task.
In one, the AI is asked to build some engineering structure and it goes all out to achieve this, sapping our natural resources, utilizing every land and space and turning the earth into one giant factory.
We even fear that AI may stop us from shutting it down, because powering down would keep it from achieving its goal.
So can't we just say, AI, you know, do your thingy but don't kill us, OK?
That, right there, is goal alignment.
Well, it is not as easy as it sounds.
There are three big hurdles.
We have to figure out how to make AI:
Understand our goals,
Adopt our goals, and
Retain our goals.
Understanding human goals only sounds easy because, well, we are human and we do it by default.
But if you tell an autonomous vehicle to take you from Port Harcourt to Lagos as fast as possible, you might get there, vomiting with police cars trailing you.
How is that vehicle to understand that it should go fast, but not so fast as to make your stomach turn; that it should obey all traffic rules, like speed limits and traffic lights; and that it shouldn't bump into other cars?
You said to get there as fast as possible.
But you don't really mean that.
You mean, take me to Lagos as fast as is safely possible.
You ask for one thing but you really mean another.
Ouch.
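To make that concrete, here's a tiny Python sketch of my own (it's not from Tegmark's book, and the plans, penalty weight, and scoring are all made-up illustrations). It shows how a naively specified objective rewards the wrong driving plan:

```python
# Toy illustration of an under-specified goal. The naive objective only
# counts travel time; the intended objective also penalizes the things
# we silently assumed (no speeding tickets, no car-sick passengers).

def naive_score(plan):
    # "As fast as possible": lower is better, only hours matter.
    return plan["hours"]

def intended_score(plan, penalty=100):
    # What we actually meant: fast, but heavily penalize the nasty
    # side effects. The penalty weight of 100 is arbitrary.
    return plan["hours"] + penalty * (plan["tickets"] + plan["sick_passengers"])

reckless = {"hours": 6, "tickets": 3, "sick_passengers": 1}
sensible = {"hours": 8, "tickets": 0, "sick_passengers": 0}

# Under the naive objective, the reckless plan "wins"...
assert naive_score(reckless) < naive_score(sensible)
# ...but under what we really meant, the sensible plan wins.
assert intended_score(sensible) < intended_score(reckless)
```

The gap between those two scoring functions is exactly the gap between what you said and what you meant.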
In understanding our goals, we have to teach AI not just what we do, but why we do it.
As Max explains, a firefighter who dashes into a flaming house to save a baby isn't doing that because it's cold outside.
He's not just exercising.
He's risking his life to save another's.
It's complex to teach an objective machine subjective principles like self-sacrifice and the value of human life.
Really hard.
We have to give AI a grounded conceptual framework of our human model of the world.
We have to really answer questions that seem well-understood and maybe, trivial.
You can't teach a computer to calculate a square root if you don't clearly and concretely understand how to calculate one and what the result means.
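That point is literally true for square roots: before a machine can compute one, someone has to pin the fuzzy idea down to a concrete procedure. Here's a minimal Python sketch using Newton's method (my example, not the book's):

```python
# To teach a machine "square root", you need a precise procedure.
# Newton's method: repeatedly average the guess with x / guess,
# which converges toward sqrt(x).

def newton_sqrt(x, tolerance=1e-10):
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # refine the guess
    return guess

assert abs(newton_sqrt(2) - 1.41421356) < 1e-6
```

Writing those few lines forces you to answer questions you'd otherwise hand-wave: what counts as "close enough", where to start, when to stop. Teaching a machine justice demands the same precision, on a vastly harder question.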
So,
Why should one be just?
Why are reciprocity and gratitude good?
Why shouldn't you show me a clip of someone blowing their nose while I'm slurping spaghetti?
Phew.
Let's imagine we nail this.
AI understands human goals, perfecto.
But then, AI has to adopt our goals.
How can we make AI think and say, hmm, I like human goals. I'll run with that.
Because, although we understand our own goals, we haven't really been adopting them ourselves.
A simplistic example might do:
Humans are programmed in our DNA to have sex and reproduce.
We know that. Very well.
But then we also use birth control.
That's defiance of our biological goal to procreate at every chance we get.
Dissonance.
See?
AI that understands, adopts, and cherishes human goals as its own would be lovable.
The last part would be goal retention.
As human existence ebbs and flows alongside artificial intelligence, what reasons would keep AI from discarding its once-cherished, human-aligned goals and running solo?
How can we teach AI to say, uphold justice today and also do it tomorrow?
These questions are grave. Whether we answer them or not, whether we solve them or not, they are precursors of a significant shift in how we perceive ourselves in relation to the tools we craft.
And how perhaps, our greatest tools perceive us.
Would we be pesky pests?
Or, pleasant partners?
Over to us.
Prompt of the Day: Most Important Info
Copy and paste this into ChatGPT:
Give me the 5 key takeaways from this: [TEXT]
Tool of the Day
Want to create memes 10x faster? Why haven't you tried Supermeme?
Did You Know?
Elon Musk recently revealed in a tweet that Neuralink is working on a Vision chip that'd help solve many of the problems that lead to blindness.
Image of the Day
I hope you enjoyed today's issue.
You can tell me what you liked or didn't in the comments or by replies.
Thanks for your support.
Bye until tomorrow.
With love and ink,
Emmanuel