These days, it is easier to do everything.
We don’t need to go to the cinema if we don’t want to. Netflix gives us hundreds of shows for chicken change.
Amazon and Jumia let us shop without leaving our front doors.
We can make friends with people on the other side of the world without ever meeting them, run a business without an office, and pay for stuff without touching cash.
It is so exciting when you think about it.
Well, I forgot another example …
We can even kill without pulling a trigger.
You don’t know? Let me paint a scenario …
Person A wants Person B dead, for God knows what reason.
Person A could load his pistol and fire at Person B.
Or he might feel that this is old-school and use a LAW instead.
LAW is short for lethal autonomous weapon.
Just as the name reads, they’re the kind of weapons that can kill. On. Their. Own.
These kinds of weapons can search for predefined targets and … engage. Engage is fancy for attack, fire, shoot or whatever your fighting vocabulary looks like.
There is nuance to this thing, though.
Some are offensive, while others are defensive. And there are varying degrees of human involvement.
Still, these things kill with minimal human “interference”, and we can agree that humans don’t do the active killing part.
What do we know about these autonomous killer weapons, and why are they worrying?
Cover me, bro (bot)
Depending on how you define them, autonomous weapons have been used in defense since the 1600s.
Wait. What?
The U.S.’s autonomous weapons systems policy defines LAWs as weapons that, once activated, can find and engage targets without further human intervention.
However, some scholars are more liberal and push the envelope of the definition.
By their definition, a weapon can be deemed to be autonomous if it can “release lethal force” without the “operation, decision and confirmation of a human supervisor”.
They also argue that weapon systems requiring minimal human supervision can still be considered autonomous.
Land mines from the 1600s and naval mines from the 1700s fit here. They can release lethal force and they can be triggered automatically.
We have had automated active protection systems that defend ships from missiles, incoming aircraft, and rockets.
There have also been stationary sentry guns in South Korea and Israel that fire at humans and vehicles, as well as missile defense systems that work without a human ever putting a finger on a trigger.
The main reason for using this type of LAW is that human fingers are not as fast as they need to be.
Machines respond faster than humans, can fire the instant they sight a target, and will keep doing so for as long as needed without tiring. So why not?
Clicking Auto-Kill
Things get scarier when it comes to offensive warfare.
And it is here that all the major developments have been happening, from fly-sized killer drones to artificially intelligent missiles.
I have read that Ukraine employed autonomous weapons and that Morocco plans to incorporate AI into military gear.
AI-guided drone attacks in warfare have begun. America’s DARPA is building a swarm of 250 of these things for use by the U.S. Army.
What do people think?
Anti-LAW groups are torn between regulating their use and banning them outright.
In 2015, Stephen Hawking, Elon Musk, Steve Wozniak, Demis Hassabis, and Noam Chomsky were among over 1,000 experts who co-signed a letter calling for an end to the AI arms race and autonomous weapons.
The Holy See has been vocal about LAWs and has frequently called for a ban on autonomous weapons.
In 2019, the UK, the US, Russia, and Israel were among a handful of nations that opposed a ban at a UN meeting. America said autonomous weapons were being used to save civilian lives.
Some other people want the regulation path. If we are to use these weapon systems, there have to be codified rules. And the rules have to apply to everyone.
Who can have these weapons, what level of human involvement should be required, how would their use be monitored …?
Thinking before clicking/killing …
Before now, we knew two basic ways that inanimate things could kill humans.
Either another human used the inanimate thing in a way that caused death, to someone else or to himself. Like shooting a person with a pistol or drinking from a poisonous vial.
Or the thing and the human came into contact in a way that resulted in death. Like someone falling headlong onto a large rock.
We understand that any intentional death involving an inanimate thing would have to be caused by a human.
Autonomous weapons are clearly an exception.
We could deploy a fleet of killer drones and they’d kill on their own.
Come to think of it, is it good for a thing to be able to kill a human on its own?
Should we give inanimate objects the ability to decide if a human lives or not?
These weapons blur the line between who, or what, is responsible for the killing.
Is it the human operator? Or the weapon?
Is it good to deploy fully autonomous weapons (FAWs) when they drastically lower the number of soldiers we need to send, and the cost of sending them, further incentivizing war?
What could go wrong?
Let’s think of the risks.
Autonomous weapons malfunctioning would be disastrous.
Terrorism could be hauntingly crueler and more systematic.
Any mistake could well mean the end of a human life.
I believe that human-level artificial intelligence will soon be born. And I am not alone.
I am also not alone in the view that a misalignment of goals between the AGI and us could lead to unpalatable consequences.
Considering that we humans have earned ourselves the distinction of being the “earth’s deadliest species”, wouldn’t an environment-conscious AGI decide to cure the globe of its plague?
And autonomous weapons could be such a cleanser.
To: Humans of the species, sapiens.
Should you build things just because you can?
Humans and their destructive creations. 🤦🏽‍♀️
🤯