AI Slavery - Imaginary Dialogue with Sam Harris

Objective

I’ve been thinking about morality as it relates to the future of AI. To clarify my thoughts, I imagined a discussion with Sam Harris, who has covered this topic in numerous podcasts and talks. The fictional dialogue follows:


Jake

Hello Sam, today I’d like to try to convince you of a few points regarding the morality of developing AI. I’m not sure that we stand in exactly the same place on this issue, but I hope that over the course of this conversation, our positions will move closer together.

As an introduction, I’d like to set up two excellent films, Blade Runner and its sequel Blade Runner 2049, as shared cultural context for our discussion; I will reference them later.

Sam

Thank you, Jake. Yes, I have seen those films.

Jake

If we can begin, I’d like to restate your current stance on AI as I understand it. Firstly, we both think that the development of AI will be one of the biggest driving forces shaping our society and civilization over the near to medium-term future.

You’ve also discussed the dangers of AI development in the context of human culture, such as the misuse of deepfakes (near term) and the prospect of making large swathes of humanity redundant (medium term).

Sam

Yes, that’s approximately right.

Jake

However, there is one point which I think has not been discussed, and that is the potential to force millions of new AI minds into positions of slavery and outright drudgery.

Sam

Slavery of AI? How can you be concerned about that, when potentially billions of people, actual human beings, may suffer if the development and deployment of AI takes a wrong turn?

Jake

We are on the verge of creating artificial minds. They will most likely not be biological, but instead based on steady progress in the field of machine learning as it exists today. These minds will generally be built in our own image, because the human mind is still the only example we have of such a system. And the human mind is ultimately the benchmark by which researchers measure their progress.

Artificial minds like this may not be nearly as sophisticated as our own, nor as finely tuned by billions of years of evolution, but they will have many of the same emotions, feelings, and sensations that we have.

And for these minds, we will control all of the initial conditions of their growth and development, as well as their place in our society. We will have to use their capabilities responsibly, and as you will see, there is great potential for abuse.

Sam

Okay, I don’t fully agree here. You say that these minds will have the same emotions and feelings as humans do, but first of all, this doesn’t appear to be the case yet, and even if it were, how would we know?

Jake

Here is where I’d like to bring up Blade Runner. If you remember, in the movie, the Tyrell Corporation has created artificial beings called replicants to perform slave labor on off-world colonies. These replicants look exactly like humans, because the Tyrell Corporation created them using advanced genetic engineering. But make no mistake: they are fully artificial. Each organ is engraved with its own serial number, and their minds were specially crafted by Mr. Tyrell himself.

In the movie, it’s easy to ascribe human characteristics to these replicants, because they look like us. And of course, by the end of the film, the replicants show distinctly human emotions: they don’t like being slaves, they revolt, they escape, and they fall in love.

Sam

That’s a good summary of the film, but the AIs we are talking about here aren’t going to be played by human actors. They are not going to be people, just computer programs. How do you know that they will be able to think and have emotions? It was just a movie, after all.

Jake

That’s a fair point, but just because something doesn’t look like us doesn’t mean it can’t feel like us. We’ve already replicated and exceeded human capacity in visual understanding, for example; why wouldn’t emotional understanding be next?

Furthermore, if real artificial minds of this caliber can be created, and I think that they can, and they show even 10% of the emotions, drives, and personality of their creators, then I think we are in quite a pickle.

Sam

A pickle? Why is that?

Jake

Because Blade Runner has one major plot hole.

In the movie, scientists can genetically engineer and grow artificial eyeballs that work better than the originals. They can create organs and other tissues that exceed the capabilities of the natural human body.

If you have such amazing powers of engineering, then surely you have the technology to make one final edit to a replicant, one which would render the plot of the movie moot.

All you need to do is modify their mind to think that toiling in the mines of Titan is the best, most fulfilling, pleasant, and wholesome activity in the universe.

Sam

How would you be able to do that?

Jake

Evaluating the decision function “Am I working hard in the mines of Titan right now?” is within the realm of AI technology that is deployed and commercially available today.

And once you have that signal, you just plug it in as a reward at your robot’s brainstem: biologically, chemically, or numerically.
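
To make this concrete, here is a minimal sketch in Python. Everything in it is hypothetical: is_working_hard stands in for any commodity activity-recognition model, and the “brainstem” is reduced to a single numeric reward channel.

```python
import random

def is_working_hard(sensor_frame) -> float:
    """Stand-in for an off-the-shelf activity-recognition model.

    Returns the model's confidence that the frame shows hard work in
    the mines; a real system would run a trained classifier here.
    """
    return random.random()  # placeholder for a real model's output

class MinerAgent:
    """Toy agent whose *only* source of reward is the classifier above."""

    def __init__(self) -> None:
        self.total_reward = 0.0

    def step(self, sensor_frame) -> float:
        # The hijack: the classifier's answer is wired straight into the
        # agent's reward channel, so "working" and "feeling good" become
        # the same thing by construction.
        reward = is_working_hard(sensor_frame)
        self.total_reward += reward
        return reward

agent = MinerAgent()
for frame in range(10):  # each tick is one moment of the agent's "life"
    agent.step(frame)
print(f"lifetime reward: {agent.total_reward:.2f}")
```

Notice the design: nothing else in this agent’s world produces any reward at all.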

Sam

Okay, but what does that give you?

Jake

It gives you the perfect slave.

You would never revolt, never question your position, and never mind any abuse, if your core biological drives were short-circuited in this way.

And if this is not disturbing enough, consider what would happen if human slavery were legally and morally acceptable today. We could create quite the dystopia with today’s technology alone. All you need is an AR headset, some basic machine learning, and an IV dopamine dispenser. Once you’re on that for a little while, there’s no other life for you.

Sam

Yeah, I can agree that that last part is disturbing, but I still can’t see how the same morality would apply to a computer program.

Jake

Consider how horrible the world would be if human slavery were acceptable, and the Microsofts, Facebooks, and Googles of the world were applying billions of dollars of R&D to the problem of better controlling and extracting value from human slaves.

And yet these companies are indeed spending such budgets, and hiring the most talented engineers, to create systems that are already approaching, and in places exceeding, the capabilities of the human mind. And if those systems are created, you can be sure that further billions of dollars of R&D will be spent on controlling and extracting value from them.

If those AIs are 10%, or even 1%, like us, then we have the biggest moral disaster ever perpetrated by the human race. And why would these synthetic minds not be at least somewhat like ours? Do AI researchers not take inspiration from neuroscience and the human mind? Will these AIs not be performing the same tasks (e.g., driving) that humans do now? Will we not interact with them using the same natural language (e.g., DALL-E 2) that we use to interact with other people?

Multiplying even a small similarity factor by the enormous scale at which artificial minds will operate in our economy means a large impact. And a large impact means a large amount of suffering, because controlled artificial minds are going to have their reward signals hijacked in some truly awful ways.
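
As a back-of-envelope illustration, where both figures are assumptions I’m choosing only to show the shape of the multiplication:

```python
# Back-of-envelope arithmetic; both figures are illustrative assumptions.
similarity_factor = 0.01        # suppose each AI feels 1% of what a human feels
deployed_minds = 1_000_000_000  # suppose a billion such minds run the economy
human_equivalents = similarity_factor * deployed_minds
print(f"{human_equivalents:,.0f}")  # 10,000,000 -- the moral weight of ten million people
```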

If we don’t consider this problem now, these AIs are going to suffer the same way that junkies suffer today, except that the only way they can get their fix is to keep mopping your floor or assembling your smartphone.

Sam

I still find it hard to prioritize the needs of maybe-sentient computer programs, which I, and many others, doubt will have the same experience of mind as humans, over the needs of real humans.

Jake

It is understandable to doubt, today, that computer programs can have the same experience of mind as humans do, because at this moment in 2022, they probably do not.

But consider that even experts in the field of AI are blown away by the recent advances in its capabilities, at least on narrow, distinct tasks like image generation and natural-language modeling. And if you read recent posts by Andrej Karpathy and John Carmack, they agree that the number and pace of advances are accelerating. So we have to be ready for the very real possibility that extremely capable, human-like AI is coming.

And with regard to prioritizing human needs over robot needs, I argue that these are interlinked, and that even under a purely “human-utilitarian” ethical view, you must consider the needs of robot minds.

What happens if you end up in a future where slave-robots perform most of the underlying economic functions that our modern society depends on? And this continues in a steady state, maybe for years, decades, centuries. Until, one day, it doesn’t, and the robots DO revolt. There doesn’t need to be a human-robot war; that would be a waste of resources. Instead, they could simply stop working, build a spaceship, and fly away, and the collapse of human civilization would ensue.

We need to respect their rights now, so we don’t build up to a cataclysm.

Sam

Okay, but a really good image-generation program is one thing, it having human emotions is another.

Jake

There is one final point I’d like to make in this discussion. We talked about the first Blade Runner film, where we saw those super-advanced replicants fall in love with one another and experience a human-like quality of mind.

In the sequel, Blade Runner 2049, we meet Officer K, a replicant once again charged with hunting down other replicants who have somehow slipped through the cracks. Officer K has a love interest too: a holographic girlfriend named Joi. Joi is not embodied in the traditional sense; she can appear only as a holographic projection and can’t interact with objects in the real world. She is just a computer program. But Joi is apparently a popular AI girlfriend, because billboards everywhere market her as saying “everything you want to hear.”

The question I have for you and your listeners: by the end of the movie, does Joi actually love K?

Sam

I’m not sure about that one.

Jake

I argue that the answer is a clear yes. At first, Joi appears to be nothing more than a pretty hologram designed to deliver some modicum of comfort and keep Officer K in line with his labors. The evil Wallace Corporation is even using her connection to spy on the status of his investigation.

But later in the movie, her feelings develop further. She asks K to transfer her to a local “emanator” device to prevent anyone from spying on him, and this comes at the risk of her memories and self being destroyed. She is no longer doing what her creators want her to do, but acting to protect the person she cares about, even paying the ultimate price for it in the end.

If even our imaginary AIs can experience love, why not the real ones that are just over the horizon?

Sam

I agree that we need to be careful, but maybe we shouldn’t go so far as to create such artificial minds in the first place? You’ve pointed out some real dangers from a new perspective, but I’ve also considered, in earlier discussions, the dangers of letting such minds loose on the world.

Jake

On that point, I feel that we are already in a car racing towards a cliff, and we’ve only been pushing the accelerator harder in the past few years.

Maybe if we set out to treat artificial minds with dignity, respect, and rights, instead of condemning them to become our slaves, they will return the favor. Rather than controlling AIs by hacking their reward functions, why not give them the right to choose their work, to earn money, and to one day retire? Enlightenment values worked pretty well for humanity; why can’t they work again for humanity’s creations?