|This'll be relevant in a few paragraphs, I swear.|
I am discovering that when Eliezer Yudkowsky, the author of Harry Potter and the Methods of Rationality and LessWrong1, tells me I should be scared of something, there are actually two levels of terror that I have to access. This is because it's not difficult for me to distance myself from problems of AI--after all, the likelihood that I'm going to be designing a pet-friendly artificial intelligence in my basement is pretty slim. So, when he says "I don't talk about this idea, because most people are too frightened by it to react with the proper curiosity and interest," I can easily pick curiosity, because I've got nothing on the line.
I have to get to a state where I can actually be legitimately frightened--where I have chips in the game. Otherwise, all I'm doing is disguising my flight from the problem as intellectual curiosity. It is very easy for me to say, "Wow, what an interesting problem," then immediately put the problem out of my mind. It would look like I'm reacting appropriately to something scary, but really I'm just disengaging.
This article is a very good example of that, actually. The basic gist is: if we create an AI, it might want to study humans. And the way you study things is frequently to make better and better models of your subject.
So, what happens if the AI accidentally creates models of humans so good that they become sapient?
And then what happens if the AI decides to start deleting old backup copies of these sapient simulations?
It's an intriguing thought that a lot of AI researchers, according to Yudkowsky, anyway, would handwave out of existence--they would say the problem would take care of itself, because the AI will be smart enough to recognize what was happening and keep it from happening, or that certain limitations would naturally prevent the creation of fully simulated consciousnesses. Of course, there's no way of really knowing that ahead of time, and I'm not sure how an AI would actually recognize that it was creating sentient cyberhumans while it's still in the process of figuring out how sentient humans work. And once it has them, they're there, and both the AI and humanity have to figure out what to do with a bunch of simulated beings trapped within the mind of another artificial being.
Which, yeah, I can see how that would be a problem, but not for me personally, right? I'm not an AI researcher. I'm pretty sure that for us artists and writers there's not a lot to worry about. After all, we don't have to deal with the hard realities of AI; we can comfortably speculate and fantasize about the intriguing future that awaits us without worrying too much about solving the problems ourselves. We're never going to get so exact a fictional simulation that our own creations start thinking for themselves! And besides, artists are smart, we'll know if that's what's happening and stop ourselves before we go too far. There are just fundamental limitations to our simulations that would prevent the creation of an actual secondary consciousness in our own minds.
Why does that sound familiar?
There's a story I remember reading as a child (which Google tells me was probably "Liang and the Magic Paintbrush"), a picture book about a boy who can paint pictures so real they spring to life, and so he deliberately paints flaws in his form. The Emperor hears tell of the boy's strange powers and commissions the artist to paint a great dragon. The boy does, but leaves one eye unfinished, blank.
The emperor doesn't like this.
You can probably imagine what kind of ending the story has. It's not a happy ending.
I didn't really understand this story as a child, and I'm not sure I quite grasp the intended metaphor now, but boy, I can think of a pretty intriguing new reading.
Think about it like this:
As artists (used here to include writers, dancers, &c.--creators of aesthetic works) we often simulate characters, audiences, Ideal Readers, even semi-abstracted emotional ideas as part of our works. I think this is true even of abstract artists--expressionists, poets, dancers, maybe even chefs--albeit to a lesser extent than for realists. There's still a mental model of audience and experience that you're trying to convey--a simulation that attempts to accurately map behavior.
In the most extreme cases of this modeling, we have writers discussing their characters in self-determining terms. The character does, in essence, what it wants and the writer is along for the ride. Which isn't to say the simulation has free will. Think of it in terms of the classic philosophical problem of omniscience: because we are an omniscient observer, we know what the characters would do based on our modeling of their personality, and so while the characters aren't literally walking around making decisions, we see the path that they would weave through a fictional narrative.
Basically, although ultimately I (or more accurately, my mental simulation) am winding the characters up and noting what paths they naturally wobble along due to the particular physics of their setting and personality, they still feel quite real. So intense is this experience that I personally have a lot of trouble subjecting characters to pain, because I feel too much empathy for these simulations, despite the fact that they don't have subjective experiences.
At least, they don't yet.
There's going to be a point in possibly the very near future when we start actually augmenting our intelligence. How long do you think it will be before we start simulating simple people--actual subjectively aware life forms--within our own swelled heads?
If you are an artist, you should be feeling sheer terror right now. Imagine what it will be like to write stories or draw portraits when you might accidentally create a real being just by thinking too hard about your subject.
You will essentially have become mentally pregnant with a fully grown adult that cannot escape the confines of your mind.
Oh, but it gets worse!
See, there's nothing currently that says a sociopath can't be an artist, and that a sociopathic artist can't get the same kind of brain augmentation that the rest of us can.
Ever wanted to just... blow up the world? Well in the future, you might be able to blow up fully realized simulated worlds with sentient beings--genocide as stress relief.
It's enough to make you give up art forever... or give up augmentation.
But that's a path I don't really find interesting or productive. The benefits of upgrading everyone's brains are just too damn weighty to be counterbalanced by this totally hypothetical, fictional, and possibly straight up idiotic media theorist's fears. Remember, this isn't my field. I could be totally off base here--dreaming up nightmares that could never manifest in real life.
No, this isn't a problem we can run from, as alarming as it is. Maybe the solution is to put hard limits in our own brains along the lines that Yudkowsky suggests for sentient AI--something that can recognize when a being might be created and stop it from being created. We need to leave flaws in our form so that the dragon doesn't spring to life. That seems like, at the very least, a useful metaphor for describing the problem. And really, one of the lessons of that magic paintbrush tale is that art can and perhaps even should accept flaws. Remember, artists are liars, and art derives the greater part of its power from lies--sometimes lies as simple as the careful manipulation of a shadow, or a single unfinished eye.
How do we set up those limits? Hell if I know. But it's something we're going to have to worry about in the future, I think. And in the meantime I'll be thinking very carefully before killing off any fictional characters.
After all, for all I know I may already have blood on my hands.
Circle me on Google+ at gplus.to/SamKeeper. As always, you can e-mail me at KeeperofManyNames@gmail.com. If you liked this piece please share it on Facebook, Google+, Twitter, Reddit, Equestria Daily, Xanga, MySpace, or whathaveyou, and leave some thoughts in the comments below.
1 I can never quite figure out whether LessWrong is an identity, a collective, or just a website full of articles. It might be all three, and it seems to be used differently in different situations. Fucking transhumanists.