The Worst Filing System Known To Humans



Tuesday, March 19, 2013

AI and the Magic Paintbrush

This'll be relevant in a few paragraphs, I swear.

I am discovering that when Eliezer Yudkowsky, the author of Harry Potter and the Methods of Rationality and LessWrong1, tells me I should be scared of something, there are actually two levels of terror that I have to access. This is because it's not difficult for me to distance myself from problems of AI--after all, the likelihood that I'm going to be designing a pet friendly artificial intelligence in my basement is pretty slim. So, when he says "I don't talk about this idea, because most people are too frightened by it to react with the proper curiosity and interest," I can easily pick curiosity, because I've got nothing on the line.

I have to get to a state where I can actually be legitimately frightened--where I have chips in the game. Otherwise, all I'm doing is masking the act of fleeing from a problem in the guise of intellectual curiosity. It is very easy for me to say, "Wow, what an interesting problem," then immediately put the problem out of my mind. It would look like I'm reacting appropriately to something scary, but really I'm just disengaging.

This article is a very good example of that, actually. The basic gist is: if we create an AI, it might want to study humans. And the way you study things is frequently to make better and better models of your subject.

So, what happens if the AI accidentally creates models of humans so good that they become sapient?

And then what happens if the AI decides to start deleting old backup copies of these sapient simulations?

It's an intriguing thought that a lot of AI researchers, according to Yudkowsky anyway, would handwave out of existence--they would say the problem takes care of itself, because the AI will be smart enough to recognize what's happening and prevent it, or that certain limitations would naturally prevent the creation of fully simulated consciousnesses. Of course, there's no way of really knowing that ahead of time, and I'm not sure how an AI would actually recognize that it was creating sentient cyberhumans while it's still in the process of figuring out how sentient humans work. And once it has them, they're there, and both the AI and humanity have to figure out what to do with a bunch of simulated beings trapped within the mind of another artificial being.

Which, yeah, I can see how that would be a problem, but not for me personally, right? I'm not an AI researcher. I'm pretty sure that for us artists and writers there's not a lot to worry about. After all, we don't have to deal with the hard realities of AI; we can comfortably speculate and fantasize about the intriguing future that awaits us without worrying too much about solving the problems ourselves. We're never going to get so exact a fictional simulation that our own creations start thinking for themselves! And besides, artists are smart; we'll know if that's what's happening and stop ourselves before we go too far. There are just fundamental limitations to our simulations that would prevent the creation of an actual secondary consciousness in our own minds.


Why does that sound familiar?

There's a story I remember reading as a child (which Google tells me was probably "Liang and the Magic Paintbrush"), a picture book about a boy who can paint pictures so real they spring to life, and so he deliberately paints flaws in his form. The Emperor hears tell of the boy's strange powers and commissions the artist to paint a great dragon. The boy does, but leaves one eye unfinished, blank.

The emperor doesn't like this.

You can probably imagine what kind of ending the story has. It's not a happy ending.

I didn't really understand this story as a child, and I'm not sure I quite grasp the intended metaphor now, but boy, I can think of a pretty intriguing new reading.

Think about it like this:

As artists (used here to include writers, dancers, &c.--creators of aesthetic works) we often simulate characters, audiences, Ideal Readers, even semi-abstracted emotional ideas as part of our works. I think this is true even of abstract artists--expressionists, poets, dancers, maybe even chefs--albeit to a lesser extent than for realists. There's still a mental model of audience and experience that you're trying to convey--a simulation that attempts to accurately map behavior.

In the most extreme cases of this modeling, we have writers discussing their characters in self-determining terms. The character does, in essence, what it wants and the writer is along for the ride. Which isn't to say the simulation has free will. Think of it in terms of the classic philosophical problem of omniscience: because we are an omniscient observer, we know what the characters would do based on our modeling of their personality, and so while the characters aren't literally walking around making decisions, we see the path that they would weave through a fictional narrative.

Basically, although ultimately I (or, more accurately, my mental simulation) am winding the characters up and noting what paths they naturally wobble along due to the particular physics of their setting and personality, they still feel quite real. So intense is this experience that I personally have a lot of trouble subjecting characters to pain, because I feel too much empathy for these simulations, despite the fact that they don't have subjective experiences.

At least, they don't yet.

There's going to be a point, possibly in the very near future, when we start actually augmenting our intelligence. How long do you think it will be before we start simulating simple people--actual subjectively aware life forms--within our own swelled heads?

If you are an artist, you should be feeling sheer terror right now. Imagine what it will be like to write stories or draw portraits when you might accidentally create a real being just by thinking too hard about your subject.

You will essentially have become mentally pregnant with a fully grown adult that cannot escape the confines of your mind.

Oh, but it gets worse!

See, there's nothing currently that says a sociopath can't be an artist, and that a sociopathic artist can't get the same kind of brain augmentation that the rest of us can.

Ever wanted to just... blow up the world? Well in the future, you might be able to blow up fully realized simulated worlds with sentient beings--genocide as stress relief.

It's enough to make you give up art forever... or give up augmentation.

But that's a path I don't really find interesting or productive. The benefits of upgrading everyone's brains are just too damn weighty to be counterbalanced by this totally hypothetical, fictional, and possibly straight up idiotic media theorist's fears. Remember, this isn't my field. I could be totally off base here--dreaming up nightmares that could never manifest in real life.

No, this isn't a problem we can run from, as alarming as it is. Maybe the solution is to put hard limits in our own brains along the lines Yudkowsky suggests for sentient AI--something that can recognize when a being might be created and stop it. We need to leave flaws in our form so that the dragon doesn't spring to life. That seems like, at the very least, a useful metaphor for describing the problem. And really, one of the lessons of that magic paintbrush tale is that art can, and perhaps even should, accept flaws. Remember, artists are liars, and art derives the greater part of its power from lies--sometimes lies as simple as the careful manipulation of a shadow, or a single unfinished eye.

How do we set up those limits? Hell if I know. But it's something we're going to have to worry about in the future, I think. And in the meantime I'll be thinking very carefully before killing off any fictional characters.

After all, for all I know I may already have blood on my hands.


1 I can never quite figure out whether LessWrong is an identity, a collective, or just a website full of articles. It might be all three, and it seems to be used differently in different situations. Fucking transhumanists.


  1. As a practical matter, the artist's variation seems to be much less of a problem than the one for AI.

    Humans already simulate other humans for more mundane reasons, but for obvious reasons we don't have room in our heads to spin up a full instance of Virtual Human 1.0. Instead, we use the magic black box in our brains that we label "empathy", which is only possible because all humans are wired with basically the same brain plan. Thanks to some recent upgrades to Social Ape 0.9 Beta, our "empathy" black box is able to timeshare our brain hardware with simulated people... who are really just alternative versions of ourselves that receive different backgrounds and beliefs when we roleplay them. Huh.

    To me it all seems much different from "AI so powerful it can fully simulate a mind of a totally different design than its own". Unlike the AI, we share hardware with our simulations, so we co-experience the thoughts and emotions of our roleplayed brainchildren. If they felt terror or dread at the thought of ceasing to exist, we would feel it; whatever our friends might feel about dying in real life, the empathized copies of our friends don't seem to mind painlessly ceasing to exist.

    1. It's not a magic black box, it's an entire set of "mirror neurons".

    2. Those of you in this subthread (at least) might be interested in the phenomenon of "tulpas" (easily googleable), whose proponents can't seem to decide whether they're self-inducing DID, creating new sentient beings, or just fucking around with their mirror neurons when they conjure up new people in their heads to play around with. And they do it anyway.

      The metaphysical types are very creepy, but most of them seem at least nice to talk to, if rather immature.

  2. I remember reading a similar tale. It was about an artist who was commissioned by a lord to paint a dragon, even though only members of the royal family could own such a painting. The artist responded by painting the dragon on the ground rather than flying ("Perhaps he will see how his pride weighs him down") and without eyes. Enraged, the lord took his own brush and painted in eyes. The dragon then peeled itself from the paper and destroyed the lord's house.
    I believe the author said it was based on a Chinese folktale. I could be mistaken.

  3. Have you gotten to Roko's Basilisk yet? There are some elements in the LessWrong community who take thought experiments a little too seriously, with kind of screwed-up results.

    (You can google it, but I'll give you a hint. I won't describe it in detail in case LessWrong users come here and decide I need to die for the greater good or something ;-). It has to do with personality simulation by a strongly superhuman AI, it's recursive, and it leads to both self-censoring and actual censorship of the LessWrong site)

    1. Yesss, yeeeeessss. Keep spamming Roko's Basilisk EVERYWHERE until Roko himself gets really pissed off and mugs you in a back alley.

      Nawww, just keep spamming it everywhere because Roko's zealotry is kind of obnoxious.

