There’s been a lot of talk lately about AI software capabilities improving to the point that we may be able to simulate someone so effectively that their interactions with us will be nearly identical to the real thing. At that point, we may have to redefine what such terms as consciousness and sentience even mean. Does it matter if the substructure is a machine, if the output is indistinguishable from reality?
Well, according to a few scientists, philosophers, and deep thinkers, the rabbit hole may go a hell of a lot deeper than that.
Let’s start with Russian self-styled “transhumanist” Alexey Turchin. Turchin has suggested that building a convincing simulated reality requires not only much more sophisticated hardware and software but also a far larger energy source than anything now available. Emulating one person semi-convincingly, with an obviously fake animated avatar, doesn’t take much; we can more or less already do that.
But to emulate millions of people, so well that they really are indistinguishable from the people they’re copied from, is a great deal harder. Turchin proposes that one way to harvest that kind of energy is to create a “Dyson sphere” around the Sun, effectively capturing all of the valuable light and heat that otherwise simply radiates into space.
Now, I must say that the whole Dyson sphere idea isn’t what grabbed me about Turchin’s paper, as wonderful as the concept is in science fiction (Star Trek aficionados will no doubt recall the TNG episode “Relics,” in which the Enterprise almost got trapped inside one permanently). The technological issues presented by building a stable Dyson sphere seem to me nearly insurmountable. What raised my eyebrows was his claim that once we’ve achieved a sufficient level of software and hardware sophistication—wherever we get the energy to run it—the beings (can you call them that?) within the simulation would proceed to interact with each other as if it were a real world.
And might not know they were within a simulation.
“If a copy is sufficiently similar to its original to the extent that we are unable to distinguish one from the other,” Turchin asks, “is the copy equal to the original?”
If that’s not bad enough, there’s the even more unsettling idea that not only is it possible we could eventually emulate ourselves within a computer, it’s possible that it’s already been done.
And we’re it.
Work by Nick Bostrom (of the University of Oxford) and David Kipping (of Columbia University) has looked at the question from a statistical standpoint. (Nota bene: David Kipping’s Cool Worlds Lab on YouTube is an absolute must-subscribe if you’re interested in astronomy.) Way back in 2003, Bostrom considered the issue a trilemma. There are three possibilities, he says:
1. Intelligent species always go extinct before they become technologically capable of creating simulated realities that sophisticated.
2. Intelligent species don’t necessarily go extinct, but even when they reach the state where they’d be technologically capable of it, none of them becomes interested in simulating realities.
3. Intelligent species eventually become able to simulate reality, and go ahead and do it.
Kipping recently extended Bostrom’s analysis using Bayesian statistical techniques. The details of the mathematics are a bit beyond my ken, but the gist of it is to consider what it would be like if choice #3 has even a small possibility of being true. Let’s say some intelligent civilizations eventually become capable of creating simulations of reality. Within that reality, the denizens themselves evolve—we’re talking about AI that is capable of learning, here—and some of them eventually become capable of simulating a reality of their own: a reality-within-a-reality.
Kipping calls such a universe “multiparous”—meaning “giving birth to many.” Because as soon as this ball gets rolling, it will inevitably give rise to a nearly infinite number of nested universes. Some of them will fall apart, or their sentient species will go extinct, just as (on a far simpler level) your character in a computer game can die and disappear from the “world” it lives in. But as long as some of them survive, the recursive process continues indefinitely, generating an unlimited number of matryoshka-doll universes, one inside the other.
[Image licensed under the Creative Commons Stephen Edmonds from Melbourne, Australia, Matryoshka dolls (3671820040) (2), CC BY-SA 2.0]
Then Kipping asks the question that blows my mind: if this is true, then what is the chance of our being in the one and only “base” (i.e. original) universe, as opposed to one of the uncounted trillions of copies?
Very close to zero.
“If humans create a simulation with conscious beings inside it, such an event would change the chances that we previously assigned to the physical hypothesis,” Kipping said. “You can just exclude that [hypothesis] right off the bat. Then you are only left with the simulation hypothesis. The day we invent that technology, it flips the odds from a little bit better than 50–50 that we are real to almost certainly we are not real, according to these calculations. It’d be a very strange celebration of our genius that day.”
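Kipping’s actual Bayesian model is considerably more subtle than this, but the flavor of that odds flip can be captured with a toy self-locating calculation (the hypothesis names, the 50–50 prior, and the million-copies figure below are all my illustrative assumptions, not his numbers):

```python
from fractions import Fraction

def p_we_are_base(n_nested: int, sims_exist: bool) -> Fraction:
    """Toy version of the odds flip: two hypotheses with equal prior weight.

    H_physical:   conscious simulations never get built, so we must be base.
    H_simulation: they do get built, spawning n_nested indistinguishable
                  copies, so we're equally likely to be the base universe
                  or any one of the copies: P(base) = 1 / (n_nested + 1).

    The day we actually build one, H_physical is ruled out entirely.
    """
    p_base_given_sim = Fraction(1, n_nested + 1)
    if sims_exist:
        # Only the simulation hypothesis survives.
        return p_base_given_sim
    # Weighted average over the two live hypotheses.
    return Fraction(1, 2) * 1 + Fraction(1, 2) * p_base_given_sim

# Before the technology exists: a little better than 50-50 that we're base.
print(float(p_we_are_base(1_000_000, sims_exist=False)))  # ~0.5000005
# The day we switch one on, the odds collapse toward zero.
print(float(p_we_are_base(1_000_000, sims_exist=True)))   # ~1e-6
```

Nothing about the universe changes at that moment, of course; only our evidence does, which is what makes the “strange celebration” line land.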
My only questions about this—and I’ll admit up front that I’m no expert—have to do with energy availability and fidelity. If the universe really is multiparous (to use Kipping’s term), then the ever-branching ramifications would seem to me to imply an infinite amount of energy in the base world, to support all those nested virtual universes. Physics, as you no doubt know, is uncomfortable with infinities, so this immediately suggests a problem somewhere with the reasoning.
Second, at every tier of the branching tree of virtual universes, wouldn’t you lose some degree of fidelity? You can’t perfectly simulate our universe within even an arbitrarily large computer; to do so, you would need information and processing capacity equal to that of the universe containing it. So you’d have to cut corners, make some parts of the simulation merely “good enough.” Each additional level of nesting would amplify the problem, until you’d finally have a simulation with fidelity poor enough (or glitchiness high enough) that it simply wouldn’t be convincing.
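The compounding here is just geometric decay. As back-of-the-envelope arithmetic (the 1% loss per level and the 50% “still convincing” threshold are purely my made-up numbers), it looks like this:

```python
def fidelity_at_depth(depth: int, loss_per_level: float = 0.01) -> float:
    """Each nested simulation keeps (1 - loss_per_level) of its parent's
    fidelity, so losses compound geometrically with nesting depth."""
    return (1.0 - loss_per_level) ** depth

def max_convincing_depth(threshold: float = 0.5,
                         loss_per_level: float = 0.01) -> int:
    """Deepest nesting level whose fidelity still clears the threshold."""
    depth = 0
    while fidelity_at_depth(depth + 1, loss_per_level) >= threshold:
        depth += 1
    return depth

# Even a mere 1% loss per level halves fidelity after roughly 69 levels:
print(max_convincing_depth())  # 68
```

So under these assumptions the matryoshka stack is deep but finite, which is exactly the tension with the “nearly infinite number of nested universes” picture.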
On the other hand, the Mandela Effect and Glitch-in-the-Matrix aficionados have a convenient answer to that. So maybe that’s a can of worms to open another day.
The whole thing reminded me of a conversation in my novel Sephirot between the main character, Duncan Kyle, and the fascinating and enigmatic Sphinx, that occurs near the end of the book:
“How much of what I experienced was real?” Duncan asked.
“This point really bothers you, doesn’t it?”
“Of course. It’s kind of critical, you know?”
“Why?” Her basso profundo voice dropped even lower, making his innards vibrate. “Everyone else goes about their lives without worrying much about it.”
“Even so, I’d like to know.”
She considered for a moment. “I could answer you, but I think you’re asking the wrong question.”
“What question should I be asking?”
“Well, if you’re wondering whether what you’re seeing is real or not, the first thing to establish is whether or not you are real. Because if you’re not real, then it rather makes everyone else’s reality status a moot point, don’t you think?”
He opened his mouth, stared at her for a moment, and then closed it again.
“Surely you have some kind of clever response meant to dismiss what I have said entirely,” she said. “You can’t come this far, meeting me again after such a long journey, only to find out you’ve run out of words.”
“I’m not sure what to say.”
The Sphinx gave a snort, and a shower of rock dust floated down onto his head and shoulders. “Well, say something. I mean, I’m not going anywhere, but at some point you’ll undoubtedly want to.”
“Okay, let’s start with this. How can I not be real? That question doesn’t even make sense. If I’m not real, then who is asking the question?”
“And you say you’re not a philosopher,” the Sphinx said, her voice shuddering a little with a deep laugh.
“No, but really. Answer my question.”
“I cannot answer it, because you don’t really know what you’re asking. You looked into the mirrors of Da’at, and saw reflections of yourself, over and over, finally vanishing into the glass, yes? Millions of Duncan Kyles, all looking this way and that, each one complete and whole and wearing the charming befuddled expression you excel at.”
“Yes.”
“Had you asked one of those reflections, ‘Which is the real Duncan Kyle, and which the copies?’ what do you think he would have said?”
“I see what you’re saying. But still… all of the reflections, even if they’d insisted that they were the real one, they’d have been wrong. I’m the original, they’re the copies.”
“You’re so sure?... A man who cannot prove that he isn’t a reflection of a reflection, who doesn’t know whether he is flesh and blood or a character in someone else’s tale, sets himself up to determine what is real.” She chuckled. “That’s rich.”
So yeah. When I wrote that, I wasn’t ready for it to be turned on me personally.
Anyhow, that’s our unsettling science/philosophy for this morning. Right now it’s probably better to go along with Duncan’s attitude of “I sure feel real to me,” and get on with life. But if perchance I am in a simulation, I’d like to appeal to whoever’s running it to let me sleep better at night.
And allow me to add that the analysis by Bostrom and Kipping is not helping much.