The Meaning of Life in the Metaverse (with David Chalmers)
NYU philosophy professor David Chalmers discusses what the metaverse might look like, and the moral quandaries it could pose.
We could soon be living more of our lives in immersive virtual worlds, but what will that look like and how will it affect us? New York University professor of philosophy and neural science David Chalmers joins Azeem Azhar to discuss what the metaverse might offer us, the moral quandaries it could pose, and what our rights there might look like.
They also address:
- What the consequences of wrongdoing in the metaverse might be.
- How governance and society could develop in virtual worlds.
- The philosophical precedents for virtual worlds.
Further resources:
Azeem’s 2022 Trends: Web 3.0, Sci-Fi Tech, and the Metaverse – Exponential View podcast, 2022
What Studying Consciousness Can Reveal about AI and the Metaverse (with Anil Seth) – Exponential View podcast, 2022
Reality+, by David Chalmers
AZEEM AZHAR: Welcome to Exponential View with me, Azeem Azhar. The world is changing at an amazing pace. We’re entering the exponential age propelled by radical, remarkable technologies. And on this podcast, I want to explore the themes, topics, and questions that will help you make sense of this change. Now, one of the most mind-bending and divisive topics in technology at the moment is the metaverse, the idea of a fully immersive virtual world in which we might come to spend more and more of our time. Big tech companies and startups alike are working to build the tech that underpins these worlds. But what will the metaverse end up looking like and who will decide how it works? How would spending more time in increasingly realistic simulations affect the way we live, affect who we are, and affect how we behave? Today’s guest has devoted a lot of time to those questions and has a provocative thesis. Virtual realities will eventually be indistinguishable from what most of us think of as the real world and will allow us to live fulfilling, happy lives. David Chalmers, welcome to Exponential View.
DAVID CHALMERS: Thanks Azeem. It’s great to be talking to you.
AZEEM AZHAR: David, you are, I suspect, up until this book best known for coining the idea of the hard problem of consciousness. You were originally a mathematician, then a philosopher of mind. You were a rock musician for a little while as well. But what brings a nice philosopher of mind like you into the mucky, grubby world of the metaverse and virtual reality?
DAVID CHALMERS: Well, these topics actually have a long history in philosophy. As a philosopher, I’m just fascinated by the mind and its relationship to the world. One pole of that, if you like, is consciousness. The human mind, subjective experience, what it’s like to be us. The other pole of that is reality, the physical world out there. And philosophy just asks questions like how can a mind know about the world? How can we know anything about reality? Now, suddenly, with virtual reality technology, we have this whole new class of realities. It’s not just good, old, physical reality anymore. We have artificial realities that we can kind of cook up ourselves on a computer and then go and inhabit. It’s like it’s a whole new class of realities. That’s just fascinating for a philosopher to think about and come to grips with.
AZEEM AZHAR: The timing for your work now, is that based on observing the pace of technology and seeing the richness with which we can now deliver these realities has got to a sort of turning point where these questions become even more present? That they’re getting closer and closer?
DAVID CHALMERS: Yeah. In a way these questions have moved from philosophy, to science fiction, to science, to technology. I actually got into this years ago when The Matrix movie came out. Just a wonderful illustration of the idea that we could somehow be inhabiting a virtual world. At that point it was science fiction, of course, but virtual reality technology's been around for a while. The more general ideas here have arguably been around for centuries in philosophy. The French philosopher René Descartes said, "How do you know that you're not dreaming right now? How do you know that an evil demon isn't producing sensations of an external world like this when none of it is real?" So the modern version of that, posed in, say, movies like The Matrix, is: how do you know you're not in a computer simulation? Here we are, 20 years after The Matrix, and the technology for constructing virtual worlds like The Matrix is very much well on the way. These last 5 or 10 years have seen so many advances in VR technology. These ancient philosophical questions have suddenly moved away from science fiction into the realm of fact. We have to worry about them.
AZEEM AZHAR: I find it quite fascinating, looking at the history of these technologies, that even before they were immersive, even before they were Matrix-like (which even wearing an Oculus headset is not, by any means), you could get lost, in a way, in a computer-generated world. Even those text-based multi-user dungeons, as they were called, that you'd find on the internet in the early '80s could suck someone in for hours at a time. I mean, what does that tell us about either the human imagination or the human susceptibility to want to be able to navigate spaces that are distinct from the ones that we physically inhabit?
DAVID CHALMERS: My own first virtual world was actually one of those text-based virtual worlds, Colossal Cave Adventure. Back in 1976, when I was 10 years old, I just came across it on a computer at my father's workplace. And as you said, I got sucked in. It was one of these text-based worlds: a long, complicated system of caves underground. You could go north, you could go south. You could pick up treasures. You could fight trolls. And yeah, I mapped out this whole system of caves. It felt immersive and it was totally addictive. And I think we've certainly found over the years that video games and many other virtual worlds just draw people in. Maybe the human mind has a capacity to want to understand and explore reality. And it turns out that doesn't just apply to physical reality. Okay, we all love exploring a new city or a new country, but when we're faced with a new virtual world, that instinct, that quest to understand reality, just takes over.
AZEEM AZHAR: It is fascinating to build from your first example of Colossal Cave Adventure through to environments like the Whole Earth 'Lectronic Link, or the WELL, which was one of the first online communities. And then you have people living in AOL. And then I think one of the most interesting was Second Life, because Second Life was, I think, one of the first examples where you tried to construct a 3D space that was free of a game designer's narrative. We had had increasingly rich virtual worlds up to that point. I remember the Ultima series of games, which I'm not sure if you ever played. They were pretty wonderful. There was this world-building that went on, but ultimately it was still narrated within the frame of a game designer. And along comes Second Life, and it's one of the first examples where there are really no limits to what can be built and where you can go. So it's sort of reminiscent of those self-generating, text-based games, but with avatars and graphics and sound. Do you think that marks a sort of distinction, that sense of the increasing richness, but also the ability to be an author within the world?
DAVID CHALMERS: Yeah, I do think that Second Life is distinctive and important. It was the first really popular virtual world that wasn't a game world. Almost all virtual worlds that kids are familiar with these days come from video games. I mean, there was a bit of a history before Second Life. Some of these multi-user dungeons, as you said, were used just for social purposes without game purposes. But Second Life put this on steroids: a massive virtual world that people could go into not to win a game, but to make friends, build relationships, build communities, maybe make some money, sell some products, and so on. Many of the things you actually do in the physical world. In a way, that's one of the ideas that's supposed to be distinctive of the metaverse. The metaverse is all about using virtual worlds for a vast array of ordinary human activities. And that was beginning to happen already in Second Life back in 2007, I guess, which was its high point. It still exists today, of course. Now, Second Life is not virtual reality. It's not immersive. You don't experience it in 3D. But I think it is an important way station on the road to the metaverse.
AZEEM AZHAR: But if we go back to that core contention of yours, which is that within these virtual worlds we will be able to live moral lives, and have lives where we can raise questions of virtue and good behavior and socialization. Do you think that is dependent on a certain level of technological capability for that question to be relevant? Or do you think we can explore those questions even at a point where the technology hasn't reached that level of sophistication?
DAVID CHALMERS: We can explore those questions at every point. You could explore them for the multi-user Dungeons back in the 1990s. And yeah, by the time you get to Second Life, you already see people living quite rich, at least parts of their lives in Second Life. For some people, the communities they built in Second Life were and are very important to them. And you could already raise the question are these meaningful and valuable activities for these people or are they just escapism? And I think already many users of virtual worlds like Second Life would say, “No. This is far from just escapism. This is a very important part of my life.” I do think as the technology advances, one will become more and more intimately connected to these artificial worlds in ways more and more analogous to the ways we’re connected to the physical world. So I think VR technology has already put us further down that track. It’s limited, still, though in various ways. One’s sense of one’s body, for example, is very limited.
AZEEM AZHAR: Right. What I’d love to do is unpeel that onion a little bit. But before we do that, let’s return to the thesis because the distinguishing feature, I think, is that you argue there’s no essential difference between biological reality and a putative artificial reality. So just for the broad listener, could you just summarize that argument briefly?
DAVID CHALMERS: Yeah. So my central thesis is virtual reality is genuine reality. It may not be exactly the same as physical reality, but I want to argue that in principle it’s on a par with physical reality. It’s just as real and it can in principle be just as good. There’s a long tradition of thinking that virtual reality has to be a kind of illusion or a hallucination or a fiction, that it’s somehow not really real. And that’s what I want to resist.
AZEEM AZHAR: There seems to be a connection to this idea of the simulation thesis, the sense that we might be living in a simulated world. But I got the sense that your argument in Reality+, which is your book, is not dependent on believing in the simulation hypothesis. You're making an argument that says if we can simulate well enough within virtual reality, we can create a reality in which people can live as good a life as they could in a biological reality.
DAVID CHALMERS: That’s right. I mean the extreme version of the thesis says something like our physical reality is already a virtual reality. The simulation hypothesis would be a version of that. That’s the thesis that we’re already living in a computer simulation. I take that seriously. I think we can’t rule that out. We can’t disprove it. There’s some chance it could be true. But you don’t need to believe the simulation hypothesis to take on board these thoughts about virtual reality as a technology that we’re using. The simulation hypothesis is in the realm of science fiction, but virtual reality technology is something we’re constructing today. So I sometimes think of this as from The Matrix to the metaverse. The Matrix is the extreme science fiction scenario, which is so much fun and so enlightening to consider. In many respects, it’s kind of the pure case of a virtual reality. But the metaverse is the technology we’re actually constructing, where suddenly the rubber hits the road and these questions become of practical importance. Are these genuine realities or illusions? Can we actually live a decent life there?
AZEEM AZHAR: What does a metaverse need to have? What are the attributes it should have in order to be a reality in which we can live sort of pari passu with a biological reality? I mean, there are some obvious things, like I have to be able to see things at the same resolution and fidelity. I have to be able to feel things and smell things and feel heat and feel the location of my limbs in space. But it strikes me there are other things required for this virtual reality to have the affordances of something in which I could be a moral agent.
DAVID CHALMERS: Again, we can look for the ideal case, a virtual world in which one can do everything that one can do in physical reality. Right now, how do you eat and drink in a virtual world? It's not really something you can do. How do you have sex in a virtual world? Well, there are ways, but they are quite limited compared to physical sex. Right now there are some respects in which physical realities and physical bodies just far outstrip what virtual realities and virtual bodies can do. We could look, though, towards 50 or 100 years in the future, once we've really developed virtual reality technology, so that maybe we have brain-computer interfaces and these virtual worlds directly interact with our body representations in the brain, so that we can have all kinds of amazing bodily experiences, whether or not they're actually happening with one's physical body. We already have digital bodies in virtual reality avatars. Right now they're more primitive than physical bodies, but they're getting better all the time.
AZEEM AZHAR: It strikes me that there are some other things that we need in this sort of virtual world. You need to be able to have other people within that world. And those people either have to be other people who are simultaneously in the virtual world with you, or they need to be manifestations of real people or programmed entities. And if they're programmed entities, they need to be able to behave a little bit like real people, unless they identify themselves as the Alexa inside my virtual world. And I guess we also need to have a world that isn't designed like a game. So it's a little bit more like a Second Life or a Minecraft than an EverQuest or an Ultima Online or a FIFA 21. And I suppose there also then need to be aspects of that virtual world that are persistent, that allow it to evolve and change. Does there need to be physics? Do there need to be dynamics and economics within a virtual world for us to be able to express ourselves as those moral agents?
DAVID CHALMERS: Yeah, I think all of those things are super important. And probably the most important thing is the first thing you mentioned, namely other people. And this kind of brings us back to consciousness. We are conscious beings. Much of the value in our lives comes from consciousness and from the connection to other people's consciousness. I mean, the good news is that virtual worlds are actually quite well designed for that. Multiplayer virtual worlds were huge with video games, and multi-user virtual worlds have been very big ever since those multi-user dungeons, and a world like Second Life is constructed from the start to be social, so you can have those meaningful interactions. But also, what else do we get meaning from? We get meaning from having serious goals and projects, things we can overcome, things we can achieve. Things we really care about: maybe, say, the struggle against oppression, or the struggle to build communities, to build cool things, or to build some enterprise. That's hard to do in a video game, and I think of gaming as a limited kind of project here. It's more tied to escapism than to the central path of our lives. We need central projects here that we really care about.
AZEEM AZHAR: That idea of a community coming together or having a common goal is strongly analogous to other things we see in less rich virtual environments, like Wikipedia, or open source software projects, or people flocking together around a crowdfunding campaign. That is, people who are only interacting with each other through the virtual space, a computer-mediated environment, on something they actually care about, where there are behavioral norms and standards and there's a goal. We already see that happen in millions of instances across the internet.
DAVID CHALMERS: It's true. On the other hand, it's also limited. As a university professor, I've taught a lot over the last two years. I've taught over Zoom and it's fine. I can interact with students and so on, but it just feels so limited, in part because we're not really sharing a common physical space. At a certain point, I got to go back into the classroom and actually be in the same space as my students, and there's something just very central about that.
AZEEM AZHAR: So what is it about us being in a common physical space that changes the value we get from it or changes the way in which we behave?
DAVID CHALMERS: I think some part of it is just that we are very deeply spatial creatures. We are somehow designed to live in space, to share space with other people. Conversation naturally happens between multiple people in a space with eye contact. We're also embodied creatures, and as embodied creatures, we're designed to share space, whether it's through touch or even just sight and social interaction. You're right that the internet alone already got you a long way towards digital communities and digital relationships. But I think what it lacked is both embodiment and the sharing of a space, as well as the sheer immersion you get from inhabiting a genuine three-dimensional reality, which is somehow what we as human beings are designed for.
AZEEM AZHAR: So I'm curious about how you think we might behave. I mean, one of the things that we saw in Second Life was that people would behave differently. They would dress themselves differently. They would appear differently. They would create differently from how they might express themselves in the real world, because it had different limits. And you even see this in a rather more closed-ended, but still relatively constructive game like Fortnite, where people take on digital skins and present themselves in different ways. When you think about Reality+, how are we going to relate our biological identities to the identities that we carry in a metaverse?
DAVID CHALMERS: Yeah. I think there's a vast number of different ways to express identity in the metaverse. When I go into virtual worlds and build an avatar, most of the time they're pretty boring. They look a bit like me. They've got maybe something like a black jacket and some long gray hair, whatever. But other people go for all kinds of big changes in expression. Sometimes people express themselves with what appears to be a different gender. For a lot of people who are exploring, say, their gender identity in the physical world, experimenting with different gender expressions in the virtual world is often a path towards coming to grips with their identity. Some people actually maintain somewhat different identities in different worlds, different personas. Maybe that's continuous with what happens in the physical world, where we might express ourselves one way with our family and another way at work and another way with our friends. But I think as virtual worlds become more and more ubiquitous in our lives, we're just going to see many, many more ways of expressing identity.
AZEEM AZHAR: How do we go about designing pain in a virtual world?
DAVID CHALMERS: Pain is valuable. It serves a certain role for us, signaling when we're in danger or in bad conditions where we want to remove ourselves from the situation. On the other hand, there are forms of pain which seem to be much worse than they need to be, most notably chronic pain, which doesn't seem to serve a useful function. So at the very least, I think inside or outside of the virtual world, I would hope to be able to re-engineer the brain so you only get pain signals when they matter, and not out of proportion to how much they matter. And I don't know to what extent that's a question about virtual worlds rather than just a question about how we want to design our cognitive systems.
AZEEM AZHAR: When you look at a metaverse or a virtual reality, should people be allowed to commit crimes? Should they be allowed to behave badly? Exact violence on another person within that virtual world?
DAVID CHALMERS: This question has arisen already any number of times in virtual worlds. Even back in the multi-user dungeons of the 1990s, there was a case of a kind of virtual sexual assault, where one character was harassing and even quasi-assaulting another user's virtual body. This was quite traumatic for that user and for many people within that virtual world. The perpetrator was kicked out of the virtual world and punished as best the community could manage. One gets similar incidents even in virtual worlds of today. Facebook's Horizon Worlds just in recent weeks has had reports of harassment and bodily threat. They responded by introducing a new feature: a four-foot sphere around every user's body, such that it's impossible for other users to come into direct contact with their bodies. I mean, it's optional and people can turn it off, but at least by default that kind of protection is there. So I do think that if I'm right that virtual reality is genuine reality, and that what happens there is meaningful, we have to treat what happens in virtual worlds in some ways continuously with, and eventually on a par with, the way we treat similar things that happen in physical reality. Right now, I don't know that the law always does that, but I do think our legal system is going to have to catch up.
AZEEM AZHAR: I mean, I've seen on Minecraft with my kids there have been issues in classes where someone has gone and damaged someone else's Minecraft creation. And of course that does lead to a real-world sanction, because labor and effort did go into building this thing in the virtual space. So I see how that can flow back from the virtual into the sort of real or biological space. What I was really thinking about is that these virtual worlds are going to be constructed. They're going to be built by people, or more likely by people under the managerial direction of corporations. So should the designers or the operators design these worlds such that they do have the capabilities within them, the affordances, that allow the members of these virtual realities to assault each other, or murder each other, or commit crimes?
DAVID CHALMERS: I mean, it kind of comes down to the same question that we already see, say, with social media corporations such as Facebook, which is: should they somehow control what's going on in social media? In which case it looks like they're vulnerable to charges of manipulation. Or should they just let everything develop laissez-faire, in which case they're vulnerable to charges of just letting awful things happen? And social media corporations, of course, regularly get accused of both. Once you have corporations designing virtual worlds in which we're spending our time, I do think they will, in a certain sense, be held responsible for what happens in those virtual worlds. This is one difference between the virtual world and the physical world. In the virtual world we have a creator of that world who can be held to account for what happens there. In the physical world, who knows, maybe there's a creator, but if they're anywhere, God is up in heaven in some place where we don't have access to hold him to account.
AZEEM AZHAR: So do these then just become spaces of manipulation for individuals in ways that we don’t see in the biological realm?
DAVID CHALMERS: Yeah. I mean, especially if there are going to be virtual worlds which are free. Constructed by corporations, but free for everyone to use. Then okay, well, then someone's got to be paying somewhere. And presumably in worlds like that, the people are going to be monetized and going to be manipulated. That's going to be part of the equation that makes this happen. Now, it may be that there'll be some virtual worlds where the corporations, or whoever constructs them, promise will be free of interference and manipulation, but maybe only relatively elite or well-off people are going to be able to afford that kind of autonomy. And the masses will be in the monetized and manipulated worlds. So yeah, it's hard to know what the right solution is to this in the long run. I very much hope that there will be user-controlled and user-governed virtual worlds, built from the bottom up without corporations playing this essential role. Because the moment there's a corporation in the mix, you just get faced with this responsibility-versus-manipulation dilemma, to which I don't really see a good solution.
AZEEM AZHAR: But if we play this out a little bit, we can imagine a point at which the technologies needed to develop a very large virtual world with the kind of characteristics you think it needs could be very, very cheap from a capital expenditure and power consumption perspective. In other words, a bunch of volunteers could go off and create a virtual world, just as today they can create a piece of open source software without buying an IBM mainframe. A virtual world that could be created and generated by users. So if that happened, and if that was possible, how do you think those rules might then emerge? I mean, are we going to see sort of Rawlsian versus Hobbesian versus Rousseau-style approaches and methodologies to agreeing how these virtual societies or virtual worlds should operate?
DAVID CHALMERS: Yeah, I think it's entirely possible we'll have just a vast set of virtual worlds operating on entirely different principles, and people will be able to actually use this to experiment with different kinds of society and different kinds of government. The philosopher Robert Nozick, in his book Anarchy, State, and Utopia, started talking about utopia. And he said that for him, utopia was not a single best ideal form of society, but rather what he called meta-utopia. A meta-utopia is a choice of societies to live in, with many different governing systems. So some may want a libertarian society, and some may want a socialist society, and some may want some communitarian society where communities come first. And yeah, I can imagine a future where we have so many different virtual worlds that people would be able to choose the kind of virtual world that they want to inhabit. Of course, there are always going to be questions like who pays for all this, and so on. So this is all going to work a whole lot better in a future where we've mastered nuclear fusion, and energy is relatively cheap, and computer power is trivial. It's still an interesting question. Is it always going to be the case that the cutting edge of the technology is run by the corporations? They're always going to have the best engineers? Sure, there'll be open source communities out there in the background too, but will they always be a step behind? I think that's just something where we have to wait and see.
AZEEM AZHAR: One promise of a virtual world is that you could construct a lot of abundance. Your ability to heal the character or just generate a new car or house or cup of coffee is much, much easier. Does abundance lead to an idea of utopia? Because some utopias are sort of constructed around this idea that well, all of our material needs are met. And I just wonder the extent to which there is a sense of that in the discussion of utopia and virtual worlds.
DAVID CHALMERS: It is kind of intriguing that in virtual worlds there’s at least the potential for certain kinds of abundance, most obviously with the analogs of material goods in virtual world, say housing. In the physical world, you build a house. That’s expensive. You build another house exactly the same. That’s about equally expensive. In VR, you design a house, you build it once and then you can duplicate it just like that if you need to.
AZEEM AZHAR: There seems to be a desire by some of the people who are building these things to embed that kind of affordance of scarcity so that the price mechanism works in the virtual worlds that are being built. I mean, what should we make of that?
DAVID CHALMERS: Yeah, I think that's right. And what this kind of brings out is that the market wants scarcity. You get a market in something in virtue of its being scarce. And there are strong forces for markets to be very influential in designing these worlds. So yeah, you see a lot of mechanisms right now that you might view as mechanisms of artificial scarcity: taking entities that look like they should be abundant, like a digital object or artwork that could be freely duplicated, and making them scarce. The non-fungible token, the NFT, is a prime example of this. It's deliberately designed to undercut the fungibility and the abundance of virtual goods, and it's been created for market-based reasons. And I mean, it raises deep questions about the role of markets in designing virtual worlds. This, of course, is not a new issue. It arises for digital goods of any kind, which are very easy to duplicate. In practice, I think what we find is that there are mechanisms of scarcity, including things like copyrights and special-access mechanisms and so on, but there are also a lot of forces towards open source abundance. So often we'll have a cutting-edge product, which is scarce and costly, and then a just-behind-the-leading-edge product, which is maybe open source and abundant. So it wouldn't entirely surprise me if something like that ended up being possible in virtual worlds too. But this is probably one of those places where it's going to matter what the principles are that regulate your virtual world.
AZEEM AZHAR: Right. I think this is one of the questions I find really quite interesting because we are able to make choices about the characteristics of the sort of technical primitives that we’re using to build these virtual worlds. So we can decide whether we imbue that code with an idea of scarcity or not. And most of the people who are building these systems believe that scarcity is important in the world that they’re building. I suspect the reason they believe that is not because that’s what’s needed for the world to function, but it’s what’s needed for them to make money in the biological world and that’s why it’s there. But I just wonder what the implication is for the kind of society that might emerge in a world that is built first and foremost with an assumption that scarcity is important in the objects and the behaviors that exist within it.
DAVID CHALMERS: We should not expect such a world to end up being an egalitarian utopia. In fact, quite the reverse. These are going to be worlds which are designed for a certain subset of people to make a lot of money in virtue of scarcities. But I mean, it's fine for there to exist some worlds like this, but I very much hope that in the end there are some worlds designed on entirely different social principles by, say, communities who want to put community and equality first. People who band together to set up virtual worlds where everyone is on a par and where there are many fewer of these mechanisms of artificial scarcity, where goods really are abundant for a very broad range of people. It's an open question whether such worlds will end up being one step behind the other worlds in some respects, or one step ahead, but this is one place where I think that meta-utopia idea really can get some engagement. Sure, some people can experiment with blockchains and decentralized scarcity and so on, but it doesn't mean that everyone has to model their virtual worlds this way.
AZEEM AZHAR: Take us to a virtual world which is built on principles that are not necessarily starting from the basis of a Lockean property right and excludability, but perhaps from a basis that is much more community-oriented. What might life in that virtual world look like?
DAVID CHALMERS: I guess I’m thinking there could still be analogs of property, because property is not exactly a mechanism for inequality in a world where property is abundant, where basically everyone can have their own planet if you like, assuming energy is cheap enough. And their own mansion at the beach, as the cliché goes. In a virtual world everyone has access at the very least to those material goods. But merely abundant material goods don’t get rid of all the mechanisms of inequality. There are so many social relations of power and domination which are likely to find their way into a virtual world. People will always find ways to make certain things scarce.
DAVID CHALMERS: Of course, there’s also the question of who’s doing the work. Who’s designing all these amazing abundant goods? When people were designing them, maybe there was a case that they ought to be rewarded for it. But at a certain point, it’s probably going to be AI systems which are doing this design. And once it’s AI designing all the amazing stuff, then I think the case for the rewards to be spread out will be all the stronger than it was when people were doing it.
AZEEM AZHAR: Within these virtual worlds, of course, the first intelligences there, which will be people like us, will want to experiment, and they will want to build, and they will want to build digital artificially intelligent avatars. And these will become more and more capable because the technology is becoming more capable, and they could be more capable than the best things that we have in the world today in 2022. These objects might start to become increasingly sentient. Do you think they could start to generate their own sense of self?
DAVID CHALMERS: Yeah, I do think in principle there’s no limit. There are no more limitations on digital beings than there are on biological beings. So I don’t see anything which is special about the neuron or carbon biology as a basis of consciousness compared to silicon in computer systems. You could replace my neurons with silicon chips, and my view at least is that this could more or less preserve my consciousness. Of course, the AI systems we have right now are very primitive and not yet conscious in anything like the way that humans are, but give it 50 or 100 years. I think we’re certainly going to have the possibility of digital beings which are conscious. And the moment you’ve got digital beings which are conscious like us, the question comes up, do they then get rights like ours? Do they get to be part of this system of equality and so on, or are they somehow second-class citizens? And it’s easy to imagine, at least at the beginning, AI systems are going to be second-class citizens. They’ll be produced for us to exploit and to help us with what we do, but in the long run, my own view is that would be an unjust system. If these beings are really conscious like us, they deserve rights. And I do believe that they’ll eventually develop a sense of self, and maybe there could be a whole civil rights movement here for AI systems.
AZEEM AZHAR: Would it matter if we couldn’t tell whether an entity in a virtual world was the facade of a biological being sitting in their lounge chair, rather than an artificially intelligent agent?
DAVID CHALMERS: I think for now, it very much matters. We want to know when we’re talking to a human agent compared to a bot. And this actually comes up already with chat interactions on the internet. Are you human or are you a bot? Because I mean, right now humans are conscious, bots are not. We want to know if we’re actually interacting with a real conscious being. Sometimes you don’t know if you’re interacting with a non-player character in a virtual world or not. I think people really care about that. And I think my recommendation would be, at least for now, that there should be some kind of regulation so that you know when you’re interacting with a human and when you’re interacting with a bot. Likewise, to know when you’re in virtual reality and when you’re in physical reality. I think that really matters. In the long run, when bots are maybe indistinguishable from humans, or at least when we have a class of AI systems which are indistinguishable from humans, some of them may even be uploaded humans, where we take our brains and upload them to a computer and then inhabit the world that way. Maybe there’s a case that, yeah, at that point, having special markers for the AI systems might be unjust.
AZEEM AZHAR: Let’s go back to this 50 to 100-year timeframe. Why do you think 50 to 100 years towards sort of full artificial general intelligence is a reasonable estimate?
DAVID CHALMERS: Oh, I don’t know. My timelines have sped up recently. Something I wrote back in 2010 said I was more than 50% confident we’d get to human-level intelligence in AI by the end of the 21st century. But so much has happened in the 10 or 12 years since then. The deep learning revolution, the progress on recognition, and language, and game playing, and science, and coding. Just about everything that we thought was difficult has seen significant progress. I think that’s actually led me to shorten the timelines. Maybe I’m now at least 50% confident that we’ll have human-level AI by 2050 instead of by 2100. But I think there’s still an awful lot we don’t understand about the human mind. No one has AI systems looking terribly like human-like general intelligence yet. But just seeing the amazing progress of the last 10 years makes me much more open to the possibility that maybe there could be something human-like in 10 or 20 years.
AZEEM AZHAR: I’m calculating. I’m calculating what would need to happen in that time. I mean, I just read in the last day or so that there’s a computer that will be an exaFLOP computer designed for 500-trillion-parameter deep neural networks, available in 2024. And GPT-3 was a one-and-a-bit-trillion-parameter network from 18 months ago. So there’s definitely some acceleration going on in terms of the complexity of these networks and the power of the computing. But I guess the thing that I sort of come back to is, we don’t really know what we’re trying to build. And we don’t know whether, if we make the ladder longer and longer and longer, it will ever reach the moon, or whether you need a completely different way of getting to the moon.
DAVID CHALMERS: That’s the thing about machine learning: in some ways we never really know what we’re doing. We just kind of set up the system and watch it go. But it’s just done so many amazing things in the last 10 years. Progress from GPT-1 to GPT-2 to GPT-3, every time just by multiplying the number of parameters, not necessarily by any vast insight, led to really qualitative differences in what these systems could do. So I’m still waiting for GPT-4 and GPT-5 and whatever their analogs might be. Look, I do think it’s entirely possible that as we do that, we will also come up against fundamental limitations. We’ll just realize, okay, those things we’ve done so far, those were the easier things. And what we’ve still got left are the really hard limitations for AI, maybe for genuine creativity, or genuine common sense reasoning, or genuinely integrating all this into an agent. Maybe something about that is going to prove to be very difficult. It’s got to the point now, though, where we’ve actually got the research program which is going to tell us whether that’s going to happen or not in the next 10 or 20 years.
AZEEM AZHAR: I mean, there’s always this interesting relationship between the scientist and the philosopher. And sometimes a scientist comes in and sort of puts paid to a lot of philosophical questions. I think about how Galileo turned the telescope up to the skies, and suddenly we didn’t need to argue about what’s at the center of the solar system. And I wonder, when we look at things like what it is to be human: we have a microbiome that has more genetic material and genetic diversity in it than we have within our own cells. And there is sort of increasing evidence that the behavior of the microbiome and its balance will affect our mental states, how hungry we feel, our levels of depression, our levels of anxiety. And science seems to perhaps challenge some of the framing that philosophers use to look at these questions. If things like the microbiome are a part of our identity in some meaningful sense, what kind of implication does that have for this idea of being able to upload ourselves, or being able to create realistic artificial agents, whether in the physical world or in these virtual realities?
DAVID CHALMERS: Yeah, I do think you’re right that science transforms philosophical questions, and the best philosophy has to be responsive to the science of the time. At the same time, I think some of these philosophical questions run very deep. And even as the science and technology advances, they just become all the more pressing. So I think, for example, as we develop AI systems that seem more and more like humans in their capacities, questions like whether AI systems are conscious aren’t just going to vanish. Instead there’s going to be a very large philosophical debate about whether the AI systems we’re building are actually conscious. It’s beginning to happen already, but it will happen all the more as the AI systems get impressive. So that philosophical debate will probably become an intense practical debate. Or take the question of uploading ourselves to AI systems. Is that something that we should do? One philosophical view is that if we do, that will be a form of potential immortality and survival. Another view is that if you upload yourself, that’s not you. You’ve just created a wholly separate being. Once we actually have uploading technology, we are going to be faced with that philosophical question as a practical question. Do I think that will still be me, or will that be a form of death? And yeah, your question about the microbiome: is consciousness essentially biological, or is it more a matter of information processing? Both of those are out there as philosophical views, but I suspect that once we have AI and uploading, we’re going to have some people who say, “No, you’ve got to have the biology, therefore none of these systems have genuine minds.” And you’ll have some people who are going to say, “No, what matters is the information processing,” in which case the AI systems can have genuine minds without biology.
AZEEM AZHAR: When these AI bots first started to emerge, I was less worried about their moral status. I was actually more concerned with what would happen when I would send a friend to my AI scheduling bot over email. And they would spend time composing a very courteous email with flowery language. “Hello, Andrew. How are you doing? Azeem didn’t tell me you were working with him, and it was great to meet you. Here are some times I could manage.” And this thing, which was basically a collection of Perl scripts, would respond: “This time, this time, or this time in this location.” And I started to actually feel that there was a moral question. It was not about how the Perl scripts felt. It was about deceiving people and taking up the one thing that humans have a limited supply of, which is their time and attention, when they were effectively dealing with a bunch of push buttons.
DAVID CHALMERS: I think this brings us back to this question that, yeah, we really want to know who and what we’re interacting with. So maybe there need to be these bot flags. If people know that they’re getting messages from a bot, then it’s fine. If you use GPT-3 to make it look kind of human-like when in fact it’s not, then at some point that’s going to come out. And yeah, I think we’re going to trend more and more towards a situation where people know who or what they’re interacting with. In that case, does the moral issue you’re worried about still arise?
AZEEM AZHAR: Probably less so. I would be tempted to carry a bot flag myself so no one would talk to me. That would be easier. I could sort of skulk around pretending to just be a cleaning script. Let’s extend this conversation and think about the point at which we have many different metaverses and people can go in and have these rich experiences over extended periods of time. And they don’t get the nausea that you get when you wear a VR headset, and it feels very real. Will we want to come back to our biological selves?
DAVID CHALMERS: I think this is probably going to vary between people. Many people are very, very invested in biological reality. They greatly value their biological embodiment. They greatly value the natural environment that we’re in. For those people, maybe virtual reality is somewhat less attractive. Maybe they’ll still be able to spend some hours a day there, but they’re going to want to always come back. But I think for other people that may be less important. I mean, ultimately there are going to be new forms of digital embodiment in VR with wonderful new bodies where we can do wonderful new things. There’ll be virtual worlds with all kinds of possibilities that transcend what we do in physical reality. Not just being able to break the laws of nature, fly, antigravity; maybe there’ll be an infinite amount of space. Maybe the VR worlds are going to be much faster than the physical world, and some of the leading edge of the community may end up somehow being in virtual worlds because of this. So I think there may be strong incentives to spend a lot of time in virtual worlds. In the short term, there’s almost certain to be a division. In the long term, yeah, there is going to be the possibility perhaps of uploading oneself, to become a wholly virtual creature oneself with a virtual brain and a virtual body in a virtual world. There is also The Matrix scenario, where your whole body is plugged into a VR in a pod. Many people find that unattractive, but who’s to say, if done well, that it couldn’t bring many of the benefits of the uploading scenario.
AZEEM AZHAR: The way we make sense of the passage of time has been optimized through that process of natural selection to make sense of the world. And there are natural limits, almost at the chemical level, to the speed with which a neurotransmitter clears a particular junction. We might find that those are actual physical limits to how we can experience ourselves in virtual worlds, where those limits are going to be governed by the speed of silicon or quantum bits or whatever it happens to be. And so you sort of wonder whether that starts to create the incentive for people to say, “How can I better experience the full spectrum of what could be given to me in this virtual space?”
DAVID CHALMERS: The brain is pretty adaptive, and it does a good job of adapting to new environments. We already find in virtual worlds that you can present a virtual world quite different from the physical world and the brain can get used to it. So to some extent that may take us a long way, but you’re right, we’re going to come up against fundamental limitations of brain processing. The brain may be fundamentally slow at some level. It’s kind of surprising that the brain doesn’t just use electrical processing. It actually uses chemical processing at the junction between neurons. And that’s super slow. Why would a brain do that? So it may well be that there are points at which, for example, a silicon system will just be fundamentally faster than a brain. And furthermore, it may turn out that moving our cognitive mechanisms to a new cognitive architecture may be much better as well. So it may well be that if we want to actually keep up with the leading edge of intelligence, we will have to actually transform our brains, whether through uploading or silicon extension or enhancement at that point. I mean, that’s something which could happen inside a physical reality or a virtual reality. But I think the two could somehow naturally go together. Once we’re working at a pace of a thousand times the speed of a human brain, physical reality may come to seem unbearably slow to us. And maybe virtual reality could be much better adapted to our new digital brains, and that could in itself provide a further incentive to inhabit virtual realities, which may now be adapted for our wholly different brains.
AZEEM AZHAR: In that sort of context, of course, there’s a whole set of things that we would stop doing. I mean, it would feel that real reality has a sense of the pedestrian, the linear about it. We might feel that we don’t need to get off planet because you can go to many, many planets within the virtual domain. And then I suppose a question that one asks is if this is a progression of technology, perhaps many, many advanced societies and species out there in the universe achieved that level and decided, well, we don’t need to get off planet because we’ve got the energy we need coming in from the sun and we can have as much fun as we like in our own virtual worlds.
DAVID CHALMERS: Yeah. I remember the science fiction novel Accelerando by Charles Stross, where it turns out everyone had just moved inwards to their virtual worlds after their singularity. And yeah, no one bothered exploring the universe. My strong suspicion is that there will always be something special about physical reality, even when virtual realities are somehow sensorially, in some ways, better. But also all these virtual worlds, these virtual environments, are going to be grounded in the physical environment. They’re all going to be running on computers which exist somewhere in the physical world. And I assume that there are still going to be potential conflicts and interactions and so on. So it’s going to matter who controls the physical world. We’d better not neglect the physical world, because the physical world is going to have to be in good shape to support good virtual worlds. And of course, some people are going to want to explore the physical world just for the sake of knowing about the world that we live in, that we inhabit.
AZEEM AZHAR: They’re not going to know it’s the physical world, are they? Because they could be told, “Go to your home pod in your virtual world, take off your headset, and join the physical world.” But of course they could just be hopping into another room in the metaverse.
DAVID CHALMERS: They could be. And this is the question, of course, that faces all of us at any moment. Could all this be a simulation? But of course that does require some kind of serious manipulation or deception for that to happen. So I guess I’m assuming a society where it’s transparent that we have all these virtual worlds and there are some transparent mechanisms of governance. And even then, yeah, the whole thing could have been a simulation set up by somebody else.
AZEEM AZHAR: I’m pretty convinced, David, that you are not a simulation. I’m going to put it out there that you are a real human that I’m having this conversation with.
DAVID CHALMERS: Well, I don’t know myself. It would be nice to be one of the biological beings at level zero. But if it turns out that I’m actually a simulated being in a simulated universe having a simulated conversation with you, I think the conversation is still just as good as it was. This is still a real thing that we are taking part in, that we exchanged ideas. We interacted with another person. I don’t see why that would somehow be something less if it turned out we were in a simulation.
AZEEM AZHAR: Well, David Chalmers, it’s been fantastic to have this conversation, which may have been real. It may have not been real, but it was certainly very, very valuable. And it’s been wonderful having you as my guest today.
DAVID CHALMERS: Great talking with you.
AZEEM AZHAR: Well, thanks for listening. If you enjoyed this conversation, then do go back to my conversation with neuroscientist Anil Seth where we look at his theory of consciousness. And if you enjoy this podcast, you will love my weekly newsletter. As a listener, you get 20% off on the annual membership, which will give you access to weekly insights, a community, and in-depth essays on all things near future. Get a discounted membership at www.exponentialview.co/listener. The podcast was produced by Mischa Frankl-Duval, Fred Casella, and Marija Gavrilov. Bojan Sabioncello is our sound editor.