Vitalik Buterin on making sense in a changing world
February 2024
Show notes
This is my conversation with Vitalik Buterin, creator of Ethereum.
Timestamps:
- (00:00:00) intro
- (00:01:07) sponsor: Optimism
- (00:02:17) micro prediction markets, community notes, AIs as participants
- (00:14:13) decentralized social networks, zk identity, Dark Forest, and Frogcrypto
- (00:25:54) the dense jungle
- (00:30:08) sponsor: Privy
- (00:31:29) political instability, technology
- (00:34:16) coordination and technology in climate
- (00:36:13) AI, debugging and drawing, agency, security
- (00:44:02) timeline for the singularity
- (00:52:36) living to 1,000 years old
- (00:54:00) brain-computer interfaces
- (01:02:15) Lojban
Transcript
Sina: I thought I would start with a line that you had in one of your recent blog posts, that prediction markets are the “holy grail epistemic technology,” and that you’ve felt this for some time. So, yeah, what do you mean by that?
Vitalik: So, the way I think a lot of people naturally think about prediction markets is they imagine people betting on major events, with large numbers of people betting on a relatively small number of events over a fairly long timescale. So, you might be betting on the outcome of a sports game, or the outcome of an election, or whether or not LK-99 is real. These are all good prediction market use cases. Sports betting, I think, is just a fun thing, but sometimes you need the fun things to create the network effect for the super socially valuable things.
But, you know, Robin Hanson, for example, has long supported the idea of futarchy, where you’d have a market in which, for instance, you’d simultaneously trade a company’s shares conditional on whether they hire one person as CEO versus another person as CEO. Then, you’d see which shares float higher, and you’d use that to decide who to hire as CEO. You could even imagine that process being automated, and that would just be futarchy as a direct form of governance. So, yeah, those kinds of ideas are how we traditionally think about prediction markets.
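The governance step Hanson describes can be sketched in a few lines (a toy illustration, not Hanson's actual mechanism design; the candidates and prices are made up). Two conditional markets trade the company's shares under each hiring choice, trades in the branch whose condition ends up false are reverted, and the decision is whichever conditional price floats higher:

```python
# Toy futarchy decision rule. Each market trades shares conditional on one
# candidate being hired; the decision is the candidate whose conditional
# share price is highest. (Names and prices are illustrative only.)

def futarchy_decision(conditional_prices):
    """conditional_prices: {candidate: share price conditional on hiring}."""
    return max(conditional_prices, key=conditional_prices.get)

prices = {"Alice": 104.0, "Bob": 97.5}
choice = futarchy_decision(prices)   # the market "believes in" Alice more
```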
Sina: Taking a quick tangent on this idea of futarchy because it’s so sci-fi and cool, has it been experimented with anywhere? I’m trying to think, but I feel like the answer is not really.
Vitalik: Yeah, I mean, the answer is not really because, in order to even have those kinds of prediction markets, you want to get the basics working first. Conditional prediction markets that feed directly into decisions are like stage two. Getting prediction markets working at all at scale is stage one. But it’s interesting that you call the space kind of science fiction. To me, I’m sort of quickly running through this whole thing as though it’s old news, no big deal.
Sina: Well, yeah, first you need to walk before you can run. But it does make me think, especially, it would almost be cool to have—yeah, we can talk about prediction markets, but even the idea of futarchy, almost playing with it in more low-stakes environments. Like, say, an individual deciding what they’re going to do next and kind of doing it as a “Choose Your Own Adventure” type thing where people are betting on the different paths. I don’t know, but we digress, right?
Vitalik: Actually, not a digression at all. My core case for why I think prediction markets are a much larger category than what people have been thinking is that there’s a huge space of micro prediction markets that could be all kinds of really powerful mechanisms.
So, I can give one example. Let’s take Community Notes on Twitter. It’s a very powerful mechanism. I wrote a blog post on it. Anyone can suggest a note for a tweet, and then there’s this voting mechanism where people vote on notes. It’s not even selecting the most popular note; it’s selecting notes that are consistently popular for people from both sides of major political divides. It’s sort of really nice, and I think people generally agree that it’s nice. But the main criticism I’ve seen is basically that these notes do not appear fast enough.
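That "consistently popular on both sides" idea can be sketched as a toy scorer (a deliberate simplification: the real Community Notes algorithm infers rater viewpoints via matrix factorization rather than taking cluster labels as input):

```python
# Score a note by its approval rate among the LESS approving viewpoint
# cluster, so only notes that both sides tend to like rank highly.
# (Toy model; cluster labels here are given, not learned.)

def bridging_score(votes):
    """votes: list of (cluster, approved) pairs, e.g. ("A", True)."""
    by_cluster = {}
    for cluster, approved in votes:
        by_cluster.setdefault(cluster, []).append(approved)
    if len(by_cluster) < 2:          # unseen by one side: no bridging credit
        return 0.0
    return min(sum(v) / len(v) for v in by_cluster.values())

partisan = [("A", True)] * 3 + [("B", False)] * 3 + [("B", True)]
bridging = [("A", True)] * 2 + [("B", True)] * 2 + [("B", False)]
# partisan: 100% approval from cluster A but only 25% from B -> score 0.25
# bridging: 100% from A and ~67% from B -> much higher score
```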
For instance, some random explosion happens, and someone makes a tweet saying, “Look at these three pixels over here, the Blue Party did it.” And it turns out that’s total fake news; it was actually the Purple Party that did it. But in the meantime, during those two days, the meme spreads, and people have already formed firm opinions on the situation.
The question is, can you even make Community Notes appear faster? I think the default challenge here is that democratic mechanisms are slow. Elections are slow. It takes a long time for lots of people to look at something, especially if people are less sophisticated. You don’t want to press people to decide anything quickly.
So, one natural idea that comes up is, what if you just have a prediction market on whether or not some post is going to have a note, or if it’s going to have a note with some particular set of contents? The idea is basically that you’re introducing a second kind of voting, except this kind of voting is incentivized in the sense that you get rewarded if your immediate vote matches the final outcome, and you get penalized if your immediate vote goes against the final outcome. Over time, that would basically concentrate voting power in the hands of those who are more accurate.
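A minimal sketch of that incentive loop (the reward and penalty factors are arbitrary illustrative choices, not part of any proposed design): after the slow "ground truth" process resolves, each early voter's weight is multiplied up or down depending on whether their vote matched the final outcome.

```python
# Over many rounds, multiplicative reward/penalty concentrates voting
# power in participants whose quick votes consistently match the final,
# slower democratic outcome. (Factors 1.2 / 0.8 are assumptions.)

def update_weights(weights, early_votes, outcome, reward=1.2, penalty=0.8):
    return {
        voter: w * (reward if early_votes[voter] == outcome else penalty)
        for voter, w in weights.items()
    }

weights = {"alice": 1.0, "bob": 1.0}
for outcome in [True, False, True]:       # alice is right every round
    early = {"alice": outcome, "bob": not outcome}
    weights = update_weights(weights, early, outcome)
# alice's weight grows to 1.2**3; bob's shrinks to 0.8**3
```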
Vitalik: So, that’s one example of a potential micro prediction market use case. I’ve talked about this a couple of times in the context of moderation for decentralized social media sites. The idea is that you have some DAO as a final panel of judges—maybe an inefficient group of 15 or 100 people or whatever—but then you have prediction markets or some kind of challenge game leading up to that. Basically, you can flag something, but if your flag goes against the final decision or against too many other people’s weighted flags, and no one else supports you, then you get penalized. If it aligns with the final decision, you get rewarded. To me, these are all prediction market use cases.
There are even ones within the crypto space, like determining if a URL is a scam. That’s the simplest thing that could benefit a massive number of people. When a scam starts, it can spread fast. There was a pretty wild one this morning where someone managed to make 30 seconds of deepfake audio and video of me. Eventually, I was able to point out specific things that showed it wasn’t me, but I could totally see how someone who’s only heard me a few times wouldn’t catch on. It started spreading around as one of those fake StarkWare airdrops. The challenge with these things is that even if they get community-noted in 12 hours, a lot can happen in those 12 hours.
So, it’s that exact same kind of use case. Micro prediction markets map onto huge numbers of functions that would be really valuable for preventing scams, preventing misinformation, and giving people faster access to higher-quality information—all of these things that we care about.
Sina: Do you think this is possible to build today, or do you see any edge cases or pieces of this idea that still need to be fleshed out more?
Vitalik: I think the idea is there for the taking. What’s not built yet are the actual user interfaces and the ecosystem of people participating in these kinds of things. Realistically, in a lot of micro use cases, it won’t even be people—it’ll be AIs participating. There are some interesting experiments happening there. Martin Köppelmann, for instance, has been doing a lot of stuff around AI participating in Omen, which I linked to in my latest blog post.
The other piece, of course, is that blockchains have to be scalable enough. This has been the big dealbreaker that’s prevented a lot of super interesting blockchain applications from happening for such a long time. I talked about this in my other post on making Ethereum cypherpunk again. My thesis is that the reason DeFi dominates so much is because when transaction fees go up to $80, a DeFi thing is the only thing that makes sense to build. But when fees go down to 0.8 cents, suddenly all the cool stuff starts making sense again.
I think 2023 and 2024 are on this arc of making massive progress on layer 2 scaling. We saw Arbitrum hit stage one last year, and lots of rollups are planning to hit stage one or stage two this year. I’m excited about us actually turning the corner on that. It’s a whole new space of applications that’s possible.
Sina: Maybe one interesting prompt is that 2024 is also a year of elections, not only in the US but elsewhere in the world. Taiwan just had theirs, India’s having theirs, and if the last election cycle is any example, it’s going to be total chaos in the online information environment. This time around, the big difference is that AI has matured to the point where you can have very closely resembling audio or video of someone, making it hard to tell if it’s real. It’s interesting that this is coinciding with a time when crypto also has this opportunity with layer 2s to build new things. I do wonder if decentralized social networks as a whole have a role to play, given the public key infrastructure baked into them that’s legible to everyone on the outside.
Vitalik: I think there are a couple of things that all have synergies with each other. Decentralized social networks, absolutely. It’s been amazing to see how much Farcaster has been taking off, and how Lens has been doing recently as well. It’s incredible that we’re starting to see a crypto application that people actually use, and it’s not just finance-related. It’s something people have wanted for a really long time, and it’s actually happening. It’s interesting to see people on Twitter sometimes having a hard time comprehending it. When I commented on Farcaster yesterday, some people replied with things like, “Oh wow, Vitalik says Farcaster is the narrative. Does this mean I need to buy degen tokens?” Their immediate interface is, “How do I align my bag with the thing?” Of course, that’s not the whole point. The point of Farcaster is that it’s a social network—you use it, you post on it, you yell at people on it, hopefully nicely.
I saw Dan reply to some folks asking, “When token on Farcaster?” and he said that if you say this, you’re going to get nerfed and deprioritized by the algorithm. It’s a good way for them to be the benevolent dictator right now and set good cultural norms. It’ll be interesting to see if the culture actually manages to stick two or four years into the future.
So, we’ve talked about micro prediction markets and decentralized social networks. I feel like I’ve already covered the micro prediction use cases within decentralized social. These are things people have been experimenting with to some extent. There have been ideas around crypto social networks where you can make bets on people’s long-form posts. I think even a bunch of the Chinese ones tried to do that five years ago, but this could easily be one of those things that doesn’t succeed until it does.
The third piece is all of the ZK identity stuff. Zupass, which started at Zuzalu, and that whole ecosystem, as well as Exup, is increasingly growing into a fully-fledged product. That’s been good to see.
Vitalik: Yeah, and I think the synergy there is that you have your crypto asset wallet, but then you also need your identity wallet that stores things like attestations and any kinds of off-chain claims about yourself. You can make proofs over a whole bunch of these different claims that prove some particular statement about you without revealing any other information. So, that’s really powerful, and it’s something that decentralized social networks could plug into.
Another natural use case of this is airdrops. The thing that every airdropper wants is to reach people who are actually part of their community and people who are likely to hold on to and care about the token, not just dump it when they discover it three months later while scrolling through something like Rabby. So, the idea here would be that instead of doing per-address airdrops or using very proprietary algorithms to weed out airdrop farming abuse—which are often justified by the need to prevent abuse but can also be massive opportunities for the team or rogue employees to create hidden pre-mines—you airdrop to people who, let’s say, have at least five ZK stamps in a particular category. Or you airdrop to people who attended Devcon or ZK events, or have a Gitcoin Passport score of at least 0.3, all while preserving privacy. I think that stuff is super interesting, and it feels like the technology is slowly getting to the point where we’re right at the cusp of being able to do it.
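The ZK machinery itself is beyond a sketch, but the public predicate such a circuit would enforce is simple. Here is an illustration (all names, categories, and thresholds are made up, not from any real airdrop):

```python
# Eligibility predicate for a hypothetical ZK-gated airdrop. In practice
# the user proves in zero knowledge that their PRIVATE attestation set
# satisfies this predicate; the verifier checks only the proof and learns
# nothing else about them.

def eligible(stamps, passport_score, category="zk-events",
             min_stamps=5, min_score=0.3):
    in_category = [s for s in stamps if s["category"] == category]
    return len(in_category) >= min_stamps or passport_score >= min_score

attended = [{"category": "zk-events", "event": f"e{i}"} for i in range(5)]
# Five qualifying stamps, or a high enough score, either path qualifies.
```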
I’m actually really hoping to see some ZK-gated airdrops start to happen because I do think airdrops can be good at motivating people to learn and go through complicated stuff when other incentives don’t. If we suddenly educate a large number of people about having a zero-knowledge proof wallet, that’s something that can start getting used in a whole bunch of other contexts.
So, let’s see. We went through micropredictions, a more decentralized alternative to USDT on Tron, crypto social, ZK enterprise applications—or I guess the category I have in mind here is things that in today’s world are done centrally, but where you could add about 20% decentralization to them, and that actually gives very useful guarantees to users. The easiest example of this isn’t even in the traditional categories people have been exploring for ten years, like healthcare or supply chain, but something close to our hearts, which is proof of solvency for exchanges. If you’re a crypto exchange, you can zero-knowledge prove that your database is being updated correctly and that you actually have an underlying coin for every coin in users’ balances. This ties into my blog post titled “How to Have Safe CEX”—proof of not being FTX, proof of not being Mt. Gox.
I think the interesting thing is, if you analyze what it actually means to do ZK-based proof of solvency, you realize that technologically speaking, it’s basically the same thing as a Validium. It’s a root hash on-chain where the root hash is only allowed to be updated based on a ZK-Stark that satisfies certain rules. That shows there’s actually a continuum there.
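To make the analogy concrete, here is a toy version of that commitment structure (illustration only; a real proof-of-solvency system proves balance inclusion and the liability sum in zero knowledge instead of revealing the leaves):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash leaf data pairwise up to a single root commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# The exchange commits to every user's balance with one on-chain root...
balances = {"user1": 30, "user2": 50, "user3": 20}
leaves = [f"{u}:{b}".encode() for u, b in sorted(balances.items())]
root = merkle_root(leaves)

# ...and solvency is the claim that committed liabilities <= reserves.
reserves = 110
solvent = sum(balances.values()) <= reserves
```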
So, exchanges doing proof of solvency is number one. Number two is games. We already saw the Dark Forest game a few years ago. One of the reasons I really respect Brian Gu and his team goes back to the critique I gave yesterday of some existing so-called “SocialFi” things. That’s a term I totally hate, because it’s like, okay, we have this amazing opportunity to do crypto social, so what’s the social problem you want to solve in crypto? Oh, not enough speculation. The critique is similar for GameFi, where the thing that fails is the game. People use financial speculation as a substitute for fun, and that’s what I feel like Axie Infinity and a lot of the others ended up stumbling into doing. People probably even thought they were genuinely having fun while all the token prices were going up in 2021, but then when prices started going down, it just became a ghost town. From what I’ve seen, if the chart I saw was correct, a similar thing is happening with Friend.tech.
So, basically, you have to use the blockchain but make sure the game is fun in a way that’s not dependent on the token dynamics going in one direction. The Dark Forest game from 2020 does a super amazing job of that. It’s not a financialized game, but it is a blockchain game. Of course, the same guys did Frog Crypto at Devcon, which I think was also a similar and actually fun game in the same spirit. Blockchain games where you have on-chain proofs of state, making it easy to provably export assets from one server and potentially import them to another server, and do really interesting things there—I really support that kind of thing.
From there, you can start going into various business applications. I think the big value unlock that people are looking for is that if you can make these ironclad proofs that you have some kind of object in progress going through an enterprise system—like a package in transit or a product in the process of being manufactured—then you could use that as collateral. That could make it cheaper to take out loans, which I think is super interesting too.
And then, of course, finally, the fifth category is DAOs. We’ve been seeing a proliferation of DAOs, and I think people are starting to get smarter on governance mechanisms. So, wow, I think we just went through all five categories that I described in my other post about what in the Ethereum application ecosystem excites me, from back in December 2022. It’s good to see that stuff has remained stable for a year.
Sina: Yeah, that was a very comprehensive tour. It’s nice, right? It feels like the same kinds of things, but all of those things feel so much more mature.
Vitalik: Totally, we’ve made a lot of progress.
Sina: So, maybe taking the conversation one layer deeper and thinking about why these things actually matter. Ultimately, why is it helpful to build these sorts of applications and mechanisms? I found one concept that you’ve talked about recently—that the world is becoming more of a dense jungle—to be interesting. What do you mean by that?
Vitalik: The dense jungle was a metaphor I first introduced in a post I wrote all the way back at the end of 2020. The idea is that there are all these social actors becoming more and more powerful in a world whose size is staying the same. So, it feels like we’re entering this environment where, if you think back 15 years ago in the US, and I think Canada too, the political debate in both countries, and probably the entire Anglosphere and Europe as well, was centered around one part of the spectrum being afraid of big government, and the other part being afraid of big business.
It feels like the outcome, 15 years down the line, is that both of those are probably more powerful than they were back then. I’m not saying that’s good, and I’m not even saying that’s not terrifying, but it feels like it’s the reality. And it feels like lots of other things are getting bigger at the same time too. Mobs are getting more powerful, cryptography is getting more powerful, AI is also getting more powerful. Basically, the point I made a few years ago was that if your criterion for a good future is a big thing you dislike being beaten down or going away, then you probably feel like you’ve lost. But if your criterion for success is a new thing that you like being introduced, then you probably feel like you’ve won.
I think this is even true in the cypherpunk space. On the one hand, we definitely don’t have crypto anarchy, but on the other hand, we have amazing privacy-preserving technology. We have zero-knowledge proofs more powerful than most people could have even dreamed of. We actually have private keys in each person’s hands on a massive scale. We have all these different things. So, if you look at it from the perspective of what kinds of things we gained, well, it turns out we actually gained a lot.
It’s this paradoxical situation where you have all these really powerful forces simultaneously progressing and expanding themselves. When that happens, you basically have these really powerful things pushing against each other more and more. I feel like this interplay dynamic between these different forces is going to be one of the defining trends of the 21st century.
Sina: So, does this imply that to understand the world, whereas before you needed to understand maybe a couple or a handful of big forces shaping the future, now you have to have a more cross-sectional perspective on things?
Vitalik: I think that’s definitely a big part of it. You have to have a cross-sectional perspective on pretty much everything at this point.
Sina: I want to take a quick moment to mention that Into the Bytecode is sponsored by Privy. One of the biggest problems we’re grappling with as builders working on crypto-enabled applications is how to make the right trade-offs between user experience on the one hand, and security and privacy on the other. How do we promote self-custody and ownership while letting the application shine rather than the crypto behind it?
Privy plays an important role here. They provide simple onboarding, so anyone can connect to your app easily, either by signing in with an existing wallet or through a new self-custodial wallet that you provision for them, linked to social logins like Google, Twitter, or Discord. I personally have faith in Privy because of the team. Henri Stern, one of the co-founders, was previously on an episode of this podcast, so you can listen to that conversation for a deeper dive. He and his co-founder, Asta Li, have been thinking about data privacy and security for a long time, and you can see this in the level of thought they’re putting into the product. So, if you’re working on a new product and thinking about how to reach a wider group of users without compromising on either user experience or privacy and security, I encourage you to check out Privy at privy.com.
Now, back to the conversation. Vitalik, I’d like to dive into the big trends that are guiding the world in general. What do you see as the major forces at play?
Vitalik: I think there are a few. The big political mega-trend is probably the collapse of a condition of relative peace that existed to a great extent in the 1990s, and to a lesser extent since around 1945. This is a kind of order that we’ve all basically taken for granted as permanent, even treating the direction of it continuing to improve as permanent. But over the last decade or so, we’ve basically seen it collapse.
There are a lot of different explanations for this, but ultimately, any one of these orders is going to be finite because the pressures that caused it to come into existence are no longer there. Lots of other aspects of the world are changing. The people who even remember the events that motivated the creation of this order are either retiring or dying out. We’re seeing an era of instability in all kinds of ways, both within individual countries and, of course, between countries as well. It feels like the environment is getting more tense, more competitive, and hostile in a lot of ways.
One way to interpret this is as some kind of malignant new thing, but the other way to interpret it is that this is actually how history has been all along, and we’re just exiting a period of a relatively unusual amount of calm. That’s the interpretation I’m more sympathetic to.
The second major trend is the rapid growth of technology. Crypto is a big part of that, AI is absolutely a big part of that, biotech is starting to be a big part of that, and also various kinds of hard tech, like solar panels, for example. It feels like we’re on the cusp of this transition in climate change. Twenty years ago, when technology was worse but political cooperation was better, there was a lot of idealism around the idea of abandoning big cars and big houses to live simpler lives for the sake of our shared Earth. These days, even if all the hippies do that, the non-hippies won’t, and it turns out the non-hippies alone are enough to potentially cook the planet.
At the same time, solar panels are getting to the point where they’re actually cheaper than a lot of the non-renewables we hate for very good reasons, like coal. That’s creating a perspective shift. Twenty years ago, the conversation was much more about climate as religion, personal penance, and worrying about your footprint. These days, there’s a larger share of the conversation that’s on climate tech and solutions.
Vitalik: I’ve met some amazing, really smart 19-year-olds from India at the Emerging Venture Summit back in August. They were basically converting CO2 into bricks. And there are people developing incredible solar panels and batteries and so forth. I think that’s one of those interesting examples where the two forces—innovation and sustainability—go in the same direction.
AI, on the other hand, is massively changing a lot of things. It’s having a couple of different effects. One is obviously increasing the set of things that can be done with no human effort at all. But the other thing it’s doing is, for tasks where you still need some level of human involvement, it’s expanding the scope of what a human can do. It’s also really changing and resetting the kind of expertise you need. It’s shifting the rankings of what skills are important to have and which ones are less so.
I can give a concrete example. I’ve been using AI to help with programming a lot, and it’s amazing. There are a lot of things it can help me code five times faster. One place where AI is especially helpful is in introducing me to new frameworks. These aren’t intellectually complicated, but they’re specific systems where you have to code in a particular way to interact with them. Many thousands of developers have done it before, but I just have no idea how. So, making a Chrome extension or an Android app—these are things that have become vastly easier and more accessible to people. It’s now in my power to just go do it as a day project, when that was totally not true before.
Also, the time I spend programming—before, it was always 80% debugging, but now it’s like 95% debugging. Drawing is the same thing. I’ve used AI to draw a whole bunch, and if you want to draw with the objective of dazzling people, you can generally put a prompt into Stable Diffusion or DALL-E and get it right on the first try. But if your job is to draw something specific that you need for a purpose, then generally you have to do like 10 rounds of telling it to draw, telling it to edit a particular corner, telling it to make the cat 50% larger, or clarifying that no, the cat has four legs, not five. It’s multiple rounds of modifications. So, the kind of work being done changes a lot.
I also think it makes a lot of things more accessible. There was a fear for a while that AI would empower the elites much more than regular people, but in some ways, it feels like the opposite is true. I think Noah Smith wrote about this—I forget the name of the post, but it was something about the age of the normies coming back. His point was that there are even papers showing that AI’s ability to increase people’s productivity, the percentage increase, is significantly greater for amateurs than for experts. That’s something I’ve noticed too. It improved my ability to create browser extensions by a factor of five, but if you’re a browser extension expert, it might improve it by a factor of 1.1. So that’s interesting.
One personality trait that I think becomes more important in this era is agency—just the willingness to get off your butt and do something. I think the way a lot of people are going to fail is by staying in the mindset where things feel off to them because it’s an area they haven’t even looked into before. They expect it’s going to be super hard, but they haven’t updated to the fact that, with large language models, it’s a lot less hard. If you can just make that mental switch in your head, your ability to operate in the new world is probably going to increase quite significantly just from that alone.
Another thing I totally did not predict, and I think nobody predicted, is how AI is replacing the doctors before the plumbers—or maybe a better analogy is replacing the lawyers before the plumbers. Internet work gets replaced way before physical work, which is just super crazy and interesting to see. From a societal concern perspective, I think that’s a massive positive. The big scare we had about AI making the most vulnerable people unemployable is, fortunately, not happening for a while. The people AI hits first are in a better position to handle it. But at the same time, eventually, AI is going to come for everything, and that’s something we really need to start being prepared for.
The other big topic is how AI intersects with trust. We talked earlier about deepfakes and how you can’t trust someone’s voice or video anymore. There are some pretty serious cases of this too. I saw a report yesterday of someone in Hong Kong who lost $25 million because someone impersonated a bunch of people, including the CEO or CFO, on a video call. I asked my security friends about it, and they basically said that’s a really exceptional situation because there were multiple total security fails there. Any reasonable enterprise would have had multiple layers of protection against that. But it shows how people who’ve been slacking off on security and feeling safe doing so can’t really slack off any longer.
This is where crypto might come in because we have many more of these digital attestation-based technologies for figuring out what it is that we’re actually trusting. So, there are a lot of interesting shifts happening in all kinds of different sectors. Right now, obviously, AI is pulling ahead and is probably something like half of the story, but crypto, hard tech, bio—all of these things matter a lot too. The world is just going to continue flipping itself over in interesting ways pretty much once a decade from here on, all the way up until the singularity.
Sina: And when is the singularity?
Vitalik: My 95% confidence interval is 2030 to 2200.
Sina: Okay, that’s quite a wide range.
Vitalik: It is. Actually, a year ago, I think it was 2027 to 2200, but yeah.
Vitalik: Over the past year, AI progress has been significantly slower than I had expected, so I’ve adjusted my predictions away from the earlier side of my distribution. But it’s just so hard to predict these things ahead of time. To give some example reasons why we should think this way, consider the difference between the ENIAC, the very first computer from the late 1940s, and today, where we have computers embedded in watches, in random $10 electronics, everywhere. Now imagine that entire trend progressing for another 70 years. Or think about AI progressing from 30 years ago, when it was a joke, to today, where you can make a good case that modern AI is roughly as performant as AI in science fiction, like the computer on the Enterprise or those science fiction robots. That level of AI doesn’t even feel unrealistic anymore with today’s technology.
It’s also interesting to see how we’ve been wrong on the specifics. I was watching a science fiction movie, Passengers, recently, and one scene that came to mind was this robot character, basically a bartender. One of the humans physically attacks the bartender, and the robot just doesn’t mind it. This is a pretty common trope—that robots don’t feel pain, so you can punch them, slap them, kick them to the ground, and they just stand back up, bow before you, and say, “Hope you’re enjoying your day,” without expressing any emotion. But that’s totally not how something like ChatGPT works. If you put ChatGPT into that body and slap it, it will say, “Ow!” If you then remind it, “Hey, you’re a robot, you don’t feel pain,” it will reply, “Oh, I’m sorry, I apologize. I am a robot and do not feel pain. Sorry for the misunderstanding.” But at first, it reacts as if it feels the pain. So, we’ve gotten a lot of things wrong.
And it raises the question: how do we know anything about the experience of a conscious being? You can’t see on the inside, so you have nothing else to go on. My usual reply to excessive pessimism on this is that you can tell if an organism is conscious if it starts talking about consciousness. If aliens came to Earth, started reading all of our philosophy, and noticed that we talk about this internal experience thing, they’d be pretty confident that we have internal experience. But the problem with robots is that they’ve been trained on multi-terabytes of human-made data. So if they start talking about internal experience, they might just be pattern-matching humans.
So, back to timelines. One way of thinking about this is considering what modern AI is. The way I see it, it’s a set of algorithms that you create not by specifying them explicitly, but by stirring a computational soup and putting into that soup some kind of optimization pressure that pushes it toward creating things that are more and more like what you want over time. You stir it for a really long time and put in lots of compute. What’s interesting about this description is that it’s also a perfectly valid description of the process that created humans—evolution, from the primordial soup all the way up to where we are today.
On one hand, it’s a dismissive description of AI, but on the other hand, if you want to have a dismissive description of AI, that’s fair, but then your description has to be similarly dismissive of humans. Ultimately, humans are not magic; we came about through this exact same kind of process. With that frame, you can ask the question: how big does the soup need to be to create something like humans? There was this interesting article, I think it was called something like “Projecting AI from Computation,” that tried to use both a single human lifespan’s worth of learning and the entire process of evolution as metaphors for how much compute would need to go into training before we would logically expect it to be that smart.
This is a very difficult thing to compute because it’s a completely different regime, and you have no idea which parts are important and which aren’t. But it gave answers ranging between about 10^25 and 10^55. The AI we see today is around 10^25, and 10^40 is roughly the level of computation that might become feasible closer to the end of this century. So by the end of the century, it looks like we’ll have enough compute that we’ll be able to stir a soup in some pretty naive ways, and eventually, someone will accidentally create something as complicated and intelligent as humans out the other end.
That’s one of my intuitive bases for the timeline. The other intuitive basis is roughly this: think about the difference between modern AIs and the AIs of 20 years ago, and ask what multiple of that gap separates us from AIs that are actually smarter than humans at everything. That actually gives a more aggressive answer. If that were my only frame, I’d say we’ll get superhuman AI before 2040. But then we have to take into account that the last few years of massive rapid growth didn’t just come from improvements in algorithms; they came from a one-time transition from putting a little bit of our resources into algorithms and compute to putting a significant fraction of all of humanity’s resources into algorithms and compute. That’s a transition that’s not going to repeat itself.
So, there are about 20 different ways to look at the problem, and they all give different answers. My uncertainty over which frame is more correct than another is itself the reason why my 95% confidence interval is so wide. But my median, I think, is around 2060, or maybe the late 2050s. Things could totally go in all kinds of ways, but this is, in a very real way, the last and most important chapter for humanity before either doom or utopia comes. So, yeah, it’s going to be interesting.
Sina: Okay, maybe to close, I’ll have a couple of fun rapid-fire questions if you’re up for it. We’ve talked about this future, and for a long time you had this Nick Bostrom essay, “The Fable of the Dragon Tyrant,” in your bio. So, my first fun question is, how long do you think either one of us will live? What does our longevity look like as people in our 30s today?
Vitalik: I’d put my own probability of living to a thousand years at about 50%. Keep in mind that this combines both anti-aging research progressing optimistically and the possibility that we’ll have an AI Singularity pretty soon. There’s also the possibility of cryonics reawakening us later. If you take that into account, the probability might even go up to 55% or so. So, yeah, pretty high.
Sina: Okay, okay. Second question. In your post about thinking about the long-term future, you talked about brain-computer interfaces as one potential path for a good AI future. So, how do you think about the state of BCIs today and the potential promises or dangers of that path?
Vitalik: I see BCIs as a spectrum, where the question is basically how tight the feedback loop is between stuff happening inside a computer and stuff happening inside your brain. AI systems are obviously pretty extreme on one end of the spectrum because you put in a prompt and then wait for it to do stuff for an entire 30 seconds. Things like AutoGPT go even further to that extreme because you just spin up an agent, and it does whatever for hours.
Then, with the stuff we have today, I think we can easily get into the hundreds-of-milliseconds range. Even with things like AI photo editing tools, a keyboard and mouse give you pretty rapid feedback in both directions. The Apple Vision Pro, which a lot of people are talking about, does eye tracking. I view eye tracking as being on the spectrum of BCI because, technically, it’s just watching your muscles, but the eyes are much more closely connected to your brain and much more subconscious than your hands, fingers, or feet. That reduces the feedback time even more.
Then we have non-invasive ways of reading actual neuron firings inside the brain, and of course, there’s the invasive stuff where you put chips under the skull. Unfortunately, it does look like the invasive stuff is significantly more powerful. But at this point, there have been some pretty impressive results. I think the latest is that with the best BCIs and a trained subject, you can type from your brain at about 60 words per minute or something crazy like that. That’s about half of a relatively expert typing speed.
The Holy Grail, I think, is that you get an experience that feels like being able to multiply eight-digit numbers in your head and remember them exactly, or think of something like, “What is the population of Yale?” and get the answer just as easily as if you asked yourself a simple personal question like, “Which country am I from?” I think it’s going to feel more like thought than language eventually. There definitely is a subconscious way that people operate that can be even faster than language, but I think it’ll all start with language and then optimize from there.
It does feel like our ability to do all of those things is increasing pretty rapidly. To me, the eventual goal is just mind uploading, where our actual brains end up running on some digital substrate. That’s amazing for a couple of reasons. One that I think people don’t think about is that it just solves physical safety. If you’re streaming a backup of your mind live, then if someone blows up a bomb right beside your server, you just restart somewhere else, and you almost don’t even feel it. Given how much we worry about safety and how many horrible things are justified in the name of safety, I think that by itself is such a massive win.
Another reason is our ability to explore the stars. There’s a project that Mark Zuckerberg and some others are participating in called Breakthrough Starshot. Their plan is literally to get to Alpha Centauri, the star 4.3 light years away, before the end of the century. How can anyone do that? The answer is you have a space probe, like a robot, with a light sail, and you shine a laser at it. The laser accelerates it at 10,000 Gs, which is literally 100 kilometers per second every second. That’s an acceleration so rapid that a human would just get squished immediately, but robots can handle it just fine. Then you accelerate to 20% the speed of light, and you get to Alpha Centauri.
That’s far-future stuff, though. To even start thinking about doing those kinds of things, we need way better neuroscience than we have now, and a much better understanding of how our brains work. Building up a stronger BCI industry is probably one of the better ways of developing that expertise. The final goal here, I think, is to ensure that we don’t get massively outcompeted by super-intelligent AI whenever it arrives. If the question is whether we prefer a universe populated by things that can think a million times faster than us but are human, that have memories of being human and of being born in a particular city in a particular country in 1991, versus a future where totally non-conscious robots populate the universe, I think the first is just a much better deal from the perspective of pretty much any philosophy. That’s kind of how I think about super long-term stuff, but that’s all Singularity-level thinking.
Vitalik: Yeah, there’s a long transition of improving regular people’s ability to interact with the environment, do work, and handle all those things in a way where the human is still contributing the largest share of the output, instead of being relegated to the tiny role of just giving prompts to the AIs until eventually the AI starts writing the prompts itself. So, yeah, it’s a very exciting path.
Sina: Okay, last question and then we can call it. What is Lojban? Did I say that right?
Vitalik: Oh, yeah, it’s Lojban. I think you had a bit of dyslexia with the order of letters there. It’s a really fun artificial language that people have invented over the last half century. It’s kind of similar to Esperanto in that it’s a constructed language, except the goal of Lojban is to have a perfectly logical grammatical structure. Words can still have meanings that are vague or approximate because that’s unavoidable when interacting with the real world. But the grammatical structure is such that a computer program can parse what is the verb, what is the noun, or the equivalents of those things, and what refers to what.
The goal is to make it easier to express very precise and clear ideas, and also potentially create a language that’s easier to learn because the vocabulary and rules are more self-contained and consistent. Though, I think Lojban itself doesn’t do a super great job of that. I feel like Toki Pona does somewhat better, though it goes in a totally different direction. But yeah, it’s a fun language.
Sina: How do I say something like, “Nice to meet you,” in Lojban?
Vitalik: Oh, it’s something like “Mipmi.” Nice to meet you. If only we could change green tea to something like “I like Vitalik” with a red line in it as well, right? There are definitely artificial languages that have Easter eggs in them. I know one in Klingon, for example. There’s this meme among linguists about how you pronounce “ghoti.” It’s pronounced “fish.” Why? Because you take the “gh” from “enough,” the “o” from “women,” and the “ti” from “nation,” and you get “fish.” It’s a common in-joke among people who make fun of English’s totally broken spelling system. But in Klingon, the word for fish actually is “ghoti.”
Sina: Wow, that’s hilarious! Okay, well, on that profound note, I think we can bring it to a close. Thanks so much for taking the time, Vitalik. I really appreciate it.
Vitalik: Yeah, thank you so much, Sina. It’s been fun.