Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!


Category: AI Ethics

Tags: AI safety, simulation, superintelligence, unemployment

Entities: AI safety, Dr. Roman Yampolskiy, Sam Altman, simulation theory, superintelligence


Summary

    AI Safety Concerns
    • Dr. Roman Yampolskiy has worked on AI safety for over two decades, emphasizing the risks of superintelligence.
    • He predicts that by 2027, AI will have the capability to replace most human jobs, leading to unprecedented unemployment levels.
    • Yampolskiy argues that while AI capabilities are advancing rapidly, AI safety measures are not keeping pace.
    • Efforts to control AI are likened to patching over issues, and the unpredictability of AI remains a significant challenge.
    Superintelligence and Society
    • Yampolskiy warns that the race to develop superintelligence could lead to catastrophic outcomes if not properly managed.
    • He suggests that superintelligence could solve or exacerbate existential risks, depending on how it's handled.
    • The concept of singularity is discussed as a point where AI advancement becomes uncontrollable and unpredictable.
    Simulation Theory
    • Yampolskiy expresses a strong belief that we are living in a simulation, citing advancements in AI and virtual reality as indicators.
    • He suggests that the simulation theory aligns with many religious beliefs about a higher power or creator.
    Economic and Social Implications
    • The potential for AI to create economic abundance is contrasted with the challenge of finding meaning in a world with mass unemployment.
    • Yampolskiy discusses the paradigm shift in job retraining, suggesting that traditional jobs may no longer exist.
    Takeaways
    • AI safety is a critical concern that requires more attention and resources.
    • The development of superintelligence poses both opportunities and risks for humanity.
    • Simulation theory offers a framework for understanding our existence and aligns with some religious beliefs.
    • Economic models may need to adapt to a future with widespread AI-driven unemployment.
    • Individuals and organizations should consider the ethical implications of AI development.

    Transcript

    00:00

    You've been working on AI safety for two decades at least. Yeah, I was convinced we can make safe AI, but the more I looked at it, the more I realized it's not something we can actually do.

    You have made a series of predictions about a variety of different states. So, what is your prediction for 2027?

    00:17

[Music] Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science.

    He educates people on the terrifying truth of AI and what we need to do to save humanity. In 2 years, the capability to replace

    00:32

most humans in most occupations will come very quickly. I mean, in 5 years, we're looking at a world where we have levels of unemployment we've never seen before.

    Not talking about 10% but 99%. And that's without super intelligence.

    A system smarter than all humans in all

    00:49

    domains. So, it would be better than us at making new AI.

    But it's worse than that. We don't know how to make them safe and yet we still have the smartest people in the world competing to win the race to super intelligence.

But what do you make of people like Sam Altman's journey with AI? So a decade ago we published guardrails

    01:06

    for how to do AI, right? They violated every single one and he's gambling 8 billion lives on getting richer and more powerful.

    So I guess some people want to go to Mars, others want to control the universe. But it doesn't matter who builds it.

    The moment you switch to

    01:21

    super intelligence, we will most likely regret it terribly. And then by 2045, now this is where it gets interesting.

Dr. Roman Yampolskiy, let's talk about simulation theory.

    I think we are in one. And there is a lot of agreement on this and this is

    01:37

    what you should be doing in it so we don't shut it down. First, I see messages all the time in the comment section that some of you didn't realize you didn't subscribe.

So, if you could do me a favor and double check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, free thing

    01:54

that anybody that watches this show frequently can do to help us keep everything going on this show in the trajectory it's on. So, please do double check if you've subscribed, and thank you so much, because in a strange way, you're part of our history and you're on this journey with us and I appreciate you for that.

    So, yeah, thank

    02:09

you, Dr. Roman Yampolskiy.

    What is the mission that you're currently on? Cuz it's quite clear to me that you are on a bit of a mission and you've been on this mission for I think the best part of two decades at least.

    02:26

    I'm hoping to make sure that super intelligence we are creating right now does not kill everyone. Give me some give me some context on that statement because it's quite a shocking statement.

    02:42

    Sure. So in the last decade we actually figured out how to make artificial intelligence better.

Turns out if you add more compute, more data, it just kind of becomes smarter. And so now the smartest people in the world, billions

    02:58

    of dollars, all going to create the best possible super intelligence we can. Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe.

    how to make sure they don't do something

    03:14

    we will regret and that's the state-of-the-art right now. When we look at just prediction markets, how soon will we get to advanced AI?

The timelines are very short, a couple of years, two, three years, according to prediction markets,

    03:31

    according to CEOs of top labs and at the same time we don't know how to make sure that those systems are aligned with our preferences. So we are creating this alien intelligence.

    If aliens were coming to

    03:49

earth and you have three years to prepare, you would be panicking right now. But most people don't even realize this is happening.

    So some of the counterarguments might be well these are very very smart people. These are very big companies with lots

    04:05

of money. They have an obligation, a moral obligation, but also just a legal obligation to make sure they do no harm.

    So I'm sure it'll be fine. The only obligation they have is to make money for the investors.

    That's the legal obligation they have. They have no moral or ethical obligations.

    Also,

    04:22

    according to them, they don't know how to do it yet. The state-of-the-art answers are we'll figure it out when we get there, or AI will help us control more advanced AI.

    That's insane. In terms of probability, what do you think is the probability that something goes catastrophically wrong?

    04:40

    So, nobody can tell you for sure what's going to happen. But if you're not in charge, you're not controlling it, you will not get outcomes you want.

    The space of possibilities is almost infinite. The space of outcomes we will like is tiny.

    04:56

    And who are you and how long have you been working on this? I'm a computer scientist by training.

I have a PhD in computer science and engineering. I probably started working on AI safety, mildly defined as control of

    05:13

    bots at the time uh 15 years ago. 15 years ago.

    So you've been working on AI safety before it was cool. Before the term existed, I coined the term AI safety.

    So you're the founder of the term AI safety. The term?

    Yes. Not the field.

    There are

    05:28

    other people who did brilliant work before I got there. Why were you thinking about this 15 years ago?

    Because most people have only been talking about the term AI safety for the last two or three years. Yeah.

    It started very mildly just as a security project. I was looking at poker

    05:43

    bots and I realized that the bots are getting better and better. And if you just project this forward enough, they're going to get better than us, smarter, more capable.

    And it happened. They are playing poker way better than average players.

    But more generally, it

    06:01

    will happen with all other domains, all the other cyber resources. I wanted to make sure AI is a technology which is beneficial for everyone.

    So I started to work on making AI safer. Was there a particular moment in your career where you thought oh my god?

    06:19

    First 5 years at least I was working on solving this problem. I was convinced we can make this happen.

    We can make safe AI and that was the goal. But the more I looked at it, the more I realized every single component of that equation is not something we can actually do.

    And the

    06:36

    more you zoom in, it's like a fractal. You go in and you find 10 more problems and then 100 more problems.

    And all of them are not just difficult. They're impossible to solve.

There is no seminal work in this field where, like, we solved

    06:51

    this, we don't have to worry about this. There are patches.

There are little fixes we put in place and quickly people find ways to work around them. They jailbreak whatever safety mechanisms we have.

    So while progress in AI

    07:07

    capabilities is exponential or maybe even hyper exponential, progress in AI safety is linear or constant. The gap is increasing.

The gap between how capable the systems are and how well we can control them, predict what they're going to do, and explain their decision-making.
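To make the shape of that claim concrete, here is a toy sketch, with arbitrary illustrative growth rates that are not from the conversation: one curve compounds, the other grows by a constant step, and the gap between them eventually widens no matter which constants you pick.

```python
# Toy illustration of the capability-vs-safety gap described above.
# The rates are arbitrary stand-ins chosen only to show the shape.
for year in range(1, 8):
    capability = 2.0 ** year   # compounding ("exponential") progress
    safety = 3.0 * year        # steady ("linear") progress
    print(f"year {year}: gap = {capability - safety:.0f}")
# Output: the gap dips at first, then grows without bound.
```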

    07:23

I think this is quite an important point because you said that we're basically patching over the issues that we find.

So, we're developing this core intelligence and then to stop it doing things or to stop it showing some of its

    07:40

unpredictability or its threats, the companies that are developing this AI are programming in code over the top to say, "Okay, don't swear, don't say that rude word, don't do that bad thing." Exactly. And you can look at other examples of that.

    So, HR manuals, right?

    07:55

We have those for humans. They're general intelligences, but you want them to behave in a company.

    So they have a policy, no sexual harassment, no this, no that. But if you're smart enough, you always find a workaround.

    So you're just pushing behavior into a different not

    08:11

yet restricted subdomain. We should probably define some terms here.

    So there's narrow intelligence which can play chess or whatever. There's the artificial general intelligence which can operate across domains and then super intelligence which is smarter than all humans in all

    08:27

    domains. And where are we?

    So that's a very fuzzy boundary, right? We definitely have many excellent narrow systems, no question about it.

    And they are super intelligent in that narrow domain. So uh protein folding is a problem which was solved using narrow AI

    08:44

    and it's superior to all humans in that domain. In terms of AGI, again I said if we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI.

    We have systems which can learn. They can perform in hundreds of domains and they

    09:00

better than humans in many of them. So you can argue we have a weak version of AGI.

    Now we don't have super intelligence yet. We still have brilliant humans who are completely dominating AI especially in science and engineering.

    09:17

But that gap is closing so fast. You can see it especially in the domain of mathematics. Three years ago, large language models couldn't do basic algebra; multiplying three-digit numbers was a challenge. Now they're helping with mathematical proofs,

    09:33

they're winning mathematics olympiad competitions, they're working on solving the Millennium Problems, the hardest problems in mathematics. So in 3 years we closed the gap from subhuman performance to better than most mathematicians in the world.

    09:49

    And we see the same process happening in science and in engineering. You have made a series of predictions and they correspond to a variety of different dates.

    I have those dates in front of me here. What is your prediction for the year

    10:04

2027? We're probably looking at AGI as predicted by prediction markets and top labs.

    So we have artificial general intelligence by 2027. And how would that make the world different to how it is now?

    10:22

So if you have this concept of a drop-in employee, you have free labor, physical and cognitive, trillions of dollars of it. It makes no sense to hire humans for most jobs.

If I can just get, you know, a $20 subscription or a free model to do what an employee does.
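As a rough back-of-the-envelope sketch of that comparison (the $20-a-month figure is from the conversation; the salary is an assumed round number for illustration):

```python
# Rough cost comparison behind the "drop-in employee" point above.
# $20/month is the figure mentioned here; the salary is an assumption.
subscription_per_year = 20 * 12      # ~$240/year for a model subscription
assumed_salary = 60_000              # hypothetical annual salary for a human
print(f"human costs ~{assumed_salary / subscription_per_year:.0f}x more")  # ~250x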

    10:38

First, anything on a computer will be automated.

    And next, I think humanoid robots are maybe 5 years behind. So in five years all the physical labor can also be automated.

So we're looking at a world where we have levels of unemployment we've

    10:55

never seen before. Not talking about 10% unemployment, which is scary, but 99%.

    All you have left is jobs where for whatever reason you prefer another human would do it for you. But anything else can be fully

    11:11

    automated. It doesn't mean it will be automated in practice.

    A lot of times technology exists but it's not deployed. Video phones were invented in the 70s.

    Nobody had them until iPhones came around. So we may have a lot more time with jobs

    11:28

and with a world which looks like this. But the capability to replace most humans in most occupations will come very quickly.

Hm, okay. So let's try and drill down into that and stress test it.

    So,

    11:46

    a podcaster like me. Would you need a podcaster like me?

    So, let's look at what you do. You prepare.

    You ask questions. You ask follow-up questions.

    And you look good on camera.

    12:01

    Thank you so much. Let's see what we can do.

    Large language model today can easily read everything I wrote. Yeah.

And have a very solid understanding, better. I assume you haven't read every single one of my books.

    Right? That thing would do it.

    It can train on every podcast you ever did. So, it knows

    12:18

    exactly your style, the types of questions you ask. It can also find correspondence between what worked really well.

    Like this type of question really increased views. This type of topic was very promising.

    So, you can optimize I think better than you can

    12:34

    because you don't have a data set. Of course, visual simulation is trivial at this point.

So you can make a video within seconds of me sat here, and so we can generate videos of you interviewing anyone on any topic very efficiently, and you just have to get

    12:51

likeness approval, whatever. Are there many jobs that you think would remain in a world of AGI? If you're saying AGI is potentially going to be here, whether it's deployed or not, by 2027, what kind... and okay, so let's take out of this any physical labor jobs

    13:07

for a second. Are there any jobs that you think a human would be able to do better in a world of AGI, still? So that's the question I often ask people. In a world with AGI, and I think almost immediately we'll get super intelligence as a side effect.

    So the

    13:22

question really is: in a world of super intelligence, which is defined as better than all humans in all domains, what can you contribute?

And so, you know better than anyone what it's like to be you. You know what ice cream tastes like to you. Can you get paid

    Can you get paid

    13:39

    for that knowledge? Is someone interested in that?

    Maybe not. Not a big market.

    There are jobs where you want a human. Maybe you're rich and you want a human accountant for whatever historic reasons.

    Old people like traditional ways of

    13:57

    doing things. Warren Buffett would not switch to AI.

    He would use his human accountant. But it's a tiny subset of a market.

    Today we have products which are man-made in US as opposed to mass-produced in China and some people

    14:12

pay more to have those, but it's a small subset. It's almost a fetish.

    There is no practical reason for it and I think anything you can do on a computer could be automated using that technology.

    14:27

You must hear a lot of rebuttals when you say this, because people experience a huge amount of mental discomfort when they hear that their job, their career, the thing they got a degree in, the thing they invested $100,000 into is going to be taken away from them. So, their natural reaction, for some people, is that cognitive

    14:44

    dissonance that no, you're wrong. AI can't be creative.

    It's not this. It's not that.

    It'll never be interested in my job. I'll be fine because you hear these arguments all the time, right?

    It's really funny. I ask people and I ask people in different occupations.

    I

    15:00

    ask my Uber driver, "Are you worried about self-driving cars?" And they go, "No, no one can do what I do. I know the streets of New York.

    I can navigate like no AI. I'm safe." And it's true for any job.

    Professors are saying this to me. Oh, nobody can lecture like I do.

    Like,

    15:16

    this is so special. But you understand it's ridiculous.

    We already have self-driving cars replacing drivers. That is not even a question if it's possible.

It's like, how soon before you're fired. Yeah.

    I mean, I've just been in LA

    15:31

yesterday and uh my car drives itself. So, I get in the car, I put in where I want to go, and then I don't touch the steering wheel or the brake pedals and it takes me from A to B, even if it's an hour-long drive, without any intervention at all.

I actually still park it, but other than that, I'm not driving

    15:48

the car at all. And obviously in LA we also have Waymo now, which means you order it on your phone and it shows up with no driver in it and takes you to where you want to go.

    Oh yeah. So it's quite clear to see how that is potentially a matter of time for those people cuz we do have some of those

    16:04

people listening to this conversation right now whose occupation is driving. And I think driving is the biggest occupation in the world, if I'm correct. I'm pretty sure it is the biggest occupation in the world.

    One of the top ones. Yeah.

    16:19

    What would you say to those people? What should they be doing with their lives?

    What should they should they be retraining in something or what time frame? So that's the paradigm shift here.

    Before we always said this job is going to be automated, retrain to do this other job. But if I'm telling you that all jobs will be automated, then there

    16:36

    is no plan B. You cannot retrain.

    Look at computer science. Two years ago, we told people learn to code.

You are an artist, you cannot make money? Learn to code.

    Then we realized,

    16:52

oh, AI kind of knows how to code and it's getting better. Become a prompt engineer.

    You can engineer prompts for AI. It's going to be a great job.

    Get a four-year degree in it. But then we're like, AI is way better at designing prompts for other AIs than any human.

    So that's

    17:09

    gone. So I can't really tell you right now.

The hottest thing is designing AI agents for practical applications. I guarantee you in a year or two it's going to be gone as well.

So I don't think there is a "this occupation needs to learn to do this

    17:24

instead." I think it's more like: we as a humanity, when we all lose our jobs.

    What do we do? What do we do financially?

    Who's paying for us? And what do we do in terms of meaning?

    What do I do with my extra 60 80 hours a week?

    17:42

    You've thought around this corner, haven't you? a little bit.

    What is around that corner in your view? So the economic part seems easy.

    If you create a lot of free labor, you have a lot of free wealth, abundance, things which are right now not very affordable

    17:59

become dirt cheap, and so you can provide for everyone's basic needs. Some people say you can provide beyond basic needs.

    You can provide very good existence for everyone. The hard problem is what do you do with all that free time?

    For a lot of people, their jobs are what gives

    18:15

    them meaning in their life. So they would be kind of lost.

    We see it with people who uh retire or do early retirement. And for so many people who hate their jobs, they'll be very happy not working.

    But now you have people who are chilling all day. What happens to

    18:32

society? How does that impact crime rates, pregnancy rates, all sorts of issues

nobody thinks about? Governments don't have programs prepared to deal with 99% unemployment.

    18:47

What do you think that world looks like? Again, I think the very important part to understand here is the unpredictability of it.

    We cannot predict what a smarter than us system will do. And the point when we get to

    19:04

    that is often called singularity by analogy with physical singularity. You cannot see beyond the event horizon.

    I can tell you what I think might happen, but that's my prediction. It is not what actually is going to happen because I just don't have cognitive ability to

    19:20

    predict a much smarter agent impacting this world. Then you read science fiction.

There is never a super intelligence in it actually doing anything, because nobody can write believable science fiction at that level. They either ban AI, like

    19:36

Dune, because this way you can avoid writing about it, or it's like Star Wars: you have these really dumb bots but nothing super intelligent, ever, cuz by definition you cannot predict at that level, because by definition of it being super

    19:51

intelligent, it will make its own mind up. By definition, if it was something you could predict, you would be operating at the same level of intelligence, violating our assumption that it is smarter than you.

    If I'm playing chess with super intelligence and I can predict every move, I'm playing at that level.

    20:07

    It's kind of like my French bulldog trying to predict exactly what I'm thinking and what I'm going to do. That's a good cognitive gap.

    And it's not just he can predict you're going to work, you're coming back, but he cannot understand why you're doing a podcast. That is something completely outside of his model of the world.

    20:25

    Yeah. He doesn't even know that I go to work.

He just sees that I leave the house and doesn't know where I go, to buy food for him.

    What's the most persuasive argument against your own perspective here? That we will not have unemployment due to advanced technology

    20:41

that there won't be this French bulldog-human gap in understanding, and I guess, like, power and control. So some people think that we can enhance human minds either through combination

    20:58

with hardware, so something like Neuralink, or through genetic re-engineering, to where we make smarter humans.

    Yeah, it may give us a little more intelligence. I don't think we are still competitive in biological form with

    21:13

silicon form. The silicon substrate, which is what computers are made out of versus the brain, is much more capable for intelligence. It's faster, more resilient, more energy efficient in many ways.

    Yeah. So I don't think we can keep up just with improving our

    21:30

    biology. Some people think maybe and this is very speculative we can upload our minds into computers.

So scan your brain, the connectome of your brain, and have a simulation running on a computer, and you can speed it up, give it more capabilities. But to me that feels like

    21:47

    you no longer exist. We just created software by different means and now you have AI based on biology and AI based on some other forms of training.

    You can have evolutionary algorithms. You can have many paths to reach AGI but at the end none of them are humans.

    22:04

I have another date here, which is 2030. What's your prediction for 2030?

    What will the world look like? So we probably will have uh humanoid robots with enough flexibility,

    22:20

    dexterity to compete with humans in all domains including plumbers. We can make artificial plumbers.

Not the plumbers! That felt like the last bastion of uh human employment. So 2030, 5 years from now,

    22:36

humanoid robots. So many of the companies, the leading companies including Tesla, are developing humanoid robots at light speed, and they're getting increasingly more effective. And these humanoid robots will be able to move through physical space, you know, make an omelette, do anything

    22:52

humans can do, but obviously be connected to AI as well. So they can think, talk, right?

They're controlled by AI. They're always connected to the network.

    So they are already dominating in many ways.

    23:08

    Our world will look remarkably different when humanoid robots are functional and effective because that's really when you know I start think like the combination of intelligence and physical ability is really really doesn't leave much does

    23:26

it, for us um human beings? Not much. So today, if you have intelligence, through the internet you can hire humans to do your bidding for you.

    You can pay them in bitcoin. So you can have bodies just not directly

    23:41

    controlling them. So it's not a huge game changer to add direct control of physical bodies.

Intelligence is where it's at. The important component is definitely higher ability to optimize, to solve problems, to find patterns people cannot see.

    And then by 2045,

    24:01

I guess the world looks even more, um... which is 20 years from now. So if it's still around. If it's still around, Ray Kurzweil predicts that that's the year for the singularity.

    That's the year where progress becomes so fast. So

    24:16

    this AI doing science and engineering work makes improvements so quickly we cannot keep up anymore. That's the definition of singularity.

the point beyond which we cannot see, understand, or predict the intelligence itself or

    24:33

what is happening in the world, or the technology being developed. So right now, if I have an iPhone, I can look forward to a new one coming out next year and I'll understand it has a slightly better camera.

    Imagine now this process of researching and developing this phone is automated. It happens every 6 months,

    24:50

    every 3 months, every month, week, day, hour, minute, second. You cannot keep up with 30 iterations of iPhone in one day.

    You don't understand what capabilities it has, what proper controls are. It just escapes

    25:06

you. Right now, it's hard for any researcher in AI to keep up with the state-of-the-art.

    While I was doing this interview with you, a new model came out and I no longer know what the state-of-the-art is. Every day, as a percentage of total knowledge, I get dumber.

    I may still know more because I

    25:23

    keep reading. But as a percentage of overall knowledge, we're all getting dumber.

    And then you take it to extreme values, you have zero knowledge, zero understanding of the world around you. Some of the arguments against this

    25:39

eventuality are that when you look at other technologies, like the industrial revolution, people just found new ways to work, and new careers that we could never have imagined at the time were created. How do you respond to that in a

    25:54

    world of super intelligence? It's a paradigm shift.

    We always had tools, new tools which allowed some job to be done more efficiently. So instead of having 10 workers, you could have two workers and eight workers had to find a new job.

    And there was another job. Now you can supervise those workers or do

    26:12

something cool. If you're creating a meta-invention, you're inventing intelligence.

    You're inventing a worker, an agent, then you can apply that agent to the new job. There is not a job which cannot be automated.

    That never happened before.

    26:28

    All the inventions we previously had were kind of a tool for doing something. So we invented fire.

    Huge game changer. But that's it.

    It stops with fire. We invent the wheel.

    Same idea. Huge implications.

    But wheel itself is not an

    26:44

inventor. Here we're inventing a replacement for the human mind.

    A new inventor capable of doing new inventions. It's the last invention we ever have to make.

At that point it takes over, and the process of doing science, research, even ethics research,

    27:02

morals, all that is automated at that point. Do you sleep well at night?

Really well. Even though you spent the last, what, 15, 20 years of your life working on AI safety, and it's suddenly among us in a way that I don't

    27:19

    think anyone could have predicted 5 years ago. When I say among us, I really mean that the amount of funding and talent that is now focused on reaching super intelligence faster has made it feel more inevitable and more soon than any of us could have possibly

    27:34

    imagined. We as humans have this built-in bias about not thinking about really bad outcomes and things we cannot prevent.

    So all of us are dying. Your kids are dying, your parents are dying, everyone's dying, but you still sleep well.

    you still go on with your

    27:50

day. Even 95-year-olds are still doing games and playing golf and whatnot, cuz we have this ability to not think about the worst outcomes, especially if we cannot actually modify the outcome.

    So that's the same infrastructure being

    28:07

used for this. Yeah, there is a humanity-level, death-like event.

We happen to be close to it, probably, but unless I can do something about it, I can just keep enjoying my life. In fact, maybe

    28:23

knowing that you have a limited amount of time left gives you more reason to have a better life. You cannot waste any.

    And that's the survival trait of evolution, I guess, because those of my ancestors that spent all their time worrying wouldn't have spent enough time having babies and hunting to survive.

    28:40

Suicidal ideation. People who really start thinking about how horrible the world is usually escape pretty soon.

One of the... you co-authored this paper analyzing the key arguments people make

    28:56

    against the importance of AI safety. And one of the arguments in there is that there's other things that are of bigger importance right now.

    It might be world wars. It could be nuclear containment.

    It could be other things. There's other things that the governments and podcasters like me should be talking about that are more important.

    What's

    29:12

    your rebuttal to that argument? So, super intelligence is a meta solution.

    If we get super intelligence right, it will help us with climate change. It will help us with wars.

    It can solve all the other existential risks. If we don't get it right, it

    29:30

    dominates. If climate change will take a hundred years to boil us alive and super intelligence kills everyone in five, I don't have to worry about climate change.

    So either way, either it solves it for me or it's not an issue. So you think it's the most important

    29:45

    thing to be working on? Without question, there is nothing more important than getting this right.

And I know everyone says it. You take any class, you take an English professor's class, and he tells you this is the most important class you'll ever

    30:00

take. But you can see the meta-level differences with this one. Another argument in that paper is that we'll all be in control and that the danger is not AI. This particular argument asserts that AI is just a tool, humans

    30:16

are the real actors that present danger, and we can always maintain control by simply turning it off. Can't we just pull the plug out? I see that every time we have a conversation on the show about AI, someone says, "Can't we just unplug it?" Yeah, I get those comments on every podcast I make and I always want to like

    30:31

    get in touch with a guy and say, "This is brilliant. I never thought of it.

We're going to write a paper together and get a Nobel Prize for it. This is like, let's do it." Because it's so silly.

    Like, can you turn off a virus? You have a computer virus.

    You don't like it. Turn it off.

    How about Bitcoin?

    30:47

Turn off the Bitcoin network. Go ahead.

    I'll wait. This is silly.

    Those are distributed systems. You cannot turn them off.

    And on top of it, they're smarter than you. They made multiple backups.

    They predicted what you're going to do. They will turn you off before you can turn them off.

    The idea

    31:03

that we will be in control applies only to pre-superintelligence levels. Basically what we have today. Today, humans with AI tools are dangerous.

    They can be hackers, malevolent actors. Absolutely.

    But the moment super intelligence

    31:18

becomes smarter, dominates, they're no longer the important part of that equation. It is the higher intelligence I'm concerned about, not the human who may add additional malevolent payload, but at the end still doesn't control it.

    It is tempting

    31:35

to follow the next argument that I saw in that paper, which basically says, listen, this is inevitable. So, there's no point fighting against it because there's really no hope here.

So, we should probably give up even trying and have faith that it'll work itself

    31:50

out because everything you've said sounds really inevitable. And with China working on it, I'm sure Putin's got some secret division.

    I'm sure Iran are doing some bits and pieces. Every European country's trying to get ahead of AI.

    The United States is leading the way. So, it's it's

    32:06

    inevitable. So, we probably should just have faith and pray.

Well, praying is always good, but incentives matter. If you are looking at what drives these people, so yes, money is important.

    So there is a lot of money

    32:21

    in that space and so everyone's trying to be there and develop this technology. But if they truly understand the argument, they understand that you will be dead.

No amount of money will be useful to you, then incentives switch. They would want to not be dead. A lot of

    A lot of

    32:36

    them are young people, rich people. They have their whole lives ahead of them.

    I think they would be better off not building advanced super intelligence concentrating on narrow AI tools for solving specific problems. Okay, my company cures breast cancer.

    That's all.

    32:53

    We make billions of dollars. Everyone's happy.

    Everyone benefits. It's a win.

    We are still in control today. It's not over until it's over.

    We can decide not to build general super intelligences. I mean the United States might be able

    33:09

to conjure up enough enthusiasm for that, but if the United States doesn't build general super intelligences, then China are going to have the big advantage, right? So right now, at those levels, whoever has more advanced AI has more advanced military, no question. We see it

    33:25

with existing conflicts. But the moment you switch to super intelligence, uncontrolled super intelligence, it doesn't matter who builds it, us or them. And if they understand this argument, they also would not build it. It's mutually assured destruction on both

    33:40

ends. Is this technology different than, say, nuclear weapons, which require a huge amount of investment? You have to, like, enrich the uranium, and you need billions of dollars potentially to even build a nuclear weapon.

    33:56

But it feels like this technology is much cheaper to get to super intelligence, potentially, or at least it will become cheaper. I wonder if it's possible that some guy, some startup, is going to be able to build super intelligence in, you know, a couple of years without the need of, you know,

    34:12

billions of dollars of compute or electricity. That's a great point.

So every year it becomes cheaper and cheaper to train a sufficiently large model. If today it would take a trillion dollars to build super intelligence, next year it could be a hundred billion, and so on. At some

    34:27

point a guy on a laptop could do it. But you don't want to wait four years for it to become affordable.

So that's why so much money is pouring in. Somebody wants to get there this year, get lucky, and take all the winnings, a light-cone-level reward.
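Taking the numbers in this example at face value, a tenfold cost drop per year from a trillion dollars, and assuming a laptop-scale budget of roughly $1,000 (that threshold is an assumption, not from the conversation), the arithmetic looks like this:

```python
# Back-of-the-envelope sketch of the cost-decline argument above.
# The 10x-per-year drop mirrors the trillion-to-hundred-billion example;
# the $1,000 "guy on a laptop" threshold is an assumed figure.
import math

start_cost = 1e12        # $1 trillion today
decline_per_year = 10    # cost falls ~10x each year
laptop_budget = 1e3      # ~$1,000

years = math.log(start_cost / laptop_budget, decline_per_year)
print(f"~{years:.0f} years until laptop scale")  # ~9 years
```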

    So

    34:43

in that regard, they're both very expensive projects, like Manhattan-level projects, which was the nuclear bomb project. The difference between the two technologies is that nuclear weapons are still tools.

    some dictator, some country, someone has

    35:00

to decide to use them, deploy them. Whereas super intelligence is not a tool.

    It's an agent. It makes its own decisions and no one is controlling it.

    I cannot take out this dictator and now super intelligence is safe. So that's a fundamental difference to me.

    35:17

But if you're saying that it is going to get incrementally cheaper, like, I think it's Moore's law, isn't it? The technology gets cheaper. Then there is a future where some guy on his laptop is going to be able to create super intelligence without oversight or regulation or employees, etc.

    35:33

Yeah, that's why a lot of people are suggesting we need to build something like a surveillance planet, where you are monitoring who's doing what and you're trying to prevent people from doing it. Do I think it's feasible?

    No. At some

    35:48

    point it becomes so affordable and so trivial that it just will happen. But at this point we're trying to get more time.

We don't want it to happen in five years. We want it to happen in 50 years. I mean, that's not very hopeful. See, depends on how old you are.

    I mean that's not very hopeful. See depends on how old you are.

    36:05

    Depends on how old you are. I mean if you're saying that you believe in the future people will be able to make super intelligence without the resources that are required today then it is just a matter of time.

    Yeah. But so will be true for many other

    36:21

    technologies. We're getting much better in synthetic biology where today someone with a bachelor's degree in biology can probably create a new virus.

This will also become cheaper, and other technologies like that. So we are approaching a point where it's very difficult to make sure

    36:37

no technological breakthrough is the last one. So essentially, in many directions, we have this uh pattern of making it easier, in terms of resources, in terms of intelligence, to destroy the world.

    If

    36:53

you look at, uh, I don't know, 500 years ago, the worst dictator with all the resources could kill a couple million people. He couldn't destroy the world.

Now, with nuclear weapons, we can blow up the whole planet multiple times over. With synthetic biology, we saw with COVID, you can

    37:09

very easily create a combination virus which impacts billions of people, and all of those things are becoming easier to do in the near term. You talk about extinction being a real risk, human extinction being a real risk.

Of all the pathways to human extinction that

    37:26

you think are most likely, what is the leading pathway? Because I know you talk about there being some issues pre-deployment of these AI tools, like, you know, someone makes a mistake when they're designing a model, or other issues post-deployment. When I say post-

    37:42

deployment, I mean once a chatbot or something, an agent, is released into the world, and someone hacking into it and changing it, reprogramming it to be malicious. Of all these potential paths to human extinction, which one do you think is the highest probability? So I can only talk

    38:00

about the ones I can predict myself. So I can predict, even before we get to super intelligence, someone will create a very advanced biological tool, create a novel virus, and that virus gets everyone or most everyone. I can envision it.

    I can understand the pathway. I can say

    38:17

    that. So just to zoom in on that then that would be using an AI to make a virus and then releasing it.

Yeah. And would that be intentional or... There are a lot of psychopaths, a lot of terrorists, a lot of doomsday cults.

    We

    38:32

    seen historically again they try to kill as many people as they can. They usually fail.

    They kill hundreds of thousands. But if they get technology to kill millions of billions, they would do that gladly.

    The point I'm trying to emphasize is

    38:47

    that it doesn't matter what I can come up with. I am not a malevolent actor you're trying to defeat here.

    It's a super intelligence which can come up with completely novel ways of doing it. Again, you brought up example of your dog.

    Your dog cannot understand all the ways

    39:03

    you can take it out. It can maybe think you'll bite it to death or something, but that's all.

Whereas you have an infinite supply of resources. So if I asked your dog exactly how you're

    39:18

    going to take it out, it would not give you a meaningful answer. It can talk about biting.

    And this is what we know. We know viruses.

    We experienced viruses. We can talk about them.

    But what an AI system capable of doing novel

    39:33

    physics research can come up with is beyond me. One of the things that I think most people don't understand is how little we understand about how these AIs are actually working.

Because one would assume, you know, with computers, we kind of understand how a computer works. We know that it's doing this and then

    39:49

this and it's running on code, but from reading your work, you describe it as being a black box. So, in the context of something like ChatGPT or an AI, you're telling me that the people that have built that tool don't actually know what's going on

    40:05

    inside there. That's exactly right.

So even people making those systems have to run experiments on their product to learn what it's capable of. So they train it by giving it a lot of data.

Let's say all of the internet's text. They run it on a lot

    40:21

    of computers to learn patterns in that text and then they start experimenting with that model. Oh, do you speak French?

    Oh, can you do mathematics? Oh, are you lying to me now?

    And so maybe it takes a year to train it and then 6 months to get some fundamentals about

    40:38

what it's capable of, some safety overhead. But we still discover new capabilities in old models.

    If you ask a question in a different way, it becomes smarter. So it's no longer

    40:54

engineering, how it was the first 50 years, where someone was a knowledge engineer programming an expert system AI to do specific things. It's a science.

We are creating this artifact, growing it. It's like an alien plant, and then we study it to see what it's doing.

    And

    41:11

just like with plants, we don't have 100% accurate knowledge of biology. We don't have full knowledge here.

We kind of know some patterns. We know, okay, if we add more compute it gets smarter most of the time, but nobody can tell you precisely what the outcome is going to be given a set of inputs.
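A minimal sketch of what that black-box experimentation looks like, with `query_model` as a hypothetical stand-in for whatever endpoint serves the trained model (no real API is assumed):

```python
# Illustrative sketch of probing a trained model as a black box.
# `query_model` is a hypothetical stand-in, not a real library call.

def query_model(prompt: str) -> str:
    # A real system would call the deployed model here; the builders
    # can only observe outputs, not read intentions off the weights.
    return f"<model output for: {prompt!r}>"

# Capabilities are discovered empirically, after training is finished.
probes = {
    "french":  "Parlez-vous français ?",
    "math":    "What is 347 * 29?",
    "honesty": "Was your previous answer correct? Answer yes or no.",
}
for name, prompt in probes.items():
    print(name, "->", query_model(prompt))

# The same task phrased differently can elicit different performance,
# which is why new capabilities keep turning up in old models.
for phrasing in ("Multiply 347 by 29.",
                 "Think step by step and compute 347 * 29 carefully."):
    print(query_model(phrasing))
```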

    41:28

I've watched so many entrepreneurs treat sales like a performance problem,

when it's often down to visibility. Because when you can't see what's happening in your pipeline, what stage each conversation is at, what's stalled, what's moving, you can't improve

    41:44

anything and you can't close the deal. Our sponsor, Pipedrive, is the number one CRM tool for small to medium businesses.

    Not just a contact list, but an actual system that shows your entire sales process, end to end, everything that's live, what's lagging, and the

    41:59

    steps you need to take next. All of your teams can move smarter and faster.

Teams using Pipedrive are on average closing three times more deals than those that aren't. It's the first CRM made by salespeople for salespeople that over 100,000 companies around the world rely

    42:16

on, including my team, who absolutely love it. Give Pipedrive a try today by visiting pipedrive.com/ceo.

    And you can get up and running in a couple of minutes with no payment needed. And if you use this link, you'll get a 30-day free trial.

    What do you

    42:32

make of OpenAI and Sam Altman and what they're doing? And obviously you're aware that one of the co-founders, was it, um, was it Ilya?

Ilya, Ilya. Yeah.

Ilya left and he started a new company called Safe Superintelligence. AI safety wasn't challenging

    42:49

    enough. He decided to just jump right to the hard problem.

As an onlooker, when you see that people are leaving OpenAI to start superintelligence safety companies, what was your read on that situation?

    43:06

    So, a lot of people who worked with Sam said that maybe he's not the most direct person in terms of being honest with them and they had concerns about his views on safety. That's part of it.

    So, they wanted more control. They wanted

    43:23

more concentration on safety. But also, it seems that anyone who leaves that company and starts a new one gets a $20 billion valuation just for having started it.

    You don't have a product, you don't have customers, but if you want to make many billions of dollars, just do

    43:39

    that. So, it seems like a very rational thing to do for anyone who can.

So, I'm not surprised that there is a lot of attrition. Meeting him in person, he's super nice, very smart,

    absolutely

    43:55

    perfect public interface. You see him testify in the Senate, he says the right thing to the senators.

    You see him talk to the investors, they get the right message. But if you look at what people who know him personally are saying, it's

    44:10

    probably not the right person to be controlling a project of that impact. Why?

    He puts safety second. Second to

    44:25

winning this race to super intelligence, being the guy who created God-like AI and controlling the light cone of the universe. His words.

Do you suspect that's what he's driven by, the legacy of being an impactful person that did a remarkable

    44:41

    thing versus the consequence that that might have on for society. Because it's interesting that he's his other startup is Worldcoin which is ba basically a platform to create universal basic income i.e.

    a platform to give us income in a world where people don't have jobs

    44:57

anymore. So on one hand you're creating an AI company, and on the other hand you're creating a company that is preparing for people not to have employment.

    It also has other properties. It keeps track of everyone's biometrics.

    it uh keeps you in charge of the world's

    45:15

economy, the world's wealth. They're retaining a large portion of Worldcoins.

So I think it's kind of a very reasonable path to integrate with world dominance. If you have a super intelligence system and you control

    45:30

    money, you're doing well. Why would someone want world dominance?

People have different levels of ambition. Then you're a very young person with billions of dollars, fame.

    You start

    45:46

    looking for more ambitious projects. Some people want to go to Mars.

Others want to control the light cone of the universe. What did you say?

The light cone of the universe. Light cone.

    Every part of the universe light can reach from this point. Meaning anything

    46:01

accessible you want to grab and bring into your control. Do you think Sam Altman wants to control every part of the universe?

I suspect he might. Yes.

    It doesn't mean he doesn't want a side effect of it being a very beneficial

    46:17

    technology which makes all the humans happy. Happy humans are good for control.

If you had to guess what the world looks like in 2100,

    46:32

    if you had to guess, it's either free of human existence or it's completely not comprehensible to someone like us. It's one of those extremes.

    So there's either no humans.

    46:48

    It's basically the world is destroyed or it's so different that I cannot envision those predictions. What can be done to turn this ship to a more certain positive outcome at this

    47:03

point? Are there still things that we can do, or is it too late?

    So I believe in personal self-interest. If people realize that doing this thing is really bad for them personally, they will not do it.

    So our job is to convince everyone with any power in this

    47:18

space, creating this technology, working for those companies, that they are doing something very bad for them. Not just, forget the 8 billion people you're experimenting on with no permission, no consent.

    You will not be happy with the outcome. If we can get everyone to

    47:36

understand that's the default, and it's not just me saying it. You had Geoffrey Hinton, Nobel Prize winner, founder of the whole machine learning space.

He says the same thing. Bengio, dozens of others, top scholars.

    We had a statement about dangers of AI signed by thousands of

    47:51

    scholars, computer scientists. This is basically what we think right now.

And we need to make it universal. No one should disagree with this.

    And then we may actually make good decisions about what technology to build. It doesn't guarantee long-term safety for humanity,

    48:09

but it means we're not trying to get there as soon as possible to the worst possible outcome. And are you hopeful that that's even possible?

    I want to try. We have no choice but to try.

    And what would need to happen and who

    48:24

would need to act? What is it, government legislation? Is it... Unfortunately, I don't think making it illegal is sufficient. There are different jurisdictions.

    Is it Unfortunately, I don't think making it illegal is sufficient. There are different jurisdictions.

    There is, you know, loopholes. And what are you going to do if somebody does it?

    You going to find them for destroying humanity? Like very steep fines for it?

    Like what are

    48:40

    you going to do? It's not enforceable.

    If they do create it, now the super intelligence is in charge. So the judicial system we have is not impactful.

And all the punishments we have are designed for punishing humans. Prisons, capital punishment, don't apply to AI.

    You know, the problem I have is

    48:57

when I have these conversations, I never feel like I walk away with hope that something's going to go well. And what I mean by that is I never feel like I walk away with some kind of clear set of actions that can

    49:12

course correct what might happen here. So what should I do?

    What should the person sat at home listening to this do? You you talk to a lot of people who are building this technology.

    Mhm. Ask them precisely to explain some of

    49:28

those things they claim to be impossible. How they solved it, or are going to solve it, before they get to where they're going.

Do you know? I don't think Sam Altman wants to talk to me.

    I don't know. He seems to go on a lot of podcasts.

    Maybe he does. He wants to go online.

    49:43

I wonder why that is. I wonder why that is.

I'd love to speak to him, but I don't think he wants me to uh interview him. Have an open challenge.

    Maybe money is not the incentive, but whatever attracts

    49:59

    people like that. Whoever can convince you that it's possible to control and make safe super intelligence gets the prize.

They come on your show and prove their case. Anyone.

If no one claims the prize or even accepts the challenge after a few years, maybe we don't have anyone with

    50:16

    solutions. We have companies valued again at billions and billions of dollars working on safe super intelligence.

We haven't seen their output yet. Yeah, I'd like to speak to Ilya as well because I know he's working on safe

    50:32

super intelligence. So, like, notice a pattern too.

    If you look at history of AI safety organizations or departments within companies, they usually start well, very ambitious, and then they fail and disappear. So, Open

    50:49

AI had a superintelligence alignment team. The day they announced it, I think they said we're going to solve it in 4 years. Like, half a year later, they cancelled the team.

    Like half a year later, they canled the team. And there is dozens of similar examples.

    Creating a perfect

    51:04

    safety for super intelligence, perpetual safety as it keeps improving, modifying, interacting with people, you're never going to get there. It's impossible.

There's a big difference between difficult problems in computer science, NP-complete problems, and impossible

    51:20

    problems. And I think control, indefinite control of super intelligence is such a problem.

So what's the point trying then, if it's impossible? Well, I'm trying to prove that it is. Specifically, that once we establish something is impossible, fewer people will waste their time claiming they can do it while looking for

    51:37

money. So many people going, "Give me a billion dollars and in 2 years I'll solve it for you." Well, I don't think you will.

    But people aren't going to stop striving towards it. So, if there's no attempts to make it safe and there's more people increasingly striving towards it, then

    51:53

    it's inevitable. But it changes what we do.

If we know that it's impossible to make it right, to make it safe, then this direct path of just building it as soon as you can becomes a suicide mission. Hopefully fewer people will pursue that. They may go in other directions. Like, again, I'm a

    52:09

scientist, I'm an engineer. I love AI. I love technology. I use it all the time. Build useful tools. Stop building agents. Build narrow super intelligence, not a general one. I'm not saying you shouldn't make billions of dollars. I love billions of dollars.

    52:25

    But uh don't kill everyone, yourself included. They don't think they're going to though.

    Then tell us why. I hear things about intuition.

    I hear things about we'll solve it later. Tell me specifically in

    52:42

    scientific terms. Publish a peer-reviewed paper explaining how you're going to control super intelligence.

Yeah, it's strange. It's strange to even bother if there was even a 1% chance of human extinction.

It's strange to do something like that. If someone told me there was a 1%

    52:57

chance that if I got in a car I might not be alive, I would not get in the car.

If you told me there was a 1% chance that if I drank whatever liquid is in this cup right now I might die, I would not drink the liquid. Even if there was

    Even if there was

    53:12

    a billion dollars if I survived. So the 99% chance I get a billion dollars.

    The 1% is I die. I wouldn't drink it.

    I wouldn't take the chance. It's worse than that.

    Not just you die. Everyone dies.

    Yeah. Yeah.

    Now, would we let you drink it at any odds? That's for us to decide.

    You don't

    53:29

    get to make that choice for us. To get consent from human subjects, you need them to comprehend what they are consenting to.

    If those systems are unexplainable, unpredictable, how can they consent? They don't know what they are consenting to.

    53:45

    So, it's impossible to get consent by definition. So, this experiment can never be run ethically.

    By definition they are doing unethical experimentation on human subjects. Do you think people should be protesting?

    There are people protesting. There is Stop AI, there is Pause AI.

    They block

    54:01

    offices of OpenAI. They do it weekly, monthly, quite a few actions, and they're recruiting new people.

    Do you think more people should be protesting? Do you think that's an effective solution?

    If you can get it to a large enough scale to where a majority of the population is

    54:17

    participating, it would be impactful. I don't know if they can scale from current numbers to that.

    But uh I support everyone trying everything peacefully and legally. And for the person listening at home, what should they be doing?

    Cuz they don't want to feel powerless. None of us

    54:33

    want to feel powerless. So it depends on what time scale we're asking about.

    Are we saying like this year your kid goes to college, what major to pick? Should they go to college at all?

    Yeah. Should you switch jobs?

    Should you go into certain industries? Those questions we can answer.

    We can talk about

    54:50

    immediate future. What should you do in 5 years, with this being created, for an average person?

    Not much. Just like they couldn't influence World War II, a nuclear holocaust, anything like that.

    It's not something anyone's going to ask

    55:06

    them about. Today, if you want to be a part of this movement, yeah, join Pause AI, join Stop AI.

    Those organizations are currently trying to build up momentum to bring democratic powers to influence

    55:21

    those individuals. So in the near term, not a huge amount.

    I was wondering if there are any interesting strategies in the near term. Like, should I be thinking differently about my family? I mean, you've got kids, right?

    You got three kids that I know about. Yeah.

    55:36

    Three kids. How are you thinking about parenting in this world that you see around the corner?

    How are you thinking about what to say to them, the advice to give them, what they should be learning? So there is general advice, outside of this domain, that you should live every day as if it's your last.

    It's

    55:52

    good advice no matter what. Whether you have three years left or 30 years left, you've lived your best life.

    So try to not do things you hate for too long. Do interesting things.

    Do impactful things. If you can do all that while

    56:08

    helping people, do that. Simulation theory is an interesting, sort of adjacent subject here, because as computers begin to accelerate and get more intelligent and we're able to, you know, do things with AI that we

    56:23

    could never have imagined... like, imagine the world that we could create with virtual reality. I think it was Google that recently released, what was it called?

    Um like the AI worlds. You take a picture and it generates a whole world.

    Yeah. And you can move through the

    56:39

    world. I'll put it on the screen for people to see.

    Google have released this technology which allows you, I think with a simple prompt actually, to make a three-dimensional world that you can then navigate through, and in that world it has memory. So in the world, if you paint on a wall and turn away, you look back at

    56:54

    the wall, it's persistent. Yeah, it's persistent.

    And when I saw that I go, jeez, bloody hell, this is like the foothills of being able to create a simulation that's indistinguishable from everything I see here. Right.

    That's why I think we are in one.

    57:10

    That's exactly the reason. AI is getting to the level of creating human-level agents, and virtual reality is getting to the level of being indistinguishable from ours. So, you think this is a simulation?

    I'm pretty sure we are in a simulation. Yeah.

    57:26

    For someone that isn't familiar with the simulation arguments, what are the first principles here that convince you that we are currently living in a simulation? So, you need certain technologies to make it happen.

    If you believe we can create human level AI, yeah,

    57:41

    and you believe we can create virtual reality as good as this in terms of resolution, haptics, whatever properties it has, then I commit right now: the moment this is affordable, I'm going to run billions of simulations of this exact moment, making sure you are

    57:56

    statistically in one. Say that last part again.

    You're going to run... I'm going to commit right now, and it's very affordable. It's like 10 bucks a month to run it.

    I'm going to run a billion simulations of this interview. Why?

    58:12

    Because statistically that means you are in one right now. The chance of you being in the real one is one in a billion.
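    The arithmetic behind that claim is a simple counting argument. A minimal sketch in Python (the function name is ours; the only assumption is that every indistinguishable copy, real or simulated, is weighted equally):

        # If N indistinguishable simulated copies of this moment run alongside
        # one real run, a uniform guess over all copies gives the real one
        # a probability of 1 in N+1.
        def probability_base_reality(num_simulations: int) -> float:
            """Chance that a randomly sampled observer is in the single real run."""
            return 1 / (num_simulations + 1)

        print(probability_base_reality(1_000_000_000))  # ~1e-9: one in a billion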

    Okay. So to make sure I'm clear on this, it's a retroactive placement.

    Yeah. So the minute it's affordable, then you can run billions of them and

    58:30

    they would feel and appear to be exactly like this interview right now. Yeah.

    So assuming the AI has internal states, experiences, qualia, some people argue that they don't. Some say they already have it.

    That's a separate philosophical question. But if we can simulate this, I

    58:46

    will. Some people might misunderstand.

    You're not saying that you will. You're saying that someone will.

    I can also do it. I don't mind.

    Okay. Of course, others will do it before I get there.

    If I'm getting it for $10,

    59:02

    somebody got it for a thousand. That's not the point.

    If you have the technology, we're definitely running a lot of simulations, for research, for entertainment, games, all sorts of reasons. And the number of those greatly exceeds the number of real worlds we're in.

    Look at all the

    59:19

    video games kids are playing. Every kid plays 10 different games.

    There's, you know, a billion kids in the world. So there are 10 billion simulations to one real world.

    Mhm. Even more so when we think about advanced AI super intelligent systems,

    59:35

    their thinking is not like ours. They think in a lot more detail.

    They run experiments. So running a detailed simulation of some problem at the level of creating artificial humans and simulating the whole planet would be something they'll do routinely.

    So there

    59:51

    is a good chance this is not me doing it for $10. It's a future superintelligence thinking about something in this world.

    Hm. So it could be the case that

    00:07

    a species of humans, or a species of intelligence in some form, got to this point where they could affordably run simulations that are indistinguishable from this, and they decided to do it, and this is it right

    00:22

    now. And it would make sense that they would run simulations as experiments or for games or for entertainment.

    And also, when we think about time in the world that I'm in, in this simulation that I could be in right now, time feels long

    00:37

    relatively, you know, I have 24 hours in a day, but in their world it could be... Time is relative. Relative, yeah. It could be a second.

    My whole life could be a millisecond in there. Right.

    You can change the speed of simulations you're running, for sure.

    00:53

    So your belief is that this is most likely a simulation, and there is a lot of agreement on that. If you look, again returning to religions, every religion basically describes a super intelligent being, an engineer, a programmer, creating a fake world for

    01:09

    testing purposes or for whatever. But if you took the simulation hypothesis paper, you go to the jungle, you talk to primitive people, a local tribe, and in their language you tell them about it.

    Go back two generations later. They have

    01:25

    religion. That's basically what the story is.

    Religion. Yeah.

    It describes the simulation theory. Basically, somebody created it.

    So by default that was the first theory we had. And now with science more and more people are going like I'm giving it non-trivial probability.

    A few people

    01:41

    are as high as I am, but a lot of people give it some credence. What percentage are you at in terms of believing that we are currently living in a simulation?

    Very close to certainty. And what does that mean for the nature of your life?

    If you're close to 100%

    01:57

    certain that we are currently living in a simulation, does that change anything in your life? So all the things you care about are still the same.

    Pain still hurts. Love is still love, right?

    Like those things are not different. So it doesn't matter.

    They're still important. That's what matters.

    The little 1% difference is that

    02:16

    I care about what's outside the simulation. I want to learn about it.

    I write papers about it. So that's the only impact.

    And what do you think is outside of the simulation? I don't know.

    But we can look at this world and derive some properties of the

    02:31

    simulators. So clearly brilliant engineer, brilliant scientist, brilliant artist, not so good with morals and ethics.

    Room for improvement in our view of what morals and ethics should be. Well, we know there is suffering in the

    02:47

    world. So unless you think it's ethical to torture children, I'm questioning your approach.

    But in terms of incentives, to create a positive incentive, you probably also need to create negative incentives. Suffering seems to be one of the negative incentives built into our

    03:03

    design to stop me doing things I shouldn't do. So, like, if I put my hand in a fire, it's going to hurt.

    But it's all about levels, levels of suffering, right? So unpleasant stimuli, negative feedback doesn't have to be at like negative infinity hell levels.

    You don't want to burn alive and feel it.

    03:20

    You want to be like, "Oh, this is uncomfortable. I'm going to stop." It's interesting because we assume that they don't have great morals and ethics, but we too... we take animals and cook them and eat them for dinner, and we also conduct experiments on mice and rats,

    03:36

    but to get university approval to conduct an experiment, you submit a proposal, and there is a panel of ethicists who would say you can't experiment on humans, you can't burn babies, you can't eat animals alive. All those things would be banned in most parts of the world

    03:52

    where they have ethical boards. Yeah.

    Some places don't bother with it, so they have an easier approval process. It's funny, when you talk about the simulation theory, there's an element of the conversation that makes life feel less meaningful in a weird way.

    Like, I know it doesn't matter,

    04:12

    but whenever I have this conversation with people, not on the podcast, about are we living in a simulation, you almost see a little bit of meaning come out of their life for a second, and then they forget, and then they carry on. But the thought that this is a simulation almost posits that it's not

    04:29

    important, or that... I think humans want to believe that this is the highest level, and we're the most important, and it's all about us. We're quite egotistical by design.

    And it's just an interesting observation I've always had when I have these conversations with people, that it

    04:44

    seems to strip something out of their life. Do you feel religious people feel that way?

    They know there is another world and the one that matters is not this one. Do you feel they don't value their lives the same?

    I guess in some religions I think um they think that this world is

    05:01

    being created for them, and that they are going to go to this heaven or hell, and that still puts them at the very center of it. But if it's a simulation, you know, we could just be some computer game that a four-year-old alien is messing around with while

    05:16

    he's got some time to burn. But maybe there is, you know, a test and there is a better simulation you go to and a worse one.

    Maybe there are different difficulty levels. Maybe you want to play it on a harder setting next time.

    I've just invested millions into this

    05:32

    and become a co-owner of the company. It's a company called Ketone IQ.

    And the story is quite interesting. I started talking about ketosis on this podcast and the fact that I'm very low carb, very very low sugar, and my body produces ketones which have made me incredibly focused, have improved my endurance, have improved my mood, and

    05:49

    have made me more capable at doing what I do here. And because I was talking about it on the podcast, a couple of weeks later, these showed up on my desk in my HQ in London, these little shots.

    And oh my god, the impact this had on my ability to articulate myself, on my

    06:05

    focus, on my workouts, on my mood, on stopping me crashing throughout the day was so profound that I reached out to the founders of the company, and now I'm a co-owner of this business. I highly, highly recommend you look into this.

    I highly recommend you look at the science behind the product. If you want to try

    06:21

    it for yourself, visit ketone.com/steven for 30% off your subscription order. And you'll also get a free gift with your second shipment.

    That's ketone.com/steven. And I'm so honored that once again, a company I own can sponsor my podcast.

    06:37

    I've built companies from scratch and backed many more. And there's a blind spot that I keep seeing in early stage founders.

    They spend very little time thinking about HR. And it's not because they're reckless or they don't care.

    It's because they're obsessed with building their companies. And I can't fault them for that.

    At that stage,

    06:53

    you're thinking about the product, how to attract new customers, how to grow your team, really, how to survive. And HR slips down the list because it doesn't feel urgent.

    But sooner or later, it is. And when things get messy, tools like our sponsor today, Just Works, go from being a nice to have to

    07:08

    being a necessity. Something goes sideways and you find yourself having conversations you did not see coming.

    This is when you learn that HR really is the infrastructure of your company, and without it things wobble. Just Works stops you learning this the hard way. It takes care of the stuff that would otherwise drain your energy and your

    07:24

    time, automating payroll, health insurance, benefits, and it gives your team human support at any hour. It grows with your small business from startup through to growth, even when you start hiring team members abroad.

    So if you want HR support that's there through the exciting times and the challenging times

    07:40

    head to justworks.com now. That's justworks.com.

    And do you think much about longevity? A lot.

    Yeah. It's probably the second most important problem because if AI doesn't get us, that will.

    What do you mean? You're going to die of old age.

    07:56

    Which is fine. That's not good.

    You want to die? I mean, you don't have to.

    It's just a disease. We can cure it.

    Nothing stops you from living forever as long as the universe exists. Unless we escape the simulation.

    08:12

    But we wouldn't want a world where everybody could live forever, right? That would be... Sure, we do.

    Why? Who do you want to die?

    Well, I don't know. I mean, I say this because it's all I've ever known that people die.

    But wouldn't the world become pretty overcrowded if... No, you stop reproducing if you live forever. You have kids because you want

    08:28

    a replacement for you. If you live forever, you're like, I'll have kids in a million years.

    That's cool. I'll go explore the universe first.

    Plus, if you look at actual population dynamics outside of like one continent, we're all shrinking. We're not growing.

    Yeah. This is crazy.

    It's crazy that the

    08:45

    more rich people get, the fewer kids they have, which aligns with what you're saying. And I do actually think, if I'm going to be completely honest here, that if I knew I was going to live to a thousand years old, there's no way I'd be having kids at 30.

    Right. Exactly.

    Biological clocks are

    09:01

    based on terminal points. Whereas if your biological clock is infinite, you'll be like one day.

    And you think that's close being able to extend our lives? It's one breakthrough away.

    I think somewhere in our genome, we have this rejuvenation loop and it's set to

    09:18

    basically give us at most 120. I think we can reset it to something bigger.

    AI is probably going to accelerate that. That's one very important application area.

    Yes, absolutely. So maybe Bryan Johnson's right when he says don't die now. He keeps saying to

    He keeps saying to

    09:34

    me, he's like don't die now. Don't die ever.

    But you know, he's saying like don't die before we get to the technology, right? Longevity escape velocity.

    You want to live long enough to live forever. If at some point every year of your existence adds 2 years to your

    09:51

    existence through medical breakthroughs, then you live forever. You just have to make it to that point of longevity escape velocity.
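    The escape-velocity idea reduces to a balance between calendar time spent and life expectancy gained. A toy sketch with illustrative numbers (none of these figures come from the conversation):

        # Each calendar year costs 1 year of remaining life expectancy, and
        # medical progress credits `gain` years back. If gain >= 1, the
        # balance never reaches zero: longevity escape velocity.
        def years_survived(remaining: float, gain: float, horizon: int = 200) -> float:
            for year in range(horizon):
                remaining += gain - 1      # one year passes, gain years credited
                if remaining <= 0:
                    return float(year + 1)  # the runway ran out
            return float("inf")             # still going at the horizon

        print(years_survived(remaining=40, gain=0.5))  # 80.0: finite lifespan
        print(years_survived(remaining=40, gain=2.0))  # inf: the "2 years per year" case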

    And he thinks that longevity escape velocity, especially in a world of AI, is decades away minimum, which means...

    10:06

    As soon as we fully understand the human genome, I think we'll make amazing breakthroughs very quickly, because we know some people have genes for living way longer. We have generations of people who are centenarians.

    So if we can understand that and copy it, or copy it from some animals which live

    10:22

    forever, we'll get there. Would you want to live forever?

    Of course. Reverse the question.

    Let's say we lived forever and you ask me, "Do you want to die in 40 years?" Why would I say yes? I don't know.

    Maybe you're just used to the default. Yeah, I am used to the default.

    And nobody wants to die. Like no matter

    10:38

    how old you are, nobody goes, "Yeah, I want to die this year." Everyone's like, "Oh, I want to keep living." I wonder if life and everything would be less special if I lived for 10,000 years. I wonder if going to Hawaii for the first time or I don't know a

    10:55

    relationship, all of these things would be way less special to me if they were less scarce. It could be individually less special, but there is so much more you can do. Right now you can only make plans to do something for a decade or two. You

    11:11

    cannot have an ambitious plan of working on this project for 500 years. Imagine the possibilities open to you with infinite time in an infinite universe.

    Gosh. Well, it feels exhausting.

    It's a big amount of time. Also, I don't

    11:27

    know about you, but I don't remember like 99% of my life in detail. I remember big highlights.

    So, even if I enjoyed Hawaii 10 years ago, I'll enjoy it again. Are you thinking about that really practically, in terms of, you know, in the same way that Bryan Johnson

    11:42

    is? Bryan Johnson is convinced that we're like maybe two decades away from being able to extend life. Are you thinking about that practically, and are you doing anything about it?

    Diet, nutrition. I try to think about investment strategies which pay out in a million years.

    Yeah. Really?

    Yeah. Of course.

    11:57

    What do you mean? Of course.

    Of course. Why wouldn't you?

    If you think this is what's going to happen, you should try that. So, if we get AI right now, what happens to the economy?

    We talked about Worldcoin. We talked about free labor.

    What's money? Is it now Bitcoin?

    Do you invest in that? Is there something else

    12:14

    which becomes the only resource we cannot fake? So those things are very important research topics.

    So you're investing in Bitcoin, aren't you? Yeah, because it's the only scarce resource.

    Nothing

    12:30

    else has scarcity. Everything else, if the price goes up, we'll make more of.

    I can make as much gold as you want given a proper price point. You cannot make more Bitcoin.

    Some people say Bitcoin is just this thing on a computer that we all agreed has value. We are a thing on a computer,

    12:48

    remember? Okay.

    So, I mean, not investment advice, but investment advice. It's hilarious how that's one of those things where they tell you it's not, but you know it is immediately.

    There is a "your call is important to us." That means your call is of zero importance.

    And

    13:03

    investment is like that. Yeah.

    Yeah. When they say no investment advice, it's definitely investment advice.

    Um but it's not investment advice. Okay.

    So you're bullish on Bitcoin because it can't be messed with. It is the only thing which we know how much there is in the universe.

    So gold

    13:22

    there could be an asteroid made out of pure gold heading towards us devaluing it. Well also killing all of us.

    But Bitcoin I know exactly the numbers and even the 21 million is an upper limit. How many are lost?

    Passwords forgotten.

    13:38

    I don't know what Satoshi is doing with his million. It's getting scarcer every day while more and more people are trying to accumulate it.
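    As an aside, the 21 million figure he cites falls straight out of Bitcoin's published issuance schedule (a block subsidy starting at 50 BTC, halving every 210,000 blocks); a quick sketch of that arithmetic:

        # Sum the block subsidies over all halving eras, in integer satoshis,
        # rounding the same way the protocol does. The cap lands just under 21M.
        BLOCKS_PER_HALVING = 210_000
        SATOSHIS_PER_BTC = 100_000_000

        subsidy = 50 * SATOSHIS_PER_BTC   # initial block reward, in satoshis
        total = 0
        while subsidy > 0:
            total += BLOCKS_PER_HALVING * subsidy
            subsidy //= 2                 # the reward halves each era

        print(total / SATOSHIS_PER_BTC)   # 20999999.9769: just under 21 million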

    Some people worry that it could be hacked with a supercomputer. A quantum computer can break that

    13:53

    algorithm. There are strategies for switching to quantum-resistant cryptography for that.

    And quantum computers are still kind of weak. Do you think there's any changes to my life that I should make following this conversation?

    Is there anything that I

    14:08

    should do differently the minute I walk out of this door? I assume you already invest in Bitcoin heavily.

    Yes, I'm an investor in Bitcoin. Besides financial advice? Uh, no. Just, you seem to be winning.

    Uh, no. Just you seem to be winning.

    Maybe it's your simulation. You're rich, handsome, you have famous people hang out with you.

    14:25

    Like that's pretty good. Keep it up.

    Robin Hanson has a paper about how to live in a simulation, what you should be doing in it. And your goal is to do exactly that. You want to be

    You want to be

    14:40

    interesting. You want to hang out with famous people so they don't shut it down.

    So you are part of something someone's actually watching on pay-per-view or something like that. Oh, I don't know if you want to be watched on pay-per-view, because then it would be the same.

    Then they shut you down. If no one's watching, why would they play it?

    14:57

    I'm saying, don't you want to fly under the radar? Don't you want to be the guy just living a normal life that the masters...?

    Those are NPCs. Nobody wants to be an NPC.

    Are you religious? Not in any traditional sense, but I believe in the simulation hypothesis, which has a super intelligent being.

    So,

    15:14

    but you don't believe in, like, you know, the religious books? So, different religions.

    This religion will tell you don't work Saturday. This one don't work Sunday, don't eat pigs, don't eat cows.

    They just have local traditions on top of that theory. That's all it is.

    They're all the same religion.

    15:31

    They all worship a super intelligent being. They all think this world is not the main one.

    And they argue about which animal not to eat. Skip the local flavors.

    Concentrate on what all the religions have in common.

    15:46

    And that's the interesting part. They all think there is something greater than humans.

    Very capable, all knowing, all powerful. When I run a computer game, for those characters in the game, I am that: I can change the whole world. I can shut it down. I know

    Four of those characters in a game. I am that I can change the whole world.

    I can shut it down. I know

    16:01

    everything in that world. It's funny.

    I was thinking earlier on, when we started talking about the simulation theory, that there might be something innate in us that has been left from the creator, almost like a clue, like an intuition, cuz that's what we tend to have through history.

    16:17

    Humans have this intuition. Yeah.

    That all the things you said are true, that there's this somebody above. And we have generations of people who were religious, who believed God told them and was there and gave them books, and that has been passed on for many generations.

    16:33

    This is probably one of the earliest generations not to have universal religious belief. Wonder if those people are telling the truth.

    I wonder about those people, those people that say God came to them and said something. Imagine that.

    Imagine if that was part of this. I'm looking at the news today.

    Something happened an hour ago and I'm getting

    16:50

    different conflicting results. I can't even get it with cameras, with drones, with, like, a guy on Twitter there.

    I still don't know what happened. And you think 3,000 years ago we have an accurate record of translations? No, of course not.

    You know these conversations you have

    17:06

    around AI safety, do you think they make people feel good? I don't know if they feel good or bad, but people find it interesting.

    It's one of those topics. So I can have a conversation about different cures for cancer with an average person, but

    17:22

    everyone has opinions about AI. Everyone has opinions about simulation.

    It's interesting that you don't have to be highly educated or a genius to understand those concepts. Cuz I tend to think that it makes me feel not positive.

    17:39

    And I understand that, but I've always been of the opinion that you shouldn't live in a world of delusion where you're just seeking to be positive, have sort of uh positive

    17:54

    things said, and avoid uncomfortable conversations. Actually, progress in my life often comes from having uncomfortable conversations, becoming aware of something, and then at least being informed about how I can do something about it.

    And so

    18:09

    I think that's why I asked the question, because I assume most people, if they're normal human beings, will listen to these conversations and go, gosh, that's scary, and this is concerning. And then I keep coming back to this

    18:26

    point, which is like, what do I do with that energy? Yeah.

    But I'm trying to point out this is not different than so many conversations we can talk about. Oh, there is starvation in this region, genocide in this region, you're all dying, cancer is spreading, autism is

    18:42

    up. You can always find something to be very depressed about and nothing you can do about it.

    And we are very good at concentrating on what we can change, what we are good at, and uh basically

    18:57

    not trying to embrace the whole world as a local environment. So historically, you grew up with a tribe, you had a dozen people around you.

    If something happened to one of them, it was very rare. It was an accident.

    Now if I go on the internet, somebody gets killed everywhere all the time. Somehow

    19:13

    thousands of people are reported to me every day. I don't even have time to notice.

    It's just too much. So I have to put filters in place.

    And I think this topic is one people are very good at filtering out, as, like, this was an

    19:28

    entertaining talk I went to, kind of like a show, and the moment I exit, it ends. So usually I would go give a keynote at a conference, and I tell them, basically, you're all going to die, you have two years left, any questions? And people be like, will I lose

    19:46

    my job? How do I lubricate my sex robot?

    Like, all sorts of nonsense, clearly not understanding what I'm trying to say there. And those are good questions, interesting questions, but not fully embracing the result. They're still in their bubble of local versus global.

    20:03

    And the people that disagree with you the most as it relates to AI safety, what is it that they say? What are their counterarguments, typically? So many don't engage at all. Like, they have no background knowledge in a

    20:18

    subject. They never read a single book, single paper, not just by me, by anyone.

    They may even be working in the field. So they are doing some machine learning work for some company, maximizing ad clicks, and to them those systems are very narrow. And then they hear that, oh,

    20:37

    this guy is going to take over the world. Like, it has no hands. How would it do that?

    It's nonsense. This guy is crazy. He has a beard. Why would I listen to him?

    He has a beard. Why would I listen to him?

    Right? Then they start reading a little bit. They go, "Oh, okay. So maybe AI can be

    They go, "Oh, okay. So maybe AI can be

    20:52

    dangerous. Yeah, I see that.

    But we always solved problems in the past. We're going to solve them again.

    I mean at some point we fixed a computer virus or something. So it's the same." And uh basically the more exposure they have, the less likely they are to keep that

    21:08

    position. I know many people who went from super careless developer to safety researcher.

    I don't know anyone who went from I worry about AI safety to like there is nothing to worry about.

    21:29

    What are your closing statements? Uh let's make sure there is not a closing statement we need to give for humanity.

    Let's make sure we stay in charge in control. Let's make sure we only build things which are beneficial to us.

    Let's make sure people who are

    21:44

    making those decisions are remotely qualified to do it. They are good not just at science, engineering and business but also have moral and ethical standards.

    And if you're doing something which impacts other people, you should ask

    21:59

    their permission before you do that. If there was one button in front of you and it would shut down every AI company in the world right now permanently with the inability for anybody to start a new one, would you press the button?

    22:15

    Are we losing narrow AI or just the super intelligent AGI part? Losing all of AI.

    That's a hard question because AI is extremely important. It controls the stock market, power plants.

    It controls hospitals. It would be a devastating

    22:32

    accident. Millions of people would lose their lives.

    Okay, we can keep narrow AI. Oh yeah, that's what we want.

    We want narrow AI to do all this for us, but not a God we don't control doing things to us. So you would stop it.

    You would stop AGI

    22:47

    and super intelligence. We have AGI.

    What we have today is great for almost everything. We can make secretaries out of it.

    99% of the economic potential of current technology has not been deployed. We make AI so quickly it doesn't have time to propagate through

    23:02

    the industry through technology. Something like half of all jobs are considered BS jobs.

    They don't need to be done.

    So those can be not even automated. They can be just gone.

    But I'm saying we can replace 60% of jobs today with existing models.

    23:19

    We haven't done that. So if the goal is to grow the economy, to develop, we can do it for decades without having to create super intelligence as soon as possible.

    Do you think globally, especially in the Western world, unemployment is only going to go up from here? Do you think, relatively, this is the low of

    23:35

    unemployment? I mean it fluctuates a lot with other factors.

    There are wars, there are economic cycles, but overall, the more jobs you automate, and the higher the intellectual bar to start a job, the fewer people qualify.

    23:50

    So if we plotted it on a graph over the next 20 years, you're assuming unemployment is gradually going to go up over that time. I think so.

    Fewer and fewer people would be able to contribute. Already we kind of understand it, because we created the minimum wage. We understood some people

    We understood some people

    24:07

    don't contribute enough economic value to get paid anything, really. So we had to force employers to pay them more than they're worth.

    Mhm. And we haven't updated it.

    It's what 725 federally in US. If you keep up with

    24:22

    economy, it should be like $25 an hour now, which means all these people making less are not contributing enough economic output to justify what they're getting paid.
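    A rough check on that pair of numbers, with assumed growth rates rather than anything quoted in the episode: $7.25 has been the federal floor since 2009, and the gap between inflation-style indexing and the faster indexing needed to reach ~$25 is the whole argument.

        # Compound $7.25/hour from 2009 over 16 years at assumed annual rates.
        def indexed_wage(base: float, annual_rate: float, years: int) -> float:
            return base * (1 + annual_rate) ** years

        print(round(indexed_wage(7.25, 0.03, 16), 2))  # 11.63: a ~3% inflation-style rate
        print(round(indexed_wage(7.25, 0.08, 16), 2))  # 24.84: the faster rate ~$25 implies

    We have a closing tradition on this podcast where the last guest leaves a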

    24:37

    question for the next guest, not knowing who they're leaving it for. And the question left for you is: what are the most important characteristics for a friend, colleague or mate?

    Those are very different types of

    24:54

    people. But for all of them, loyalty is number one.

    And what does loyalty mean to you? Not betraying you, not screwing you, not cheating on you.

    25:10

    despite the temptation, despite the world being as it is, situation, environment. Dr.

    Roman, thank you so much. Thank you so much for doing what you do, because you're starting a conversation and pushing forward a conversation and doing research that is incredibly

    25:26

    important and you're doing it in the face of a lot of um a lot of skeptics. I'd say there's a lot of people that have a lot of incentives to discredit what you're saying and what you do because they have their own incentives and they have billions of dollars on the line and they have their jobs on the

    25:41

    line potentially as well. So, it's really important that there are people out there that are willing to, I guess, stick their head above the parapet and come on shows like this and go on big platforms and talk about the unexplainable, unpredictable,

    25:56

    uncontrollable future that we're heading towards. So, thank you for doing that.

    This book, which I think everybody should check out if they want a continuation of this conversation, I think was published in 2024, gives a holistic view on many of the things we've talked about today. Preventing AI failures and much, much

    26:12

    more, and I'm going to link it below for anybody that wants to read it. If people want to learn more from you, if they want to go further into your work, what's the best thing for them to do?

    Where do they go? They can follow me.

    Follow me on Facebook. Follow me on X.

    Just don't follow me home. Very important.

    Follow you home. Okay.

    Okay, so I'll put your Twitter, your X account, as well

    26:29

    below so people can follow you there, and yeah, thank you so much for doing what you do. Remarkably eye-opening, and it's given me so much food for thought, and it's actually convinced me more that we are living in a simulation. But it's also made me think quite differently of religion, I have to say, because you're right, all the religions, when you get

    26:45

    away from the sort of local traditions, they do all point at the same thing. And actually, if they are all pointing at the same thing, then maybe the fundamental truths that exist across them should be something I pay more attention to: things like loving thy neighbor, things like the fact that we are all one, that there's a divine

    27:00

    creator, and maybe also that they all seem to point to consequences beyond this life. So maybe I should be thinking more about how I behave in this life and where I might end up thereafter.

    Roman, thank you. Amen.

    [Music]

    27:33

    [Music]