Sam Altman Shows Me GPT 5... And What's Next

🚀 Add to Chrome – It’s Free - YouTube Summarizer

Category: AI Future

Tags: AIEthicsInnovationSocietyTechnology

Entities: ChatGPTGPT5Jensen HuangNvidiaOpenAIPatrick CollisonSam AltmanStripe

Building WordCloud ...

Summary

    Introduction to AI and GPT5
    • Sam Altman, CEO of OpenAI, discusses the rapid development and power of AI, highlighting the recent launch of GPT5.
    • The conversation explores the concept of superintelligence and its potential to surpass human capabilities.
    • Altman emphasizes the profound impact of AI on industries and society, likening the current technological shift to science fiction becoming reality.
    Capabilities of GPT5
    • GPT5 can perform complex tasks such as coding and scientific inquiry, offering instantaneous software creation.
    • The model is designed to enhance knowledge work and learning, co-evolving with societal expectations.
    • Altman shares a personal anecdote about using GPT5 to create a game, highlighting its creative potential.
    Challenges and Implications of AI
    • The discussion touches on the balance between AI aiding creativity and potentially reducing cognitive effort.
    • Altman acknowledges the dual nature of AI as both a tool for innovation and a potential escape from critical thinking.
    • There is a focus on AI's ability to adapt to cultural contexts and individual preferences, enhancing personalized experiences.
    Future of AI and Society
    • Altman envisions AI making significant scientific discoveries and improving healthcare by 2035.
    • He discusses the potential for AI to transform job markets, emphasizing opportunities for new, innovative roles.
    • The conversation addresses societal adaptation to AI, comparing it to past industrial revolutions.
    Ethical Considerations and AI Development
    • Altman stresses the importance of aligning AI development with societal benefits and ethical considerations.
    • He highlights the need for abundant and accessible AI compute to prevent conflicts over resources.
    • The conversation explores the role of AI companies in shaping the future and the shared responsibility with society.
    Takeaways
    • Use AI tools to become fluent in their capabilities and integrate them into daily life.
    • Embrace the potential for AI to enhance creativity and innovation in various fields.
    • Stay informed about ethical considerations and advocate for responsible AI development.
    • Prepare for a rapidly changing job market by developing adaptable skills and exploring new opportunities.
    • Recognize the shared responsibility between AI developers and society in shaping a positive future.

    Transcript

    00:00

    This is like a crazy amount of power for  one piece of technology and it's happened   to us so fast. You just launched GPT-5.

    A kid  born today will never be smarter than AI. How   do we figure out what's real and what's not  real?

    We haven't put a sex bot avatar in ChatGPT   yet. Super intelligence.

    What does that  actually mean? This thing is remarkable.

    00:20

    I'm about to interview Sam Alman, the CEO  of Open AI. Open AI.

    Open AI. Reshaping   industries.

    Dude's a straightup tech lord. Let's  be honest.

    Right now, they're trying to build a   super intelligence that could far exceed humans  in almost every field. And they just released  

    00:36

    their most powerful model yet. Just a couple years  ago, that would have sounded like science fiction.   Not anymore.

    In fact, they're not alone. We are  in the middle of the highest stakes global race   any of us have ever seen.

    Hundreds of billions of  dollars and an unbelievable amount of human worth.  

    00:53

    This is a profound moment. Most people never  live through a technological shift like this,   and it's happening all around you and me right  now.

    So, in this episode, I want to try to time   travel with Sam Alman into the future that  he's trying to build to see what it looks  

    01:10

    like so that you and I can really understand  what's coming. Welcome to Huge Conversations.

    How are you? Great to meet you.

    Thanks for  doing this. Absolutely.

    So, before we dive in,  

    01:27

    I'd love to tell you my goal here. Okay.

    I'm  not going to ask you about valuation or AI   talent wars or fundraising or anything like that.  I think that's all very well covered elsewhere. It   does seem like it.

    Our big goal on this show is to  cover how we can use science and tech to make the  

    01:44

    future better. And the reason that we do all of  that is because we really believe that if people   see those better futures, they can then help  build them.

    So, my goal here is to try my best   to time travel with you into different moments  in the future that you're trying to build and see  

    02:01

    what it looks like. Fantastic.

    Awesome. Starting  with what you just announced, you recently said,   surprisingly recently, that GPT4 was the dumbest  model any of us will ever have to use again.   But GPT4 can already perform better than 90% of  humans at the SAT and the LSAT and the GRE and it  

    02:21

    can pass coding exams and sommelier exams and medical  licensing. And now you just launched GPT5.

    What   can GPT5 do that GPT4 can't? First of all, one  important takeaway is you can have an AI system   that can do all those amazing things you just  said.

    And it doesn't it clearly does not replicate  

    02:40

    a lot of what humans are good at doing, which I  think says something about the value of SAT tests   or whatever else. But I think had you gone back  to if we were having this conversation the day of   GPT4 launch and we told you how GPT4 did at those  things, you were like, "Oh man, this is going to   have huge impacts and some negative impacts on  what it means for a bunch of jobs or you know  

    03:01

    what people are going to do." And you know, this  is a bunch of positive impacts that you might have   predicted that haven't yet come true. Uh, and so  there there's something about the way that these   models are good that does not capture a lot of  other things that we need people to to do or care  

    03:17

    about people doing. And I suspect that same thing  is going to happen again with GPT5.

    People are   going to be blown away by what it does. Uh, it's  really good at a lot of things and then they will   find that they want it to do even more.

    Um, people  will use it for all sorts of incredible things.  

    03:34

    uh it will transform a lot of knowledge work,  a lot of the way we learn, a lot of the way we   create um but we people society will co-eolve with  it to expect more with you know better tools. So  

    03:50

    yeah like I think this model is quite remarkable  in many ways quite limited in others but the fact   that for you know 3 minute 5 minute 1-hour tasks  that uh like an expert in a in a field could maybe  

    04:06

    do or maybe struggle with that the fact that you  have in your pocket one piece of software that   can do all of these things is really amazing.  I think this is like unprecedented at any point   in human history that I that a technology has  improved this much this fast and and the fact  

    04:24

    that we have this tool now, you know, we're like  living through it and we're kind of adjusting step   by step. But if we could go back in time five or  10 years and say this thing was coming, we would   be like probably not.

    Let's assume that people  haven't seen the headlines. What are the topline  

    04:39

    specific things that you're excited about? and  also the things that you seem to be caveatting,   the things that maybe you won't expect it to do.  Um, the thing that I am most excited about is this   is a model for the first time where I feel like I  can ask kind of any hard scientific or technical  

    05:00

    question and get a pretty good answer. And I'll  give a fun example actually.

    Uh when I was in   junior high uh or maybe it was nth grade,  I got a TI83, this old graphing calculator,   and I spent so long making this game called Snake.  Yeah. Uh it was very popular game with kids in my  

    05:21

    school. And I was I was like uh I was like pro and  it was dumb, but it was like programming on TID3   was extremely painful and took a long time and  it was really hard to like debug and whatever.   And on a whim with an early copy of GPT5, I was  like, I wonder if it can make a TI83 style Game  

    05:37

    of Snake. And of course, it did that perfectly  in like 7 seconds.

    And then I was like, okay,   am I supposed to be would my like 11-year-old  self think this was cool or like, you know,   miss something from the process? And I  had like 3 seconds of wondering like, oh,   is this good or bad?

    And then I immediately said,  actually, now I'm missing this game. I have this  

    05:57

    idea for a crazy new feature. Let me type it  in.

    it implements it and it just the game live   updates and I'm like actually I'd like it to look  this way. Actually, I'd like to do this thing and   I had this like this very like kind of you have  this experience that reminded me of being like 11   in programming again where I was just like I now I  want to try this now I have this idea now I but I  

    06:16

    could do it so fast and I could like express ideas  and try things and play with things in such real   time. I was like, "Oh man, you know, I was worried  for a second about kids like missing the struggle   of learning to program in this sort of stone age  way." And now I'm just thrilled for them because  

    06:31

    the the way that people will be able to create  with these new tools, the speed with which you   can sort of bring ideas to life, you know, in  that's that's pretty amazing. So this idea that   GPT5 can just not only like answer all these hard  questions for you but really create like ondemand  

    06:48

    almost instantaneous software that's I think  that's going to be one of the defining elements   of the GPD5 era in a way that did not exist with  GPD4. As you're talking about that I find myself   thinking about a concept in weightlifting of time  under tension.

    Yeah. And for those who don't know  

    07:05

    it's you can squat 100 pounds in 3 seconds or you  can squat 100 pounds in 30. You gain a lot more   by squatting it in 30.

    And when I think about our  creative process and when I've felt most like I've   done my best work, it has required an enormous  amount of cognitive time under tension. And I  

    07:22

    think that that cognitive time under tension  is so important. And it's it's ironic almost   because these tools have taken enormous cognitive  time under tension to develop.

    But in some ways I   do think people might say they're you people are  using them as a escape hatch for thinking in some  

    07:42

    ways maybe. Now you might say yeah but we did that  with the calculator and we just moved on to harder   math problems.

    Do you feel like there's something  different happening here? How do you think about   this?

    It's different with I mean there are some  people who are clearly using chachine not to  

    07:58

    think and there are some people who are using  it to think more than they ever have before.   I am hopeful that we will be able to build the  tool in a way that encourages more people to   stretch their brain with it a little more and  be able to do more. And I think that like you  

    08:14

    know society is a competitive place like if you  give people new tools uh in theory maybe people   just work less but in practice it seems like  people work ever harder and the expectations of   people just go up. So my my guess is that like  other tools uh some people like other pieces  

    08:34

    of technology some people will do more and some  people will do less but certainly for the people   who want to use chatbt to increase their cognitive  time under tension they are really able to and it   is I take a lot of inspiration from what like the  top 5% of most engaged users do with chacht like  

    08:52

    it's really amazing how much people are learning  and doing and you know outputting. So my I've   only had GPT5 for a couple hours so I've been  playing.

    What do you think so far? I'm I'm just   learning how to interact with it.

    I mean part of  the interesting thing is I feel like I just caught  

    09:09

    up on how to use GPT4 and now I'm trying to learn  how to use GPD5. I'm curious what the specific   tasks that you found most interesting are because  I imagine you've been using it for a while now.   I I have been most impressed by the coding tasks.  I mean, there's a lot of other things it's really  

    09:26

    good at, but this this idea of the AI can write  software for anything. And that means that you   can express ideas in new ways that the AI can  do very advanced things.

    It can do, you know,  

    09:42

    it can like in some sense you could like ask  GPT4 anything, but because GPT5 is so good at   programming, it feels like it can do anything. Of  course, it can't do things in the physical world,   but it can get a computer to do very complex  things.

    And software is this super powerful,  

    09:58

    you know, way to like control some stuff and  actually do some things. So, that that for me   has been the most striking.

    Um, it's gotten it's  much better at writing. So, this is like there's   this whole thing of AI slop like AI writes in this  kind of like quite annoying way and M dashes.

    M we  

    10:18

    still have the M dashes in GPT5. A lot of people  like them dashes, but the writing quality of GPT5   is gotten much better.

    We still have a long way  to go. We want to improve it more, but like uh   I've a thing we've heard a lot from people inside  of OpenAI is that man, they started using GPT5,  

    10:36

    they knew it was better on all the metrics, but  there's this like nuance quality they can't quite   articulate, but then when they have to go back  to GPT4 to test something, it feels terrible.   And I I don't know exactly what the cause  of that is, but I suspect part of it is the   writing feels so much more natural and better.  I in preparation for this interview reached out  

    10:55

    to a couple other leaders in AI and technology  and gathered a couple questions for you. Okay,   so this next question is from Stripe CEO Patrick  Collison.

    This will be a good one. Read this   verbatim.

    It's about the next stage. What what  comes after GBT5?

    In which year do you think a  

    11:13

    large language model will make a significant  scientific discovery and what's missing such   that it hasn't happened yet? He caveed here that  we should leave math and special case models like   alpha fold aside.

    He's specifically asking about  fully general purpose models like the GPT series.   I would say most people will agree that that  happens at some point over the next two years.  

    11:32

    But the definition of significant matters a lot.  And so some people significant might happen,   you know, in early 25. Some people might maybe  not until late 2026.

    Sorry, early 2026. Maybe some   people not until late 2027, but I would I would  bet that by late 27, most people agree that there  

    11:50

    has been an AIdriven significant new discovery.  And the thing that I think is missing is just   the kind of cognitive power of these models.  A framework that one of the researchers said   to me that I really liked is, you know, a year  ago we could do well on like a high school like  

    12:07

    a basic high school math competition problems that  might take a professional mathematician seconds to   a few minutes. We very recently got an IMO gold  medal.

    That is a crazy difficult like could you   explain what that means? That's kind of like the  hardest competition math test.

    This is something  

    12:23

    that like the very very top slice of the world.  many many professional mathematicians wouldn't   solve a single problem and we scored at the top  level. Now there are some humans that got an even   higher score in the gold medal range but we we  like this is a crazy accomplishment and these  

    12:38

    each of these problems it's like six problems over  9 hours so hour and a half per problem for a great   mathematician. So we've gone from a few seconds  to a few minutes to an hour and a half maybe to   prove a significant new mathematical theorem is  like a thousand hours of work for a top person  

    12:54

    in the world. So we've got to go from, you know,  another significant gain.

    But if you look at our   trajectory, you can say like, okay, we're getting  to that. We have a path to get to that time   horizon.

    We just need to keep scaling the models.  The long-term future that you've described is  

    13:11

    super intelligence. What does that actually mean?  And how will we know when we've hit it?

    If we had   a system that could do better research, better AI  research than uh say the whole open AI research   team, like if we were willing, if we said, "Okay,  the best way we can use our GPUs is to let this AI  

    13:31

    decide what experiments we should run smarter than  like the whole brain trust of Open AAI." Yeah. And   if that same to make a personal example, if that  same system could do a better job running open AI   than I could.

    So you have something that's like,  you know, better than the best researchers, better   than me at this, better than other people at their  jobs, that would feel like super intelligence to  

    13:47

    me. That is a sentence that would have sounded  like science fiction just a couple years ago.   And now it kind of does, but it's you can like see  it through the fog.

    Yes. And so one of the steps   it sounds like you're saying on that path is this  moment of scientific discovery of asking better  

    14:02

    questions of grappling with things in a in a way  that expert level humans do to come up with new   discoveries. One of the things that keeps knocking  around in my head is if we were in 1899 say and   we were able to give it all of physics up until  that point and play it out a little bit.

    Nothing  

    14:18

    further than that. Like at what point would one  of these systems come up with general relativity?   Interesting question is did you like if we think  about that forward like like if we think of where   we are now should a if if we never got another  piece of physics data.

    Yeah. Do we expect that a  

    14:38

    really good super intelligence could just think  super hard about our existing data and maybe   say like solve high energy physics with no new  particle accelerator or does it need to build a   new one and design new experiments? Obviously  we don't know the answer to that.

    Different   people have different speculation. Uh but I  suspect we will find that for a lot of science,  

    14:57

    it's not enough to just think harder about data we  have, but we will need to build new instruments,   conduct new experiments, and that will take some  time. Like that that is the real world is slow   and messy and you know whatever.

    So I'm sure we  could make some more progress just by thinking   harder about the current scientific data we  have in the world. But my guess is to make  

    15:16

    the big progress we'll also need to build new  machines and run new experiments and there will   be some slowdown built into that. Another way of  of thinking about this is AI systems now are just   incredibly good at answering almost any question.  But maybe one of the things we're saying is it's  

    15:35

    another leap yet. And what Patrick's question  is getting at is to ask the better questions.   Or or if we go back to this kind of timeline  question, we could maybe say that AI systems   are superhuman on one minute tasks, but a long  way to go to the thousand hour tasks.

    And there's  

    15:52

    a dimension of human intelligence that seems  very different than AI systems when it comes   to these long horizon tasks. Now, I think we will  figure it out, but today it's a real weak point.   We've talked about where we are now with GBC5.  We talked about the end goal or future goal of  

    16:09

    super intelligence. One of the questions that  I have, of course, is what does it look like   to walk through the fog between the two.

    The next  question is from Nvidia CEO Jensen Hong. I'm going   to read this verbatim.

    Fact is what is. Truth is  what it means.

    So facts are objective. Truths are  

    16:30

    personal. They depend on perspective, culture,  values, beliefs, context.

    One AI can learn and   know the facts. But how does one AI know the  truth for everyone in every country and every   background?

    I'm going to accept as axioms those  definitions. I'm not sure if I agree with them,  

    16:47

    but in the issues of time, I will just take them.  I will take those definitions and go with it. Um, I have been surprised, I think many other people  have been surprised too about how fluent AI is   at adapting to different cultural contexts and  individuals.

    One of my favorite features that we  

    17:06

    have ever launched in chatbt is the the sort of  enhanced memory that came out earlier this year.   like it really feels like my Chad GBT gets to  know me and what I care about and like my life   experiences and background and the things that  have led me to where they are. A friend of mine  

    17:22

    recently who's been a huge CHBT user, so he's  got a lot of a a lot of he's put a lot of his   life into all these conversations. He gave his  Chad GBT a bunch of personality tests and asked   them to answer as if they were him and it got  the same scores he actually got, even though  

    17:38

    he'd never really talked about his personality.  And my ChachiBD has really learned over the years   of me talking to it about my culture, my  values, my life. And I have used, you know,   I sometimes will use it in like uh I'll use like  a free account just to see what it's like without  

    17:57

    any of my history and it feels really really  different. So I think we've all been surprised on   the upside of how good AI is at learning this and  adapting.

    And so do you envision in many different   parts of the world people using different  AIs with different sort of cultural norms and  

    18:14

    contexts? Is that what we're saying?

    I think that  everyone will use like the same fundamental model,   but there will be context provided to that model  that will make it behave in sort of personalized   way they want their community wants. Whatever.  I think when we're getting at this idea of facts   and truth and uh it brings me to this seems like a  good moment for our first time travel trip.

    Okay,  

    18:35

    we're going to 2030. This is a serious question,  but I want to ask it with a light-hearted example.   Have you seen the bunnies that are jumping on  the trampoline?

    Yes. So, for those who haven't   seen it, maybe it looks like backyard footage of  bunnies enjoying jumping on a trampoline.

    And this  

    18:51

    has gone incredibly viral recently. There's a  humanmade song about it.

    It's a whole thing.   There were a trampoline. And I think the reason  why people reacted so strongly to it, it was maybe   the first time people saw a video, enjoyed it,  and then later found out that it was completely AI  

    19:10

    generated. In this time travel trip, if we imagine  in 2030, we are teenagers and we're scrolling   whatever teenagers are scrolling in 2030.

    How do  we figure out what's real and what's not real?   I mean, I can give all sorts of literal answers  to that question. We could be cryptographically  

    19:29

    signing stuff and we could decide who we trust  their signature if they actually filmed something   or not. But but my sense is what's going to  happen is it's just going to like gradually   converge.

    You know, even like a photo you take  out of your iPhone today, it's like mostly real,  

    19:47

    but it's a little not. There's like in some AI  thing running there in a way you don't understand   and making it look like a little bit better and  sometimes you see these weird things where the   moon.

    Yeah. Yeah.

    Yeah. Yeah.

    But there's like  a lot of processing power between the photons  

    20:03

    captured by that camera sensor and the image  you eventually see. And you've decided it's real   enough or most people decided it's real enough.  But we've accepted some gradual move from when it   was like photons hitting the film in a camera.

    And  you know, if you go look at some video on Tik Tok,  

    20:22

    there's probably all sorts of video editing tools  being used to make it better than real look. Yeah,   exactly.

    Or it's just like, you know, whole  scenes are completely generated or some of   the whole videos are generated like those bunnies  on that trampoline. And and I think that the the  

    20:38

    sort of like the threshold for how real does it  have to be to consider to be real will just keep   moving. So it's sort of a education question.  It's a people will Yeah.

    I mean media is always   like a little bit real and a little bit not real.  Like you know we watch like a sci-fi movie. We  

    20:58

    know that didn't really happen. You watch like  someone's like beautiful photo of themselves on   vacation on Instagram.

    like, okay, maybe that  photo was like literally taken, but you know,   there's like tons of tourists in line for the same  photo and that's like left out of it. And I think   we just accept that now.

    Certainly, a higher  percentage of media both will will feel not  

    21:16

    real. Um, but I think that's been the long-term  trend.

    Anyway, we're going to jump again. Okay,   2035, we're graduating from college, you and me.  There are some leaders in the AI space that have   said that in 5 years half of the entry level  white collar workforce will be replaced by AI.  

    21:34

    So we're college graduates in 5 years. What do  you hope the world looks like for us?

    I think   there's been a lot of talk about how AI might  cause job displacement, but I'm also curious. I   have a job that nobody would have thought we  could have, you know, totally a decade ago.  

    21:51

    What are the things that we could look ahead if  we're thinking about in 2035 that like graduating   college student, if they still go to college at  all, could very well be like leaving on a mission   to explore the solar system on a spaceship in some  kind of completely new exciting, super well- paid,  

    22:07

    super interesting job and feeling so bad for you  and I that like we had to do this kind of like   really boring old kind of work and everything  is just better. Like I I 10 years feels very   hard to imagine at this point because it's too  far.

    It's too far. If you compound the current  

    22:22

    rate of change for 10 more years, it's probably  something we can't even time travel trips. I 10   like I mean I think now would be really hard  to imagine 10 years ago.

    Yeah. Uh but I think   10 years forward will be even much harder, much  more different.

    So let's make it 5 years. We're  

    22:41

    still going to 2030. I'm curious what you  think the pretty short-term impacts of this   will be for for young people.

    I mean, these like  half of entry- level jobs replaced by AI makes   it sound like a very different world that they  would be entering than the one that I did. Um,

    23:02

    I think it's totally true that some classes of  jobs will totally go away. This always happens   and young people are the best at adapting to this.  I'm more worried about what it means, not for the   like 22-y old, but for the 62-y old that doesn't  want to go re retrain or reskill or whatever the  

    23:17

    politicians call it that no one actually wants  but politicians and most of the time. If I were   22 right now and graduating college, I would  feel like the luckiest kid in all of history.   Why?

    Because there's never been a more amazing  time to go create something totally new, to go  

    23:33

    invent something, to start a company, whatever  it is. I think it is probably possible now to   start a company that is a oneperson company that  will go on to be worth like more than a billion   dollars and more importantly than that deliver an  amazing product and service to the world and that   that is like a crazy thing.

    You have access to  tools that can let you do what used to take teams  

    23:52

    of hundreds and you just have to like you know  learn how to use these tools and come up with a   great idea and it's it's like quite amazing. If  we take a step back, I think the most important   thing that this audience could hear from you  on this optimistic show is in two parts.

    First,  

    24:13

    there's tactically, how are you actually trying  to build the world's most powerful intelligence   and what are the rate limiting factors to doing  that? And then philosophically, how are you and   others working on building that technology in  a way that really helps and not hurts people?  

    24:30

    So just taking the tactical part right now.  My understanding is that there are three big   categories that have been limiting factors for  AI. The first is compute, the second is data and   the third is algorithmic design.

    How do you think  about each of those three categories right now?  

    24:49

    And if you were to help someone understand  the next headlines that they might see,   how would you help them make sense of all this?  I I would say there's a fourth too which is uh   figuring out the products to build like techn like  scientific progress on its own not put into the  

    25:06

    hands of people is of limited utility and doesn't  sort of co-evolve with society in the same way   but if I could hit all four of those um so on  the compute side yeah this is like the biggest   infrastructure project certainly that I've ever  seen possibly it will become the I think it will   maybe already is the biggest and most expensive  one in human history but the the whole supply  

    25:27

    chain from making the chips and the memory and  the networking gear, racking them up in servers,   doing, you know, a giant construction project to  build like a mega mega data center, putting the,   you know, finding a way to get the energy, which  is often a limiting factor piece of this and all  

    25:44

    the other components together. This is hugely  complex and expensive.

    And we are we're still   doing this in like a sort of bespoke one-off way  although it's getting better. Like eventually we   will just design a whole kind of like mega factory  that takes you know I mean spiritually it will be  

    26:05

    melting sand on one end and putting out fully  built AI compute on the other but we are a long   way to go from that and it's a it's an enormously  complex and expensive process. uh we are putting  

    26:20

    a huge amount of work into building out as much  compute as we can and to do it fast and you know   it's going to be like sad because GP5 is going  to launch and there's going to be another big   spike in demand and we're not going to be able  to serve it and it's going to be like those early   GPD4 days and the world just wants much more AI  than we can currently deliver and building more  

    26:39

    compute is an important part of doing that.  That's actually this is what I expect to turn   the majority of my attention to is how we build  compute at much greater scales. Uh so how we go   from millions to tens of millions and hundreds of  millions and eventually hopefully billions of GPUs  

    26:56

    that are sort of in service of what people want  to do with this. When you're thinking about it,   what are the big challenges here in this category  that you're going to be thinking about?

    We're   currently most limited by energy. um you know like  if you're gonna you want to run a gigawatt scale   data center it's like a gigawatt how hard can that  be to find it's really hard to find a gigawatt of  

    27:14

    power available in short term we're also very much  limited by the processing chips and the memory   chips uh how you package these all together how  you build the racks and then there's like a list   of other things that are you know there's like  permits there's construction work uh but but  

    27:32

    again the goal here will be to really automate  this once we get some of those robots built,   they can help us automate it even more. But just,  you know, like a world where you can basically   pour in money and get out a pre-built data center.  Uh so that'll be that'll be a huge unlock if we  

    27:48

    can get it to work. Second category, data.

    Yeah,  these models have gotten so smart. There was a   time when we could just feed it another physics  textbook and got a little bit smarter at physics,   but now like honestly GBT5 understands  everything in a physics textbook pretty well.  

    28:04

    We're excited about synthetic data. We're very  excited about our users helping us create harder   and harder tasks and environments to go off and  have the system solve.

    But uh I think we're data   will always be important, but we're entering a  realm where the models need to learn things that  

    28:24

    don't exist in any data set yet. They have to  go discover new things.

    So that's like a crazy   new How do you teach a model to discover new  things? Well, humans can do it.

    like we can   go off and come up with hypotheses and test them  and get experimental results and update on what we   learn. So probably the same kind of way.

    And then  there's algorithmic design. Yeah, we've made huge  

    28:42

    progress on algorithmic design. Uh the thing that  the thing that I think open does best in the world   is we have built this culture of repeated and big  algorithmic research gains.

    So we kind of you know   figured out the what became the GPT paradigm. We  figured out became the reasoning paradigm.

    We're  

    29:00

    working on some new ones now. Um, but it is very  exciting to me to think that there are still many   more orders of magnitudes of algorithmic  gains ahead of us.

    We we just yesterday   uh released a model called GPOSS, open source  model. It's a model that is as smart as 04 Mini,  

    29:17

    which is a very smart model that runs locally on  a laptop. And this blows my mind.

    Yeah. Like if   you had asked me a few years ago when we'd have  a model of that intelligence running on a laptop,   I would have said many many years in the future.  But then we we found some algorithmic gains  

    29:36

    um particularly around reasoning but also some  other things that let us do a a tiny model that   can do this amazing thing. And you know those are  those are the most fun things.

    That's like kind of   the coolest part of the job. I can see you really  enjoying thinking about this.

    I'm curious for  

    29:51

    people who don't quite know what you're talking  about, who aren't familiar with how an algorithmic   design would lead to a better experience that they  actually use. Could you summarize the state of   things right now?

    Like what what is it that you're  thinking about when you're thinking about how fun   this problem is? Let me start back in history  and then I'll get to some things for today.

    So,  

    30:11

    GPT1 was an idea at the time that was quite  mocked by a lot of experts in the field,   which was can we train a model to play a little  game, which is show it a bunch of words and have   it guess the one that comes next in the sequence.  That's called unsupervised learning. There's not  

    30:29

    you're not really saying like this is a cat,  this is a dog. You're saying here's some words,   guess the next one.

    And the fact that that can  go learn these very complicated concepts that   can go learn all the stuff about physics and math  and programming and keep predicting the word that  

    30:46

    comes next and next and next and next seemed  ludicrous, magical, unlikely to work. Like how   was that all going to get encoded?

    And yet humans  do it. you know, babies start hearing language and   figure out what it means kind of largely uh or at  least to some significant degree on their own.

    And  

    31:08

    and so we did it and then we also realized that if  we scaled it up, it got better and better, but we   had to scale over many many orders of magnitude.  So it wasn't that good in the GPT1 day. It wasn't   good at all in the GPT1 days.

    And a lot of experts  in the field said, "Oh, this is ridiculous. It's   never going to work.

    It's not going to be robust."  But we had these things called scaling laws. And  

    31:26

    we said, "Okay, so this gets predictably better as  we increase compute, memory, data, whatever. And   we can we can decide we can use those predictions  to make decisions about how to scale this up and   do it and get great results." And that has worked  over Yeah.

    a crazy number of orders of magnitude.  

    31:47

    And it was so not obvious at the time. like  that was that was I think the the reason the   world was so surprised is that that seemed like  such an unlikely finding.

    Another one was that we   could use these language models with reinforcement  learning where we're saying this is good, this is   bad to teach it how to reason. And this led to the  01 and 03 and now the GBT5 progress.

    And that that  

    32:11

    was another thing that felt like uh if it works  it's really great but like no way this is going   to work. It's too simple.

    And now we're on to new  things. We've figured out how to make much better   video models.

    We are we are discovering new ways  to use new kinds of data and environment to kind  

    32:28

    of scale that up as well. Um and I think again  you know 5 10 years out that's too hard to say in   this field but the next couple of years we have  very smooth very strong scaling in front of us.   I think it has become a sort of public narrative  that we are on this smooth path from one to two to  

    32:47

    three to four to five to more. Yeah.

    But it also  is true behind the scenes that it's a it's not   linear like that. It's messier.

    Tell us a little  bit about the mess before GPT5. What was what were   the interesting problems that you needed to solve?  Um, we did a model called Orion that we released  

    33:08

    as GPT 4.5. And we had we did too big of a  model.

    It was just it was it's a very cool model,   but it's unwieldly to use. And we realized that  for kind of some of the research we need to do on   top of a model, we need a different shape.

    So we  we followed one scaling law that kept being good  

    33:26

    without without really internalizing. There was  a new even steeper scaling law that we got better   returns for compute on, which was this reasoning  thing.

    So that was like one alley we went down and   turned around, but that's fine. That's part of  research.

    Um, we had some problems with the way   we think about our data sets as these models like  really have to get get this big and um, you know,  

    33:45

    learn from this much data. So So yeah, I think  like in the in the middle of it in the day-to-day,   you kind of you make a lot of U-turns as  you try things or you have an architecture   idea that doesn't work, but the the aggregate the  summation of all the squiggles has been remarkably  

    34:03

    smooth on the exponential. One of the  things I always find interesting is that   by the time I'm sitting here interviewing  you about the thing that you just put out,   you're thinking about Exactly.

    What are the things  that you can share that are at least the problems  

    34:18

    that you're thinking about that I would be  interviewing you about in a year if I came back? I mean, possibly you'll be asking me like,  what does it mean that this thing can go  

    34:34

    discover new science? Yeah.

    What how how  is the world supposed to think about GPT6   discovering new science? Now, maybe  not like maybe we don't deliver that,   but it feels within grasp.

    If you did, what  would you say? What would your what would the  

    34:49

    implications of that kind of achievement  be? Imagine you do succeed.

    Yeah. I mean,   I think the great parts will be great.

    the bad  parts will be scary and the bizarre parts will   be like bizarre on the first day and then we'll  get used to them really fast. So we'll be like,   "Oh, it's incredible that this is like being  used to cure disease and be like, oh, it's  

    35:07

    extremely scary that models like this are being  used to like create new biocurity threats." And   then we'll also be like, man, it's really weird  to like live through watching the world speed up   so much and you know the economy grows so fast  and the like it will feel like vertigo inducing  

    35:30

    uh the sort of the rate of change and then like  happens with everything else the remarkable   ability of of people of humanity to adapt to kind  of like any amount of change. we'll just be like,   "Okay, you know, this is like this is it." Um, a  kid born today will never be smarter than AI ever.  

    35:51

    And a kid born today, by the time that kid like  kind of understands the way the world works, will   just always be used to an incredibly fast rate of  things improving and discovering new science. They   will just they will never know any other world.

    It  will seem totally natural. will seem unthinkable  

    36:07

    and stone age like that we used to use computers  or phones or any kind of technology that was not   way smarter than we were. You know, we will think  like how bad those people of the 2020s had it.

    I'm   thinking about having kids. You should.

    It's the  best thing ever. I know you just had your first  

    36:23

    kid. How does what you just said affect how I  should think about parenting a kid in that world?

    What advice would you give me? Probably nothing  different than the way you've been parenting kids  

    36:39

    for tens of thousands of years. Like love your  kids, show them the world, like support them in   whatever they want to do and teach them like how  to be a good person.

    And that probably is what's   going to matter. It sounds a little bit like  some of the you know you've said a couple of  

    36:55

    things like this that that you know you might not  go to college you might there there are a couple   of things that you've said so far that feed into  this I think and it sounds like what you're saying   is there will be more optionality for them in a  in a world that you envision and therefore they  

    37:15

    will have more more ability to say I want to build  this here's the superpowered tool that will help   me do that or yeah like I want my kid to think  I had a terrible constrained life and that he   has this incredible infinite canvas of stuff to  do that that that is like the way of the world.  

    37:34

    We've said that uh 2035 is a little bit too far in  the future to think about. So maybe this this was   going to be a jump to 2040 but maybe it will keep  it shorter than that.

    When I think about the area   where AI could have for both our kids and us the  biggest genuinely positive impact on all of us,  

    37:51

    it's health. So if we are in pick your year, call  it 2035 and I'm sitting here and I'm interviewing   the dean of Stanford medicine, what do you hope  that he's telling me AI is doing for our health   in 2035?

    Start with 2025. Okay.

    Um yeah, please.  One of the things we are most proud of with GPT5  

    38:14

    is how much better it's gotten at health advice.  Um, people have used the GPT4 models a lot for   health advice. And you know, I'm sure you've seen  some of these things on the internet where people   are like, I had this life-threatening disease  and no doctor could figure it out and I like  

    38:31

    put my symptoms and a blood test into CHBT. It  told me exactly the rare thing I had.

    I went to   a doctor. I took a pill.

    I'm cured. Like that's  amazing.

    obviously and a huge fraction of ChatGpt   queries are health related. So we wanted to get  really good at this and we invested a lot in  

    38:47

    GPT5 is significantly better at healthcare related  queries. What does better mean here?

    It gives you   a better answer just more accurate more accurate  hallucinates less uh more likely to like tell you   what you actually have what you actually should  do. Um, yeah, and better healthcare is wonderful,  

    39:06

    but obviously what people actually want  is to just not have disease. And by 2035,   I think we will be able to use these tools to  cure a significant number or at least treat a   significant number of diseases that currently  plague us.

    I think that'll be one of the most  

    39:25

    viscerally felt benefits of of AI. People talk a  lot about how AI will revolutionize healthcare,   but I'm curious to go one turn deeper on  specifically what you're imagining.

    Like,   is it that these AI systems could have helped  us see GLP-1s earlier, this medication that has  

    39:44

    been around for a long time, but we didn't know  about this other effect? Is it that, you know,   alpha fold and protein folding is helping create  new medicines?

    I would like to be able to ask GBT   8 to go cure a particular cancer and I would like  GPT8 to go off and think and then say uh okay I  

    40:04

    read everything I could find. I have these ideas.  I need you to uh go get a lab technician to run   these nine experiments and tell me what you find  for each of them.

    And you know wait 2 months for   the cells to do their thing. Send the results back  to GBT8.

    Say I tried it. Here you go.

    Think think.  

    40:19

    Say okay I just need one more experiment. That was  a surprise.

    Run one more experiment. Give it back.   GPT says, "Okay, go synthesize this molecule and  try, you know, mouse studies or whatever." Okay,   that was good.

    Like, try human studies. Okay,  great.

    It worked. Um, here's how to like run   it through the FDA.

    I think anyone with a loved  one who's died of cancer would also really like  

    40:39

    that. Okay, we're going to jump again.

    Okay. I was  going to say 2050, but again, all of my timelines   are getting much, much shorter.

    But I It does  feel like the world's going very fast now. It   does.

    Yeah. And when I talk to other leaders in  AI, one of the things that they refer to is the  

    40:56

    industrial revolution. They say, "I chose 2050  because I've heard people talk about how by then   the change that we will have gone through will  be like the industrial revolution, but quote 10   times bigger and 10 times faster." The industrial  revolution gave us modern medicine and sanitation  

    41:12

    and transportation and mass production and all all  of the conveniences that we now take for granted.   It also was incredibly difficult for a lot of  people for about 100 years. If this is going to   be 10 times bigger and 10 times faster if we keep  reducing the timelines that we're talking about   here, even in this conversation, what does that  actually feel like for most people?

    And I think  

    41:32

    what I'm trying to get at is if this all goes the  way you hope, who still gets hurt in the meantime?   I don't I don't really know what this is going  to feel like to live through. Um I think we're  

    41:49

    in uncharted waters here. Uh I do believe in  like human adaptability and sort of infinite   creativity and desire for stuff and I think  we always do figure out new things to do but   the transition period if this happens as fast  as it might and I don't think it will happen  

    42:05

    as fast as like some of my colleagues say the  technology will but society has like a lot of   inertia. Mhm.

    people adapt their way of living.  Yeah. Surprisingly slowly.

    There are to classes   of jobs that are going to totally go away and  there will be many classes of jobs that change  

    42:21

    significantly and there'll be the new things in  the same way that your job didn't exist some time   ago. Neither did mine.

    And in some sense, this  has been going on for a long time. And you know,   it's it's still disruptive to individuals, but  society has gotten has proven quite resilient  

    42:37

    to this. And then in some other sense like we  have no idea how far or fast this could go.   And thus I think we need an unusual degree  of humility and openness to considering

    42:55

    new solutions that would have seemed way  out of the Overton window not too long ago.   I'd like to talk about what some of those could  be because I'm not a historian by any means, but   the first industrial revolution, my understanding  is led to a lot of public health implementations  

    43:13

    because public health got so bad. Led to modern  sanitation because public health got so bad.   The second industrial revolution led to workforce  protections because labor conditions got so bad.   Every big leap creates a mess and that mess needs  to be cleaned up and and we've done that.

    And I'm  

    43:31

    curious, this is going to be it sounds like  an we're in the middle of this enormously. How   specific can we get as early as possible about  what that mess can be?

    What what are the public   interventions that we could do ahead of time to  reduce the mess that we think that we're headed  

    43:48

    for? I would again c I'm going to speculate for  fun but caveed by I'm not an economist even uh   much less someone who can see the future.

    I I it  seems to me like something fundamental about the  

    44:06

    social contract may have to change. It may not.  It may it may be that like actually capitalism   works as it's been working surprisingly well and  like demand supply balances do their thing and we   all just figure out kind of new jobs and new  ways to transfer value to each other.

    But it  

    44:25

    seems to me likely that we will decide we need  to think about how access to this maybe most   important resource of the future gets shared.  The best thing that it seems to me to do is to  

    44:40

    make AI compute as abundant and cheap as possible  such that we're just like there's way too much   and we run out of like good new ideas to really  use it for and it's just like anything you want   is happening. Without that, I can see like quite  literal wars being fought over it.

    But, you know,  

    44:55

    new ideas about how we distribute access to AGI  compute, that seems like a really great direction,   like a crazy but important thing to think about.  One of the things that I find myself thinking   about in this conversation is we often ascribe  almost full responsibility of the AI future that  

    45:14

    we've been talking about to the companies building  AI, but we're the ones using it. We're the ones   electing people that will regulate it.

    And so I'm  curious, this is not a question about specific,   you know, federal regulation or anything like  that, although if you have an answer there,   I'm curious. But what would you ask of the rest  of us?

    What is the shared responsibility here?  

    45:36

    And how can we act in a way that would help make  the optimistic version of this more possible? My   favorite historical example for the AI revolution  is the transistor.

    It was this amazing piece of   science that some science brilliant scientists  discovered. It scaled incredibly like AI does  

    45:56

    and it made its way relatively quickly into  every many things that we use. um your computer,   your phone, that camera, that light, whatever.  And it was a it was a real unlock for the tech   tree of humanity.

    And there were a period in time  where probably everybody was really obsessed with  

    46:14

    the transistor companies, the semiconductors of,  you know, Silicon Valley back when it was Silicon   Valley. But now you can maybe name a couple of  companies that are transistor companies, but   mostly you don't think about it.

    Mostly it's just  seeped everywhere. in Silicon Valley is, you know,   like probably someone graduating from college  barely remembers why it was called that in the  

    46:34

    first place. And you don't think that it was those  transistor companies that shaped society even   though they did something important.

    You think  about what Apple did with the iPhone and then   you think about what Tik Tok built on top of the  iPhone and you're like, "All right, here's this   long chain of all these people that nudged society  in some way and what our governments did or didn't  

    46:53

    do and what the people using these technologies  did." And I think that's what will happen with AI.   Like back, you know, kids born today, they they  never knew the world without AI. So they don't   really think about it.

    It's just this thing that's  going to be there in everything. and and they will   think about like the companies that built on it  and what they did with it and the kind of like  

    47:12

    political leaders the decisions they made that  maybe they wouldn't have been able to do without   AI but they will still think about like what this  president or that president did and you know the   role of the AI companies is all these companies  and people and institutions before us built up  

    47:29

    this scaffolding we added our one layer on top and  now people get to stand on top of that and add one   layer and the next and the next and many more And  that is the beauty of our society. We kind of all

    47:46

    I I love this like idea that society  is the super intelligence. Like no one   person could do on their own, what they're  able to do with all of the really hard work   that society has done together to like give  you this amazing set of tools.

    And that's  

    48:03

    what I think it's going to feel like. It's  going to be like, all right, you know, yeah,   some nerds discovered this thing and that was  great and you know, now everybody's doing all   these amazing things with it.

    So maybe the ask  to millions of people is build on it. Well,

    48:19

    in my own life, that is the feel as like this important societal contract.  All these people came before you. They worked   incredibly hard.

    They like put their brick in  the path of human progress and you get to walk  

    48:35

    all the way down that path and you got to put one  more and somebody else does that and somebody else   does that. This does feel I've done a couple  of interviews with folks who have really made   cataclysmic change.

    The one I'm thinking about  right now is with uh crisper pioneer Jennifer Dana  

    48:51

    and it did feel like that was also what she was  saying in some way. She had discovered something   that really might change the way that most people  relate to their health moving forward.

    And there   will be a lot of people that will use what she  has done in ways that she might approve of or   not approve of. And it was really interesting.  I'm hearing some similar themes of like, man,  

    49:09

    I I hope that this I hope that the next person  takes the baton and runs with it well. Yeah.   But that's been working for a long time.

    Not all  good, but mostly good. I think there's a there's   a big difference between winning the race and  building the AI future that would be best for the  

    49:28

    most people. And I can imagine that it is easier  maybe more quantifiable sometimes to focus on the   next way to win the race.

    And I'm curious when  those two things are at odds. What is an example  

    49:44

    of a decision that you've had to make that is  best for the world but not best for winning? I think there are a lot.

So, one of the things that we are most proud of is, many people say that ChatGPT is their favorite piece of technology ever, and that it's the

    50:02

    one that they trust the most, rely on the  most, whatever. And this is a little bit of   a ridiculous statement because AI is the thing  that hallucinates.

AI has all of these problems, right? But we have screwed some things up along the way, sometimes big time. But on the whole, I think as a user of ChatGPT, you get the feeling that it's trying to help you.

    It's trying to  

    50:21

like help you accomplish whatever you ask. It's very aligned with you.

    It's not trying to   get you to like, you know, use it all day. It's  not trying to like get you to buy something.   It's trying to like kind of help you accomplish  whatever your goals are.

And that is

    50:36

    like a very special relationship we have with our  users. We do not take it lightly.

There's a lot of things we could do that would grow faster, that would get more time in ChatGPT, uh, that we don't do, because we know that our long-term incentive is to stay as aligned with our users as possible. But there's a lot of short-term stuff we could do that would

    50:57

    really like juice growth or revenue or whatever  and be very misaligned with that long-term goal.   And I'm proud of the company and how little we  get distracted by that. But sometimes we do get   tempted.

    Are there specific examples that come  to mind? Any like decisions that you've made?

    Um

    51:15

well, we haven't put a sex bot avatar in ChatGPT yet. That does seem like it would get time spent.

    Apparently, it does.  I'm gonna ask my next question. Um,   it's been a really crazy few years.

You know, somehow one of the things that keeps coming

    51:32

    back is that it feels like we're in the first  inning. Yeah.

And one of the things that I would... I would say we're out of the first inning. Out of the first inning? I would say second inning.

I mean, you have GPT-5 on your phone, and it's, like, smarter than experts in every field. That's got to be out

    51:48

of the first inning. But maybe there are many more to come.

    Yeah. And I'm curious, it seems   like you're going to be someone who is leading the  next few.

    What is a way, what is a learning from  

    52:04

inning one or two, or a mistake that you made, that you feel will affect how you play in the next? I think the worst thing we've done in ChatGPT so far is, uh, we had this issue with sycophancy, where the model was kind of being too flattering to users. For most users it

    52:24

was just annoying, but for some users that had, like, fragile mental states, it was encouraging delusions. That was not the top risk we were worried about. It was not the thing we were testing for the most. It was on our list, but the thing that actually became the safety failing of ChatGPT was not

    52:42

the one we were spending most of our time talking about, which would be bioweapons or something like that. And I think it was a great reminder that we now have a service that is so broadly used that, in some sense, society is co-evolving with it.

    And  when we think about these changes and we think  

    53:03

about the unknown unknowns, we have to operate in a different way and have a wider aperture to what we think about as our top risks. In a recent interview with Theo Von, you said something that I found really interesting.

    You said there  are moments in the history of science where you  

    53:18

    have a group of scientists look at their creation  and just say, "What have we done?" When have you   felt that way? Most concerned about the creation  that you've built?

Um, and then my next question will be its opposite. When have you felt most proud?

    I mean there have been these moments of  

    53:36

awe where, uh, not, like, "what have we done" in a bad way, but, like, this thing is remarkable. Like, I remember the first time we talked to GPT-4. It was like, wow, this is really... this is

    53:52

an amazing accomplishment of this group of people that have been pouring their life force into this for so long. On a "what have we done" moment: I was talking to a researcher recently.

You know, there will probably come a time where our systems are, I don't want to say sentient,

    54:14

let's say emitting more words per day than all people do. Um, and, you know, already, like, people are sending billions of messages a day to ChatGPT and getting responses that they rely on for work or their life or whatever. And, you know, like,

    54:32

one researcher can make some small tweak to how ChatGPT talks to you, or talks to everybody, and that's just an enormous amount of power for, like, one individual making a small tweak to the model personality. Yeah.

Like, no person in history has been able to have billions of

    54:48

conversations a day, and so, you know, somebody could do something. But just thinking about that really hit me: this is, like, a crazy amount of power for one piece of technology to have. And this happened to us so fast that we've got to think about what

    55:06

it means to make a personality change to the model at this kind of scale. And, uh, yeah, that was a moment that hit me. What was your next set of thoughts? I'm so curious how you think about this.

Well, just because of, like, who that person was, we very much flipped into, like,

    55:27

what are the sort of... it could have been a very different conversation with somebody else. But in this case it was, like: what does a good set of procedures look like? How do we think about how we want to test something?

How do we think about how we want to communicate it? But with somebody else, it could have gone in a very philosophical direction.

    And it could have  

    55:42

gone in the direction of: what kind of research do we want to do to understand what these changes are going to make? Do we want to do it differently for different people?

So it went that way, but mostly just because of who I was talking to. To combine what you're saying now with your last answer: one of the things that I have heard about GPT-5, and I'm still playing with it, is

    56:01

that it is supposed to be less effusive, uh, you know, less of a yes man. Two questions.

What do you think are the implications of that? It sounds like you are answering that a little bit, but also, how do you actually guide it to be less like that?

    Here is a heartbreaking  

    56:22

thing. I think it is great that ChatGPT is less of a yes man and gives you more critical feedback.

    But as we've been making  those changes and talking to users about it,   it's so sad to hear users say like, "Please  can I have it back? I've never had anyone in   my life be supportive of me.

    I never had a  parent telling me I was doing a good job."  

    56:39

    Like I can get why this was bad for other people's  mental health, but this was great for my mental   health. Like I didn't realize how much I needed  this.

    It encouraged me to do this. It encouraged   me to make this change in my life.

Like, it's not all bad for ChatGPT to, it turns out, be encouraging of you. Now, the way we were doing it was bad, but something in that

    56:58

direction might have some value in it. How we do it: we show the model examples of how we'd like it to respond in different cases, and from that it learns the overall personality.
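To make "showing the model examples" concrete, here is a minimal sketch of what demonstration data for shaping a model's personality can look like, assuming a generic chat-style supervised fine-tuning setup. The prompts, responses, field names, and file name below are illustrative assumptions, not OpenAI's actual data or pipeline.

```python
import json

# Illustrative demonstration pairs: each shows a desired response for a
# tricky prompt (supportive but honest, rather than sycophantic).
demonstrations = [
    {
        "prompt": "I wrote this essay in ten minutes. It's perfect, right?",
        "ideal_response": "Your central idea is strong, but the second half "
                          "asserts more than it shows. One or two concrete "
                          "examples would make it much stronger.",
    },
    {
        "prompt": "Everyone at work is against me.",
        "ideal_response": "That sounds really hard. Before assuming intent, "
                          "is there a specific interaction we could look at "
                          "together?",
    },
]

def to_training_records(pairs):
    """Convert (prompt, ideal response) pairs into chat-style records,
    the general shape used for supervised fine-tuning."""
    return [
        {
            "messages": [
                {"role": "user", "content": p["prompt"]},
                {"role": "assistant", "content": p["ideal_response"]},
            ]
        }
        for p in pairs
    ]

# Write one JSON record per line (JSONL), a common fine-tuning input format.
with open("personality_examples.jsonl", "w") as f:
    for record in to_training_records(demonstrations):
        f.write(json.dumps(record) + "\n")
```

The idea behind this kind of setup is that the model generalizes the tone of a relatively small set of well-chosen demonstrations rather than memorizing them, which matches the description here of the model learning an overall personality from examples.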

What haven't I asked you that you're thinking about a lot that you want people to know? I

    57:16

    feel like we covered a lot of ground. Me, too.

    But  I want to know if there's anything on your mind. I don't think so.

    One of the things that I haven't  gotten to play with yet, but I'm curious about is  

    57:33

GPT-5 being much more in my life, meaning, like, in my Gmail and my calendar and my... like, I've been using GPT-4 mostly as an isolated relationship with it. Yeah.

    How would I expect my relationship  

    57:49

to change with GPT-5? Exactly what you said. I think it'll just start to feel integrated in all of these ways.

You'll connect it to your calendar and your Gmail, and it'll say, like, "Hey, I noticed this thing. Do you want me to do this thing for you?" Over time, it'll start to feel way more proactive.

    Um, so  maybe you wake up in the morning and it says,  

    58:08

    "Hey, this happened overnight. I noticed this  change on your calendar.

I was thinking more about this question you asked me. I have this other idea." And then, you know, eventually we'll make some consumer devices, and it'll sit here during this interview, and, you know, maybe it'll leave us alone during it, but after, it'll say, "That was great, but next time you should have asked Sam

    58:24

this," or, "When you brought this up, you know, he kind of didn't give you a good answer, so you should really drill him on that." And it'll just feel like it kind of becomes more like this entity that is this companion with you throughout your day. We've talked about kids and college graduates

    58:41

    and parents and all kinds of different people. If  we imagine a wide set of people listening to this,   they've come to the end of this conversation.

    They  are hopefully feeling like they maybe see visions   of moments in the future a little bit better. What  advice would you give them about how to prepare?  

    58:59

The number one piece of tactical advice is: just use the tools. The most common question I get asked about AI is, what should I, how should I help my kids prepare for the world?

What should I tell my kids? The second most common question is, how do I invest in this AI world?

    But stick with  that first one. Um I am surprised how many people  

    59:21

ask that and have never tried using ChatGPT for anything other than, like, a better version of a Google search. And so the number one piece of advice that I give is just try to get fluent with the capability of the tools.

Figure out how to use this in your life. Figure out what to do with it.

    And I think that's probably the most  important piece of tactical advice. You know,  

    59:38

    go like meditate, learn how to be resilient and  deal with a lot of change. There's all that good   stuff, too.

    But just using the tools really  helps. Okay.

I have one more question that I wasn't planning to ask, but I just... Great. In doing all of this research beforehand, I spoke to a lot of different kinds of folks. I spoke to a lot of people that were building

    59:56

tools and using them. I spoke to a lot of people that were actually in labs, trying to build what we have defined as super intelligence.

    And it did seem like there were   these two camps forming. There's a group of  people who are using the tools like you in this  

1:00:15

conversation and building tools for others, saying this is going to be a really useful future that we're all moving toward. Your life is going to be full of choice, and we've talked about my potential kids and their futures. Then there's another camp of people that are building these tools that are saying it's going to kill us all.

    And I'm curious how that cultural  

1:00:35

disconnect... like, what am I missing about those two groups of people? It's so hard for me to wrap my head around. You are totally right.

    There are people who say this   is going to kill us all and yet they still are  working 100 hours a week to build it. Yes.

    And  

1:00:54

I can't really put myself in that headspace. If that's what I really, truly believed, I don't think I'd be trying to build it.

    One  would think, you know, maybe I would be like   on a farm trying to like live out my last days.  Maybe I would be trying to like advocate for it  

1:01:11

    to be stopped. Maybe I would be trying to  like work more on safety, but I don't think   I'd be trying to build it.

    So, I find myself just  having a hard time empathizing with that mindset.   I assume it's true. I assume it's in  good faith.

    I assume there's just like   there's some psychological issue there I don't  understand about how they make it all make sense,  

1:01:29

but it's very strange to me. Do you have an opinion?

You know, because I always do this. I ask for sort of a general future, and then I try to press on specifics.

    And when you ask people  

1:01:45

for specifics on how it's going to kill us all... I mean, I don't think we need to get into this on an optimistic show, but you hear the same kinds of refrains. You think about, you know, something, uh, trying to accomplish a task and then over-accomplishing that task.

Um, you hear about, sort of... I've heard you talk about a sort of general, um, overreliance, sort of an understanding

1:02:05

that the president is going to be an AI, and maybe that is an overreliance that we, you know, would need to think about. And, you know, you play out these different scenarios. But then you ask someone why they're working on it, or you ask someone how they think this will play out, and... I just, maybe I haven't spoken to enough people yet.

Maybe I don't fully understand this

1:02:25

cultural conversation that's happening. Um, or maybe it really is someone who just says: 99% of the time I think it's going to be incredibly good, 1% of the time I think it might be a disaster, and I'm trying to make the best world.

That I can totally... if you're like, hey, 99% chance incredible, 1%

1:02:41

chance the world gets wiped out, and I really want to work to maximize, to move that 99 to 99.5.

    That   I can totally understand. Yeah, that makes sense.  I've been doing an interview series with some of   the most important people influencing the future.  Not knowing who the next person is going to be,  

1:02:58

but knowing that they will be building something totally fascinating in the future that we've just described. Is there a question that you'd advise me to ask the next person, not knowing who it is? I'm always interested in, without knowing anything about them: of all of the things you could spend your time and energy on,

why did you pick

1:03:18

    this one? How did you get started?

Like, what did you see about this before everybody else? Most people doing something interesting sort of saw it earlier, before it was consensus. Yeah. Like, how did you get here, and why this?

    How would you answer that question?

1:03:33

    I was an AI nerd my whole life. I came to college  to study AI.

I worked in the AI lab. Uh, I watched sci-fi shows growing up, and I always thought it would be really cool if someday somebody built it.

    I thought it would be like the  most important thing ever. I never thought I was   going to be one to actually work on it and I feel  like unbelievably lucky and happy and privileged  

1:03:56

that I get to do this. I feel like I've come a long way from my childhood.

But there was never a question in my mind that this would be the most exciting, interesting thing. I just didn't think it was going to be possible.

    Uh, and when  I went to college, it really seemed like we were  

1:04:11

very far from it. And then in 2012, the AlexNet paper came out, done, you know, in partnership with my co-founder, Ilya.

    And for the first time, it  seemed to me like there was an approach that might  

1:04:26

work. And then I kept watching for the next couple of years as it scaled up and scaled up, got better and better.

And I remember having this thing of, like, why is the world not paying attention to this? It seemed obvious to me that this might work.

    Still a low chance, but it might work. And  

1:04:42

    if it does work, it's just the most important  thing. So like this is what I want to do.

And then, unbelievably, it started to work. Thank you so much for your time.

    Thank you very much.