How to build AGI: What's missing? | Demis Hassabis and Lex Fridman

Transcript

00:00

So if we look at an AGI system, sorry to bring it back up, but AlphaEvolve is super cool. So AlphaEvolve enables, on the programming side, something like recursive self-improvement,

00:16

potentially. Can you imagine what that AGI system, maybe not the first version but a few versions beyond that, would actually look like? Do you think it would be simple?

Do you think it would be something like a self-improving program, a simple one?

00:33

>> I mean, potentially that's possible. I would say I'm not sure it's even desirable, because that's a kind of hard takeoff scenario.

But these current systems, like AlphaEvolve, have, you know, a human in the loop deciding on various things. They're separate hybrid systems that interact.

00:49

One could imagine eventually doing that end to end. I don't see why that wouldn't be possible, but right now I think the systems are not good enough to do that, in terms of coming up with the architecture of the code.

And again, it's a little bit connected to this idea of coming up with

01:05

a new conjecture or hypothesis. They're good if you give them very specific instructions about what you're trying to do.

But if you give them a very vague, high-level instruction, that wouldn't work currently. And I think that's related to this idea of "invent a game as good as Go," right?

01:21

Imagine that was the prompt. That's pretty underspecified.

So the current systems wouldn't know, I think, what to do with that, how to narrow it down to something tractable.

And I think it's similar with "just make a better version of yourself"; that's too unconstrained. But we've done it, as you know, with

01:38

AlphaEvolve, on things like faster matrix multiplication. So when you hone it down to a very specific thing you want, it's very good at incrementally improving that. But at the moment these are more like incremental improvements, small iterations, whereas if you wanted a big

01:55

leap in understanding, you'd need a much larger advance. >> Yeah.
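For context on the faster-matrix-multiplication example: the classic illustration of a non-obvious algorithmic improvement in this space is Strassen's 1969 trick, which multiplies 2x2 matrices with 7 scalar multiplications instead of the naive 8 (AlphaEvolve's reported results improve on this line of work for larger block sizes). A minimal sketch of the Strassen base case, as illustration only, not something from the conversation:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969). Applied recursively
    to blocks, this yields an O(n^2.81) algorithm."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4,           m1 - m2 + m3 + m6))

# Matches the ordinary definition of matrix product
C = strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8)))
```

Saving one multiplication per 2x2 block looks incremental, but compounded recursively it changes the asymptotic exponent, which is exactly the kind of narrow, well-specified target the conversation says these systems are good at.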

But, to push back against the hard takeoff scenario, it could also just be a sequence of incremental improvements, like matrix

02:10

multiplication: it has to sit there for days thinking about how to incrementally improve a thing, and it does so recursively, and as you do more and more improvement it'll slow down. So the path to AGI won't be a sudden takeoff; it'll be a gradual improvement

02:29

over time. >> Yes.

If it was just incremental improvements, that's how it would look. So the question is, could it come up with a new leap, like the transformer architecture? Could it have done that back in 2017, when, you know, we did it and Brain did it? And it's not clear that these systems, something

02:46

like AlphaEvolve, would be able to make such a big leap. For sure these systems are good; we have systems, I think, that can do incremental hill climbing. And there's a bigger question about whether that's all that's needed from here, or whether we actually need one or two more big breakthroughs
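To make the "incremental hill climbing" idea concrete, here is a toy Python sketch (my illustration, not anything from the conversation): a loop that keeps a random mutation only when it improves the score. By construction it climbs the local S-curve it's on but never makes a discontinuous leap.

```python
import random

def hill_climb(score, candidate, mutate, steps=1000):
    """Incremental hill climbing: propose a mutation, keep it
    only if it scores better. Converges to a local optimum."""
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best)
        trial_score = score(trial)
        if trial_score > best_score:
            best, best_score = trial, trial_score
    return best, best_score

# Toy objective with a single peak at x = 3
random.seed(0)
x, s = hill_climb(lambda v: -(v - 3) ** 2,
                  candidate=0.0,
                  mutate=lambda v: v + random.uniform(-0.5, 0.5))
```

The loop reliably inches toward the peak, but if the objective had a second, higher peak far away, nothing in this procedure could jump the valley to reach it; that gap is the "leap" being discussed.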

03:01

>> And can the same kind of systems provide the breakthroughs also? So make it a bunch of S-curves: incremental improvement, but also, every once in a while, a leap.

>> Yeah. I don't think anyone has systems that have shown, unequivocally, those

03:16

big leaps, right? We have a lot of systems that do the hill climbing of the S-curve that you're currently on.

>> Yeah. And move 37 would be a leap.

>> Yeah, I think that would be a leap. Something like that.