The Co-Intelligence Revolution: How Humans and AI Co-Create New Value | Talks at Google


Category: AI Innovation

Tags: AI, Co-Intelligence, Education, Infrastructure, Risk

Entities: Adobe Firefly, Google, IIT Madras, ITC, itihaasa Research and Digital, Jugalbandi, Krishnan Narayanan, L'Oreal, Nvidia, Open Evidence, Sharath Bulusu, The Co-Intelligence Revolution, University of Michigan, Venkat Ramaswamy


Summary

    Introduction and Context
    • Sharath Bulusu introduces the session, highlighting his experience with technological shifts at Google.
    • The session features authors Venkat Ramaswamy and Krishnan Narayanan discussing their book 'The Co-Intelligence Revolution'.
    • The book explores the impact of artificial intelligence on people, organizations, and society.
    Co-Intelligence and Human Interaction
    • Venkat Ramaswamy explains the concept of co-intelligence as a synergy between human and AI intelligence.
    • AI systems now engage with humans in natural language, marking a shift in human-computer interaction.
    • Krishnan Narayanan emphasizes the importance of life experiences and the concept of the 'life expverse'.
    Infrastructure and Tokenized Digital Intelligence
    • The session discusses the role of shared digital infrastructure in enabling AI applications.
    • Tokenized digital intelligence (TDI) is likened to raw materials in the industrial revolution.
    • Examples include L'Oreal's use of TDI for personalized beauty products.
    Risks and Responsible AI
    • The speakers discuss the importance of managing risks and ensuring responsible AI development.
    • They highlight the need for transparency, privacy, and security in AI applications.
    • AI should be designed to enhance human capabilities, not replace them.
    Educational Implications
    • AI in education should focus on enhancing critical thinking and problem-solving skills.
    • Educators must design assignments that encourage interaction with AI rather than outsourcing thinking.
    • AI can personalize learning paths and improve student engagement.
    Actionable Takeaways
    • Embrace co-intelligence by integrating AI to enhance human creativity and problem-solving.
    • Design AI systems that prioritize human experiences and contextual interactions.
    • Develop shared digital infrastructure to support scalable AI applications.
    • Manage AI risks by ensuring transparency, privacy, and secure user interactions.
    • Incorporate AI in education to foster critical thinking and personalized learning experiences.
    • Utilize digital twins for real-time simulation and risk management in various industries.
    • Foster inclusive ecosystems by involving users in the co-creation of AI-driven solutions.

    Transcript

    00:00

    Good morning everyone. Uh welcome to Talks at Google.

    Uh I'm Sharath Bulusu. I work on   the Google Pay product in India.

    Um and I've been  working at Google since 2005 on different things,  

    00:19

    did stuff outside Google for a while as well, but  on and off. I've been associated with Google for   about 11 years now.

    And over the course of these  years, we've seen a lot of technological shifts.   I went from being a PM who did not have to worry  about small mobile screens to worrying about them  

    00:36

    uh not worrying about cloud to worrying about uh  things happening in the cloud. Thinking about AI   and ML that happens somewhere in a corner  to it being right in the middle of almost   everything that we do.

    Uh so with this context  I'm actually really really excited that we have  

    00:52

    two brilliant authors here today who've been  thinking about how the latest technological   shift changes the world. I have with me here  professor Venkat Ramaswamy and uh we also have   Krishnan Narayanan.

    Uh they're co-authors of a new book called The Co-Intelligence Revolution

    01:12

    uh that came out just earlier this year. Um, and  it's fascinating that they've not only thought   about what does the change that's being driven by  artificial intelligence do, they're also thinking   about what does this mean for people, what does  it mean for organizations, what does it mean for  

    01:29

    society. So what I was struck by as I looked at  the book was not just that it addresses questions   like where could this technology take us and think  through some examples of real world impact that's   been created in different companies uh including  some of the time they spent here at Google  

    01:47

    uh during their research phase talking to Googlers about how this stuff has come together. But a little bit about the authors before we go ahead: Professor Venkat is a distinguished professor at the University of Michigan's Ross School of Business. Um, an award-winning author who

    02:05

    uh in 2004 wrote a book, The Future of Competition, and since then his later books have focused on the concept of co-creation, and I think it's very beautiful how that arc continues now to him talking about not just co-creation between people and organizations but also between

    02:24

    people, organizations and machine intelligence. Um, Krishnan also is an award-winning author. Uh he's co-founder and president of itihaasa Research and Digital. Uh and he's been studying how technology has evolved in India.

    Uh and one of his books, Against All Odds: The IT Story of India, is a

    02:45

    great recounting of how uh the beginning of the  IT industry and its progress in India shaped a   lot of what has happened here. uh many of us who  are involved with different aspects of uh digital   public infrastructure in India know what some  of the personalities from that space have done  

    03:02

    uh and so it's been fascinating to see that and  also his later book on empowering India. Uh today   we we're exploring this whole idea of what happens  if you bring humans and computers together in this   completely new paradigm.

    Um I want to start  off with a few questions for the authors and  

    03:17

    then we'll take uh questions from the audience as well. Um, I think the first thing that I would probably start talking about, and maybe a question for you, but Krishnan please chime in as well.

    You talk about this shift  from artificial intelligence which is still the  

    03:36

    hot topic the the word that dominates a lot of  headlines to talking about co-intelligence in   your book. uh maybe it's helpful for our audience  to understand what you mean by co-intelligence and   that the premise of the book is centered around  that.

    So so maybe we start there. Sure.

    So first  

    03:53

    uh thanks Sharath and thanks to all the folks uh at Talks at Google uh for this opportunity uh to share our thoughts based on the book. Uh actually our starting point therefore is uh natural intelligence, which we humans are endowed with. Uh and uh the way we approach uh the book

    04:14

    is that uh you know humans uh have had enormous  creative capacities. If you go back in history   uh we have created amazing things right including  AI.

    And uh so for us the starting point is uh that  

    04:29

    creative capacity that we as humans have but also  the fact that we subjectively experience the world   around us. We are sitting here you know looking  at uh the room around us uh and uh that context of   that world experience which is highly subjective  uh and uh we have our own life world experiences  

    04:51

    uh that's very important to recognize because that  brings the kind of context uh in terms of how we   interact and engage the with the world right both  at an individual level you know psychologically   socially culturally right uh and you know there  are various uh you know aspects to that subjective  

    05:08

    experience. So uh what we find fascinating is  that uh uh we are now at a point where you know   as we are engaging with the world suddenly now  we have on the other side uh AI systems that can   actually engage with us.

    I just want to underscore  that part of engage with us because um this book  

    05:30

    really started post generative AI. I mean that's where it's placed.

    uh that was the starting point   because where AI systems for the first time  could understand us in our natural language.   We think that's just amazing. It never happened in  humanity where you have systems of intelligence if  

    05:47

    you want to use that word. It's a different kind  of intelligence.

    That's the way we look at it in   the book. Um and so for the first time um you know  you didn't need to understand computer languages,   right?

    Uh we all grew up with learning programming  languages but in some sense now the programming   language we argue is the human experience. We can  just talk to it based on where we are in our you  

    06:07

    know lives like because what we want to accomplish  how we want to engage the world and bring that   sensibility and aspiration and desires and the way  in which we want the system to engage with us so   that it creates value in the way in which we think  about value. For a long time the value was created  

    06:26

    in our goods and services that were delivered to  us. If you go back like over 100 years right you   have factories that produce goods and services.  So we had this exchange paradigm where through the   process of exchange value gets created.

    We think  we're now into this new paradigm where value gets   interactively created. It is like enacted.

    It  is emergent from all of the interactions that  

    06:46

    we have in the real world, the physical world,  but where now these systems are engaging with us   uh both in the digital realm, you know, the  digital intelligence but increasingly getting   embodied uh in in in our physical world. So the  co-intelligence to now answer your question is  

    07:05

    this kind of synergy between these uh intelligent AI systems, which bring that different intelligence but that can engage and co-create with human intelligence. Uh so that's the co-intelligence. That's very interesting; as you were speaking I jotted down this phrase uh life

    07:25

    experiences because you talk about that a lot in  the book uh and you also introduce this concept   of a life expverse. Now it's interesting that the  book at least my impression as I read through it   was I started by thinking it was about technology  and then I realized more and more and more that  

    07:44

    it was about human-centric concepts. It is about how the technology might affect us, but more of the book felt like, you know, you're talking about how does the ecosystem change because of this human-centric thing.

    Now again, the life expverse is a phrase you have come up with.

    08:02

    I'd love to hear from you both what what do  you mean by it and how should people think   about it as they're thinking about applying  this kind of technology for innovation. So   maybe what I'll do so I'll just take an example  and then invite Krishnan to you know jump in.  

    08:19

    uh so we actually kick off the book uh with the example of something called Jugalbandi, which uh is actually an Indian word which means creative improvisation, which actually beautifully captures the idea of co-intelligence, and uh just to give you the backstory on Jugalbandi

    08:36

    uh so about a week to 10 days after ChatGPT was released on November 30th 2022, uh some volunteer developers in India in Bengaluru, um some of whom actually were working on, you know, language

    08:53

    translation uh there was a government service called Bhashini, so some of them had that expertise, uh basically took uh ChatGPT and then, along with Bhashini, uh built an application inside of WhatsApp

    09:10

    uh and invoking uh Azure OpenAI services to create this application for farmers, uh and they did this pilot in a village in India, in Biwan in Haryana, and what happened there was that uh farmers could speak in the natural language and this was essentially um grounded in

    09:30

    the government's public uh benefit schemes which  a lot of the farmers aren't even aware that they   exist and whether they qualify etc. Now in India  for those that have been kind of following the   kind of the uh uh digital India story uh there's  a a digital public infrastructure that has been  

    09:48

    built over the past decade and uh as one of the  foundational layers there is an authentication of   your identity. uh so that's important because that  helps this application you know identify farmers   right and since then a lot of KYC uh know your  customer type applications are being built but in  

    10:07

    this context the farmer can just speak to it uh in their own natural language, and there are, you know, over 22 uh official languages in India with lots of dialects and so on, and so that's where that uh Bhashini came in, but the point is a farmer can just be himself and just say hey, you know, am I eligible

    10:29

    for any benefit schemes, I mean just literally talk like that in natural language, and then the system uh has the intelligence to then interpret what he's saying and then see whether he's eligible based on where he is and so on, and what is interesting there is the system let's say says you're eligible

    10:45

    and it actually came back to the farmer and said  yes you're eligible and he said okay so you know   how do I get the money now thanks to the digital  public infrastructure those rails already exist   in terms of you know if you qualify for the money  to be transferred but now there is a process here   where you have to fill out forms right so so go  ahead and fill these forms you know maybe in PDF  

    11:05

    and maybe the farmer doesn't even know what PDF is, right, so the story actually is the farmer said, you know, go do it for me. Now today we're in the world of what we call agentic AI, we have agents that do it for us, and actually this application three years later will do it for the farmer, but I think that's amazing because all of a sudden you unlock uh at population scale just the ability for

    11:27

    uh you know a farmer in a remote village in India to now uh participate in this revolution on the one hand, but actually now the government can directly, from their perspective, create value for the farmer, and then just to finish the story uh it then evolved from

    11:44

    this uh initial demo to a pilot to actually uh a government app called PM Kisan, Kisan meaning farmer, uh and then now uh there is an additional set of apps that have come in uh which now provide agricultural services to farmers. It started with these government public benefit schemes uh but now the

    12:04

    let's say the farmer now has the money okay hey  I can now utilize that to enhance my uh income   and so there are now various platforms being  built on top of it especially by the private   sector and one of the examples we feature is  ITC it's agri business so this just unlocks  

    12:19

    uh all this value space uh which we really haven't tapped into, which is in this thing we call the life expverse, which is a neologism, which we basically say is uh a combination of physical, digital, virtual realms, but it's situated in the kind of natural, societal, economic uh ecosystems

    12:40

    uh that you know we inhabit, like this life world, right?
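    (A minimal sketch, in Python, of the kind of pipeline the Jugalbandi example describes: local-language speech comes in, and an answer grounded in a catalogue of benefit schemes goes back out in the farmer's language. The scheme data, the eligibility rule, and names such as transcribe_and_translate are illustrative assumptions, not the actual Bhashini, ChatGPT, or Azure OpenAI interfaces.)

    from dataclasses import dataclass

    @dataclass
    class Scheme:
        name: str
        max_landholding_ha: float   # illustrative eligibility cutoff
        states: set

    # Toy catalogue standing in for the government benefit-scheme data the
    # real application is grounded in.
    SCHEMES = [
        Scheme("PM-Kisan income support", 2.0, {"Haryana", "Maharashtra"}),
        Scheme("Crop insurance subsidy", 5.0, {"Haryana"}),
    ]

    def transcribe_and_translate(audio_bytes, source_lang):
        """Placeholder for the speech-to-text and translation step (Bhashini-like)."""
        raise NotImplementedError("wire up a real speech/translation service here")

    def eligible_schemes(state, landholding_ha):
        # Ground the answer in structured scheme data, not the model's memory.
        return [s for s in SCHEMES
                if state in s.states and landholding_ha <= s.max_landholding_ha]

    def answer_farmer(state, landholding_ha, reply_lang):
        """Compose a reply; a real system would phrase and translate this with the LLM."""
        matches = eligible_schemes(state, landholding_ha)
        if not matches:
            return f"[{reply_lang}] No matching schemes found for your details."
        names = ", ".join(s.name for s in matches)
        return f"[{reply_lang}] You appear to be eligible for: {names}. Shall I start the application?"

    if __name__ == "__main__":
        # A farmer in Haryana with 1.5 hectares, answered in Hindi.
        print(answer_farmer("Haryana", 1.5, reply_lang="hi"))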

    So uh thanks Sharath and thanks Google for inviting us. So, you know, I just want to give you a little backstory before I come to this life expverse, because, you know, I come from a tech world, in the sense that at itihaasa we deal with uh

    12:58

    AI research and so on and so so if you look at the  sort of the the demand side and the supply side of   the of this equation right I come from the supply  side the the AI side but then we've heard the   story about uh you know we have to put humanity  ahead of technology right and so I had that phrase  

    13:18

    with me as something which I knew people said, but then this process of uh of co-creation with Venkat, it really... and this book answers that question, what does that mean, right, and so the life expverse for me, I

    13:33

    mean you you take any any kind of uh situation  uh it could be uh uh you know beauty context   uh user wants some particular kind of uh product  foundation or a lipstick for a particular context.  

    13:50

    That's one life experience. Uh a a context it  could be a uh worker in a factory.

    Uh there's some problem going on and in the flow of this work, at this moment, I need some solution for this problem. That's another uh you know element of the life expverse.

    could be a teacher in a in a school  

    14:11

    discovering that the child has not learned well  and now saying okay what can I do now to create   a unique uh learning plan for the for the child  that's another example in this life expverse and   so how can the uh the AI system now come in the  flow of the work not have the the human go outside  

    14:37

    away from the flow of the work so to speak,  right? And in the flow of the work, how do we   uh how do we now involve the the person as a  creative experiencer?

    And uh so that's the thing that I want us to think about when we think of the life expverse. No, this

    14:54

    is fascinating and I think we'll come back to this notion of life experiences and the life expverse uh as we go ahead in the conversation. But I also wanted to touch on the fact that you made references to digital public infrastructure.

    you  spoke about the fact that there are certain types  

    15:09

    of systems that make this possible, right? I mean, for something like PM Kisan to happen, a number of pieces had to fall into place, not just the AI that was used, uh, communication medium and so on.

    And talking about infrastructure, you extend  that concept. You talk about shared digitalized  

    15:30

    uh, intelligence and you talk about tokenized  digital intelligence. I want to understand a   little more about what you meant by that and uh  you know how we should think about that as we're   thinking about all the infrastructure that that  permeates.

    Sure. So uh let me get started and you  

    15:45

    know again uh Krishnan feel free to add. So if you go back to the story, I basically shared... the question you can ask there is, like, how did this all suddenly happen, right, this ChatGPT

    16:01

    moment. So now we have to bring in another kind of uh entity into the story, and that's Nvidia. So as we know, you know, it was because of Nvidia and accelerated computing, and actually uh Jensen Huang going to OpenAI presenting uh the DGX-1, uh as we started now uh building out uh these

    16:26

    uh you know electronic neural networks, right, uh which is part of the magic there, right, in terms of these transformer models which came out of Google, right, so I think that's important for people to recognize. But I think the way in which I think most people can try to understand it is

    16:41

    that we are now for the first time also building AI factories which kind of produce these tokens of uh intelligence, right. So think of it as, you know, in the industrial revolution we used lots of uh raw materials to actually even generate electricity, right, which then spurred uh a lot of

    17:02

    further innovations uh in the revolution. Now  electricity is kind of like the input and the   output are tokens.

    So energy is coming in and then tokens are coming out. But these tokens, which we experience right as text, image, video, audio files, or even uh in the case of the recent Nobel Prize, right, you know, protein structures, right, uh AlphaFold, Google

    17:22

    um that's very remarkable, the fact that uh you can uh essentially uh use floating-point numbers uh to actually build these kinds of representations which we find useful, but the point is they are like raw materials right now, they are things that we now bring into, like we were saying earlier,

    17:40

    our context of using it to bring value for us in terms of our engagements with the world around us uh in this life expverse. So uh I'm glad you pointed out that these tokens of digitalized intelligence are very important because uh they are the units of intelligence that we have to kind

    18:00

    of compose with uh to create various new forms  of value and and therefore uh the infrastructure   the the AI infrastructure now uh which obviously  incorporates all of this what we're finding is   in that example uh it's very important uh to have  what we call a shared digitalized infrastructure  

    18:19

    because no one person is going to be able to  build it. Even if you're a private company,   uh uh we need to kind of if we go back to the  industrial revolution, you know, highways were   built uh and then you know the uh the public  sector participate in that process.

    So there's   a different kind of engagement between like the  public, private and the plural sectors that is  

    18:40

    happening like in that example which allows us  to build this foundational infrastructure not   just in terms of the foundational AI models but  what you mean is this intelligence infrastructure   which actually makes all this possible on top  of which all these engagements are taking place   and values getting enacted which in turn  drives lots of impacts right at speed at  

    19:00

    scale you know in various uh in in in various  arenas like Krishnan was mentioning all those   uh what we call use case applications right uh  and trying to do that sustainably because there's   also this question about we're using enormous  amounts of energy so to kind of summarize what  

    19:15

    we're seeing here is that we're we're unlocking  this new value space but absolutely that's being   driven by this ability to actually bring in this  tokenized digital intelligence which we have to   think about how does it parlay into various  offerings in terms of what we're creating but  

    19:31

    also how we create you know all across the value  chain and so that's like foundational to this   uh co-creation of value and if I can give one one  sort of example take the beauty thing that I that   I talked about okay so uh and and in the book we  talk about L'Oreal as an example and and so you  

    19:50

    know they have a number of offerings, uh but let's look at a couple of them. There's something called Beauty Genius, where somebody would have a conversation, a consumer may have a conversation with Beauty Genius. Now there are words which they describe, saying look, I've just come back, uh I

    20:05

    feel dry I you know whatever I I need something  now all these words get translated in some way   for the the system to make a recommendation  right and so that's one one aspect of that   uh how the TDIs get into action but there is  something more because TDIs also get into a  

    20:26

    an actionable intelligence if you will and so  they they have another product called Perso   Perso is uh uh you know actually a physical  factory if you will right the AI factory in   the back end is now capturing this information uh  it might even say look here's a picture of my face  

    20:48

    and so that's another piece of information that's coming in; there are some sensors, it'll sense the humidity in the place, so a user using Perso in Bangalore versus using it in Chennai, the output that you want, you require that to be different because the humidity levels are

    21:06

    different in these two places. And so all these  information now are translated in the form of   TDIs for you know it could be we we kind of call  it the hydration TDI if you will but that's what   L'Oreal is now translating all this information  that is captured to now create a a specific  

    21:27

    foundation or a lipstick for you. So that's an example of a TDI in action, for instance.
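    (A toy sketch of how a Perso-like device might fold a user's words and a local humidity reading into a single "hydration TDI" that selects a formulation. The weights, thresholds, and field names are invented for illustration and are not L'Oreal's actual model.)

    # Toy illustration only: combining a user's words and local sensor data into
    # a single "hydration TDI" a Perso-like device could act on.
    DRYNESS_WORDS = {"dry", "tight", "flaky", "dehydrated"}

    def hydration_score(user_text, relative_humidity_pct):
        """Return a 0..1 hydration need: higher means dispense a richer formula."""
        words = {w.strip(".,!").lower() for w in user_text.split()}
        complaint = len(words & DRYNESS_WORDS) / len(DRYNESS_WORDS)
        # Dry ambient air pushes the score up; humid air pulls it down.
        ambient = max(0.0, min(1.0, (60.0 - relative_humidity_pct) / 60.0))
        return round(0.7 * complaint + 0.3 * ambient, 2)

    def formulation(score):
        if score > 0.45:
            return "rich moisturising base"
        if score > 0.25:
            return "standard base"
        return "light, mattifying base"

    if __name__ == "__main__":
        # Same request, different humidity: Bengaluru vs. a much drier city.
        for city, rh in [("Bengaluru", 75.0), ("a dry winter city", 30.0)]:
            s = hydration_score("I just got back and my skin feels dry and tight", rh)
            print(city, s, "->", formulation(s))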

    No, this is fascinating. You know, as a product person I approach this with a lot of optimism, but at the same time what I'm finding personally, even though I see the positive story in the examples that

    21:47

    you've given both of you across uh the last  few minutes of our conversation I also worry   a little bit about a lot of our intuition for how  products have worked and the impact they've had is   breaking right uh this happened at many different  technological revolutions right our our intuition  

    22:05

    for what happens when the speed of transport uh  for goods goes down the speed with which people   can move the speed with which I can communicate  the ease with which they can communicate breaks   in this case as you talk about co-intelligence  personally I'm finding it still quite hard to  

    22:22

    develop my intuition for the space and at Google  we talk often about a lot of the products that we   build how do we build them responsibly uh we are  aware that what we're building is very powerful   we care about it doing good for the world.  When you approach this, how how do you view  

    22:37

    the risks side of this? What do you worry about?  How do you how do you prepare for that world in   which you you do this more responsibly?

    Yeah.  So, so uh there are a couple of things. Let me   just unpack your question.

    So, the first one you  said you talked about intuition and so on. So,  

    22:56

    absolutely. So, I think this is why we call it the co-intelligence revolution. We use the word purposefully because there is a revolution, because in the traditional model, if you go back to the industrial revolution, the model was you had a value chain, right, which you control, and that gave us the quality revolution because, you know, when I give you, going back to the example,

    23:13

    a L'Oreal product, right, and give somebody else one, we want to ensure they're the same quality, so we had quality, Six Sigma, all of that, but we controlled the quality of the product and the process, right, because our focus was on creating that offering that came out of this value chain

    23:28

    Right? And so we could control that.

    Now we're  seeing something very different because now we   saying the starting point is is really not  here in the sense that yes we have we have   the setup right we have this when he said this  uh uh the L'Oreal the the physical factory what   he actually meant was that it is sitting in my uh  home right in the bathroom let's say right that's  

    23:48

    it's a little device it's got cartridges and so  on right um and now I'm actually talking to it   through this interface I have the L'Oreal has the  app and it understands kind of my needs Right and   like you said brings in all this information. So  there's a different kind of interface here where  

    24:05

    I'm interacting with it and it's figuring out  therefore you know what it should be dispensing to   me. Now there is still a value chain in creating  that but if you if you look at it now what I've   done is at this point of exchange now so just  me giving you that fixed product right and then  

    24:20

    I I I use it um now the product itself in this  context right uh changes as a function of how   the user wants the product that particular  day so it's a joint creation between the   human and this and absolutely this is uh very uh  difficult and challenging because all of a sudden  

    24:41

    uh I have just induced a lot of variability. If  you go back to the heart of the quality and six   sigma revolution, variability is the enemy, right?  You want to root out variability.

    But variability   in terms of the quality of the product or process  that still is there because you want you want the  

    24:58

    product to be of good quality. But what we  are saying here is a value is now a function   of the quality of the experience I create  interactively with it.

    Right? Uh so that space   uh Absolutely.

    It's it's it's highly variable  because everyone wants a very different kind of   personalized experience, right? And I can't it's  not something I control.

    In fact, it's the exact  

    25:18

    opposite. I need to embrace variability because  everyone will interact with it in various ways.   It opens up.

    It's almost like anti-Six-Sigma in terms of that interaction space. So, we do need new ways by which we can now think about how the interaction will take place.

    But  

    25:35

    it's not clear to me. Therefore, the intuition  that served us well uh in terms of uh how we   uh how we actually went about uh uh creating  our offerings in the past still works because  

    25:52

    uh in in in the old world of the um uh the way in  which you um develop these products uh you brought   intuition through your own research. Maybe you you  got some you know input feedback from customers.  

    26:08

    You had your own things in terms of what you  want to bring to the table. What I'm saying   is uh we need to and that may still exist but we  need to kind of understand that we need to have   that intuition is going to be co-developed right  and so we need to actually build tools where the  

    26:26

    uh customers and the users uh can actually uh  continue to inform us right not when we want to   build the products but when they're having their  ongoing experiences so I think that's going to   be a challenge but actually we're now building AI  tools where you can analyze for example a lot of  

    26:45

    uh sorry a lot of uh uh the uh you know people's  voices at scale right so people can give   uh voice feedback uh like the farmer can give  voice feedback right the uh uh the person who's   using this consumer can continue to give feedback  but it's not like feedback to like market research  

    27:03

    questions we ask, or you know, it's just they can just share, you know, what worked, what didn't work. Like the person can ask, just like ChatGPT, saying hey, you know, I made this for you yesterday, what was it like, right, and the person can come back and share what worked, not just "oh it was good"; they can just have that chat, but we can extract from it and intuit from it, right, uh, what that all means.
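    (A small sketch of what "extracting and intuiting" from open-ended feedback at scale could look like. A production system would hand the transcribed voice notes to an LLM; the keyword rules and aspect names here are a self-contained stand-in, assumed for illustration.)

    # Turning open-ended, conversational feedback into structured signals that can
    # be aggregated at scale.
    from collections import Counter

    ASPECT_CUES = {
        "texture": ["heavy", "greasy", "smooth", "light on the skin"],
        "longevity": ["wore off", "faded", "lasted all day"],
        "shade": ["too dark", "too light", "shade"],
    }

    def extract_aspects(feedback):
        text = feedback.lower()
        return {aspect for aspect, cues in ASPECT_CUES.items()
                if any(cue in text for cue in cues)}

    def aggregate(feedback_batch):
        """Count which aspects people keep bringing up across many conversations."""
        counts = Counter()
        for fb in feedback_batch:
            counts.update(extract_aspects(fb))
        return counts

    if __name__ == "__main__":
        batch = [
            "Loved the shade but it felt a bit heavy by the evening",
            "It wore off before lunch, otherwise really smooth",
        ]
        print(aggregate(batch))   # which aspects come up most often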

    27:25

    And that's why I think AI can also help us intuit. I think that's very important to understand along with all of this stuff. So it does open up this other space, but absolutely, in terms of risk, the last part to unpack your question, we are just creating more risk. So as we move into

    27:42

    the space, by definition we say risk is the other side of the coin. So you have to create risk-managed value is a phrase we use; that is, you have to now think about what are the new types and forms of risk, absolutely, uh because before we controlled all of them; this is a whole new territory, right,

    27:58

    uh and so what are the different types of risk all  the way from uh obviously privacy risk security   risk those are the obvious ones but also other  forms of risk that might uh uh uh now you have to   kind of think about uh like in terms of creating  that quality of the experience and in fact just  

    28:14

    just leading on from that point, in fact NIST identifies these 12 risks which are particularly accentuated by generative AI, right, and so there's in

    28:29

    fact an entire section about how we manage risk  how other companies manage risks and so on um I   mean take take something like uh Adobe's Firefly  okay now now the way we we featured that in the  

    28:45

    uh I'm saying the way uh way they managed the way  they trained the that uh text to image generation   engine was using licensed uh uh images right and  so the copyright aspect is one one example from  

    29:02

    there, for instance, where as an end user, if you want to be assured of the copyright, that's a way to handle it. Uh, I mean we do talk about Google buying Mandiant, you know, as a means of uh managing the cybersecurity risks and so on, but this is another place and

    29:22

    I'll just give one more example to illustrate this. One of the ways of managing this risk is now, you know, the code is the law, like Larry, you know. So an example would be like the DEPA

    29:38

    framework in India, where it's codified, or in Web3, where uh smart contracts now ensure that, you know, you do a deal and then, if the things are fulfilled, the contract is automatically, you know, executed, if you will. So these are examples of managing

    29:57

    such uh such risks which are very dynamic and  emergent. Yeah, it was interesting but uh when   you were answering the first part of the question  you also spoke about the way people interact   uh with AI.

    Now the easy ones for people to get  are you know the basic modes of communication. I  

    30:18

    not only can type in a question, I can speak it.  I can give it an image or a video. But I think   the next higher order is as you're co-creating  though it is not just about you know I gave one   input and got some other regular output back.  How do you see human AI interaction changing?  

    30:37

    uh is that, you know, is the intuition that you think about it as a replacement for some of the things that exist today, kind of, right? Uh there's a risk that you're going to outsource your thinking to the GPT, right? So let's take uh the business that at least I'm in, right, uh education

    30:57

    uh this is a huge risk, right, where right now the current uh thinking about that risk is, well, as students use GPT more and more, are they just outsourcing the thinking? Because one of the things we want the students to develop are critical thinking skills, right? We want them to be creative. We want them to be collaborative.

    We want them to also develop these  

    31:15

    critical thinking skills. So that poses a very  interesting challenge because on the one hand,   you know, in the traditional paradigm, we kind of  delivered courses to them, right?

    But they were   fairly passive. They were not participating in the  value that gets created.

    But as you move from that model to a learner-based model, right, uh where and how they want to learn, uh you would

    31:35

    imagine that hey, you know, ChatGPT is great, right, because it can be your personal tutor, it can help you learn, and you know obviously we have to design it in different ways. We give the example of Khanmigo, for instance, where they've designed it for active learning: it doesn't give you the answer, it kind of works with you until you get the answer, and the teacher might want

    31:52

    to design it, but you know, in terms of how long should that go before maybe you reveal the answer, sometimes we also learn from the answers and so on. So there's a lot of uh science and art that goes into designing these systems and ways in which, you know, the student uh feels

    32:07

    empowered to learn and is actually engaged and is learning. But that's a very different process, because in the old model, as a teacher it's very difficult for me to personalize the way I want to describe something so you understand it, because I don't know what will click with you.
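    (A brief sketch of the kind of active-learning tutor design mentioned with the Khanmigo example: the model is instructed to coach rather than answer, with a teacher-set limit on how long to withhold the solution. The prompt wording and the ask_model placeholder are illustrative assumptions, not any specific product's API.)

    TUTOR_SYSTEM_PROMPT = """You are a tutor. Do not state the final answer.
    Ask one guiding question at a time, respond to the student's reasoning,
    and give a small hint only when they are stuck. After {max_turns} exchanges
    without progress, you may walk through the solution step by step."""

    def ask_model(system_prompt, conversation):
        """Placeholder for a chat call to whichever model the school is using."""
        raise NotImplementedError("connect this to an LLM chat endpoint")

    def tutor_reply(conversation, max_turns=5):
        # The teacher sets max_turns per assignment; grading can then look at the
        # questions the student asked rather than only the final answer.
        prompt = TUTOR_SYSTEM_PROMPT.format(max_turns=max_turns)
        return ask_model(prompt, conversation)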

    But  

    32:22

    that's what the AI does very well, right? You can  ask it to explain something like give me a sports   analogy or you know help me understand it like  you know in this context.

    Uh so its ability to   uh actually uh change the way something is  described is phenomenal. Now now if you're  

    32:41

    if you're a teacher you know how do you look at  this? It's it's a risk.

    It seems like I'm going   to be replaced right? uh but if you actually see  the world that we're moving into we want to be   able to train students to actually be able to work  you know engage with the AI systems because in the  

    32:57

    workplace presumably now their task is to manage  design you know and enhance these systems even as   it enhance augments their own uh capabilities  right but if you keep that as the goal yes it   comes with challenges and risks but now it's very  interesting what it does is it forces you to think  

    33:16

    about hey, how do I therefore evaluate students, and to me, I've just started experimenting in my class where they all use ChatGPT, so what is my role now? Well, actually I have to design it in a way in which uh they enhance the learning, but the focus shifts from my

    33:33

    grading, like, answers, because ChatGPT gives answers to things, to, like, how well do you question. So, you know, how well do you craft questions, how well do you use the system to frame problems, individually working with ChatGPT or, you know, working in teams, right? So, it shifts the nature of the way

    33:51

    uh you think about assignments, right? And and  so because they're now interacting and so you're   evaluating how well they interact because in  the workplace presumably that's what they're   going to be doing in the future.

    So, in a way,  you're saying don't show me what you've done,   show me how you've thought about it. Yeah.

    Yeah.  But then you need to evaluate that, right? uh and  

    34:09

    so that is I think now that's where things are  evolving now uh in terms of being able to uh uh   allow them to learn you know at that pace in the  way in which they want to allow them to creatively   express their agency in the process. Uh but then  our role is to actually build this architecturally  

    34:31

    to actually facilitate that co-intelligent you  know co-creation of their learning experience.   I mean and once again just thinking out aloud  in terms of how and and presumably I mean in in   Google you must be working on these ideas but we  we briefly touch upon it in the book but things  

    34:48

    like a a world model does it understand like like  a baby understands from just looking at the world   and getting a sense of the of the world. How does  the AI understand the world model in in how it   uh you know responds to a situation?

    that could  be the next form of of interaction. some I mean  

    35:09

    I think even even for the AI system to say I  don't know I think that's the next level of   uh development right I mean I I like there is  no clear objective function right now I don't   know what to do here and so for the for the system  to come back to the user saying now tell me what  

    35:28

    you think, and having that dialogue, that's another form of evolution, if you will, for the AI system, in forms of uh, you know, potential dialogue that it could have with humans to understand. So maybe just pushing a little more in this direction.

    We've spoken a bit about how  

    35:46

    humans whether they are students or educators  or uh you know business professionals interact   with AI but I'm assuming as this goes along this  is going to have a lot of implications for how   our organizations are designed how they interact  with each other and these organizations are not  

    36:03

    necessarily only you know for-profit businesses.  This could have implications for governments,   for universities, for a whole range of  types of organizations. How do you think   about the impact of co-intelligence and that  becoming a more natural normal way of working,  

    36:21

    way of doing things? How does that impact the  design of organizations, how we manage them, how   we run them?

    Yes. So in in the book, we introduced  the idea of the organization being thought of as a   living system.

    So we call it a co-creative living  system organization, right? Because if you go back  

    36:39

    to kind of where we started, what we are saying is  that you know if you take our biological systems,   we are also very adapted, right? We are living  systems that we we interact and engage with the   world.

    So while the organization may have let's  say a digital brain, however the the organization   kind of processes a lot of this intelligence,  right? And uh engages back with people.

    The point  

    36:58

    is that uh we need to we need to build uh the  management systems in ways in which they function   like living systems. So what does that mean?

    Uh  clearly uh the systems have to be very adaptive   and very responsive at the individual level right  in teams. So we have already you know things like  

    37:16

    co-pilot and so on right in in in the flow of  work. Uh but that's just in terms of designing   workflows right and work artifacts and so on.  But if you were to step back and say and going   back to risk management, right?

    How do we govern  these systems? Uh so it puts a lot of emphasis   on governance not just strategy and executing  strategy.

    Uh so how do we build these governance  

    37:36

    uh you know architectures and management systems?  Uh and remember again going back to if you look at   product management systems right so now you have  to worry about experience quality. So what we do   in the book is to say depending on the context  if you're looking at supply chain management like   what changes there?

    If you're looking at product  management what changes there? If you're looking  

    37:53

    at marketing, sales, service, what changes there?  If you're looking at talent management, you know,   what changes there? So, I think if you look  at each set of like management activities,   we say, okay, how are the interactions changing  there?

    Because we're moving from just the the   activity itself to how people engage with AI  in that activity. So we need to think through  

    38:12

    at that kind of granular level at that micro  level to really say you know what what changes   need to be made because that ultimately gets then  reflected in the various management systems and   processes right so it's it's not like a top-  down approach as much as uh this bottom up uh  

    38:30

    redesign of management systems, but in a very co-creative way; in fact the systems themselves are getting co-created, and so the platforms that we uh build, right, in terms of whether it's performance management or various aspects of strategy management, all those uh will change, and so we take different types of management activities in the book and we feature examples of people who

    38:50

    uh with respect to that context are making that  shift in terms of redesigning their management   system and if you take like all of these  things then we're really talking about what   we call a co-intelligent enterprise of the future  right uh but it's it's it's not something that  

    39:08

    uh we can like define up front; uh it, like, works with co-intelligence, you know, all across uh its offering system, its value chain system, and its management activities, uh so that's the way we see it, right. I mean, I just want to add two other points to what

    39:27

    Venkat talked about. Uh for me, one of the important things in this new way of working, if you will, is the value creation in the flow of work, right? Every moment of interaction becomes

    39:43

    important. And so the role of the manager as a  creative experiencer in in maximizing that value   creation in every moment of engagement.

    So you  need to give that capability to that that employee  

    40:02

    uh in that thing. So as an organization, do you have that kind of a co-intelligence knowledge environment?

    So we we argue that you need to  think about it. You need to create that kind of   an environment.

    So uh supply chain manager you  you have now discovered that there is a flood  

    40:20

    in some some place and so now what do you do at  this moment? Is there a a knowledge environment   available for you to maybe do some simulations of  uh alternate uh availability?

    What kind of time   frames? what kind of you know and so maybe you  will now discover that look uh only uh 80% of the  

    40:42

    orders I can fulfill with these two but that's a  business decision that you can now take you can go   back to your you know your higherups and say this  is what I simulated and here's so I'm saying this   this ability for uh for providing that capability  to the manager uh not just as a as just a passive  

    41:02

    user but as a creative experiencer to participate  in that, engage with the system, share certain   inputs and then create certain, you know, simulate  some scenarios. That is one very important   uh requirement in this in this new world, right?  Are you creating that kind of environment?

    And if  

    41:18

    I could actually build on that, what you're saying  is we're moving to real-time value creation,   right? Uh so that's what you know, so so if you  actually take uh what he was describing, we have   now the ability he used the word simulation uh  to simulate like a digital twin of the entire  

    41:34

    enterprise. So one of the things we are seeing and we feature in the book is that digital twins will become very pervasive in this life expverse, right, because it's a representation of the interactions you have in the real world with the system environment. But this time it's different, because these digital twins are not... digital twins have existed before: you would simulate

    41:53

    some things and then you make some policy changes, strategy changes, you implement that and you see, you know, what happens. But these digital twins are actually linked to the real world, because uh increasingly everything is becoming software defined, right. So if you take a factory today, uh the intelligence exists in the physical factory via all of the sensors and, you know, controllers

    42:11

    and so on, and we give the example of Siemens, for instance, and so the Siemens digital twin uh is actually connected to that real world. It's not just a mere representation on which you do a simulation.

    So using tools like NVIDIA Omniverse and so on, uh you can actually simulate changes

    42:27

    in this let's say you know a factory that you're  building. So you have the AI factory, you actually   have a virtual representation of the factory built  on top and then you have the real factory.

    So there are three layers, but these are connected. So as you make uh changes you can see what effects it might have, and, going back to your risk point, you can actually manage risks much better.
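    (A schematic sketch of the three connected layers just described: a virtual twin you can change freely, and a software-defined real line that only changes after the simulated effect is reviewed and approved. Class and method names are illustrative, not the Siemens or NVIDIA Omniverse APIs.)

    import copy
    from dataclasses import dataclass, field

    @dataclass
    class LineConfig:
        station_speeds: dict = field(default_factory=dict)   # units per minute

    @dataclass
    class DigitalTwin:
        config: LineConfig

        def simulate_throughput(self):
            # Toy model: the slowest station bounds the whole line.
            return min(self.config.station_speeds.values(), default=0.0)

    class RealFactory:
        """Stands in for the software-defined controllers on the physical floor."""
        def __init__(self, config):
            self.config = config

        def apply(self, new_config):
            print("pushing config to line controllers:", new_config.station_speeds)
            self.config = new_config

    def propose_change(twin, station, speed):
        candidate = DigitalTwin(copy.deepcopy(twin.config))
        candidate.config.station_speeds[station] = speed
        return candidate

    if __name__ == "__main__":
        current = LineConfig({"press": 12.0, "paint": 9.0, "assembly": 10.0})
        twin, real = DigitalTwin(current), RealFactory(current)

        candidate = propose_change(twin, "paint", 11.0)
        print("simulated units/min:", twin.simulate_throughput(), "->",
              candidate.simulate_throughput())

        if candidate.simulate_throughput() > twin.simulate_throughput():
            real.apply(candidate.config)   # the human decision point: "hit a button"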

    Uh  

    42:46

    which we feature in the book, lots of examples like that. But the beauty is, once we have decided this is the decision we're going to take, looked at all the pros and cons and, you know, made informed decisions, you can hit a button and you actually see the uh real factory floor change accordingly

    43:04

    because it's software defined because it changes  the uh software in the real factory. That is huge   uh because one of the benefits we found is that  in designing the factory things become much   more collaborative because you can invite people  because it's less risk and so they said there's  

    43:20

    also this thing called, uh, they're building it for engineers and they call it immersive engineering, where now if I put on goggles, and now if I bring in the AR/VR aspect into that, my managerial life expverse in that moment, uh, I can engage with it and I can speak in my natural language, I can

    43:37

    speak German, you can speak French, and so actually for the first time you can really truly tap into the power of this interactive collaboration, where we are not just having, like, a Google Meet or a Zoom call. We're actually looking at the factory floor and someone can move things around

    43:55

    uh and we can see what effects it has and they may  have some opinions about it and we can really have   these very rich immersive conversations. uh  and going back to your point you made earlier   about uh intuition we can bring our collective  intuition right but I think the key here is we  

    44:12

    must have the customers part of this, which is what they were saying; in that case, on the factory floor, the customers are the people who are the employees and the supervisors, so they make sure they are also part of it and not just the manager-level people. So when you extend this to making it more inclusive, uh suddenly uh you are able to, you know, create value that actually people feel is

    44:32

    getting realized for them, so that the shop floor person has a better experience, right? The managers who manage those systems have a better experience.

    So  in some sense we are saying that we can enhance   the experiences of everyone involved. That's the  that's what the opportunity is before us while we  

    44:47

    risk manage it you know effectively especially  with digital twins. This this is interesting.   When we were talking earlier, I was describing,  you know, when I was in college and studying   industrial engineering, this kind of simulation  that you're talking about to develop an intuition  

    45:03

    for what happens when you make decisions in a factory was a mathematical simulation tool, right? And I remember using MATLAB or something.

    Exactly. And and these were built   using systems like that.

    And there are a few  universities that actually built like these   little what they called virtual factories but  basically they were mathematical simulations  

    45:20

    along they were not really they were not really  connected to a real world shop floor. I remember   in one of my early jobs in industry I used to work  for a railroad as an operations research analyst.   We tried simulating the operating plan uh to  try and understand where various risks lie right  

    45:37

    uh because the forecast has variance all of these  things but again like you said it's not it wasn't   connected to the real world. So I think in  in what you're describing there's a lot of   excitement in some ways because I could imagine  future generations of could be engineers could   be economists etc coming out whose learning even  though it's still in the academic space has a lot  

    45:57

    more of a deeper connection to reality. So yeah  absolutely this is absolutely fascinating.

    Um,   I'm also mindful of time, but I also think  that given that we've been talking about uh,   you know, co-intelligence and uh, you know, you've  written books on co-creation, this is a chance to  

    46:13

    co-create something. Sure.

    By uh, by throwing a  question back at me and Google while while I pull   up some of the questions that Googlers have  prepared. We were actually planning on doing   that.

    So, no. So, Sharath, you know, last time when we came and spoke to you, of course, we heard a bit about your work, when you were an architect of the Google Pay story

    46:34

    in India right now. So if you were to, having read the book and having had some discussions about uh the co-intelligence, if you were to sort of apply that to the Google Pay ecosystem in India, the fintech ecosystem in India, what might be some two, three new things that you would

    46:53

    like to do I mean at this point there's a lot of  stuff racing through my mind but I think the the   two things that I'll probably point out that that  have stood out from this talk for me that apply to  

    47:09

    this question are one this notion of understanding  life life interactions a lot better and the second   one is uh where you're talking about having the  customer involved in the co-creation as well that  

    47:25

    that L'Oreal example of the person almost being  able to uh to produce the specific kind of product   that they need if I try to find those parallels  in fintech What I would say is that we've up until   now generally been used to the idea of creating  products where we tell people look you're looking  

    47:42

    for a particular function or you're looking for  uh um a particular product and there's a process   that you go through to get it. So if you want to  make a payment, you go choose a method of payment,   you authenticate yourself, uh you say where  that payment is going, all of that stuff.

    Uh  

    48:02

    and there's a particular flow that you go through. That flow looks somewhat similar for all of us, right? Whether it is Google Pay or many other applications.

    Uh if you took something more   complex like a I want to get access to credit, we  put them through a longer flow. Say not only is  

    48:18

    it important enough to authenticate yourself,  but you know I need details like your name,   date of birth, some identifiers that let me  pull a credit report. If that isn't available,   give me alternate information that helps me  underwrite and so on.

    But imagine now that the   user comes in talking about what they want to do  and it is not just that we're able to tailor the  

    48:38

    process. We could actually even create the right  experience for them on the fly.

    So I could have   a user who from your example if it's the farmer  in let's say Maharashtra and prefers an interface   that works in Marathi is someone who has not  taken a loan before is not completely aware of  

    48:56

    various schemes they might be eligible for from the government. We do a more elaborate, simple interface for them in Marathi.

    But if it was one  of you who is a more tech-savvy individual who's   comfortable with financial products etc. we might  even speed up that process by introducing some  

    49:13

    of the jargon that brings efficiency, because you probably understand some of those things, right, and therefore there's no point in putting you through what would appear to be a very long uh experience.
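    (A sketch of the "create the right experience on the fly" idea: the same credit journey assembled differently depending on what the user has shared about themselves. The fields, rules, and step names are invented for illustration and are not how Google Pay builds its flows.)

    from dataclasses import dataclass

    @dataclass
    class UserContext:
        preferred_language: str        # e.g. "mr" for Marathi
        has_taken_loan_before: bool
        knows_financial_jargon: bool
        credit_report_available: bool

    def build_credit_flow(ctx):
        steps = ["authenticate_identity"]
        if not ctx.credit_report_available:
            steps.append("collect_alternate_underwriting_data")
        if not ctx.has_taken_loan_before:
            # First-time borrowers get an unhurried, explanatory walkthrough.
            steps += ["explain_loan_basics", "show_eligible_government_schemes"]
        steps.append("capture_loan_amount_and_tenure")
        steps.append("show_disclosures")   # a baseline, never personalised away
        steps.append("summary_with_terms" if ctx.knows_financial_jargon
                     else "plain_language_summary")
        return [f"{step}?lang={ctx.preferred_language}" for step in steps]

    if __name__ == "__main__":
        farmer = UserContext("mr", has_taken_loan_before=False,
                             knows_financial_jargon=False, credit_report_available=False)
        tech_savvy = UserContext("en", True, True, True)
        print(build_credit_flow(farmer))
        print(build_credit_flow(tech_savvy))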

    49:32

    I think between these two, the thing that I don't have a good answer for, and which goes back to this question of risk, is at the same time to ensure that there's a level playing field, uh that, you know, we're not discriminating against people as they get access to products, uh to ensure that, uh because it's a financial product, we're making the right disclosures so that people know what they're getting into. How do I maintain that baseline of making sure there is sufficient

    49:50

    transparency in the process that your privacy is  respected that you have a secure experience uh and   that it happens all of this happens in a timely  fashion and and that is a harder one to solve for   and that is where I think you know I'd go back to  this question of life experiences because you need  

    50:07

    to have that intuition put yourself in that user's  shoes and say why are they trying to but if I may   if you actually think of it as a life experience  ecosystem uh then you know when you asked the   question we said fintech ecosystem but that's just  slotting you in terms of uh fintech. So one of the  

    50:24

    things, as you were talking, that I was reminded of was my recent lived experience where, uh, you know, since I came in from the US and my mother lives here, uh she wanted to make my favorite dish for me, which involves some kind of spinach, and so there's a vegetable vendor uh whom she gets the spinach

    50:40

    from, and it's amazing how she knows, you know, what kind of spinach, and she brings exactly the right quantity, so she does demand forecasting very well, uh but she's got a limited uh physical space, right, in the cart. So I was... Uh, and by the way, payment is so easy here. I just, you know,

    50:57

scan the QR code and use GPay. I'm done.

    And for  her, that's great. I was talking to her.

    She   said she doesn't carry cash around. It's much more  secure.

And you get all the stuff. But then I was saying, hey, you know, it's like Aladdin and the magic lamp.

If you had a wish I could grant, what would that be? It's very interesting.

    Uh she wanted to have another  

    51:16

cart. She wants to grow her business, right?

I mean, she wouldn't use words like "growing the business". She said, "Oh, it'd be nice if I could do this, because there's only so much I can do.

I wish I could have another cart." So, going back to what you're saying, there are two things. One is, how can she express that intent?

    So if you think of it as just a financial  

    51:34

app, okay, it's doing the transaction. But what if you had something there, a chat saying, "Hey, how would you like to improve your life, your livelihood?" And the person says, "Well, I would like to have another cart." What she's implicitly saying, perhaps, for you might be, hey, I might

    51:53

need a loan to actually get that cart, but she may not ask for a loan. So that's one thing that struck me. The other one is, as you start thinking about it, what she's saying is, hey, I need another cart. Now, are you in the business of building carts? Not necessarily, but it means

    52:08

that from your ecosystem perspective you could create other kinds of services that can plug into it. Going back to the farmer example, ITC has built this platform, almost like a marketplace, where different people provide fertilizers, pesticides and other things in the way the farmer wants to

    52:26

utilize them in order to grow his income. So similarly here, it basically means going outside the narrow confines of the fintech ecosystem; it might involve other interconnected ecosystems. So I think it brings a lot of opportunity, but we need to really broaden our notion of what the offering is. Yeah, this is fascinating.

    52:46

I think lifting it from the question of a user who knows they want a loan to a user who may not be able to express it in that form, or who maybe hasn't thought yet about whether they want to do it through a loan or something else, but who starts from the actual need of saying, you know, I want to grow my business, and the way I think about it is, if I

    53:06

had one more cart that I could rent out to someone else, I've got a new source of income. It's fascinating to think about.
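A rough sketch of that "start from the need, not the product" idea; `ask_llm` is a placeholder for whatever model you would call, and the catalogue entries are invented for illustration:

```python
# Hypothetical sketch: map a free-text aspiration to candidate offerings.
# ask_llm() stands in for a real language-model call; the catalogue is made up.

CATALOGUE = {
    "working_capital_loan": "a small loan to buy equipment such as an extra cart",
    "equipment_rental": "renting a second cart or stall instead of buying one",
    "micro_insurance": "covering the cart and stock against damage or theft",
}

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for an actual language-model call")

def suggest_offerings(aspiration: str) -> str:
    """Turn an open-ended aspiration into a shortlist the person can react to."""
    options = "\n".join(f"- {name}: {desc}" for name, desc in CATALOGUE.items())
    prompt = (
        f"A small vendor said: '{aspiration}'.\n"
        "From the options below, pick the ones that could help and explain each "
        "in one plain-language sentence, without financial jargon:\n" + options
    )
    return ask_llm(prompt)

# suggest_offerings("I wish I could have another cart; there's only so much I can do")
```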

There's a lot of opportunity in small and medium enterprises, if you were to generalize from this example, which is totally untapped. Yeah.

And this also probably ties back to what you're saying about shared digital infrastructure, because when you

    53:25

get these more general kinds of needs that people express, I may not have all of the building blocks to be able to build the solution for them. But if there is a way for me to connect other pieces and build that, I think that's part of the magic of what's happening in India

    53:42

with a lot of the digital public infrastructure. But I think if we can build on top of that, then it becomes a lot easier to connect components from other ecosystems. Exactly.

Exactly. Now this is fascinating.

I do want to also bring up a question that someone from our team had

    53:59

submitted. They were looking at your book and said, your premise is fascinating. Do you regard co-creation as a way out of the current human-versus-robot dilemma that surrounds all the AI narratives we see today?

And what do you envision when you think

    54:15

about co-intelligence in the context of healthcare and life sciences? I'm sorry, I didn't capture the name of the person who submitted the question, but it came from other Googlers.

So I thought I should bring that up. I mean, if we could have an interaction with the person, we could clarify the question, but I don't know if I'm interpreting it right. But

    54:36

yeah, in terms of human and machine, yes. I mean, broadly speaking, I think yes, because we are thinking in either-or terms when we speak of machines replacing us humans. What if AI is not here to replace us humans? Our premise is,

    54:53

what if it's there to co-create with us? But that's up to us, in terms of how we view these AI systems. In some sense, a lot of the narrative is looking at AI as replacing humans, and yes, it will do several tasks better than us humans, but

    55:09

the point is, in human history we have always elevated ourselves in terms of how we build these systems. Like I said, we have built these AI systems, and granted, AI systems now have the potential to build themselves.

But keeping aside when and how that

    55:25

might happen, as humans we have a role to play, speaking of risk, in guardrailing it in ways in which we can steer it so that it actually serves humanity, and not the other way around, with us serving AI. So from that

    55:44

perspective, absolutely, co-creation is a way out, to go back to the question, because it's both in creating these systems and also in creating through them, understanding how they affect people and being able to understand those effects faster, like we said, using all of the inputs from people's lived

    56:01

experiences of engaging with AI. So you need all of these pieces, not just the infrastructure and platforms and the focus on the flow of engagements, but also, once people have had experiences, how do you bring those experiences back? Which goes beyond reinforcement learning, where we are trying to improve the system, to actually letting the system understand that we as humans have a

    56:23

lived experience of the world that is subjective, which the AI system doesn't, and I think that's the role we have to play, to teach it what our lived experiences are all about. And so, going back to what he said about the world model, representing

    56:38

that and getting it to understand the world is important. The last thing I would say is that sometimes we conflate two things.

The way a robot perceives the world is not how we humans perceive the world, right?

    So I think sometimes  we conflate the two. Uh for instance, you know,  

    56:56

when I came in for this talk at Google, if you were to ask me, hey, what is the color of the wall you just passed by, I wouldn't know. But if it was a robot walking in, it could go and look further.

It could tell you the color. It may even tell you where that material came from, etc.

But that's not part of our human intelligence, right?

    So in some sense  

    57:14

as we operate in the world, there's the System 1 and System 2 of Kahneman, one of which is automatic, and we're still trying to understand the human brain, how we cognize the world and engage with it. So I think all of that will inform us in terms of what is unique

    57:29

about our perception and how we engage with the world, and so in some sense they're complementary, because the AI can give you a lot of detail. And so, if we can think from that perspective, then together we can co-create new value. If I can just add one thing, and I want to intentionally

    57:46

take a more forceful view: if you accept that we have to move to this experience-centric world, then co-creation is not a choice. We could do co-creation,

    58:01

but we could also use other mechanisms. We are saying no: if I want to create a Sharath experience, I cannot produce it beforehand and keep it ready with me so that when Sharath wants it, I give it to him. It must be co-created.

It must be co-created. So I'm saying the answer is, it has to be

    58:21

that in the experience-centric world you have to co-create, and because it's life-centric, the human is at the helm of it, and the human co-creates with the AI to deliver that experience. And Krishnan, since you gave us some examples about different companies, whether it's L'Oreal or others, the

    58:39

second part of the question is where this person is also asking, how does this co-intelligence revolution specifically affect healthcare and life sciences? Are there any examples that we've studied? Yes, true; in fact, let me just offer a couple of different

    58:55

scenarios. There is one that we talk about, Open Evidence, which I think at least 30-40% of doctors in the US have started using. In a way, it's a more fine-tuned chatbot, but trained on medical journals and so on. So more authentic

    59:18

information. It's impossible for humans to keep pace with the amount of knowledge that exists, but using Open Evidence, doctors can take a more informed decision about treatment protocols and so on. That's one example. At the other end of the spectrum, in terms of medical research and

    59:37

so on, there's a fascinating example that we start off with, at IIT Madras, with the Sudha Gopalakrishnan Brain Centre. They have just released

    59:52

something called DHARANI, which is a database of the human fetal brain. Okay.

So this is the world's largest collection of human fetal brain data. That means they are actually mapping fetal brains, whole brains. They have released it now, but this is petabyte-scale information.

    00:14

So they are now pushing organizations like Nvidia to come in and ask, how can we now use AI in a much more effective way to deal with this kind of information? I have produced this data,

    00:30

but how will doctors now use it to make certain decisions? So that's the frontier, in terms of how it's forcing new kinds of AI capabilities to be built to process this. Now, it's definitely fascinating, and while this is not my area of expertise, I definitely see people

    00:49

at Google working on areas like Med-PaLM; there have been people who have tried to map the connectome, so a lot of what you're describing. But also in terms of healthcare, I think... So this is on the medical research side; I just wanted to also add that there's this whole area, and we give different examples, for example helping nurse practitioners engage better with

    01:08

patients who might have dementia, or people who have mental health issues. Those are very complex, but with these systems people can speak in natural language, so you can better understand what they're going through, what anxieties they have, what symptoms they experience in terms of pain or mental well-being, and

    01:27

so on, because those are all very soft issues, and they may express themselves through words, images and so on. So I think there the practitioners who are taking care of them can better connect with their lived experience. I think that's also where the power lies, so that you can then adjust the way in which a certain treatment

    01:48

plan works, because before, that was missing; it was very hard for them to share that in a way that actually affects the protocol, in terms of the delivery of the healthcare. I think we also have some questions from Googlers here, so I'll request someone to hand over a mic there.

    02:11

So, hi, I'm Jasmine, and my question is about managing risks in co-intelligence systems. What's one thing that you believe is often overlooked, like a subtle risk that organizations adopting AI tend to miss? So, just to check that I understood your

    02:30

question right, which is the risk that is often overlooked. Okay.

So that's a very interesting question, actually. If I were to think about all of the risks, I think one of the

    02:45

biggest risks that is overlooked is actually the risk of not following through on people's engagements with it, because a lot of the time we are dismissive when someone, let's say,

    03:03

fails to utilize something in a certain way. It's almost like we have to invert that and see it in a positive way. It's like seeing the glass as half full.

Right. And I think it's actually very important here, because when people engage with the system,

    03:20

for example, most people who use ChatGPT use it in a one-shot way. They hit it, they look at the answer and think, oh, it's not right.

But what they fail to see is that they need to further engage with it, right? Have that dialogue with it.

But maybe they don't know that. And of course now we give some prompts and so on. So actually just understanding how people

    03:40

can better engage, and if they don't engage, why they don't engage, or maybe it's not evident to them how to engage with AI. In other words, I'm flipping your question on its head and asking, what are the risks they are seeing, or what are they fearful

    03:55

of, because of which they don't engage? So that is what I'm calling a risk that is overlooked, because if you understand that, then you can make it easier for them to engage. It's a slightly more subtle notion of thinking about risk.

    Yeah good question.

    04:17

Hi, this is Shanmuki. The idea of this co-intelligence is very fascinating, and I felt it can answer the question of whether AI replaces humans. But in the present world, students especially are very much habituated to relying on AI, like using ChatGPT to answer everything.

    04:38

So what do you think, how should students approach getting foundational knowledge and critical thinking when AI can answer everything they want? Yeah.

So that's a great question. Being an educator, let me tackle that, because I teach a course on innovation. So

    So  

    04:56

I've now been experimenting with the use of ChatGPT. You posed the question in terms of the students.

I think first about the implications for us as educators, because, yes, from the students' perspective, we talked about the fact that you don't want to outsource your thinking, but that may happen. So in a classroom setting, if I'm the educator, the

    05:18

question is, how do I ensure that you're not just outsourcing your thinking and getting answers? I think that is also part of your question. So one of the experiments that I've done is basically saying, if you use ChatGPT, and I briefly talked about it, I really want to ensure that you're

    05:35

able to have that interactive conversation and build your skills in doing that. But that's not something that I can just teach by putting up a slide; it comes through practice. So in some sense I have to spend a lot of time, which is what I found, changing the nature of my assignments. In the old model I would have some assignments: I'd give you something

    05:53

and some questions. But then you can cut and paste that into ChatGPT and you'll get some answers, and then you may work on those, but is that really what I should now be doing in this world? Probably not. So therefore I have to change what the nature of the assignment is. If it is to improve your thinking, then it should be: I give you a situation, how well

    06:12

are you defining the problem? Then you may use GPT, maybe somebody else in your team uses it; well then, talk with each other: hey, what did you get as outputs? What did you prompt it with, or how did you approach it? So it's almost like there is a conversation you had.

    06:27

So you may share your chat with the other person, and the other person may share their chat back, which, by the way, is what I ask them to do. And then I ask them to share it and say, hey, is there something that you learned from how you interacted? So that they build those realizations: oh, I didn't know I could ask it that way; I didn't know that if I did it this way it would come

    06:45

back in this way. Because sometimes, when it comes back with something that you think is incorrect, you might just say, hey, you can do better, this is not correct, and here's why. So that interactive back and forth is very important. But it's very

    07:01

important to design the assignments in that way, so that you can enhance that capacity in people. If I can add, I do a lot of research in the space of AI and education at the Centre for Responsible AI at IIT Madras. So I'll answer this from two perspectives. One

    07:21

is what I recommend to students, and of course this will vary depending on the age of the student and so on. There's a certain sense of maturity with age that you hope they develop, so they understand the risk. But this is what I would say.

    I think you should  

    07:36

keep using it on a daily basis to get that habit, and as you use it more, you learn how to use it and engage with the system better. I do know that kids love to take the shortcut, right? There's this beautiful answer right in

    07:53

front of you. The question has already been asked.

There's a tendency to just copy and paste it, which they will do.

So it's the responsibility of the teacher to figure out what else to do, right? Not just the answer; maybe what question we should ask them. But that's not

    08:09

    on the student. The student wants to make their  life easy.

My summation to them is: play around with it, build the habit, and as you do, the other very important thing is not to trust it.

Despite a lot of effort by technology companies to get it right, I mean, the

    08:31

agentic systems and so on, I would still start with a sense of: don't trust everything the thing says; you have to go and verify for yourself, and build that habit. That's on the student. On the organization, that is, the school or the educational institution, I think they

    08:52

have a very clear responsibility to develop more thoughtful systems, which means you can't have systems where you ask a question and it just spits out answers; you can't design systems like that. You need to build guardrails, in terms of reducing the amount of hallucination

    09:15

it does, so how to optimize those systems is another part. So I think there's a lot of responsibility on educational institutions to design the systems more thoughtfully. Yeah.
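As one hedged illustration of what such a guardrail could look like, here is a sketch of a hint-first, source-grounded tutor wrapper; `call_model` is a placeholder for whichever model an institution uses, and the policy itself is an assumption, not a prescription from the book:

```python
# Hypothetical guardrail sketch for an educational assistant: hint-first and
# source-grounded, instead of spitting out final answers. call_model() stands
# in for whichever model endpoint the institution actually uses.

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for the institution's model endpoint")

def tutor_reply(question: str, student_attempt: str, course_material: str) -> str:
    """Give hints until the student has shown an attempt; always ground in the material."""
    if not student_attempt.strip():
        prompt = (
            "You are a tutor. Do NOT give the final answer. Using only the "
            "course material below, ask one guiding question and give one hint.\n\n"
            f"Course material:\n{course_material}\n\nStudent question: {question}"
        )
    else:
        prompt = (
            "You are a tutor. Check the student's attempt against the course "
            "material below, say what is right or wrong, and quote the passage "
            "you relied on so the student can verify it.\n\n"
            f"Course material:\n{course_material}\n\n"
            f"Question: {question}\nStudent attempt: {student_attempt}"
        )
    return call_model(prompt)
```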

And just to add, even in terms of, for example, what courses you should take. So I think sometimes

    09:33

we conflate things: the fact that ChatGPT is allowing us to talk in natural language is one part of it, but then there's the product itself, the content. So imagine, as a university, we provide courses; students are still clueless about what courses they should take, because we haven't designed that interface in a way in which it tries to understand

    09:52

what your career path might be and, based on that, says which courses might be more suitable for you, given what you have taken and what your goals are, and crafts a very unique, personalized learning path for you, and then, based on your experience of one course, what you

    10:07

should take next. So I think right now it's very course-centric: you have drop-down boxes of different types of courses, but right there you've lost me, because that's not how I want to engage with it. I want it, going back to everything we've been discussing, like the vegetable vendor, you

    10:22

know, here's kind of what my aspirations are, what I want to do, like in that finance example, and then using that intelligence to go through all of the offerings and say, hey, these are the courses that you might want to consider. That's very different, right?
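A toy sketch of that goal-driven recommendation, with made-up courses and a naive matching rule, just to contrast it with a drop-down catalogue:

```python
# Toy example: recommend a learning path from stated goals rather than from a
# course drop-down. The course data and the keyword matching are invented.

COURSES = {
    "Intro to Data Analysis":  {"skills": {"statistics", "python"}, "level": 1},
    "Machine Learning Basics": {"skills": {"python", "ml"}, "level": 2},
    "Product Analytics":       {"skills": {"statistics", "product"}, "level": 2},
    "Applied AI for Finance":  {"skills": {"ml", "finance"}, "level": 3},
}

def learning_path(goal_skills: set[str], completed: set[str]) -> list[str]:
    """Order the remaining courses by overlap with the learner's goals, easiest first."""
    candidates = [
        (name, info) for name, info in COURSES.items()
        if name not in completed and info["skills"] & goal_skills
    ]
    candidates.sort(key=lambda nc: (nc[1]["level"], -len(nc[1]["skills"] & goal_skills)))
    return [name for name, _ in candidates]

# A learner aiming at AI-in-finance work who already finished the intro course:
print(learning_path({"ml", "finance", "python"}, completed={"Intro to Data Analysis"}))
```

In a real system the matching would of course be learned from the student's stated goals and history rather than hard-coded keywords.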

That's the system helping you. And then, of course, within that, what you're saying is,

    10:39

how does the learning take place? But I think there's this whole other set of things in terms of what it means to have a set of offerings that connect with you as a student in terms of your career path.

So we jokingly in the book say, we have Coursera, but

    10:55

maybe we should move to "Learnera" or maybe "Growthera", where it's about the growth of the individual, and these AI systems are intelligent enough to take your inputs and craft that learning path for you. And I think there are lots of

    11:13

startups and various entities having this kind of frame of reference in terms of designing these systems for the future. So when they get designed, I think the way learning occurs inside them will also be different.

Thank you. This is fascinating, and I think we could go on for a long

    11:29

time on this topic, but I know we are at time. I did want to say that my experience of reading the book and talking to you, it definitely feels like, yeah, this is not just a point of evolution. I think you're on to something when you call this the fourth industrial revolution

    11:48

in the book, and you said there's a massive, massive change that we should be better prepared for, the co-intelligence revolution. Yeah.

    The  co-intelligence revolution actually. Yeah.

And I think the other thing that also stood out was when you answered the question about what is the subtle risk that people overlook. It almost sounded like you were saying, look, more than

    12:06

the risk, there's the opportunity cost. Yes.

The risk is that we overlook the fact that there's a huge opportunity cost to not being involved in this at this stage. I love the fact that, as you brought out some of the examples, it's clear that if we follow this path we actually end up creating

    12:23

a much more inclusive world, because it is not just that, as the product manager working with a set of engineers and designers and various other professionals, I create a thing that tries as hard as possible to meet your need but still doesn't do enough unless you can get involved in

    12:41

it and say, hey, this is what I want in the moment, and the entire product can adapt to it. And that reality is within our reach.

There may still be a few steps we have to take to get there, but it looks like that's the reality we're working towards. And that's very positive, because if you think of it, more inclusion is what we're shooting for as we're creating technology,

    13:00

    putting it in the hands of more people, making  it cheaper and so on. And underlying all of this,   it sounds like you're also making the case that  this is not just about, hey, go build a product   based on these principles or go change this  one thing.

    Could be education, could be health,  

    13:16

could be something else. You're saying there's an entire ecosystem to build here that can have all the components working with each other in an intelligent way.

And for me, the last thing that stands out is a little bit of that adage of, you know, the more things change, the more they remain the same. We've always told people that the

    13:38

higher-order skill that we want people to develop is critical thinking. We've always said that as long as you have empathy for another human,

a lot of the other skills of designing things will follow. Those are things that can be learned; they'll change.

    Uh but it sounds like  

    13:54

the importance of critical thinking and empathy for our fellow human in many ways are things that will still remain the same. And that's a big part of what will help people become part of this co-intelligence revolution in a meaningful way.

    Absolutely. Thank you.

    Thank  

    14:10

    you. Thank you, Venkat.

Thank you, Krishnan. This was absolutely fascinating, and a fantastic way that you summarized the key learnings. Fantastic. All the best.

    Thank you very much.