Langchain Tutorial For Beginners (2026 Guide) | AI Agents For Data Engineers

Category: AI Development

Tags: AI, Chains, LangChain, ReAct, Tools

Entities: DuckDuckGo Search, LangChain, LLM, Pydantic, Python, RAG, ReAct Agents, SQL Database Toolkit, SQLite, Wikipedia

Summary

    Introduction to LangChain
    • LangChain is increasingly popular in the data domain, especially for building AI orchestration pipelines.
    • The course aims to make you a proficient LangChain developer, covering basics to advanced topics.
    • No prerequisites other than basic Python knowledge are required.
    LangChain Fundamentals
    • LangChain allows the integration of multiple AI models and tools, acting as a unified wrapper.
    • It simplifies the use of different AI models without needing multiple SDKs.
    • LangChain helps automate complex workflows by creating chains of tasks.
    ReAct Agents
    • ReAct agents in LangChain perform tasks by reasoning and acting using tools.
    • They are designed to handle tasks that require external data or actions not directly accessible to the AI.
    • ReAct agents use tools to fetch data, evaluate it, and then generate responses.
    Chains and Workflows
    • Chains in LangChain are sequences of tasks, similar to data engineering pipelines.
    • Complex workflows can be broken down into smaller, manageable chains.
    • Parallel and conditional chains allow for more dynamic and flexible workflows.
    Structured Output and Messages
    • LangChain supports structured outputs using Pydantic for data validation and parsing.
    • Messages in LangChain include user, AI, and system messages to set context and tone.
    Takeaways
    • LangChain is essential for modern AI orchestration and pipeline automation.
    • Understanding LangChain's tool integration can greatly enhance AI capabilities.
    • ReAct agents extend AI functionality by using external tools for data retrieval and action.
    • Creating structured workflows with chains can improve efficiency and manageability.
    • Mastering LangChain concepts can open up advanced AI development opportunities.
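    To make the "chains of tasks" idea above concrete, here is a minimal plain-Python sketch of a chain: a prompt step, a model step, and a parser step wired in sequence. All names here are illustrative stand-ins invented for this example; real LangChain code would use its prompt templates, chat models, and Pydantic output parsers instead of these fake functions.

```python
from dataclasses import dataclass

def prompt_step(topic: str) -> str:
    # Format the user input into a prompt (LangChain: a prompt template).
    return f"Summarize the topic: {topic}"

def fake_model_step(prompt: str) -> str:
    # Stand-in for an LLM call; a real chain would invoke a model here.
    return prompt.replace("Summarize the topic:", "Summary of") + "."

@dataclass
class Summary:
    # Structured output (LangChain would typically use a Pydantic model).
    text: str

def parser_step(raw: str) -> Summary:
    # Parse the raw model text into a typed object.
    return Summary(text=raw.strip())

def run_chain(topic: str) -> Summary:
    # A chain is just steps wired in sequence, like a data pipeline.
    return parser_step(fake_model_step(prompt_step(topic)))

print(run_chain("LangChain").text)  # → Summary of LangChain.
```

    The point of the sketch is the shape, not the content: each stage consumes the previous stage's output, which is exactly how data engineers already think about pipeline stages.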

    Transcript

    00:00

    I am in the data domain. Why do I need to learn LangChain?

    So in 2026, every other organization is looking for data engineers who know LangChain, because they want you to build their AI orchestration pipelines as well. So I know now you will say, hey, I also want to learn LangChain, but I do not have any

    00:15

    kind of resources, any structure, any guide, nothing. Don't worry, that's why I have created this complete 5-hour-long, beginner-friendly LangChain full course, which will literally hold your hand and make you a pro LangChain developer by the end of this video. So first you will

    00:31

    learn the fundamentals of LangChain, and as you follow along you will learn much more complex areas such as parallel and conditional branches, Pydantic integration, tool calling, ReAct agents, and much more. Not just this: by the end of this video you will build your own SQL agent, which

    00:46

    will take user input in natural language, automatically fetch the necessary data from the SQL database, and return the insights back to the user. Wow.

    So, what are the prerequisites required for this video? Nothing, just basic Python is more than enough.

    The best part everything is explained

    01:02

    visually with cool graphics so that you can even understand the complex areas easily. Plus this video is designed in the form of chapters so that you can just follow along in a structured guide and all the notes and code examples are also provided with this video.

    In short, this is the one-stop solution for you to

    01:20

    master LangChain in 2026. So bro, just take out your laptop, make your coffee, hit the subscribe button right now, and let's master LangChain in 2026 and crack that dream offer, that dream job, this year.

    I want to help you. Let's go.

    So what's up? What's up?

    What's up

    01:35

    my fam? Happy Sunday, and this is the video that I wish I had when I was learning about these things.

    So without wasting any time, let me just tell you what is this video about. Obviously you have watched the thumbnail.

    Obviously you have watched the introduction. So this video is all

    01:51

    about LangChain. And if you are still thinking about whether to learn LangChain, do you know what, this is my personal analysis: if you are someone in the data domain, or someone trying to enter this domain, let me just tell you one

    02:08

    thing: LangChain is that thing which is becoming more and more popular and which is in demand. So let's say you are applying for a position, let's say a data engineering position, or any position in the data domain.

    Okay. Now there are obviously hundreds or even thousands of applicants applying for the same

    02:23

    position. But it helps if you add these kinds of frameworks, especially LangChain, because LangChain is one of the most mature agentic frameworks right now.

    First of all, I know that LangChain is written in job descriptions everywhere. But even if LangChain is not

    02:38

    written in the job description, if someone is reading your resume and you write that you have experience with LangChain, that will be a cherry on top of the cake. Makes sense?

    So that means LangChain is something necessary that you have to learn. And let

    02:54

    me just tell you one thing which is good news for you. LangChain is not just for data scientists or machine learning engineers.

    It is not exclusive to those domains. And I know why you didn't learn this technology: because you were thinking the same.

    You were still

    03:09

    thinking like okay this technology is all about machine learning. This technology is all about AI and all.

    No, this thing is for data engineers too. And let me just tell you one more thing.

    LangChain is very similar to, or you can say it will look very similar,

    03:25

    to the ETL pipelines that data engineers build. And yes, this video is for everyone.

    But if you are a data engineer, you should feel happy that you are learning LangChain, because it will be very easy for you to learn. It is very similar to the tasks that you do on a daily basis.

    03:41

    Simple. Okay.

    Makes sense. So now what is required for this video?

    What are we going to learn in this video? I know you have all of these questions, so that you can decide if you want to continue with this video, because I know you are a very important person and your time is very valuable.

    I know that. But

    03:57

    yes, let me just answer all of those questions. So first of all, in order to continue with this video, you should have basic Python understanding.

    Basic Python means you should know how to write functions in Python, especially some basic for loops and if conditions, and I would say some basic-level

    04:15

    understanding of OOP, such as classes and objects, and that's it. If you have this understanding, you are good to go. Now, what will we be covering in this particular masterclass, and what will the agenda look like?

    So let me just show you that as well. So agenda of this particular video

    04:32

    is very simple. I will assume you have zero knowledge.

    That means you are totally a beginner. Literally zero knowledge.

    Zero knowledge means zero knowledge in the AI world as well. Okay, forget about these things like technologies and all.

    Zero knowledge in

    04:48

    the AI world as well. So you are totally a beginner who just knows basics of Python and that's it.

    This video has dedicated chapters. This video has dedicated notebooks.

    This video has dedicated sessions that will teach you everything from scratch. And we're going

    05:05

    to literally cover from scratch. Let me just show you my notes that I have prepared for this particular video.

    And by the way, everything is available in my GitHub repo. So let me just show you that as well.

    So this is the GitHub repo for this video. Okay.

    And as you can see there are like major three chapters. But

    05:20

    in each chapter we have so many notebooks. If you just open that, you will see like 1 2 3 and then chapter two we have these things.

    And then chapter three, we have these things. And yes, we'll be just building so much of stuff here.

    And yes, as I just mentioned the introduction as well, you'll be building

    05:36

    an autonomous SQL agent on your own, from scratch. And regarding notes, if you just click here, I have put the live illustration notes that I used here.

    And everything will be illustrated and demonstrated live. But these are the

    05:52

    notes that you can just refer and you will literally see everything like everything everything is written in detail. So all the notes are provided here and we're going to just cover everything in detail.

    Make sense? So everything is there for you.

    You just need to actually sit back and relax and

    06:09

    just enjoy this video. And as I said, everything will be covered from scratch.

    So no prerequisites. And in the agenda chapters, as you can see, we are covering every single thing connected to LangChain. So by the end of this video you will become a

    06:24

    LangChain developer. That means you will know how to write code in LangChain.

    You will know how to build agents with LangChain. You will know how to connect the dots with LangChain.

    You will be a confident LangChain developer, and then you will be able to build your own projects if you

    06:40

    want to. And let's say your organization is asking you to work on LangChain; you'll easily be able to work on it.

    I'm so sure about that. Simple.

    I will also tell you about some resources that you can use along with this video, and everything

    06:55

    will be there in the video as well. Perfect.

    So I think now you have everything clear. Now you have the decision and I know your decision.

    So let me just quickly start this video. And I want to start today's session especially with

    07:10

    this positive vibe. So what is this? What is this?

    So basically, these are my lida family members who have recently cracked their job offers. It can be their dream job role, dream job offer, dream company, whatever, but they are literally happy,

    07:27

    and I am also happy because they are happy, and this literally makes me feel proud. Okay, congratulations to all of you.

    07:43

    So, I'm literally happy. So, this is

    07:59

    the favorite part like one of the favorite parts of my video when I just say congratulations because see why are you learning? Why are you learning?

    There's a goal, right? So, when you achieve it, you feel happy.

    So, they have achieved their goals and they are happy and when they are happy, I'm also

    08:14

    happy. So you should also feel happy because you will also achieve your goals.

    You don't need to worry at all, because you have landed on the right channel. And if you haven't subscribed to this channel yet, I don't know what you are thinking.

    Uh because maybe you do not want to come into this list. If you want to just hit

    08:30

    the subscribe button right now, because I'm literally putting out content which is made for your success. Makes sense?

    Okay. And let me just quickly show you my handles as well.

    So these are my handles. If you just want to follow me, if you just want to check my content, you can just go here and you can just

    08:46

    learn more and more stuff as well. Okay, so let's quickly get started with this video, because I'm literally excited for it, because I love LangChain.

    Let me just show you everything and let's start learning. So let's start this masterclass with our cool heading:

    09:03

    agentic AI and frameworks, basically because agentic AI is incomplete without frameworks. And don't worry, we're going to understand everything from scratch. And when I say scratch,

    09:19

    that means scratch. Okay, so: agentic AI and frameworks. Makes sense? So let's say we want to talk about this topic, agentic AI and frameworks. First of all, I specifically mentioned

    09:34

    agentic AI for data engineers, right? Why did I do that?

    Do you know who the best professional is among all of these professionals? I think the data engineer is the person who already knows the most about agents.

    You will say no.

    09:51

    I will say yes, because agentic AI is just a new name for everything that we are doing right now, and data engineers are already well-versed in that. And do you know the funny thing? Data engineers are the ones who are entering the

    10:07

    AI world after all of the other professionals. That is, I would say, a kind of pain point. Data engineers should be the first ones to enter this particular domain, and they are the last ones.

    I don't know why because they

    10:23

    think like data engineers do not need to learn AI and it is something else. It is something out of the topic.

    It is for data science. I don't know but that is why this channel is for you.

    This channel is just actually teaching you and providing you the knowledge which is required to be in that particular group

    10:41

    which is always in demand. Right?

    That is why, by the way, if you are not aware of this, if you go on any job portal and type AI data engineer, you will see those job postings as well, because the industry has now accepted the fact that AI engineering is incomplete without

    10:57

    data engineering. So they have started putting out those labels called AI data engineers.

    So I think it is high time, the best time... I wouldn't say the best time, it is high time for you to learn about agentic AI. So let's start this particular conversation.

    So I told you

    11:13

    that I do not expect any kind of knowledge from you. So let's say you are this person who does not have any kind of knowledge about anything.

    Okay. Let's say this is you and your name is what is your name?

    Let's say Rahul. Do you know

    11:29

    what, I just picked a random name, and there was a real Rahul. I think he dropped a comment:

    Hey I feel so happy whenever you take my name. So shout out to Rahul whoever the person is.

    Okay. So let's say he is Rahul.

    Okay. And this person doesn't even know anything about

    11:44

    agentic AI frameworks. Like why do we talk about these things?

    So first of all, do you even know the difference between AI agent and agentic AI? Do you even know the difference?

    No? Okay. So we will first of all discuss this thing, because once you get

    12:00

    this concept, only then should you learn anything else. Then you can learn the frameworks.

    You first need to understand the concept: what is an agent and what is an AI agent. A framework is not a very critical or complex thing.

    A framework is just like a kind of module.

    12:16

    It's just a kind of SDK. It's just a kind of package.

    We just call it as framework. Right?

    The main thing is agentic AI. What is agent?

    And in order to do that, we need to understand this comparison: agent versus AI agent. So let's start with the AI agent. What

    12:32

    exactly is an AI agent? Let's say this is Rahul.

    Let's say this is Rahul. Okay.

    And this person needs to know everything about agents. And do you know what? This person first needs to understand AI agents.

    Okay. So now when I just talk about AI agent,

    12:50

    what is actually an AI agent? AI agent.

    What is this thing? Like why this term is so popular right now and everyone is talking about this thing AI agent.

    An AI agent is actually nothing

    13:05

    but... let's not call it a framework, you will be confused. Think of it as, let's say, a real person. What is an agent? Forget about AI.

    13:22

    What is an agent? If you try to look up the word agent: an agent is someone who performs a specific task. You would have seen insurance agents, and then

    13:39

    customer care agents, and then flight agents. There are many such agents. So why do we use the word agent? Because this AI agent is also aligned towards a specific task. Okay,

    13:55

    a specific task, nothing fancy. So now, we have added the keyword AI. That means this agent is not a real

    14:11

    person. Oh, okay.

    This agent is not the real person. This is AI performing the task of an agent.

    Okay. And in this case, it will be let's say LLM.

    14:28

    LLM, right? Okay.

    This will be an LLM. LLM means large language model; the entire AI industry is built on top of LLMs right now, makes sense? Large language models.

    So when an LLM is

    14:47

    able to perform the task of a real human, that will be called an AI agent. But Ansh Lamba, can we say that all LLMs are AI agents?

    No, I didn't say that. Hear

    15:02

    me out. I said when the LLM is able... when this LLM... okay, let's say this is the LLM.

    Okay, and this can be any normal LLM. It can be ChatGPT.

    It can be anything. Let's say this is your LLM.

    15:17

    Okay, this is your LLM. Wow.

    Let's say this is your LLM. Now, this LLM can be anyone.

    OpenAI. Let's take OpenAI, because OpenAI is very popular.

    This is an OpenAI LLM. This is

    15:33

    simple LLM that just knows how to generate the text. Simple.

    Now, when we developers add some capabilities to this LLM in order to perform a

    15:50

    specific task, or sorry, in order to achieve a specific goal, this LLM becomes an AI agent.

    Okay, that is clear. But what are those specific task or specific capabilities

    16:07

    that we add? Very good question.

    In short, in simple language, we add something called tools. Okay, we add something called tools.

    And now you will say what kind of tools? I will say any tool.

    Any kind of tool.

    16:28

    So when we add these tools... okay, so who has provided them? This developer, Rahul. Rahul knows how to develop the tools. Okay, now, when these tools are integrated

    16:44

    with this LLM... okay, this thing and this. Perfect. Now, when we integrate these tools

    17:00

    with this LLM, let me just write it for you: integrated. Then it becomes an AI agent.

    Simple. This is your entire definition, entire flow.

    And now this AI agent can perform this task.

    17:16

    Right? Now you will say um which task?

    I would say any task. Let's say reporting data, email, anything.

    Let's say this is a task. Simple.

    Let's say this agent is a kind of customer assistant, or basically a customer

    17:32

    service agent, which receives emails and maybe filters out which emails are bad and which are right, and categorizes them. It's the kind of simple task that you would hire a real human for.

    Now you have AI agent for it. Now

    17:49

    this developer builds these particular tools. This is the main thing to understand.

    This is the main thing. This developer built these tools.

    18:05

    Okay. Good.

    Now, a very good question... you can ask so many questions, but let's say you have asked this question: Ansh, yeah,

    Why do we need to create tools and why

    18:21

    LLMs? Because LLMs are so popular.

    Everyone is talking about LLMs. LLMs are like so big like they have raised billions of dollars.

    Why do we need to create tools? Why do we need to integrate tools?

    So very good question. Let me just tell you.

    Let's say you are using

    18:38

    ChatGPT. Let me open ChatGPT in front of you.

    Okay. And I hope you know what ChatGPT is.

    If not, then... very good, slow claps. So let's say this is ChatGPT, right?

    ChatGPT is nothing but a kind of

    18:54

    wrapper. It's just a kind of front end that is built on top of an LLM.

    Simple, very good. Now, do you know what? When ChatGPT was launched, in its initial days, if I asked something like this...

    My name is Ansh.

    19:11

    Let's say I write something like this. So ChatGPT would be able to generate this text, and that's it.

    And when I asked ChatGPT, hey, what's the temperature here in Canada?

    19:28

    Right? If I hit enter, if I hit enter, I will not hit enter right now because I need to explain this thing.

    In the earlier days, in the initial days when I would be asking this question, Chad GPT would not be able to answer this question.

    19:45

    Okay. Why?

    Because it didn't have the capability to track recent data, because these large language models are trained on specific datasets, right? And they cannot retrain the model on new datasets on a daily basis.

    It

    20:00

    is a very expensive task. So they do it I think once a year or twice a year.

    So they do not have real-time access. But you know what, when I hit enter now, just wait and watch. Canada's huge... oh, it is very smart. Let's say the temperature in

    20:17

    Calgary. I'm not in Calgary, by the way, but yeah, let's say Calgary. So now, see, did you see? I know it's cold: -4° centigrade. Yeah. So now the thing is, how is it able to get the right

    20:34

    temperature? And it is actually -4°... -15°... I know this, okay, because here in my city it is 11°, so yeah, Alberta is one of the coldest. So now I know that the temperature is -4° centigrade. How is it

    20:49

    able to track that? Now you will say, because LLMs are becoming smart. Not at all. Not at all. Now, do you know what happened? The ChatGPT team built tools. Okay, the ChatGPT team built tools, and they

    21:05

    integrated those tools with the original LLM. It can be, let's say, GPT-5, GPT-4, any LLM model.

    They built these tools, and one tool can be, let's say, getting the weather information. And

    21:21

    did you notice, when we hit that particular question, it showed that thing called searching the web? That is a tool call. Okay, so we build these tools on our own.

    These are nothing but functions,

    21:36

    and then we integrate these things with the LLM, and then this becomes an AI agent. Obviously, we will be doing much more to make it an agent, to make it as autonomous as we can, but this is the backbone of an AI agent: an LLM

    21:54

    with tools. That's it. Because can you think of an agent which doesn't have any capability? Let's say we are hiring a real person, and that person needs to write emails, read emails, and do many other things. So that agent, that human, also

    22:09

    needs some access, needs an email address, all those things, right? So it needs those capabilities.

    The same things we will provide to this LLM, and then it will become an AI agent. This is the definition of an AI agent.
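    The "LLM plus tools" idea described here can be sketched in plain Python. Everything below is an illustrative stand-in, not a real model or weather API: `stub_llm` fakes the model's decision, and `get_weather` returns canned data, but the shape (model decides, agent calls the tool, result goes into the answer) is the backbone being described.

```python
def get_weather(city: str) -> str:
    # A tool is just a function the developer writes (canned data here).
    fake_data = {"Calgary": "-4 C"}
    return fake_data.get(city, "unknown")

# The tool registry the agent can reach into.
TOOLS = {"get_weather": get_weather}

def stub_llm(question: str) -> dict:
    # Stand-in for the model's reasoning: decide whether a tool is needed.
    if "temperature" in question:
        return {"tool": "get_weather", "arg": "Calgary"}
    return {"answer": "I can answer this from my training data."}

def agent(question: str) -> str:
    decision = stub_llm(question)
    if "tool" in decision:
        # The agent acts: call the tool, then use its result in the reply.
        result = TOOLS[decision["tool"]](decision["arg"])
        return f"The temperature in {decision['arg']} is {result}."
    return decision["answer"]

print(agent("What is the temperature in Calgary?"))
```

    In a real system the stub would be an actual LLM call and the tool would hit a live API, but the loop of decide, call, respond is the same.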

    Okay? Now what is the difference between AI agent and

    22:24

    agentic AI? Because this part is clear.

    So always remember this. It is a kind of formula, you can say, or just a way to remember it.

    So an AI agent is built to fulfill a specific task. Okay, is built to fulfill a

    22:42

    specific task. Let's say you are hiring an AI agent basically building an AI agent.

    You are hiring a person same thing. So you are getting an AI agent for this task.

    Let's say now this email task is done. Now you need to add an agent in your team who

    22:59

    will do the marketing. The first agent is reading the emails and categorizing them, and that's it. Now this particular agent needs to talk to another agent, a marketing agent, who will do the marketing stuff, who will run ads on

    23:15

    social platforms. Make sense?

    And then that marketing agent needs to talk to let's say business agent because it needs to communicate with the business team or finance team. Hey, how much money do we have to run ads?

    Hey, how much money do we need to spend this month? Maybe

    23:33

    right. So when we need to enable communication between the agents, that becomes agentic AI. That means orchestration of AI agents, when we have multiple AI agents. And do you know

    23:48

    what the fun part is? Agentic AI is always aligned towards fulfilling the overall goal. Now you'll say an AI agent is also inclined towards fulfilling a goal.

    The difference is that an AI agent is only fulfilling a sub-goal, basically the

    24:05

    task, but agentic AI is fulfilling the entire goal, or you can say the bigger project. So let's say you have a kind of social media management company and you have AI agents built for specific areas; that will be called

    24:22

    agentic AI. I hope it makes sense.
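    The agent-to-agent handoff just described (email agent passing work to a marketing agent) can be sketched as two cooperating functions. This is a hedged illustration, not a real framework: the "agents" are plain Python functions passing messages, and the hard part, the LLM reasoning inside each one, is deliberately omitted.

```python
def email_agent(inbox: list) -> dict:
    # Sub-goal: categorize emails and flag leads for marketing.
    leads = [m for m in inbox if "pricing" in m]
    return {"leads": leads}

def marketing_agent(handoff: dict) -> str:
    # Sub-goal: act on what the email agent found.
    n = len(handoff["leads"])
    return f"Prepared {n} follow-up campaign(s)."

def orchestrate(inbox: list) -> str:
    # The agent-to-agent communication is the "agentic" part:
    # each agent fulfills a sub-goal; the orchestration fulfills the goal.
    return marketing_agent(email_agent(inbox))

print(orchestrate(["pricing question", "spam", "pricing request"]))
```

    Each function alone is like a single AI agent with one task; only the orchestration across them delivers the bigger goal, which is the distinction the transcript is drawing.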

    Now let's relate this analogy to our data engineering. Let's say you are a developer, and you have built some tools to run a Spark notebook, or let's say a Spark job, in

    24:39

    PySpark. If you do not know PySpark: PySpark is the framework that we use to process big data. So you have built this AI agent to run the task, and the task is running the Spark job. Perfect. This is

    24:54

    the task. Now, this is agent one, as we just mentioned. Okay, this is agent one, and let me remove Rahul.

    This is very obvious

    25:11

    now because they have the tools. So this is only and only agent one makes sense.

    Perfect. Agent one.

    Now do you know what? Before

    25:27

    running this job, you obviously need to configure the spark cluster size like what cluster do we need. So actually this agent needs to communicate with another agent, right?

    Let's say this agent... or you can say it can be two-

    25:43

    way, to-and-fro communication as well. So it's not a big deal.

    So let's say this is another agent, and what this agent does is configure the Spark cluster size: what the cluster size should be on the basis of the data, what the

    25:58

    number of cores should be, what memory we should allocate to our jobs. It will give you the exact or ideal cluster size, and then you can just run your job; before that, you do not need to run the job. So we'll edit the task, and we can say we are simply... okay, talking about

    26:17

    clusters basically these servers. Make sense?

    Make sense? Okay.

    So now the task is to give us the right server size. Basically the right cluster size and then we need to talk to each other.

    So now this

    26:33

    framework will be called as agentic framework. Why?

    Because they are communicating with each other and there's an orchestration. This is a very simple uh you can say a kind of orchestration where where we just have like two people talking to each other.

    In the real world, we build the complex

    26:48

    workflows where we have let's say 10 to 20 agents talking to each other. And the main thing is it is not linear.

    What do I mean? It can be branched out to many things, right?

    Let me just show you. So let's say this is agent one.

    Okay, this is agent one. And this agent one

    27:06

    can be connected to this one. Okay, let me just make it small.

    So let's say like this because I want to just show you the complex one. So that is why I'm just zooming it out.

    So do not need to worry. Okay, so let's say this is agent one.

    This is connected to this one

    27:23

    and let's use this one. Perfect.

    So this is agent two and then let's say this will be connected to this agent. Can be it's not a big deal, right?

    Let's say like this. And then we

    27:39

    have let's say two agents. Let's say like this.

    Let's say like this. Wow.

    Can we have these complex workflows? Obviously, we have it.

    So, let's say this is the thing, right?

    27:56

    And then we have let's say the final node or basically the agent like this. So, this is a complex agentic workflow.

    Let's say we have a conditional here that we can use.

    28:12

    Perfect. So let's say this is the complex workflow.

    This is agentic AI. Now I hope that you have a clear understanding of agentic AI: what is agentic AI and what is an AI

    28:27

    agent? I hope so.

    This is also agentic AI, by the way, but this is a very simple one. Obviously this was required to make you understand the concepts, right? You can label the agentic AI here,

    28:43

    like here. And don't worry, I'll provide all of these notes in my GitHub repo so that you can enjoy them. This agentic AI I can simply label simple, and then we have the

    28:58

    complex one here; let's write complex. Makes sense? Perfect. So now, if you have this clear, you can learn any framework. Because see, now let's go back

    29:15

    to our main agenda: agentic AI and frameworks. So now we know the difference between agentic AI and AI agents, obviously, and now you also know why agentic AI is becoming more and more popular: because we do not create just one AI agent and that's it; we need the entire flow. Now just tell me one thing. Come here.

    If I

    29:32

    just show you this agentic AI and not the simple one, complex one. Have you seen this kind of thing in your day-to-day activities?

    And I'm talking to you data engineer. Have you seen this kind of thing?

    I'm not asking you to think about the LLMs. No, no, no. I'm just talking

    29:48

    about these things, these boxes. I'm not talking about the stuff that is built inside them, just the nodes.

    about like I'm just talking about these things, these boxes. I'm not talking about the stuff that is built inside this just the nodes.

    Have you seen this? You will say yeah we have seen this in our pipelines.

    Exactly. That's what I'm trying to tell you that we are actually building the

    30:05

    pipelines and we are calling call calling it as agentic. Now yes I'm not saying that it is not agentic.

    It is agentic. Why?

    Because we have AI workflows basically AI activities within each node. But on the bigger picture side this is just a

    30:22

    pipeline. That's it.

    This is just a DAG directed asyclic graph that we build in our day-to-day activities. Make sense?

    So I hope now you are very much clear and now you should know why data engineers need to lead this industry and they are not leading the

    30:39

    industry right now because they are too late and they are not actually learning AI because this is the right time high time to learn it and don't worry I will just make it understand everything. Okay, perfect.

    So this is the whole concept and now let's talk about the frameworks

    30:55

    quickly. A framework is not a very complex thing.

    A framework is nothing but a word that we use to describe, you can say, a module. Let's say I'm building a Python package, right?

    I have so many classes, so many functions, so many variables, and so on. When I

    31:11

    package them together, when they are ready to be reused by anyone, it will be called a framework. Simple, if you just want to understand it.

    Okay. And if you just want to learn the definition of framework, go to Wikipedia and learn it.

    This is the understanding of framework, right? So

    31:27

    the thing is, now, in order to build all of these things, obviously we can use our traditional Python language or any programming language, it's not a big deal. But in order to integrate with AI models, those LLM tool calling

    31:43

    capabilities, all those things, we need to write our own package, right? Every time. Let's say I'm trying to build this particular logic: I know that I will be reusing my code again and again, so let's build a package. So I will build my package,

    31:58

    you will say the same thing and build your own package, then a third person will build his own package. So now we have some solutions in the industry. Very popular ones are LangChain, and one or two more. We have

    32:13

    LlamaIndex, and I think there are one or two more; there are so many. But these are the popular and trusted ones that you should use. So, LangChain is the most popular one right now that everyone is talking about, because it is open source, plus it is the most mature, I would say,

    32:29

    framework, plus it is well integrated with all of the tools and technologies. Wow, this is amazing.

    So now let's talk about LangChain. Now what is LangChain?

    LangChain under the hood, I would say.

    32:46

    LangChain under the hood. Okay, LangChain.

    I know the name is funny, but it is really, really cool. I still remember when I was learning LangChain, I would say years back, it was really cool and really new, and after that we have got so,

    33:02

    so, so many new things. But yeah, let's talk about LangChain, because LangChain was the early mover of the agentic AI and AI agents industry.

    Yeah, stay hydrated even if it is cold.

    33:22

    So, LangChain under the hood. Now what is LangChain?

    Now let's talk about LangChain. So let's say you have something within your code, right? You want to build this particular thing. If you are a data engineer, you

    33:37

    would have built something similar to this using your Python functions, right? It's not a big deal, because you are a data engineer, a data professional, you already know Python. Let's say you built this. Okay, where is Rahul? Let's bring Rahul. Let's say this is Rahul.

    33:56

    I'm making Rahul famous, right? So now we need to build a framework. Okay, basically I need to build the agentic AI solution,

    34:11

    right? And you know what we need to build and let me just bring this thing as well like it will make more sense.

    Let's say I want to build this, right? Let's say we want to build this.

    34:26

    Our goal is this one. Build this, Rahul. So now this Rahul can build this entire thing.

    It's not a big deal for Rahul, because Rahul is a pro coder. Even if he's not a pro coder, he can build this

    34:43

    using something called, let's say, Python, right? Because Python is an amazing language. I love Python, and you should also love Python if you're a data engineer. So let's say you are building this thing, right? Simple, perfect. So this Rahul can build this

    34:59

    thing using Python. Okay, makes sense. But now what will happen? All of these code cells, all of these LLM tool calls, the integration with the tools, and then

    35:16

    just making it an AI agent and then orchestrating it, it will take a lot of time, right? Everything is manual.

    That means everything is manual. That means Rahul

    35:34

    needs to build all the classes. Now you will say, bro, why does Rahul need to build the classes?

    He will simply write the code. Because Rahul is working in an organization, right? And in that organization there are lots

    35:49

    of developers who are working. So in order for them to reuse that code, we need to build the classes, right?

    And I hope that you have some basic understanding of Python, like Python classes and objects. If not, bro, just go and learn Python as soon as possible.

    And if you want to learn from me, if you want to learn

    36:05

    Python from me, I have a dedicated Python course on my channel. You can simply search on YouTube for the An Lamba Python full course and you will get it. Take your Python skills through the roof, because it is really important.

    So Rahul needs to build all

    36:22

    the classes from scratch. Or, I have a solution: Rahul can use a framework.

    36:38

    Rahul can use the agentic framework called LangChain. Makes sense? Now, in that particular framework, all the

    Now in that particular framework all the

    36:53

    classes are built. Obviously, it won't cover the very specific use cases of your organization.

    Obviously you need to make so many changes, so many tweaks, but it is very efficient. Why?

    Because all the test cases, all the error handling, all of

    37:10

    that boilerplate code is already handled for you. It will save us a lot of time.

    So Rahul will take this approach. Okay.

    Rahul will take this approach because Rahul is smart. So now, what is LangChain

    37:26

    doing? LangChain has built the Python classes for your agentic solutions that

    37:41

    you can use anytime, or, you can say, use and modify as per your use cases. Simple.

    This is LangChain under the hood. Nothing special. Nothing

    Nothing

    37:58

    special. It is not magic.

    It is really no magic at all. You can write the exact code that LangChain has built.

    All the classes, you can build on your own as well. But the good thing is LangChain has built all those classes for you,

    so that you do not need to build everything from scratch again.

    38:15

    And I hope it makes sense. It makes sense to me.

    Yes. Okay.

    So this is LangChain under the hood. That means we are simply learning LangChain from the Python point of view.

    There is TypeScript/JavaScript support as well,

    38:31

    but we are simply learning from the Python point of view. Makes sense?

    So let's help Rahul build all of these things, and we know that Rahul doesn't know anything about LangChain, because Rahul is new to it. So we have cool documentation for LangChain, and let me just bring up that LangChain documentation.

    38:47

    This is the official documentation for LangChain that they keep updating, and see: Python and TypeScript. So we have both capabilities.

    I will simply search for LangChain Python, and this is its documentation. See, we have everything here.

    We have installation,

    39:02

    quick start, changelog, and so on. And then we have agents.

    It is very long, very detailed documentation, and I love it, to be honest. You can also go and check it out. But yes, initially Rahul wants a quick overview, a quick tutorial, so that

    39:19

    Rahul can start building using LangChain, and on the side Rahul can also explore the documentation. But Rahul wants some guidance, right? Some structured approach. Because obviously, when I was learning LangChain, instead of directly going to the documentation, I was also studying through so many sources, like books,

    39:37

    playlists, courses, blogs, and then the documentation, everything. I won't say there was no structured learning or video available; I just wanted to learn on my own how things are working. And then I realized, let me just

    39:54

    pass on this knowledge from all the sources. It is a condensed, curated form of knowledge from so many sources: blogs, courses, videos, documentation.

    So this is like a one-stop solution for you to

    40:09

    master LangChain, and literally master it. Let me be very honest: you will be able to build anything using LangChain after this.

    Okay. So let me start taking you to the practical lab session, because enough concepts have been covered, and we will be learning so many things in the meantime

    40:26

    as we go along with this video. Okay.

    And I will provide these notes as well. Everything will be available,

    so you do not need to worry about writing everything down. If you want to write it down, that's good, but I will provide it as well.

    Okay. So let's see how we can get started with LangChain.

    Okay, let's see. So now let's talk about the prerequisites,

    40:43

    not skill-wise, because it is very basic and very beginner friendly; you just need basic Python knowledge and that's it.

    I'm talking about the system requirements. So, first of all, you need a code editor, basically an IDE.

    It can be

    40:58

    VS Code. It can be Antigravity, if you're using Antigravity, which was launched by Google.

    There are so many code editors. You can use whatever you want.

    Okay. And we would need Python as well.

    So let's

    41:14

    say, not exactly system requirements, rather the software we'll be using. Simple. So we will be using some cool software.

    41:30

    Cool software. First of all you need VS Code, or any editor you like. I love VS Code, and I love so many new AI code editors as

    41:47

    well, because I keep playing with so many code editors, so I just keep switching. Okay, you can also try a lot of code editors. And then we will be needing Python. So if you do not have Python installed on your system, I would assume that you should install it. Okay, and you would need one more thing:

    42:04

    Git. Now why Git? Because let's say you're building something and you need to push that code to your repo; you would need Git, right? And if you already have these things on your laptop, first of all, slow claps, very well done, pat yourself on the back, what a good developer you are. So

    42:21

    you can just download all of these things. First of all, go to any browser, search VS Code download, and this is the Visual Studio Code download page; you should download it from there. Okay, this one, this one, or this one.

    Then once you have this, you can download Python

    42:38

    on python.org, or you can simply search Python download to land directly on that page. Perfect.

    So here you can simply download the Python install manager. It will download the latest one, which is 3.14.

    And here you can see all the active

    42:54

    Python releases. By the way, this is cool.

    Like, I was not expecting this. So currently most people are using 3.12, and you can see that it has a lifespan with security support until around 2028.

    So that is fine. I am also using 3.13, but currently

    43:12

    it is under bug fixes. So there are things that are breaking or that can break.

    So it is not a very good time to try it. And 3.14 is totally new.

    So I will simply recommend you download 3.12. And in order to download it, you can just go

    43:27

    here, and you will see all of these packages. So you will see it here.

    No installers. You can just go specifically on... wait, really? Should it not be this one?

    Should you not this one?

    43:45

    Yeah, you can just search it here. So you can scroll here, and it should be 3.12-something.

    Maybe 3.12.12. Hmm, it also doesn't have any installers.

    3.12.3, does it have any installers? You can just check if

    Like you can just check if

    44:01

    you have... yeah, it has installers. So, Windows 64-bit or Mac, whatever.

    So you should download Python 3.12, because it is the most stable version available right now. 3.13 is also good, but 3.12 is better.

    Then you need to download Git. Simply search git install

    44:17

    and click here. Here you can choose your platform and download it, for Windows, Mac, Linux, whatever.

    Download these three things and let's get started. So for this particular video I

    44:34

    will be using Antigravity. Antigravity is a new code editor.

    If you don't know about it, it is by Google, and they have amazing free features available. So if I search here for Antigravity IDE,

    44:50

    see, this is the one. You can download it on your system and you will love it, okay? Because this is an agentic code editor that you can use within your codebase, and it is amazing. Okay, so let's get started with our setup on the system.

    45:07

    Okay, let's see. So welcome to my Antigravity code editor. As you can see, this is the interface: very straightforward, very simple, very similar to VS Code. And obviously everyone loves VS Code, so it has the same UI; maybe it is built on top of VS Code, because VS

    45:24

    Code is open source. So here you will see the agentic area, by the way. You can collapse it by clicking here, and we'll be using it throughout the tutorial, don't worry. And this is the folder that I have created, langchain-tutorial. You can also create this folder, and as you can see, this

    45:40

    tutorial folder is empty.

    Okay. So, as you can see, you can also create it.

    Make sense? And how to do that?

    I think you you should know how to create a folder, right? And then you can just

    45:55

    open it. Go to file, click on open folder, and then open that empty folder.

    Simple. It will just pop up like this.

    Do not worry about these things because this can be different in your case because I have downloaded some extensions. Let's say DBT and there are so many extensions that I downloaded.

    46:10

    So, just ignore those. Anyway, this is Antigravity, and it is not very complex to explain.

    So this is the code editor that we are going to use. Perfect.

    So without

    46:27

    wasting any time, let me initiate my particular project. So I will open a terminal.

    In order to open a terminal in Antigravity, you can click on Terminal, then New Terminal, or I can use

    46:42

    the shortcut Ctrl+Shift+backtick. Okay, the choice is yours.

    So this is my terminal. So what is the first thing we do whenever we initiate anything, especially for Python?

    The answer is creating the virtual environment. Common sense. So now let's create the virtual

    So now let's create the virtual

    47:00

    environment. And you know what?

    We will not be using pip. And you'll say, we have to use pip. No, we do not have to. We have something called uv.

    No, we do not have to. We have something called as UV.

    If you do not know about UV, I think you should know about UV because UV is the modern Python

    47:16

    package manager and we can just do everything with UV that we used to do with pip. How you can also get started with it in just one line.

    You simply need to say pip install uv and hit enter. In my case, I have already downloaded it.

    So it will simply say

    47:31

    requirement already satisfied. In your case, it will say it is downloaded.

    Okay. Once you have uv installed on your system, you can start using it, and it is so convenient.

    I will just show you: in order to initiate a project and create the virtual environment, you do not need

    47:47

    to write multiple commands. You will simply say uv init and hit enter.

    Perfect. It has created the entire project for me:

    .gitignore, .python-version, main.py, pyproject.toml, README.md.

    48:04

    Makes sense, very good. Now, about this .python-version file: if on your system you already have, let's say, 3.13 or 3.11 installed, and you want to use 3.12 just for this tutorial, you can simply write it here, as I have written

    48:21

    3.12. Well, I have not written it, uv has written it for me, but you can change it here, okay? And what to do after that? Hold on, I will tell you. Just change it here for now; just write 3.12. Simple, good. Now, in order to initialize the virtual environment, we do not need to

    48:38

    write any special command. By the way, we have a command in uv, uv venv .venv, but I can also say uv sync. This is a very special command, a combination of commands; it will simply sync everything. If I have

    48:54

    written 3.12, or even, let's say, 3.11 or whatever version, it will take that Python version and create the virtual environment for me. Two things. And the third thing is, whatever dependencies I have, it will install all of those dependencies.

    But An Lama, do

    49:11

    you know what? You do not even have a requirements.txt.

    How will you install the dependencies? When we work with uv, we do not need to worry about

    requirements.txt, because we work with pyproject.toml.

    49:28

    So if you open this file, you will see the dependencies list. Whatever package you add here will automatically be listed in those dependencies.
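    For reference, a freshly generated pyproject.toml looks roughly like this — a sketch only, since the exact fields depend on your uv version, and the langchain entry appears only after you add it later:

    ```toml
    [project]
    name = "langchain-tutorial"
    version = "0.1.0"
    requires-python = ">=3.12"
    dependencies = [
        "langchain",
    ]
    ```

    uv reads this file on every uv sync, so editing the dependencies list and re-syncing is all it takes.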

    Okay. So first of all, let me just write uv sync and hit enter. Perfect. So now you can

    Perfect. So now you can

    49:44

    see that it has created the .venv. And by the way, if you are not seeing this .git folder, that is fine, because it is hidden by default.

    I have enabled it because I just like seeing it here. But you should not touch this .git folder, because it is a database

    50:01

    that Git uses under the hood to manage our repo. Okay, you should not touch it, open it, or make changes, otherwise your codebase will be broken, right?

    So yeah, I have just shown it here, so you do not need to worry if you do not see .git; it makes no difference to your Git.

    Okay. And if you just

    50:18

    observe in the bottom side, I have the master uh branch as well. That means it has also run the command called get init automatically in the back end.

    When I write uv in it, it runs get init as well. So let me just define my branch

    50:34

    quickly. Let me just rename my branch: I will say git branch -m main.

    If your Git fundamentals aren't clear, do not worry. Just follow this command.

    It renames my branch. So now, as you can see, my branch name is main. Let's create the

    Let's create the

    50:50

    initial commit. I will say git add . , okay, and then git commit -m "initial commit".

    Basically this is the initial commit that we are creating for

    51:05

    this branch so that it will be registered. Hit enter and that's it.

    Now we will not be touching this main branch. Don't worry.

    Makes sense. If you do not know about Git, don't worry.

    Just copy these two commands and that's it. Do not behave like oh oh what is this command?

    So now now we first of all need

    51:22

    to install the LangChain library in Python. Okay.

    And how can we do that? Simply say uv add.

    The same way you say pip install, here we say uv add: uv add langchain.

    Okay. Just hit enter.

    51:38

    It will add all of these libraries and dependencies automatically. If I open my pyproject.toml file, I will see all of these things written here.

    See, langchain 1.2.0. Wow, makes sense?

    Good. So, I have installed the langchain library. Okay. And

    Okay. And

    51:57

    I can also verify it. If I just go to do, if I go to live, I should see lang.

    See, I have all of the things installed in my virtual environment. Makes sense.

    Now, let me just enable the virtual environment.

    52:12

    dot when scripts activate VS code automatically uh en enables your virtual environment that is a plus sign but here we have to do it manually. So our virtual environment is enabled.

    That is a good thing. Okay.

    So yeah, makes sense. So

    52:29

    now what we going to do? We will be creating chapters.

    Okay. Let me create our first chapter and let me just name it as chapter one.

    Um yeah, simply chapter one. So in this chapter one, we will be creating our notebooks like right like notebook one, notebook two,

    52:44

    notebook 3 like this. So in this folder, let me just create my first notebook and let me just name it as basics dot ipy nv.

    If you're not familiar with ipynv, this is basically the IPI kernel. Okay, IPI kernel notebook format which is very

    53:02

    similar to Jupyter notebook. Simple.

    If you do not know about Jupyter notebook, I don't know what you are doing. Okay, you should know about Jupyter notebook, bro.

    You are a data professional. So this has created this particular IPY NB.

    In your case, it will also ask you to download or install IPI kernel. Simply

    53:17

    say okay and it will install it. And first of all, you need to click on select kernel and select the kernel Python environments and pick your virtual environment.

    Perfect. Because only then you'll be able to use the lang chain libraries.

    Perfect. Now first

    53:33

    thing first thing that you need to do okay is that particular first thing. Now you have all of these things ready.

    Now do you know what? What you need to do?

    You need to go to OpenAI. Do we have OpenAI

    53:49

    thing? Let's use this one.

    So now you need to go to OpenAI and OpenAI. Why?

    Because now you need to create a kind of API key because you'll

    54:05

    be using the hosted models, high quality models. We will not be using open source models because there are so many things that only hosted models can do and that you need to learn as well and that is why we will be just covering everything with openi models right so how you can just go to openai simply go to browser

    54:22

    and search openai API simple so this is API this is not chat GPD by the way this is openai official API platform okay click here and here you will be landing on this page click on login and you can click here and do

    54:40

    not select chat GPT select API platform make sense select API platform because once you land on this API platform then you also need to create the API key make sense API key okay let me just show you my account

    54:56

    so this is my API platform account okay so in your case you will land on the homepage okay and then you simply need to click on settings and then you need to click on API keys then you will land on this page and here you can create your new secret key okay click on plus

    55:14

    create new secret key and I can also create it right now and I will just show you okay not in front of you you will use my secret key obviously I will just revoke it I'm just kidding so yeah you can just simply say create new secret key so this will create a secret key like this sky- something something

    55:29

    something something you need to note it down somewhere else because it will be hidden after that okay once you have the secret key ready once you have thing you need to go to billing okay and you need to add some amount in your account it can be very minimal it can be five bucks

    55:44

    and yeah literally five bucks you can just add five bucks in your account what it will do it will just use that amount to make the API calls because whatever um you can say API call you are making it will use some amount right do not worry it will not charge you directly from the billing that you are using it

    56:01

    will simply deduct the amount from the amount that you have added in your account It is a kind of you can say recharge that you do in your mobile phone or you used to do before because nowadays we have just bells but before remember we used to do the 10 rupees 20 rupees 50 rupees recharge coupon we used

    56:17

    to scratch the number we need to dial it and then it gets added into your system into your mobile system right so it is s similar to that and you will say hey um is it enough five bucks are enough I would say more than enough because if you will be using um that particular

    56:33

    five bucks for making API calls to LLMs. I would say it will hardly charge 0.00 something.

    I will just show you the you can say model cost, model pricing, it is very very minimal. So you can just feel very very good that if you add five bucks, it will be like more than enough.

    56:49

    And not just like for this video, you can even practice so much of things with that five bucks. And I'm not kidding.

    I am not kidding. Like five bucks is like more than enough for you.

    Let me be honest. Okay.

    And obviously it depends like which model are you using but for most of the models it is like more than enough. Let me just show you the pricing

    57:05

    quickly. If I go to OpenAI models pricing.

    So API pricing and here we have GPD 5.2 which is the most advanced version right now. So it is having 1.7 bucks per 1

    57:21

    million tokens and 1 million tokens is a big deal bro and this is the most curated model. And then we have GPT 5 mini which is the latest one but a mini model.

    It is very cheap. Ju just 25 cents per 1 million tokens.

    And can you imagine per 1 million tokens just 0.25

    57:39

    bucks. So five bucks is like more than enough.

    More than enough. And we can use let's say GPT5 mini.

    We'll be using this one GP5 mini. Okay.

    The cheap model but it has like all the features available that we want to use. So you need to do this homework.

    Okay. Once you have that

    57:55

    homework ready, then we are good to go. Okay.
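    As a quick back-of-the-envelope check of that claim, here is the arithmetic, using only the $0.25 per 1M tokens figure quoted above (this counts input-token pricing only; real bills also include output tokens, which are priced differently):

    ```python
    # Rough estimate: how many tokens a $5 top-up buys at the quoted
    # GPT-5 mini rate of $0.25 per 1,000,000 tokens (input side only).
    price_per_million_tokens = 0.25  # dollars, figure quoted in the video
    budget = 5.00                    # dollars added to the account

    tokens_affordable = int(budget / price_per_million_tokens * 1_000_000)
    print(tokens_affordable)  # 20000000 -> 20 million tokens
    ```

    Twenty million tokens is far more than a tutorial needs, which is why a small top-up goes a long way here.
    
    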

    So now, once we have done that homework, what do we need to do? We will simply go to our Antigravity, or your favorite code editor, and create one file called

    58:10

    .env, and then just add your API key. You need to create a variable called OPENAI_API_KEY equals whatever your key is. Obviously I will not show you mine, but

    58:26

    you need to put that value. Why is it showing like this, sk-something? Because this is a kind of demo value that it generates, since OpenAI API keys always start with sk-. So you need to put

    58:43

    this value in your particular case. Makes sense? Because we'll be using this environment variable. Okay, perfect. So here is my API key, okay, and it is incomplete because I'm not scrolling to the right, and I will revoke it after this video. So this way you need

    59:00

    to put your API key here. Why?

    Because we do not expose our API key in our code. It is not a best practice.

    We should always create an environment variable for our API key. Makes sense?

    Okay. Very good.
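    For reference, the .env file being described is just a plain text file with one variable per line, no quotes needed — the value below is a placeholder, never commit a real key:

    ```
    OPENAI_API_KEY=sk-your-key-here
    ```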

    So let me just close it. And we have the .env

    59:16

    file in the root directory, not inside chapter one. It should be at the root directory level.

    Okay. Makes sense.

    And we can even hide it, or basically exclude it, from our codebase: go to .gitignore and simply add .env. What will happen? The advantage is that now it is not

    59:33

    part of, you can say, your codebase, your development. Now it will not be published whenever you push this code to GitHub. Okay, good. These are the best practices that I'm telling you along the way, because if I'm teaching you something, it's my responsibility to cover everything, including best practices. So the first

    59:50

    chapter is basics. Now, what are we trying to learn in the basics? Let me show you something and you will be like, okay, that's how it is done. Okay, so now let me add a markdown cell, basically a cell where we can create headings. And don't worry, all of these notebooks will be available in

    00:06

    my GitHub repo, just for your notes. But I want you to write everything code by code, you can say character by character, so that you can also learn, you can also build. Okay, it's not like I will write everything and you will simply read it. No, you are a coder, you are a programmer, you are a developer.

    00:22

    Makes sense? So let me... okay: LLM call. Let's simply try to build this simple LLM call.

    Okay, so now what are we trying to do? We want to make an LLM call, like the one ChatGPT makes behind the

    00:39

    scenes, right? And we'll be using LangChain for that.

    Now, the thing is, we can make this LLM call very easily without LangChain as well. It's not a big deal.

    Let me just show you how we can make an API call to OpenAI directly. Okay,

    and then how we can use LangChain, and why we

    00:56

    need to use LangChain. So let's go to the documentation, and I'll simply say, let me close so many tabs, bro. Okay, okay, perfect.

    01:12

    So now, let's say I want to make an API call: I search OpenAI API Python. Okay, I want to see the API reference. Perfect, because they keep changing the code, not frequently, but yeah. So this

    01:27

    is... where is that Python... yeah, they used to have a switch between the languages. Introduction... it will be somewhere here, but let me just check OpenAI. Or maybe I can just use this LLM

    01:43

    example. This is good. So this is the code that they show. If I just copy it, go to Antigravity, and write it here, and obviously I need to pick a code cell, right? So I'm writing import os, right, and then from openai

    02:02

    import OpenAI. And this is the exact thing that we are doing. In case you do not have this openai library installed, you can simply say uv add; first of all activate the environment, then uv add openai.

    02:19

    Perfect, now we have the openai library installed. Makes sense. So now what are we doing? Very simple Python, not even a function, I would say; I'm simply creating a client. So this is the os module, simple. And why the os module? Because we want to get this environment variable,

    02:35

    as you can see. I'm creating the client using the OpenAI class, which is built by OpenAI, not by me. I'm passing this API key, because without that we cannot work, right? Because it needs to identify who is making the API call, who is asking it to provide the answer. It's you, Rahul,

    02:51

    right, who is making the API call. So Rahul needs to provide that particular API key. And where is the API key stored? Now, you could hardcode the API key value here.

    But An Lamba just told you it is not a best practice, because you can simply expose your API key. So we store all the

    03:09

    secrets, or you can say secret keys, in the environment. os.environ is a dictionary-like mapping of all the environment variables to their values. And then we use the .get

    03:26

    method. You can also use simple square bracket notation as well if you want, like this; it works.
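    As a tiny illustration of the difference (standard library only; DEMO_KEY and MISSING_KEY are made-up variable names for this example):

    ```python
    import os

    # Set a throwaway variable so the example is self-contained.
    os.environ["DEMO_KEY"] = "sk-demo"

    # Bracket access works, but raises KeyError for missing names.
    print(os.environ["DEMO_KEY"])                     # sk-demo

    # .get returns None (or a default you choose) instead of raising.
    print(os.environ.get("MISSING_KEY"))              # None
    print(os.environ.get("MISSING_KEY", "fallback"))  # fallback
    ```
    
    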

    But .get is, you can say, the standard way to pull an environment variable. So that is fine. Then we are simply calling client.chat.completions.create(). Okay.

    completion.create. Okay.

    And then we are simply writing a messages. Now we are saying hey we want to send a message to the LLM which is the LLM model we are using.

    I will simply override it. Don't worry.

    And the role is user. That means I am the user who is sending the message.

    Content is say this is a test.

    04:01

No, I do not want to say this. I want to say: bro, tell me a joke about... or, let's say: tell me a fun fact.

Okay, tell me a fun fact. I want to know

    04:18

about a fun fact. Makes sense?

I want to ask this. Okay.

And I want to use the model GPT-5 mini. GPT... GPT-5 mini, makes sense. I want to use this

    04:35

particular model, GPT-5 mini. I can pick any model.

I can pick 4o. I can pick 4.1.

I can pick 5.2. There are so many models.

I want to pick this model, GPT-5 mini. Right?

    And then I'm simply printing the content of the response.
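Assembled, the raw-SDK version from this section looks roughly like this (a sketch, not the video's exact file; it assumes `OPENAI_API_KEY` is set in your environment and the `openai` package is installed):

```python
import os
from openai import OpenAI

# The key comes from the environment, never hardcoded.
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-5-mini",  # model name as used in the video
    messages=[{"role": "user", "content": "Bro, tell me a fun fact"}],
)

# The generated text lives on the first choice's message.
print(response.choices[0].message.content)
```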

    04:51

Simple. Let's try to run this.

Let's see what happens. 'Install requires ipykernel.'

I told you: yes, install it, because it asks you to install ipykernel in your virtual environment. So now let's see what will happen.

Do you know what will happen? It will generate the response that will be created by the

    05:09

LLM. LLM means this language model, this large language model, GPT-5 mini.

Okay, let's wait. Okay, see, this is the response, bro.

Fun fact: wombats poop cube-shaped piles.

    05:26

What? They use the stackable cubes to mark territory and communicate.

Ooh. Oh, bro.

This is actually a fun fact. I still remember reading those fun facts written at the back, or at the end, of our

    05:43

classmates' notebooks, which were the most premium notebooks. So yeah, it's good.

So see, we have just got the response the same way we are getting it from ChatGPT, like from here. See, same thing: I'm

    05:58

just simply writing the message here, and I'm getting the message back. That's exactly what ChatGPT has done.

We are simply passing the message, they are simply generating the response, and we have done the same thing.

We are passing the thing, we are getting the response, and they are simply building a kind of front end just to show the message.

That's it.

    06:15

That's it. So that means you have built ChatGPT... kind of, kind of, kind of.

Okay. Now, now you will say: Ansh Lamba, what is the problem?

Like, we didn't even use LangChain. So now, why do we need to use Lang

    06:30

Chain? Okay.

Let me just ask you a simple question. And let's say... 'Why LangChain?', right? This is your question, right? So let's say Rahul... where is Rahul? Bring him here,

    06:48

Rahul. So let's say this Rahul is using an OpenAI model, right? Let's say this Rahul is using an OpenAI model. Let's say this is OpenAI, okay, GPT-5.2, right... 5.2... GPT-

    07:06

5. Let's say not 5.2, just 5. Let's be honest, because we are using GPT-5.

So we are using GPT-5, okay, to make the API calls. Okay.

And we are able to do it very easily. Okay.

And we have got the response.

    07:22

    Okay. Perfect.

    And I'm just doing my work. Now everything is running fine.

And to make these API calls, I'm not using LangChain. I'm using the official Python SDK, the software development kit, provided by OpenAI.

    Perfect. So I'm using

    07:38

the OpenAI SDK. Perfect.

Okay. Just to make this API call.

Now suddenly I realize that OpenAI models are very good at generating analysis, at

    07:54

generating reports, generating research, whatever, but they are not very good in terms of... by the way, no offense to OpenAI models, I'm just taking it as, you can say, a hypothetical situation. And we all know that Anthropic's Claude models are very, very good in

    08:11

terms of coding; everyone knows that, right? So let's say in my project I want to use OpenAI, yes, but for specific tasks I also want to use, let's say, a special model called

    08:27

Anthropic, or let's say Claude. And if you do not know about Claude, you should know about Claude, bro, because Claude is very, very famous. Now, if I want to use Claude as well...

I want to use Claude as well, what will I do?

Same thing. I will make the API call, and I

    08:46

will use the Claude, or basically the Anthropic, SDK. I think Anthropic is the parent company;

Claude is the product. Okay, makes sense.

So I'll be using the Claude SDK. Now, let's say I want to include one more, you can say, model.

    09:02

Let's say I want to use Gemini to generate images. Let's say I want to use Nano Banana, bro, whatever, right?

Let's say I want to use this thing. Let's say this is Gemini. Gemini.

Right? Let's say I want to use

    09:19

this. What will happen?

You will say: um, same thing, we will simply write our code using the SDK.

Simple. Perfect.

And which SDK will you be using? Obviously Google's.

    09:37

The Google SDK. Perfect.

Now just tell me one thing, bro: how many SDKs will you use?

Let's say I want to use 10 more models. Oh... so what if I just tell you that you can use this model, GPT-5? Okay,

    09:57

GPT-5, use this model, okay. You want to use Claude? Okay, use it. You want to use Gemini? Use it. And here

    10:12

you have all the models, right? All, all the models. Let's say 20-plus models, 30-plus models, whatever; let's say all the models are here. Okay, now if I say, Rahul, come here: instead of writing so many different,

    10:28

different SDKs, because every SDK has its own syntax, every SDK has its, you can say, own code, whatever, right? So now this Rahul can use something called LangChain. Okay,

    10:45

now this Rahul can use, let's say, something called LangChain. Now, this LangChain, what will it do? This LangChain is a kind of wrapper.

    11:01

This is connected to it... all the models are connected to it. So LangChain will handle all of the SDKs, and we just want to use, or need to use, the LangChain SDK. Just tell me one

    11:19

thing: is it better to use just one SDK to access all the other models, or is it better to use different, different SDKs? Obviously this is better, obviously this is better. So we do the same thing: we just use

    11:36

this thing, LangChain. We do not use different, different SDKs. When we didn't have agentic frameworks, let's say LangChain, LangGraph, all those things, we used to use different, different SDKs, and it was chaos. And now we have only a

    11:51

single SDK, which is LangChain, and life is sorted. We do not need to use different SDKs. We can access all the models.

Wow, that is really cool. Yes.

So now, instead of using this, I will use LangChain. So how can we just do this?

For that, I need

    12:07

to install one more library. It's called `langchain-openai`.

So I will say `uv add`. Let me just open this one.

Okay, perfect. `uv add`.

Make sure you have this on, you have this virtual

    12:22

environment turned on. And I have already shown you how you can just turn it on: `.venv\Scripts\activate`.

That's it. Okay.

So, `uv add langchain-openai`. Add this. Now, what will it do?

It will

    12:41

simply install it. So I will simply say: from `langchain_openai` I want to import a class called `ChatOpenAI`.

Okay. `ChatOpenAI`.

Okay. Now simply say `llm_`

    12:56

`openai = ChatOpenAI(model="gpt-5-mini")`. GPT-5 mini, that's it. And `temperature=0`.

Now, what is this

    13:12

temperature? If you do not know, temperature is basically the creativity of a model.

If you set it to zero, that means the model will not be creative; the model will simply generate the answer with the highest probability.

But if you set, let's say, 0.9, 0.8, 0.7, the model will just play

    13:29

around with the output, and it can just generate some random output as well. It depends on what your use case is.

If you are generating something very, very serious and you do not need to play around with the output, you should just set temperature equal to zero. So, `llm_openai = ChatOpenAI(...)`.

Okay. And then you can simply say... now, if you just

    13:46

want to ask something from this model, you will simply say `llm_openai.invoke()`, and then I will say: bro, tell me a fun fact.

    14:03

Makes sense? And this will generate the output. And do you know what, this time I will not curate the output; I will just show the real and raw output, how it generates the output. Let me just show you. So let's wait.
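The same call through LangChain can be sketched like this (a sketch only; it assumes `uv add langchain-openai` has been run and `OPENAI_API_KEY` is set in the environment):

```python
from langchain_openai import ChatOpenAI

# temperature=0 keeps the answer deterministic rather than creative.
llm_openai = ChatOpenAI(model="gpt-5-mini", temperature=0)

# invoke() returns an AIMessage object; .content is the text itself.
response = llm_openai.invoke("Bro, tell me a fun fact")
print(response)          # the raw AIMessage
print(response.content)  # just the text
```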

    14:20

Perfect. This is the output.

We always get the output in the form of an AI message. There are three different types of messages:

AI message, system message, and human message. We will talk about messages as well very, very soon.

But this is the raw output. But we can

    14:36

just obviously curate it. We can simply say `.content`, and that we do every time.

But this is the raw message, an AI message.

That means it is sent by the AI. And this is the fun fact.

Let's read it: bananas are technically berries, but strawberries aren't.

Wow. What?

A berry

    14:54

comes from a single ovary. Okay.

While strawberries are an aggregate accessory fruit made of... wow.

This is actually a fun fact. Okay.

Now, if I just run this, I can simply get the content. Only the content, not the raw output.

    So this is a very basic use case

    15:10

of the LangChain API. Now, what is the advantage of it?

Now, let's say I want to use an Anthropic model. I will simply say: from `langchain_anthropic` import `ChatAnthropic`, and I can make an `llm_`

    15:29

`anthropic` as well. Simple, like this.

And I can use Claude 3.5 Sonnet. By the way, we now use Sonnet 4.5.

Yeah. But I will not use Anthropic here, because I do not want you to create multiple API keys; for this, you would also need to create an API key within the

    15:45

Anthropic platform. But I just want to show you the use case of it.

Makes sense? Okay, makes sense.
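For reference only, the Anthropic variant would look something like this (a sketch; it assumes `uv add langchain-anthropic`, an `ANTHROPIC_API_KEY` in the environment, and a valid model id from Anthropic's docs):

```python
from langchain_anthropic import ChatAnthropic

# The model id below is an assumption; check Anthropic's current model list.
llm_anthropic = ChatAnthropic(model="claude-sonnet-4-5")

print(llm_anthropic.invoke("Bro, tell me a fun fact").content)
```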

There's another way to create this particular thing, which was recently added. It is called `init_chat_model`.

So you can simply say: from `langchain.models` import

    16:04

`init_chat_model`... or, I think, just from `models`... yeah, I think maybe just from `langchain`, because they have recently added it to `langchain`.

Let me just check the `init_chat_model` class.

    16:20

`init_chat_model` LangChain... oh, `chat_models`.

Perfect. So we have: from `langchain.chat_models` import `init_chat_model`.

Perfect. So with this

    16:36

`init_chat_model`, we do not even need to define... like, whether we are trying to import OpenAI models or Anthropic models, we simply need to put in the model name, and that's it.

I will simply say `llm_openai = init_chat_model(...)`, and then the model name... let's say `model=` GPT-5

    16:54

mini. This was recently added.

Okay. Even if you use either approach, it is fine.

So, not a big deal. And let me just invoke it.

Let me say: hello, how are you? And it will say: I am doing well, thanks.

    How are you? How can I help you today?
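That newer, provider-agnostic helper can be sketched as follows (still a sketch; it needs the matching integration package installed, here `langchain-openai`, plus the API key in the environment):

```python
from langchain.chat_models import init_chat_model

# The provider is inferred from the model name; writing
# "openai:gpt-5-mini" would make it explicit.
llm = init_chat_model(model="gpt-5-mini")

print(llm.invoke("Hello, how are you?").content)
```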

So, it is just making all of these API calls using your API key. Makes

    17:11

sense? So, these are the ways to make API calls.

Basically, LLM calls. Simple.

Good. Now, let's talk about the messages, because we just saw the types of messages.

Okay. Now I want to make you aware of the messages that we can use

    17:27

within LangChain, or basically in any agentic framework that we use, because in every agentic framework we use messages, and messages have a great role to play. Okay.

So let me just bring it here. Let me just explain the messages to you, like why messages are a big

    17:44

deal. Why, I would say... you will know the reason why.

So, messages. If you want to communicate with anyone... let's say Rahul. Where is Rahul? Yeah, come here, bro. Let's say this Rahul, this Rahul

    18:02

needs to communicate with anyone, right? Anyone. Let's say this Rahul needs to communicate with any team, any human, anyone. Let's say this Rahul needs to talk to these users. Makes sense?

    18:17

So obviously, in order to communicate with these people, this, you can say, two-way thing is very important. Makes sense? Because this message, this

    18:33

Rahul's message is, you can say, Rahul's message, right? Let's say you are just listening to a transcript of a call. By the way, you should not listen to someone else's call transcription, but let's say you are. Okay, let's say you are

    18:49

just listening to the call transcription of your best friend, okay, with their... whatever. Okay, you know what I mean?

So let's say you are listening to that transcription. So what will you hear?

    You will hear

    19:04

Rahul's message, and in return you will hear this person's message. Okay, let's make it, let's say, purple.

Let's say this person's message, or let's say the user's. Let's say this Rahul is listening to, or basically talking to, a group of

    19:21

    people. Okay.

So I will simply say, let's create one person. It will be fun.

So let's say one user. Perfect.

Okay. This Rahul is talking to whom?

    19:39

Hmm, let's say Siman. Who is Siman?

I don't know, just a hypothetical... just a hypothetical situation and a hypothetical person.

So, this is Rahul's message, and this message would be called Siman's

    19:55

message, right? By the way, why are you laughing? Two people are talking to each other, what's the big deal? So now, let's say these two people are talking to each other. So now, if I want to listen in, I will say: Rahul's message, and

    20:12

this is the content, tuck tuck tuck tuck tuck; this is Siman's message, tuck tuck tuck tuck tuck. So this is the communication, the way of communication, between two people, right? We have, like, an identity for each message: who is delivering which message, and who is

    20:28

receiving which message. Simple, nothing fancy.

    Very good. Now let's talk about the real stuff.

    Let's say this Rahul is talking to an AI. Okay.

    What? So let's say this Rahul is

    20:45

    actually talking to AI. Okay.

    Now like real AI. Okay.

    So now let's say this Rahul is talking to AI. Make sense?

    Now just tell me one thing. This Rahul's

    21:01

    message is this. I know that.

    And this AI message will be this one. Simple.

    This is the AI message.

    21:18

Okay, that the AI is generating, that the LLM is basically generating. Okay, makes sense.

Okay, simple. So this is also done.

So this Rahul's message will be called a user message, in technical

    21:34

language, because Rahul is a user. Okay, Rahul is a user.

So this will be considered the user's message. Simple. Like, I know it is new, but it is

    21:51

not very complex. This is the user's message, okay. This AI message is obviously, like, the AI message. Simple. Now there's one more term, called the system message. Now, what is a system message? This can be confusing. Okay, let me just take you back to the previous example.

    22:08

Now, let's say Siman... we need to set the tone for Siman. I know Siman is talking to Rahul. Okay.

    And Rahul is also your friend. Siman is also your friend.

    Okay. And now

    22:26

you need to tell Siman that you need to talk to Rahul, but you need to talk, you can say, softly, or you do not need to say anything rude to him. So you are setting the tone, you are setting the

    22:42

environment, like how they should talk to each other, right? So you are guiding Siman on how to talk to Rahul.

I will say: talk nicely and be a little

    22:59

    polite. Okay, this is the tone that I have set for the environment, for the conversation, for the responses.

    So now do you know what? All of Siman's responses will be polite

    23:15

and will be very nice towards Rahul. You can do the same thing with an LLM as well.

You can tell the LLM: hey, talk nicely to me and be a little polite. Why does it

    23:30

matter? For example, let's say you are asking the LLM to write an email to your friend.

An email to a friend. Let's say, let's say you're living in the '90s; you're writing a letter. So you're asking the LLM to write an email to your friend.

So

    So

    23:46

    obviously it should be a little polite. If you are just writing an email to your manager, it should be formal.

So you need to set the tone. That tone, that environment, is called the system message; nothing else.

    System message. So this system message

    24:05

is this thing, simple. I hope you understood with the example.

I am so sure. So this is the system message, because, see, I needed to take an example so

    24:21

that you can understand it. Simple. So this is the system message.

So that means only the user message is in your control. The AI message is in the AI's control.

The system message is for the LLM, but you control it, because you set the tone.
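In OpenAI-style terms, the three message types from the Rahul/Siman analogy map to three role strings. Plain dicts here, just to illustrate; LangChain wraps these in message classes:

```python
# system = the tone you set, user = Rahul's message,
# assistant = the AI's (Siman's) reply.
conversation = [
    {"role": "system", "content": "Talk nicely and be a little polite."},
    {"role": "user", "content": "Write an email to my friend."},
    {"role": "assistant", "content": "Hey! Hope you're doing great."},
]

roles = [m["role"] for m in conversation]
print(roles)  # ['system', 'user', 'assistant']
```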

    24:38

Okay, makes sense? So this is all about the messages.

This is, like, a real-world analogy, or whatever you want to call it, and this is the real use case of it. Now, I hope you understood it with the help of the example between Rahul and Siman.

    So now let's try to code it. Now

    24:55

Rahul, just focus on coding. Forget about Siman, and now let's talk to the LLM, right?

Because, see... now let's create a new notebook. This is, like, a very basic notebook, because I want to provide this particular masterclass in the form of structured chapters, so that anyone can

    25:11

go to chapter 1, 2, 3, and within each we have topics, basics, and all those things. So it will be very handy. And let me just rename it.

Let me just say `1_llm_call`, simple. Now, let me create a new

    25:26

notebook. I will say: second, messages... `2_messages.ipynb`.

Perfect. So let me just talk about the messages, and you know the messages, like why do we need messages?

Why do we use messages? So now let me just write the code.

    So in

    25:42

order to use the messages, we have, like, a specific library that we need to install. It's called `langchain-core`.

LangChain core. And by the way, I will be referring to this particular LangChain migration guide as well,

    25:57

because they have recently upgraded LangChain to 1.0, and earlier we were using 0.3, 0.6... they just changed almost all the syntax, all the classes. I don't know why. Like, does that make any sense?

    If you want to add something

    26:13

make new functions in the existing classes, why do you need to change the class names? Like, what will you get by breaking people's codebases? Okay, so this is basically the guide. So,

    26:29

the thing is, I am also exploring this particular new, you can say, version of LangChain. This is the older version... not older, this was the, you can say, traditional way to use those particular classes, simple. But this is, you can say, the

    26:44

latest way to use the classes. It doesn't mean that you cannot use the old method, but I am thinking, if I'm teaching you something, I should just use the, you can say, most recent and latest version. That's it. Okay, so currently I'm not seeing anything for messages, so we can use the messages as is. Okay, makes

    27:00

sense. Okay... we have a new thing, `langchain.agents`... no, this is the same. Messages are the same. Okay, makes sense.

Okay, so I'll just be switching between these particular things so that I can provide you the latest code as well. By the way, you

    27:17

    can use the older version as well. It is not old.

It is just the previous set of classes that we used to work with. Okay.

    Nothing is there to make you worried. Okay.

    So let's say

    27:33

messages. So what will I do?

I will simply add the library. I will say `uv add`, and it will be `langchain-core`. Perfect.

LangChain core. Did it use

    27:49

`langchain` or `langchain-core`? Uh, `langchain`.

Where are the messages? Okay.

`langchain.messages`... we use `langchain_core`.

Okay. Now we can just use `langchain.messages` as well.

    But

    28:06

yeah, that's fine, because if it is outdated, we will see the error. Don't worry.

So I'll simply say: from `langchain_core`... oh, we need to select the kernel.

Perfect. From `langchain_core.messages`, we

    We

    28:21

will import `HumanMessage`. And do you see all of these code completions?

And do you know what, this is not Pylance. This is not some extension.

This is called Antigravity's agent mode, which is available for free. If you go to Antigravity settings, you will see a

    28:39

snooze button. This is for the AI agent, right?

So they are running the free AI models... okay, the pro models, for you, for free, because all these code completions are not free.

If you use VS Code, you have to purchase it. It is not available

    28:54

for free. Antigravity has made it available for free, maybe for a shorter period of time, because they have just recently launched Antigravity. But yeah, if it is free, you should just take advantage of it. So it will be, like, `HumanMessage`, and then, let's

    29:10

say, `SystemMessage`, and we want to import `AIMessage`. Perfect. And once we have all of these messages, I will start writing my messages. I will say `my_messages`. Simple.

    29:25

My messages will be this. I want to say: a list.

I will say `HumanMessage`. Okay, human messages.

And then I will say `content=` and whatever I want to say. I will say: bro,

    29:42

tell me a fun fact. Simple.

But this time I will not just send this particular message to the LLM. I will also set the tone.

Remember the system message? And it is saying: 'You are a helpful assistant.'

Right? This is a tone.

This is telling the LLM that you are a helpful

    29:58

assistant. I will say: you are a Gen Z assistant.

Yeah, a Gen Z assistant which... or, let's say, who always answers in a fun

    30:13

way. Let's say we want to set the tone, set the tone for the LLM, right?

Wow. Can we do that?

Yes, you can do that. So now, what will I do?

I will simply say `llm`... oh, we need to import the model as well.

This one. Remember, we just created the model

    This one remember we just create the model

    30:30

like this. So let me just create the model first of all.

Perfect. So we have the model ready.

Now I will simply say `llm_openai.invoke()`. Now, I will pass the entire list of messages:

system message, human

    30:46

    message. Perfect.
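The pattern being typed out here looks roughly like this (a sketch; it needs `langchain-core`, `langchain-openai`, and an `OPENAI_API_KEY` in the environment):

```python
from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI

llm_openai = ChatOpenAI(model="gpt-5-mini", temperature=0)

my_messages = [
    # The tone for every reply:
    SystemMessage(content="You are a Gen Z assistant who always answers in a fun way."),
    # What the user actually asks:
    HumanMessage(content="Bro, tell me a fun fact"),
]

print(llm_openai.invoke(my_messages).content)
```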

Now let's see what we get. I will say `.content`.

So now you will see that this LLM will try to answer in a fun way, in a Gen Z way, which will actually,

    31:02

you can say, impact the response, obviously, because you are setting the tone for that LLM. And this way, you can ask the LLM to behave like anyone. You can even say: hey, behave like this person, not like that person. Obviously, the system should know about

    31:18

this thing. And why is it taking so long... connecting to kernel, .venv... yeah, see, this is Antigravity.

This is a new thing, so there can be so many bugs. So, no need to worry at all. You can simply rerun it, you can cancel it, and you can

    31:34

maybe refresh it. So let me just reopen this Antigravity.

Okay, let's see. Because, see, that's why they just provide you this stuff for free in the beginning: so that you can detect the, you can say, bugs and all. 'Could not register service worker,'

    31:52

blah blah blah. 'Open in text editor.' Are you serious? Let me just open this... open in text editor. So I think something is broken in Antigravity; that's why it is not able to open any .ipynb. 'Could not initialize webview' and blah blah blah. So if this happens to you, you do not need to worry at all. You can

    32:08

simply close everything and just refresh, or basically terminate this particular application, Antigravity, and just start it again. It will work after some time, because something is broken, maybe on the ipykernel side, right? And nothing is broken in your code.

Your code is running fine. And you can

    32:23

also switch to any other code editor, let's say VS Code, which is more stable. But yeah. So I have just restarted my application completely, by ending it, or terminating it, through the Task Manager.

So now let's try to see... and it has detected our kernel this time. Very

    32:39

    good. Slow claps.

Antigravity, if you are watching this, this is a bug for you. Fix it.

Call your development team. Okay.

    So now let me just invoke it because we want to see the output. Perfect.

    Let's wait.

    32:56

    And perfect. Now I got the response.

    Bro, fun fact. Same thing.

    Bananas are berries but strawberries aren't. But this time if you see the tone, it is also using let's say emojis or all those things that a jenzi will say.

    33:13

Makes sense? See, 'wild', right?

So this is actually talking like a Gen Z. This is actually talking like a person who is Gen Z, because you have set the tone.

That way, you can modify... not modify, actually; you can

    33:28

direct the responses in a certain direction. Let's say you want to build an LLM which is a kind of comedian, which, you can say, writes posts about, um, laughter, or which generates

    33:45

content, you can say, around laughter. So you need to set the tone for it.

You can simply say: you are a comedian who talks about this thing. So you can set the tone.

So now, do you know what: when you set this tone, whenever you invoke this thing with this particular system message,

    34:02

with the system message, it will remember this thing. This is a very basic use case of the system message. Okay.

Now, we have some built-in messages as well. Okay.

What are built-in messages? Built-in messages, or I would say prompts, because

    34:18

    this is a very basic thing, right? I'm simply writing human message and system message.

This is a very standard way. This is, like, OpenAI-style messages.

Okay? Because system message, human message is only available in OpenAI.

If you want to use, let's say, any other model, it will not be, like, system message or human message. It will be

    34:34

something else. So, to solve this problem, LangChain provides us something called prompts.

We have a dedicated library for prompts. What does it mean?

Basically, let's say I want to pass a prompt, because, see, whatever I'm passing here is a prompt,

    34:50

    right? This is a prompt.

This is a prompt, right? So we have a dedicated library for prompts.

Why? Why?

By the way, very good, you can say, question: why?

    35:07

So, in order to answer your 'why', let me just ask you a simple question. Okay.

And by the way, I have a quick question: we are not importing our API key.

How is it able to call it? How is it able to call it?

    35:23

Oh, makes sense, because I think it has automatically loaded everything from our `.env`. So let me just add that thing as well, because we are not calling the dotenv functions. But obviously, this is a

    35:39

smart code editor. If you face anything, then you need to import this particular environment variable yourself.

Sometimes, if the editor is smart, it can automatically import everything into the environment, because that makes sense: it is created in the root directory.

But if you want to do it, you can do it like this: from...

    35:56

not LangChain. This is not related to LangChain.

This is, like, pure Python: `import os`, and then we have a function called `load_dotenv`. And for that, you need to import one library.

It is called... it is called... let me open this terminal.

    36:14

Uh, this terminal, this terminal... no, that terminal doesn't have it activated.

Oh yeah, this one. Perfect.

So now, for the library, I say `uv add python-dotenv`. Perfect.

    So this is the particular

    36:29

library, python-dotenv. So how can we just use it? We will simply say: from `dotenv` import

    36:48

`load_dotenv`, and we need to run the function called `load_dotenv()`. What does this function do? This function doesn't do anything fancy.

What this function will do is it will simply

    37:03

load everything which is saved in this particular `.env` file. If you have API keys, secrets, variables, everything, it will load all of that into this particular notebook.

Why didn't we need to write this? Because obviously this particular code editor is smart.

It would have already loaded it into our environment. So it

    37:20

is fine. But usually, you should add this.

So I'm simply adding it here, so that if you see the error, you can refer to it, and you should do that. Let me just run it again.

Pick the virtual environment, and that's it. Makes sense?

So this is loaded. Now, in order to test this particular environment

    37:35

variable, you can also write one more command. It starts with `os.`... by the way, there's no need to set it, but I can just show you, in order to test it.

    So I will simply shift this

    37:51

code cell to here. Now I want to make sure that my variable is loaded.

How can I make sure? I will simply say: if `os.environ` has the OpenAI key (that means, if it exists), print:

    38:09

'Bro, API key variable exists', because obviously I do not want to show you the key, but this is a confirmation: if you see this message, that means the variable has been created. Okay. And it is taking a lot of time...

    38:25

So, as you can see, it says 'Bro, API key variable exists'. That means this function has loaded and created all of the environment variables. I can also say: else, raise a ValueError, 'OpenAI key not found'.

Perfect. So I hope you understood this concept.

This is, like, a very small

    38:40

thing, but it is very important. So again, this function... this function will... let me just write it for you.

See, I'm doing so much for you. This function will load all the variables from the `.env`

    38:57

file and will make them available in the `os.environ` dictionary; that means, as environment variables.

    Perfect. So it will automatically create environment

    39:13

variables for us, for all the variables which are there in the `.env` file. Perfect.
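Conceptually, `load_dotenv()` does something like this minimal, stdlib-only sketch (the real `python-dotenv` package also handles quoting, comments, interpolation, and more; the file and variable names here are made up for the demo):

```python
import os

def load_env_file(path: str) -> None:
    """Read KEY=VALUE lines from a file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Usage sketch with a throwaway file:
with open("demo.env", "w") as f:
    f.write("DEMO_SECRET=demo-value\n")

load_env_file("demo.env")
print(os.environ.get("DEMO_SECRET"))  # demo-value
```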

And you should add this code in all your notebooks. Yes, in all the notebooks.

Let me just make it available here. And I know, like, why we didn't see any kind of error: because we loaded everything from our

    39:30

`.env` in the root directory. But it is a good thing to do this here.

    Perfect. Perfect.

So now we have added this thing. So we are good, because I was just looking at the code,

and I was like, why didn't we create this thing, and it is still working? Thanks to Antigravity for some things, and bro, there are some bugs.

    39:47

So you need to fix them. So now, I was just talking about prompts.

So let me just write: prompts. Ooh, prompts.

So whenever you talk about prompts... now, basically, what are prompts?

    40:02

Prompts are the messages that are sent to the LLM. Simple thing.

So that means messages and prompts are the same? In a nutshell, yes.

So why do we have two different categories, messages and prompts? Messages are, like, a more static way of

    40:18

    sending the message to the LM. Prompts are more dynamic way and more you can say user friendly.

    Let me just show you one thing. So let's say uh let me just write it for you.

    Prompts are more user friendly than messages.

    40:36

    Perfect. So now let's say let me just share the use case.

    I want to send a message. I want to send a message to the LLM, so I will say llm_openai

    40:54

    .invoke, and let me just create my messages first. Let me just copy them from here. Perfect. This time I want to say: tell me a fun fact about, let's say,

    41:10

    honey. I want to know a fun fact about honey. Okay, simple.

    Okay, I will simply say .invoke, then .content, and I will get the message. SystemMessage is not defined.

    Very good. We need to run this as well.

    Okay, let me run this.

    41:27

    Let me run this. Okay.

    Okay. Perfect.

    So, if I just run this, I will get the message about honey. Okay.

    Makes sense. Makes sense.

    Okay. Very good.

    Perfect, bro. Fun fact, honey basically

    41:43

    never dies. Oh, really?

    Archaeologists have found jars of honey in ancient Egyptian tombs, thousands of years old, that — oh bro — were still edible. That's because honey's low water content and high acidity keep bacteria and microbes from

    41:59

    growing. Wow.

    Wow. It takes about 2 million flower visits.

    Oh man, honey is mysterious. So, now we have got the fun fact about honey, right?

    And let's say we do not need to set this system message. We are just okay with that.

    But this is the

    42:15

    thing. We want to set the human message dynamically.

    If let's say you are building something, okay, and you want to get this message from a user, right? So how will you get it?

    Will you hardcode it every time? Will you go to your backend and change the code?

    42:30

    Obviously no, right? So what I can do?

    I can simply get the input from the user. Okay?

    I will get the input from the user. Let's say input equals input function or let's say

    42:46

    user input equals to input. What you what do you want to ask?

    Simple. I can create this thing.

    Now user input will be the topic of the fun fact. Now I want to make it dynamic.

    43:02

    How? I can just make it dynamic.

    Now I will use something called prompts. This part you already know, right?

    I can just delete this, right? So I will simply say from langchain_core.

    Again core, because core is very important: langchain_core.prompts

    43:20

    import PromptTemplate. We have something called PromptTemplate.

    Now what is this PromptTemplate? PromptTemplate is a special, you can say, class that we have in langchain_core which we can use to

    43:36

    build our prompt dynamically, really dynamically. Okay, so now what I will do is simply say, let's call it

    43:52

    dynamic_input, or let's say dynamic_prompt, because everything here is a prompt. I will say PromptTemplate.from_template.

    Okay, then I need to write the prompt. I will say write

    44:10

    a fun fact about... now I do not know the topic. This is dynamic.

    Have you ever worked with f-strings in Python? It is very similar to that.

    But obviously we do not use an f-string directly, because

    44:26

    people can use this same class for so many other purposes as well. So we simply write: write a fun fact about {topic}.

    Now this topic will be coming from the user. Make sense?

    Okay. So I'll simply say dynamic prompt.

    Okay. And prompt

    44:42

    template .from_template("Write a fun fact about {topic}"). Now what is this topic?

    The topic is user_input = input("Enter a topic for a fun fact").

    45:02

    Perfect. Make sense?

    So this will bring the topic from the user at runtime. Okay.

    This is the dynamic prompt: PromptTemplate.from_template.

    "Write a fun fact about {topic}". Simple.

    Now I need to inject user input into this thing. User

    45:21

    input into topic. Make sense?

    Okay. User input into topic.

    Okay. So now what will happen if I say dynamic prompt?

    That's it. If I just show you the dynamic prompt, do you know what will happen?

    Do you know what will happen

    45:41

    if I write dynamic prompt and I simply write this? Let me just run it.

    And it is asking me to put the fun fact. I will simply say this time let's say flowers.

    Okay. Hit enter.

    Now it has created this PromptTemplate object automatically: input

    45:58

    variables contains topic, which is our topic variable. Input types:

    we do not have any types. Partial variables:

    we do not have any partial variables either. And it has injected this particular thing automatically.

    See template write a fun fact about

    46:13

    topic. It has injected the topic variable only, not the value of the topic.

    Now we need to inject the value as well.

    I will simply say dynamic prompt dot invoke and then

    46:33

    you can say a dictionary: topic is user_input. What did I do?

    I simply invoked it. Okay.

    And I passed a dictionary. The key is topic, which should be exactly the same as in the template.

    And user_input, which is this one.

    46:50

    user_input is the value I will be passing. Now let's try to run this again.

    You will see flowers. Hit enter.

    Now see it has created this prompt for me. Text equals write a fun fact about flowers.

    Wow. And now we can pass this

    47:07

    value to my LLM. I will simply call it ready_prompt, because this is a ready prompt which can be used with the LLM.

    I will simply say llm.invoke(ready_prompt).content. Perfect.

    Now let's run this again. I

    47:23

    will simply say this time let's say flowers. Okay.

    Hit enter. Now this time what will happen step by step.

    I have just provided the message about flowers. Right?

    It will inject that thing using the invoke method and it will

    47:39

    create the ready prompt, which is this one. It doesn't have any variables anymore, because invoke has injected them, and this ready prompt will go inside the LLM. Simple. And now we have got the fun fact: what looks like a single sunflower or daisy is actually a

    47:55

    cluster of hundreds, sometimes thousands, of tiny flowers called florets. I think I read about florets in 10th standard. Each floret can produce a seed.

    So a sunflower head is a whole bouquet. Oh wow.

    Okay. So now I

    48:12

    hope that you have got the context of why we use templates. Now you will say: Lamba, you know what, everything is fine, but that one thing is missing.

    Which thing? This thing is missing:

    this talk nicely and be a

    48:29

    little polite part. How can we set the tone?

    How can we set the tone for our conversation? Bro, this is required, bro.

    We have to tell Simon: talk nicely to Rahul. We have to tell it.

    Okay. So how do you tell the LLM?

    48:45

    So what can you do? You can use something called ChatPromptTemplate.

    There's another class. So if you do not want to set the tone, you will use PromptTemplate, which is very similar to your f-strings.

    If you want to set the tone the same way you set it here in the messages, we have a class for that as well.

    49:02

    What, are we part of LangChain? No — but yeah, not now, maybe soon.

    So let's say from langchain_core.prompts I will import ChatPromptTemplate. This

    49:19

    is special: in this one you can set the tone as well. So now my prompt will look like this:

    prompt_template. I will say ChatPromptTemplate.

    Now I will say from_messages. There we said

    49:35

    PromptTemplate.from_template; here we say ChatPromptTemplate.from_messages, because we are creating a messages list.

    Right? Messages.

    Now this is a list. Now this is very important.

    Now we do not use SystemMessage, HumanMessage, AIMessage,

    49:52

    whatever. We simply use a tuple and we simply say user.

    User means me, Rahul. User means Rahul.

    Okay. And it is the request.

    Write a

    50:10

    fun fact about {topic}. Now the second thing will be system.

    This is the tone that we are telling Simon. Right, system:

    You are a

    50:27

    polite assistant. Simple.

    Perfect. Makes sense.

    So this is the list that we need to create. But here we do not need to write SystemMessage, HumanMessage,

    50:44

    and all those things. No, the ChatPromptTemplate class will automatically do it for us.

    Do you want to see? Let me show you.

    prompt_template. Let me show you the ready prompt template.

    ready_prompt, or let's say ready_prompt = prompt_template.invoke. Let's say we want to write a

    51:00

    fun fact about AI. Do you know what it will create for us?

    What will it create? It will create the same thing as this.

    But we do not need to do the hard work like this. We can simply say: generate this prompt for us using ChatPromptTemplate.

    See: ChatPromptValue, messages, SystemMessage. See, we didn't

    51:16

    write SystemMessage. We simply wrote system and our message, and it automatically created this particular SystemMessage with content equals this, and so on.

    See I can even display you the better results. So I will simply say

    51:33

    .messages. See, this is a list.

    This is the same list we passed here, but this time we have more control, because we can inject the variables at runtime and get the message. Make sense?

    51:50

    Okay, make sense? And obviously here I'm just hardcoding it, but we can ask the user for it: user_input = input("Enter a topic").

    52:06

    Perfect. Makes sense.

    And then we can pass this particular prompt to our LLM, right? I will simply say ready_prompt.

    Yeah, you can just pass the ready prompt. You do not need to pull out the messages and all.

    It is fine because this LLM has a good

    52:22

    understanding of the context. It can handle that.

    But if you want to be a pro, you can pass just the messages — if you want to save some tokens, if you are a pro developer.

    Yeah. So let's try to run this.

    And this time I want a topic about, let's say, code editors, or let's say

    52:39

    Google. Let's read the fun fact: Google's name is a play on the word googol, a one followed by 100 zeros, chosen to

    52:55

    reflect the founders' mission to organize a huge amount of information. And before that, the search engine was actually called BackRub. Oh, I didn't know this.

    The search engine was actually called BackRub. Okay.

    53:10

    Okay. H okay.

    So, this was all about your templates, your messages, whatever we use. Make sense?

    And I hope that you have clear understanding. Now, the good thing is you can create as many variables as

    53:27

    you want to in your messages. Make sense?

    What do I mean? Because let's say you want to define multiple.

    Let's say you want to um choose the system um tone as well on

    53:43

    your own in the runtime. You are a tone assistant.

    That means I will set the tone as well. So if I just ask this now, I will simply say us a tone enter a tone.

    Right? So I will simply add this variable as well here.

    So I can

    54:01

    add as many variables as I want. It's not like I can only add one variable while creating this prompt.

    So I can just run this now. It will ask me topic.

    Let's get the fun fact of Microsoft.

    54:18

    Okay. Tone is let's say funny.

    Let's see what we get. Let's see.

    Let's see. Let's see.

    Microsoft started in 1975. Microsoft.

    54:35

    Yes. With hyphen.

    Oh, okay. And it was founded in Albuquerque, New Mexico, not Seattle.

    Okay. Proof that one of world's biggest tech companies literally grew out of the desert.

    Oh, wow. And outgrew its punctuation.

    54:50

    Yes, this was a fun fact. Okay.

    So this was all about the chat prompt templates and prompt templates that you use. Make sense?

    Make sense? Makes sense.

    I hope that you have a clear understanding. Very good.

    Now we are going to talk about a very very very important topic. It's called structured output with LLM.

    55:07

    If you are aware of something called Pydantic, then it will be a piece of cake for you. If you are not aware of Pydantic, let me tell you what it is first of all, because if you do not know Pydantic, you are missing a very important

    55:24

    concept, okay, in Python especially. So let me show you what exactly Pydantic is.

    So let me just open my browser and write Pydantic. Basically, Pydantic is nothing but a parsing plus data validation library.

    Both parsing

    55:39

    plus data validation. See: Pydantic, validation, observability.

    Let me click this. Click on this.

    Oh, they have changed their UI. Nice.

    Earlier it was like very pinkish. Now they have just changed it to purplish.

    Okay, that's cool. That's cool.

    That's cool. So now,

    55:57

    by the way, where's the documentation? Mhm.

    Logfire, enterprise, blog, documentation. Okay.

    Pydantic validation. Yes.

    Okay. This is perfect.

    So this is basically their official documentation that you

    56:13

    should read. And don't worry, I will also tell you about Pydantic very quickly, because you'll be directly using Pydantic with LLMs.

    Basically, as I said, Pydantic is nothing but a library which helps us validate our data, validate our output, and parse it as

    56:30

    well. It is both parsing and validation.

    Okay. So, this is the code that we write: from pydantic import BaseModel.

    BaseModel is a class that we inherit into our own class, and then we just create our schema and we can do a lot of

    56:46

    things. And you can read this as well: this is the kind of schema that they have created, and they are saying why use Pydantic — powered by type hints, speed, JSON Schema, strict and lax mode, dataclasses, TypedDicts. So if you are familiar with dataclasses, if you

    57:03

    are, then it is very simple for you. Okay, if you're familiar with TypedDict as well, then it is also familiar to you. If you are not familiar with any of this, don't worry.

    I will just explain this to you. I will make you understand it, bro.

    Wait, it's really cold here. So, I don't

    57:20

    know. So, let me just take you to my notepad.

    So, basically, let me just tell you about this thing because this is very important. Pydantic.

    57:38

    So now, what is Pydantic? What exactly is Pydantic?

    Pydantic is nothing but a data validation and data parsing library. So let me bring in Rahul.

    So Rahul let's say

    57:54

    Rahul is building, let's say, a function — a normal Python function, right? And that function should have some schema. Schema means key-value pairs, anything, right? Let's say

    58:10

    this Python function — it can be literally anything — but let's say he is writing this Python function and he expects the output specifically in the form of a

    58:27

    schema that he wants. Okay, a fixed schema. Schema is, you can say, the right word here, so he wants a fixed schema. Make sense? Okay, now we know that whenever we

    58:45

    create our own Python functions, we define what we want to return. But let's say you are using an LLM to do that work. Now, an LLM can sometimes

    59:02

    generate the output in that particular specific format, and sometimes it will not. So in order to direct the LLM — hey LLM, listen to me, hey Simon, listen — you need to reply with

    59:20

    this schema, you need to say key this, value this. For example, if I bring my Antigravity here, if you observe: what particular key is it using? Nothing, literally nothing, right? So let's say I

    59:38

    will say the user message is this one: write a fun fact about {topic}; return the result as a key-value pair, in JSON, or let's say as a key-

    59:55

    value pair with the key as fact and the value as the fun fact. Perfect.

    Let's say I'm directing LLM. I'm providing some information to this LLM.

    00:11

    Okay, let's try to run this. Let's see what happens.

    Enter topic. Let's say honey tone.

    Let's say funny. And let's try to run this.

    Perfect. Now, can you see the

    00:26

    difference? Now I have got the response with fact as my key and value is the response.

    So what did I do? I directed the LLM to generate the response with a specific key and value pair. What is the benefit

    00:42

    of it? Let's say my downstream.

    Now you ask, what is the advantage of it? Let's say the downstream of this particular function is using that key.

    Make sense? Like maybe they are dependent on this thing.

    00:58

    We have other Python functions. Okay.

    Dependent on this. So we have dependency.

    Cool. So that means we need to make sure

    01:14

    that this Python function — which is actually an LLM now, okay,

    because we are using an LLM — will generate the fixed schema, and it should not generate something on its own.

    01:31

    Simple. This is our goal.

    This is our end goal. This is Rahul's end goal.

    Make sense? Fix schema dependent on downstream function.

    It can be anything. It can be anything.

    We do not know. Maybe data scientists are using it.

    Maybe data analysts are using it. Maybe

    01:46

    someone else is using it. We need to make sure our schema is intact every time.

    So one way of doing that is you have already seen directing LLM in the prompt but obviously we cannot direct it in the prompt like we can as you have

    02:02

    seen that it is it is successful here right let me just write it for you let me just remove it from here let's create a new notebook so that you will have a better understanding let's create three

    02:17

    structured output do it by MB and this is my code or let's say let's create markdown. First of all we will look at um prompts basically

    02:34

    guiding in prompts. Guiding in prompts make sense.

    This is like one way of doing it because we have so many ways. One is guiding in prompts and what is the way to do that?

    First of all, we

    02:49

    need to load this thing here. Okay.

    Attach this environment variable. Then we can load this entire thing, right?

    03:06

    Perfect. And I can attach it to this one.

    Okay, makes sense. And I can remove this thing because I don't want to apply this thing.

    And I will simply say a topic, simple, about let's say honey.

    03:23

    Make sense? Okay, or I can even remove this thing, because now we are focusing on the LLM and I do not want you to be confused with prompts and all. Okay, so I'll simply say llm_openai.invoke,

    03:40

    tell me a joke. Perfect.

    So, we have this. Let me just try to run this. Uh, langchain_core.

    Okay.

    03:56

    llm_openai is not defined. Okay.

    We didn't define the model. Really?

    Okay. We can define it.

    Not a big deal. This is the model.

    04:13

    Okay. Tell me a joke.

    Now I will guide this LLM model, okay, to generate the output in a specific, you can say schema. Make sense?

    I will say tell me a joke.

    04:30

    Generate the output in key-value pair format with the following keys: setup and punchline. Make sense?

    So it

    04:45

    will have two keys, setup and punchline. Okay, let's try to see the answer.

    It should return a dictionary. Let me just say result, and then I can say result.content.

    05:04

    Let me just run it. Okay.

    So I have setup and I have punchline. See, it has actually generated the output in key-value pairs.

    That is fine, and I loved it. Setup:

    Why did the

    05:21

    scarecrow win an award? Because he was outstanding in his field.

    Okay. Wow.

    Nice job. Very nice job.

    10 out of 10. 10.

    Okay. Now, one thing is clear that we can guide LLM in the prompt.

    But it is very simple when we have like a very

    05:37

    simple prompt. Tell me a joke, tell me this thing, tell me that.
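
To make the fragile side of this concrete: with guiding-in-prompt you still have to parse the model's reply yourself, and nothing guarantees the keys are there. The reply string below is a hand-written sample, not real model output:

```python
import json

# Guiding in the prompt (assumes llm_openai is a configured chat model):
prompt = (
    "Tell me a joke. Return the result as JSON with exactly two keys: "
    "setup and punchline."
)
# raw = llm_openai.invoke(prompt).content

# A sample of what the model *might* return -- there is no guarantee:
raw = ('{"setup": "Why did the scarecrow win an award?", '
       '"punchline": "Because he was outstanding in his field."}')

joke = json.loads(raw)   # raises if the reply is not valid JSON
print(joke["setup"])
print(joke["punchline"])
```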

    But when we have, let's say, a very detailed prompt, a very big prompt, dynamic prompts — when we are running complex workflows — we cannot always guide the LLM to generate the

    05:52

    output in a specific format, right? Then we have to follow a structured approach, and that structured approach is called Pydantic. Let's discuss that and you will understand Pydantic. Don't worry, even if you do not have any knowledge of it, I will make you understand it.

    06:08

    Structured output using Pydantic models. Now, what is a Pydantic model? Okay, and you will also see the difference between Pydantic and TypedDict — just stay with me. Now, using

    06:24

    Pydantic models. So now we need to make sure that we are guiding the LLM in a more formal manner to generate the output in a specific structure.

    Right? So we use a library called Pydantic: from pydantic

    06:41

    import BaseModel. Now what is this BaseModel?

    BaseModel is a class that we use to create our schema. Right?

    And how do we create our schema? We simply create a class.

    And I can call it, let's say, LLM

    06:56

    schema, simple, and I will inherit this BaseModel into my normal class. Simple. Now within that, I need to define the schema: all the keys that I need.

    07:12

    I need just two things. One is — let's use the same schema as before so that you can relate — setup, right. And what is the data type I want? Oh, that means we can even control

    07:30

    the data type of this thing as well. Exactly.

    I will say str: I want my setup key to be of string type. Then I will define punchline, and this will also be of string type.

    Make sense? Very good.

    Now this is my

    07:48

    class. Let me just run this model.

    That's it. So this class is created.
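
The class described above can be sketched like this:

```python
from pydantic import BaseModel

# The schema: a class inheriting BaseModel, one typed field per key.
class LLMSchema(BaseModel):
    setup: str       # the joke's setup line
    punchline: str   # the joke's punchline

# Parsing: keyword arguments become validated attributes.
joke = LLMSchema(setup="some setup", punchline="some punchline")
print(joke.setup)
```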

    Do you know what? Now if, let's say, I want to use this particular class, use this schema, I cannot

    08:04

    use anything else. What do I mean?

    Let's say I want to create an object. So I will simply say obj = LLMSchema, right, and I want to create an object of this Pydantic class.

    So I'll simply pass a dictionary. I simply

    08:20

    say, let's say, setup — right, setup will be "some setup" — and punchline will be, let's say, "some punchline".

    08:36

    Okay, perfect. And now let's unpack this dictionary and let me show you the output, obj. So what did I get? I got this Pydantic object. If I want to now use this

    08:51

    object, I will use it like this: obj.setup. I want to use the setup key within this, so I write it like this and I get "some setup". Got it? What is the advantage of it? I told you that this is a data validation library.

    09:08

    If I pass something wrong — let's say I do not pass setup, I pass, let's say, ketchup. So let's say ketchup: "some setup". Now if I try to create this object,

    09:25

    it will throw an error. Why? A ValidationError. It will say: bro, you cannot do this, because I can only accept both setup and punchline. If you

    09:42

    will not provide me these, I will not allow you to create an object. Wow. That means if the LLM generates an output and does not follow our schema, it

    09:58

    will not be allowed through, and that protects us from delivering a broken schema to our downstream workflows. This is an amazing thing.

    This is a kind of data validation check.

    10:14

    Very good. Lamba, does it only check the keys, or does it do something else as well?

    It does the data type check as well. For example, let's say I pass an integer value here by mistake, 123; then it will also fail.

    See validation error

    10:31

    again, because I am expecting a string type. So this is called validation.
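
The wrong-key case can be sketched like this; the object never gets built, so a broken schema cannot leak downstream:

```python
from pydantic import BaseModel, ValidationError

class LLMSchema(BaseModel):
    setup: str
    punchline: str

# Wrong key name: "ketchup" instead of "setup". Pydantic refuses to
# build the object because the required "setup" field is missing.
try:
    LLMSchema(ketchup="some setup", punchline="some punchline")
    bad_key_rejected = False
except ValidationError:
    bad_key_rejected = True

print("bad key rejected:", bad_key_rejected)
```

With Pydantic v2 a wrong type, such as an int where a str is expected, is rejected in the same way.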

    Okay. So data validation is done.

    Now what about data parsing? So this step is called data parsing.

    when you are parsing the values and it is creating

    10:47

    this Pydantic object. Let me just correct it.

    If I say "some setup". So what did I do? I simply passed this particular class my arguments: setup means "some setup", punchline

    11:03

    means "some punchline". Make sense?

    And then only I got my object. Wow.

    This is literally wow. And do you know what?

    Now let me just show you with the LLM. So I will simply say llm_structured_

    11:19

    output, because I'm creating a new LLM object, right? I will take my normal LLM, which was OpenAI, and then I will say .with_structured_output, and now I need to pass the schema.

    And what is the schema? This is the schema:

    LLM

    11:35

    Schema, right? Pass this LLMSchema.

    Perfect. Now if you use this model, this one,

    11:53

    it will generate the output in this particular structure every time, and I will not even tell the LLM to generate the final output in that structure. This is the power of it.
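
A sketch of the wiring: the real model call needs an API key, so it is shown only as comments, and the result is simulated with a hand-built instance to show its shape:

```python
from pydantic import BaseModel

class LLMSchema(BaseModel):
    setup: str
    punchline: str

# With a real model the wiring is (commented out, needs an API key):
#   llm_structured = llm_openai.with_structured_output(LLMSchema)
#   result = llm_structured.invoke("Tell me a joke")
# The return value is an LLMSchema instance, not a raw string, so the
# keys are guaranteed. Simulated here with a hand-built instance:
result = LLMSchema(setup="Why did the scarecrow win an award?",
                   punchline="Because he was outstanding in his field.")
print(isinstance(result, LLMSchema))
print(result.punchline)
```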

    What? One pro tip: whenever you

    12:10

    are defining your class, you should always define a kind of description. Why?

    Because an LLM is not magic; the LLM will use your information as context. Now just tell me one thing.

    If you are a human, if I will say you need to um

    12:27

    create a joke, okay? And I know you are a great comedian.

    You will just create it like this. So let's say you create the joke and I say you need to generate the output in these two key value pairs.

    Do you know what should go in the setup and what should go in the punchline? Obviously not, right?

    So I will

    12:45

    tell it what should go in the setup. I will simply say Field, which I first need to import.

    Perfect. And then I will write Field(description=...):

    setup for the joke, and then punchline

    13:01

    for the joke. Make sense?
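
With the descriptions added, the schema looks like this:

```python
from pydantic import BaseModel, Field

# Field descriptions become extra context for the LLM when this schema
# is passed to with_structured_output.
class LLMSchema(BaseModel):
    setup: str = Field(description="The setup line for the joke")
    punchline: str = Field(description="The punchline for the joke")

joke = LLMSchema(setup="some setup", punchline="some punchline")
print(joke.punchline)
```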

    So these are descriptions. These are very important, because LLMs are all about context.

    Simple. The more precise the context you provide, the better the applications you will be able to build.

    Simple formula. So this part of ours

    13:19

    is done, and let's try to test it. If it is working fine — I will simply say: tell me a joke.

    That's it. I will not tell LLM to generate the output in any structure.

    I will simply say generate a

    13:36

    joke. Let's see what it generates.

    I will simply say dot content. Let's see.

    Okay. LLMSchema object has no attribute content.

    Makes sense. So I can remove this .content.

    Makes sense, because that is a Pydantic

    13:52

    object. That was a silly mistake.

    Perfect. Now what I got?

    I got the response in the form of LLMSchema, which is this class. Then setup, one key and its value; punchline, another key and its value.

    14:08

    Perfect. This is not just a generic string.

    This is not just a generic string; this is a Pydantic class object.

    I can even show you if I say result and if I say type of result.

    14:24

    Let's see what we get. See this is an LLM schema object.

    If I want to use any key-value pair now, I can use it easily, because now I'm sure that the LLM will generate the output with those keys. I will simply say result dot, let's

    14:42

    say, punchline. Perfect.

    Let's see. Perfect.

    That's how you guide the LLM to build better AI agents, because remember,

    14:57

    remember our discussion: our main agenda is to build this kind of workflow. So let's say the output of this node feeds this node; this node is dependent on this one, right — this is the upstream for it.

    So if

    15:15

    it generates something else, this whole workflow will be broken. So we need to make sure that we pass the exact schema that we should pass.

    We should not keep telling the LLM: please generate this, please generate that.

    Build the LLM object

    15:34

    smartly. This is the concept of Pydantic.

    If you are learning Pydantic for the first time, take some time to digest this information, because this is new. If you already know Pydantic, then it will be a piece of cake for you.

    You are simply implementing its, you can say, feature. That's it.

    Make sense? I hope

    15:51

    like it was easy. Now let's talk about the third type which is not very recommended but people still use it.

    Let me just tell you what that is. It is called TypedDict.

    Okay.

    16:10

    Using TypedDict. Perfect.

    Now what is TypedDict? TypedDict serves exactly the same purpose as Pydantic.

    Okay, exactly same. Then why are we using two things?

    16:27

    Okay. Okay, let me tell you.

    So this is exactly the same as Pydantic. But Pydantic is like, you can say, the principal of the school.

    Okay. And TypedDict is like the class teacher.

    16:44

    Wow. Lamba, now solve this puzzle that you have just created.

    Okay. Principals are very, very strict, right? They will not listen to you; they will simply punish, like in my school. So they will not listen

    16:59

    to you, they will not ask why you made a mistake; if you made the mistake, that's it. Class teachers, on the other hand, if they are good, will listen to your mistake; even if you make a mistake, they will simply say: okay,

    17:15

    do not do it next time. Exactly the same thing happens here with Pydantic models and TypedDict. So you define the schema, for sure.

    Let me show you. So I will import TypedDict from typing: from typing import

    17:34

    TypedDict. Okay.

    And here I will create the class the same way I created the class there. Let's say LLMSchemaTD, where TD means TypedDict, and I will inherit the TypedDict class the same way. Everything is the same; then again the same things — name, age, or

    17:52

    no, not name and age; let's say setup: str and punchline: str. Same, right? See, exactly the same. Now let me just show you: let's create this object. Okay, what will happen if I create an object of this? obj = LLMSchemaTD, and

    18:16

    LLMSchemaTD, and if I pass a dictionary with the same things: setup, let's say "some setup", and let's say

    18:32

    punchline, and let's say "some punchline". Right. Perfect.

    Let's just sum it up. If I do this, what will I get?

    I will get

    18:47

    a dictionary, not an object. A dictionary.

    Simple dictionary. Okay.

    If I want to use any key, I can use it like obj["setup"], or I can use the .get option as well. A simple dictionary.

    This is not an object. Simple dictionary.
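
The TypedDict version in code — note that calling it just builds a plain dict:

```python
from typing import TypedDict

# Same schema as a TypedDict: calling it produces a plain dict.
class LLMSchemaTD(TypedDict):
    setup: str
    punchline: str

obj = LLMSchemaTD(setup="some setup", punchline="some punchline")
print(type(obj))             # a plain dict, not a special object
print(obj["setup"])          # square-bracket access
print(obj.get("punchline"))  # .get() works too
```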

    19:03

    Okay. And do you know what is the advantage?

    Why did I just say it corrects your mistake? Let's say you wrote ketchup here.

    What happened with Pydantic? It failed, right?

    Let me show you. Here it won't fail.

    It will simply say ketchup.

    19:19

    What? What?

    That means you made a mistake. You wrote catchup.

    But still it will not fail. Why?

It will create the output using your mistake, and it will only flag the error on the type-hinting side. Basically,

    19:36

    these are for type hinting. Type hinting.

So when you're typing something wrong, you will get the error in your editor while typing it. It is not a runtime error.

    It will not fail your code. Okay.
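The class-teacher/principal contrast can be shown side by side. This is a minimal sketch assuming Pydantic v2 is installed; the deliberately wrong catchup key mirrors the typo from the video:

```python
from typing import TypedDict

from pydantic import BaseModel, ValidationError  # assumes pydantic is installed

class JokeTD(TypedDict):
    setup: str
    punchline: str

class JokePD(BaseModel):
    setup: str
    punchline: str

# TypedDict (the class teacher): the wrong key is only flagged by your
# editor / type checker; at runtime Python happily builds the dict.
bad = JokeTD(setup="some setup", catchup="oops")  # no runtime error
print("catchup" in bad)  # the typo key is silently kept

# Pydantic (the principal): the same mistake fails at runtime, because
# the required 'punchline' field is missing.
try:
    JokePD(setup="some setup", catchup="oops")
except ValidationError:
    print("Pydantic rejected the bad keys")
```

So the TypedDict version only helps while you type; the Pydantic version actually stops wrong data at runtime.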

    So this is like a class teacher. She will say okay do not make a mistake

    19:52

    next time. But if you made a mistake in front of your principal, he will say come here bro.

    Come here. Come to my office.

And I used to get those announcements. I was in, I think, class 8th or 9th or 10th, and we had a

    20:08

    mic we had like uh cameras in all our classes. Okay.

And if someone was doing something crazy in the classroom, the principal used to watch everything on the cameras, and if you made a mistake, bro, you were gone. So, I would say, every

20:25

week an announcement used to happen across the whole floor: Lamba, come to the principal's office. I was like, oh bro.

    So yeah crazy stuff but I I like I used

    20:42

    to go to the principal office with all of my friends. I never went to the principal office alone.

Never alone, because whenever I went there, I simply took names.

See, I was not alone. I was with this person, this person, this person,

    20:58

this person and this person, and then all of those people would also come down to the principal's office, and then we'd have a group discussion together. Okay.

    Okay. Crazy stuff.

    So now I was saying that we can create this object like this. Simple.

    Now you will say

    21:15

Lamba, then what is the benefit of it? There is a benefit.

We can still use this particular class to provide the context, because we simply need to provide a context to the LLM, right? We simply need to provide a context.

    I can simply say llm

    21:32

structured, or let's say llm_structured_td, right, and I will simply say llm_openai.with_structured_output. Perfect. And this takes my LLMSchemaTD. Now,

    21:50

if I ask the same question here, I will get the answer. I will simply print the result. Perfect.

llm_op? What is llm_op?

22:05

Where is it? Oh, llm_openai.

Uh, I think we just need to use the TypedDict. Perfect.

    22:20

So here you will see that you still get the answer in the form of a dictionary, not in the form of a Pydantic object.

    Wow. Very good.

    Make sense? Make sense?

    Very very very good. But what is the advantage?

    If

    22:38

let's say your workflows are not very error-prone and you are not heavily dependent on exact keys; then you can simply use TypedDict just as context. But when your workflows are very strict and you do not want the model to create any random key on its

    22:54

own, you can use Pydantic. So it totally depends on the use case. And do you know my philosophy? If you're using a specific schema, and you want it to act as a strict guide for your LLM, just go with Pydantic;

    23:09

go with Pydantic for the final output. And if you just want to provide some context to your LLM, then simply go with TypedDict. Simple. I hope it makes sense when we should use Pydantic and when we should use
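That decision rule can be sketched like this. The schema part is plain typing; the model call itself needs the langchain-openai package and an OPENAI_API_KEY, so it sits behind a lazy import, and the model name here is an assumption:

```python
from typing import TypedDict

class JokeTD(TypedDict):
    setup: str
    punchline: str

def structured_joke(question: str) -> dict:
    """Ask the model and get the answer shaped by JokeTD.

    Lazy import: this needs langchain-openai and an OPENAI_API_KEY,
    so the schema above stays usable without either.
    """
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
    structured_llm = llm.with_structured_output(JokeTD)
    # With a TypedDict schema this returns a plain dict; with a Pydantic
    # class it would return a validated model instance instead.
    return structured_llm.invoke(question)

# structured_joke("Tell me a joke about data engineers")
```

Swapping `JokeTD` for the Pydantic `BaseModel` version is the only change needed to get strict validation on the final output.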

    23:25

TypedDict. Makes sense? Okay, so now let's try to cover this important topic, and I would say this is one of the most important topics. What is this topic? Chains. And obviously we are studying and

    23:42

learning LangChain, so as you would have already guessed, this is really, really important: chains. Basically, what are chains, or what is a chain in LangChain? If you are a data engineer, you can better relate this concept to

    23:57

something called a pipeline. Being a data engineer, chains are very similar to pipelines.

    24:15

And what are pipelines? Pipelines are nothing but a sequence of tasks.

If you define it in one line: a sequence of tasks. So let's say, where is Rahul?

    Rahul will become so popular now. So let's say this is a data engineer.

    Okay. And now this data engineer needs

    24:32

to build the entire pipeline. So what does this pipeline look like?

    Let's say this is the task number one maybe then task number two maybe like this maybe like this maybe

    24:48

    like this like very simple pipeline right and if you're data engineer you would know like how to build this. So let's say this is one, this is two and then this then this then this very simple pipeline right

    25:05

Basic pipeline. So now, what do we need to do as data engineers? Let me first of all remove this.

So now let's say you have this pipeline.

    25:20

In data engineering language we just call it a pipeline, and in AI engineering language, or basically in LangChain language, we just call it a chain. Okay, a chain. Remember one thing; this is my personal

    25:35

tip. When we create pipelines, we create small pipelines, right? Listen to me carefully. So let's say this is only one pipeline, and it's a very simple, straightforward pipeline. And if

    25:51

we want to create a complex pipeline, what do we create? We create dedicated nodes.

So let me help you understand this concept with our previous example. So let's say we have this particular chain,

    26:06

    right? So now I am talking about only this part.

    See so let's say you want to build something like this. What is this thing?

    This is a pipeline but smaller pipeline. Perfect.

    What is this pipeline in itself? Smaller

    26:23

    pipeline. What is this smaller pipeline?

And when we combine all of these things, this becomes our parent pipeline, or basically the workflow. Makes sense?

    So what we are learning currently? We are learning about chains.

    So chains can be complex but if you want

    26:40

to become, you can say, an efficient developer with LangChain, do not create long and complex chains; always create smaller chains and then combine those chains. Are you getting my point?

    So let's say one chain like this

    26:57

Okay, one chain like this: you can create only one node for this whole chain, only one, and then you can connect it. Or let's say you have a similar chain like this here; you can connect it. Makes sense? So instead of creating the whole thing in one go, you can have

    27:14

this one chain first of all, then a second chain, and then you can connect both. Do not put everything within a single chain; it will be very hard to debug. This is my personal recommendation. It's not a hard-and-fast rule that you have to break down the chains or that you cannot build long

    27:32

    chains. These are like best practices.

    Again, the choice is yours. If you are a developer, you can just do whatever you want.

    But this is the concept of chain. Make sense?

So this sequence of tasks is basically a chain. Makes sense?

    Very very good. Now you will say an

    27:48

Lamba, in the data engineering world we use maybe Airflow to build the chain, basically to build the pipeline; we also use Azure Data Factory, and we also have, let's say, AWS Glue and all those things. What do we use in LangChain? Because we

    28:05

have just covered how we can make wrappers on top of our agents, prompts and everything. The good answer, or you can say the good point, is that we can do everything within LangChain itself. You do not need to combine it with a third-party tool.

    28:20

    We have like inbuilt functions available. Let me just show you.

    Let me just create a new folder. Let's call it as chapter 2.

    And let's create our first notebook. And I'll simply say basic chain or basically first chain.

First chain: first_chain.ipynb.

    Okay. Perfect.

    28:38

    Now I have already talked about what are chains. Chains in lang chain.

    Okay. Perfect.

    So now let me first of all import that boilerplate code. So this is the code.

    Let me just bring

    28:54

    it here. Perfect.

    And let me just select the markdown um this kernel and make it as Python. Perfect.

    Let me run this. So this is our boiler plate code that we

    29:12

have just copied. Now let's wait to see if our llm_openai is created, because see, this Antigravity is really new, and I have been using it for, I think, almost a

    29:28

    month now. Um I started like using this last year and I personally feel like it's a great tool.

    Obviously it is free. Second thing is it has like so many bugs like sometime it takes so long to just run some cells and we have to just close it

    29:43

    and open it again. So there are some bugs and that's why they have just kept it for free.

    I don't know. Okay.

See, now it is also taking so much time; it should not take this long. If I were running the cell in my VS Code editor, it would be done by now.

    VS code is like very stable and very good. So yeah,

    30:00

    let me just close it and open it once again. Let's see.

So I have closed Antigravity and run this again, and now it is done. Sometimes it takes this much effort.

Okay. So our llm_openai instance is created.

    That means we can

    30:16

    now just call the models. Now let me just show you a basic um you can say chain that we'll be creating and this chain is very cool.

So now you know how we connect with the LLM, and how should we connect with the LLM? We should

    30:33

    always create a prompt first of all right obviously. So we will be creating a prompt.

    So let me just write um first chain right. So in this first chain I will first of all create the

    30:48

    prompt. And how do we create the prompt?

You should know by now. So now I will simply say prompt template, because first of all we create the template, right?

    31:06

prompt_template equals ChatPromptTemplate, and let me just import it if it is not imported. Uh, no, it is not imported.

So let me just import it: from langchain_core.prompts

31:23

import ChatPromptTemplate. Okay, perfect.

So we have ChatPromptTemplate.from_messages; the system message is "You are a helpful assistant", simple, and the human message is just {input}, because we do not know what the input will be. Okay, and we are creating a

    31:40

chain which can take any value, basically anything. It is not just a joke generator or a fun fact generator; it can take anything. That is why I'm not hard-coding anything in the human message; I'm simply saying input. Sorted. Good. Now this is my first task.

    31:58

    Okay, let me just write it task one. Task one prompt.

    I want to create task two. Let me just first of all run this.

Then I will say task 2. And this task two will be the LLM, right?

    Because I want to

    32:15

send this message to the LLM. So I'll simply say llm_openai.

    Okay, let's let's actually automate this as well. So we already have llm open a here.

    So I will simply repeat this step. That's it.

    32:31

    Perfect. So this is our LLM.

    Third step will be I want to use a parser. Now what is parser?

If you remember my previous notebook, you will see that every time I use something called content. If I want to fetch

    32:47

only the content value, see: .content, .content, .content. Every time we see .content, because if we do not write .content, we will get the unnecessary things as well, such as parameters and, let's say,

    33:03

token counts and all, and we do not need to show that, right? So one option is we can use .content, or we can use a built-in class in LangChain which is called StrOutputParser. So what will that output parser

    33:19

do? It will take your generated response and fetch only the content out of it. Behind the scenes it is doing the same thing, writing .content, but it automates that so you do not need to write it again and again. So I can simply say from langchain

    33:38

    dot I think parsers uh let me just check here what did we use output parser let me just check the class for it let's try using first this

    33:57

.parsers import StrOutputParser. Yeah, let's try this.

    No, let me just check the library for this. Um, string output parser.

    This is our lang chain.

    34:14

StrOutputParser, LangChain. What is the library, bro?

What is the library? Yeah, it's called langchain_core.output_parsers.

    Okay, makes sense.

    34:32

    output parsers. Okay.

    Okay. Okay.

    Perfect. Yeah.

So, we have a class called StrOutputParser, right? See, like this.

    Now, I can just use this. How?

    Let me just show you

    34:51

    task three string parser. So, I will create an instance of this function.

I will simply say string_parser equals StrOutputParser(). Simple.

    So how many tasks

    35:09

    have I created? Three.

Perfect. What if I were just manually running all these tasks?

    I know how to run this. First of all, I will invoke this template by feeding the value inside this.

    Then I will pass this prompt template into this

    35:25

into this llm_openai. Let me show you.

So first of all, for this first chain, let me just write it for you: manual invocation.

    35:44

    So manual invocation will look like this. So first of all I will simply say prompt template.

    So I will say template equals prompt template dot invoke some

    35:59

value, let's say. And what is the key? input. Okay: what is the capital of France. Then I will say llm.invoke with the

    36:15

template, and then the final output: let me just store it in result, and then I will say result.content. Right?

    This will be my manual invocation

    36:31

    of every single step. Just imagine that you want to run this particular chain and you are just manually invoking every step.

That's not how you build data pipelines, no?

So how can you create it properly? I will simply comment this out, because we are not manually running it.

    36:46

    If you want to just run it, you can. But I am not running this at all.

    Okay. So I will say chain invocation.

    So I can invoke it through the chain and in order to invoke this in the form

    37:02

of a chain, I will be creating a chain, obviously. So I will simply say chain equals whatever tasks you want to use, and there are basically two ways to create chains.

I personally like the first one. So what do we need to write in this particular method?

    You simply need to use a pipe

    37:18

operator, which comes from LCEL, the LangChain Expression Language. So what do I need to write?

    I just need to display or basically write the function in the order I want to run it. So first of all I want to run prompt right prompt template.

    Then I will use pipe operator.

    37:35

Then I will simply say llm_openai, my second function, a task basically. Then another pipe operator.

    Then I will simply say string output parser basically string parser. Perfect.

    string parser. Perfect.

    So this is my chain now. And

    37:52

you know what? I just need to say chain.invoke, and I can pass the dictionary input.

    Let's say what is the capital of France? Whatever I want to ask.

    Okay. So

    38:09

    now do you know what will happen? It will first of all invoke this thing first function.

Then it will invoke llm_openai, and then it will invoke string_parser automatically. Let me show you the result.

    38:24

    Let me just run this. See you are just getting the final answer automatically.

    You didn't run this manually. You didn't run this manually.

    You didn't run this manually. You are simply getting the final response as is.

    The capital of France is Paris. Really?

    Yeah. Okay,

    38:41

    makes sense. Now you will have so many questions in your mind right now.

I'm so sure, because I also had so many questions when I was learning LangChain. How is this flow working, how is the input going in, how is the data flowing?

I know, I know. Wait, I will explain everything to you. So how

    38:57

is everything working? So let me just open this thing.

    So let me show you how the things are happening. So first of all we have let's say three tasks, right?

    So first of all we have um template

    39:16

right. First of all we have the template, then we have llm_openai; let's just call it LLM. Okay, then we have the string

    39:34

output parser. Perfect. So we have this chain. Simple, sorted. And you know what will happen? First of all, Rahul will send the input. Now, the input can

    39:50

    be anything right so input will go like this um input and he asked what is the capital of France

    40:05

Just focus here, because this is very important and this will cover all the fundamentals. And Rahul, so sorry, I'm just removing your name because I want to put the input there. Perfect. So this will go here as the input. Simple. Now this template,

    40:24

this template will receive this as input. Okay, let me just write it here: input. So this is the input.

    40:39

    Okay, let me just change the color as well. Let's say red.

    Okay, perfect. So, this is the input.

    Okay, what is the capital of France? Because this requires this dictionary.

    If you just look at the template, how do we invoke it? We invoke it like this, right?

    Like this. Perfect.

    40:56

Now, what will we generate? What will our output look like?

The output will obviously have the template object: human message, system message. Remember,

    41:13

we discussed it in the fundamentals, here in the messages section, how we use the templates. Remember, like this.

    So now I think we discussed here as well like

    41:29

    how do we just create the prompts here? See when we invoke it what do we see?

So now what will happen? This will generate the system message, human message, something like that.

    41:45

    Simple. Let's say it generated.

    Let me just invoke it. You will say like, bro, we forgot blah blah blah.

    So let me just run this particular thing. And perfect.

    This ran fine. And let me just run this.

    And let me

    42:00

    just run this prompt thing. And I will not invoke this function.

I will simply show you what the ready prompt looks like. The ready prompt content is very important.

    Now let's say what is the capital of France.

    42:18

    Now let me show you the ready prompt. What is the capital of France?

    Okay this is the thing. This is the thing.

    This is the

    42:33

    ready prompt. Right?

And if you are using, let's say, a chat prompt template with multiple variables, then obviously the message will look a bit different. Like here, if I just invoke this as well, just for you,

    42:49

I can show you the ready prompt; let me just fill it in here as well: topic, let's say honey, and tone, let's say funny. So this is the value that I will see. See, the returned value:

    43:06

messages equals this, with whatever values we have here. Perfect. So I'll have values like this. Makes sense? So this will be my output. Yes, this will be my output, and obviously

    43:22

it is quite big, but yeah, this will be my output. Makes sense? Now I have this output, and this output will actually flow from here and go there as input.

    43:40

    Okay. Okay.

It will go here as input, or I can also move it here so that you can understand better. So this will be my input, and this will go here.

    This will be my input. So output of one task

    43:56

    will exactly be the input. Why it is so important?

    Why I'm stressing so much here? Because the next topics are dependent strongly dependent on this concept.

So this output will become my input. Whatever it is, we cannot tweak it, because it is automated.

Whatever it generates, we have to accept it. Next

44:12

comes the LLM node. Makes sense?

Now, this LLM, what will it generate? It will generate its own content, like an AI message and so on.

And this will become its output. This output will now become the input of the string

44:30

output parser function. Makes sense?

This will become its input. Now, what will this parser generate? Only the content, because it will simply say .content; we already know what it will do.

So this is basically the flow of it, like how does

44:45

it work. Makes sense? So I have just rearranged this with the values so that you can better relate to it whenever you refer to these particular nodes.

So I hope now you know what the intent is. The intent is that you need to be very careful while creating the chains,

    45:01

because the output of one node, basically a task, will be the input for the next node, or basically task. If you are expecting something here and your previous task is generating something else, then it will not work, because it will

    45:19

simply mess up the requirements; let's say you are requesting a particular input variable here and you are giving it something else. It can actually break it.

    So this was a very simple example

    45:35

that's why it worked extremely well and we didn't need to make any changes. And by the way, we were just talking about having two different ways to run it. Yes.

    So we can also run it with the help of runnable sequence. Now what is this runnable sequence?

Um, this is basically the kind of class that we

45:53

can use in order to create the chains and run them. I personally like the pipe operator, but if you want to know, I can show you this as well.

So we'll simply say, optional: from langchain_core

46:10

.runnables, I guess, import RunnableSequence. So you can simply say chain two, or let's say chain one, equals RunnableSequence, and to this we just need to pass all of the functions,

    46:27

functions meaning all of the tasks that we have created. Makes sense? Makes sense.

    Makes sense. Okay.

So now I can simply pass, to RunnableSequence, let me just say prompt_template and then llm_openai,

    46:50

    and then we have output parser. Perfect.

So we can also write it like this. llm_openai is not defined? Wow.

llm_openai, output_parser. Very good, very good,

    47:06

very good code awareness from whatever model this Antigravity is using. So, see, we also got this value. But I'm a big fan of using the pipe, because it makes the chain more readable; we can easily define the dependencies, and it is

    47:22

very similar to defining dependencies in Airflow; there we use the >> operator, and here we are using the pipe operator. It's fine.

    So yeah, choice is yours if you want to use this one because some people would like this one. So I'm not uh like restricting you to just use one method

    47:37

    but you have the options available. Okay, perfect.

    So this was a very basic example of chains. Now in real life we build chains which require some custom functions as well.

    What do I mean by custom functions? So basically let's say you want to build a chain where you want

    47:54

to basically add some functions in between. For example, we have this chain: template, LLM, parser.

The parser means the final output. Now I want to add one more, you can

    48:10

    say task, right? One more task and this task will pass this value to a different LLM.

Okay. And what will this LLM do? This will behave as a kind of

    48:26

    post generator. Let's say you want to first of all write a joke, right?

And then you will simply get the message and then get the content out of it. Now, in the second task, or basically the second chain, what you

    48:42

can do, or basically in the second part of the chain, is send this particular output to a different part, where it will evaluate it and shape it into a form that you can post on your social media.

    48:57

    Let's say you just want to create a post on LinkedIn for this particular purpose. So what you will do you will then add a function in between.

    Why? Why?

    Let me show you why. Because if you do not do that,

    49:14

    if you let's say, let me just delete this. Let me just delete this because you already know what the output of this.

    So let's say you want to send this thing to the LLM and how do you send it? Obviously, we use this node, right?

    We

    49:29

we use this kind of node, or basically task. Now just tell me one thing.

    If I want to send this message to LLM, what's the best practice? Obviously using template, right?

    So let me just bring template here first of all before even LLM. So

    49:47

    when this output will go to the input, does that make any sense that I'm just sending this input like this? Yeah, it makes sense.

    Does it make sense? Think about it.

    Like why did I explain everything to you regarding the output?

    50:04

    Because now here the output is just generating the text. And how do we pass the input to any template?

    We pass it in the form of dictionary. Oh, so where is the dictionary?

    Where's the dictionary?

    50:19

    Makes sense. And where's the key for that dictionary?

    Because we just create key value pairs, right? So we need to create one node which will be called as let's say custom node or basically custom task.

    50:35

Okay, I will simply say custom. So what will this custom function do?

    It will take the output of the previous task. It will modify it in such a way so that we can pass this value to the

    50:50

    template in the form of a dictionary. So this will be dictionary maker.

    This task will be dictionary maker. Now can we just write our custom Python function as well?

    Yes, you can even convert your custom Python functions into the form of

    51:05

    tasks. Wow.

    And how do we do that? It's very simple.

You just create a Python function, and then you use one wrapper. There's also a decorator version (the @ one), but forget about that; just use the simple one, RunnableLambda. You

51:21

can just use RunnableLambda to convert any function into a runnable. Runnable means a task like this.

    Wow. Yeah.

    You can even convert this thing into a runnable lambda. Okay.

    And do you know what? You can even

    51:36

convert this whole chain into a runnable. I will talk about it.

Don't worry. But first of all, let's cover these custom functions.

    And I hope you got the requirement, right? So, let me just write it here.

    Now, we need to pass the input

    51:52

and the input is here; let's say the input is here. Perfect. Now this input will send this message to

52:11

the template and then to the LLM, right? And then at the end we will use the parser, and this parser will send the result back to the user. Makes sense? This is our workflow that we want to create, basically our chain.

    52:28

    It's a simple one. Don't worry.

It looks a little bit tricky, but it's very simple. Makes sense?

    Okay. Let's try to build this and you will learn like how can we just create this kind of task.

    Let me just show you. So let's try to create this particular

    52:46

    pipeline chain whatever you want to say. And I have just mentioned this quick note so that you will understand like why are we creating this custom task and what's the need of that custom task.

    Make sense? Okay.

    Perfect. So now in order to do that I will simply create a

    53:02

new notebook, and I will say chain_with_custom_runnable.ipynb. Okay, perfect.

    So now I'll

    53:17

    simply say let's try to import it. What do we need to put in the heading?

    It's fine. Perfect.

    So let me just run it

    53:37

    and you got the requirement right. We need to just create that particular chain.

    So I will simply copy all the tasks of the chain first of all and I will say

    53:52

    chain with custom runnable. Okay.

    So here we have chain with custom runnable and this is our task number one. And what is this list?

    Okay,

    54:11

    perfect. And this is our task number two.

    And this is our task number three. Perfect.

    And

    54:28

    perfect. Now we do not need to invoke it because we want to create more tasks.

So here I will write task number four, which will be my custom task: a custom function, or basically a custom runnable.

Runnable is just a fancy name. A runnable is something which you can run in

    54:43

    the sequence. Okay.

So I'll simply write def, my custom function. I will simply say dictionary_maker.

    Perfect. Dictionary maker.

Now, the input can be text in string format, and it will

    54:59

    generate dictionary. Okay.

Now, whatever I want, I simply return it like this: return a dictionary with a text key.

    Okay. And this can be customized.

    This will be

    55:14

    as per your requirements. This will be as per your this requirement.

Like, what kind of key are you expecting in the template, right? We are simply expecting the text key.

    That is why I am using text. If you want anything else, you can just customize it.

    I will simply say text. Simple.

    This is my dictionary.

    55:32

Perfect. Now, in order to convert this into a runnable, I will import something.

I will say from langchain_core.runnables import RunnableLambda. Perfect.

    So now this is not converted

    55:48

    into a runnable. Now this is just a function.

If you want to run it in a sequence, we need to convert it into a RunnableLambda. So I'll simply say dictionary_maker_runnable

56:04

equals RunnableLambda, and then I will pass my function. Now I can use this object to create a chain.

Are you getting my point? We cannot directly use a plain Python function in the chain.

Basically, we need to convert it, or apply a wrapper on top of our function, so that the chain in

    56:21

LangChain will treat our function as a runnable. Simple. Makes sense?

    Okay. This is our task number four.

    Let me just run this. Perfect.

    Now the task number five is very simple, very similar basically and simple obviously because you are on this channel. So this is like template

    56:40

for the post that I want to publish, right? So I'll simply say prompt, or you can say prompt_post, equals ChatPromptTemplate,

    56:56

and here I will say the system message: you are a helpful assistant; okay, or I can also say you are a social media post generator. Simple. Okay, and {text}:

    57:14

    Create a post for the following text following text for your favorite social platform. Let's say for LinkedIn, right?

    So, system and human message. Make sense?

    Now, this is my uh prompt post.

    57:32

    So, now if I want to invoke it, how will I invoke it?

    PromptTemplate. Let me just import it if it is not imported.

    Uh, where is that prompt template? I think we have the prompt template here.

    57:48

    Oh, I see what the issue is. We were using PromptTemplate.

    We need to use it like this: ChatPromptTemplate.from_messages.

    58:03

    Perfect. So now let's say this is my prompt.

    If I want to invoke it, obviously I'll be writing like this. Um let's say prompt post this auto suggest some sometimes irritates

    58:18

    prompt post dot invoke and then I will pass a dictionary right dictionary will be looking like this text and let's say hello world simple so I can only invoke

    58:36

    it with this dictionary If I try invoking it like this without this dictionary and if I simply pass this text, this will work. But we need to just make sure that text is replaceable here because if it it works because we just have one variable here.

    If we have

    58:51

    multiple variable, how it will know like which variable will take which value. So in that particular scenario, we have to mention this dictionary otherwise it doesn't work.

    So that's why I'm creating that particular function called dictionary maker. And then, once we have this prompt template ready, we can simply create another task, which will be

    59:07

    llm_post to generate the post, and this will be my llm_openai, as we all know, right, this one,

    59:23

    and I want to use the same model. Perfect.

    Perfect. This is my task six.

    I could even skip writing this, because we can just use the same object, right, llm_openai, but I'm simply writing it again so that you can understand it. Otherwise, see,

    59:39

    it is the same thing. Why am I writing it here? So that you can actually understand and relate to it more. And our next, or basically last, task is task seven:

    59:56

    the output parser, as you know, the string parser. And we also do not need to define it again because we have already defined it. But I'm defining it again for you so that you can understand it.

    Perfect. So let me just run all the things now.

    And

    00:13

    now let's create the chain. And our chain is here.

    So now, how do we create the chain? You already know LCEL, the LangChain Expression Language, right?

    So I'll simply say chain equals, and all the tasks in

    00:31

    sequence. So the first task is the prompt template, and then we have llm_openai.

    Then we have the string parser.

    00:48

    Then we have the custom runnable, dictionary_maker_runnable,

    which is the

    01:03

    runnable instance of our function. Then we have prompt_post again, right?

    prompt_post. Then we have llm_openai again, because this prompt will go to the OpenAI LLM,

    01:19

    and then the string parser. Make sense?

    So this is our chain. Now let me just hit this: "What is the capital of France?", and let's see what the output is.

    The output should be a post that we can just post on LinkedIn, right? And this whole

    01:35

    chain is now running. See: "Quick geography refresher: the capital of France is Paris, and it's more than a landmark-filled city."

    So this has created the entire post for me, and it has done everything. It has done the same thing that I have just demonstrated here.

    So it will simply create an answer, like, let's say, a fun fact about

    01:52

    honey, or here it is just answering the capital-of-France question. That will go as the input to this custom function.

    So what will it do? It will create a kind of dictionary. Simple. Then this dictionary will take this output and send it to the

    02:08

    template, and the template will fill in the variables. Simple. Then it will go to the LLM, then to the parser, and then it will come back to us.

    This is the chain that we have just built, and that's how you can also build your chains. Make sense?

    Okay. Very good.

    02:27

    Now let's talk about this important concept called parallel chains. Because, see, we already know the concept.

    We already know the fundamental thing, right: linear chains, as we have just discussed, and how we can create a linear flow. But obviously, as you're building

    02:43

    more complex and more advanced flows or chains, you will need this thing called parallel chains. What do I mean?

    So let's say where is that Rahul? Okay.

    So let's say

    02:58

    this is a developer. Okay, this is Rahul.

    Yeah, same person. This developer now needs to create a flow which looks like this.

    Let's say it has a kind of template, okay, which is the

    03:14

    starting point for every chain, I would say. And after this it has an LLM, for sure. Perfect. Now it maybe has a parser; the parser is optional, it's not a very big thing. So let's say it has two nodes, right, two tasks. But now the thing

    03:32

    is going to get interesting. Okay, how? Now we do not have a linear step, we have a parallel step. What do I mean? So let's say now

    03:47

    we need to create this particular task. This can be anything. Just to take an example, let's say you want to analyze the summary of the data that you're processing, and it can be anything.

    04:04

    Let's say you are getting a CSV file and you are just processing the data. You are summarizing the data, and that's it.

    Cool. Okay.

    Now on the basis of that, you need to do two tasks. One, you need to draft an email, or basically send an email, to your manager regarding the

    04:19

    summary that you have received, because that is a kind of financial report, right? You need to do that thing.

    You need to send an email to your manager. And second, you also need to save that summary in your documentation folder as well,

    or let's say you want to send a message on Teams

    04:35

    to your group chat. Make sense? So in short, I want to perform two parallel things. Okay, and whenever we perform anything in LangChain, we have a series of steps, right? So that being said, this will be a kind of

    04:53

    chain in itself, right? Let's say this is chain number one, a parallel chain. And just for simplicity, let me just add an arrow. So let's say this is a task. This is the task.

    Okay, perfect. So this

    05:08

    is like chain tasks and this is obviously the initial chain that was started. Perfect.

    So now we have this chain number one. Perfect.

    And I can just create a box. Perfect.

    So this is my

    05:24

    chain number one. And similarly I have one more chain for the teams messages as well.

    One is for um email, one is for teams message or you can say one is for email and second one is for personal use. You want to draft it, you want to save it or you want to send it to let's say your director or whatever.

    Basically

    05:41

    you want to perform parallel task. So what will happen here?

    You will be creating these two parallel chains. Parallel chains.

    Wow. So this will be my chain number one.

    05:59

    Chain one and this is chain two. Perfect.

    Now you will say, is it a difficult thing to do? Please tell us beforehand.

    Okay. What if I say it is a difficult thing?

    What will you do?

    06:14

    Nothing. We will just be prepared.

    Okay. So the thing is, it is very easy.

    You first need to understand the concept. You have understood the concept.

    Now it is easy. Concept is everything.

    Concept is everything. Perfect.

    So now let's try to see how we can build this thing. And just to demonstrate it, we will

    06:30

    use an example. And yeah, let's quickly create a clone of this.

    And let me just say three and

    06:45

    parallel chains. Perfect.

    So this cell will remain the same. Let me just run this with this virtual environment.

    Perfect: chain with custom runnable. So, as you know, this is our first task,

    07:02

    if you look at this template. Now you will say, obviously, we do not have any kind of data for now.

    We will discuss those examples as well. But for now, let's say I want to just do this thing, right?

    So first of all, just for understanding,

    07:18

    we will use a hypothetical example. Then we will also look at some real examples using data engineering. But first I want you to understand the thing, okay? That is why I'm not directly pulling in the real-world example; you would be totally lost. First let's start slow, right, and then we're going to

    07:35

    just grow. Okay, so now the requirement is very simple. This template, what will it do? It will simply ask the LLM to summarize a movie, or just ask the LLM to write a quick summary about a

    07:52

    movie. Right?

    So: "You are a movie summarizer." Okay?

    And here I will simply provide the input as "Please summarize the movie in brief", because we

    08:11

    do not want to spend many tokens. And this is the input, or let's say the movie name, whatever you want to call it. Okay, input.

    So this is my prompt. Okay, this is task number one.

    Perfect. Let me run this.

    This is my task number two. Perfect.

    And this is my task number

    08:27

    three which is like string output parser which is like very common. Perfect.

    Now here comes the thing where we need to create the parallel chain. So let me just remove all of these cells because these are not required at all.

    Perfect. So now

    08:43

    now there are two options. Okay.

    Now there are two options. I will show you both options, because it's my responsibility to give you all of the knowledge.

    Option number one is to create a dedicated chain for this

    09:01

    task. A dedicated chain means it will run steps 1, 2, 3 and so on.

    Okay, like it will have its own tasks and it will do everything on its own. Creating a dedicated chain makes sense.

    It is very handy, not a big deal.

    09:19

    The second option is very simple, much more scalable, and easy to manage as well. What is that option?

    See, if we have just a few tasks, it is easy: we just create

    09:34

    a dedicated chain and it's fine. But if we have multiple tasks, we can instead create a function. Remember I told you that we can also create a function for the entire chain, right? So we can

    09:50

    create the function for the entire chain. Okay, so here is what I will do, so that you can learn both ways: for chain number one, I will use the normal chain method and create all the tasks for the chain. For chain number two, I will not create the chain that way; I will only create one

    10:06

    function, and inside it I will just write the code for every task. That way you will learn both ways. Simple. Okay, let's do it. So first of all, let me just use method number one. I will say,

    10:23

    a markdown cell: parallel chain one. Perfect: chain with parallel

    10:40

    chains. Perfect.

    So, parallel chain one. What will be step number one,

    basically task number one? So I'll simply say task one, which is obviously the prompt,

    10:56

    and what type of prompt is this? So let's say we have the summary of the movie.

    What are we going to do after that? Once we have the summary of the movie, we will ask the LLM to create a

    11:14

    LinkedIn post, right? And in the second chain, we will simply say create an Instagram post, because the platforms are different.

    Um, the tone for the two platforms is different. So let's try to do this.

    Okay. And in the real world, what will we be doing?

    Sending an email to the manager and

    11:29

    sending a Teams message to a director or whatever. Okay.

    Okay. But for simplicity, I'm just taking a hypothetical example so that we can quickly learn the thing that we're trying to do, because once you know how to do it, you can implement that solution anywhere.

    Right. So let's say I

    11:45

    will write the prompt. Let's say linkedin_prompt.

    Perfect. "You are a helpful assistant."

    Not helpful, actually. "You are a movie summarizer", or I will say "You are a

    12:00

    post generator." A LinkedIn post generator.

    Perfect. And I will write this input:

    "Create a post for the following text for LinkedIn." Perfect.

    This is my task number one. Let me just create task number two which is

    12:17

    LLM. And task number three is this one.

    Perfect. Make sense?

    Simple. This is my chain number one.

    Let me just run this. Perfect.

    Three things. Perfect.

    This is my parallel chain number one. Okay.

    Now if you will

    12:36

    closely observe, and we have just covered this thing, that's why I covered that topic in detail: this is a kind of

    12:52

    input that depends on the template, right? What does it need?

    It needs a dictionary, right? It needs a dictionary.

    So, what we need to do is add one small step here, a custom task

    13:08

    which will create a dictionary for us. Make sense?

    That's why I covered that topic in detail just previously. So, I will add one task here.

    I could add it inside parallel chain one, but it would be a redundant task, because we would need to do it in both

    13:24

    chains. It's better to just create one task before both chains, right?

    So let me just grab that function from here. Dictionary maker, um, yeah, here. Perfect.

    13:39

    task 4 and let me just write it here. Perfect.

    Custom runnable dictionary generator. Perfect.

    Let me just run this as well. So now we are good.

    Perfect. So it will simply generate text and the other thing.

    And here we have this particular chain

    13:56

    which will which will expect text right and we are simply returning the text dictionary. So it needs to be same exactly same.

    If it is not same it will not run fine because you know the output of one is dependent on the previous one right like the output of one will be the input for the

    14:13

    next one. So next one is dependent on the previous output.

    Okay I hope it is clear. Perfect.

    So now it is done. Simple.

    Now I will be creating parallel chain two and just see how I will create that.

    14:30

    So, for parallel chain two. But if you just look at this parallel chain one, I think it is incomplete, right? How? Just let me know in the comments how, because these are just tasks; we have not created the chain for it yet.

    So I will simply say chain_linkedin

    14:46

    equals linkedin_prompt and llm_openai and string parser. Perfect.

    So let's run this. Perfect.

    Now it is a chain. Now, instead of creating the second chain like this, what will I do?

    You know what, I will simply create a function. I will say def insta_chain.

    Okay. And

    15:07

    here in this function I will write all of these three things, like what we usually do in a normal function. Exactly.

    That's exactly what you need to do: task one, the prompt. Right.

    15:26

    Perfect. And here I will simply say insta_prompt.

    And here I will say Instagram. And here I will say insta post.

    15:41

    And here I will say insta. By the way, this part is not required,

    so we can just leave it as is. Perfect.

    So now, as you can see, we can define everything in the function. Wow.

    Literally. Wow.

    Because whenever we create a function, we have more control.

    15:56

    Why? Because I know the output of this will go inside this.

    If I need to make any changes, I can do it in the function itself. I don't need to depend on, like, "hey, this will generate the output like this" and "hey, this will generate the output like this."

    No, no, I'm

    16:13

    free to do anything. Make sense?

    So I will get the text like this. Um, I typed text as string, but I think this function will receive a kind of dictionary, because we pass a dictionary in, and this will generate,

    16:31

    yeah, perfect. So text is of dictionary type.

    Perfect. So now I will simply say text equals text["text"], right, a simple dictionary lookup.

    We are getting this key's value. And now we are simply passing it

    16:47

    here in the text. And now I can invoke it within the function itself.

    I don't need to wait for it to execute elsewhere. So I will simply say insta prompt final, basically,

    17:02

    or, like, this is one way of doing it. Or you can even call the entire chain at the end; the choice is yours. The choice is yours.

    You can even call the entire chain at the end, because you know how to do that. So I'll simply say chain_insta and this thing,

    17:19

    and you will simply say chain_insta.invoke, and it will simply take text, and you will return, not chain_insta, I will say result,

    17:35

    because you just need the result. So this is another way of doing it.

    So what will it do? It will go to this function.

    It will create the chain. Everything is exactly the same.

    If you just compare the code, everything is the same. We are simply creating a function on top of it.

    It is easy to manage,

    17:52

    because let's say you want to import this function as a utility, you can do it. It's just like that.

    And the second benefit will be that you can make changes in between, because if you do not write it like this, you can simply make some changes here and you

    18:08

    are not actually dependent on this particular thing. Make sense?

    So let's say even if I comment it out, you can run all of these things in steps, right, and you can just make those changes. And if you create functions like this, you do not need those helpers like dictionary maker and all, because you can make the changes inline. Again, it's personal

    18:27

    choice, but my agenda is to make you aware of all the possible things. You do not need to stick to everything written in the documentation.

    You need to be creative as well. You need to know what things we can do,

    and you need to know how

    18:43

    we can do them, right? Make sense?

    So now, what are we doing? We are simply invoking it from within the function.

    Okay? And then we are simply returning the result.

    And what is that? The string.

    That's it. Okay.

    Okay. Makes

    19:00

    sense. Now we will simply create a runnable lambda for it.

    I will say insta_chain_runnable equals RunnableLambda(insta_chain).

    Right. Let me just run this.

    Perfect. So these two are done.

    19:17

    Very good. So now, finally, we need to build the final orchestration.

    That means we now need to arrange task one, task two, task three, task four, and then these two parallel chains. Right?

    Let's try to do it. Let me just write

    19:33

    parallel chain one, parallel chain two. And now it will be final orchestration.

    Perfect. Now final orchestration is very very important.

    I will simply say final

    19:49

    chain equals: first of all we have the prompt template, then llm_openai, then the string parser, then dictionary_maker_runnable, and then we have, let me just use

    20:04

    parentheses, so that I can show you better. Okay, we have this, we have this, we have this, and then we have this, and then we have the parallel chain.

    Now is the thing because now we need to create two

    20:20

    chains, right? So we have something called RunnableParallel. Let me first of all import it. I need to import it here: from langchain_core

    20:37

    .runnables import RunnableParallel, and also RunnableLambda.

    20:58

    Perfect. Okay.

    So here we will be using a special class called RunnableParallel.

    Make sense? RunnableParallel.

    21:14

    Perfect. Now what is this function?

    This function says now after this particular task called dictionary maker you need to create two parallel things. You need to create two parallel things.

    So what are the things? What are the task within that?

    Now I will define a chains that I

    21:30

    have created. One chain is this one.

    Very simple chain linked. Very good.

    I will simply say I I just need to pass a list basically not list. Um I can also do like one more thing.

    It's called branches because there are like two branches. Let me just show you.

    So these

    21:45

    are the two branches one two right so we have a built-in function as well like branches we can just use it and I think this is a better way to manage it manage it as well because I will just show you the output as well so when we say branches then we can define the branches like what are the branches I will simply say hey branch one you can name it

    22:04

    anything okay branch one is let's say LinkedIn and second branch is Instagram okay so LinkedIn branch is chain LinkedIn yes but For Instagram, we have a function, right? We have a runnable.

    So, I will do this. Make sense?

    So, this

    22:20

    is my final chain. Okay.

    And why it is red? Uh, okay.

    Make sense? I know why it is red because this is missing.

    Yeah, now it's fine. So, let me just invoke it now.

    Final

    22:36

    chain dot invoke. Now, just tell me your favorite movie.

    Um because we just need to write the movie name, right? In the first template, we just need to write the movie name.

    Yes. So let's say I want to

    22:52

    write a quick brief about KGF. I love that movie.

    Who is saying pushpa? That movie is also good.

    But KGF is like KGF. By the way, I love Pushpa as well.

    But

    23:09

    KGF is like KGF, right? KGF is KGF.

    So let me run this. Let's see what we get because I want to show you the final output.

    It will be like very interesting. It is taking time because we have so

    23:25

    many steps involved. Uh let's wait.

    String output parser this this. Okay, perfect.

    So it took 25 seconds. Wow.

    So

    23:42

    first of all what happened? Let me just show you what happened.

    So if you just look at this thing first of all template we created a template prompt template for movie KGF. Then we asked LLM, hey write a brief um summary

    23:57

    about KGF. Simple.

    Then we used the string output parser to fetch out the string, and then we used the string to create the dictionary, because I know we have these templates, right? So we used those things.

    Now we

    24:14

    need to pass that template to create the Instagram post and LinkedIn post. Make sense?

    That's what you want to do. So here we can just do this thing with with the help of these two chains.

    This is chain number one. That is a traditional

    24:30

    way to do it. This is chain number two with function.

    Like now you have both the ways. Now you can just do anything.

    And I have also given you the option that you can even ignore this chain definition because you can call everything within this function with the changes that you want. That is another

    24:45

    way to do it. Make sense?

    Okay. Now, now this is the final chain that we have created.

    Make sense? Now we invoked it.

    And if you just looked at the output, output is not very straightforward.

    25:01

    the output is saying that you have a branches key, and within branches you have a dictionary with two values, LinkedIn and Instagram. And you know where I got these two values from: from here, because I defined that I want these two branches like this.

    25:18

    Ah, and this is my LinkedIn post and this is my Instagram post, the content the LLM has generated. Makes sense. And see, both are different.

    25:34

    Okay. So that's how you build the parallel orchestration.

    Now, just as a bonus tip, I want to show you one more thing. It's a very quick one.

    Let's say you want to highlight or basically you want to um display this thing in a better way. Right?

    So you can even like

    25:51

    this is a final chain. This is a chain in itself.

    You can also treat this chain as a runnable. Is it possible?

    Yeah, it is possible. For example, let's say you want to create a chain like this.

    Let me just write it here.

    26:09

    Chain as a runnable. Okay.

    Chain as a runnable. So let's say once you have this chain, this final chain, now you want to extend this chain, right?

    Now you want to

    26:26

    extend this chain and you want to now connect let's say one more function and what this function returns it beautifies the output because we have the dictionary here and we do not want to present it like this we just want to beautify it. So what I will do?

    I will create a function. I

    26:41

    will say task one, right? Task one will be my beautifier function.

    Okay, I will say def beautify

    26:58

    and I will simply get the output and output can be let's say it it is a dictionary. We know that because it will return the dictionary.

    Again, fundamentals. This output will go as the input to the next function, next task,

    27:13

    right? Fundamental thing.

    So now this this is a dictionary. So now we will just create our function accordingly.

    We cannot just expect anything any any string format. No, we are just treating it as a dictionary.

    So I'll simply say let's say final response, right? And it

    27:31

    is of a dictionary type, right? And I also want to return a dictionary, but, I would say, a dictionary in a better way.

    Yeah, we can say that. Or let's say text; let's say I want to return a string, or

    27:47

    let's say a dictionary, because these are two different things. Usually we should just return a dictionary; that is the best practice.

    Okay. So I will simply say final response equals to this.

    Um I will say LinkedIn response equals to this. Yeah, perfect.

    28:05

    So I have created this function LinkedIn response Instagram response and I'm returning this dictionary with a beautified version. Make sense?

    And now obviously I need to create a runnable for it and you already know how to create that. So I'll simply say beautify runnable equals to runnable lambda.

    28:21

    Perfect. Now this is done.

    This is like task number one. Now, for task number two, the final chain: you already know that I have a final chain, so I'll simply say final chain equals final chain, whatever we have here. Perfect. Like this. This is the final

    28:38

    chain, right? Yeah. So what will I do? I could create a RunnableLambda for this, but I have two options: I can even simply write final chain as it is, because chains are also runnables by default. If you do not know this, I know it is confusing, but it is important. And see, I have two

    28:55

    options: I can tell you only the surface-level things, or I can make you understand the things while going deeper.

    So I will just prefer the second option. So I know it is like confusing but it is good for your growth and I know you are uncomfortable.

    That's where you grow

    29:11

    right? Okay.

    So I I'm with you. Don't worry.

    Don't worry. I know it is confusing but it is fine.

    So let me just tell you so let's say let me just take you to the diagram. So let's say this is the final chain, right?

    This is the final chain that we have built. Uh let

    29:27

    me just move it a little bit up. Perfect.

    So let's say this is a chain that we have built. Perfect.

    Now this entire chain, this entire chain is

    29:44

    a runnable. It is a runnable.

    Okay. Okay.

    It is a runnable.

    30:00

    This is the nature of it. Make sense?

    So that means you can just use this entire chain as your runnable and you can connect it to the next task.

    30:16

    So let's say you have this task, right, and you want to connect it using normal edges. You can do it, it's not a big deal. That task is, let's say, beautify.

    30:33

    I hope it is clear. It is a property of chains: whatever we create as a chain is a runnable by default, so I can literally use my final chain as it is. Okay. So now I will

    30:50

    simply say h final chain. Okay.

    And I can comment it out, because we do not need to recreate it; it is created here. Now I will simply say beautified chain.

    31:08

    Beautified chain. And now I simply create, let's say, beautified chain equals final chain

    and then beautify runnable. See, now I can literally use the entire chain as my runnable.

    31:25

    So it is a by default behavior. You need to understand this.

    Yes, you need to explicitly convert your Python function as runnable. But if you're creating a chain, you don't need to convert into runnable.

    That is a runnable by default. Perfect.

    Now let's try to run this

    31:42

    beautified chain. And this time let's say Pushpa. Let's see what we get.

    32:02

    It will, I think, take 10 more seconds, and I hope this will now give you a richer understanding about managing the chains, because it is very easy to just show you... wow, we have an error. Wow. What? What is the error?

    LinkedIn output is

    32:17

    truncated at LinkedIn. Okay.

    So it is saying the error is at LinkedIn. Here we have LinkedIn.

    Oh, I see. I see.

    A basic mistake. So now

    32:32

    we have final response in the form of dictionary. Right?

    Do we have LinkedIn key available? No.

    The parent key is branches. Inside the branches we have LinkedIn.

    So I will simply first of all add the parent key which is branches and

    32:49

    then we have Instagram or LinkedIn whatever. Make sense?

    Makes sense. Makes sense.

    Makes sense. Let's use single quotes.

    Perfect. Perfect.

    Now let's run this.
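The corrected beautify step is plain Python; the key point is indexing through the parent branches key first (the output key names here are my own illustration):

```python
# Corrected "beautify" step: the parallel output nests the two posts
# under the parent "branches" key, so index through it first.
def beautify(final_response: dict) -> dict:
    linkedin_response = final_response["branches"]["linkedin"]
    insta_response = final_response["branches"]["instagram"]
    return {
        "LinkedIn post": linkedin_response.strip(),
        "Instagram post": insta_response.strip(),
    }

print(beautify({"branches": {"linkedin": " post A ", "instagram": " post B "}}))
```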

    33:09

    Yeah. So I was saying that this will give you a richer understanding, because it is very easy to build those simple chains, okay, where I simply show you, hey, this is the prompt, this is the LLM, this is the string output parser, and that's it. That's it. No, no, no: you are becoming a good developer here, and you

    33:24

    need to understand all of those things which are complex. Makes sense. Pushpa is taking more time; why? KGF took just 25 seconds. Wow, here it is. So, as you know, now we have the

    33:41

    beautified version. Right, now I can literally use these two key-value pairs to perform my thing, let's say an email and a Teams message, whatever. We took a hypothetical situation just for fun, so that you can enjoy while learning it, but you know you can literally apply this thing anywhere. And

    33:58

    don't worry, I'll show you one quick example as well, even if we do not have any kind of real data. Make sense? Okay, let's see. So now let's talk about conditional chains. Now, what are these conditional chains?

    So now we know like how parallel

    34:13

    chains run in parallel. But now every time we do not want to just run everything in parallel.

Now if you just look at the example above, it is exactly the same.

Right? It is exactly the same.

    But now the difference is this is a

    34:29

    conditional chain. That means both of these chains will not run.

This chain will run if the condition is yes; this chain will run if the condition is no.

    So that means we can even create conditional chains as well. And again in

    34:46

real-world scenarios we use conditional chains as well, because you do not need to run everything. You need to keep so many things autonomous so that the workflow can decide, and that is the entire, I would

    35:02

say, backbone of autonomous agents and autonomous workflows. Autonomous workflows are nothing but fancy if-else statements, because there is something which is actually evaluating the decision.

    35:18

    In simple terms it is simple if else condition. Okay, great.
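The point above can be sketched in plain Python with the framework stripped away (all names here are made up for illustration): the "autonomous" part is literally a function that evaluates a condition and routes to one of two branches.

```python
# A conditional "chain" without any framework: an evaluator decides,
# and plain if/else routes to one of two branches.
def evaluate(review: str) -> str:
    # stand-in for the LLM's judgement (hypothetical keyword check)
    return "positive" if "love" in review.lower() else "negative"

def positive_branch(review: str) -> str:
    return f"Thanks for the kind words about: {review!r}"

def negative_branch(review: str) -> str:
    return f"Sorry to hear that about: {review!r}"

def conditional_chain(review: str) -> str:
    verdict = evaluate(review)      # the "reasoning" step
    if verdict == "positive":       # the routing is just if/else
        return positive_branch(review)
    return negative_branch(review)
```

In LangChain, this same routing is what a RunnableBranch does, with an LLM call standing in for the evaluator.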

So now let's see how we can implement this thing, and architecture-wise it is very clear. I'm not explaining this from scratch, because you know all of these things.

    Now only thing is how we can

    35:34

    just create these conditional branches. Right.

So let's go to our Antigravity, and now let's create conditional chains —

    35:49

from the parallel chains, and we can just call it conditional chains. Very good.

Now the flow will be very simple and very similar as well. So what are we going to do?

We will simply ask for the movie review summary, and before that,

    36:06

    what I will do I will write the review. What does it mean?

    Let me show you. So first of all this is the prompt template right?

We do not need to write the summary now, because we want to evaluate the response.

    So let's say you are actually building a pipeline.

    36:23

    Okay. And you need to automate this particular AI workflow as a data engineer.

    So let's say you're a product based company. If you're not working in a product based company, don't worry.

    You will be working in a product based company very soon. Don't worry at all.

    So let's say you're working in a product based company and your company is selling some products, right? And you

    36:40

    are assigned a task where you need to automate their customer reviews. What does it mean?

    So basically let's say there's a product and there's a customer XYZ wrote something. Hey, what is this product?

    product blah blah blah blah blah.

    36:56

So in that particular scenario, you do not need to build your own NLP (natural language processing) machine learning model to categorize it. You can leverage an LLM.

    That's why you are

    37:11

    an AI data engineer, right? You do not need to involve an ML engineer.

    You as a data engineer can do it. So now what you will do?

    You will take the review, right? And you will pass it to the LLM.

    Make sense? And that LLM will actually

    37:27

    categorize it whether the review is positive or negative. Okay.

How can an LLM do that? LLMs are actually NLP models,

built on top of the transformer architecture behind the scenes.

    So they are well trained on these kind of tasks.

    37:43

    Really? Yeah.

Yeah. They are very well trained on these types of tasks, like categorization, and if you want to evaluate something, an LLM can do that very well, because it is trained on such a large corpus of

    38:00

data, so it can evaluate these things very nicely. Usually we also do it with the help of BERT models, basically encoder-only models, but that is not as performant; that's why we use LLMs, that means transformer-based models. So an LLM can do that very well. But yes, we need to guide it.

    We

    38:16

also need to use structured output, which you learned in the beginning with Pydantic, because yes, the LLM can do that work. But the LLM will not just say yes or no.

You need to guide the LLM: bro, only respond with yes or no. Do not say "this

    Do not say this

    38:31

person has said da da da da da." You need to just say yes or no.

And you will be doing it with the help of Pydantic. Makes sense?

    Okay. So now your fundamentals are strong enough to build those things.

So let's try to write this thing: "You are a movie review

    38:52

evaluator." Makes sense?

    And then I will say please categorize the movie review as positive or negative. Simple.

    So this is my

    39:07

prompt. But before building this prompt, I will build my Pydantic class,

if you remember. So I will simply say from pydantic import BaseModel, and I will say class — let's say LLMSchema

    39:22

(BaseModel) — and let's say a field movie_summary_flag. Okay, and I will use something called Literal: yes or no, okay, or let's say positive or negative —

    39:39

positive and negative. And let me import Literal as well: from typing import Literal. So this is my schema — Literal means you just have these two options, either positive or negative.

    You cannot answer anything else. So I will just use this

    39:55

particular schema to build my LLM. So how can I do that?

I will simply say llm_structured_output = llm_openai.

    40:11

with_structured_output(LLMSchema). Perfect.

So now this LLM knows that it needs to generate the response in this form, positive or negative, and that's it. Makes sense?
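A minimal sketch of the schema being dictated here, assuming Pydantic v2 and the field name used in the video:

```python
from typing import Literal
from pydantic import BaseModel, ValidationError

class LLMSchema(BaseModel):
    # the model may only answer with one of these two strings
    movie_summary_flag: Literal["positive", "negative"]

ok = LLMSchema(movie_summary_flag="positive")   # parses fine
try:
    LLMSchema(movie_summary_flag="maybe")       # rejected by Literal
except ValidationError:
    rejected = True
```

You would then hand this schema to the model, e.g. `llm_openai.with_structured_output(LLMSchema)`, so the LLM is forced to answer with one of the two options.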

    40:27

Makes sense? Okay.

And obviously I can test it as well. Let's see what it returns.

    Dict must be a prompt value. What is this?

    Um okay, makes sense

    40:44

because we do not have any kind of input. But okay, makes sense.

So I can simply say it like this. Okay, see, now it has returned movie_summary_flag='positive'.

    41:02

    Very good. Now do you know what?

Now you will say: hey, we need to use this particular thing in our downstream steps as well. But this is a Pydantic object.

How can we parse it? You have two options.

You can either create your own custom function, a custom runnable, or there's one

    41:18

thing provided by LangChain as well. It's called PydanticOutputParser.

What will PydanticOutputParser do? It will simply parse it.

Okay, let me show you. So if I say

    41:36

demo_chain, okay, this is a demo chain, and I will simply say llm_structured_output piped into PydanticOutputParser, and I need to define the schema, and the schema is

41:53

LLMSchema. Right?

And now if I say this thing, let's see what we get. PydanticOutputParser is not defined, because we didn't run this import.

    Now let's see. Um, okay.

    What's wrong with this?

    42:11

    Okay. Okay.

I think this is fine, because by default it should take it. Let me just specify this pydantic_object parameter.

    I think so

    42:27

because this is the code. Oh — this time the error is different. Okay.

    This time it says validation. Okay.

    Validation error makes sense. That means it is running fine.

    Okay. This is running fine.

    So now this is a good thing. So now do you know what happened?

    This has actually thrown one

    42:46

error which is called a ValidationError. That means it's saying: hey, validate. See, if I am just running it without this —

    Okay, if I'm just running it without this. So what do we get?

    If I just comment it out let's say let's say like this. What do we get?

    We

    43:02

simply get LLMSchema(movie_summary_flag='positive'). Okay, makes sense?

    So it is just returning like this. Okay.

    So now if I'm just adding this particular thing which is this one, it is throwing the validation error. That means it is

    43:17

saying the validation is not right. And what is the error?

Let's read it: input should be a valid string.

type=string_type, input_value=LLMSchema(...),

input_type=LLMSchema. "For further

43:32

information visit…" this link. So it is saying it is a validation issue, and you cannot do it like this.

    So in order to solve this, you actually just need to remove this. Why?

    Because you do not need to do this.

    43:48

Why?

Because you are already converting the text into a Pydantic model. If you see the response, it is already there.

If your LLM were just generating a dictionary as raw text, but not a Pydantic object, then you would have used PydanticOutputParser. That's the advantage you

    44:05

get when you use the premium models hosted by reputed tech companies: they return structured output natively, and you do not need to do anything extra. So that makes sense.
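To see why the extra parser is redundant here: a Pydantic-style output parser exists to turn raw LLM text into a validated model, roughly like this (a sketch, assuming Pydantic v2):

```python
from typing import Literal
from pydantic import BaseModel

class LLMSchema(BaseModel):
    movie_summary_flag: Literal["positive", "negative"]

# Raw text a model might return; parsing validates it into the schema.
raw = '{"movie_summary_flag": "positive"}'
parsed = LLMSchema.model_validate_json(raw)
```

with_structured_output already returns an LLMSchema instance, so piping that instance into a parser that expects a string is exactly what produces the "input should be a valid string" error seen above.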

So our chain is ready, and we know that this will generate a Pydantic object, and that's

    44:23

    it. That's what you want.

So now, what we need to do: we will simply use it. So let's say this is our llm_structured_output.

    Let me just rerun this. So let's make sure everything is fine.

    So this is done. Okay.

    So now we have this particular LLM

    44:39

    created. And now we can simply say chain with conditional chains.

    Perfect. So this is the kind of prompt that we are using.

    You are a review evaluator. Please categorize a movie review as positive or negative.

    And this is the input. I will just pass the input

    44:54

    in the runtime. Perfect.

    And this is the task two which is this one. Perfect.

    And I can run this as well. Perfect.

Now I don't need to create the StrOutputParser, because we have Pydantic. So that is

    45:10

fine. We need to convert the Pydantic object into a dictionary, you can say, because we will be receiving a Pydantic object, and in order to create the further conditionals,

    45:25

we need to use a custom function. So I'll simply call it pydantic_json, because I want JSON.

So this will receive a, you can say, Pydantic

    45:43

object, right. This function will receive the Pydantic object.

So now, how can I convert this Pydantic object into that form? So what I will do: I will simply say input, and this will be of LLMSchema

    45:59

type. Makes sense, because this is a schema; this is not a string, not an integer, not a list; this is a different schema. And I will return a string. Perfect. Now let's try to do it. How can we

    46:16

do that? I simply say input.model_dump(). That is, you can say, the modern way to do it. If you want to see the result of this, let's say I want to invoke this model. I will say result = llm_

    46:33

structured_output.invoke("This movie is good").

    Okay. And if I write result,

    46:51

so this is a schema object. If I say model_dump, then do you know what?

You will see the dictionary. This is a method that we use especially with Pydantic objects.

So I simply need to say model_dump(). And if I just want to get the movie summary

    47:07

flag, I will use it like this: ['movie_summary_flag'].

    Perfect. So these are the best practices that you use.
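The model_dump call being described, as a standalone sketch (Pydantic v2 naming):

```python
from typing import Literal
from pydantic import BaseModel

class LLMSchema(BaseModel):
    movie_summary_flag: Literal["positive", "negative"]

result = LLMSchema(movie_summary_flag="positive")
d = result.model_dump()            # Pydantic object -> plain dict
flag = d["movie_summary_flag"]     # then index it like any dict
```

model_dump_json() is the sibling method that returns a JSON string instead of a dict.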

Okay. So you just need to use this thing, model_dump (or model_dump_json), and I can

    47:24

    delete this cell. This is not required and we can just do it here.

Perfect. I will simply say input.model_dump(), and we can return this thing.

    Perfect. Nothing fancy.

    Yeah. And then

    47:41

we can simply create the RunnableLambda, and that's it: pydantic_json_lambda = RunnableLambda(pydantic_json).

Perfect. So these are my tasks that I have created: one, two, three.

    And once we have this thing then we

    47:56

will be creating two chains, whatever you want to do. We will simply say parallel chain one, and you say: you are a LinkedIn post generator.

    Create a post for the following text for let's say you want to just post it on LinkedIn like whether the comment was positive or

    48:11

    negative. It's just a hypothetical situation.

But in the real world, what you will do: you will simply categorize it and reply to that email saying, okay, this movie was good, thanks for your feedback.

    If the person says movie is bad, you will simply say apologies for

    48:27

    that and I hope blah blah blah whatever you want to say. But here we already have the code written.

    So I don't want to write the code again and we can just reuse it. Make sense?

So let's say you want to use this parallel chain one, which is chain_linkedin, and we want to use this because it will simply return

    48:42

positive or negative. So it will simply generate the, you can say, post for positive.

    Okay. And it will generate the post for just positive word.

    That's fine. Makes sense.

    And it will do it here as well. Perfect.

    So now let's come

    48:58

to the final orchestration. Remember that we created this RunnableParallel.

Yeah, we created this. And do you know what, here we have something called RunnableBranch.

Yes, RunnableBranch. So what we do here: we simply need to create a kind of

    49:14

conditional chain. Okay, a conditional chain; we will simply start with the normal orchestration.

    Let me just show you. So first of all let me just create a conditional chain.

Now this conditional chain will be a RunnableBranch, and obviously we need to import

    49:30

it: RunnableBranch.

Perfect. So now in this RunnableBranch I will define all the conditional branches.

    49:45

    So first condition is what? First condition is this one.

    So I will simply write condition. Okay.

    You can also name it. It's your choice.

    But usually we simply write a function directly. Because just tell me one thing.

We want to go to this chain if the answer is yes.

    So we obviously need to define that

    50:02

    particular function. Okay.

So we'll be using a lambda function. So I will say lambda x, and then I will say 'positive' in x,

    50:19

makes sense. If that is the scenario, if the lambda is true, then you simply need to run my chain, which is chain_linkedin. Let's say if positive, we want to run the LinkedIn chain; otherwise, you can say,

    50:38

chain_linkedin, and we can also run the Instagram chain, insta_chain_runnable —

    50:58

perfect, Instagram, the insta_chain_runnable. Makes sense. So these are the two conditions that I have created: if this is true then this, and otherwise it will

    51:14

be like this. Makes sense. And then the default: we could simply say chain_default, but we do not have any kind of chain_default, so we will just provide one. So this is my conditional chain. Default — oh man, what is this? So we can simply set the default in this one, because we just have

    51:29

to. So I will simply say default equals — you can just say default equals, or you can simply write it like a normal chain, let's say the Instagram chain,

    51:48

insta_chain_runnable. Perfect. Now what do you want? "Got an unexpected argument" — yeah, the auto-suggestion didn't pass this thing. Okay, so now this is my conditional chain. So now we can simply create our final orchestration, and it will look like:

    52:06

final_orchestrator equals — first of all we need to write all of these things, right, this insta_chain_runnable and so on — and this is not a parallel chain, we will simply say conditional chain,

    52:23

perfect, and here it will become conditional chain two. So where are we starting our chain from? Obviously we started our chain with this prompt template, then llm_structured_output, right,

    52:38

then pydantic_json. So let's write it. And I don't know why this Antigravity model is not picking the right context, even with the code right in front of it. So final_

    52:54

orchestrator equals, and we can simply define it: prompt template, then llm_structured_output, and then we have that, um,

    53:10

pydantic_json function, pydantic_json_lambda, and then we have the conditional chain. Makes sense. Let's run this.

    53:25

And what do I need to do? I simply need to pass the input. How does it look? I think we have the input key as 'input'. Okay.

    So let's try to give the input and input will be I love this

    53:42

    movie or let's say this KGF movie. Wow.

    Now let's see what it does. We should see like two things.

No, not two things, just one thing: positive.
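What just ran can be traced end to end in plain Python (a framework-free sketch; the chain functions are stand-ins for the LLM calls):

```python
# Sketch of the whole pipeline: evaluate -> dict -> branch -> post.
def llm_structured_output(review: str) -> dict:
    # stand-in for prompt | llm.with_structured_output(...) | model_dump
    flag = "positive" if "love" in review.lower() else "negative"
    return {"movie_summary_flag": flag}

def chain_linkedin(d: dict) -> str:
    return f"LinkedIn post about a {d['movie_summary_flag']} review"

def insta_chain(d: dict) -> str:
    return f"Instagram post about a {d['movie_summary_flag']} review"

def conditional_chain(d: dict) -> str:
    # what RunnableBranch does: first matching condition wins, else default
    if d["movie_summary_flag"] == "positive":
        return chain_linkedin(d)
    return insta_chain(d)

post = conditional_chain(llm_structured_output("I love this KGF movie"))
```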

    Perfect. So it has written the

    53:58

    LinkedIn post for positive. Why?

Because it actually evaluated our, you can say, prompt, whatever we have given, as positive. Makes sense. And I know this was a little bit complex, but that's how you grow, that's how you

    54:15

learn. And I hope now you have a clear understanding of all the branches and everything. Now let's jump on to the next chapter, and this chapter is my personal favorite as well, because it is very special. In this we will be covering —

    54:30

    First of all, yes, finally react agent intro. Perfect.

So now, you would have heard about this word, this keyword, whatever you want to call it: ReAct.

    54:46

Let's talk about it, because it is one of the most popular keywords, one of the most popular pieces of jargon we use in the agentic AI and AI data engineering world. It is called ReAct.

    55:02

Now this word is not React, bro. This is ReAct.

See, this is not React. This is ReAct. What does it mean? ReAct is actually made up of two words:

    What does it mean? React is actually made up of two words.

    55:19

reasoning, and then acting. Reasoning plus

    55:36

acting. Basically, it is reasoning plus evaluation plus acting, or reasoning plus observation plus acting.

But this is the high-level definition. So let me look up the ReAct agent,

uh, ReAct agent full form.

    55:53

    Yeah, reasoning plus acting. Yeah, that's it.

Like that is, you can say, the core of everything: reasoning plus acting.

There's a middle thing as well, which is called evaluation or observation. But I will explain everything step by step.

    Now just sit back and relax because you need to enjoy this thing because this is very

    56:09

    important because once you understand the React architecture, boom, you understood almost everything. Really?

Yes. You would have already understood so many things, but this is amazing.

This is the hottest part of the entire thing: ReAct. So let's say

    56:27

    you want to build an agent. Okay, let's say you want to build an agent.

    Okay, let's build an agent. Let's say you want to build a react agent

    56:44

because everyone is saying: I want to build a ReAct agent. I want to build a ReAct agent.

I want to build a ReAct agent. Now, how to build a ReAct agent?

Why build a ReAct agent? So we know that ReAct means reasoning plus acting.

    Okay. Now do you know what what does it mean?

    Let's say there is Rahul. Bro, come

    57:01

    here. So let's say this is Rahul.

    This Rahul wants to build an agent. Okay.

    So that whenever there is a user who sends the input,

    57:17

    that's what users do, right? Input.

    Okay, make sense? This user will send the input.

    Okay. Yes.

    Now do you know what?

    57:35

Now, currently we are using really advanced LLMs provided by OpenAI, which are amazing, which answer all of those things they know, right? But

    57:51

there are some questions that the LLM does not know. Oh really? What are those questions?

    LLM knows everything. Let me show you.

If I ask the LLM this question — let me just bring up the conditional chains code, and this is my code.

    58:11

So let me just ask this simple question with llm_openai, one of the best models that we have in the market. Okay.

.invoke — I will say, hey,

    58:27

what's my name? Simple question. Let me just run this. Let's see what it says: "I do not know your name. Would you like to tell me? I will use it for this chat." Simple. So it

    58:45

does not know my name. So what does it mean? That means LLMs are not trained on your personal data. LLMs are not trained on your organization's data.

    59:00

LLMs are not trained on data they don't have access to. Only you have access.

    So what to do in that scenario? For example, let's say you have a database.

    You have a data warehouse in your organization and you want to talk

    59:16

    to that database. You want to just let's say make some API calls, right?

Let's say you have a Postgres database, and you can use the psycopg library to connect to it, and you just want to grab the data as a result. Or let's say you have a data

    59:31

warehouse in AWS or Azure; you can use API calls to get the data. Makes sense, you can just do that, and you know how to build those Python functions. Yes. But how will the LLM do that? Because the LLM doesn't have access to do

    59:48

that. So what will we do in that scenario? We simply create something called tools. What do we create? Tools.

    So let's say we have some tools. Let's say we have a tool for our

    00:03

Postgres, let's say. Right, let me just use Postgres.

Perfect. So I have this function, postgres_database, that can access this Postgres.

    Okay. Then let's say I also have access

    00:19

to email, right, because obviously the LLM cannot send an email for me. Let me show you.

    If I say uh

    00:35

send an email to my address at gmail.com. Right?

    Let me just write this.

    00:58

Is it actually sending an email? See: "I can't send email directly, but I can draft one for you to copy."

    See, it cannot send the actual email. It cannot.

    So, what does it mean? That means it doesn't have

    01:14

the capability to send an email. Let's say it also wants to do something with, let's say, your code.

It doesn't have access to do that. It can answer your questions, but it doesn't have the capability to perform these things.

    Then who can perform these things? Who has the access and who has

    01:31

    the capability to do all of these things? You.

    Okay. So, should we become LLM now?

    No, not really. I didn't say that.

    No, we do not need to become an LLM. Now you need to create the tools.

    You need to create the functions for these things. One function for this,

    01:48

    one function for this, one function for this. Okay.

Okay. These are our functions.

    Perfect. Perfect.

    And you will create something

    02:03

called tools, or basically a toolkit. These are tools.

    Tools are what? Functions, right?

So when I combine all of these things, what will it become? A toolkit, right. This is my toolkit,

    02:19

where we have all the tools written. This is my toolkit. Makes sense? Have we discussed anything fancy? Nothing. It is that simple.

    02:34

Perfect. So now I have this toolkit. Do you know what will happen? This LLM will connect to this toolkit, and it can make

    02:49

    use of these functions. Really?

    Yes. It can talk to these tools in the back end.

Wow. Now, what we did: we actually increased the power of our LLM.

    And this step is called tool binding.

    03:05

This is tool binding. We basically bind our LLM to the tools.

    That's why it is called tool binding. Okay.

    Okay. Makes sense.

    Tool binding. Right?
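"Tools" and "toolkit" here are nothing exotic — a sketch with plain functions (all names hypothetical; real binding also passes each tool's description to the model so it knows when to call what):

```python
# Tools are functions; the toolkit is just the named collection of them.
def send_email(to: str, body: str) -> str:
    """Send an email to the given address."""
    return f"sent to {to}: {body}"

def query_postgres(sql: str) -> str:
    """Run a SQL query against the Postgres database."""
    return f"rows for {sql!r}"

toolkit = {"send_email": send_email, "query_postgres": query_postgres}

# "Binding" = telling the model which tools exist and what they do,
# typically via their names and docstrings.
tool_specs = [(name, fn.__doc__) for name, fn in toolkit.items()]
```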

    Do you know what will happen now?

    03:21

Now, my input: if this input can be answered directly by the LLM, it will simply give the response, take it like this.

    It will simply say

    03:38

hey, you want the response? Okay, let me just give you the response, and this is the response. Simple, right?

    But when this LLM doesn't have this capability to do that work, it will look

    03:56

    for its tools. It will look for its toolkit.

    It will search for the relevant tool. Hey, let me just check.

    Do I have that tool to perform that action? This user has asked me to send an email.

    Do I have the tool? Let me just first of all

    04:11

    check it. It will simply search for that particular toolkit, right?

    It will search for that toolkit. Now that tool from that toolkit will be used

    04:26

    if that can do the work. Let's say this email tool is there.

So what will happen? It will simply go to its toolkit

    04:42

    to search for the tool. Let's say this toolkit has got the response.

    Now, do you know what will happen? This tool will not send the answer directly back to the user.

No, no, no. It will give the response back to the ReAct agent.

    04:59

    Okay? Like this.

    This will be a loop. So, what did I say?

This ReAct agent will search for the tool. It will make use of the tool.

The output of that tool will go back to the LLM, basically the ReAct agent that we are building. Under the

    Under the

    05:14

hood, it is just an LLM, right? So, this will go back to the LLM.

    It will observe it. Hey, this tool has generated this output.

Is it good enough to go ahead, or should I make another tool

    05:30

call? Let's say this email tool has run but has not actually completed the work.

It needs to make another tool call. It will again go back to the toolkit and search for a tool for the task that is pending.

    Then it will bring that tool and then it

    05:47

    will just give the response back and then it will just observe it. This step is called evaluation.

This step is basically, you can say, reasoning, or observation. So that is why it is reasoning plus acting.

    That means it will first of all call it.

    06:06

    It will take action. Then it will just do the reasoning part as well.

It will also observe the output. Once the LLM is satisfied that it has the final answer, only then will it generate the output,

    06:23

and if it is not satisfied, it will not generate the output and will keep searching the tools. Yes, that is the entire concept of ReAct, and this is the kind of agent everyone tries to build nowadays, because

    06:38

this is a kind of autonomous agent: you have the tools, you create a wide range of tools, a long list of tools, and your ReAct agent can actually do everything. But you need to understand all the ins and outs of this thing, because it is not

    06:54

    very easy to understand it. Let me be very honest.
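The loop just described — reason, act via a tool, observe, repeat until satisfied — fits in a few lines of plain Python (a toy sketch; the decide function stands in for the LLM, and all names are made up):

```python
# Toy ReAct loop: the "LLM" decides, tools act, observations feed back.
def react_agent(question, decide, toolkit, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = decide(question, observations)        # reasoning
        if step["action"] == "final_answer":
            return step["content"]                   # satisfied: answer
        tool = toolkit[step["action"]]               # pick the tool
        observations.append(tool(step["input"]))     # acting + observing
    return "no answer within step budget"

# Stub "LLM": call the search tool once, then answer from the observation.
def decide(question, observations):
    if not observations:
        return {"action": "search", "input": question}
    return {"action": "final_answer", "content": observations[-1]}

answer = react_agent("capital of France",
                     decide,
                     {"search": lambda q: f"result for {q!r}"})
```

The max_steps cap mirrors the real frameworks: an agent that never satisfies itself must still terminate.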

Yes, I will show you the code to implement this first, so that you can actually see what is going on.

    Then I will be breaking down each and everything like how it is doing

    07:09

    everything behind the scenes because in lang chain you do not see behind the scenes. No, you do not see it.

If you know the internals in depth, then you can dig it out, but by default it will not tell you how it is doing that. I will show you how it does that behind the scenes and why that is very useful,

    07:26

because then you can customize this agent for your own use case. And again, for interviews it is very handy, very helpful. I hope it makes sense.

    07:41

I will also show you tool binding and how it is working. Makes sense. Okay, let's first see the code implementation, which is given in the documentation as well; that is the first step, then we will go deep. Okay.

    So now let's just try to write the code

    07:58

and I can also show you the documentation. Let's search: ReAct in LangChain.

And you know what, there are so many tools available in LangChain as well. So you do not need to create tools from scratch.

There are tools such as Wikipedia search. There are tools such as DuckDuckGo Search.

    If

    08:15

you just want to search for news, the latest news, there are search tools for that.

There is Tavily Search, with which you can use Tavily as a search engine. So there are many integrated tools as well.

    I will be using two to three and then you can just

    08:31

    explore and learn. Okay.

Um, "react agent langchain". Okay.

Where is that documentation?

Let's explore on our own: LangChain, and agents, maybe here. Okay.

    So

    08:48

here they have just updated this thing. Earlier it was called create_react_agent.

But now they have created a new function: from langchain.agents import create_agent. So now you just call create_agent with the model and tools. See, simple,

    See simple

    09:03

    simple. Did you understand that?

You simply need to define the model that you want to use and add the tools that you want to attach to the LLM. That's it.

    And everything will be done on its own. See simple.

    But yes, you need to create the tools on your own and obviously you

    09:19

need to define the tools so the LLM knows what tools it needs to use, right. And there are some dynamic helpers as well, which I'm not a big fan of, because if you want to make a tweak, I would rather stick with my own code to make some

    09:35

tweaks instead of using their decorators, because they can make changes to their code base anytime, and they will simply say: hey, we have a new update available, we have this v1 available. And yes, your code base will be broken. I can just make my own decorators,

    09:51

bro. I do not need your decorators.

So that's why I do not use these decorators and all. If I want to make any tweaks, if I want to make my agent dynamic, I will just write my own code.

    And yeah, that's me. Okay, so let's try to see

    10:08

    what are the tools. Let me just click on tools.

    And here are the tools that we have. And yes, we can use the tool decorator.

    This is something which is stable. We can just use this.

Okay. And yeah, I like using this tool definition.

    And there are so many tools.

    10:23

    First of all, I just told you that you can create your own customized tool. So this is a definition for that.

    If let's say you want to sort something within a database, right? And you can just simply do it.
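A custom tool is essentially a typed function with a docstring the model can read; in LangChain you would wrap it with the @tool decorator, but the skeleton (a hypothetical sorting example, not from the docs) is just:

```python
# Skeleton of a custom tool: typed arguments plus a docstring that
# tells the LLM when to use it. In LangChain you'd add @tool on top.
def sort_records(records: list, key: str) -> list:
    """Sort database records (a list of dicts) by the given key."""
    return sorted(records, key=lambda r: r[key])
```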

If you want to use integrated tools, let me see if I can show you the integrated tools.

    10:41

    integrated tools. No, they have not shown here.

    I can just search. Let me first of all close all these tabs.

So I will simply say DuckDuckGo search LangChain, because that is like one of the tools.

    10:57

    See so here we have oh yeah I think they have just moved the thing to integrations. Perfect.

So if I just go to integrations, I will see all of these tools, the popular providers. I have tools for

    11:13

OpenAI, Google, Anthropic, AWS, HuggingFace, Chroma, Pinecone. Those last ones are the vector databases, and Databricks and Mistral.

All of these things are here. Redis, Elasticsearch, and all providers, like we have everything here.

    11:29

    These are the you can say the ones which are validated and there are some community providers as well. If I just open this one, I will see a long list of all the things.

    See all providers. Click here and then you will see all of the providers here.

    See? Wow.

    Such a big

    11:47

list of tools that you can just use. These are all of the, you can say, tool integrations they have created: DuckDuckGo search, DuckDB, like so many things, ElevenLabs.

    Wow. So you have so

    12:02

    many things. So let's try to start with anything.

We can also use IBM, Jaguar. Wow, so many things.

    Wow, man. They have actually added so many things.

    You can also explore on your own. See,

    12:18

Tavily is also there. Tavily is basically the search engine that is very popular.

    And then we have wow so many things. We have Twitter as well.

    Wow. So now let's try to uh apply this thing react intro.

    Okay.

    12:33

    So this is our import thing. Perfect.

    So now what we going to do? First of all, we will create our tools, right?

    Let's create our tools because that is the base. Okay.

    Let's create our tools. And don't worry, we will be just creating some

    12:49

    amazing tools. Okay.

So first of all, let's create tool number one. And this will be, let's say, news search, because let's say your user wants to get some news, right, so we can just

    13:06

have that tool. And what is the basis of the news search tool? It is called DuckDuckGo search, and I can just show you DuckDuckGo search, it's here. So if you want to use DuckDuckGo search, it is free, that's the best thing, you do not even need any kind of API key, because with some other

    13:21

search or news search tools you need some API keys, but this one is completely, completely free. So how can we just use this tool? This is, first of all, very well integrated.

DuckDuckGo search is actually a standalone, you can say, service that

    13:37

you can just go to their website and see how you can use it as an API, as a, you can say, Python function, a Python API. But you do not need to do this, because even if you search, let's say, Python, you will get the code for how to make an API call using Python, and then you can just

    13:54

see this. But with LangChain you do not actually need to even install this, and you do not need to write this Python function like this, because that's how you would make the API calls yourself, right? Because here we would simply be writing the API calls to the function ourselves.

But here we have the LangChain

    14:11

integration, and we can simply use this class, DuckDuckGoSearchRun, which is created by LangChain. Okay.

    So what I will do I will simply copy this and I can simply go here from langchain

    14:26

community.tools import DuckDuckGoSearchRun, and this is an instance of that. Okay, let's call it search_tool, it makes more sense. And I'll simply say Obama's first name, so what it will do, it will make that tool call. Or let's say I want to search the news: who is Anchal Lamba?

    14:44

Let's see if it knows. Search... no module named langchain_community. Yes, we need to install this module, which is called langchain-community. So I will... here, this one, do we have the virtual environment enabled? I think they

    15:02

    should add this feature where you can automatically enable the virtual environment. Why I'm writing this again and again?

Activate, activate, activate. uv add, and then langchain-community.

    15:19

    Perfect. Now let's wait.

    Perfect. So now let me just rerun this.

ddgs... please install... pip install ddgs. What do you mean, could not import ddgs Python package?

    15:37

Uh, please install it with... okay, there would be something, maybe the DuckDuckGo search package. So I will simply say uv pip install ddgs.

    Let's add it as well because it is saying that. So let's do

    15:52

    that. Ooh, it knows man.

    Okay. Okay.

    Okay. That's good.

    H good. Okay.

    So what it has already like what it has

    16:08

    done this is not generated by AI. This is a tool call.

That means I made a tool call to DuckDuckGoSearchRun. Okay.

And I simply wrote this thing: who is Anchal Lamba. Basically as news, I want to get some news, some

    16:25

information regarding this question, who is Anchal Lamba. So this answer is the response of this particular function called DuckDuckGoSearchRun. This is not generated by the LLM. This is not generated by the LLM. Yes, we will

    16:40

bind this tool to the LLM so that it can make use of this function, but currently it is not connected to the LLM. This is simply the response we are getting from the DuckDuckGo search. Makes sense? Perfect. Makes sense.

    Simple, simple, simple. Okay.

    So, this is our tool. Let me just hide it.

    Um, it's fine because

    16:56

    why show up? So, search tool one tool is done.

Let's say I want to add one more tool, and it will be, um, which tool should we use? Because we have a very long list, and I'm a big fan of this tools list. If you just go to integrations

    17:12

and then tools and toolkits, you will get so, so much. So, you have Google Serper.

    See these are the paid ones, these are the free ones. So you can actually compare what you want to use for search, for code, for productivity.

    You have like so many things database, finance integration.

    17:29

    And let's try to use one tool for um let's say oh we have Gmail toolkit as well. Wow, nice.

    Let's use for Wikipedia. Let's use Wikipedia.

    Wikipedia. Wikipedia.

    Wikipedia.

    17:46

    Um where is Wikipedia? Yeah, here.

    Let's use Wikipedia. Okay.

    So, what I will do? I will copy this and I will Wikipedia search tool.

    I will write this

    18:02

    and then I can simply go to this wrapper because we want to run this and let's write this. So, this is like Wikipedia tool.

So, I can say wikipedia_tool. That one is search_tool.

    Make sense?

    18:18

Okay. So now, if you just want to test it the same way we tested the other one: uv pip install wikipedia.

    Okay. Let's install Wikipedia.

    Not a big deal. Perfect.

    Now let me just run this. Now let me just test it.

    18:35

wikipedia_tool.invoke. Let's say, what is the capital of France?

    It should know right because this is Wikipedia. Wikipedia knows everything.

    page closed. See what is this thing?

    You are actually getting everything because when you just make a call to Wikipedia

    18:50

tool, you are actually getting the response of the entire Wikipedia page, right? So this is that particular thing.

    That's why you're simply seeing everything. So you are saying France but before France, you are also seeing this garbage stuff as well.

Closed question, contrast with open-ended question. So this is like the response that you get

    19:07

whenever you're visiting the Wikipedia web page, you will see all of these things, right? So that's exactly what it is returning.

    So do not feel bad. That's how it works.

    That's why we need to connect these things to LLM so that LLM can prune these things. Right?

    Okay. Makes sense.

    So these are the two tools that we have created. Let me also create

    19:23

one custom tool so that you will also learn how to create custom tools. So let's say you want to create one tool which will do something it cannot do otherwise, like we do not have any kind of integration for that particular function that you want to build, right?

    So you can build your custom function as

    19:38

well. Tool three, and I want to make it a custom enterprise tool.

    Let's say that tool is just for my enterprise like where let's say you are working right. So I will simply say from lang chain

    19:58

.tools import tool. Perfect. Now I will add the @tool decorator, and then I will write, let's say, enterprise_tool. Makes sense? Makes sense. So now what I will

    20:16

do: I will simply create the function. And what does this function do? It simply sends an email. We have integrations for email as well; I'm just demonstrating with a hypothetical situation where we do not

    20:31

    have the let's say integration with email. Right?

    So what I will do I will simply say return and I will say email sent. Let's say email sent.

Obviously, I need to write the code here to send the email, and for that you would need to create those API keys as well, from Mailgun and maybe the Gmail

    20:49

API. We are not getting those APIs right now, but you know how to do all that basic stuff, like getting an API key and then just making the calls.

You can just write your code here. Now, a very important thing: whenever you are creating your custom function, you need to add a docstring.

    21:05

The legends are asking, what is a docstring? Okay, good.

A docstring is nothing but a kind of way to add a description in your function. You write it like this:

    three um double quotes and then you

    21:20

simply provide a description. You simply add a kind of message that the LLM can read. Because just tell me one thing: you have created this tool, right? You have created this tool, you are creating this tool. The LLM will pick tool one, tool two. It

    21:37

needs to pick that tool on the basis of, you can say, a kind of message, right? How will the LLM decide? How will the LLM decide it? So in that particular thing, you have to tell it: okay, this is the information, this tool is built for this purpose. This tool is built for

    21:54

    this purpose like this. Okay.
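Why the docstring matters, in a minimal pure-Python sketch (the function names are made up for illustration): an agent framework surfaces each tool's name plus its docstring to the LLM, and that description is essentially all the model has when deciding which tool fits the request.

```python
def send_email(to: str, subject: str) -> str:
    """Send an email to an employee. Use for any email-related request."""
    return f"Email sent to {to} with subject '{subject}'"

def search_news(query: str) -> str:
    """Search the web for the latest news on a topic."""
    return f"News results for: {query}"

# Roughly the schema an agent framework would expose to the LLM:
tool_specs = [
    {"name": fn.__name__, "description": fn.__doc__}
    for fn in (send_email, search_news)
]

for spec in tool_specs:
    print(f"{spec['name']}: {spec['description']}")
```

No docstring means no description in `tool_specs`, which is exactly why the LLM would have no basis for choosing that tool.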

    Okay. So, we have to write the description.

    So, that's why I think we can even write description here as well because that makes sense. Description.

    I will simply say um this is a tool to search Wikipedia.

    22:11

And I can also write a description here for DuckDuckGo search, maybe: this is a tool to search the web for news.

    Let's say and I can just complete this. This is a tool to send emails to employees.

    Very good. Email sent.

    Let me just run this.

    22:27

    So that's how we can add the description. Obviously these are the integrated functions.

So we do not have the power to add the docstring within these. But I'm so sure that when LangChain integrated these functions, they already added the docstrings, because they know you are

    22:43

very lazy. You will not write docstrings.

    But you should always try to add it. Right.

    So now this is the information. Now these are the tools that we have created.

    Simple. Okay.

    Now let me just add a toolkit. Basically let's create a toolkit.

    Now this is our

    22:58

    toolkit. And in this toolkit we will simply write all the function names that we have.

    So we have first of all um search tool. Search tool.

    And we have then Wikipedia

    23:13

    tool. Perfect.

    And then we have enterprise tool. See how well this particular IDE is picking the context.

    It is written here. Enterprise tool.

    It is suggesting me

    23:29

email sender tool. Bro, bro, which model are you using in the back end?

    I want to know. Okay, now enterprise tool.

    Let me just run this toolkit. So now I have the toolkit of these three tools.

    If I want to see the

    23:45

names of these tools, I can also write like this: toolkit.get_tools(). I think this was a function.

    I don't know if it is available now. List has no attribute.

'get_tools'. Um,

    24:02

    makes sense. I think because this is a list.

So actually we can just write toolkit directly. Yeah.

So now we have these tools: DuckDuckGoSearchRun, WikipediaQueryRun, and a StructuredTool,

    24:18

basically with name equal to enterprise_tool, because that is the kind of tool that we have created. So that is why it is coming out like this.

    Make sense? So now we have created these toolkits and it looks good.

    It looks good. Let me just add this code here.
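The `.get_tools()` error above comes down to this: the toolkit here is just a plain Python list, so you iterate it directly. A quick sketch with stand-in functions (names are illustrative):

```python
def search_tool(query: str) -> str:
    """Search the web for news."""
    return query

def wikipedia_tool(query: str) -> str:
    """Search Wikipedia."""
    return query

# A plain list — it has no .get_tools() method.
toolkit = [search_tool, wikipedia_tool]

# Iterate it directly to see the tool names:
names = [t.__name__ for t in toolkit]
print(names)  # ['search_tool', 'wikipedia_tool']
```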

    So now let's create our agent because now we're good to go.

    24:34

    Perfect. Agent basically react agent.

    React agent. Perfect.

    In order to create the react agent, you already know the code that we have just seen in the documentation. It

    24:50

    is very simple, straightforward code. Just few lines of code.

    Earlier it used to be a little bit bigger but now they have just simplified it. So I can just make some changes.

I want to use GPT-5 mini, and tools equals toolkit.

    25:07

    Perfect. Now let's try to show you the agent.

    Now let's run this. Perfect.

    So this is my agent. Wow.

    Do you know what? They have just started visualizing this in the V1.

    It was not available before. We had zero visibility.

We had zero visibility of the agent that we

    25:23

    were creating before. So now do you know what is this?

This is a kind of Mermaid, it is called Mermaid code. So for whatever it is doing, we can actually see the back-end code as well, but I don't want to show you that code, because you will not understand anything in that back-end

    25:39

    code. No no no one can understand it.

It's basically a way of drawing all of these, you can say, DAGs, and in LangGraph I can show you as well, and then we can visualize it as well. Because in the back end, what I have realized after exploring v1, they have literally

    25:54

rewritten everything in LangGraph. I'm so sure, like, under the hood LangGraph is running, I'm so sure, because this kind of thing we built in LangGraph. But yeah, it's fine. So this is our agent. Now you can see that this is the model, right? Because this is the model.

    26:10

    And what is the model? Model is this this one.

    This is the one. We can also say llm.

It's not a big deal. I can also say model equals llm_openai that we have, right? I can even run this. Um, 'ChatOpenAI' object has no

    I can even run this um chat open object has no

    26:26

attribute 'lower'. What do you mean?

Did we create llm_openai? Yeah, we have this llm_openai.

    Oh, I see. I see.

    I see. Okay.

    Okay. Okay.

    Makes sense. We have to use this name as is because remember our init chat model thing where we have to

    26:42

define the model itself. So we cannot say llm_openai like this, because it is not expecting the model variable here; it is expecting the model name itself, which it then resolves. Okay, that's fine.

    This is the model itself that they are calling. Okay, that's fine.

    That's fine. So now this is the model.

Okay. And these are the tools. Is it similar to this

    Is it similar to this

    26:57

    one? Yes.

Obviously, obviously, this is exactly... I know this looks better, but yeah, this is exactly the same thing, because this is the model and these are tool calls. This start and end is just a way to tell that this graph is

    27:12

starting and ending. Makes sense? This graph is starting and ending. Okay, now I will just show you a very good thing. So now let's test this agent that we have built. Now I want to invoke the agent, but I will invoke it using streaming. Now, what is streaming? So,

    27:28

you are a data engineer, you should know about streaming. Streaming means when we display, or basically use, data as soon as it arrives, right?

So when I will be making this model call, there will be so many things running in the back end, right, in the background, and we want to visualize

    27:45

    this. We also want to see what are the things going on in the background.

    So we'll be using streaming so that we can see everything what's going on and it's a good way to actually understand the thing as well. So let me just show you React agent invoke
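Streaming in miniature, as a pure-Python sketch (the fake function is made up for illustration) — a real agent's `stream()` yields message updates the same way a generator yields chunks, so you can process each piece the moment it arrives:

```python
def fake_llm_stream(answer: str):
    # Yield the answer word by word; each chunk is usable immediately,
    # without waiting for the full response to finish.
    for word in answer.split():
        yield word

received = []
for chunk in fake_llm_stream("the market closed higher today"):
    received.append(chunk)  # display or process right away in a real app

print(" ".join(received))
```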

    28:00

    with streams. Okay, very good.

    Perfect. So now I can bring that streaming code.

    It is also written in the documentation I guess. Uh where is stream?

    Stre stream

    28:17

uh, with tool call. Okay, maybe in the quick start, system prompt... because I use streaming with LangGraph.

I do not use it with LangChain. Uh, okay.

    28:32

    Where is that? I I saw it somewhere.

    I saw it. I don't know where, but I saw it.

    Oh, yeah. Here it is.

    It was under SQL database toolkit because I know like I saw it. So let me just copy this uh let

    28:50

    me just paste it here. So what is this thing?

I am simply trying to make a model call. Okay.

    And I want to write my question. So what I will do?

    I will simply say um who or let's say give me the latest news

    29:09

    about let's say stock market. That's a good question.

    So do you know what will happen? Let me just first of all run this.

So, human message: this is my message, give me the latest news, that I have sent to the AI. Now it will go to the tool call.

    Just wait just wait just wait you will see everything. Just wait.

    29:25

    It is running in the real time. See see now again.

    Again let's wait. Let's wait because it is generating more messages.

    Okay. Okay.

    Tool message. Okay.

    Okay. More tool messages.

    Wow.

    29:43

Okay, let's wait, because we simply posted a very generic message and it is just making more and more tool calls. And now that is done. Now let me just show you what happened.

First of all, let me click here on the scrollable element

    29:59

so that you can see all of the output. So first of all, we made a query, like, we asked a query.

    Give me the latest news about stock market. Perfect.

    This is my message. It went to the

    30:16

model. Yes, simple, remember the flow.

It went to the model, the React agent. Okay, our React agent that we have created.

So now, then what happened? The React agent, basically the LLM, didn't have a built-in function to search for the news, because

    30:32

the LLM is not trained on news, right? Or it cannot be, because you need news, like the latest news, maybe today's news. So now it looked for any tool in its inventory, in its toolkit, and it found a tool. How? Because you provided the description,

    30:48

right. So now it found a tool called DuckDuckGo search. Simple. It used that tool, DuckDuckGo search, it called that tool, and this is the call ID, and these are the arguments: stock market news January 8, 2026, because today is January 8. See,

    31:05

I'm literally getting today's news: US S&P 500, Dow, NASDAQ latest, blah blah blah. Then this AI message didn't generate anything. Didn't generate anything. This AI message

    31:22

generated a tool call, and that's it, with arguments. Okay, with arguments. And remember this thing, because this concept is really important.

You need to understand everything. So, AI message: that means the message is coming from the

    31:38

    LLM. So LLM did not generate a response.

    It generated a tool call. It generated an indication that I want to make a tool call bro.

    Please allow me to make a tool call. Make sense?

    Then it made a tool

    31:55

    call because we have provided those tools. This is the answer that my tool has returned.

    This is not generated by LLM. This is the message that my tool has returned.

Basically, DuckDuckGo search has returned it. Perfect.

    Now it went to AI

    32:13

    message again. It got the data.

    AI message was not satisfied. It made another tool call.

    It said give me more news. Now it got this message.

    Then the AI message said like give me more data. It gave its query with obviously some

    32:30

    changes because it wanted to make some more calls. Then it got the value back from the function.

Then the AI message again made a tool call, and then again a tool message gave its data. Makes sense?

    And then finally AI message has returned the data like returned the answer at the

    32:47

    end. This is the flow.

    The point why I'm explaining this thing because I will just show you the back end as well like how it is running in the background. LLM never generates the answer.

    It simply indicates that I want to make a tool call. If it needs to make a tool call,

    33:03

    it will make the tool call. It will get the response back and then it will generate the response.

If it does not need to make a tool call, it will not do anything. So all of these things are actually managed by LangChain.

Actually, everything is managed by LangChain.
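The flow just described, as a toy pure-Python loop (the fake LLM, the tool, and the message shapes are made up for illustration — a real framework does this with actual model calls): the "LLM" never executes tools itself; it only emits a tool-call request, the runtime executes it, feeds the result back, and only then does the LLM produce the final answer.

```python
def get_news(query: str) -> str:
    """Stand-in tool: pretend web search."""
    return f"Headlines for '{query}': markets mixed."

TOOLS = {"get_news": get_news}

def fake_llm(messages):
    # First pass: no tool result in the history yet, so ask for a tool call.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "get_news",
                                "args": {"query": "stock market"}}]}
    # Second pass: the tool output is in the history, so answer.
    return {"role": "ai", "content": "Final answer based on tool output.",
            "tool_calls": []}

messages = [{"role": "human", "content": "latest stock market news?"}]
while True:
    reply = fake_llm(messages)
    messages.append(reply)
    if not reply["tool_calls"]:
        break  # a final answer was produced, stop looping
    for call in reply["tool_calls"]:
        result = TOOLS[call["name"]](**call["args"])
        messages.append({"role": "tool", "content": result})

print(messages[-1]["content"])
```

This loop — call the model, execute any requested tools, append the results, repeat — is the part LangChain runs for you.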

    33:18

    Under the hood how it is possible to do it. So basically under the hood what do we do?

    We simply first of all create an LLM. We bind the tools manually.

Here LangChain has done that part for us. First step:

    33:34

    tool binding. Very good.

    After binding the tools we make the API call. We basically make the um call to model.

    Right. Then we need to define this logic.

These two arrows, these are not so simple. We need to build this

    33:50

logic. We will simply say: if in the response we have something called, let's say, a tool call, I will just show you, if it says make a tool call, then at that time I will call

    34:07

    that function. How?

because I have the information for the tool call: the name, basically the call ID, plus its arguments, because the LLM is smart enough to generate the arguments. It will say, bro, you can

    34:23

    make this tool call and the tool call will require this argument. I can do that for you but I cannot make a tool call.

You will make the tool call, you will just run the function, and you will give me the response back. Who is 'you'? Who is the LLM's 'you'? It is talking to himself or herself or itself.

    Who llm is you? it is talking to himself or herself or itself.

    34:40

    Make sense? That's how it works.

    That's how it works under the hood. Okay.

    Now, if I just show you um very good example of like tool calling like under the hood. Let me just show you

    34:56

that. Uh, if I want to bind my own model, because currently we didn't bind the model ourselves, right?

Bind the LLM. We didn't bind the LLM ourselves.

    Let me show you manually

    35:13

binding the LLM. So, currently, LLM with tools. And once you grasp this concept, bro, I would say just go into the industry and build something simple.

So, manually binding the LLM with tools. So,

    35:29

currently, if I use my traditional llm_openai, right, if I say llm.invoke, and if I just ask the same thing, what's the latest news, what's the latest news about the stock market, maybe it will just give me

    35:46

    news which is like outdated because it was trained like long way back but it will not do anything but let's see the result and then I can just continue with that because I can uh otherwise give the other prompt for which like it will not have access to do that at all maybe send

    36:01

    an email or whatever So let's wait. Let's wait.

    Let's wait. See, I don't have internet access.

    Perfect. So it cannot do that.

    Right? Now let me show you the magic.

    So this is the uh without binding.

    36:19

    Perfect. Now let me just write with binding.

    Perfect. Now with binding what will happen?

I will simply say llm_binded. And I will bind my LLM with the tools.

llm_openai

    36:37

.bind_tools. Perfect.

    And what are the tools? Basically toolkit.

    Make sense? Now what will happen?

    First of all, let me just run this.

    36:52

    Perfect. Now if I ask the same question from it bind it let me just ask this question let's see what will happen

    37:08

    perfect now do you know what will happen like what actually happened AI message didn't say I can't fetch live news or blah blah blah it has returned empty content because it didn't generate anything but just scroll towards

    37:23

    Just scroll towards right and you will see something called as scroll towards right right see tool calls

    37:40

The LLM is saying, yes, I cannot generate anything, but I can make a tool call. So it has listed the tool it wants to use for the tool call.

That is called binding the model with the tools. LangChain does it automatically for us. And you

    And you

    37:56

    will say like how it is doing this. This is the code.

    This is writing this code for you. Okay.

    So it uses this thing and you can also see it has generated the arguments. See arguments query because we are using query um you can say input.

    I can also show you um if I go here in my duck go

    38:15

search... where is that, DuckDuckGo search, DuckDuckGo search... uh, okay. So this is like a general function, so it knows it will be a query, so it's fine. And then it is saying stock market news today. It has generated a phrase that it will use. Perfect.

    38:34

    make sense and see it has created multiple instances of it now this time again same tool but with different argument because it can have like multiple um you can say those um phrases. So earlier it said like latest

    38:49

news today, then S&P 500 today, then one more, see, Dow Jones Industrial, whatever. So it has actually created three different phrases.

    So it called the model three times and do you know how it happened? You know it knows this thing.

    So LLM returned this thing.

    39:06

    So let me show you. So LLM returned LLM returned this tool call.

Right now, what do we do manually? Like, LangChain is doing everything for you, but what we do manually: we take this list

    39:23

of tool calls, because it can have only one tool call or it can have a list of tool calls as well. We take this list, we make the function calls one by one, and we keep updating that particular response with the message type called

    39:39

    tool message. That's why you are seeing a new type of message called tool message.

See, tool message. We do it manually. We do it in LangGraph.
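That manual step, sketched in plain Python (the field names mimic the tool-call shape shown in the output above; they are illustrative, not the exact LangChain schema). One AI message can carry several tool calls, and each result gets recorded as a separate tool message:

```python
def search(query: str) -> str:
    """Stand-in search function."""
    return f"results for {query}"

registry = {"search": search}

# One AI message returned three tool calls, each with its own arguments:
tool_calls = [
    {"name": "search", "args": {"query": "stock market news today"}},
    {"name": "search", "args": {"query": "S&P 500 today"}},
    {"name": "search", "args": {"query": "Dow Jones Industrial today"}},
]

# Execute them one by one, appending each result as a "tool" message:
tool_messages = [
    {"type": "tool", "content": registry[call["name"]](**call["args"])}
    for call in tool_calls
]

print(len(tool_messages))  # 3
```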

But in LangChain... that's why LangChain is a lightweight, you can say, AI

    39:54

agent builder, because you do not manage anything. Everything is done for you from the agentic point of view, not from the branching and all. Branching is fine with LangChain, but from the agentic standpoint... I hope it makes sense why I'm putting

    40:09

    this this much of effort for this topic because when I was also learning I was so confused and I wish I had this kind of explanation in front of me and in just one video I would have felt so so so good but I'm still so grateful for

    40:26

    all the resources that I also got. So it's fine.

    It's fine. It's fine.

    It's fine. Right?

    Because every resource is good because I would say each resource or every resource will teach you something. It will not like give you something which

    40:42

    is not useful. It will still give you something useful.

    So still grateful for all the resources that I have that I have used. So amazing.

    So now I hope it makes sense. Right?

    Now let's try to build a quick one more example and it will be like very handy and very useful for data engineers because I want to

    40:58

build it, it will be fun. So let's end this video with that example, and then at the end I will give you a very important thing to cover, and it's called RAG.

    I know you are waiting for that thing. Do not worry.

I will also tell you what to cover in RAG and how to cover it, because RAG is actually a

    41:15

totally different area, and you need to go much, much deeper into RAG as an independent topic. But don't worry, I will tell you how to cover that as well, because there's good news: I have a dedicated course on RAG as well. But let's cover this, let's complete this

    41:30

    example first of all it will hardly take 10 to 15 more minutes and then we're going to sum up and then we can just talk about rag as well quickly and then I will just guide you to the best resource for rag simple if you just want to learn from me obviously make sense okay because rag is actually different like totally

    41:45

different as an independent topic. It has nothing to do with any framework. RAG is RAG, like, RAG is totally different. Okay, so let's create our second agent, a React DB agent.

    42:02

Let's try to create an agent for our SQL database. Let's try to do this, it will be fun. So now you'll be saying, Anchal Lamba, from where can we get the database, bro? Is this a question? Is this a question? You don't have a database?

    42:19

I know you would have a database, but let's say the database is on another machine, and you do not want to download any server again, and you do not even have Docker installed. So let's quickly create a lightweight database, it's called SQLite. If you are not familiar with SQLite, it's very simple. SQLite is a

    42:34

kind of serverless database that you can run, and it stores everything within files in the storage, and you do not need any kind of server. It is a lightweight database, so it is not meant for scaling, it's just for POCs, and we are good with that.

    So let's create a file to init database

    42:53

    right and for that I would need to install SQLite if it is not installed already. SQLite SQLite was not found.

    What do you mean in package registry depends on SQLite?

    43:09

    What is this Python version? The resolution failed for other Python versions.

    What project while the active Python version 3.12? Okay.

    The resolution failed for other Python version supported by this uh project supported Python. If you want to add Python regardless failed resolution, provide the frozen flag.

    Why

    43:24

    do I need to provide frozen flag? Let me just import it.

    SQLite 3. It has given me the whole code.

    Let me just comment it out everything and let me just run only this thing import thing. Let's

    43:40

    see if it is working fine or not. Yeah, it is working fine.

    Okay, we do not need your import. I think it would have already imported it.

    So, okay, perfect. That's good.

    That is good. So now what we need to do, we will create a database.

    43:55

    So in order to create a SQLite database, it is very very very simple to create a database. If you have the database here in the folder, it can connect to it and currently we do not have.
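That behaviour, in a small self-contained sketch — connecting is all it takes, and the database springs into existence if it isn't there. Using an in-memory database here so the example leaves no files; pass a path like `"sales_db/sales.db"` to get a file on disk:

```python
import sqlite3

# Connecting creates the database if it doesn't exist — no server, no password.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("INSERT INTO orders (amount) VALUES (?)", (99.5,))
conn.commit()

total = cur.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 99.5
conn.close()
```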

    So it will create the database for us as well. So let's create it.

And yeah, so I'll simply say connection equals sqlite3

    44:14

dot connect. And if you are familiar with pymysql or psycopg2, it is exactly the same.

In those, we simply write psycopg2.connect and then we just pass all the things: host, db, whatever, password and all. Same with pymysql. Here it is sqlite3.connect. So now you'll say, how

    It is sqlite3.connect. So now you'll say how

    44:30

    to connect with this because we do not have any database. We do not have any password.

    We don't have anything. So it is called the location of database where our database is located.

    It is located in the current directory. So I'll simply say um um I will say um

    44:46

my_db. Okay.

my_db, or let's say sales_db just to make it more professional. sales_db.

And then I want to store it inside, um, let's say

    45:02

an orders folder, or let's say sales_db: the path will be "sales_db/sales.db". Perfect. And let's create a folder called sales_db inside chapter_3. sales_db,

    45:19

okay, perfect. So let me run it. And what will it do? Let me show you. It will throw an error.

    Wow. What?

    What's wrong with you, bro? Uh unable to

    45:35

    open database file. Are you serious?

So you need to create this, right? sqlite3.connect.

sales_db. Is the spelling right?

Yes. Okay.

sales.db. I think I know the issue.

    Like there's

    45:52

no issue with the code. Currently this file is running from the langchain-tutorial folder, and we want to run it from this chapter's folder.

So I will simply cd there, or I can also handle the path from here. It's not a big deal.

I could say import os and then os.makedirs

    46:09

if the folder is missing. But we already have the folder.

So we can also use sys. Let me just import sys as well.

import sys, and once we have sys,

    46:28

we can simply add the directory to sys.path. So now what will happen?

It will add this directory to our, you can say, system path. (Strictly speaking, sys.path affects module imports, not the working directory that the relative database path depends on.) So now let's try to run it.

If not, we can just run it from the terminal. "Unable to open database file." Hmm, it should have worked.

    Um OS

    46:45

sys.path. Let me just print it first of all.

What is the directory? Print this thing.

Hm. Okay.

langchain-tutorial/chapter_3/init_db.py,

    47:01

    that makes sense. Okay.

So it is just returning the same thing. Let me try running it from the terminal with cd, because I know this happens whenever we run the file via the debug button; that is not a great way to run a Python file.

We should always run it like python

    47:18

    something like that. So let me just remove it or keep it.

It doesn't make much difference. So let me say cd chapter_3... and then do we want to go inside sales_db?

No, not sales_db. I

    47:33

think chapter_3 is fine. Now if I say python init_db.py...

Perfect.

Now if I check the sales_db folder, it will have created sales.db. Perfect.

Because it creates sales.db automatically.

    Make sense? If I write

    47:51

con = sqlite3.connect(...), like this. And if I just run python, it will create this.

See: sales.db.

    I know this database is empty for now. But database is created.

    Simple. And now I can just rerun the cell as many times as we can.

    Because this will

    48:06

    not throw any error. This will not.

    Why? Because file is there.

    Oh, okay. Makes sense.

Because connect() either creates the database if it doesn't exist or, if it does

    48:23

    exist, it simply connects to it. So let's create a cursor quickly.

Uh, cursor = con.cursor(). And here I can simply see: cursor is this one, connection is this one.

    And now I can create a table quickly.

    48:38

And I will simply say cursor.execute with CREATE TABLE orders: id, customer_name, product_name, quantity, price, total. Perfect.

I want to create this table. And I also want to say cursor.execute again and insert some data.

INSERT INTO orders, and the values will be these. Let me

    48:56

just write the values out. Bro, you could also use executemany here, or just run a loop.

But for now, we are simply hardcoding them

    49:11

    for the demo purpose. So let's run it like this.
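Putting the steps above together, here is a minimal init_db.py sketch. The path, table layout, and sample rows are assumptions based on what is typed on screen (the video stores the file under chapter_3/sales_db/):

```python
import sqlite3

# Assumed path; the video uses chapter_3/sales_db/sales.db
DB_PATH = "sales.db"

# connect() creates the file if it does not exist, otherwise it just connects
con = sqlite3.connect(DB_PATH)
cur = con.cursor()

cur.execute(
    """CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY,
        customer_name TEXT,
        product_name TEXT,
        quantity INTEGER,
        price REAL,
        total REAL
    )"""
)

# Sample rows matching the numbers mentioned in the video (Tablet: 3 x 200 = 600)
rows = [
    ("Alice", "Laptop", 1, 1200.0, 1200.0),
    ("Bob", "Smartphone", 2, 500.0, 1000.0),
    ("Carol", "Tablet", 3, 200.0, 600.0),
]
cur.executemany(
    "INSERT INTO orders (customer_name, product_name, quantity, price, total) "
    "VALUES (?, ?, ?, ?, ?)",
    rows,
)

con.commit()  # commit() lives on the connection, not the cursor
con.close()
```

Note that running this script twice inserts the sample rows twice, which is exactly the duplicate-rows situation that comes up later; add a DELETE FROM orders first, or a uniqueness constraint, if you want it idempotent.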

Okay, this is one way of doing it. You could also try something like with ... as

    49:26

the cursor, using a context manager; there are multiple ways to do it. I personally like plain cursor.execute.

If you want to use a context manager, you could try: with cursor as cursor, then cursor.execute. What is the benefit?

    All the things will be done within

    49:43

    this context. It is called context manager.

    Okay. I can even run this as well like this.

The moment we write anything outside this block, the context is over; we do not need to worry about cleanup.

    Make sense? Okay.

    So now let's say let me just make it here.

    50:04

Okay. The with block is here.

And I can say commit, which belongs on the connection: con.commit(). Okay.

So now let me just run this again. "Cursor object does not support the

50:20

context manager protocol." Oh man, the cursor doesn't support it here. I have used this pattern with Postgres and MySQL and it works fine, but sqlite3's cursor doesn't support it, because SQLite is very, very lightweight.

    So let me just

    50:37

remove it and just follow the traditional approach. Nothing more to it.

What else can we do if it is not allowed? So let me just run it now.
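For reference, here is the actual sqlite3 behavior that caused the error: the Cursor is not a context manager, but the Connection is — and as a context manager it manages transactions (commit on success, rollback on exception), not closing. A small stdlib-only sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")

# `with con:` commits if the block succeeds, rolls back if it raises.
with con:
    con.execute("INSERT INTO t VALUES (1)")

# The connection is still open after the with-block; close it yourself.
count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
con.close()
```

Incidentally, `with cursor:` happens to work in psycopg2 because its cursors do implement the context-manager protocol, which is why the pattern felt familiar; sqlite3's cursors simply don't.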

    50:53

Uh, okay, it has run successfully. Now this sales.db has some data.

    Now we'll say how do we know? I can just show you.

Um, I can simply create a demo file just for you. I will call it demo.py, and I will

    51:10

simply say import sqlite3. I will create a connection to the same file and run SELECT * FROM orders, and let's see what it returns.

    Okay, let me just show you bro. Perfect.

    51:26

And do we have a terminal open for this? No.

Oh man. Let me write cd.

Wow, it is saying this path doesn't exist.

    51:42

    Oh man, let me just open the new terminal. Amazing product man.

    Amazing product. Amazing product.

So I will simply say python demo.py.

    51:58

Perfect... "Object does not support the context manager protocol."

    Okay, we forgot, bro. Perfect.

    Let me just run this. Perfect.

    See, I can see the data. That

    52:15

    means data is there. Wow.

That's how we retrieve the data. So now we have the database ready.
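The read-back script (demo.py) boils down to a connect-and-fetchall. A self-contained sketch, using an in-memory database here so it runs anywhere; point connect() at sales_db/sales.db for the real file:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # use "sales_db/sales.db" for the real file
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, product_name TEXT, total REAL)"
)
con.execute("INSERT INTO orders (product_name, total) VALUES ('Tablet', 600.0)")

# SELECT * FROM orders -- fetchall() returns a list of tuples
rows = con.execute("SELECT * FROM orders").fetchall()
for row in rows:
    print(row)
con.close()
```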

    Your tension, your stress is over now because I have created the database for you. You do not need to download any server.

    So we have this database ready. Now what we are trying to do here, we want to create an agent which will take care of our SQL

    52:31

    things. Can we do that?

Yes. It will automatically run everything and give the data back to us.

So as you can see, we have three records, 1, 2, 3. We ran it twice, I think, which is why it is returning like this. Uh, let me just clear it and run

    52:47

it again just to see the response. We have three rows.

    Okay so we have laptop, tablet and smartphone, right? So maybe I just want to check the order for tablet, right?

    Maybe I want to check it or maybe I want to just check how many sales we made for smartphone. I want to just see

    53:03

you are creating an agent for a manager, say, and you are a data engineer. You have built this particular data warehouse.

Now you need to build an AI agent for your database as well, right? So what will you do?

You will create a SQL agent, and you can build a SQL agent very easily with LangChain.

    53:19

Literally, man, you do not need to build any of that yourself. It will run everything in your sandbox, and it is very good.

    Let me show you if I can just find that documentation. This is the one SQL database toolkit.

    So they have created a special tool for SQL

    53:34

workloads. But yes, you need to be very, very cautious about running it in production: you need to add guardrails so that no one can inject queries that could harm your database.

So you need to take care of those things; it is common sense. But let's say we

    53:50

    are good with that and let's actually try to build this. Okay.

    So our DB is done. So first of all I can just create this DB.

What is this DB? This DB is nothing but, you can say, a connection that you need to create.

    So first of all I will write this thing

    54:07

    SQL database agent. Perfect.

    And I can just close this and let me just write it here. Wow.

    54:24

    Let's wait for it. Okay.

    So this is done. So now now what we will do?

We'll create an agent which will run the query as well, because there are two types of agents: one

54:40

that only generates the query, and a second one that also runs it. We want the one that runs the query as well, because we do not want to just generate SQL text.

    Okay. So let's first of all create the database and we have a special function called SQL database which is this one.

    And we do not need to

    54:56

even create a cursor or the connection manually; everything will be done by LangChain.

So how can we create the database connection? If you read the documentation, you may feel a little confused.

    But do not worry. You do not need to create this connection at all.

You simply need to use this SQLDatabase class, and that's

    55:12

    it because you already have the engine. Okay, make sense?

So you simply say SQLDatabase... or name the variable whatever you want; let's say sql_db =

55:28

SQLDatabase.from_uri, and then you simply need to pass the URI, which looks like this: "sqlite:///sales_db/sales.db".

55:45

Perfect. The path sales_db/sales.db makes sense because we have the sales_db folder and the sales.db database file within it.

    Okay so this is our DB that we have

    56:02

    created. So so far we are good.

    This thing is done. Now we need to create LLM.

Let's quickly create the LLM, and let me just bring over that code,

    56:19

    this one. Let me just write it here.

So it will create the LLM for us: llm_openai.

So now we have both things. Yes.

    LLM and DB. And now we need to create a toolkit like this.

    Okay. Make sense?

    Now we just need to

    56:35

    create the toolkit like this. Let me just copy it.

    Uh let me just paste it here. So now what are we trying to do?

    We are creating a toolkit and in this toolkit we have so many tools related to SQL. Really?

    Yeah. They have built-in toolkit

    56:51

    where they have integrated so many tools within the same toolkit. Let me just show you.

    If you scroll down, if you just write get tools, you will see the entire list of tools. Wow.

We have a query-SQL-database tool. We have an info-SQL-database (schema) tool.

We have a list-SQL-database tool. And we have a query-

57:06

SQL-checker tool. So these are the built-in tools created by LangChain.

    You can even create your own function. It's not a big deal.

What will happen? You will simply take the, you can say, input.

Then you will pass the database as context, get the query in return, and then you will

    57:22

just run it in a sandbox. So it's nothing magical.

It's just a wrapper that makes your life easy. You can simply use the functions.

Make sense? So the toolkit is SQLDatabaseToolkit with db=sql_db (not db=db) and llm=llm_openai.

57:41

Perfect, now it makes sense. So what will it do? It will use this database as the context and this model, obviously. Because tell me one thing: is anything magical happening here? No. What is happening is: I go to the LLM with my

    57:59

natural language query, right? I go to the LLM.

    I will say, "Hey, generate this SQL query." LLM will say, "Okay, bro. I will generate the query, but at least tell me the database name.

    At least tell me the table name. At least tell me the column names.

    Only then I can create the query." Simple.

    58:17

So, you are going to the LLM with this database as the context. When you pass the database as context, the LLM has, you can say, the context of the entire database.

    It knows the database name. It knows the columns.

    It knows the table name. Everything.

    58:34

    Oh, okay. So, that's how it generates a query.

    You're simply passing the context and that's it. Okay.

    So, let's run this. Toolkit is ready.

And if I just show you toolkit.get_tools(), you will see all four

    58:50

    tools that you have. Perfect.

    Now, we have toolkit as well. We have everything as well.

So, now how can we create the agent? Just skip everything else.

You simply need to copy this and paste it here. So we are using: from langchain.agents import create_agent.

    Okay. So

    59:08

now this is the function that we use, and it takes only the LLM and the toolkit. Do we have the toolkit? Yeah, we have the toolkit variable, and I need to use llm_openai.

    Perfect. So now

    59:25

    this is my agent. This is my agent.

See, now it has created this agent with the model, the tools, and so on. And tools includes all the SQL tools.

    Make sense? Make sense?

    Now, do you know what?

    59:40

    Now, you can literally invoke this agent and you will get the answer. Okay, let's see.

Let's see. Okay, I will say agent.invoke.

    Okay. And I want to say how much

    59:56

    sales we made for smartphone. I know the answer.

The answer is: if I look at the terminal, at the real records for smartphone, we made 500, right? But is this 500 or 1,000?

    00:11

What are these two columns? Let me just check init_db.py, and let me just remove this. So we have price and total.

    Okay. Oh, okay.

    Price and total. Oh, because we have quantity as well.

So the total price is 1,000. Okay.

1,000.

00:28

Let's try tablet instead, because it is very straightforward: 600.

    Let's use this tablet. How much total sales we made for tablet?

    It should be 600. If it is using everything right, if it is using right SQL and

    00:44

    everything. Wow, we have an error.

"Expected dict": how much total sales we made for tablet? Okay.

So it expects a dictionary. Okay.

    So do we need to use any kind of um input while calling it?

    01:00

    Let me see. Yes.

    Okay. Okay.

We need a user message. Oh, so "messages" is the key.

See, this is the downside when we use these auto-generated functions: you have to check the expected input format. But it's not a big

    01:17

deal. We can just pass it in that shape as well.

So, if you've got the idea, what I'm trying to ask is: how much total sales we made for tablet.

    01:33

    So we need to pass the messages key as well. Okay.

    Let's now try to run it. Let's see what happens.

    Okay. Okay.

    Very good. Very good.

    Very good. Yes.

    Yes. Yes, it is generating the output.

    Let me just

    01:48

    zoom out a little bit. Perfect.

Total sales for tablet = 600. And it has already given you the reasoning as well.

Three units at 200 each. Wow, man.

    Literally amazing. Let me show you what actually happened.

    So,

    02:05

    first of all, I asked this question. How much total sales we made for tablet?

    Perfect. Okay.

Now, it made a tool call to this particular function, called sql_db_list_tables.

We didn't create this function. We didn't create this tool.

LangChain created this tool. So what it does: it

    02:21

simply lists all the tables that we have. Currently I asked a query involving a single table, but maybe I'll ask a query which needs to join multiple tables.

    So first of all, it will list all the tables and it will give to LLM. Simple.

    See list

    02:36

tables are done. Then it figured out that I want to use the orders table.

The LLM is smart enough to identify that. Perfect.

Now the table information: see sql_db_schema, which returns the schema of orders, i.e. the column names. So this tool actually ran and

    02:54

gave the LLM the schema, all the column names. Then this is the DDL: CREATE TABLE orders (id INTEGER, ...) and so on.
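Under the hood, the list-tables and schema tools map to ordinary catalog queries; a stdlib-only sketch of the same idea against SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, product_name TEXT, total REAL)"
)

# What a list-tables tool boils down to: enumerate the table names
tables = [
    r[0]
    for r in con.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
]
print(tables)

# What a schema tool boils down to: fetch the CREATE TABLE DDL
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'orders'"
).fetchone()[0]
print(ddl)
con.close()
```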

This is the tool message, and this is the schema that we pass to the LLM. Now the AI is smart enough to generate

03:10

a query and run the generated query, based on the schema. That is why in modern applications we always say that whenever you're creating a table, whenever you're

    03:26

generating tables, you should provide a description for each column so that the LLM can easily pick the right columns. People say LLMs are hallucinating,

that LLMs are not generating the right results... often because data governance is poor. Data quality is poor.

    You need to

    03:41

    create tables with right descriptions. In the modern tools and technologies, there are some options where we can just generate the descriptions like this.

And it is important. See, I knew there would be some column, let's say total, so the model aligned that particular

    03:58

    question with that particular column name. But sometimes we are asking a question which is not very much aligned with the column name.

So how will it find it? Using the description. Makes sense?

    04:14

That's how it works. So see, you have actually built an agent that can not just write SQL; you have

04:30

actually built an agent which can do anything on your database. Anything. Make sense?

    Makes sense. Makes sense.

    So that's how you just do all of these things. Wow.

    Literally. Wow.

    So, it is a kind of

    04:48

    agent in itself. H okay.

So now let's try to understand the concept of RAG, and then I will provide you the best resource for it as well. First let me give you that, so then you can jump

    05:04

onto that topic. Just search "rag anlamba"; this is the full RAG course that you can watch, and it is amazingly beginner friendly; it will not expect anything from you in terms of RAG, and

    05:21

it is, I think, a 3.5-hour course. You can simply go there, and you will love it; it goes from zero to pro, and I actually mean it, it is an amazing course. Okay, but let's cover what exactly RAG is, just the fundamentals, and then you can go to that course. Makes sense? Because, see, I

    05:37

could have just added RAG to this particular course as well, not a big deal for me, but I know the depth of the fundamental, conceptual, coding, practical, and implementation knowledge that you will get in that 3.5-hour course;

    05:53

obviously it is worth watching. So I thought: let's point to it and provide that masterpiece we created to my lovely data fam. Okay, but still, let me tell you what exactly RAG is and why RAG is so popular.

    06:08

Let's try to cover RAG. Now, RAG actually solves a problem.

    Okay, what problem? So let's say you have

    06:24

    this thing. Okay.

Well, let's even remove this. Let's say you just have the LLM. Perfect.

    06:40

    Let's bring it here. Okay.

    Perfect. So, let's say you just have this thing LLM and you are here.

    Perfect. Now, this LLM

    06:56

doesn't have access to your data; that we already know, and we have validated it as well. Let's say you want to ask some questions about a PDF, and that PDF is actually the sales file for your organization.

    07:14

Okay, simple. Now, one thing: if I want to ask questions related to that data, it can be a PDF, it can be a text file, it can be literally anything.

It can be a database as well. Okay, because RAG is not just a tool or a

07:30

function. RAG is a concept.

RAG means, first of all, retrieval-augmented generation. This is

    07:47

RAG. Okay.

Now, what happens when the LLM doesn't have access to your data? How will it access your data?

The answer: it cannot. That is why we provide a kind of inventory,

08:02

a folder, a repository, from where the LLM can access the data. Okay, so let's suppose I wrote this question:

    08:17

"How much sales did we make?" The LLM doesn't know this. Let's say you are asking this about your organization, right? But the LLM doesn't know this thing at all,

    08:33

at all. Okay, so how would the LLM know this? What I will do is attach something, let's say a tool; let's say I provide this particular archive, or let's say "my data".

    08:51

Okay, I have provided this. So I will simply link these two things, like this.

So now, whenever I ask this question, "how much sales we made," or any question,

    09:09

and I know that information is available within this data, the LLM can answer it. Hm, okay, is this magic?

No, there's, you can say, logic plus formulas behind it. So what

    09:25

does it do? RAG is basically a multi-step process. That's why I'm suggesting you watch that video.

    Okay, but still I'm just simply giving you very high level knowledge, very high level. So what it does, it stores your data in the form of vectors.

    Okay, in

    09:41

    the form of vectors. Okay, let's see if we have anything for vectors.

    Oh, let's use this one. Yeah, let that makes sense.

    Perfect. This will store your data in the form of vectors.

    09:59

Okay, let's say vector 1, vector 2, vector 3. Let's say this has three vectors; the exact number depends on the chunk size that you pick,

    10:16

    right? So it has these three things okay under the hood.

    So now what will happen? This LLM will go here.

The LLM will actually go to this thing. So when it goes to "my data," it is actually going to this vector store.

    Make

    10:33

    sense? So I can change the angle now.

    So this is the reality of it. Okay.

So it is actually talking to this store. What it does: it converts your question into a vector as well.

    10:50

Okay. Hm, interesting. It is called the query vector, because this is the query. Now this query vector will go inside this particular

    11:05

area and find the relevant vector. So it goes inside, let's say here, and it finds that this query is similar to this particular vector.

    11:21

How does it know this? Through a formula, something called cosine similarity.

It can be Euclidean distance as well, but do not worry, we are

    11:37

not teaching you statistics here, okay? So don't worry at all. We can just use this. Hmm, it is not letting me move this shape; let me just push it, because I want to get to this vector anyhow. Perfect.

    11:58

    bro okay let's do it like this. Okay, perfect.

So let's say this query is actually similar to vector 1. Okay, do you know what it will do?

    It will return this vector one like

    12:15

    this like this back to the LLM. Okay, now what will happen?

This LLM now has this information, in the form of a vector.

    Okay. And then obviously the original

    12:33

    text as well. Let's say it has the file associated with it.

    Um let's say any file. Let's say this is the file associated.

    Let's say the CSV or this txt any file. Okay.

This file is basically textual data, because the actual data is

12:49

in the form of text, right? So this text will go to the LLM.

    How? On the basis of similarity.

    And what formula are we using for similarity? You do not need to know this for now.

    Right? Like how it is just comparing this query with this vector.

How does it know it is

13:05

similar to this one and not to these others? Basically, we achieve this using cosine similarity or Euclidean distance.

But you don't need to worry about that. Just imagine that the LLM is doing all of that similarity-search work for you.

Okay. And LangChain has

13:21

a dedicated function called similarity_search. We do not need to write that function ourselves, because every vector database has its own particular way to retrieve and compare the data.

So we can just use LangChain's package. Simple. So when we have this

    13:38

information available, just tell me one thing: can the LLM answer this question? Obviously yes, because it will use this data. Now it will answer you back

13:54

with the right answer. Okay, perfect; let's say "right answer." Makes sense? That's the concept of RAG. Now, why do we call it retrieval-

    14:09

augmented generation? First, retrieval: it retrieves the similar vector.

Then augmented: augmented means enriched, made richer.

So we augmented our model with this particular text, which was not

    14:25

available to the LLM before, and at the end we generated the answer with this context. This is called retrieval-augmented generation.
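The retrieve step can be sketched with toy vectors and plain-Python cosine similarity. In a real system the vectors come from an embedding model and live in a vector store, but the math is the same; the vectors and texts below are made up for illustration:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "document" vectors paired with their original text
store = [
    ([0.9, 0.1, 0.0], "Total sales for Tablet were 600."),
    ([0.1, 0.8, 0.2], "The office is closed on Sundays."),
    ([0.0, 0.2, 0.9], "Our logo is blue."),
]

query_vector = [0.8, 0.2, 0.1]  # the embedded question

# Retrieve: pick the stored vector most similar to the query vector
best_vector, best_text = max(
    store, key=lambda item: cosine_similarity(query_vector, item[0])
)
print(best_text)
```

The retrieved text is what gets stuffed into the prompt (the "augmented" part) before the model generates its answer.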

This is the high-level overview, or you can say the high-level process, of RAG,

    14:42

the complete process. Now, how to store this data, how these vectors are created, how the similarity search happens, how the augmentation happens, how generation happens: you can watch the video I have created on this channel; you can search it on YouTube, "rag anlamba," okay, or I will

    15:00

also attach the video in the description. If not, you can simply search "anchar rag" on YouTube and you will find that video.

    I would say that's all for this particular video and I know you learned a lot of things and you loved this video. So just just just hit the subscribe button right now and just

    15:16

    click on the video coming on the screen and I will see you there. Bye-bye.