Prompt Engineering Full Course 2025 | Prompt Engineering Tutorial For Beginners | Simplilearn


Category: AI Tools

Tags: AI, Automation, Prompting, Technology, Tools

Entities: ChatGPT, Claude, GitHub Copilot, Google Gemini, OpenAI


Summary

    Introduction to Prompt Engineering
    • Prompt engineering is a key skill in AI, crucial for optimizing AI model performance.
    • The course covers basics of prompt engineering, real-world applications, and advanced techniques.
    • Multimodal prompting and tools like GitHub Copilot are explored.
    Business Fundamentals
    • Prompt engineering can be used to generate business ideas and strategies, enhancing productivity.
    • AI tools can automate tasks, improve workflows, and support business operations.
    Technical Concepts
    • Understanding of neural networks, language models, and AI tools like ChatGPT and GitHub Copilot.
    • Key technical concepts include multimodality, tokens, weights, parameters, and transformers.
    AI Tools and Applications
    • AI tools like ChatGPT, Claude, and Google Gemini are transforming industries with advanced capabilities.
    • Applications include customer support automation, content creation, and data analysis.
    Advanced AI Techniques
    • Exploration of advanced AI techniques like reinforcement learning and context engineering.
    • Prompt tuning and optimization strategies for enhancing AI model performance.
    Takeaways
    • Prompt engineering is essential for optimizing AI performance in various applications.
    • AI tools can automate complex tasks, improving efficiency and productivity.
    • Understanding AI fundamentals and technical concepts is crucial for leveraging AI tools.
    • Advanced AI techniques like multimodal prompting and reinforcement learning offer significant benefits.
    • AI tools and applications are transforming industries, offering new opportunities for innovation.

    Transcript

    00:00

    [Music] Welcome to the Prompt Engineering Full Course by Simplilearn. Ever wondered how AI models like ChatGPT really work, or how to make them perform at their

    00:15

    best? You are in the right place.

    Prompt engineering is one of the most in-demand skills in AI right now, and we are here to make it super easy to understand. So in this course we will start with the basics of prompt engineering and we'll

    00:30

    show you how it's used in real-world situations. You'll also learn about multimodal prompting, which involves working with different types of inputs and outputs for AI models.

    We'll also dive into using ChatGPT for programming so you can see how AI can

    00:47

    help with coding tasks, and we'll also introduce you to powerful tools like GitHub Copilot in agent mode to boost your workflow. And as we move through the course, you'll get hands-on experience in building apps with ChatGPT and exploring cool topics like vibe coding,

    01:04

    context engineering. We'll also teach you how to use OpenAI Sora and show you how to build a website with ChatGPT.

    Plus, we'll walk you through creating a Telegram bot and using ChatGPT for analysis and SGP to level up your

    01:19

    skills. And to top it all off, we'll show you how to turn your new prompt engineering skills into real-world opportunities and even make money using ChatGPT.

    So, hurry up and let's get started. Before we move on, here's some quick information.

    If you're interested

    01:34

    in growing your career in AI and machine learning, this course is a great way to start. The Professional Certificate in AI and Machine Learning by Simplilearn and IBM will help you master all the key AI skills like Chat

    01:50

    GPT, LLMs, deep learning, and agentic AI through live classes and hands-on projects. And in just 6 months you will work on real-world industry projects, use 18+ popular tools like Python and TensorFlow, and earn certificates from

    02:06

    Purdue and IBM. You'll also get career support, including resume help, mock interviews, and job assistance.

    So what are you waiting for? Hurry up and enroll now and you can find a course link below.

    Hello everyone and welcome to this amazing video on prompt engineering by

    02:21

    Simplilearn. It was November 30, 2022.

    Sam Altman, Greg Brockman, and Ilya Sutskever would never have thought that, with the push of a button, they would completely alter the lives of all human beings living on the earth and of future generations to come. On November 30, the

    02:38

    OpenAI team launched ChatGPT. ChatGPT was born that day, albeit as a very small event in the history of internet evolution, but one that can no less be marked as one of the most significant events of the modern IT industry.

    ChatGPT, a

    02:54

    text-based chatbot that gives replies to questions asked of it, is built on the GPT large language model. But what was so different?

    I mean, the Google search engine, YouTube, and the Firefox browser have all been doing the same for decades. So how is ChatGPT any

    03:09

    different? And why is it such a big deal?

    Well, for starters, ChatGPT was not returning an index of websites that had been SEO-tuned and optimized to rank at the top. ChatGPT was able to comprehend the nature, tone, and intent of the query asked and generated

    03:26

    text-based responses based on the questions asked. It was like talking to a chatbot on the internet, minus the out-of-context responses, with the knowledge of 1.7 trillion parameters.

    It was no shock that a computing system as efficient and prompt as ChatGPT would

    03:43

    have its own setbacks. And so it did.

    It was bound by the parameters of the language model it was trained on, and it was limited to giving outdated results, since its last training data was from September 2021. Still, ChatGPT made waves in the tech community and continues to

    03:59

    do so. Just have a look at the Google Trends search on ChatGPT and hundreds of AI tools.

    The sheer interest that individuals and enterprises across the globe have shown in ChatGPT and AI tools is immense.

    04:15

    [Montage of clips repeating "AI" and "generative AI."] Now here comes the fun part.

    ChatGPT, or for that matter any large language model, runs on neural networks trained on millions, billions, and even trillions

    04:32

    of parameters. These chatbots generate responses to user queries based on the input given to them.

    While it may generate similar responses for identical or similar queries, it can also produce different responses based on the specific context, phrasing, and the

    04:48

    quality of input provided by each user. Additionally, ChatGPT is designed to adapt its language and tone to match the style and preferences of each user.

    So, its responses may vary in wording and tone depending on the individual user's communication style and preferences.

    05:05

    Every user has their own unique style of writing and communication, and ChatGPT's responses can vary based on the input given to it. So this is where prompt engineers come into play.

    Prompt engineers are experts at prompt engineering. Sounds like a circular

    05:20

    definition, right? Well, let's break it down.

    First, let's understand what prompts are. Prompts are any text-based input given to the model as a query.

    This includes things like the questions asked, the tone mentioned in the query, the context given for the

    05:36

    query, and the format of output expected. So here is a quick example for your understanding.
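    The prompt components just described (question, tone, context, expected output format) can be sketched in a few lines of Python. This is only an illustration: the field names and template below are invented for this example, not from any specific tool.

```python
# Minimal sketch: composing a prompt from the components named in the course.
# Field names and the template layout are illustrative assumptions.

def build_prompt(question: str, tone: str, context: str, output_format: str) -> str:
    """Combine the question, tone, context, and expected output format."""
    return (
        f"Context: {context}\n"
        f"Tone: {tone}\n"
        f"Task: {question}\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    question="Summarize the attached release notes.",
    tone="friendly but concise",
    context="The reader is a non-technical manager.",
    output_format="three bullet points",
)
print(prompt)
```

    Written out this way, it is easy to see that the same question with a different tone or output format yields a different prompt, which is exactly why identical queries can produce different responses.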

    Now that we have discussed what a prompt is, let us understand who a prompt engineer is and why it has become a job of the future.

    Broadly speaking, a prompt

    05:51

    engineer is a professional who is capable of drafting queries, or prompts, in such a way that large language models like GPT, PaLM, LLaMA, BLOOM, etc. can generate the response that is expected.

    These professionals are skilled at crafting accurate and contextual prompts

    06:06

    which in turn allows the model to generate desired results. So here's a quick example for you.

    Prompt engineers are experts not only on the linguistic front; they also have extensive domain knowledge and are well versed in the functioning of neural networks and

    06:22

    natural language processing, along with knowledge of scripting languages and data analysis. Leading job platforms like Indeed and LinkedIn already have many prompt engineer positions.

    In the United States alone, job postings for this role run in the thousands,

    06:37

    reflecting the growing demand. The salary of prompt engineers is also compelling with a range that spans from $50,000 to over $150,000 per year depending on experience and specialization.

    So there are multiple

    06:52

    technical concepts that a prompt engineer must be well versed in to be successful in their job, such as multimodality, tokens, weights, parameters, and transformers, to name a few. Whether it's healthcare, defense, IT services, or the ad tech industry, the need for skilled prompt engineers is on

    07:08

    the rise. There are already several thousand job openings in this field and the demand will continue to grow.

    So, if you want to hop on this amazing opportunity and become an expert prompt engineering professional, then now is the time. Have you ever wished you could just tell your computer what you want and bam, it

    07:24

    builds it for you? Well, guess what?

    We're pretty much there. Hey everyone, today I'm going to show you something called vibe coding.

    And before you roll your eyes, hear me out. This isn't some fancy buzzword.

    It's literally coding that just flows. You know that feeling

    07:40

    when everything clicks, when you're in the zone and ideas turn into working code without the usual headaches? That's called vibe coding.

    Here's what's crazy. AI tools have gotten so good that you can basically have a conversation with them and walk away with a real app.

    No

    07:57

    more staring at blank screens wondering where to start. No more getting stuck on syntax.

    Just you, your ideas, and tools that actually get it. Today I'm testing three AI coding tools that everyone's talking about.

    First, we have Bolt. Now this thing is insanely fast. You

    You

    08:14

    describe what you want and it builds it while you watch. Then we have Lovable, which is perfect if you care about making things look good.

    It's like having a designer and a developer rolled into one. Then we have Pythagora.

    The coolest part is you just chat with it

    08:29

    like explaining your idea to a friend, and it builds complex apps. I'm going to show you exactly how each one works.

    The good stuff and the not-so-good stuff. Real demos, real code.

    And at the end, we are doing a showdown. Which tool actually

    08:44

    delivers on this whole vibe coding promise. So let's find out.

    So let's start with our first AI tool for vibe coding, which is Pythagora. So from here you can ask Pythagora to build anything.

    For example, it's mentioned here that ask Pythagora to build a CRM app. So we

    09:02

    will ask the same: build a simple interactive CRM app, and I'll just hit enter from here. You have to choose whichever account you're using.

    09:17

    So as you can see it's setting up your Pythagora workspace. So as you can see our workspace is being created and now from here you can just uh simply describe.

    So as you can see from here I've given a prompt to create a simple customer relationship

    09:33

    management app. The goal is to keep track of customer information, manage relationships and communication, and help with sales and support.

    And here's what I have in mind. I've given the app features I want: the task management and notes sections, a simple dashboard, communication logs. I've also given the

    I've also given the

    09:49

    UIX design element. Now before this you can just fill out this essential questions asked.

    So I'm using it for some other purpose. How big is your company?

    Can just mention anything. And then after I've given the

    10:05

    prompt I'll just hit continue. We'll wait for a few seconds.

    So now as you can see it is giving us to write some specification. So this is the overview of the specifications which I have mentioned.

    So Pythagora is now generating a

    10:22

    detailed specification for the app based on your input. So it's basically creating documentation of the overview which we had given.

    So it's given that right specifications. So are you satisfied with the project description?

    So I'll just hit on yes.

    10:42

    So it's building all the components and the APIs. It takes a bit of time.

    So now as you can see that our app is ready and I'll show you the preview of

    10:57

    our app. So we'll open this in our browser.

    So you can see from here you can just enter your email and then you can enter your password or if you don't have an account you can just sign up and then you can create your account. So as you can see this is our dashboard that

    11:12

    has created for us. I mean just look at it.

    This is just awesome. Look at the features it has added and can you believe this AI has made it completely.

    This is the number of customers, the active leads we have, the upcoming tasks. I mean the quality is just so

    11:28

    premium, and you get it in the free version without spending any money. I'm pretty impressed by this. I would rate it 10 out of 10.

    This is absolutely the best. So let's move on to our second AI tool on our list, which is Bolt.

    So from here you can just click

    11:47

    on the first link. So here in Bolt you can just share your ideas.

    So I've mentioned: build a simple e-commerce website with good UI/UX, and then I've mentioned

    12:04

    more app features and the shopping cart I want added, and I'll just hit enter. You can just sign in with your Google account.

    So now from here you can just give the prompt I've already given. So we'll wait for a few minutes till our preview will appear here.

    So it's first

    12:21

    creating the overview of the prompt which I've given. And now you can see it's installing the dependencies from here.

    Now it's written that I have created a comprehensive production ready

    12:37

    e-commerce website with beautiful design and excellent user experience. The application features a modern, clean interface with sophisticated color schemes, smooth animations, and intuitive navigation.

    So we'll just have a look once we are done creating this.

    12:53

    It takes a bit of time, but I think it's ready now. So we can just open it in a separate tab.

    So now from here you can see that our e-commerce website is ready, and yes, this also looks amazing. It has been named ShopHub, and

    13:10

    this is the new collection here, and these are the products. You can see this hover animation that it has given us, which also looks really premium. Just look at this. And this is the drop-down list where we have all the quick links, customer

    13:26

    service, and the contact as well. I'm pretty impressed by Bolt.new because it has created such an amazing website, and the minimalism and sophisticated look it has given my website are really, really good. The color layout is also blue and white, and it

    13:43

    has kept it very minimalistic. So for this also I would rate it maybe 9 out of 10, or maybe 10 out of 10, because both of them are really good.

    Just look at this Pythagora app, and here we have the Bolt.new one.

    13:59

    Both are amazing. I can't actually compare the two, but then again, let's check out the third AI tool on our list.

    We have Lovable. So we'll be checking out this AI tool and how we can create amazing applications within

    14:15

    seconds. So, I'll ask uh this AI to create an amazing weather app with features and good UI UX design.

    So, let's hit

    14:32

    enter. Okay, you can just log in with your account.

    So, now we'll just wait a few seconds while our application is built. Now, as you can see, it's working on it, and I must say the interface of

    14:49

    this AI looks pretty good. You can also upload images as reference and you can preview your changes as well.

    So now it's basically working. So from here you can see that it's working on the index.

    This is the code

    15:04

    which you can see from here. So now you can see that our app is ready and we'll just preview it and again just look at the features it

    15:20

    has added. This also looks really good.

    This is the uh weather application named weatherly. And here you can just search your city.

    You can just enter your city name. Let's suppose it's Bangalore and it will give you the weather updated reports.

    So you can see from here no

    15:37

    it's UK. just enter.

    So I don't know why it's showing the UK, but the weather information is actually correct. You also have the weather details: the humidity, wind speed, visibility. Plus you get this 7-day forecast (today, tomorrow), and this is

    15:53

    again amazing, and this is also good, so I'll rate it 9 out of 10. Now, let's compare all three AI tools with which we created our applications.

    So the first one we had is

    16:09

    the CRM application, CRM Pro, using Pythagora. So here is our application.

    The second one we built using Bolt: an e-commerce website named ShopHub. You can have a look at this.

    And the third one we created a weather app using

    16:25

    Lovable. And here you have it.

    Now let me know in the comment section which one you actually like the most. For me personally, I think I would put Bolt at number one.

    Second, I would rate Pythagora. And third, Lovable.

    Now, let me know which one you

    16:40

    liked the most. There was a time just a few months ago when we thought we cracked it.

    You could literally say, "Hey AI, build me a to-do app." And just like that, code appeared. No setup, no planning, just vibes.

    It felt like

    16:55

    magic. We called it vibe coding.

    But here's the thing about magic. It's impressive until it breaks.

    People started realizing the code didn't scale. The APIs were hallucinated, tests were missing, and once you moved past a

    17:11

    simple prototype, everything fell apart. Why?

    Because the AI wasn't actually thinking, it was just guessing. And that's where context engineering comes in.

    Instead of just saying build a to-do app, you tell the AI who it's for, what

    17:27

    features it needs, how the code should be structured, which tools to use, and even what good output looks like. It's not about shouting instructions.

    It's about setting the stage. And once you do that, the results are on a different

    17:42

    level. Today, you'll learn what context engineering is, how it works, and how to use it with a real hands-on demo.

    Here's what we'll cover. Feel free to skip ahead, but it all connects.

    First, we'll go through what context engineering is, where VIP coding broke our hearts,

    17:59

    prompt engineering versus context engineering, the full ingredient list of good context, biggest challenges and proven fixes, and live demo. So, let's get started.

    So, first let's start by understanding what is context engineering. Now, I'll give you an

    18:15

    analogy of a picky chef. Imagine hiring a chef and only saying, "Make dinner for me." No ingredients list, no dietary notes, no guest count.

    The meal outcome is pure roulette. Now here, context engineering gives the chef everything

    18:30

    needed: the ingredients in the pantry; the dietary notes, which include no nuts, vegan friendly; past dinners, so that we don't repeat; the preferred plating style. So basically, in AI terms, context equals rules plus data,

    18:46

    memory, tools, and the desired output. It's the engineered environment that lets a large language model reason, not guess.

    So context engineering in AI involves organizing and managing all the

    19:02

    elements needed for a task, such as rules, data, memory, tools, and the desired output, which I already told you. This helps the AI make clear, reliable decisions instead of just guessing, by providing all the necessary

    19:17

    background information. Context engineering ensures that AI can perform well in complex situations.
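    The "context = rules + data + memory + tools + desired output" formula can be sketched as a simple checklist in Python. The key names and the helper function are illustrative assumptions, not part of any real framework; the point is only that an under-specified context is detectable before you ever call a model.

```python
# Sketch of "context = rules + data + memory + tools + desired output".
# The five keys and the validation helper are invented for illustration.

REQUIRED_ELEMENTS = ("rules", "data", "memory", "tools", "desired_output")

def missing_context(context: dict) -> list:
    """Return which of the five context elements are absent or empty."""
    return [key for key in REQUIRED_ELEMENTS if not context.get(key)]

# The picky-chef analogy, written as engineered context:
dinner_context = {
    "rules": ["vegan friendly", "no nuts"],           # dietary notes
    "data": "ingredients currently in the pantry",
    "memory": ["past dinners, so we don't repeat"],
    "tools": ["oven", "blender"],
    "desired_output": "preferred plating style",
}

print(missing_context(dinner_context))   # complete context: nothing missing
print(missing_context({"rules": []}))    # "make dinner" alone: everything missing
```

    A vague request like "make dinner for me" corresponds to the second call, where every element comes back missing, which is why the outcome is "pure roulette."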

    To sum it up, it's the art of providing all the context so that the task is plausibly solvable by the model. But why did we

    19:33

    even need this term? Because we tried the opposite.

    So now we'll understand why vibe coding failed and what we have learned. Let's rewind to early 2024.

    The AI scene was booming. Tools were dropping every week and developers

    19:49

    everywhere were obsessed with something called vibe coding. The idea was to just tell your AI assistant vaguely what you want, like "build a to-do app," "create a landing page for my startup," "give me a chatbot that replies like Shakespeare," and boom, instant code.

    It felt magical.

    20:07

    No setup was needed, no thinking, no planning, just proper vibes. This was especially fun for hackathons, weekend projects, quick prototypes, that wow moment in front of friends or colleagues.

    And to be honest, it was addictive of course, but that dopamine

    20:24

    hit from watching the AI generate full code blocks with zero effort was incredible. But reality hit hard once people tried to ship those projects or use them in real production.

    Let's now understand the problems with vibe coding. So while vibe

    20:41

    coding felt like cheating the system, it turned out we were mostly cheating ourselves. Here's why.

    So the first problem faced was the hallucinated APIs. AI would confidently use functions, libraries or endpoints that didn't

    20:56

    exist. You would get a fetch-data function that was literally made up.

    Second was no scalability. The AI didn't design the codebase for future use.

    No modularity, poor file structure, zero comments or documentation. Then we had

    21:12

    brittle tests, or none at all. Most AI-generated tests either didn't match the code logic, skipped edge cases, or didn't exist at all.

    So once the code needed to grow or evolve, it obviously collapsed. Now let's look at what the data said.

    21:29

    This wasn't just a gut feeling. Qodo released a major industry report, the State of AI Code Quality.

    And here's the stat that stood out. 76% of developers said they don't trust AI generated code without human review.

    Why? Because vibe-

    21:47

    coded projects often looked fine at first glance, but would break under pressure and require more time to fix than to build from scratch.

    The core issue: vibe coding was based on intuition, not intention. You hope

    22:04

    the AI gets it right. You assume the structure is okay.

    You skip the hard thinking. As someone once brilliantly said: intuition doesn't scale, structure does.

    So vibe coding is basically AI plus guesswork, and context

    22:19

    engineering is AI plus planning, clarity, and structure. So if vibe coding is just winging it and context engineering is all about building smart, then where does prompt engineering fit in?

    Let's clear that up next. Now let's understand

    22:36

    the difference between prompt engineering and context engineering. We'll break it down in the simplest way possible with examples, stories, and a comparison that actually sticks.

    Let's first look at the core difference. Think of prompt engineering like asking someone for a favor in one sentence.

    And

    22:54

    context engineering, that's like handing them a folder with everything they need to do that favor well, not just once, but repeatedly. Let's look at an example analogy, which is making a sandwich.

    Let's say you want someone to make you a

    23:10

    sandwich. So in the case of prompt engineering, you would say, "Make me a sandwich." That's it.

    No details needed. They don't know if you're allergic to peanuts, if you like mayo, or if you're vegan.

    So what do you get? Probably a sandwich, but maybe not the one you

    23:26

    wanted. Next, in the case of context engineering, you would hand them a sticky note that says, "I'm a vegan.

    No onions needed. Toast the bread.

    Use the sauce from the top shelf. This is how I like it cut.

    Now, they're not just making a sandwich. They're making your

    23:43

    sandwich exactly the way you want it. Now, let's put that in AI terms.

    So, first we look at the focus. In the case of prompt engineering, the focus is on the question you ask.

    On the other hand, in case of

    23:59

    context engineering, the focus is on everything around the question: that includes data, rules, tools, and memory.

    A typical prompt in prompt engineering is basically one to three lines. In the case of context engineering, there are multiple files, settings, examples, and

    24:16

    proper instructions. Now, the main goal of prompt engineering is one decent response.

    But in the case of context engineering, it's a reliable system that works across steps or tasks. Prompt engineering is useful for casual use, like for example ChatGPT question

    24:32

    and answer. But context engineering is useful in real applications, automation, and AI agents.

    For example, in prompt engineering, you would say something like "write a clean Python to-do app." But in the case of context

    24:47

    engineering, you would say: add system rules, use TypeScript, include API docs, provide sample code, define the JSON output format. The reliability of prompt engineering is sometimes hit or miss.

    But in case of context engineering, it's more structured, predictable, and

    25:04

    scalable. But why this matters?

    Because prompt engineering is great for one-time question, fast iteration. But then if you're building an AI chatbot, training a custom AI agent, you need context engineering because the AI needs more than just a question.

    It needs the full

    25:20

    story. So now that we know the difference, let's break down the actual ingredients of good context.
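    The contrast just described can be sketched as two request payloads in the common role/content chat-message format. The surrounding rules, docs, and example are placeholders invented for illustration; no real API call is made here.

```python
# Sketch: the same request as a bare prompt vs. an engineered context.
# Message dicts follow the widely used role/content chat format;
# the specific rules and reference material are illustrative placeholders.

bare_prompt = [
    {"role": "user", "content": "Write a clean Python to-do app."},
]

engineered = [
    {"role": "system", "content": "You are a senior developer. "
                                  "Follow the project style guide. "
                                  "Always return output as JSON."},
    {"role": "user", "content": "API docs: ... (attached reference material)"},
    {"role": "user", "content": "Example of good output: ... (sample code)"},
    {"role": "user", "content": "Write a clean Python to-do app."},
]

# The question itself is identical; what changes is everything around it.
print(len(bare_prompt), len(engineered))
```

    Notice that the final user message is the same in both payloads. Prompt engineering polishes that one line; context engineering builds the scaffolding around it so the answer is predictable run after run.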

    Imagine you're giving an AI assistant a task. Now the AI is not human.

    It doesn't understand things the way people do. It only knows what you give it right now.

    25:37

    So if you want it to do a good job, you have to give the right information in the right way. That set of information you give the AI, that's what we call context.

    And just like cooking a recipe, good context needs few specific ingredients. If you forget even one, the

    25:55

    AI might mess up. So what makes up good context?

    Let's look at each ingredients in simple beginner friendly terms. The first one is system instructions.

    This is just like the basic rule book you give the AI. For example, you say always

    26:10

    write clean code. Use British English. Start every response with "hi there." These are the universal rules that the AI should always follow, no matter what task you give it.

    Start every response with hi there. These are the universal rules that the AI should always follow no matter what the task you give it.

    The second one is the user input. This is your actual prompt, the question or command you

    26:27

    give. Let's say for example, you say summarize this news article or build a weather app in Python.

    Simple, direct. And this is what kicks off the task.

    The third one is short-term memory, which is the chat history. Think of it like the conversation you've had so far. Now if

    Now if

    26:42

    you're chatting with the AI, you could say something like "can you write a report," and then you say "make it shorter." The AI needs to remember the first request to understand the second. That memory of what you said is short-term memory, also called chat history. Next we have

    26:59

    long-term memory. This is memory from older sessions, or saved preferences. Let's say, for example, you always want the AI to avoid using certain words, or you have already told it your name and job title. Now, if you have shared this before and it remembers, that's

    27:16

    long-term memory. Not all AI tools have this yet, but the best ones, like advanced agents, do.

    Next is knowledge bases. These are the external sources of information the AI can use like documents website or APIs.

    For say

    27:33

    example you're building a health app. You link the AI to a medical database or PDF guide.

    The AI will search through the material and use it to give smarter answers. Next, we'll talk about the workflow state.

    This means where are we right now in a bigger process. Let's say

    27:50

    you're building an app with AI. The first step would be planning.

    The second step would be writing code, the third step would include testing, and the fourth step would be fixing the bugs.

    Now, if the AI knows which step it's on, it can focus better. Otherwise, it might try to do

    28:06

    everything at once and that often fails. One small problem is that the AI can't hold unlimited information.

    It has a limit which is called the context window. So how do we fit in all of this without confusing it?

    That's exactly what we will talk about the next which

    28:22

    is the problems with context and the smart ways to fix them. When working with AI, giving it the right information, called context, is super important.
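    Before looking at the problems, the ingredients covered so far can be sketched as one assembled message list, the shape most chat-style model APIs accept. The helper function and all of the content strings below are invented for illustration; only the role/content structure reflects common practice.

```python
# Sketch: assembling system instructions, long-term memory, a knowledge base,
# short-term memory (chat history), and the current input into one message
# list. The function and its arguments are illustrative, not a real API.

def assemble_context(system_rules, long_term_memory, knowledge, history, user_input):
    messages = [{"role": "system", "content": system_rules}]
    if long_term_memory:  # saved preferences from older sessions
        messages.append({"role": "system",
                         "content": "Known about the user: " + "; ".join(long_term_memory)})
    if knowledge:         # external reference material (docs, PDFs, APIs)
        messages.append({"role": "system",
                         "content": "Reference material: " + knowledge})
    messages.extend(history)                          # short-term memory
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = assemble_context(
    system_rules="Always write clean code. Use British English.",
    long_term_memory=["name: Priya", "job: project manager"],
    knowledge="(excerpt from a linked PDF guide)",
    history=[{"role": "user", "content": "Can you write a report?"},
             {"role": "assistant", "content": "(draft report)"}],
    user_input="Make it shorter.",
)
print(len(msgs))
```

    With the history included, the follow-up "Make it shorter" is understandable; drop the two history messages and the model has no idea what "it" refers to, which is exactly the short-term-memory point above.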

    But doing that isn't always easy. Here are a few common problems and simple ways to fix them. First is too much

    First is too much of

    28:39

    information, not enough space. Of course, AI has a limit to how much it can read at once.

    Now, if you give it too much, it forgets or gets confused. You can fix this by summarizing older or less important information to make space for what matters.

    Second is information overload.

    28:56

    Dumping large chunks of unstructured text can overwhelm the AI. Next is the wrong order of information.

    Now, if the most important information is hidden, the AI might miss it. Next is multiple sources, which can confuse the AI.

    Now if a project

    29:11

    uses different databases or tools, the AI might not know which one to use. Then there's messy memory.

    When too much random information is stored, the AI gets lost. You can fix this by using memory blocks to organize what the AI remembers,

    29:27

    like facts, past chats, or fixed rules. These simple fixes help your AI give better, more accurate responses every time.
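    The first fix, summarizing older information to make space, can be sketched as a trimming function. This is a simplification under stated assumptions: the budget is counted in characters rather than tokens, and the "summary" is a placeholder string where a real system would make another model call to summarize the dropped messages.

```python
# Sketch: when the conversation outgrows the context window, keep the most
# recent messages within a budget and fold the rest into a one-line summary.
# Character counts stand in for tokens; the summary itself is a placeholder.

def trim_history(messages, budget):
    """Keep recent messages within budget; summarize whatever is dropped."""
    kept, used = [], 0
    for msg in reversed(messages):            # walk newest-first
        if used + len(msg["content"]) > budget:
            break
        kept.insert(0, msg)
        used += len(msg["content"])
    dropped = messages[: len(messages) - len(kept)]
    if dropped:
        summary = f"(summary of {len(dropped)} earlier messages)"
        kept.insert(0, {"role": "system", "content": summary})
    return kept

history = [
    {"role": "user", "content": "x" * 400},       # old, long message
    {"role": "assistant", "content": "y" * 400},
    {"role": "user", "content": "make it shorter"},
]
trimmed = trim_history(history, budget=500)
print([m["content"][:30] for m in trimmed])
```

    The oldest long message no longer fits the budget, so it is replaced by the summary line while the recent exchange survives intact: important information stays near the end, where it is least likely to be cut.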

    Now, enough of the slides; I'll show you a demo with ChatGPT. So first, I've given a prompt to ChatGPT to create a

    29:43

    project plan for launching a new website. As you can see, ChatGPT has given me this answer: the project plan, the project overview, the goal, timeline, theme, and the phases.

    You can see the phases they have directly given me. The

    29:59

    phases it has not explained in depth and detail. Now we will use the Customize GPT option to add more context to this.

    So we can just head over to here and we can just select customize GPT and then

    30:14

    you can just name the project and enter this. I have entered that you are a project manager with expertise in website launches.

    Create a detailed project plan with deadlines for each phase considering the following. All

    30:32

    right. And again you can just add more information from here if needed.

Now this is the advanced capability. So I have also selected that and I'll click on Save from here.

Now I'll click New Chat and again I'll ask

    30:48

    the same question that create a project plan for me for launching new website. Let's see what it responds.

    So now as you can see it has given me project plan for launching a new website and this is the project goal and this is the phase one objective task and it has

    31:05

also given me the day-wise time needed for doing each task, and this is phase two. It has also given me the milestones from here.

So as you can see, it has divided the phases into many parts, and you can just look at the tasks, the number of days, and the milestones

31:21

achieved. So I'll just show you the difference between the ChatGPT customized using context and the normal ChatGPT.

So this was the normal ChatGPT, which had given me the phases, tasks, and just the normal things without proper

31:37

date timing. And when I used the customized ChatGPT, it gave me the objective and the tasks needed for each phase, with the days as well.

So this is the kind of answer you can expect from ChatGPT when you use context

    31:54

    engineering here. Hey there, let's start with a quick thought.

    Have you ever tried to talk to a machine and it gave you the wrong answer? Maybe you asked Siri, what's the weather today?

    and you got something like, "I'm sorry, I didn't catch that." Or even worse, it gave you

    32:11

    the wrong city's weather forecast. This might sound frustrating, right?

    Now, what if I told you that this happens because you didn't give it a good question. That's where prompt engineering comes in.

    So, in simple terms, it's the art and science of

    32:27

    crafting the right questions or prompts to get the best answers from AI. In today's video, we are going to cover the basics of prompt engineering, which is not just about asking AI questions, but about asking them in the right way.

    And

    32:43

    by the end of this course, you will understand what is prompt engineering. I'll break it down for you.

It's super simple to understand. Then, why prompts matter: you'll see how much the quality of your prompt can change the AI's responses.

    We'll then understand

    32:59

how AI processes prompts. I'll explain how AI understands your questions, and it's easier than you think.

Then we'll understand tokenization. This might sound a bit technical, but I'll make it easy to grasp.

    Don't worry, I have got you covered. Then we

    33:16

    have context window and memory. So how does AI remember things?

    We will dive into this in a very simple way. The anatomy of good prompt.

    What makes a good prompt? We will break it down for you and show you what exactly to include.

    Weak versus strong prompt. Now,

    33:34

small changes can definitely make a big difference and I'll show you how to do that. Now, after all of that, I'll show you a real-life demo on how to use prompt engineering with ChatGPT.

    You'll see how to apply everything we learned today in real world situations. So, are

    33:50

    you ready to see prompt engineering in action? Let's get started now.

    Now before we get started, let me explain it to you using a real life story example of prompt engineering in action. Also just a quick information, if you're interested in mastering the future of technology, then the professional

    34:07

certificate course in generative AI and machine learning is the perfect opportunity for you. Offered in collaboration with the E&ICT Academy, IIT Kanpur, this 11-month live and interactive program provides hands-on expertise in cutting-edge areas like generative AI,

34:23

machine learning, and tools like ChatGPT, DALL-E 2, and even Hugging Face. You'll gain practical experience through 15+ projects, integrated labs, and live masterclasses delivered by esteemed IIT Kanpur faculty.

Alongside earning a

34:38

prestigious certificate from IIT Kanpur, you will receive official Microsoft badges for Azure AI courses and career support through Simplilearn's job assist program. So hurry up and enroll now; you can find the link below.

    So here's a quick quiz question on prompt

    34:55

engineering that I want you guys to answer in the comment section below. So what does zero-shot prompting mean?

    Giving the AI multiple examples. Giving no instructions at all.

    Giving a task without any examples or providing images only.

    35:20

Meet David, a marketing manager at a growing e-commerce company. David's team is launching a new product and his task is to come up with catchy, attention-grabbing ad copy for their digital ads.

    He decides to use an AI tool to help him generate some creative

    35:36

    ideas. David types a very basic prompt into the AI.

He says, "Give me some ad copy for a new product." The AI generates a few generic lines like "Buy our product today" and "New product now available." After seeing this answer,

    35:52

    David scratches his head because these lines are overly broad and it fails to capture the essence of the new product. He needs something more specific, something that resonates with their target audience.

    So, David refineses his prompt. He types, "Write three creative

    36:09

ad copy ideas for a new eco-friendly water bottle targeted at young professionals who care about sustainability." Now, the AI gives David more targeted and unique ideas, like "Stay hydrated, stay green," "Drink

36:24

sustainably, work sustainably," and "Eco-conscious, stylish, and practical." Now, after seeing this, David smiled because this is exactly what he needed.

By refining his prompt, he got the right tone, the right message, and the perfect ad copy. So, here's the lesson.

    Just

    36:40

    like David, when you're working with AI, the more specific and detailed your prompt is, the better will be your results. A vague prompt will give you vague results, but a well structured prompt leads to clear, relevant, and targeted response.

    Next, let's

    36:55

    understand what exactly is prompt engineering. So prompt engineering is the process of designing and crafting the right questions or instructions for AI so that the AI can generate the most accurate and useful responses.

    You can think of it as giving clear directions

    37:12

    to a GPS. The clearer the destination is, the more accurate the route will be.

    Let's say for example instead of saying tell me about climate change you could ask explain climate change in three bullet points in simple language for high school students. Now this level of

    37:30

specificity helps the AI understand exactly what you want, so it can generate a response that fits your needs.

    So now that you know what prompt engineering is, let's see some real world applications of prompt engineering. So where can we actually apply prompt engineering?

    Here are a few

    37:46

    examples. First one is customer support.

    AI can be used to respond to customer queries. The more precise your prompt is, the better the AI can assist you.

    It also helps in coding. Now, developers use AI to generate code, debug or offer

    38:03

    solutions to coding problems. Let's say for example, write Python code to sort a list of numbers in ascending order.
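For the coding example just mentioned, the request "write Python code to sort a list of numbers in ascending order" would typically produce something like this minimal snippet:

```python
# Sort a list of numbers in ascending order.
numbers = [42, 7, 19, 3, 25]

# sorted() returns a new sorted list and leaves the original untouched.
ascending = sorted(numbers)
print(ascending)  # [3, 7, 19, 25, 42]

# Alternatively, list.sort() sorts the list in place.
numbers.sort()
```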

    Now in education teachers use AI to generate quiz questions, explain complex concept or even create lesson plans. Then we

    38:20

have marketing. Marketers like David use AI to generate ad copy, create social media content, or even come up with product names.

    For example, create three catchy slogans for a new organic skincare product. It is also useful in data analytics.

    Now AI can be used to

    38:37

    interpret and summarize large data sets. It can automate your reports.

    It can generate insights based on your structured data making data analysis faster and much more accurate. For example, you can say summarize these key insights from this report.

    Now, the more

    38:54

specific you are with your prompt, the better the AI can assist you in these real-world scenarios. All right.

    So, how does AI even understand your prompt? Well, it doesn't understand like we do.

    It basically predicts. AI doesn't truly understand your question.

    It just

    39:10

    predicts the most likely next words based on what you have typed. It works like your phone's predictive text.

    Just like when your phone guesses the next word in a sentence. In the same way AI guesses the best next word to keep the response flowing.

    It's not magic. It's

    39:28

pure patterns and mathematics. AI uses patterns it learned from tons of data during training.

    No emotions, no reasoning, just super smart pattern matching. You can think of this like playing the game fill in the blanks.

    You're given a sentence with some words

    39:44

    missing and you have to guess the missing words based on what you know. The more you practice, the better you get at filling those blanks.

In the same way, AI guesses the next word in a sentence based on all the words it has seen before, but without any true

    40:00

    understanding. So the clearer your prompt is, the better the AI's guess will be.

    Now let's dive into the key concept tokenization. You can simply put tokenization which is the process of breaking down your sentence into smaller pieces or tokens.

    Now these tokens could

    40:18

be whole words, parts of words, or even punctuation marks. These are basically the building blocks the AI works with to understand text better.

    It breaks down sentences into smaller pieces called tokens. A token could be a

    40:33

word, part of a word, or even a punctuation mark. For example, if you type "ChatGPT is amazing,"

the AI might break it into tokens like "Chat" + "G" + "PT" + "is" + "amazing."

    Understanding how AI tokenizes text is important

    40:49

because the more efficient the tokens are, the clearer the AI's understanding will be. When writing prompts, keeping them concise and avoiding unnecessary words helps the AI generate better responses.

    AI doesn't see the full picture immediately, just smaller puzzle

    41:05

    pieces which are the tokens and then it works to assemble those into a response. The better the pieces which are the tokens, the clearer the final picture will be.
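To make the idea concrete, here is a toy tokenizer in Python. It is only an illustration: real LLM tokenizers use learned subword vocabularies (byte-pair encoding), not a simple regex, which is why a word like "ChatGPT" can split into pieces such as "Chat" + "G" + "PT."

```python
import re

def naive_tokenize(text):
    """Toy tokenizer: split text into words and punctuation marks.

    Real LLM tokenizers use learned subword vocabularies, so their
    splits differ; this regex version only illustrates the idea of
    breaking text into small building blocks.
    """
    return re.findall(r"\w+|[^\w\s]", text)

print(naive_tokenize("ChatGPT is amazing!"))
# ['ChatGPT', 'is', 'amazing', '!']
```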

    Next, we will understand context window and memory. Now, AI has a context window which acts like its

    41:21

    working memory. The context window determines how much of the conversation the AI can remember at any given time.

    Now if you ask AI a long question and the context window gets full, it may forget parts of the earlier conversation. Let's say for example in

    41:38

simple terms, AI has a memory box called the context window. For GPT-3, this box holds only around 2,000 tokens, which are words or parts of words.

    Once it gets filled up, it starts forgetting earlier parts of the

    41:53

conversation, which can affect its responses. GPT-4, on the other hand, has a larger memory box.

So it can hold more tokens and remember more details without forgetting. The image here shows how GPT-4 can handle longer conversations

42:09

with fewer memory limits compared to GPT-3, though it still struggles to keep track when the conversation gets too long. Now we'll dive into an important part, which is understanding the anatomy of a good prompt.

    Now that you know how AI works, let's break down the anatomy of a good

    42:26

prompt. A well-constructed prompt is just like a recipe.

It needs the right ingredients in the right order. That recipe includes a clear task: what do you want the AI to do?

Your target audience: who is the answer for?

    42:42

You can think of it like giving directions to a friend. If you just say "go right," they'll have no idea where they're headed.

    But if you say take a right at the second stoplight and then a left at a coffee shop, you are being much more specific and now they will

    42:58

    know exactly where to go. This is how you should approach your prompt.

    Clear, specific, and with all the right details. Then we have context or role.

That is, what context should the AI take into account? Then formatting or constraints:

    How do you want the answer to be

    43:14

    formatted? So this is the anatomy of a good prompt.
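The ingredients above (task, audience, context or role, and formatting) can be assembled mechanically. The helper below is purely hypothetical, just to show how the pieces combine into one prompt string; the parameter names are not any library's API.

```python
def build_prompt(task, audience=None, role=None, fmt=None):
    """Assemble a prompt from the four ingredients of a good prompt.

    Only `task` is required; the other parts are added when given.
    All names here are illustrative, not a real library's API.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")   # context / role
    parts.append(task)                     # clear task
    if audience:
        parts.append(f"The answer is for {audience}.")  # target audience
    if fmt:
        parts.append(f"Format the answer as {fmt}.")    # formatting
    return " ".join(parts)

print(build_prompt(
    task="Explain AI in simple terms.",
    audience="a 10-year-old",
    fmt="three bullet points",
))
```

Running this prints a prompt very close to the "strong prompt" example discussed next, which is the point: a strong prompt is just the same ingredients stated explicitly.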

Now let's understand the difference between weak and strong prompts. So first we have the weak prompt.

For example, "tell me about AI." Now this is vague: it

43:29

doesn't specify what aspect of AI is needed. It's too broad.

So the AI can only give a general, unclear response. It's also unfocused because no guidance is given on how to explain AI.

    Now let's understand the example of a strong

    43:45

prompt. Let's suppose you type, "In three bullet points, explain AI to a 10-year-old in simple terms."

Now this tells the AI exactly what to do, which is explaining in three points. It's also audience-focused, which is it tells the AI who it's

44:01

explaining to, which is a 10-year-old kid. It requests a specific format, which is bullet points.

    Now we can see from here that strong prompt is much more effective because it gives clear direction to the AI making its response more useful and focused. You can think

    44:18

    of this like asking someone to explain a movie to you. If you just say explain it, you might get a full analysis of the film which might be way more detailed than you need.

But if you ask, "can you explain this movie to a five-year-old," you'll get a much simpler, child-friendly

44:34

summary. That's the power of strong prompting. Now I'll show you a demo using ChatGPT of the difference between a weak prompt and a strong prompt.

So now let's start with the demo part. I'll show you a clear difference between using a vague prompt and improving it with a specific prompt.

    44:52

Let's say my first prompt is very normal. So I type something like "tell me about climate change."

So this is my prompt to ChatGPT; let's see what type of answer it gives me.

45:08

So you can see here it has given me this particular answer. It says climate change refers to significant and lasting changes to the Earth, along with the key causes, the consequences, and everything.

But I feel like this prompt is mostly broad

45:26

and open-ended. ChatGPT responds with a general explanation covering multiple aspects like the causes, effects, and solutions.

So while this information is accurate, I still feel that it is not tailored for my

    45:41

    particular target audience or any purpose which makes it less useful if you have a specific goal in your mind. So now I'll be giving a better improved version of my prompt.

    So here is my prompt the improved one and I have said GPT that explain the key causes of

    45:58

climate change in under 100 words using simple language suitable for high school students. So you can see that this prompt gives a clear boundary, a specific topic, a focus on the number of words, and a defined

46:14

audience. Now the response will probably be shorter.

Let's look at the response: what type of response does ChatGPT give us?

So you can see that it has actually given me only the information which is needed, in under 100 words, suitable for high school

46:31

students. It's shorter, more direct, and written in accessible language.

Now this version would work well for teaching purposes, summarizing, or even including in a school newsletter. Now we'll look at the second example, which is missing context versus clear

    46:48

instruction. So let me show you the first simple, normal prompt.

So my prompt is "write a post about productivity." Okay, this is my first prompt, a normal prompt like how you guys

47:04

usually do. So now let's see what ChatGPT generates.

So you can see here that this is a very common kind of prompt that people give to ChatGPT. Now the model will try to fill in the blanks, but since it doesn't know who the

47:19

audience is, who my target audience is, or what style to use, it ends up writing something very bland and unfocused. So now we'll give the second prompt, which is the improved version.

    So here is my prompt. It says you're a

    47:35

    productivity coach. Write a LinkedIn post on how to manage distractions while working from home.

Keep the tone conversational, limited to three short paragraphs. So let's see what ChatGPT generates for me.

    So you can see the answer that the GPT has given us. It's

    47:53

accurate and also suitable for my target audience. So in this version we tell the model what persona to adopt, who the audience is, what topic to cover, and how to structure the response.

    So that level of instruction results is much more specific, engaging and also well

    48:10

    organized post that's likely to perform better on LinkedIn or maybe your own personal blog as well. So this was for a second example.

    I'll show you a few more couple of examples. So let's move on to a third example now.

    48:25

So the third example is the flat request versus step-by-step prompting. So first we will give a very normal, basic prompt like "write a blog post about AI in education."

    Okay. So I've given my prompt here.

    Write a blog post

    48:43

    about AI in education. All right.

So you can see here that this is the answer ChatGPT has given me. Now you can see that this prompt gives no

    48:58

    direction on what angle to take, how long the post should be or how it should be structured. Now the model might generate a long unstructured post that tries to cover everything and it ends up doing nothing particularly well.

    Now we

    49:13

will give a better, improved version of this prompt. So now, as you can see, I've given ChatGPT three steps.

The first step is to generate three blog title ideas about AI in education. Step two is to pick the most engaging one and write a 200-word

49:32

blog intro. The third step is to suggest an outline for the full blog.

    Now this approach breaks the task into smaller manageable parts. It gives the model room to brainstorm and select the strongest direction before writing.

    So

    49:48

by structuring the task step by step, you're not only getting better results but also maintaining creative control at every stage. So we'll just give this prompt now.

    And you can see it has given me the answer for the first step to which is the chosen title.

    50:05

    And you can see my title is also there. The blog intro is under 200 words and it has given me step by step the blog outline, introduction, the full story line.

    Yeah, it looks good to me. So here you get a clear difference of how you

    50:22

convey your message to ChatGPT. First, if you give just a normal one-liner prompt, you see the answer here; and when you give a step-by-step detailed explanation of how you want the blog to be, with your outline,

50:37

an engaging 200-word blog intro, and the blog title ideas, this is how ChatGPT answers for you.

Now the fourth example we'll look at is the unfocused request versus role-based prompting. So I will give a prompt like

    50:53

"summarize this article." Now this is a very normal, basic prompt.

So you have to provide the article which needs to be summarized.

So I have an article here talking about generative AI. So

51:10

I've given this to ChatGPT and written "summarize this article." Okay.

    Okay. Now it has given me a full summary of the article.

    You see it's a bit

    51:26

short. It might look a bit clear, but it lacks details about what kind of summary you actually want.

Is it for academic purposes, for social media, or for a quick internal update? Without that context, the model makes assumptions and the result may not match

    51:42

what you actually need. Let's look at the improved prompt to give to ChatGPT.

So my prompt here is: you are a journalist writing a tweet thread summarizing this article for tech professionals. Keep each

51:58

point under 280 characters and focus on the actionable insights. Now, as I give this prompt to ChatGPT, let's look at the answer it generates for us.

So as you can see here, the model has a clear persona and platform in mind: a journalist writing for

52:15

Twitter. It also knows who the summary is for and what kind of information to highlight.

Now this makes the output snappy, concise, and tailored for busy professionals who just want quick takeaways. Here you can have a look at all the key points addressed

52:33

in this article. So there is a major difference; it depends on how you actually give prompts to ChatGPT.

    Using such improved prompts can increase the productivity of your task. Let's now look at the fifth example which is generic questions versus goal oriented

    52:50

prompt. A normal prompt would be "what are some marketing strategies?"

Let's type this into ChatGPT. So I want to know more about marketing strategies.

So this is how I

53:06

give the prompt. So it has given me the answer in bullet points.

Okay. So it's given me basically 12 pointers.

    You can see here it might respond with a generic list. As you can see the content

    53:21

marketing, email campaigns, social media, user-generated content, referral marketing, influencer marketing. But they're not filtered or prioritized for a real-world scenario.

    Now I'll show you how you can improve it using the improved version of this prompt.

    53:40

So here I've given the improved version of the prompt: you're a digital marketing consultant advising a startup with a limited budget. Suggest five cost-effective marketing strategies that we can implement in the next 30 days.

    Let's look at the answer now. So you can see it has given me a clear

    53:57

detailed answer for my question, with a bonus actionable tip, all five clear pointers, and the tips as well. You can see from here that now the model has a clear

    54:14

    context. It's advising a startup.

    There's a budget constraint. There's a timeline.

    The result is a list of more actionable and realistic strategies such as organic social media, referral programs or even collaborations which are tailored for your business needs or

    54:30

your startup needs. So guys, now you can see I have clearly shown you how you can optimize your prompting using a clear, well-structured, specific prompt.

    It's very important that you stop using such vague prompts and start giving much more

    54:47

specific details and use a proper prompt if you want to utilize ChatGPT. I hope with this video you have understood the fundamentals of prompt engineering and what you can do to develop a basic prompt into a more precise, high-quality prompt.

    55:03

So guys, let's get started and understand what prompt engineering is. So prompt engineering is like directing AI models, such as the advanced GPT-4, to ensure they perform their best based on how you ask your questions.

    And now

    55:18

    we'll see why it's crucial. So imagine you are seeking restaurant recommendations.

    If you ask where should I eat tonight, you might get random suggestion. But if you specify I need a cozy Italian place for a date night within walking distance, you will

    55:33

    receive much more relevant advice. That's prompt engineering shaping your questions to fetch the most useful answers.

So this was about why it's useful. Now let's move on to crafting effective

    So crafting effective

    55:49

    prompts. So the number one reason is be specific because detail is key.

    Asking an AI what are some easy vegetarian dinners. That is better than just asking for dinner ideas.

    The next is provide context. Adding context helps AI tailor

    56:05

    its responses. Like telling a friend a story with enough background so they understand.

    The next is focus attention. Highlight crucial details to keep the AI focused on what matters most for your question.

    And then comes iterate as needed. Refine your prompts based on the

    56:22

    responses. Similar to adjusting a recipe to get it just right.

    So this was about crafting effective prompts. So these are the basic ones.

    Moving forward in this course, we'll see the most prominent things that we can add in the prompt that will come in the next four to five

    56:39

    minutes. So let's move to the next one and we'll see a practical example for a prompt.

So the example is: suppose you are using AI to plan a birthday party. A vague prompt might be "how do I plan a party?" and this could lead to a generic checklist.

    However, a well-crafted

    56:56

    prompt can be like, "What are some creative themes for a 10-year-old's outdoor birthday party in summer?" And what games would you recommend? So, this prompt will likely result in more specific and actionable ideas.

    So, this is how you can generate a prompt. So,

    57:13

prompt engineering is essentially about making AI work smarter for you, transforming complex tasks into simple, enjoyable activities. It's a skill that enhances your interactions with technology, making every AI encounter more effective and engaging.

    So, having

    57:30

explored what prompt engineering is and how to craft effective prompts, let's now dive into the various ways this skill can be applied. So, prompt engineering is not just a technical skill for AI specialists.

    It has practical uses in nearly every industry imaginable. From enhancing customer

    57:46

    interactions to streamlining software development, the applications are vast and varied. So let's see some of the key use cases.

    So the number one use case is content creation. So in digital marketing and blogging, prompt engineering helps generate targeted

    58:02

    content such as articles, social media post and marketing copy that resonates with specific audiences. The next is customer support.

    AI can be used to automate responses in customer service. Well-crafted prompts ensured that the responses are accurate, helpful, and

    58:18

    contextually appropriate. Then comes software development.

    Developers use prompt engineering to generate code snippets, debug programs, or even use AI to conceptualize new software solutions. Then comes education and training.

    AI

    58:33

    can tailor educational content to students learning levels or answer specific academic queries, making learning more personalized and efficient. And then comes market research and data analysis.

    By directing AI to analyze large data set with specific prompts, businesses can extract

    58:51

    meaningful insights about market trends, customer preferences, and operational efficiencies. And then comes healthcare.

    In medical settings, AI can assist with diagnosing from symptoms described in prompts or help in researching treatment options by processing medical

    59:07

    literature. And then comes legal and compliance.

This is one of the most common use cases for AI. AI can help parse through vast amounts of legal documents to find relevant precedents or compliance issues based on prompts tailored to specific

    59:22

    legal questions or topics. These use cases illustrate the versatility of prompt engineering highlighting its potential to enhance productivity and creativity across a wide range of industries.

So these were the use cases. Now we'll see the flow of AI technologies and where LLMs,

59:39

or large language models such as GPT-4, come into action. So let's start with the flow.

So AI is the overarching category that defines the goal of creating machines capable of performing tasks that would require intelligence if done by humans. And then

    59:55

    comes ML. So ML is a method within AI focused on giving machines the ability to learn from data.

    Then comes deep learning. So deep learning is a technique within ML that uses layered neural networks to analyze various factors of the data.

    And then comes LLMs

    00:12

that is, large language models, a specialized application of deep learning focused on understanding and generating human language. This hierarchy moves from broad, general techniques down to more specialized and sophisticated systems, showing how

    00:30

    foundational concepts in AI lead to more advanced implementations. So this was all about a conceptual or the context of the prompt engineering.

    Now moving to the applications of prompt engineering and we'll be using GPT4 for this purpose

    00:46

and we'll be writing prompts in GPT-4 and asking the GPT-4 model to provide us the relevant answers. So let's move to GPT-4.

So if you search in any browser for

01:01

openai.com, you will be directed to this website, and here are their products: ChatGPT, for everyone, for teams, and for enterprises, with the pricing listed here. So you could come here, click on ChatGPT Login, and

    01:19

after proceeding with your credentials, you can log in to ChatGPT and start writing your prompts. So coming back to the OpenAI website.

So you can see here the research and the latest advancements, that is, GPT-4, DALL-E 3, and Sora. So

    01:34

GPT-4 is a model developed by OpenAI that can use the camera or vision technology to tell you what an object is, and if you show it a code snippet, it will tell you what that code snippet has been

    01:50

written for. And if you use it to scan writing on any of your pages or copy, it will scan it, recognize what language it has been written in, and you can translate it to

    02:06

any other language. And then comes DALL-E 3, which is used to create images, and then we have Sora, which is used to create videos.

So now moving to the next, which is products, you can see the ChatGPT versions for everyone, for teams, and for enterprises,

02:22

and then we have the pricing for that, and here we have the ChatGPT API, or OpenAI API, so you could click on that and see. But before going to the API, we'll move to the documentation and let's

    02:37

have an overview of everything. So here's an introduction to the key concepts: text generation models, that is, GPT-4 and GPT-3.5; then there are assistants, meaning GPT-4 can act as an assistant to anyone; and then

02:54

we have embeddings, that is, a vector representation of a piece of data that is meant to preserve aspects of its content or meaning. And then come the tokens.

    So you could see here that text generation and embedding models process text in chunks

    03:11

    that are called tokens. And as a rough rule of thumb, one token is approximately four characters or 0.75 words for English text.

    So these are tokens. And now moving to models.
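The four-characters-per-token rule of thumb from the docs is easy to turn into a back-of-the-envelope estimator. This is only the rough heuristic, not a real tokenizer (for exact counts you would use a tokenizer library such as tiktoken):

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb from the OpenAI docs: ~4 characters per token
    # (equivalently ~0.75 words) for English text. Real tokenizers will
    # differ, especially for code or non-English text.
    return max(1, round(len(text) / 4))

sample = "Prompt engineering is a key skill for working with language models."
print(estimate_tokens(sample))
```

Estimates like this are handy for guessing whether a prompt will fit in a model's context window before you send it.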

So we have GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo. And

    03:28

here are all the models listed by OpenAI. Here we'll be talking about GPT-4.

So GPT-4 is a large multimodal language model with multilingual capabilities, and you can

    03:44

ask it any question in any language. Then we have DALL·E, which is used to create images; then TTS and Whisper; and then the embeddings.

So let's move to ChatGPT, but before that, let's look at the API reference. If you want to use

    04:00

the OpenAI API, or you want to integrate it to create a chatbot (we have seen a use case: creating a customer service representative), you could use this OpenAI API.

For that, you have to install the openai module

    04:16

with the command pip install openai (or use npm for Node.js). Here are the API keys, where you can generate one, and they have provided all the steps for how to use it, for streaming, or for audio to

    04:32

    create speech. This is how you could use the API to create your own models.
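As a minimal sketch of what such an API call looks like: the chat completions endpoint takes a model name plus a list of role/content messages. The model name and the customer-service prompt below are placeholders of my own choosing, and to keep the sketch runnable without an API key we only build and print the request body rather than sending it:

```python
import json

# The chat completions endpoint expects a JSON body shaped like this.
# "gpt-4" is an assumed model name; substitute whichever model you use.
payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "system",
         "content": "You are a helpful customer service representative."},
        {"role": "user",
         "content": "My order arrived damaged. What should I do?"},
    ],
}

# With the official package (pip install openai), roughly the same request
# would be sent via client.chat.completions.create(**payload).
print(json.dumps(payload, indent=2))
```

The system message is where the persona and tone instructions discussed later in this course typically go when you use the API instead of the chat interface.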

So this is the interface of ChatGPT with GPT-4. Here we write the prompts, in the message box, and this is the logged-in

    04:50

user. If you click here, you can see My plan; we have purchased the Plus plan, which is $20 per month. There are other features too; we will come to those. So let's see one of them: customize ChatGPT.

    So

    05:07

here you can write custom instructions for the responses you want from ChatGPT. You could mention here that you want the tone to be specific and mild, not too loud.

And you can ask it not to use advanced English: I want the

    05:24

answers to be prompt and in simple English. So you can write instructions for the responses you want to have.

And this is the window. If you click on that, you can see the Explore GPTs section.

    And here is the history section

    05:41

showing what prompts you have written and what responses you have received to date. If you click on this, you get a new window for ChatGPT, and these are the models listed here: GPT-4o, GPT-4,

    05:59

GPT-3.5, and this is the temporary chat section. We'll discuss all of this, but for now we'll start with the types of prompts, how to generate them, and what to consider while generating them.

    So moving back.

    06:15

So let's see how you can create a prompt. A prompt should have six components to make it more precise.

    Number one is context. So this sets the scene or provides background information necessary for understanding the prompt.

    For example, if you're writing a prompt

    06:32

like: in a world where artificial intelligence has replaced most jobs, describe a day in the life of a human worker. You have provided the context here.

    And then we have task. So the specific action or question that the responder needs to

    06:49

    address. So if you are writing a prompt, number one is context that needs to be included.

Then comes the task: what task you want the GPT to perform and respond to.

    So for example write an essay on the effects of global warming

    07:05

    on agriculture. So you have provided the task here that is write an essay.

So now the next thing is persona. Persona specifies the identity or role the responder should assume while answering.

    For example, as a medical professional advise a patient on

    07:21

    managing type 2 diabetes without medication. So this is the persona.

    Then comes format. So define how the response should be structured or presented.

    For example, list five tips for improving personal productivity in bullet points. So the format you have asked him to

    07:39

present the personal productivity tips in bullet points. So that's the format you have asked for.

And then comes the exemplar. This is when you give GPT an example: we have a sample here; use it and provide a response accordingly.

    So for example, sometimes a sample

    07:57

answer or part of an answer is given to illustrate the expected style or content. So we will have a prompt example here like: as in the example where a protagonist overcomes fear,

    Write a story about overcoming a personal challenge. So this is the example that we can give to an LLM model.

    And then

    08:15

comes the tone: the tone in which you need the answer. It indicates the mood or attitude the response should convey.

For example, write a humorous blog post about the trials of parenting toddlers. So you have specified

    08:30

that you want a humorous tone. So with the right technique you can craft prompts that are not only precise but also versatile, suitable for any learning management system.

    So this approach ensures that your prompts will engage students, encourage critical thinking and drive meaningful discussions no

    08:47

matter what the platform is. We are using GPT-4 here, but you could use Claude from Anthropic; there are many platforms, and you can use any of them with these prompt-crafting techniques.

    So embrace these strategies and you will be equipped to create

    09:03

    prompts that resonate across various educational environments enhancing the learning experience for all. Each component plays a crucial role in guiding the response to ensure it meets the desired objective and quality.
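The six components above can also be assembled mechanically. As an illustrative sketch — the template wording, ordering, and function name here are my own convention, not an official one:

```python
def build_prompt(context, task, persona, fmt, exemplar, tone):
    """Assemble the six prompt components discussed above into one string."""
    return (
        f"As {persona}, {task}. "
        f"Context: {context}. "
        f"Format: {fmt}. "
        f"Example to follow: {exemplar}. "
        f"Tone: {tone}."
    )

# The nutritionist example from this section, expressed as components:
prompt = build_prompt(
    context="a high school career day",
    task="create a presentation on healthy eating habits for teenagers",
    persona="a nutritionist",
    fmt="a slide presentation",
    exemplar="real-life success stories",
    tone="friendly and engaging",
)
print(prompt)
```

Keeping the components as separate fields like this makes it easy to swap a persona or a tone while reusing the rest of the prompt.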

    So let's see some of the examples and we

    09:18

will use all the components that can be used to create a prompt. Our first example: as a nutritionist speaking at a high school career day, create a presentation outlining the importance of healthy

    09:34

    eating habits for teenagers and use a friendly and engaging tone and include real life success stories to illustrate your points. So the context here is high school career day.

    You have given a context and the task you want is create

    09:49

a presentation on healthy eating habits. Then comes the persona, nutritionist: you have asked it to act as a nutritionist here. The format you want the response in is a presentation with

    10:05

real-life stories. Then comes the exemplar: you want real-life success stories here. And you have set the tone: friendly and engaging.

So let's write this prompt in GPT-4 and see what it answers. We are

    10:22

typing: as a nutritionist, speaking at a high school career day,

    10:38

create a presentation outlining the

    10:54

importance of healthy eating habits for teenagers.

    And what we want here is use a friendly tone

    11:13

and include real-life success stories to illustrate your points.

    11:30

So let's see how the chat responds. We have included all the components that can be used to create a prompt: context, task, persona, format, exemplar, and tone.

    So you could

    11:47

see that it has started providing the response with slides: slide one, introduction; slide two, why nutrition matters; slide three, the teenage plate; and slide four, the success story. This is how you can write a prompt and get a fully

    12:02

structured response, just as you want. If you want to moderate or alter this response because you are not satisfied with it, you can go on and write more prompts to make it more precise.

    So we'll see another example

    12:18

for that. Imagine you are a sci-fi author writing a short story set in a future where water is scarce.

    So craft a narrative that explores the daily challenges faced by a family using a dramatic and suspenseful tone. So what we'll see here

    12:35

is the context we have mentioned: a future world with water scarcity. The task we have given the GPT model is to write a short story. The persona we have given the LLM, that is GPT, is a sci-fi author, and the format we want is a

    12:52

narrative. The exemplar we have given is the daily challenges of a family, and the tone is dramatic and suspenseful. So we have included everything a prompt needs to produce a better response.

    So similarly we'll see another

    13:10

    example. So this example is as a financial adviser prepare a guide for young adults on managing finances after college.

    So use a conversational tone including actionable steps and start with the story of a recent graduate to

    13:25

    illustrate common financial pitfalls. So we have mentioned the context here that is financial management post college.

The task is to prepare a financial guide. The persona is financial advisor, the format we need is a guide with actionable steps, and the exemplar we have

    13:42

given is the story of a recent graduate. The tone is conversational, and that was all for this prompt.

    Similarly, we'll see another example that is as a war correspondent draft a report on the effects of the conflict on civilian life focusing on a particular city. Use a

    13:59

serious tone and provide interviews as exemplars to underscore the human aspect of the story. The context here is the effects of war on a particular city's civilian life, the task is to draft a report, and the persona is war correspondent.

    The format is report with

    14:17

interviews, the exemplar we have mentioned is interviews with civilians, and the tone we have set is serious and impactful. So we have seen some examples of prompt creation.

    Now we'll see the examples for writing prompts for a particular field.

    14:33

    So we'll start with number one field that is content creation. So we have mentioned some of the use cases for prompt engineering.

Starting with number one, content creation, here we'll write a prompt, which could be: as a marketing manager, draft a

    14:49

blog post aimed at new entrepreneurs on the importance of branding. Use an authoritative yet approachable tone, including examples of successful brands to illustrate key points.

    So let's write this. So, as a marketing

    15:08

    manager, draft a blog post aimed at new entrepreneurs

    15:23

    on the importance of branding. Use an authoritative yet approachable tone

    15:42

    including examples of successful brands to illustrate key points.

    16:00

So we have written this prompt for content creation. Similarly, you can ask it to write a story, draft a blog post, or write any content. We have given it a persona here:

    16:16

    marketing manager. So let's simplify this prompt.

    We have marked the context here that is blogging for new entrepreneurs. And the task we have asked him is draft a blog post on branding.

The persona we are asking GPT to act as is a marketing manager, and the format is a blog post with

    16:33

examples. The exemplar we have given is case studies of successful brands, and the tone we want is authoritative and approachable.

    So similarly you can write prompts for content creation that I want to create a blog post or I want

    16:50

    to create a YouTube video or I want to create an article. So provide me a story line.

How can I approach a particular topic? That could be 'what is LLM'. And we can write it here: act

    17:05

as an AI specialist and help me write an article on the topic

    17:21

'what is LLM', and keep it engaging. You can see that ChatGPT has started generating the response: it is creating an article on what a large language model is, and it has decided the

    17:36

title here and is providing all the content for your article. What we have provided here is the context, that we want an article on what an LLM is; the persona, act as an AI specialist; the format, an article; and we

    17:54

have set the tone: an engaging manner. Similarly, you can draft other prompts to help with your content creation journey.

Moving to the next example, which is for SEO purposes. There's another use case,

    18:10

SEO, and for that we can write a prompt: imagine you are an SEO expert running a workshop. Create a presentation that explains the basics of SEO, including practical tips on keyword research and link building, and use a professional yet engaging tone to keep

    18:27

your audience interested. So you can give your large language model a prompt with all the components, wait for the answer, and see how it responds.

    So imagine you are an SEO

    18:44

expert. Here we are asking GPT to take on the persona of an SEO expert, and we are telling it that it is

    19:00

running a workshop as well. So we are setting the context and asking it to create a presentation that explains the basics of SEO,

    19:20

    including practical tips on keyword research and link building. And now we'll set a tone here that is

    19:37

use a professional yet engaging tone to

    19:55

keep your audience interested. So we have crafted a prompt for the SEO use case; similarly, you can create your own. Let's simplify this prompt and see what context, task, and persona we have

    20:12

mentioned here. The context is an SEO workshop, the task we have assigned to the LLM is to create a presentation on SEO basics, the persona we have asked GPT to act as is an SEO expert, and the format we want the response in is a slide presentation with

    20:27

tips. The exemplar we have given is screenshots of SEO tools, and the tone we have asked for is professional and engaging. You can see the response generated by GPT-4: in slide one you

    20:43

could have an introduction slide with a title and opening remarks; in slide two, the title 'What is SEO', where you could mention the definition and the goal of the presentation; and similarly we have slide three,

    20:58

which is 'How do search engines work'; then slide four, slide five, and so on for all the slides. You can also mention how many slides you want in the response. So that was the SEO use case.

    Now moving to the

    21:16

next use case, which is for developers. For developers, we can draft a prompt: as a software engineer,

write a tutorial for beginners on building their first web application using React. Include step-by-step instructions and code snippets.

    Make your tutorial

    21:32

    detailed yet easy to follow. So this is one of the prompts.

Similarly, you could ask it to debug any code: provide a code snippet to GPT, and it will debug the code and suggest all the necessary changes.

    So for this we will

    21:48

write an example. We'll write the prompt here: as a software engineer, write a tutorial for beginners

    22:10

on building your first web application using React, and include step-by-step

    22:27

instructions and code snippets, and make your tutorial detailed yet easy

    22:44

to follow. You can see that GPT-4 has started

    23:01

generating the response. Similarly, you could ask it to generate code for a particular application, or even create a website by asking it for the HTML file, the CSS file, and the JavaScript file, and specifying what features

    23:17

you want the website to have. First, let's see the response that ChatGPT has created.

You can see that it is writing a tutorial for beginners on building their first web application. First we will

    23:33

see the simplification of this prompt: we have set the context, a tutorial for building web applications; assigned GPT a task, write a tutorial on React; given it the persona of a software engineer; and the format we want the response in is a tutorial with

    23:51

code snippets. The exemplar we have given is an example project, and the tone is informative and clear. You can see here that it is providing the code snippets for setting up your project, navigating into your project directory, and starting the development

    24:07

server, and then creating a list component. As you go along, you can see that it has created the whole tutorial.

And if you want the tutorial to be specific, say that in the first part you want just project setup; you can ask, and it will create that

    24:23

for you. And in the second part, navigating into your project directory or creating your first web app; GPT will help you create that too.

Similarly, if I ask it to act as a

    24:39

software developer (we are giving it a persona): help me create a travel website

    24:54

and make it engaging, easy to handle, and user-friendly.

    25:14

You can see that GPT-4 has started creating the response, providing the steps, such as defining the scope and features of the website. If you want the code snippets, you can ask it: I want the HTML file for the current website. It

    25:31

will provide you an HTML file. If you want any modifications, you can give it prompts: I want a specific navigation bar, or the search functionality, or particular visuals.

Similarly, GPT-4 will act upon the

    25:48

    prompts and provide you the code snippets that would be helpful in creating your website. So now moving to the next use case that is data analysis.

For data analysis, you can see a pin (attachment) icon here. You can upload documents from your computer, which could be

    26:04

text files or XLSX files, or you can connect Google Drive or Microsoft OneDrive and access your documents there. We will upload an XLSX file for the data analysis purpose.

    So here is the Excel data. Uh

    26:21

We are uploading it and providing it to ChatGPT, and I will open it for you. You can also see

    26:36

that I have this data for a particular company. You can see that it has the order IDs, the order date, the shipping date, and the shipment mode.

We will use this data and ask ChatGPT to simplify or

    26:54

analyze the data and provide us with pivot tables, bar graphs or bar charts, or the KPIs for this particular data. To start, we will give ChatGPT a simple prompt: you are a data

    27:12

analyst conducting a workshop; prepare a guide that teaches how to create effective dashboards in Excel, include practical examples and visual aids to enhance understanding, and use a direct and instructional tone. Starting here, we will tell it: you

    27:28

are a data analyst, and we'll start with that same file. We will upload the Excel data file, and we have the file open here.

We will ask it to create a pivot

    27:45

table. We'll write a prompt: you are a data analyst, and I have provided you sample data.

    28:00

    Create a pivot table. So let's move to the table and see that we can create a pivot table with sales and order date.

We'll ask it: create a pivot table

    28:15

    and a corresponding chart to analyze sales performance

    28:32

by order date. Let's wait for the response; you can see that it has started analyzing it. And I have something more for you:

If you go to the Explore GPTs section, you can see the GPTs provided by OpenAI and other

    28:50

creators. These are the trending ones, such as Image Generator and Scholar GPT, and these are provided by ChatGPT: DALL·E, Data Analyst, Creative Writing, Hot Mods, Coloring Book Hero.

    So you can use this. I will show you guys.

    But before

    29:07

that, we'll move back and see what response our prompt has generated. You can see that ChatGPT is saying that the Excel file appears to contain several columns related to sales data, including some duplicate or

    29:23

    similar columns with variations in names. So to simplify the analysis, I will focus on the main columns like order date and customer name.

We will ask it to proceed with just that: yes, proceed with that.

    29:38

    And if you click here, this is the analysis. This is the code that you could use to do the analysis.

You can see that it has started generating the response and is generating the chart. Here is the chart, and it

    29:54

has provided the description: this line chart showcases the sales performance over time based on the provided order dates, and you can see how sales fluctuate on different dates, which helps in identifying trends, seasonal impacts, or specific dates with high

    30:09

    sales volumes. So if you need further analysis or adjustment to the chart, feel free to let me know.

And if you click here, you can see all the analysis and all the code snippets that ChatGPT used to create the chart here. You could

    30:24

use this code in any IDE, such as Visual Studio Code or whichever IDE you have hands-on experience with, and do the same analysis there. So that was the data analysis use case.
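The core of the pivot-table-by-order-date analysis that ChatGPT performed is a group-and-aggregate. ChatGPT's Data Analyst typically does this with pandas (e.g. `pivot_table`); as a library-free sketch of the same aggregation, with made-up rows standing in for the uploaded XLSX data:

```python
from collections import defaultdict

# Hypothetical sample rows standing in for the uploaded sales spreadsheet.
rows = [
    {"order_id": "O-1", "order_date": "2024-01-05", "sales": 120.0},
    {"order_id": "O-2", "order_date": "2024-01-05", "sales": 80.0},
    {"order_id": "O-3", "order_date": "2024-01-06", "sales": 200.0},
]

# The essence of a pivot table of sales by order date: group rows by the
# date column and aggregate the sales column (here, with a sum).
sales_by_date = defaultdict(float)
for row in rows:
    sales_by_date[row["order_date"]] += row["sales"]

for date in sorted(sales_by_date):
    print(date, sales_by_date[date])
```

The aggregated series is exactly what gets fed into the line chart ChatGPT produced, with dates on the x-axis and summed sales on the y-axis.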

    Now moving to the next one that is for

    30:39

educational purposes. If you want to learn something or want a roadmap for learning any programming language, you can use ChatGPT or other LLM models to generate one.

For that, you could write a prompt: as an

    30:56

experienced educator, write a roadmap for learning Python programming; the roadmap should cater to beginners, include resources, practical exercises, and milestones, and use an encouraging tone to motivate learners. Let's see what response this LLM, or

    31:13

GPT-4, provides to this prompt. We'll type: as an experienced educator, write a roadmap

    31:30

for learning Python programming, and the roadmap should cater to beginners,

    31:47

    and include resources, practical exercises, and milestones.

    32:05

Use an encouraging tone to motivate learners. Let's submit this prompt and see how GPT responds.

    So

    32:22

let's simplify this prompt. We have set the context, learning Python programming; assigned the LLM a task, write a learning roadmap; asked it to take the persona of an educator; and the format we want is a

    32:37

roadmap with resources. The exemplar we have provided is a step-by-step progression, and we set the tone as encouraging and supportive. Let's see what response ChatGPT has provided.

You can see that it has provided step one,

    32:52

understanding the basics. The goal is to get familiar with Python syntax and basic programming concepts, and the resources it has provided are the Python documentation at python.org and the Codecademy Python course, along with exercises and a milestone. Now moving to step two,

    33:09

diving deeper: the duration is 3 to 4 weeks, and the goal is to explore more complex programming concepts like data structures and loops; we have the exercises here, and it has mentioned the resources and the milestones. Similarly, you can see that it has provided the full roadmap,

    33:24

with step three, applying Python to projects; then step four, exploring advanced topics; then joining the Python community; and then the conclusion. Similarly, you could work through a roadmap with it: ask it to act as an educator and a

    33:42

guide for you. I will start with this roadmap; break the roadmap into day-wise columns, start with day one, and ask it to examine me on the knowledge and basics I have gained during

    33:59

day one. It will act as an educator here: it will examine all the information and skills you have gained, ask you questions, analyze your answers, and help you get through the roadmap easily.

    So this is all about the

    34:15

    educator use case. Now moving to the next use case that is legal and compliance.

For the legal and compliance use case, we could have an example prompt: as a legal adviser specializing in data privacy.

    34:31

    Create a compliance checklist for small businesses managing online customer data. Use a formal yet accessible tone and include examples of common compliance pitfalls and how to avoid them.

You could ask ChatGPT, as a

    34:46

legal adviser, to guide you on the particular acts or compliance requirements that apply in your country's jurisdiction. We have a prompt here, and we'll tell ChatGPT: as a legal adviser

    35:08

specializing in data privacy, create a compliance checklist

    35:28

for small businesses managing online customer data, and use a formal tone

    35:46

and include examples of common compliance pitfalls and how to avoid them.

    36:04

We will simplify the prompt: we have provided the context, compliance with data privacy laws for small businesses; assigned a task, create a compliance checklist; and the persona we are asking GPT to act as is a legal adviser,

    36:21

and the format we want the response in is a checklist with examples. The exemplar we have provided is a case scenario of non-compliance, and the tone we want is formal and accessible. Similarly, you could get other legal advice from ChatGPT. What

    36:37

you need to do is, if you want to read or analyze any act, provide the documents to ChatGPT; it will analyze them and tell you what rules or regulations you need to follow, or with what compliance requirements you

    36:52

should move forward under that act. You can see here that it is drafting a document with all the compliance measures for small businesses.

With that, we are done with the legal and compliance use case. Now moving to the next use case, healthcare.

    So in

    37:08

healthcare, we can have an example prompt: you are a medical researcher presenting at a conference on the advances in telemedicine. Prepare a detailed paper discussing the impact of telemedicine on patient care during the pandemic, using clinical studies as

    37:24

references, and maintain a scholarly yet engaging tone to captivate your professional audience. In the healthcare section you could also ask for a diet plan or a chart, and you could ask it for recipes for a particular diet,

    37:40

and you can also mention if you have any allergies. So we will try a prompt here.

    So let's write here that you are a dietician.

    37:56

    So provide me a recipe that is healthy and include carbs,

    38:11

    protein content, and a small amount of fat. And remember that I am allergic

    38:28

to peanuts. So you have set the context here, the persona (you are a dietician), and you are asking for a recipe that is healthy and includes carbs.

You can see that ChatGPT has started generating the response

    38:44

and it is suggesting a quinoa chicken salad. You can also mention whether you are vegetarian or non-vegetarian, and ChatGPT will act on that; it has provided all the instructions and all the ingredients for

    38:59

    the recipe. So similarly you could use the prompts for the healthcare use case.

Now coming to the next use case, customer support. We could have an example for that: as a customer service trainer, design a training module for new agents that

    39:15

    focuses on handling difficult customer interactions and include role-play scenarios, key phrases to use and tips for maintaining professionalism. So use an instructive and supportive tone to encourage learning and confidence among trainees.

    So let's draft this prompt and

    39:33

let's see how ChatGPT responds. We'll type: as a customer service trainer, design a training module

    39:51

for new agents that focuses on handling difficult customer interactions. Include

    40:12

    role-play scenarios, key phrases to use and tips for maintaining professionalism.

    40:33

Use an instructive and supportive tone to encourage learning and confidence among trainees. So we are setting the tone here: instructive and supportive,

    40:50

to encourage learning and confidence among trainees.

    41:07

    So let's proceed with the prompt. And now we can see that we have set the context that is training for customer service agents.

The task we have assigned is to design a training module. The persona we have set here is customer service trainer. And the format

    And the format

    41:23

we want the response in is a training module with role-plays. The exemplar we have set here is scripts and responses for the role-plays.

And the tone we want the response in is instructive and supportive. You can see that ChatGPT has started generating the response and given a module overview: you could

    41:41

have understanding difficult interactions, then communication skills and key phrases, role-play scenarios, maintaining professionalism, and module review and assessment. So it has drafted a whole new training module so that new agents can handle difficult

    41:56

    customer interactions with this module. So this was all for the customer support use case.

Now let's move to the use case where you can create PowerPoint presentations using VBA code provided by ChatGPT. So let's ask it to

    42:14

write VBA code. We'll write a prompt: act as a presentation specialist and write VBA code to

    42:30

create a presentation on the topic 'what is LLM'

    42:45

and provide the steps for using the VBA code to create one.

    43:00

You could also set the tone here, but let's see how ChatGPT responds. We will open a presentation here, a blank presentation, and to use the VBA code to create a

    43:15

    presentation. What you need here is developer options in your PowerPoint.

I've already enabled them, but you can enable them by right-clicking on the ribbon, clicking Customize the Ribbon, and you will see the

43:31

Developer option there. Tick its checkbox and apply it.

After applying it, move to the Developer tab. Click on Visual Basic, and after that click on Insert

43:47

and then the Module section here. Now coming back to ChatGPT, it has created the VBA code.

We will just copy it, and getting back to the Module section, we'll paste it here and click on this

    44:03

play button, that is, Run Sub/UserForm. You can see this runtime error: an error occurred while PowerPoint was saving the file.

    Let's debug it and we'll run it again

    44:21

as we're getting the error again here. So we'll copy the error and provide it to ChatGPT.

So let's tell it that we're getting an error on this: I have encountered

    44:38

an error, and the error is that it occurred

    44:54

while saving the file. So let's see how it responds to this query.

    45:18

So it's writing the modified VBA code. We'll copy that, move back to the module, and paste it here.

    And let's see if it works or not. Currently we are having the same error.

    And now moving back.

    45:33

    Let's see what it provides here.

    45:48

So you can see that we are not saving the presentation now, and it has generated the PPT for us.

    And this is the basic PPT. You can customize it.

You can ask ChatGPT to create dialog boxes or insert shapes, or you could just

    46:06

choose a design from here and make your PPT or presentation a good-looking one. So now, moving back to the GPT model.

So this is all about the use cases; we have had hands-on practice with the prompts now.
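The debugging workflow we just walked through, pasting the runtime error back into the chat as a new prompt, can be sketched as a simple loop in Python. This is a minimal illustration: `ask_model` is a stand-in stub that mimics a model fixing its code once the error is reported back, not a real ChatGPT API call:

```python
def ask_model(prompt):
    # Stub standing in for an LLM call. Its first attempt contains a bug;
    # once the error message is fed back, it returns corrected code.
    if "I have encountered an error" in prompt:
        return "result = 1 + 1"          # corrected code
    return "result = 1 + unknown_var"    # first attempt has a bug

def generate_and_debug(task, max_attempts=3):
    """Ask the model for code, run it, and on failure feed the error
    message back as a new prompt, just like in the VBA demo above."""
    prompt = task
    for _ in range(max_attempts):
        code = ask_model(prompt)
        try:
            scope = {}
            exec(code, scope)  # run the generated snippet
            return scope["result"]
        except Exception as err:
            # Same pattern as the transcript: paste the error back.
            prompt = f"I have encountered an error and the error is: {err}"
    raise RuntimeError("model could not fix the code")

print(generate_and_debug("write code that computes 1 + 1"))
```

The loop succeeds on the second attempt, mirroring how the VBA error was resolved by reporting it back to the model.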

    Now we'll see the key features of chat GPT and some

    46:23

are newly introduced. We will get hands-on with the memory feature, which is the latest introduction by OpenAI.

So if you click on the settings and move to the personalization section, we have a memory feature here; that is, you can

    46:39

store the memory that we enter as a prompt in ChatGPT. The ChatGPT memory feature works by capturing and summarizing key information you provide during your interactions.

    So unlike one-time commands that need to be repeated in every session, this memory

    46:56

is designed to retain useful details and automatically incorporate them into future interactions. This means ChatGPT can adapt over time, improving its responses based on the history and context it has accumulated, almost like it's getting to know you better with

    47:12

each chat. And if you want to delete some memories, you can move into the Manage section.

And these are the memories that have been created with ChatGPT. If you want to delete one, you can delete it from here, or you could just

    47:28

write a prompt saying I want to delete a memory, mentioning some keywords there. So this was about the ChatGPT memory section.

And if you go to settings, we have data controls, where, if you want to

    47:44

    export data, you could export the whole chat and send it to someone else. And similarly, you have the security feature.

If you want to have multi-factor authentication enabled on your account, you can do that. And you can have some connected apps that could be

    48:00

Google Drive or Microsoft OneDrive. And we have a builder profile section also.

That is, if you want to build your profile here, you can add your LinkedIn, email ID, and GitHub links, and you can also link your X account.

    48:17

And then we have the speech module here. That is, if you want to listen to something from ChatGPT, you have a voice assistant here and can listen in a particular voice.

So this is all about the features of ChatGPT. Now moving to

    48:32

the Explore GPTs section. Here I have shown you that these are GPTs created by creators, and some are created by OpenAI itself.

And here you have an option to create your own GPT. To create one, you

    48:49

just have to write prompts here, like I want to create a data analysis GPT, and you can write more prompts here, and you will have a preview here of

    49:06

how your GPT looks. There's a Configure section where you can name your GPT, provide a description and instructions, enable capabilities like web browsing or DALL·E image generation, and add an icon for your GPT here. So you can see

    49:23

in the preview here that it has provided sample prompts with data analysis insights and visualization support. So this is how you can create a GPT of your own.

Then we have a feature called temporary chat. As we have discussed, the ChatGPT memory stores

    49:40

a memory when you write a prompt like: remember, I want all my responses to be in a specific tone and they should be emphatic. It will save this into the ChatGPT memory

    49:56

section and use it for the upcoming responses. And if you don't want ChatGPT to store this as a memory, you can use the temporary chat option here.

You just have to click on the ChatGPT 4 drop-down, and here we have

    50:11

temporary chat. Just enable it, and you can chat with the GPT-4 model here without it storing any memory regarding this chat.

    Have you ever wondered how AI models can

    50:27

    understand and process multiple types of inputs like text, images and audio simultaneously? Hello everyone.

    Welcome to our deep dive into multimodal prompting, a transformative approach that's reshaping

    50:43

    the landscape of artificial intelligence. My name is Bassel Dclala, and in this video, we'll explore what multimodal prompting is, why it's essential, and how it's applied across various industries.

    Let's embark on this journey to uncover

    51:00

    the future of AI together. Here's our agenda.

    Introduction to multimodal prompting. Evolution of prompting.

    Core concepts of multimodal prompting.

    51:16

    Tools and platforms that support multimodal prompting. Use cases and applications.

    Getting started with multimodal tools. And then we'll take a look into simple multimodal prompts in action.

    51:34

And we'll have a deep dive into step-by-step prompt building. Last but not least, we'll look into experimenting with different use cases.

You might be wondering, what is multimodal prompting? Multimodal prompting is a technique

    51:51

    that allows AI models to interpret and respond to input from more than one modality, like combining text, images, audio, or video. Instead of just reading text, the model can basically analyze visuals, sounds,

    52:08

or other forms of data to generate richer, more context-aware responses. It basically mimics how humans process information.

    Now, why it matters and where is it used? Multimodal prompting matters

    52:26

because the real world isn't just text-based. We live in a sensory-rich environment, and multimodal AI helps machines understand that.

    It's used in applications like image captioning, video analysis, interactive

    52:42

    chat bots with visual context, and even healthcare diagnostics where images and notes are combined. In addition, multimodal prompting is very useful for content creation and recommendation.

    52:58

Let's learn about the evolution of prompting.

From text-only models to multimodal models, prompting has evolved from simple text

    53:13

    inputs to complex multimodal interactions. Today's models can understand and respond to combinations of text, images, audio, and more for deeper context.

Prompting began with text-only models

    53:29

like GPT-2, which responded solely to written input. They basically leveraged NLP, which stands for natural language processing.

    Over time models evolved to process images, audio and even video giving rise

    53:47

to truly multimodal systems that can interpret and connect across different types of information. Now, how do models understand and process multiple input types?

    Modern multimodal models have

    54:02

specialized architectures, such as vision transformers and multimodal neural networks, to process different inputs. They are also trained on paired data, like images with captions or videos with

    54:17

    transcripts. These models learn how different modalities relate to each other, enabling them to take in multiple types of inputs and generate informed outputs that bridge those sensory channels.

    54:33

Cross-modal attention helps models link relevant features across inputs, enhancing context and accuracy. At the core, these models use shared embedding spaces to map inputs like text

    54:48

    and images into a common understanding. For example, a picture of a dog and the word dog are brought close together in this space, allowing the model to reason across formats in a unified way.
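The shared-embedding idea can be illustrated with a toy Python example. Cosine similarity measures how close two vectors are, and in a well-trained shared space the word "dog" lands nearer to a dog photo than to a car photo. The vectors below are invented for illustration, not taken from any real model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, values near 0 mean the vectors are unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Made-up embeddings in a shared text/image space.
text_dog  = [0.9, 0.1, 0.2]   # embedding of the word "dog"
image_dog = [0.8, 0.2, 0.3]   # embedding of a dog photo
image_car = [0.1, 0.9, 0.1]   # embedding of a car photo

# The caption should sit closer to the matching image than to an unrelated one.
print(cosine_similarity(text_dog, image_dog) > cosine_similarity(text_dog, image_car))
```

This is exactly the property that lets a multimodal model reason across formats: matching text and images are pulled together in the shared space during training.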

    55:06

    Let's talk about core concepts of multimodal prompting. Modalities are different types of information we can give to AI like text, pictures, sound or video.

    They help the model understand things the way humans do through

    55:21

multiple senses. These modalities are: text, written language input like prompts or documents;

image, visual content such as photos, charts, or drawings; audio, sound-based input like speech or music;

    55:39

and video, time-based visual and audio content combined. In unimodal prompting, the model receives one type of input, typically text.

    In contrast, multimodal prompting

    55:57

    involves multiple types, like combining a question with an image or a diagram. So, input modalities include one or more types.

    This requires the model to integrate different streams of information and

    56:14

    make connections between them. So let's recap the key difference between unimodal and multimodal prompting.

    First we've got the input type. Unimodal prompting has one single type and

    56:30

    typically is text. Multimodal prompting has multiple types and could be text, images, audio, etc.

    Second, context depth. Unimodal prompting is limited to one form of input.

    On the other hand, multimodal

    56:48

prompting is richer in context and has combined inputs. Third, use cases.

Unimodal prompting can only be used for text-based tasks such as writing.

    57:05

Multimodal prompting can be used for multiple and complex tasks, like image analysis and speech-to-text. Fourth, understanding. Unimodal prompting

    57:22

is linear and language-focused. Multimodal prompting, on the other hand, is cross-modal and human-like.

    Despite its potential, multimodal prompting comes with challenges.

    57:39

Models can misalign inputs, generate hallucinations, or be biased towards dominant modalities. One, input format issues: the model misinterpreting poorly formatted inputs.

    57:54

    Two, limited modal support. Not all models handle every modality.

Three, ambiguous inputs. Unclear inputs can lead to inaccurate responses.

    Four, high computational

    58:09

    load. Multiple modalities require more resources.

There's also a lack of standard benchmarks, and combining training data from different formats can be resource-intensive. Let's discuss tools and platforms that

    58:24

support multimodal prompting. First, we've got one of the most popular generative AI tools, ChatGPT, specifically ChatGPT with vision.

This version of ChatGPT can understand and analyze both text and images. It's

    58:41

useful for tasks like generating captions or answering questions about visual content. If you're not familiar with ChatGPT, it's basically an AI chatbot based on OpenAI's GPT, which stands for generative pre-trained

    58:57

    transformer models. It can understand and generate humanlike text for tasks like writing, coding, summarizing, tutoring, and casual conversations.

    And now it has multimodal capability. It's widely used in education, business,

    59:14

and creative work.

ChatGPT can handle multimodal inputs by combining image recognition with text generation,

    59:29

    allowing users to ask questions or generate text based on images. It processes the image and merges it with text for a complete response.

The next popular tool is Gemini by Google.

    59:46

    Gemini is Google's family of advanced AI models designed to rival GPT. It's multimodal, meaning it can handle text, images, code, and other inputs.

    Gemini powers tools like Bard and Google

    00:01

    Workspace AI, aiming to provide accurate integrated assistance across Google's ecosystem. It helps with complex interactions like visualbased search and multimodal content generation.

    It's designed for broader contextaware

    00:18

    AI applications. It handles multimodal inputs very quickly and uses advanced fusion models to integrate text, image, and video inputs, understanding relationships between them for more contextaware

    00:34

responses and generation. Next, Claude is a conversational AI developed by Anthropic, focused on safety, helpfulness, and transparency.

    It's known for its thoughtful responses, and

    00:50

    is designed to be steerable and aligned with user intentions. It can also integrate text, images, and documents, allowing for in-depth analysis and interaction with various input formats.

    Claude is a strong competitor to chat

    01:06

    GPT and excels at tasks like PDF creation, Q&A, and content summarization. Claude also does a good job of breaking down each modality for text, images, and documents separately.

    01:24

It processes them and then combines them for tasks like document analysis or multi-source content generation. DALL·E by OpenAI is an AI model that generates high-quality images from text

    01:40

descriptions. It's known for its detailed, creative outputs and is integrated into ChatGPT's interface for users on the Plus plan.

    It basically accepts textual prompts and translates them into relevant images,

    01:56

    seamlessly merging descriptive text with visual output for creative image generation. It's got a lot of tools that help you build a scene and will turn it into an image from realistic to fantastical.

    02:13

Let's talk about use cases and applications of multimodal prompting. Users can upload images, such as diagrams and charts, and then ask questions about them, with AI providing context-aware answers.

    02:30

Next, in terms of content generation applications, AI can create articles,

    02:46

social media posts, and marketing copy based on both text and image inputs. For product tagging, e-commerce platforms

use images and descriptions to generate accurate product tags and

    03:01

    recommendations. Multimodal prompting can also be effective for interactive education.

    AI helps in learning by analyzing images, videos, and text to provide interactive educational content.

    03:18

    It's also useful for accessibility. It assists visually impaired users by converting visual content such as photos and infographics into descriptive text.

    Let's see how we can access and set up a

    03:34

    basic multimodal prompting environment. First, choose a platform.

You can sign up for access to platforms like ChatGPT or Gemini. Next, install the required tools and

    03:49

    necessary libraries. Then obtain API keys for the platform and configure your environment.

    Ensure inputs are properly formatted and compatible for the application.

    04:06

    Lastly, test with basic prompts or run simple multimodal tasks. Let's talk about the required tools and input formats.

First, you need API access. Second,

    04:22

    install some Python libraries and have a Jupyter notebook. Also, you can use cloud platforms, although some of these applications can be done using your local machine.

    04:38

Here's a list of the input formats that you can leverage: we've got text, image, audio, and video.
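Before the demo, here is a rough sketch of how a multimodal request with these input formats is typically packaged before being sent to an API: text stays as a string, while binary media such as an image is base64-encoded. The field names below are illustrative assumptions, not any specific platform's schema:

```python
import base64
import json

def build_multimodal_payload(text, image_bytes):
    """Package a text prompt and raw image bytes into a single request
    body; binary data is base64-encoded so it survives JSON transport."""
    return {
        "parts": [
            {"type": "text", "content": text},
            {"type": "image",
             "content": base64.b64encode(image_bytes).decode("ascii")},
        ]
    }

fake_image = b"\x89PNG..."  # stand-in for real image bytes
payload = build_multimodal_payload("Describe this chart.", fake_image)

# The payload serializes cleanly to JSON, ready to POST to an API.
print(json.dumps(payload)[:60])
```

Decoding the image part with `base64.b64decode` recovers the original bytes, which is how the receiving service reconstructs the media.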

    Let's jump into a demo to show how LLMs can be used for multimodal tasks. In

    04:54

this demo, we'll be using ChatGPT to showcase its powerful multimodal capabilities. Keep in mind that most LLM tools, like Gemini and Claude, can also provide similar capabilities to what I'm about to show

for ChatGPT. Many of the

    05:11

capabilities we're about to showcase are only available in ChatGPT Pro. They're available for a limited number of prompts in the free version.

If you want to leverage the full capabilities, you'll need to subscribe to ChatGPT Pro.

    05:27

First, after you log in to ChatGPT, you need to make sure that you have GPT-4 activated. Next, you'll see that there's a plus button popping up where I can add media.

    05:42

But before we do this, let's generate media using text prompts. Write a three-sentence summary of the plot of Inception like you are explaining to a

    06:02

10-year-old. You can see ChatGPT gave us a response with a concise three-sentence summary of the plot of Inception.

    So I'll read the first sentence. Inception is about a man named Cobb who

    06:17

goes into people's dreams to steal their secrets, like a spy in their sleep. Let's take things to the next step.

I'm going to ask ChatGPT to generate a poster based on the description in cartoon

    06:34

    style. Let's make sure that it's pointing to the description above.

Go ahead and run this, and we'll give it a few seconds to generate the image. Keep in mind that ChatGPT sometimes runs under a heavy load of requests, and

    06:50

    sometimes image generation takes time. Image generation is done.

You can see ChatGPT generated an output according to my description: a cartoon-like poster of Inception.

Next, we'll show ChatGPT's

    07:07

    ability to analyze images. I find this task very useful for analyzing complex visuals such as stock charts.

    I'll click on the plus sign and then add photos and files. Select your photo and then insert

    07:25

the following prompt: analyze the support and resistance levels of the attached 4-hour interval chart of Google.

    You can see

    07:43

that ChatGPT was able to analyze the chart for me in an instant, and gave me multiple support and resistance levels with their descriptions. Let's do another example of image analysis.

    I'm going to click

    07:58

the plus sign here and upload a hand-drawn chart. And here is my prompt:

    Explain the attached image of a diagram step by step. For reference, here's the

    08:14

    image and here's the explanation. Start of the process.

    The program begins execution. Step one, input value.

Then it gives me: check for stack overflow. And then it explains the rest of the steps according

    08:29

    to the diagram. It was also able to analyze the gist of the diagram.

    It is a standard stack push operation with overflow checking where it prevents adding data when the stack is full and confirms insertion when space is

    08:46

    available. It can also offer an equivalent code in Python or C.

    That was a good example for visual and text reasoning. Let's increase the level of difficulty of this type of task by giving it a mathematical

    09:01

    problem. I'll click on the plus sign again, add photos, and select my image.

    And I'll use the following prompt. What is the answer to the math problem in the attached image?

    Here's the image

    09:19

    for reference. Find the sum of the series.

And you can see that ChatGPT is generating LaTeX or Markdown-type syntax in the background so it can display the mathematical notation and Greek symbols

    09:34

properly. Not only was it able to solve these problems, but it was also able to give me step-by-step instructions on how to solve them.

Plus, it recognized that we have two problems to solve, not just one. In

    09:52

this last example, we're going to do the opposite of multimodal prompting. Instead of providing an image, we're going to ask ChatGPT to generate a complex image based on my description.

    Provide a detailed process for data

    10:12

science and feature engineering for the diabetes healthcare dataset. The diabetes healthcare dataset is a popular data

    10:27

set available online, so ChatGPT will be able to understand exactly what I'm looking for.

Here we can see step-by-step instructions for the process, along with Python code on how to run the analysis, such as EDA, outlier detection,

    10:43

multivariate analysis, and then feature engineering as requested, all the way to modeling and providing proper evaluation metrics. After getting the steps in text format, I want ChatGPT to build a visual diagram of those steps.

    Google

    11:01

Nano Banana is already killing Photoshop and other design jobs, and here's why it's making waves across the internet. It's been only a few days since it dropped, and the internet is already flooded with insane results.

    If you've been following the buzz, you have probably seen it

    11:17

    everywhere and it's all available to you for free. So, let's talk about Google Nano Banana.

Google's latest state-of-the-art image generation model that's causing a major disruption in the world of digital content creation. Whether you're a designer, marketer, or

    11:32

    content creator, this tool is about to change everything. It's already out and accessible through Gemini and Google AI Studio.

    And the best part is it's completely for free. That's right.

    This insane tech that's making professionals rethink their whole career strategy and

    11:49

it costs you nothing to dive in. Now, let's talk about what makes Google Nano Banana so special.

It's not just about editing images. It's about taking control of your creative vision in a way that's never been possible before.

    With just a few simple text prompts, you can

    12:05

    completely transform an image. From changing characters and backgrounds to adding 3D models and making everything look ultra realistic.

    So, here's what we're going to cover in today's video. And trust me, you don't want to miss this.

    First of all, we'll show you how

    12:21

    to add characters in your images and the results will look 100% real. You'll also learn how to swap backgrounds seamlessly.

No more clunky AI-generated results. We are talking real-life quality images.

    So realistic that no one will be able to tell it was done by AI.

    12:38

Then we'll move on to creating a 3D model of yourself. And believe me, this is huge right now.

    Everyone's buzzing about it. And I'll show you exactly how you can use it for your business marketing too.

    If you're into advertising, get ready for a complete game changer. I'll show you how to

    12:54

    create ads using Nano Banana and how you can make money with it by leveraging this tool for your business. Now, before we move on, here's a quick quiz question for you.

    What's the biggest advantage of using generative AI for image creation over traditional image editing tools?

    13:10

    Your options are you can generate realistic images with just text prompt. It requires no creativity or design skills.

    It can only modify existing images, not create new ones. It's more expensive than traditional software.

    Now, let's jump into the demo part and

    13:25

    I'll show you exactly how to use Google Nano Banana to achieve all of this. So, let's get started.

    Now, you need to just head over to Google and search for Google Nano Banana and just click here. You get the first link, Google Gemini.

    Try Nano Banana and Gemini. It's 100%

    13:42

    free. So, just click on this link from here.

    And here we have Gemini 2.5 flash. All right.

    So it says hello. Want to try it?

Now, a few things. We already have a "make your own custom mini figure" option.

    I mean you already have the prompt given

    13:59

here. So you could simply upload your image, and it will create a model action figure for you.

Now make sure that your Gemini is in 2.5 Flash mode, which is "fast, all-round help." And from here, just select tools and select create images.

    you get this

    14:16

    banana icon from here. That means it's nano banana.

    All right. So now what I'll do is I'll just upload any image of mine.

    Let's say I have this image from here. And then I'll say Gemini to change

    14:32

this outfit and make it a blue-colored dress.

    All right. And I'll just hit enter.

    Let's see what type of result it gives to us. So we just have to wait for a few seconds.

    14:47

So now, as you can see, Gemini has instantly given me the result. It took maybe around, uh, less than a minute.

    And look at the result guys. This is the result it gave me.

    It has changed the color of my dress. And I'll show you the previous dress of mine

    15:04

    which I had uploaded. So this was my actual image from here.

    And I cannot even spot any error because it's so real. All right.

So, it's pretty impressive to use this one. Now I'll give one more prompt; I'll tell Gemini to add Elon

    15:23

Musk standing with me, hands folded, same posture as mine. Okay, just talk to Gemini as if you're telling a friend to make these changes.

You don't need an actual

    15:40

prompt to do it, just a normal conversation, and then submit it. Let's see what result it gives us.

    So I'm pretty excited to see the result now. So this was so fast.

    You see it has given me the result and Elon Musk is just standing near to me. Feels as if

    15:59

    I've added this image. Of course, it doesn't feel that real.

But then, what I'll do is give another prompt: make him look like he's standing near me with his hands on my shoulder

    16:19

    so that it looks real. Okay.

Now, even if you make a spelling mistake here, it will understand it for you. So now, as you can see, it again gave me the result, and this time I think I'm holding a hand on his shoulder, but, um,

    16:37

I guess my face has changed here a bit. Anyway, all right.

So next, I've given two images to Gemini, and I'll say: swap the background of me standing, with the Taj Mahal

    16:53

background. Make me appear in front of the Taj Mahal background.

    All right. And I'll just hit on enter.

    17:11

So now, as you can see, it has swapped the image and made me appear in front of the Taj Mahal background, which is pretty decent. Yeah, it looks a bit photoshopped to me, but it's fine. Okay.

    I would rate it like maybe 7 out

    17:28

of 10. Next, what I'll do is make my own custom figure.

So, for this, I think I need to upload my own image. I'll upload an image of mine; this is the image here. And I'll just

    And I'll just

    17:43

    enter. That's it.

So now, as you can see, it has generated the model image for me. I mean, these types of pictures are actually trending on Instagram and YouTube.

    So you could just create it using this and this looks

    17:58

great to me because the quality is also good and the picture is exactly the same, and I'm pretty impressed by this result that Gemini gave me. So, in my opinion, the better your prompt is, the better the image you get.

So now we'll use more

    18:16

refined prompts. Now I'll show you how you can use Google Nano Banana in Gemini for your marketing and advertising purposes.

So let's suppose I want an ad for my ketchup. All right.

    So I'll just upload my image. So here I have a ketchup image

    18:31

and I'll give a prompt to create a high-quality advertisement for a Tabasco ketchup. Okay.

    And I'll also mention already uploaded above. Okay.

The scene should feature a spicy pepperoni pizza as the centerpiece, with a generous

    18:47

drizzle of Tabasco ketchup placed between the slices. So I've given all the ingredients needed and described how I want my image to look.

    Now let's see the result. Now this time I've used a better refined prompt.

    So let's see what type of image it gives us.

    19:04

So guys, as you can see, I had given the image, and this is the result. It looks pretty good: it matches my prompt, the image quality is good, and there's no error to be spotted either. This could be perfect for your marketing or advertising purposes.

    I

    19:20

mean, you don't have to spend hours doing Photoshop or creating pictures for this. It can generate one for you within just a minute.

    That's it. Now, we'll also try for some other products.

Now, here I want to create a high-quality advertisement for the iPhone 16. So I've

    19:38

given the prompt that the scene should feature a bustling Bangalore traffic road as the background, capturing the city's vibrant energy. In the foreground, display a large banner showcasing the iPhone 16, with the phone flying upwards in a sleek, futuristic manner.

    And now let's

    19:53

    see what type of image it will create for us. So guys here I have my image ready for me and just look at the quality of this image.

    It also had this Apple logo and all the text written and this Bangalore

    20:09

    traffic image. This looks pretty cool right?

This is best for advertising purposes because, without spending hours trying to come up with ideas, just by giving a simple prompt you get such amazing results. So it's pretty impressive, and this is how you can use

    20:27

Google Nano Banana in Gemini for your advertising purposes, and you can even earn money through this. Next, I also wanted to try one more thing: how we could merge two images into one picture.

So, I'll tell Gemini to

    20:44

merge this dog image into my photo, where I'm wearing a black dress. The dog should be cuddling

    20:59

with me, and I need a side-angle picture. You can also change the angle of the image.

It's not necessary to keep just the front view of the image you're giving. So I have told

    21:14

Gemini to give me the side-angle picture of this image, that is, me cuddling with the dog.

    21:30

So here I have my image ready, and here is the image of me cuddling with the dog from a side angle. It actually took the side-angle image of the puppy, right?

    Not me. But then this uh looks okay to me.

    Well, I'll say this again. I need the side angle of this

    21:51

image from the right side angle. So now, as you can see, it says it seems it misunderstood the previous request.

Here is the image again, and this is the side-angle image, guys. You see, I had given

    22:07

the front-view image, and it actually created a side-facing image, right? And who would say this is not real? Because this looks absolutely real, 100%. And yeah, it's great: Google Nano Banana is actually great for creating

    22:23

    images removing background swapping images and also creating for marketing and advertising purpose. Personally, I loved it for marketing and advertising purpose and this could be a great way to begin your journey using Gemini and

    22:38

    let's wait for more new features to be added. So guys, that's it and it's worth trying.

So I would definitely recommend you try this out; let me know your feedback in the comments. Hey there, developers, let's start with a quick thought.

    Have you ever been assigned a bug fix in a massive unfamiliar code

    22:55

base? You spend hours just jumping between files trying to trace how one piece of code connects to another, and your terminal history becomes a mile long with grep and find commands.

    It feels less like engineering and more like being a detective, right? It must

    23:13

    be frustrating. Now what if I told you that this frustrating manual process is the actual problem?

What if you could just ask your codebase questions and tell it what to do? That's where Claude Code comes in.

    In simple terms, it's an AI coding agent that lives directly in

    23:30

your terminal, understands your entire project, and can execute tasks for you. So, in today's video, we are going to dive into how Claude Code works.

    It's not just about writing code, but it's about having an intelligent partner in your development workflow. And by the end of

    23:47

this video, you'll understand what Claude Code is. I'll break down what makes it an agent and not just another AI assistant, and we'll cover Claude Code's features.

We'll look at its key strengths and how it works with your IDEs, VS Code and JetBrains. We'll also look at how it integrates

    24:04

with your favorite command-line tools you already use, and how it understands your entire codebase to see the bigger picture.

Now, what could you do with Claude Code? Well, we'll explore the exciting possibilities, from fixing complex bugs and refactoring code across

    24:21

multiple files to writing brand-new tests and documentation from scratch. Then, how to master your commands.

    I'll show you how to craft the perfect instruction to unlock its full potential and get the most out of the AI. And after all of that, we'll show you a demo of how to

    24:38

use Claude Code in an open-source project. You'll see how to apply everything we have learned today to a real-world coding problem.

    So, are you ready to see your terminal become your most powerful coding partner? Now, before we dive deeper, let me ask you a

    24:54

    quick quiz question. Which AI tool is primarily known for generating realistic and artistic images from natural descriptions?

Your options are Jasper AI, Midjourney, ChatGPT, or Auto.AI. Now, don't forget to mention your

    25:09

answers in the comment section below. Also, before moving forward, I request you guys, do not forget to hit the subscribe button and click the bell icon so you never miss any future updates from Simplilearn.

    Let's get started. Now, in a world where software developers spend countless hours

    25:26

navigating complex codebases and fixing tedious bugs, a new class of tools is emerging to shift the focus back to innovation. Anthropic's Claude Code is at the forefront of this movement,

    Moving beyond a simple AI assistant to

    25:41

becoming a true coding agent that works alongside developers directly in their environment. So let's start by understanding what Claude Code is.

Now, Claude Code is an AI coding agent developed by Anthropic that operates directly within a developer's terminal.

    25:57

Now, unlike traditional AI assistants that merely suggest code snippets, Claude Code is designed with a holistic understanding of an entire codebase, allowing it to function as an autonomous partner. It lives in your terminal and has a holistic awareness of your

    26:15

    entire project. This means it doesn't just see one file.

It understands the intricate web of dependencies, functions, and modules that make up your application. Let's talk about the key features of Claude Code.

Now, Claude

26:30

Code is designed to slip into your existing workflow without friction. Its key features are built around this principle.

First of all, it works with your IDEs and tools. That is, it integrates directly with popular editors

    26:46

like VS Code and JetBrains. Furthermore, because it operates from the terminal, it can leverage all your favorite command-line tools, from Git to Docker, making it a natural extension of your development process.

    The next feature is

    27:01

    its powerful intelligence. It uses agentic search to understand your entire codebase without manual context selection.

It makes coordinated changes across multiple files. It's also optimized specifically for code understanding and generation with Claude

    27:19

Opus 4.1. The next feature is that it works wherever you're working from.

    It lives right inside your terminal. No context switching.

It integrates with VS Code and JetBrains IDEs, and it also leverages your test suites and build

    27:35

system. The fourth feature is that you are in control.

It never modifies your files without explicit approval, and it adapts to your coding standards and patterns.

It's also configurable: you can build on the SDK or run it on GitHub Actions. Now

    27:50

let's talk about what you could do with Claude Code. The possibilities are vast with its deep understanding and agentic capabilities.

You can delegate tasks that were once major time sinks. First of all, it can fix your complex bugs.

    Feed it an

    28:06

error log and it will trace the problem to its source and propose a fix. Then, it also performs large-scale refactoring.

Modernize legacy code or swap out libraries across an entire project with a single command. Then you have code

28:22

onboarding. Claude Code maps and explains an entire codebase in a few seconds.

It uses agentic search to understand project structure and dependencies without you having to manually select context files. Then you can also turn issues into PRs. Stop

    Stop

    28:38

bouncing between tools. Claude Code integrates with GitHub, GitLab, and your command-line tools to handle the entire workflow: reading issues, writing code, running tests, and submitting PRs, all from your terminal while you grab a coffee.

    28:54

    Now, to get started with the demo, make sure you have VS Code already installed in your system. And also make sure to install Node.js.

After that, what you can do is just head over to the Claude website from here. And we can

    29:09

just copy the first command to install Claude Code. Okay.

So here it's mentioned that it's an npm install command. So we'll just copy this command from here and we'll run it in our terminal.

And yes, it's also mentioned to install Node.js. Since I have already installed Node.js

    29:26

    and VS Code, I'll proceed to my terminal from here. So now I'll open my terminal and from here just copy paste the command.

Wait for a few seconds. And now, as you can see, it says it changed 2 packages in 13 seconds and one package

    29:43

is looking for funding. So I had already installed Claude Code previously.

So it just had to check the packages here. That's all you need to do.

Just enter this command from here and Claude Code will be installed on your system. Then what we need to do

    29:58

is we'll head back over to VS Code and then we'll just open a folder from here. So I'll just open a folder, and I have my claude folder.

    I'll select this folder. So my folder is selected.

So I'll create a new file named new claude code. Okay.

    30:16

    And I'll just hit enter. Then we'll just open the terminal from here.

And now that our file and folder are ready, all we need to do is type claude. Yes, that's it.

So I'll just type claude and hit enter

    30:34

from here. So now, as you can see, it says Welcome to Claude Code, and these are the tips for getting started.

So I had previously installed it on this system, so it's not showing me all the first-run setup, like whether I want dark mode or light mode, but if you're doing it for the first time

    30:50

you'll get more instructions. So now that I have all my features ready, I just need to give a prompt to Claude Code, and that's it.

So I already have a prompt ready with me. My prompt is to create a basic to-do list web app using HTML, CSS, and JavaScript.

    The app should have the

    31:07

    following features. I've mentioned the features.

I want a simple UI where users can add tasks to a list, and each task should have a checkbox to mark it as completed.

Also a delete button. Tasks should be stored in local storage.

    Implement basic CSS styling. Also, if you want any

    31:24

particular color emphasis, you can just mention it right here. And I've also mentioned that the web app should allow users to add tasks via a text input and button, mark tasks as done by striking through the text, delete tasks, and more.

    So now that I have

    31:40

    my prompt ready, we'll just need to click on enter. So we'll just wait for a few seconds.

    So as you can see, it's written, I'll create a basic to-do list web app with HTML. Let me plan this task first.

So we need to give Claude some time to plan and break down this particular

    31:57

    app. Now it's creating our HTML structure for the to-do app.

Meanwhile, you can see right here that our script.js and style.css are already being created. That's because Claude is generating the HTML, CSS, and JavaScript files

    32:12

    separately for us. So now it's creating our HTML structure for the todo app.

If you want to interrupt this, you can just press Escape, or Ctrl+T to show the todos. Let's suppose I want to see them, so I'll press Ctrl+T from here and it will show

    32:28

me the todos. And if you want to hide them again, you can press Ctrl+T.

So now you can see from here it's creating the HTML structure, implementing CSS styling, and adding JavaScript functionality, and here is the code. Now you can see the magic of this: without even

    32:44

writing a single line of code, it has generated the complete code for me, the HTML, CSS, and JavaScript.

Now it's asking whether we want to overwrite index.html. I'll just select it.

You can select yes to allow it during

33:00

all the edits, or no to tell Claude what to do instead.

So I'll just press yes. Now it's implementing CSS styling for a clean layout.

Let's see what it generates for us. Also, while it's creating our CSS styling script, one

    33:17

more quick piece of information: make sure to install the Claude extension from here. Now, if you've been using the latest version of VS Code, you may already have the Claude extension.

Since I have already installed it, I'm given the options to disable and uninstall, but make sure

    33:34

    you install this. Yes.

So, back to it: it's adding JavaScript functionality for adding tasks. It has also set our font family, the font size, the background color, whatever color is needed.

    So if

    33:50

you want to modify any particular font size or color, you can just type it in as if you're talking to a person; just say in the prompt that you want this particular size and color, and it will generate it for you. Okay.

So now, as you can see, it's telling us it has

    34:06

already created a basic to-do list app with all the features I requested: index.html, style.css, and script.js. From the file section here you can see everything: we have style.css and script.js. We didn't even need to create any particular file for our HTML

    34:23

    file and JavaScript file. It has created everything with all our features implemented.
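The features the prompt asked for (tasks in local storage, a checkbox toggle, and a delete button) boil down to logic like the following sketch. The function and key names here are hypothetical, not Claude's actual output, and `storage` is passed in so the same code runs outside a browser, where you would pass `window.localStorage` instead.

```javascript
// Minimal sketch of the to-do logic the prompt describes (hypothetical names).
// `storage` stands in for window.localStorage so the logic also runs in Node.
function createTodoStore(storage) {
  const KEY = 'todos';
  const load = () => JSON.parse(storage.getItem(KEY) || '[]');
  const save = (todos) => storage.setItem(KEY, JSON.stringify(todos));
  return {
    add(text) { const t = load(); t.push({ text, done: false }); save(t); },
    toggle(i) { const t = load(); t[i].done = !t[i].done; save(t); }, // checkbox
    remove(i) { const t = load(); t.splice(i, 1); save(t); },        // delete button
    all: load,
  };
}

// A tiny in-memory stand-in for localStorage, for demonstration only.
const mem = new Map();
const fakeStorage = {
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
  setItem: (k, v) => mem.set(k, String(v)),
};
const store = createTodoStore(fakeStorage);
store.add('buy milk');
store.toggle(0);
```

In the generated app, functions like these would be wired in script.js to the text input, checkbox, and delete button; persisting through JSON in local storage is what lets the tasks survive a page reload.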

Now we'll just click on Go Live and see what the result is. So let's click on this.

    So guys, this is the to-do list it has generated for us and I can't see any functionalities added here

    34:41

which is quite a disappointment. Still, I think the features and the UI are pretty fine.

It's very basic, but again, I'll head back over to my VS Code and ask it to add more

    34:59

functionalities. So I've pasted a prompt from ChatGPT to add more functionalities to my app, and I'll just hit enter.

So as you can see, I had given the prompt that the app should include all the following features: add tasks, edit tasks, mark tasks

    35:15

as completed, and delete tasks. So I've given it more functionalities to add, and let's see how it makes the changes for us.

So we'll get to know whether it's actually worth using Claude Code and if it's actually implementing the changes or not. Okay.

    Okay. So, it's

    35:30

    saying that I'll enhance the to-do list app with all the advanced features you requested. Let me plan the update.

    Fine. We'll wait for a few minutes again.

    So, it says implementing comprehensive CSS styling for modern UI.

    35:47

So now, as you can see, our app is fully functional with a professional, modern interface. So, again, we'll just check back.

    We'll go back to our server and I'll just reload this again. Yeah.

So this is our advanced to-do list, and now you can also search your tasks across all

    36:03

categories. So yeah, it did implement a few changes here, and the UI is very basic and clean.

If you want to add more features and colors, you can also do that. For the time being, to show it to you guys, I had a very minimalistic UI added, and now you can add your tasks.

    36:20

You have different categories like personal, work, shopping, health, and education. We also have priority, like which task is high priority or low priority, and a date field is also added.

    So let's suppose I add a new task that is

    36:36

    shopping and I'll keep it as shopping. This would be maybe medium priority.

    And I'll add a date here. And then I'll just add a task.

    So now you can see that my task is added successfully. We have shopping

    36:52

    from here. You can also edit or delete this task if needed.

And yeah, it worked really well. So, not much to complain about.

So you can see how Claude Code can help you create an application or website like this without you needing to write a single line of code. So that's a

37:09

wrap-up of this video. If you have any doubts or questions, ask them in the comment section below.

Our team of experts will reply to you as soon as possible. Are you ready to find out which AI tool is truly a game changer?

Today we are diving into the showdown between Gemini CLI and Claude Code, two powerful tools that are

    37:27

revolutionizing the way developers write code. But which one reigns supreme? Let's find out.

    Let's find it out. So first up, we've got Gemini CLI powered by Google's Gemini model.

    This tool isn't just for coding. It's an entire AI ecosystem with

    37:42

    capabilities that go beyond writing code. Gemini CLI helps you manage files, perform web searches, and tackle massive multifile code bases.

Its incredible 1-million-token context window lets it handle large-scale projects with ease.

    37:58

And with its open-source nature and generous free tier, it's a versatile and cost-effective choice for developers. On the other hand, we have Claude Code by Anthropic.

    This tool is all about quality and precision. It's built specifically to help you write clean

    38:15

production-ready code, manage complex multi-file edits, and even automate Git operations. While it may have a smaller context window than Gemini, its focus on high-quality code generation makes it the go-to for developers who need reliability and efficiency in their

    38:32

projects. So, here's what we are going to cover today.

First of all, we'll take a look at Gemini CLI, walk you through its installation, and explore its unique features. Next up, we'll set up Claude Code and see how it offers a more structured, polished approach to coding.

    38:47

And finally, we'll put both tools to the test by building an app side by side, so you can see firsthand which one takes the crown as the best AI coding tool. So, let's jump in and see who comes out on top.

    Now, before we dive deeper, let me ask you this

    39:03

    question. What is the primary function of a large language model like Gemini or Claude?

    Your options are to process and analyze images and videos, to control the physical movement of a robot, to understand and generate humanlike text,

    39:18

    or to create new original code and design concept. Don't forget to mention your answers in the comment section below.

Now, before moving forward, I request you guys to not forget to hit the subscribe button and click the bell icon so you do not miss any future updates from Simplilearn. So, let's get

    39:34

    started. So, let's get started with the installation part.

So first you need to make sure that you have already installed Node.js, and then head over to the Anthropic Claude Code website. Here you'll find the command; just copy the command from here to install Claude,

    39:50

and then we'll go to the terminal and paste the command. So now, as you can see, our packages have been installed, that is, Claude, and to check it you can just type claude here.

All right. So now, as you can see, it's asking me whether I trust the

40:05

files in this folder or not. So I'll just click yes, proceed, and it says Welcome to Claude Code.

That means our Claude has been successfully installed. Now our second step is installing Gemini CLI.

    So what we'll do is we'll just copy this command from here. We'll go back to

    40:22

our terminal, and from here we'll just paste the command and wait for a few minutes. So now, as you can see, Google Gemini CLI has been installed on my system.

    It says added 476 packages in 2

    40:39

    minutes. So it took a bit of time.
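For reference, the two global installs described above look like this. The npm package names shown here are the ones the official docs publish at the time of writing; double-check the respective sites in case they change.

```shell
# Claude Code: global install via npm (requires Node.js)
npm install -g @anthropic-ai/claude-code

# Gemini CLI: global install via npm
npm install -g @google/gemini-cli

# Typing the tool's name in a project folder starts its interactive session
claude
gemini
```

The `-g` flag installs each CLI globally, which is why the `claude` and `gemini` commands then work from any folder in the terminal.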

Earlier, we installed Claude here. Now what we'll do is go to VS Code; from here we'll just select File and then Open Folder.

    So I have my claude code minimal folder

    40:54

    from here. I'll be selecting this folder now.

Now, after I've selected it, this is my program file. I'll add one more file, named claude code new.

    Okay. And now I'll just go to terminal from here.

    Select new terminal.

    41:12

And now what we'll do is again type claude to see if it's working or not. Now it says Welcome to Claude.

That means it has been set up successfully. We'll do it again for Gemini CLI as well.

Now, again, I'll open one more new window and select Open

    41:28

    folder. and then I'll select Gemini CLI from here.

So from here I'll be selecting the CLI folder, I'll just click on Select Folder, and we'll create one more new file named minimalistic website. Let's suppose this

    41:45

is my file name, and I'll just remove this gap from here. Okay, I'll just change the name again to minimal web.

All right. Now, what I'll do is type gemini here to check the Gemini installation. And guys, here you

    And guys, here you

    42:01

    can see that our Gemini has been successfully installed. And these are the tips for getting started.

So, what I'll do is enter a command to create an e-commerce website in both Gemini CLI and Claude Code, and let's see which

    42:17

performs better. So first I have Claude Code here, and I've given the prompt to create a minimalistic e-commerce website named MinimalMart using HTML, CSS, and JavaScript.

    I want the design to be clean, modern, easy to navigate and use a light background with

    42:33

subtle accent colors. And I want the homepage to include a hero banner, product categories, a grid-style product listing with image, name, price, and add to cart, and also a search bar at the top.

And I've also told it to add an add-to-cart button. Now we'll just

    42:49

    click on enter. And let's see.

It has started by creating the directory first. It says: I'll create a minimalistic e-commerce website with all the features you requested.

    Now let me plan first. And now it has come up with all the todos and it's going on to create HTML

    43:05

    structure for homepage with hero banner. Now it has already started with creating the HTML file.

We'll go to Gemini and give it the same prompt to create a minimalistic e-commerce website, and just hit enter. Now Gemini says it's

    43:22

planning the MinimalMart design and structuring MinimalMart's pages. Well, we'll wait for a few minutes.

So guys, as you can see, it has already told me what the plan is, and it says it will create the project structure and also add content to each

    43:37

file. And here I have it: mkdir minimal mart, allow execution? So I'll just select yes, allow always.

    So I'll just select on yes allow always. So as you can see it has already created a file folder as minimal M from here.

    Now you don't actually have

    43:53

to do anything. Just sit and relax, and Gemini will do everything for you.

And how can I forget Claude? I'll be showing you that one also.

Well, I'll just select allow always here. You can do it with your arrow keys. And it has

    And it has

    44:08

started to generate the code for me. Meanwhile, I'll go back to my Claude.

    And let's see what it has done. Well, it's taking time to create the HTML structure for homepage.

    Still, it is working on the index.html file.

    44:26

    All right. So, from here, you can again see the code.

Both are pretty fast at creating code, but what I felt is that Gemini CLI is a bit faster.

    And then it says do you want to make this edit? Yes, of course.

    Allow this all the time.

    44:42

    Okay. So now Claude is working on structuring the homepage.

So as you can see, it has crossed this item off. That means it has already created the HTML structure for the homepage, and now it's working on creating the product detail page's HTML structure.

    And meanwhile, Gemini is

    44:58

working on creating the JavaScript code; the add-to-cart, product, and index HTML are already done, and it's working on the JavaScript. Well, it's pretty fast compared to Claude.

    So yeah, it says I have created the minimal

    45:16

mart website with a clean, modern, responsive design, and the file structure includes index and style. It's pretty fast

right now. We'll see Claude as well, because I started earlier with Claude, and Gemini actually took up the

    45:31

race and went faster. Well, here I can see that Gemini actually wins on speed.

So, I'll just select Go Live and see what type of website it has created. So, guys, this is the website it has created, named Minimal

    45:48

Mart. Welcome to MinimalMart.

And these are the product categories: category one, the featured products, and it has also added an add-to-cart option.

Well, it looks pretty good to me, but what I felt is the UX design is a bit plain, and it has used similar types of images

    46:04

and icons to show this, right? It's more of a black-and-white color shade.

Well, it's fine. We'll just ask Gemini to add more functionalities and make some corrections.

    Meanwhile, let's see what Claude has done. So, it's still working on creating the CSS styling with

    46:21

    clean modern design. Now what I'll do is I'll go back to my uh website from here and I'll say please add dark blue color background and also

    46:40

more functionalities: a dark and light theme change button. Along with that, I want you to add more product

    46:56

electronic items, each named and with a matching image. Okay, let's see what type of

    47:11

changes it will make for me. Now, it says it's analyzing feature additions and designing the theme functionality.

    All right, we'll just wait till it does. Okay, here's a plan.

It will update the files and also add a theme toggle button, plus JavaScript to handle the switching

    47:28

and to save the user's preference. Okay, well, cool.
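The plan described here (switch the theme and save the user's preference) can be sketched in a few lines. The function and key names below are hypothetical, not Gemini's actual output, and `doc`/`storage` stand in for the browser's `document` and `window.localStorage`.

```javascript
// Hypothetical sketch of a theme toggle that persists the user's choice.
function toggleTheme(current) {
  return current === 'dark' ? 'light' : 'dark';
}

function applyTheme(doc, storage, theme) {
  doc.body.className = theme;      // CSS rules key off the .dark / .light class
  storage.setItem('theme', theme); // remember the preference for next visit
}

function restoreTheme(storage) {
  return storage.getItem('theme') || 'light'; // default to light on first visit
}

// Demo with stand-ins for document and localStorage.
const mem = new Map();
const storage = {
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
  setItem: (k, v) => mem.set(k, String(v)),
};
const doc = { body: { className: '' } };
applyTheme(doc, storage, toggleTheme(restoreTheme(storage)));
```

Persisting the choice is what makes the site reopen in the same mode the user last picked, which is the behavior being asked for in the prompt.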

Now we have Claude here. It's still working on it, right? It takes a bit of time. So Gemini is pretty fast compared to Claude.

    Takes a bit of time. So Gemini is pretty fast as compared to Claude.

Looking back at our Gemini CLI, it has already started adding more HTML elements here. And

    47:45

you can just look at the code from here. So it's editing, and now adding the extra functionalities and features I already told Gemini to add.

Also, one more thing: you can look at the number of tokens it is generating while performing the code analysis from

    48:00

here. And yes, finally Claude has completed its task, and it says: I have successfully created a complete minimalistic e-commerce website.

All right, now let's test it and have a look at what type of website it has

    48:16

    created. Well, I'm pretty excited.

Okay, this looks better. Okay, now it has added Welcome to MinimalMart, and this is the electronics, clothing, and home navigation.

Just look, while I hover, at the

    48:32

animations it's giving. I'm pretty impressed by this, because just look at the hover and the animations. And this is Gemini CLI's version, which I told it to

    48:48

edit, and this is what it did for me. It looks good to me.

The color layout, I told it to add dark blue and it has done that, and these are the product categories. This is the Shop Now option.

So when I click on that, you see the all products option; you can add items to the cart, and it says product added to cart. Okay, you

    49:05

    have headphones, camera, smartphones. This is fine.

    Looks good. You have home option, shop option and cart, right?

So as you add items, you can also look at the cart count and the total price. You also have this contact option where you can

    You can also have this contact option where you can

    49:20

    enter your name. Let's suppose I enter my name and I have my email from here.

    And you can also send a message. It actually works.

And okay, this looks pretty good to me. But then, comparing the two, I also like Claude Code's result, because

    49:36

this looks sleeker and more minimalistic. Now let's see, if I give the changes to Claude, what it does for me.

So I've told Claude to make the background dark blue and purple; I need a more interactive UI/UX design, and

    49:52

to add a dark and light mode toggle button for the user. Let's see what type of changes it makes for me.

Well, in the case of Gemini, I told it to add a dark and light mode button, but I could not actually see it.

So, I'll say it again: a dark and light mode

button

50:10

option is what I need on the front home page. Okay.

So, it's considering the placement now and making changes again. Meanwhile, as you can see, Claude Code is updating the CSS with dark blue and

    50:25

purple background themes, and it will take a while. Now, honestly speaking, at the start I felt Gemini CLI was faster, but when you look at the results, I feel Claude Code wins this game, because it took time but the result it gave

    50:40

me is pretty awesome, right? I mean, you can compare for yourself from here, this and this. Right, now let's add the extra features I told them to add, and let's compare the final versions. All right.

    So Claude here has

    50:56

successfully made the changes. It says: I have transformed the MinimalMart website with a stunning dark blue and purple theme, and these are the changes it made.

The new visual design, dark and light mode toggle options, enhanced interactive elements, advanced animations. Let's have

    51:12

a look at the final look of our website using Claude Code. So guys, this is the final look of our website using Claude Code.

It says Welcome to MinimalMart. And just look at the animations it has added.

    This is pretty impressive, right?

    51:28

Look at the cursor while I move it: it's like a circular purple glow, which is pretty impressive.

    And the other options electronics, clothing, home and garden, books. And while I hover to this, it actually floats right.

    So it has added a lot of animations.

    51:45

What I felt is that the purple color I asked for is not very visible against this background.

    We can obviously change that. This is just a minor mistake.

    But apart from this, this looks pretty cool to me. The animations, the styling is actually good.

    Let's look at the option

    52:01

of the dark and light mode. So as you can see, if I just click on this, this is the dark mode.

And if the user wants light mode, they can click on this. This is such a cool feature it has added.

And this works so smoothly. Now, in the case of Gemini, let's have a look at whether it has made

    52:18

the changes for me. So it says it has updated. All right, it has updated and made all the changes. I'll just go back to my website and reload this.

    All right, it has updated and done all the changes. I'll just go back to my website and reload this.

    So this is the final look of my website.

    52:35

Welcome to MinimalMart. Again, the color theme: I feel we could have gone with maybe a straight black color.

Now this is also one more minor mistake. And guys, it has finally added this dark and light mode, right?

    So, this is the light mode and this is the dark mode. I like the

    52:52

dark mode. Now, let me know which one you liked.

Now, according to me, if you ask, I feel Gemini added more pictures here and tried to make it interactive. But

    53:09

then, it was pretty fast compared to Claude Code. But if you look at Claude Code's result, it took a lot of time, but the styling, the layout, and the animations it added are pretty awesome and appreciable.

So I would definitely go for Claude Code. As technology

    53:26

    advances in all aspects of our lives, programming has become increasingly important. It is used in many fields and industries including software development, gaming and entertainment, education, scientific research, web development and many more.

    So needless to say, the demand for programming and

    53:43

coding in the IT industry will probably keep increasing for the foreseeable future. But where does ChatGPT, OpenAI's popular language model, fall in this chain?

That's exactly what we are focusing on in today's video. As I said earlier, programming is utilized in

    53:59

many domains like web development, robotics, mobile development, machine learning, and so on. So how can a programmer achieve maximum code efficiency?

Nowadays we have UI-based tools like ChatGPT to make our programming experience more efficient. Although

    54:14

there are several coding resource platforms such as Stack Overflow and GitHub where programmers can find solutions to their technical programming questions, ChatGPT stands out from the competition because of its quick response time, usability, and support for numerous languages, among many other

    54:29

    benefits. Now let's first discuss how Chad works.

    ChatGPT generates responses to text input using a method called the transformer architecture. A large volume of text is fed into ChatGPT from various sources, including books, websites, and other social media

    54:44

    platforms. The model then uses this information to forecast the following word in a phrase based on the words that came before it.

    The ChatGPT system allows users to enter text or queries, and then the system uses its training data and algorithms to produce the right answer. The answer is created after the

    55:00

    input text has been examined and the patterns most likely to match the input have been identified using the training data. In short, ChatGPT is designed to respond to queries and commands logically, quickly, and accurately.
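    To make that "forecast the following word" idea concrete, here is a toy sketch of my own (not how ChatGPT actually works internally; a real transformer uses learned weights, not raw counts): predict the next word from bigram counts over a tiny corpus.

```python
from collections import Counter, defaultdict

# A tiny "training set" of text, split into words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words followed it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen after `word`,
    # or None if the word never appeared in the corpus.
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None
```

    A real language model does the same kind of "given the words so far, pick a likely next word" step, just with a vastly larger training set and a learned neural network instead of a lookup table.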

    But why do programmers use ChatGPT on a regular

    55:16

    basis? ChatGPT assists programmers by offering programming-related answers and solutions and helping them improve their skills.

    Besides that, ChatGPT is utilized for code generation, code completion, code review, and as a natural language interface. Let us understand each in

    55:32

    detail. ChatGPT is trained to generate code, or even an entire program, from a natural language description: users specify what they want a program to do, and ChatGPT generates the relevant code.

    Look at this example of how ChatGPT generates code. So

    55:47

    now, open ChatGPT. I will ask it to write a palindrome program in Java.

    So here you can type: write a palindrome program in Java.

    56:04

    So, using the Java programming language, it should generate the whole program. And as you can see, it has generated the program.
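    The transcript doesn't show ChatGPT's exact output, but the idea it describes (a PalindromeChecker class with an isPalindrome method) boils down to a two-pointer check. A minimal Python sketch of the same logic, for reference:

```python
def is_palindrome(s):
    # A string is a palindrome if it reads the same forwards and backwards.
    # Compare characters from both ends, moving inward.
    s = s.lower()
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True
```

    The Java version in the video wraps the same check in a class and method; the comparison loop is identical in spirit.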

    So it has used a class

    56:19

    named PalindromeChecker, it has used isPalindrome as the method name, and it also gives an explanation of the program. So you can see it explains why isPalindrome is used as a method, and it also explains the

    56:36

    for loop, the if condition, and so on. Next, we have code completion.

    ChatGPT is trained to generate snippets of code or even fully fledged programs. It can generate a list of possible code completions depending on the context of the user's incomplete piece of code.

    By

    56:53

    automatically producing the entire code, it can help developers save time and minimize errors. Next, let's see an example of code completion using ChatGPT.

    So even if the program is described in natural language, ChatGPT will generate the proper, complete code. So let's type here:

    57:12

    using a function, write a program to convert a string to uppercase.

    57:27

    Using which language? Let's use C, and enter it once again.

    So as you can see, we have just said: using a function, write a program to convert a string to uppercase. So,

    57:44

    using the C programming language, it has used a function, the convert-to-uppercase function, and has given the complete code to convert a string to uppercase, and also

    58:00

    it gave the explanation here: the convert-to-uppercase function takes a pointer to a string as its argument and then iterates over each character in the string using a for loop. So it explains why the for loop is used, why toupper is used, and why the convert-to-uppercase

    58:17

    method is used, everything. So let's say we'll give one piece of code, like void toUpper(char

    58:33

    *str). So as you can see, we just gave the toUpper method signature, and it's generating the complete code.
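    For reference, the C function described here walks the string and applies toupper to each character. An equivalent Python sketch of that loop (the C identifiers in the video are only paraphrased here):

```python
def convert_to_uppercase(s):
    # Mirror the C version: visit each character and uppercase
    # lowercase letters, leaving every other character unchanged,
    # like C's toupper.
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            ch = chr(ord(ch) - ord("a") + ord("A"))
        out.append(ch)
    return "".join(out)
```

    In C the same effect comes from looping over the char pointer and assigning toupper(str[i]) back into the buffer.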

    So this is how ChatGPT

    58:51

    works for code completion. Next, code review.

    ChatGPT can analyze code, identify bugs or errors in the program, and help resolve them. It allows developers to fix errors more quickly.

    So now let's have a look at an example of code review. In this

    59:06

    example, ChatGPT will review the code. Even if the code has some mistake, it should give the proper output.

    Let's say we have given the example here. So we give a method called upper, and inside it we are calling the keyword

    59:23

    called upper. It should check whether this piece of code is correct or whether there is any mistake in it.

    So, as I said, it's saying that the given code appears to have a logical error, as the function upper is being called

    59:39

    recursively on itself inside the loop. So instead of giving toupper, we just gave upper here, right?

    Only by using the keyword toupper can the string be converted to uppercase. But here we gave just upper.

    So it says that

    59:57

    this piece of code has a logical error, and it gives the proper code for us. So I hope that's clear. Next, we have the natural language interface.

    With the use of ChatGPT, a software application can be given a natural language user

    00:13

    interface that enables users to communicate with it through natural language instructions rather than through conventional user interfaces. Next, let's see how ChatGPT helps programmers with a natural language interface.

    So let's say we'll type here: create

    00:29

    a software application where the user is asked to enter credentials

    00:45

    for the to-do app. Click enter.

    So as you can see, ChatGPT will give

    01:01

    the steps. It can provide you with an outline for creating a software application that requires the user to enter credentials for a to-do app.

    So here are a few steps that we need to follow to build the to-do app.

    01:21

    So it's giving the explanation step by step.

    01:40

    So it says: determine the programming language and framework, then set up the database to store the user information, then create the registration page, and finally create the login page as well.

    And once the user is successfully logged in,

    01:56

    it will have options like add, edit, and delete task as well. Then, finally, implement security measures to protect the passwords, and test the application to ensure that it works as intended and that the user data is being stored and

    02:11

    retrieved correctly. So it gives the steps of how it has to be developed.

    I'm sure you are all aware of ChatGPT at this point. The revolutionary new AI-based chatbot developed by OpenAI has taken the world by storm thanks to its near-

    02:27

    lifelike responses and very intricate pattern of answers. We have never seen this level of expertise from a chatbot before, which really made us think: to what extent can we push it?

    There are many questions on LeetCode that even the most experienced programmers have

    02:43

    difficulty answering. So we wanted to see how far ChatGPT can take us.

    Have we finally reached the stage where AI is going to replace us? Let's find out.

    So basically, here we will be listing 10 really difficult questions that we found

    02:58

    on LeetCode, popularly asked in hiring and other rigorous examinations, and see if ChatGPT can actually solve those difficult questions or not. But before that, if you would like to watch more such interesting videos, then do subscribe to our YouTube channel and hit the bell icon to never miss an update

    03:15

    from Simplilearn. So let's get started.

    So here is LeetCode. Let's see which is the first question in our list that we are going to give to ChatGPT and see if it's able to solve it or not.

    Mainly we'll focus on hard category questions only. So

    03:34

    according to my research, there's the question of median of two sorted arrays. As you can see, the success rate is

    03:50

    35.7%. So let's see if ChatGPT is able to do this question.

    So first let us go through the question.

    04:06

    Okay, pressed enter. Let's see what it first returns.

    04:26

    One approach to solving this problem is to use a modified binary search algorithm to find the median of the two sorted arrays. So right now it's giving the logic we can actually apply to solve this question, and this is a good thing

    04:43

    about ChatGPT: before giving the code, it actually explains how the logic is put together in the code. So you can probably use this logic to create your own program.

    But let's see how sensible this

    04:58

    code is. Quite a lengthy program in Python.

    So if you are looking for solutions on ChatGPT, you can always mention the specific language you want the code in.

    05:15

    Okay, it also says that the time complexity will be O(log min(m, n)). So let's see if that's the case or not.
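    For comparison, here is a working version of that modified binary search (my own sketch, not the exact code ChatGPT produced in the video): binary-search the partition of the shorter array, which does give O(log min(m, n)).

```python
def find_median_sorted_arrays(nums1, nums2):
    # Binary-search how many elements to take from the shorter array
    # so that the combined left half and right half are balanced.
    if len(nums1) > len(nums2):
        nums1, nums2 = nums2, nums1
    m, n = len(nums1), len(nums2)
    half = (m + n + 1) // 2
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2          # elements taken from nums1
        j = half - i                # elements taken from nums2
        left1 = nums1[i - 1] if i > 0 else float("-inf")
        right1 = nums1[i] if i < m else float("inf")
        left2 = nums2[j - 1] if j > 0 else float("-inf")
        right2 = nums2[j] if j < n else float("inf")
        if left1 <= right2 and left2 <= right1:
            if (m + n) % 2:
                return float(max(left1, left2))
            return (max(left1, left2) + min(right1, right2)) / 2
        if left1 > right2:
            hi = i - 1              # took too many from nums1
        else:
            lo = i + 1              # took too few from nums1
    raise ValueError("inputs must be sorted")
```

    On LeetCode this body would sit inside a Solution class with self as the first parameter, which is exactly the mismatch the video keeps running into.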

    05:31

    So we have copied it, and we'll quickly paste it over here. As you can see, as we know, Python is very

    05:47

    sensitive about its syntax, and you can see the indentation over here is perfect, but here it's not. So I feel this is something with LeetCode.

    So let me just quickly

    06:03

    rectify this. You can see I have cleared out the indentation issue, and let's just quickly run this program so that we can get an idea of whether this is the correct program.

    06:20

    Now you can see here we have an error. Let's see.

    Let's copy this and see what ChatGPT has to say about it.

    06:58

    So, if you remember, when we actually saw this question, there were three arguments passed to this function, which were self, nums1, and nums2. Let's ask ChatGPT if it can write the code with the self argument and see if it's correct.

    07:22

    Let's ask whether it can pass a self argument through that function and see if it can generate new code. Okay, now it has clarified that.

    Yes, it takes three arguments.

    07:43

    Let's copy this code over here again. Then I think we will have to go through the indentation process.

    07:58

    Oh no, this time it's fine. Okay.

    So now let's quickly run this program. Let's see if this time it passes all the test cases or not.

    Okay.

    08:15

    I think the class line isn't needed, because the class is already mentioned over here. Yeah.

    08:31

    Okay. The runtime is 35 ms.

    And as we can see, case 1 and case 2 have definitely passed. So let us see if this code can pass all the test cases

    08:47

    internally mentioned in this question. Now you can see the first three cases are actually accepted, but the other three are not.

    There's a runtime error.

    09:03

    So ChatGPT was unable to solve the first question in our list. Let's move on to our next question,

    09:21

    that is, zigzag conversion. Now let me quickly search for it.

    This is the question. Here also you can see the success rate is definitely below

    09:37

    50%, and the difficulty level is medium. So the hard one ChatGPT was not able to solve.

    Let's see if this can be done.

    09:53

    So we are given a string that will be written in a zigzag pattern on a given number of rows, and then you have to read it line by line, as you can see over here. So

    10:09

    we have to write code that will take a string and make this conversion given a number of rows. The question has also given certain specifications that we exactly want.
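    As a reference point for this problem, the usual approach is to simulate writing the characters row by row, bouncing between the top and bottom rows. A minimal sketch of that standard algorithm (my own, not the code generated in the video):

```python
def convert_zigzag(s, num_rows):
    # Walk the string once, appending each character to the row the
    # zigzag "pen" is currently on; flip direction at the edges.
    if num_rows == 1 or num_rows >= len(s):
        return s
    rows = [""] * num_rows
    row, step = 0, 1
    for ch in s:
        rows[row] += ch
        if row == 0:
            step = 1            # heading down
        elif row == num_rows - 1:
            step = -1           # heading back up
        row += step
    # Reading "line by line" is just concatenating the rows.
    return "".join(rows)
```
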

    So this time we'll make sure that

    10:26

    we are mentioning everything. So let's quickly copy this.

    Yeah.

    10:54

    Okay. Now that we have mentioned all the specifications, uh let me quickly see

    11:19

    read and paste it. Now let's see what code it has to generate.

    It's implementing the code in C++. So meanwhile, it's generating the code.

    11:36

    Let's quickly select the code. Oh, it's already C++.

    Okay.

    11:53

    Okay. It's also suggesting that we can definitely use Python and Java, and it's generating alternative code for us as well.

    That's sharp.

    12:10

    Let's quickly copy this code. Okay, so it's also generating it in Java.

    It's generating. We'll definitely have a

    12:26

    look at its alternative codes as well. First, let's quickly have a look at the C++ one, to check whether

    the first code generated by it is correct or not. Okay.

    Copy the code.

    12:50

    Paste it over here. Compilation error.

    13:06

    Okay. Okay.

    28. Okay.

    So let's see the error.

    13:22

    The one thing is, definitely, this time we have mentioned all the constraints, criteria, and specifications that we wanted in our code, but again ChatGPT

    13:42

    has given an error. Given the questions we have implemented till now, let's see if it has any further success. Okay, it seems okay.

    14:02

    Okay, now it is okay. So this time it is generating the solution for the error, in Java.

    14:19

    Let's see what it has to say in Java, and we'll make it specific that the error was in the C++ program, so that it generates the correct C++

    14:39

    program.

    14:55

    what it has to say. Oh, it's nice.

    15:19

    See, this error is something related to the compiler, and now it is giving the updated code in C++. Let's see whether it is the correct one or not.

    15:37

    It is apologizing for making errors in its solutions. Quite fascinating.

    Yeah, let's paste the code over here. I don't see

    15:54

    a lot of difference or changes over here. Let's see if it runs or not.

    Let me see if all the braces are covered over here or not.

    16:36

    I think it's missing a brace.

    17:10

    Okay. So, there was a syntax error.

    One brace was missing. I don't know, that's definitely something with the code, but okay, we can forgive ChatGPT there.

    It was

    17:25

    partially a copying issue, because we were copying the code; still, it was not catching that there is a brace missing. It has passed the first three cases

    17:40

    mentioned over here, and the runtime is 3 ms. Now let us just submit this code and see if it passes all the rest of the cases or not.

    Mind you, this time we have actually mentioned all the constraints. So let's see how this turns out.

    18:01

    Okay. So this time it has passed all the test cases.

    But still, my conclusion with this question is that it was not able to generate the solution in one go. But I can give that to ChatGPT, because the first error that we faced

    18:16

    was more of a LeetCode issue, since it was something with the compiler, and ChatGPT was able to give a proper fix. Now let's go back to our problems

    18:33

    list. So right now the score is one and one: it was unable to solve one question and able to solve the other. So let's have a look at the third question and see if that brings any difference to ChatGPT's

    18:51

    scoreboard or not. The third question that we are going to deal with is substring with concatenation of all words.

    This is in

    19:09

    the category again. And the success rate is 31.1%.

    Which is even lesser than the first question that we faced which was median of the two sorted arrays. We trying here we are actually trying to

    19:26

    cover all the spectra the huge spectra of different types of uh questions and you know categories available in coding. and uh to give you an idea of uh how

    19:41

    beneficial chart GPT can be for you to solve difficult questions which can be helpful for your interview base in companies or you you can say well established companies or among companies. So here this video is

    19:58

    specifically for you to give an idea that whether you can use it for your benefit and you know to get an idea or you can actually uh compare it with yours uh and you can get a you know wider

    20:15

    range of different types of approaches to a certain question. So let's start with the third question.

    20:31

    The question is that you're given a string s and an array of strings, words, and all the strings in words are of the same length. Now, a concatenated substring in s is a substring that contains all the strings of any permutation of words

    20:48

    concatenated. Here you can see it has given an example: if words is ["ab", "cd", "ef"], then "abcdef" is a concatenated substring; basically, it covers all the permutations and combinations that can be made using that specific array,

    21:06

    while "acdbef" is not a concatenated substring, because it is not the concatenation of any permutation of words. So we basically have to return the starting indices of all the concatenated substrings in the string s,

    21:26

    and you can return them in any order. Here it has also given two examples for you to understand the question better.
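    Before seeing what ChatGPT produces, here is a simple reference solution of my own using word-count multisets over fixed-size windows (fine for illustrating the idea; faster sliding-window variants exist):

```python
from collections import Counter

def find_substring(s, words):
    # Every candidate window has length len(words) * word_len.
    # A window matches iff its word chunks form the same multiset
    # as `words` (i.e., some permutation of words, concatenated).
    if not s or not words:
        return []
    word_len = len(words[0])
    total = word_len * len(words)
    need = Counter(words)
    result = []
    for start in range(len(s) - total + 1):
        seen = Counter(
            s[i:i + word_len]
            for i in range(start, start + total, word_len)
        )
        if seen == need:
            result.append(start)
    return result
```
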

    Now let's copy this question and see

    21:44

    what programming language ChatGPT chooses to answer with this time, in a new chat again.

    22:07

    Copy these constraints.

    22:25

    Okay, it has read the question. Okay, this is the response that we're

    22:43

    getting right now: you requested a model that is not compatible with this engine.

    Please contact us through our help center at help.openai.com for further questions. Let me just quickly refresh it.

    It may have something to do with demand;

    23:00

    sometimes it happens that the console is very busy and you're unable to run your task in it. So again, let's quickly paste it over here

    23:16

    and mention that we want the code: write the code, and press return.

    Now

    23:38

    press enter. It's definitely not giving the same error. Let's see if this time ChatGPT has anything to give as a solution.

    Let's see it this time it charge has anything to give as a solution.

    23:56

    Okay. So, it's generating the code in Python.

    One more thing: ChatGPT doesn't

    24:11

    always follow a similar pattern. As you can see, in the first question it explained the logic first and then implemented the code.

    The second time, it just gave the approach, not the detailed logic, and then implemented the code in multiple languages, first choosing C++,

    24:31

    and this time it went straight for the code. So we can definitely say that it has different styles of generating and explaining code.

    I think it depends on how it decides the code should be presented to the user, so as to

    24:47

    give the perspective of whether the code is understandable or not; and if a question has multiple approaches, I think ChatGPT is capable enough to give them. The code is generated; let's quickly copy it

    25:03

    and paste it over here. I feel the indentation issue is going to be there. It was not there.

    25:18

    Okay, probably.

    25:36

    I think if we backspace...

    Sure. Let me quickly rectify this, and I'll

    25:52

    show you once the indentations are fixed.

    28:48

    See, the indentation is corrected. Now let's have a look at whether the code is correct or not.

    Let's quickly run it. It has definitely given our first syntax

    29:06

    error. Again, I can see that for every question it runs into at least one error, which is mainly a syntax one.

    I'm not sure if that is something with Leet-

    29:22

    Code or with ChatGPT's code generation.

    29:39

    Okay, now that we have given it the error: okay, it seems that the error is caused by the use of type hints; function type hints were introduced in a later Python version, and the version we're using is lower than that. Okay.

    So basically, it's now generating code for the right

    29:56

    Python version. This code was probably well suited to a different version of Python.

    Let's change it then and see if that helps

    30:15

    it. It is actually generating the code.

    Okay, again, I think it's again something with the self

    30:32

    argument in Python. The new code it generated is here; we'll definitely come back to that error and have a look at the Python 3 code also. First, let's copy this code,

    31:28

    and it's giving the same error; let's see what it has to say.

    31:49

    Every time ChatGPT generates the Python code, it never takes self as an argument; but, as you can see, when we started with the code, it was already mentioned what arguments we

    32:04

    need to pass to that particular function. So I think that is something with LeetCode, regarding what arguments it passes, even though we have mentioned everything this time. We have mentioned all the constraints.

    We have mentioned all

    32:19

    the necessary specifications that we want in the code. Even then, the code is not correct in one go.

    So I'll probably give this point to ChatGPT; it's something with LeetCode,

    32:35

    because it's about passing that parameter, and pretty much whenever we mention that parameter, ChatGPT is able to solve the question. So let me actually mention that it should

    32:52

    pass the self argument, making it one of the parameters.

    33:38

    It's processing that: yes, self can be passed as the first argument to the given function. Now let's see if it's able to give the

    33:54

    correct solution or not. Again, we can see it's generating the code in Python 3, but we can give ChatGPT that, whether it uses Python or Python 3.

    34:13

    The error is with the self argument. So once we mention that error, and when we mention the specification that ChatGPT should pass the self argument through that particular function in the code, the

    34:28

    solution is pretty much right. So here it's also implementing it and giving the answer.

    Okay. It has also mentioned that it's

    34:43

    important to import List and Counter. So, okay, let's just copy this, because again there will be a lot of indentation issues.

    35:05

    Okay, now that we have it, let's copy it; the code is already mentioned. So yeah, we'll just

    35:23

    copy it from the function. Paste it.

    Let's

    35:49

    run this code and see if it has the solution in it or not. Yeah, there is an indentation issue.

    Let me

    36:24

    Again, as you can see, it is able to pass all the test cases here, and the runtime is 28 ms. Now let us submit this question and see if it is able to pass the other test cases or not.

    36:42

    Accepted. So it is able to pass all the test cases, and I think this is something with LeetCode again: whenever we are generating the Python code, we actually need to pass the self argument on LeetCode, but ChatGPT is

    37:00

    not assuming it. So this solution is definitely correct.

    Even though we are specifying everything, we will have to be more specific, that we have to pass one more argument to the function, so that it generates the

    37:17

    solution in one go. So let us try that in our next question.

    We can definitely see ChatGPT was able to solve this. So now the scoreboard is two and one.

    Among the three questions, it

    37:32

    was definitely able to solve two. Even for the first question it was able to generate a reasonable solution, but it was not accurate enough to pass all the test cases.

    37:49

    Back to our question list. The next question that we are going to cover is N-Queens.

    38:08

    It is in the hard category, but still the success rate is 63% over here. Again, it's a new genre of question we are covering.

    Let us see if ChatGPT can solve it or not. The success rate definitely shows that many people were able to

    38:24

    do it; definitely more than half of the people who have attempted it.

    So let's see if this AI can beat that or not. I have copied this question. This

    This

    38:42

    question mentions that the N-Queens puzzle is the problem of placing n queens on an n x n chessboard such that no two queens attack each other.

    Given an integer n, return all distinct

    38:58

    solutions to the N-Queens puzzle. So basically, n is any given number, you have to create a board of n x n, and you have to arrange all the queens (the number of queens on the board will be equal to the

    39:14

    number n) in such a way that no queen can attack another in any case. That is the question, and there is just one constraint.

    Let me

    39:36

    write that down; it will be easier to keep track of which questions ChatGPT is able to solve.

    39:55

    So these questions are very popular uh in interviews uh whenever you actually go to technical rounds and uh or prestigious companies. These questions are very popular.

    Uh

    40:11

    They are considered very suitable questions to check your IQ and your potential, how well aware you are of your coding abilities.

    40:29

    Now that we have pressed enter, it is giving the logic it is going to implement in its code. One approach to solving the N-Queens puzzle is to use backtracking.
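    That backtracking idea can be sketched as follows (a minimal reference implementation of my own, not the exact code generated in the video): place one queen per row and track the columns and diagonals already under attack.

```python
def solve_n_queens(n):
    # Backtracking: one queen per row; a square is safe if its column
    # and both diagonals are free. row - col identifies one diagonal
    # direction, row + col the other.
    solutions = []
    cols, diag1, diag2 = set(), set(), set()
    placement = []  # placement[row] = column of the queen in that row

    def place(row):
        if row == n:
            solutions.append(
                ["." * c + "Q" + "." * (n - c - 1) for c in placement]
            )
            return
        for c in range(n):
            if c in cols or (row - c) in diag1 or (row + c) in diag2:
                continue  # square is attacked
            cols.add(c); diag1.add(row - c); diag2.add(row + c)
            placement.append(c)
            place(row + 1)
            placement.pop()
            cols.remove(c); diag1.remove(row - c); diag2.remove(row + c)

    place(0)
    return solutions
```
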

    The idea is to start by placing a queen in the first column of the first row. So

    40:44

    let's see if the code is again capable of solving it or not. Again, it's giving Python code.

    41:00

    Generating the code. Let us quickly see whether it's Python or Python 3.

    It's Python 3. Okay.

    Once the code is generated, I'll also

    41:15

    write that it needs to pass one more argument in the main function, that is, self, and let it generate the code again and see whether that code can run in one go or not.

    42:10

    That's what I want; let us see. Sure.

    Here's an example of how you can pass an additional parameter self. Let's see if it suits the code or not.

    42:54

    The solve-N-Queens function is a method of the N-Queens class, and it takes self as its first parameter, followed by the integer n. So yes, this solution does take self as a parameter.

    So let's see if this can run in one go or not

    43:11

    because this time we have already covered the most frequently generated error, which is the error of not mentioning the parameter self. Let me check whether the indentation is correct or not.

    43:45

    This time, let us quickly run this class.

    44:06

    Okay, there you go. Now you can see that it has actually run the code in one go, and all the test cases have passed in one go.

    The runtime is even 39 ms. So ChatGPT is definitely able to provide the solutions and the logic in a proper

    44:23

    manner. It's just that we have to be more specific about what exactly we want from ChatGPT.

    Right now we can see that it has been able to successfully generate four out of five questions that we have actually implemented till

    44:39

    now, out of which four are from the hard category. Let us submit this code and see if it also passes all the test cases internally fed in for this question.

    Just a second. I think I will have to

    44:55

    submit it again. There you go.

    It has accepted all the test cases, and this question is done by ChatGPT. It has actually implemented it.

    45:14

    On to the next question. Till now, I can say ChatGPT has taken the lead; it is pretty much able to implement all the questions.

    I think there is still some range of questions that it is not able to implement. As we saw, the first one was not a huge success,

    45:34

    but I can still give that to ChatGPT, as it is an AI and still in a developing mode. Still, if it can give you 90% of the output correct, that is a pretty decent and amazing thing.

    46:02

    The next question is shortest subarray with sum at least K. Now let me search for it.

    46:21

    Again, this question is from the hard category, and its success rate is even lower, at 26.1%. Let's see if this question can be solved by ChatGPT.

    46:36

    Let's have a look at the question. Given an integer array nums and an integer k, return the length of the shortest non-empty subarray of nums with a sum of at least k.

    If there is no such subarray, return -1.

    46:53

    A subarray is a contiguous part of an array. It has also given a description of what an array or a subarray is.

    So now that we have

    47:10

    a new chat, let me paste the question.

    47:30

    So here is one way to solve the problem: initialize two pointers, left and right, both pointing to the first element of the array.

    Initialize a variable... So this time it is giving pointers for solving this question, a

    47:46

    clear approach in a sequential manner, so that you can also use these pointers and the logic it's giving to implement your own code, apart from asking it to generate the code. So I can see that it has given the pointers,

    48:02

    but not the specific code.
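    Worth noting: because nums can contain negative numbers, a plain two-pointer scan like the one described can fail; the standard approach instead keeps a monotonic deque of prefix-sum indices. A minimal sketch of that (my own reference, not the code from the video):

```python
from collections import deque

def shortest_subarray(nums, k):
    # Monotonic deque over prefix sums; needed because nums may contain
    # negative values, which break the shrink-the-window invariant that
    # a two-pointer scan relies on.
    prefix = [0]
    for x in nums:
        prefix.append(prefix[-1] + x)
    best = len(nums) + 1
    dq = deque()  # indices into prefix, with strictly increasing sums
    for i, p in enumerate(prefix):
        # A subarray ending here is valid: pop the earliest start.
        while dq and p - prefix[dq[0]] >= k:
            best = min(best, i - dq.popleft())
        # Drop starts that can never beat the current index.
        while dq and prefix[dq[-1]] >= p:
            dq.pop()
        dq.append(i)
    return best if best <= len(nums) else -1
```
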

    48:18

    Oh yes, it's giving an implementation of the algorithm in Python. Again, we can see it's not Python 3; it's a slightly lower version of Python.

    Um,

    48:37

    Let's have it switch to Python, and we have to pass self.

    49:14

    So it has generated the code and explained what functionality the individual variables and statements have. We have also asked it to pass the self parameter through its function and then write the code.

    So let's see if it adheres to that and

    49:30

    generates new code with the self parameter. Okay, so here is the new code with self passed through the main function.

    49:48

    Now, I do have confidence in ChatGPT, since till now it was able to generate logical and pretty decent solutions for every question.

    50:05

    It is passed to the class as an argument when an object is created and is stored as an instance variable. The shortest-subarray method takes k as an argument, which is the target sum.

    So basically, it's trying to explain the code: what

    50:22

    exactly it's doing, what influence the individual statements have on the code, and what role the parameters being passed hold in the code.

    50:39

    Definitely, ChatGPT is not just generating the code; it's also explaining the logic and the approaches towards it. And when it's generating the code, as you can see here, it has genuinely explained how the whole code is actually

    50:55

    functioning. That's a good way to build confidence in the solution. Pushing

    51:15

    this. Yeah.

    51:34

    Okay, let me see. Okay, so it's apologizing: it made a

    51:51

    mistake in its previous response. The init method should take two arguments, one for the nums array and one for k.

    So now it's again generating new code according to the syntax error or type error that was generated.

    52:11

    It has mentioned that here the method takes two arguments, one for the nums array and one for k, which was exactly the error.

    52:29

    And it's again explaining the whole code, so that a person who reads the code or the logic can understand its functionality.

    52:48

    Let's see if this code can run. Checking for indentation, removing the class,

    Removing class

    53:04

    and run. One more type error.

    Let's copy it and

    53:26

    and we mention that we want the self argument to be passed. It is still not able to generate the correct solution, twice now.

    So let's see if this time it can work

    53:41

    out. There is an issue, though, with the code.

    It apologizes for the confusion; it seems that it misunderstood the

    54:06

    situation: you are trying to call the function directly without creating an instance of the class. It suggests defining the function outside of the class and simply calling it by passing the parameters, like this.

    Okay, but the code was definitely generated by ChatGPT itself, so I am not the one calling any function.

    The code I copied was

    54:25

    actually generated by ChatGPT itself. So it's contradicting its own pointers.

    Maybe we have mis-copied the code. Let's paste it.

    Again

    54:42

    look for indentation and run this code.

    55:01

    Okay. Once again it fails; an error is there.

    Now again we have to

    55:16

    copy this error and paste it.

    55:42

    Again, according to ChatGPT, it was confused by the context of the question. Like I said, the code was generated by ChatGPT anyway.

    So yeah, this time I think ChatGPT is contradicting its own logic.

    56:00

    Let's see if the current code can do the miracle of solving this question.

    56:30

    class and run. Okay.

    So after four attempts of running the code generated by ChatGPT on

    56:46

    LeetCode for this particular question, it finally passes all the test cases. So I have a very conflicted view right now.

    Not exactly conflicted, more of a

    57:02

    skepticism: ChatGPT does generate proper code and logic, but it doesn't consider all the criteria, and it also has a tendency of taking the

    57:18

    question in the wrong context. I feel that when we as humans try to solve these questions, we try to implement all the logic, and if we get it right we can actually produce the code

    57:34

    in one go, matching the demands of the question. AI is trying to be a superior version of the human brain and to reach its extents, okay,

    57:51

    but it still faces these issues, which can be a drawback for ChatGPT, because for this specific question we faced multiple types of errors and saw ChatGPT contradicting its own prior code. So

    58:10

    this is definitely something to think about. Let's submit this code.

    58:32

    Like I said, even after four attempts it is still not able to pass all the test cases on submission. It passes only 61 out of 97 test cases, which is about 63%;

    58:50

    the rest of the test cases still fail. Even though we mentioned all the constraints, the comments, and all the errors this particular code can run into, it was still not able to generate code that could get

    59:06

    through all the test cases. So this was a fail for ChatGPT. At least by now we have made the point that ChatGPT is definitely not able to solve every

    59:24

    question. Next up: Split Array With Same Average.

    All right.

    59:44

    This is from the category of hard questions, and the success rate is only 25%.

    59:59

    Let me refresh it and remove this code. Now let's have a look at the question and what it demands.

    You are given an integer array nums. You should move each element of nums into one of the two arrays A and B such that A and B are non-

    00:16

    empty and the average of array A is equal to the average of array B. Return true if it is possible to achieve this, and false otherwise.

    It also gives a note that for an array,

    00:35

    the average is the sum of all the elements of the array over the length of the array. Okay.

    So it is giving you the logic of how to find the average, or what the average is exactly, and it has also given a few examples to show how the code needs

    00:52

    to be implemented. Let's copy the question and paste it.

    01:23

    Okay: it is possible to achieve this by checking all the possible subsets of the nums array and comparing the averages of the subsets. So it has given the approach, the way you can think about solving this question.

    01:38

    However, this approach would be expensive: it notes that it has a time complexity of O(2^n).

    So it's suggesting a different, more efficient approach using dynamic programming.

    01:56

    Dynamic programming can be used to find the subsets with a specific average: create a 2D array indexed by i and j, where the dimensions come from the input we have given, representing

    02:11

    whether or not it is possible to get a sum of j using the first i elements of the nums array. So the time complexity of this approach

    02:27

    is much lower. But it has definitely not given any code.

    Let me ask for it.
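    The DP idea described above can be sketched concretely. Assuming the standard formulation of "Split Array With Same Average" (a subset of size k has average total/n exactly when its sum is k*total/n), here is a reference sketch of mine, not ChatGPT's output from the demo:

```python
class Solution:
    def splitArraySameAverage(self, nums):
        n, total = len(nums), sum(nums)
        half = n // 2
        # sums[k] = set of sums reachable using exactly k elements
        sums = [set() for _ in range(half + 1)]
        sums[0].add(0)
        for x in nums:
            # iterate k downwards so each element is used at most once
            for k in range(half, 0, -1):
                for s in sums[k - 1]:
                    sums[k].add(s + x)
        # a subset A of size k has average total/n iff its sum is k*total/n
        for k in range(1, half + 1):
            if k * total % n == 0 and k * total // n in sums[k]:
                return True
        return False
```

    Only subset sizes up to n // 2 need checking, because if a valid split exists, the smaller side has at most half the elements.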

    Uh, let me ask for it.

    02:45

    So not for every question does ChatGPT give code; sometimes it just mentions the approaches we can use.

    Again, we will have to specify exactly what we want from it.

    03:10

    As you can see, this time I've asked it to write code, and now it's generating new code. It is also adding comments about what every snippet of the code is for and what it will do. Here you

    03:25

    can see it will fill the dp table (dp stands for dynamic programming), and here it has a comment, "initialize the first column as true". So it is also adding comments to help you understand the code in a

    03:40

    better way. It also gives the note that this implementation assumes the nums array is non-empty and that its elements are non-negative integers. Okay. Also, the above implementation will return the possible subset that can be

    03:57

    formed from the array to fulfill the given condition, and not a boolean true or false. But a boolean is what we want exactly, right?

    Still, okay. This approach is in Python, so let's copy

    04:12

    this and see whether the code can run or not. I can clearly see that, again, the self parameter needs to be passed.

    04:36

    indentation.

    04:56

    Give it a heads up.

    05:26

    So I have mentioned what I want specifically. "Sure, here is an example of how you can pass the self parameter."

    So now the current solution will pass self. Okay.

    05:42

    Great. There was an error.

    Okay.

    06:14

    It has generated new code considering the criteria I just mentioned, that I need the self parameter to be passed through the can-partition function. So I have copied it; let's quickly

    06:31

    paste it over here and see whether it passes all the cases or not. Wait, we need to remove this.

    Okay, it's done. Now quickly run

    06:48

    this program. Okay, so here is an error, an AttributeError:

    'Solution' object has no attribute... Okay.

    So, let me copy

    07:03

    it and paste it over here, and see what ChatGPT has to say about this error. Okay.

    The error message, "'Solution' object has no attribute 'split array same average'", suggests that

    07:19

    there is no method with this name in the Solution class. So it is likely that the test case is trying to call this method, but it does not exist in the implementation.

    Anyway, I have not written this method myself; it was given by ChatGPT.

    So again we can

    07:36

    see that it is contradicting its own output. It has generated a new response and says that this should resolve the issue, and the test case should be able to call this particular method correctly.

    So that is something for us to decide.

    07:52

    Now let us copy this code and see whether it is able to run and pass all the test cases.

    08:10

    Click on run. Okay.

    Now the newly generated response does work for this particular question; the runtime is 12 ms, and it does

    08:27

    pass the first two cases mentioned. So let us quickly submit this code and see if it can deal with all the test cases fed by LeetCode.

    08:46

    Okay. Now again we see the same situation: this code is not able to pass all the test cases, even though we mentioned all the specifications and constraints and were pretty precise about the question and the parameters we want our code to follow in a certain

    09:03

    form. Still, it is only able to pass 68 test cases out of 111, which is about 61% of the total. So let's move on to the next question,

    09:25

    which is Find Substring With Given Hash Value. So let me quickly search for this question.

    Find Substring

    09:41

    With Given Hash Value. Again, it's a question from the hard category, and the success rate is 22.2%.

    09:58

    So let's have a look at the question and what it demands. Here, the hash of a zero-indexed string s of length k, given integers p and m, is computed using the following function.

    hash(s, p, m):

    10:15

    this is the given formula for how the hash value of your string is generated.
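    The hash formula shown on screen can be written out directly. Assuming the problem's usual convention that val('a') = 1 through val('z') = 26, a sketch of the hash computation is:

```python
def hash_of(s, p, m):
    # hash(s, p, m) = (val(s[0])*p^0 + val(s[1])*p^1 + ...) mod m
    h, power = 0, 1
    for ch in s:
        h = (h + (ord(ch) - ord('a') + 1) * power) % m
        power = power * p % m
    return h
```

    For example, hash_of('ee', 7, 20) is (5*1 + 5*7) mod 20 = 0, matching the problem's sample.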

    10:31

    So the question is: you are given a string and the integers, and you have to return the first substring of that string of a given length; here it's k.

    The test cases will be generated such that an

    10:47

    answer always exists. We are going to copy this question and mention all the specifications given here in the ChatGPT console so that it gets all the specifications.

    11:06

    So I have copied the code; let's paste it

    11:24

    over here. Now let's check the indentation.

    It's fine. Let's remove the class wrapper and

    11:45

    run. It again faces a type error. So let's move on to the next question.

    Again we saw that ChatGPT was not able to solve this particular question. Coming back to our list, the next question that

    12:02

    we are going to cover is Partition Array Into Two Arrays to Minimize Sum Difference. Let's quickly search for that question.

    Partition array

    12:25

    into two arrays. Again, this question is from the hard category, and the success rate is even lower.

    So, let's have a look at the

    12:40

    question first. It's "partition array into two arrays to minimize the sum difference" that is produced as the output.

    Here we can see that there are three constraints. Now quickly copy this question and see if it is able to solve

    12:56

    this question or not. First we have to create a new chat.

    Copy all the constraints.

    13:27

    Let's see what ChatGPT has to say for this particular question. Let's ask it to generate code.

    This time it has only generated the logic. Not exactly logic:

    the approach it is going to follow, or anyone can follow, to solve this question.

    13:45

    Let me ask if it can generate code. It's definitely taking longer to generate this code.

    Okay, here is the

    14:02

    code, which is a possible Python implementation. The code is generated, and it mentions that the code takes as input an array of integers called nums. So

    14:18

    basically it's explaining what it is doing, which is pretty much explained in the question itself: what exactly it needs to do, what the variables and the inputs will be, and how they need to be partitioned to obtain the

    14:35

    optimal result. So let's copy this and paste it over here.

    Again, like I said, we can see that it's generating the code, and

    14:54

    again we can see we need to pass the self parameter. Okay.

    And then

    15:15

    an AttributeError. Let's quickly copy it,

    paste it over here, and see what ChatGPT has to say about it.

    Okay.

    15:31

    After pasting the error, it says that the error message suggests there's a problem with the function name in your code: there is no function called "minimum difference" within the Solution class.

    Also,

    15:46

    I have seen that, apart from the function name, it doesn't have a self parameter. So let me just write it down:

    self

    16:04

    argument. Wait, let me copy it instead.

    That will be better.

    16:23

    and write a new code with it. Let me see if it can generate a new code this time.

    16:42

    So yeah, it has agreed. There is an example, and here's the code.

    So I have copied this and pasted it over

    16:59

    here. I don't see any huge difference in the syntax or the logic of the code, but we can definitely try running it now. Let's run this code. Let me copy it and paste it over here

    17:16

    again. An error has popped up.

    I don't know how valid the code we get will be after presenting this error to ChatGPT.

    17:33

    So, considering the error message: it indicates that the function is returning inf, which is not a valid value for the expected return type, integer. In the base case where i and j are both zero, the value is set to float('inf'),

    17:50

    which is positive infinity. However, the expected output is an integer,

    so this value is not valid. Considering this change, ChatGPT is giving new code.

    So let us see how valid this new code is.
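    The bug being described is a common sentinel pattern: float('inf') is fine as an internal "no answer yet" marker, but it must be guarded before being returned where an int is expected. A small hypothetical illustration of the pattern (not the demo's actual code):

```python
def smallest_or_default(candidates, default=-1):
    best = float('inf')  # sentinel: no candidate seen yet
    for c in candidates:
        best = min(best, c)
    # returning the sentinel unchanged is the kind of bug the judge flagged;
    # guard it so the caller always receives an int
    return default if best == float('inf') else best
```

    The same guard applies inside a DP table: any cell still holding the sentinel at the end means "unreachable" and must not leak out as the answer.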

    18:10

    Okay, I can see that it has changed the logic here: instead of float('inf'), it has written new logic now.

    And ChatGPT claims that this should solve the issue

    18:25

    of returning inf as the value. Let's see how accurate ChatGPT is here.

    Now we have pasted the logic. Oh sorry, I need to remove these two

    18:43

    lines. Now let's quickly run this program.

    Okay. So now as you can see it could only pass one test case but not the

    19:01

    first two. So there's no point in submitting this code;

    the logic of this code, or the code as a whole, is not right. Moving on, next in the list we have

    19:18

    Longest Common Subpath. Let's search for Longest Common Subpath.

    This question also comes from the category of hard questions, and the

    19:34

    success rate is 27%. Now let's see what this question actually demands.

    Let's quickly copy this question with all the constraints; whatever is left on the screen apart from the examples, we need to copy. As you can see over

    19:50

    here, you cannot miss out on any specifications. Looking at the constraints: copy and paste.

    20:07

    Now enter. It's giving the approach to solve this question, which is a dynamic programming approach.

    Okay, it didn't give any code here. So let's see if we can get it to present code.

    20:37

    Okay, I have asked it to write code for this, and yes, ChatGPT has agreed to provide code. ChatGPT is done with its explanations.

    I'll type out my request and wait for the new code, which will contain self as a

    20:53

    parameter. This code is pretty similar to the previous one; it's just using the self argument in the function, as I requested.

    So let's quickly copy this code.

    21:12

    Paste it over here. Check the indentation.

    Run this code. There shouldn't be any... Okay, I spoke too fast.

    21:30

    Here we have another error. Let's see what ChatGPT has to say about this.

    The list index out of range error is

    21:46

    likely occurring on line 10 because the indices i and j are not being properly bounds-checked before being used to access elements in the path and dp arrays. Okay, so we have new code.
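    An "index out of range" error of this kind is usually fixed by checking an index against the list's bounds before using it. A tiny hypothetical illustration of the guard pattern (not the demo's actual code):

```python
def safe_get(arr, i, default=None):
    # guard the index before touching arr[i]
    return arr[i] if 0 <= i < len(arr) else default
```

    Inside a DP loop, the equivalent is clamping loop ranges (or testing `i < len(path)` and `j < len(dp)`) before every subscript.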

    Let's quickly copy this code and see if this

    22:03

    code is capable of eliminating the errors we found.

    22:20

    Click copy and paste. Let me check the indentation.

    Done, and run. We got an error here again.

    Even

    22:37

    though we provided so many specifications, criteria, errors, and conditions, ChatGPT was not able to provide a correct solution for the code. Moving on to the last question of our

    22:52

    list, which is going to be Sum of Total Strength of Wizards. Hopefully this question does some magic and proves lucky for ChatGPT, because for now we can

    23:07

    see it's a 50/50 picture. For half of the questions ChatGPT was able to provide solutions, and for half it couldn't figure out what needed to be done; even when the logic and approach were correct, the

    23:23

    implementation of the code was not correct. So let me search for this question.

    Sum of total strength of wizards. Again a hard

    23:38

    category question. So let's have a look at the question first.

    Let's quickly copy this question. Uh create a new chat.

    paste it over here

    23:54

    and uh look for constraints. Now that we have copied it, paste it and enter.

    24:11

    It's giving an approach for how you can think about the solution for this particular question. ChatGPT has not generated any code.

    Uh let me ask for it.

    24:26

    Okay. So here's an example.

    Again they have implemented the code in Python. Now it is also giving a note that the approach is valid only if you're allowed to modify the original array.

    And also we

    24:42

    are not working on that here.

    25:10

    I've asked it to pass self to the function and then write this code. So let's see if it can do it with the self argument.

    Okay, so yeah, there's an example. It's generating the code.

    Okay, so let's see

    25:27

    what's the update with the code here.

    25:48

    Let's add the Solution class

    26:09

    and run the code. So there's a runtime error.

    Let me see why

    26:24

    Let me copy this error and paste it over here. The error message, "'Solution' object has no attribute 'mod'", suggests that there is a

    26:40

    class named Solution, and the code is trying to access an attribute named mod on an instance of that class, but the attribute doesn't exist. We probably need to give more specifications, and if it still doesn't work, then it clearly

    26:55

    shows that ChatGPT doesn't take every point into consideration, which ultimately reflects in the solution.
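    The AttributeError here has a simple shape: the code references self.mod without ever defining it. A hypothetical minimal illustration of the broken pattern and the fix, defining mod as a class attribute (the names are illustrative, not the demo's actual code):

```python
class Broken:
    def strength(self, nums):
        return sum(nums) % self.mod  # AttributeError: 'mod' was never defined

class Fixed:
    mod = 10**9 + 7  # class attribute, reachable via self.mod

    def strength(self, nums):
        return sum(nums) % self.mod
```

    Assigning `self.mod = 10**9 + 7` inside `__init__` would work equally well; the point is that the attribute must be defined somewhere before it is read.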

    27:11

    The new code is here. Let me quickly paste it.

    Okay, let's try running this code. Let's see if this works

    27:26

    running it again. Again, it has a runtime error.

    So it was unable to solve one more question. Now that we have tried and tested a wide spectrum of LeetCode questions on ChatGPT, we can conclude that though

    27:43

    ChatGPT is an amazing tool with a bright future, it still has its own limitations, and maybe it is not ready to replace humans or compete with human brains. These questions were picked from a list of frequently asked questions for interviews and examinations.

    Chat GPT

    27:58

    does have the potential to generate logic and approaches for code in an effective manner. But its ability to analyze the question is still weak compared to humans.

    As we know, the success rates on these questions show that proper solutions do exist for them. But still,

    28:14

    even after multiple attempts, ChatGPT was not able to find the correct answer. We can also give ChatGPT the benefit of the doubt that it is still in its initial phase and there are a lot of aspects that need to be worked on.

    So probably in the future ChatGPT can get the upper hand here. But for

    28:31

    now, ChatGPT needs to do a lot of work for these situations. Have you ever wished for a coding assistant that can not only suggest lines of code but actually build your app?

    Well, imagine this: you're stuck on a project with multiple tasks at hand:

    writing code,

    28:46

    debugging, fixing errors, running tests, and it feels like you're constantly bouncing between them. Now, what if I told you there's a tool that can automatically take care of all those tasks for you, leaving you free to focus on the bigger picture?

    That's exactly what GitHub copilot agent mode offers.

    29:04

    It's like having a superpowered coding assistant that can handle everything from creating apps to debugging them. Today we are going to explore GitHub copilot agent mode by building a simple app using HTML, JavaScript and CSS.

    I'll walk you through what makes VS Code

    29:20

    Insiders the perfect platform for this. How agent mode works its magic and how you can use it to create an app with almost zero effort from your side.

    So let's dive in. Before we dive into the demo, let's take a quick look at what VS Code Insider is because this is where

    29:36

    all the magic happens. If you have used VS Code before, you would know it's a powerful code editor.

    But VS Code Insiders is like the VIP version where you get to try out new features and improvements before they hit the stable version. It's updated daily, which means

    29:53

    you get to play around with latest and greatest tools including copilot agent mode. And now you might wonder what exactly makes VS Code Insiders different from the regular VS Code.

    Well, in simple terms, VS Code is your reliable,

    30:08

    stable editor that you can use for your daily coding tasks, whereas VS Code Insiders is where the cutting-edge features are tested.

    So if you want early access to new tools and improvements like Copilot agent mode, VS Code Insiders is where you want to

    30:24

    be. All right.

    Now let's talk about what Copilot agent mode is and what makes it stand out. You might be familiar with GitHub Copilot, which gives you code suggestions while you work.

    It's like having a helpful co-worker giving you ideas, but agent mode takes that a step

    30:40

    further. It's more than just a helper.

    It's like an autonomous coding partner that can handle an entire project for you. When you switch to agent mode, you can give it a multi-step task, like creating an app from scratch or fixing bugs across multiple files, and it will take care of

    30:57

    the entire process. It doesn't just suggest code.

    It writes the code, runs the tests, and fixes errors automatically while you simply watch it work its magic. Think of it as having a personal assistant who can analyze your code, figure out what's missing, and then

    31:12

    complete the task in real time. So now let's move on to our demo part and see how it actually works.

    So now let's get started and see how Copilot agent mode actually works. The first step is installing VS Code Insiders.

    So simply search on Google and head over

    31:28

    to this website. Remember, do not install the normal Visual Studio Code.

    You need to install Visual Studio Code Insiders for this particular agent mode. All right.

    Since we have already installed this application, I'll just head over to it.

    31:44

    So this is the interface of Visual Studio Code Insiders, and visually the only difference between Visual Studio Code and Visual Studio Code Insiders is this:

    it's quite light in color, light blue. That's it.

    So we'll head over to the

    31:59

    chat section. This is the GitHub Copilot icon here; just head over to Copilot Edits.

    From here, you need to select Agent; by default, it will be in Edit mode.

    So just select Agent from here. And for

    32:16

    the model, I'd suggest Claude 3.5 Sonnet, since Claude 3.5 is good for coding; here it's available as a preview version.

    So we'll just select that. We also have this voice chat option,

    but for that you need to install

    32:32

    the extension known as VS Code Speech. If you install this extension, you get the chance to interact with Copilot by simply talking and saying your prompts or commands:

    What changes do you want the

    32:48

    Copilot to make for you. It's really cool, so I'd suggest simply downloading this extension. Our next step is to see how it works.

    Okay. So first we'll create a normal application

    33:05

    using HTML, CSS, and JavaScript. I'll click on this voice chat option and tell Copilot to create a completely new application.

    All right, let's do it. So I'll click on this voice chat option and say: create a new app using HTML, CSS, and

    33:21

    JavaScript. Create a new app using HTML, CSS and JavaScript.

    See, it automatically started running now, and it's saying: "I understand you want to create a new app, but I notice there's already an existing project structure in your workspace.

    33:38

    Should I check its contents to understand what's already there and avoid conflicts?" Let it gather some context. Here I see that the current workspace contains a to-do list application.

    So we will just modify our existing to-do list application. It's asking what

    33:53

    kind of app I want to create and whether I want a new directory or to use the existing one. I'll say: please modify the existing to-do list application, improve the UI design

    34:11

    of the application, and try to add more features and functionality. So I've pointed it at the to-do list application and asked it to improve the structure of the application.

    So let's see how it does. So our files are already from here and

    34:28

    it's making changes. So we'll just head over to our index.html.

    Okay, so it's actually a learning platform rather than a to-do list.

    So it will create a to-do list from scratch here. It will generate all the edits.

    So, as you

    34:44

    can see, it's applying the edits from here. I mean, you don't even have to do anything.

    Just simply type your prompt, sit back and relax, and the copilot will do everything for you. So, now it is actually uh adding the to-do list interface, adding the due dates,

    35:00

    priority levels, and the categories. So, these are the improvements which uh the copilot is actually doing to my existing project.

    So now it will add the styles to enhance the UI of the new elements. So we'll just go to this style CSS and

    35:16

    see what changes it is actually making. Now that it's making changes in our CSS, we'll also watch it make changes in the JavaScript.

    35:32

    So now it will actually improve the JavaScript functionality to support the enhanced UI features that we added. So it's first analyzing the existing JavaScript files from here.

    So it is now enhancing the JavaScript functionality, and whatever updates need to be made

    35:50

    in this particular file which you can just simply click and just check the edits it's generating for us. So now that it's doing the changes, I'll just say to run the application.

    So first it's saying that uh it's a todo

    36:07

    application, so we'll need to serve it through a web server:

    "let me help you set that up", and you can use this command in the terminal here.

    One more thing: you can also just click on this app.html, select "Reveal in

    36:23

    File Explorer", and open the file directly. So here, as you can see, our to-do list is ready now, and it has given me the due date, priority, and select-category fields.

    So we could actually improve the UI UX of

    36:38

    the design but then uh for the time being it's actually okay and it has also the categories from here. You can see it has added the categories like work, personal, shopping, health, active.

    So you can just add a task like let's suppose I want to go for running and the

    36:54

    due date you can add from here. Okay.

    And you can just add this particular task from here and see whether it's completed or not. You can also delete this particular task if you don't want to do it or if it's completed just select here completed.

    So I think it's pretty cool: without typing any

    37:11

    code, it will generate a whole website for you, and that too within seconds. So it's really amazing.

    One more thing I would like to say: if you need to improve the UI/UX of the design, you can also type here and ask how to improve the

    37:27

    structure and the UX of the design. And if you want to add more functionality, you can also do that.

    So for the time being, I think it generated a really cool website. Now, you might be thinking: why is this so important for me?

    So if you're

    37:43

    actually tired of writing code, fixing bugs, testing, and debugging, then Copilot agent mode is your answer. It takes those tasks off your plate so that you can focus on the bigger, more creative aspects of your project.

    So whether it's

    37:59

    building new apps, updating old code, or running tests, Copilot does the heavy lifting so that you stay productive. And there you have it.

    GitHub Copilot agent mode isn't just a tool. It's like having a coding partner who is always on top of things, ensuring your projects

    38:15

    run smoothly. And by automating everything from code writing to error fixing, it makes development faster, easier, and a whole lot more fun.

    So, if you haven't tried VS Code Insiders yet, make sure to jump in and test agent mode for yourself. It's a complete game

    38:30

    changer for developers looking to boost their productivity. So, if you enjoyed this video, don't forget to like, share, and subscribe and comment below with your thoughts on GitHub Copilot agent mode and how it can help you in your exciting journey.

    Hello everyone and welcome to the tutorial on prompt library for all use

    38:46

    cases at Simplilearn. The prompt library is a comprehensive toolkit for mastering myriad use cases with ease.

    Whether you are delving into programming, honing creative writing skills, or exploring data analysis, this library offers a versatile array of prompts tailored to

    39:01

    your needs. Now, before you move on and learn more about it, I request you guys that do not forget to hit the subscribe button and click the bell icon.

    Now, here's the agenda for our today's session. So, guys, we are going to start with first understanding the prompt structure.

    Moving ahead, we are going to understand testing and iterating. Then

    39:19

    we are going to explore the prompt examples and at the end we are going to conclude our sessions with utilizing prompt libraries and resources. So guys in today's video we will be exploring the prompt structure for various use cases.

    Now first let us try to

    39:34

    understand the prompt structure. So, guys, I'll break down the prompt structure.

    First we have the action verbs. Think of an action verb as a boss telling ChatGPT what to do.

    It's like giving ChatGPT a job. For example,

    39:51

    if I say "write", you are telling ChatGPT to put words on the page. If I say "write a story", I'm telling ChatGPT: hey, I want you to make up a story for me.

    So it works like this. Now let us ask ChatGPT;

    40:10

    "write" is your action verb here. So this is the first element of prompt structure that I would like you to apply.

Now for the second one, you could give a theme or a topic. Now if you say just write a story, ChatGPT is going to give any

    40:25

    random story. So we won't want that.

    The next thing that we cover basically is topic or theme. So what theme or topic you are looking about?

This is the part where you are giving ChatGPT a subject to talk about. Imagine you're telling a

    40:41

    friend let's talk about cats. So cats are the given topic.

So if I say write about your favorite food, I am telling ChatGPT: tell me about your favorite food. So you have to always include a topic or theme along with your action

    40:56

    verb. So here I can include some certain thing like this that write a story about food.

So you could see all over here ChatGPT

    41:12

    has given two uh responses. This is response one and this is response two.

Now the third thing that comes up all over here is constraints or limitations. Think of constraints as rules or

    41:27

boundaries for ChatGPT to follow. It's like saying you can talk about cats but only in three sentences.

So if I say write a poem in 20 words, it's like I'm telling ChatGPT: make a short poem using only 20 words. So this is one of the

    41:44

    things that you have to always keep in consideration regarding what task you want to give. So always include constraints or limitations.

    Fourth one is background or information context. So

    42:00

So this is also one of the most important parameters. What it means is that this part sets the scene for ChatGPT, like giving it a background story.

    Imagine you are telling someone about a movie before they watch it. So if I say imagine you are on a spaceship

    42:18

and telling ChatGPT: pretend like you are flying through space. So this is also very important to consider, to give some idea of your background or information.

    Now the fifth one is conflict or challenge.

    42:34

Guys, this adds some spice to the prompt. It's like a puzzle or a problem for ChatGPT to solve.

It's like saying talk about cats, but tell me why some people don't like them. So if I say, ChatGPT, explain why reading is

    42:50

important but you can't use the word book, I am challenging ChatGPT.

So this is where you have to give a conflict or challenge to ChatGPT. Now let us take one example of this.

    So for example if I say the action verb

    43:06

as write, we'll highlight this with red, and the topic or theme could be your favorite vacation. For a background or context, say you are on a beach with your friends, and for a conflict or challenge we can

    43:24

give something like: in just 50 words. So guys, this is the structure to follow while giving a prompt to ChatGPT.

So in this way, putting it all together, you could combine all these components and form a sentence, and

    43:39

this prompt is going to be very effective at solving the problem of generic responses. Now with this simple example you can see how different components come together to create engaging prompts for ChatGPT to work with.
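The components from the lesson can be sketched as a small helper that assembles them into one prompt. This is a minimal sketch; the function name and exact wording are illustrative, not from the video.

```python
def build_prompt(action, topic, context=None, constraint=None, challenge=None):
    """Combine the lesson's components into one prompt string."""
    parts = [f"{action} {topic}"]                  # action verb + topic/theme
    if context:
        parts.append(f"Context: {context}")        # background information
    if challenge:
        parts.append(f"Challenge: {challenge}")    # conflict or challenge
    if constraint:
        parts.append(f"Constraint: {constraint}")  # limitations
    return ". ".join(parts) + "."

prompt = build_prompt(
    "Write a story about",
    "your favorite vacation",
    context="you are on a beach with your friends",
    constraint="keep it to just 50 words",
)
print(prompt)
```

The same helper covers the other examples from the lesson, for instance `build_prompt("Write", "a poem", constraint="use only 20 words")`.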

    So guys whenever you

    43:54

    are giving a prompt I would request you to always follow this structure. So it's going to create a map for you to get a more precise answer.

    Now let's take example and elaborate the prompt library with examples to make it more understandable. So guys let's take

    44:10

    another example of text classification. So for text classification we'll take the action verb as classify and our text type would be product review.

    Example could be classify the following text as negative, positive or neutral sentiment.

    44:29

And after that you could give the product review: "The product exceeded my expectations." So if you give something like this, it would say this is a positive sentiment.
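The classification example above can be sketched as a tiny prompt template — a sketch only; the wording follows the lesson's pattern (action verb, text type, allowed labels) rather than any exact on-screen text.

```python
def classification_prompt(review: str) -> str:
    # Action verb ("classify") + text type (product review) + allowed labels.
    return (
        "Classify the following text as negative, positive or neutral "
        f'sentiment:\n\nProduct review: "{review}"'
    )

print(classification_prompt("The product exceeded my expectations."))
```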

    So making your prompts in this manner with a proper

    44:46

structure, you are going to get a very particular response which fits what you need. So always remember this structure whenever you are framing any prompt.

Now let's move to the second part, that is testing and iterating. Guys, testing and iterating are essential steps in

    45:02

refining prompts and ensuring optimum performance from ChatGPT. Let us break down this process.

The first process is prompt validation. So before using a prompt, it's crucial to test it to ensure that it generates the desired response accurately.

    Then you evaluate the

    45:19

    output. You're going to generate responses using the prompt and evaluate the quality, relevance, and coherence of the output.

    Third, check for errors. Look out for any errors, inconsistencies, or unexpected behavior in the generated responses.

    Compare

    45:34

    against expectations. Compare the generated responses against your expectation or any requirements to verify that they meet your desired criteria.

    The fifth one is solicit feedback. Seek feedback from peers, colleagues or domain experts to validate the effectiveness of the prompt.

    For

    45:50

example, like analyzing the results. So you would analyze the results of testing to identify areas of improvement for refining the prompt.

    Next is modifying the prompt. Based on the analysis, make the adjustment to the prompt structure.

    Next, then fine-tune

    46:06

the parameters. Experiment with different variations of the prompt, such as adjusting constraints, changing topics, or refining the wording.

Next is retesting. Test the modified prompt

    46:23

    again to assess whether the changes have resulted in improvements in the quality of the generated responses or not. And the final step is iterate as needed.

Iterate the testing and modification process as needed until you achieve the desired outcomes and generate high

    46:40

    quality responses consistently. So this structure you have to always follow when you are iterating.

    So I'll give you an example. So like we have given uh initial prompt as write a product description for a new smartphone and I

    46:56

    would say include details about features, specifications and benefits and I would say add a constraint all over here that keep the response in 100 words. So this is your initial prompt which you are given.

    Now for testing

    47:14

    the next comes is testing. Generate product descriptions using the initial prompt.

    Evaluate the quality and relevance of the generated descriptions. Check for errors if inconsistencies or missing information is there.

    Compare the description against the expectations and requirement. So this process comes

    47:31

    under testing. Okay.

So change your prompt a little: give a specific description regarding a certain product and ask for that. The next process would be to evaluate the quality and the relevance of what

    47:46

you are getting as a response. Check for errors: go to Google and see if the same information comes up. Then check the customer expectations regarding that product, and whether the overall technical structure is maintained. So this gives the first phase of testing.

    48:03

Next comes the analysis: some descriptions lack detail and fail to highlight key features. Okay.

    So in this scenario the descriptions vary in length and structure leading to kind of inconsistencies. Certain descriptions like here will focus more on technical specifications

    48:19

than the user benefits. So overall, the quality and the coherence of the descriptions need improvement.

So you have to take all these parameters and reframe your prompts. Okay.

Then next comes iteration. You have to modify this prompt to provide more

    48:36

clear instructions and emphasize the user benefits. Write a captivating product description for a new smartphone.

    Okay. Then move to retesting.

    Generate product descriptions using the modified prompt. And for the outcome, you would say that the revised

    48:53

prompt should yield more compelling and informative product descriptions. So this is how you have to iterate continuously to get the kind of response you need.
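The test-analyze-modify-retest cycle just described can be sketched as a loop. Everything here is illustrative: `generate` is a hypothetical stand-in for a real model call, `passes_checks` stands in for the manual evaluation from the lesson, and the initial prompt is simplified so the loop has something to fix.

```python
def generate(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned draft."""
    if "benefits" in prompt:
        return ("A sleek smartphone with all-day battery life, a real "
                "benefit for busy users, plus a sharp camera.")
    return "A smartphone with a 6.1-inch display and 128 GB of storage."

def passes_checks(description: str) -> bool:
    # Manual checks from the lesson: stays within ~100 words and
    # mentions user benefits, not just technical specifications.
    return len(description.split()) <= 100 and "benefit" in description.lower()

prompt = ("Write a product description for a new smartphone. "
          "Include details about features and specifications. "
          "Keep the response to 100 words.")

for _ in range(3):                                # iterate as needed
    draft = generate(prompt)                      # testing
    if passes_checks(draft):                      # compare to expectations
        break
    prompt += " Emphasize the user benefits."     # modify the prompt, retest
```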

    Okay guys, now let's move to the final part of this video that is utilizing the

    49:09

prompt libraries. Guys, utilizing prompt libraries and resources is essential for streamlining the prompt-writing process and accessing a wide range of pre-designed prompts for various use cases.

    So you're going to get a library of a predefined prompts. Okay.

    So there's one website

    49:25

    like which I want to show you. This is called Anthropic.

So Anthropic has recently released a prompt library. So guys, they have given a wide range of prompts.

    So if you just click on this, so you're going to get like what are the effective prompts in all these

    49:40

    domains. So give it a shot.

    Uh try to see what are the uh like resources you're going to get all over here. It definitely is going to fine-tune your responses.

    Now let's move to the process. So when we are talking about the prompt libraries, the first step is

    49:56

explore the existing libraries. So you can see that I have given a reference to a prompt library all over here, which is released by Anthropic's team for Claude and is also workable for ChatGPT.

    Next is you have to understand the available prompts. Familiarize yourself with the

    50:11

    prompts available in this library and including their structures, topics and constraints. You have to also analyze how prompts are categorized and organized within the library to quickly locate relevant prompts for your needs.

    Third is adapt to prompts to your needs.

    50:26

    Customize existing prompts to suit your specific objectives, audience and use cases. You can modify prompts by adjusting the action verbs, topics, constraints or background information which aligns with your requirement.

    Create your own prompts like combine

    50:42

    different components such as action verbs, topics, constraints to craft prompts that addresses specific task or challenges. Next process you have to do is sharing and collaborating.

    You will share your prompts with the community to contribute to the collective pool and

    50:58

    resources. So this is one way of learning that I really really want you to follow.

Now you have to keep experimenting and iterating at the same time. And finally, you have to document and organize all your prompts.

    So what you can do best

    51:13

is see all the existing prompt libraries. Like, I'll show you one more: a prompt library for ChatGPT on

    51:29

GitHub for all use cases. So you could explore various repositories on GitHub to see what kinds of prompts are available; this repo specifically focuses on academic writing.

    So just visit this uh

    51:46

repository, and you could see they have given a lot of things. For brainstorming, you can see the action verbs all over here; try this prompt and see what response you get. Then there are prompts for article sections as well.

    52:03

So you're going to get a lot of things, and the more you experiment and explore, the more ideas you are going to get. So my advice would be to explore as many libraries as you can, and depending upon

    52:19

your use cases, you have to make an organized prompt structure. So follow this format which I've told you: the action verb, the topic, the background information.

    Then what are the constraints you have to give? Okay, it's any particular theme is there.

    You

    52:35

have to include all those things, and use the existing prompt libraries also. So you can refine your prompt to always get a good response.

    It's my personal experience that you have to keep fine-tuning, keep testing, iterating,

    52:50

analyzing, so that your results come out fine. Hello guys, welcome to this Reasoning with O1 course.

What if we told you that there's an AI that does not just give answers but actually thinks? So introducing O1, the latest and the most

    53:06

advanced reasoning model from OpenAI. So unlike other models that rely on patterns, O1 is designed to reason through problems, self-correct, and explain its thinking step by step, just like a human would.

    So today we will

    53:23

    explore what makes O1 so special. Then we will come across how it reasons.

    Then we will see where it's useful and finally we will see why learning about this model is essential in the age of

    53:39

    intelligent machines. So let's just dive in.

So what is O1? So O1 is OpenAI's new AI model.

But it's not just a smarter chatbot. It's a reasoning engine.

While models like GPT-4 are brilliant at

    53:57

language and creativity, O1 goes a step further. It can think logically.

It can solve problems with multi-step reasoning and analyze complex data. In short, O1 is the closest we have come to building an AI that can think through

    54:13

    our task, not just respond. So why does this matter?

    Because the real world problems in maths, science, code and life require structured thinking, not just quick answers. So let's now move on to

    54:28

what is reasoning and why AI needs it. So before we understand how O1 works, let's ask: what is reasoning?

    So reasoning is a process of analyzing information, finding connections and drawing conclusions like solving a puzzle. For

    54:46

    example, you see dark clouds, people are carrying umbrellas. So even if it's not raining, you reason it as it's about to rain.

    So AI needs this same skill. So without reasoning, an AI is like a parrot.

    It repeats. With reasoning, it

    55:04

    becomes a problem solver. And that's exactly what O1 brings to the table.

    So let's now understand how 01 reasons. So let's break down how O1 actually reasons step by step.

    The first one is chain of

    55:19

thought reasoning. So O1 solves problems step by step, just like we do on paper.

For example, the question is: what is 12 × 15? O1 does not just say 180.

It thinks of it as 10 × 15 = 150.

    55:36

Then it calculates 2 × 15 = 30. Then it adds the two results: 150 + 30 = 180.

So this chain-of-thought method reduces errors and shows the logic behind every answer.
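The decomposition above is easy to verify in a couple of lines:

```python
# Chain-of-thought decomposition of 12 x 15 from the example.
step1 = 10 * 15          # 150
step2 = 2 * 15           # 30
total = step1 + step2    # 150 + 30
print(total)             # 180, the same as computing 12 * 15 directly
```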

    55:55

So let's now move on to the next one, that is reinforcement learning. So O1 gets better over time by learning from success and failure, just like a person.

    If one method doesn't work, it tries another, sees the result, and improves

    56:10

    next time. It's like a chess player who gets sharper with every game.

    Then we'll come to reasoning tokens. So when O1 solves a complex task, it uses invisible tokens to keep track of its thoughts

    56:26

like mental sticky notes. So imagine solving a math problem.

You scribble your steps on paper. O1 does the same internally, helping it stay organized and accurate.

So let's now dive into the real-life examples. So let's see O1 in

    56:43

    action. The first one is solving a math problem.

So the question is: what is 25 + 37? So O1 thinks: 20 + 30 = 50 and 5 + 7 = 12.

    50 + 12

    56:59

    is equal to 62. And the final answer it gives is 62.

So it's not magic, it's methodical reasoning. That's what makes O1 more trustworthy and transparent.

Then we are going to the next example, that is, debugging code. So now let's

    57:16

shift to a coding scenario. If you ask, why is my Python code giving me an error?

So here's what O1 does. It reads the code line by line, spots where the error happens, then explains why it occurred, and then it suggests a fix,

    57:33

    step by step. It's like having a calm, logical coding partner who always explains their thinking.

So let's now move on to the next one: what makes O1 different.

So here is a quick comparison, capability by capability.

So

    57:52

GPT-4 predicts the next word, and O1 does the same. Does GPT-4 think through steps?

It doesn't think through steps, but O1 does. Self-correction in GPT-4 is a basic

    58:08

thing, but the self-correction in O1 is very advanced. Then, solving complex problems:

GPT-4 sometimes does it, but O1 does it consistently. Then transparent reasoning:

GPT-4 has no

    58:23

transparent reasoning, but O1 has transparent reasoning. So let's move on to where O1 can be used.

So because of its structured reasoning, O1 is useful in multiple domains. The first one is maths and science: solving equations,

    58:41

    explaining experiments and so on. And then we have coding.

    So writing, debugging and understanding programs. Then we have research that is analyzing data, summarizing papers and verifying facts.

    And then we have creative work.

    58:56

So planning content and brainstorming logically all fall under creative work. Then we have image analysis,

that is, interpreting visuals step by step. So wherever thinking is needed, O1 delivers clarity.

    So let's now come to the common

    59:13

mistakes avoided by O1. So previously, older models had a few common issues, such as jumping to conclusions directly, ignoring small details, and repeating errors in a new

    59:29

context. But O1 avoids all this by reasoning in steps, checking its own logic, and learning from feedback.

    And that's why it's more dependable especially when stakes are high. Let's now move to why

    59:47

O1 is so important: what is the future of work and thinking?

So because reasoning AI is the future, and it's already here. Learning how O1 works helps you use AI more effectively, ask better questions obviously, then understand how

    00:04

    machines think and then stay ahead in the technical research and educational field and become a critical thinker yourself. So this is not just about AI.

    It's about preparing for a world where thinking with AI is a part of how we

    00:20

    work, learn and create. So let's now move on with the key takeaways.

So let's recap what we have learned. O1 doesn't just answer, it reasons.

    Then it learns from mistakes and improves. It explains its logic step by step.

    Then it's accurate,

    00:38

explainable and helpful across industries. Then, it's the future of intelligent systems, and you can be a part of it.

So whether you are a student, developer, analyst, or just curious, O1 is a tool that helps you think better. So let's dive deeper in the

    00:56

    next module. Hello guys, welcome back to the second module.

So in this module we will learn about prompting in O1 models. So prompting is the way we interact with the AI by giving instructions and receiving responses.

So with O1 models

    01:12

like GPT-4o, things have gotten simpler. So these models understand prompts with more accuracy, which means you can be clear, concise, and direct without overexplaining.

    So there are certain principles of prompting in O1 models.

    01:29

    The first one is it is simple and direct. Then number two is no explicit chain of thought needed.

    Then the third one is structure over description and the fourth one is show don't tell. So

    01:44

    there is a summary for this like to get the best out of the O1 models remember these four principles. You have to be simple and direct.

    No need for explicit reasoning. It's already built in.

    So give structure not long descriptions and

    02:00

    show tone with examples. Don't just tell.

    So these principles will help you get better faster results from 01 models while keeping things smooth and efficient. So let's then get into the demo part.

    So here we'll write our first code as. So the first thing import

    02:17

    warnings warnings dot filter warning ignore. So this is a basically a warning control.

    So this is a Python code snippet does two main things. It suppresses warnings and retrieves the OpenAI API key for use in the later interaction.

    So I will just break it

    02:34

    down step by step. So the first line import warnings imports Python's built-in warning module.

    So this module is used to manage the warning messages in the program which are typically notifications about for the potential problematic or unexpected events in your

    02:50

    code. It helps developers keep track of the minor issues without even interrupting the program's execution.

    So next warnings dofilter ignore is used to tell Python to ignore any warning messages that might. So normally warnings are shown to alert the

    03:06

    developer about something. So that may be critical issue but could be imported.

    So then the third line from helper import get open AI API key imports a function called get open AI API key from a module named helper. So this function

    03:23

    is responsible for retrieving the open AI API key which is necessary for interacting with the open AI services such as generating text or using other machine learning models. So finally the open AI API key is equal to get open AI

    03:41

    API key calls the imported function to get the API key and stores it in the variable open AI API key. So these variables know how the key needed to access the OpenAI's features and it can be used for even making the

    03:57

    authenticated API request. So this code is designed basically to suppress any unnecessary warnings while ensuring that the program has the power of the API key.

    So let's just run this. So then basically we have imported JSON.

    So the

    04:13

    firstly import JSON allows working with the JSON data obviously commonly used in the API interactions from ipython.is import display then markdown HTML imports functions to display etc. So then from open AAI import openai imports

    04:30

    the open AAI class so which is used to interact with the open AI services. Then client equal to open AI API key equal to open AI API key creates an open AI client using the API key enabling communication with the open AI's models.

    04:49

    Then finally we have GPT model equal to GPT 40 mini and 01 models equal to 01 mini defines the name of the two different models that can be used in the API requests. So in short, this code sets up the OpenAI client and defines

    05:06

    the two models for use in natural language processing task. So we'll run this.

    So here basically the code snippet defines a prompt for generating a function using the open AI's model then sends the prompt to the open AI API to

    05:22

    get a response. So let's now break it down step by step.

    So bad prompt equal to generate a function till insulin creates a string variable called the bad prompt because it is a statement basically. So this

    05:38

    prompt is a request for the model to generate a function that outputs the smiles ids a way to represent the molecular structure basically for all molecules involved in the insulin. It also guides the model to think step by

    05:54

    step outlining task like identifying the models creating the function and looping through each molecule to return the smile ids. So then we have response equal to client.chartcomp completions.create

    06:11

    model. So this basically sends the prompt to the open AI API for processing.

    It uses the client.comp completions method to interact with the open API specifying the model that is 01 model which was defined earlier. The

    06:28

    messages parameter sends the bad prompt to the model as input under the user role. So in short this code defines a prompt using the open AI's model to create a function for returning the smiles ids for insulin molecules and sends that prompt to the openi API to

    06:45

    generate a response. So let's now move on.

    Run this and move on. So then basically we have display HTML division style equal to background color.

    So basically displays the formatted HTML and markdown content in a

    07:01

    Jupyter notebook. So then we have the first version.

    The first one, first line displays an HTML division with a light background color and a border also. So it also includes a

    07:18

    heading that says markdown output beginning with a downward arrow emoji. Basically the HR tags are used to insert the horizontal lines before and after the heading.

    Then we have display

    07:36

    HTML function is used to render this HTML in the notebook. Then we have display markdown.

    This line takes the content of the response from OpenAI's model stored in this and it displays it as markdown. The markdown function is used to render the content in a more

    07:53

    readable formatted way. So this content could include text, headers, list or any markdown elements generated by the model.

    So then we can run this and you can basically see

    08:09

    the heading. Then the step one is identify all the molecules involved in insulin chain A, chain B.

    Then step two prepare the smiles and strings for each amino acid. Then we have create the function.

    08:25

    This is basically the output. So then we have this good prompt generate a function that outputs a smile ID as we have given a bad prompt.

    It is necessary to give a good prompt as well. So yeah we have given a good prompt.

    So let's

    08:42

    now display that good prompt. Mark out beginning understand insulin step by step and you can see the difference clearly.

    08:58

    So this is basically a great code but don't get overwhelmed. I'll break it down step by step.

    So this example is an structured prompt basically and used to guide an AI

    09:14

    assistant to behave like a customer service representative for a company named any corp. So which sells storage solutions such as beans.

    The purpose of this prompt is to make sure that the AI behaves professionally, respectfully,

    09:29

    and in line with the company policies while responding to the customer queries. The first part of the prompt labeled as instructions clearly tells the assistant what role it is playing a customer service assistant and emphasize the tone it should maintain kind and

    09:46

    respectful at all times. So this ensures assistant responds in a friendly and helpful manner just like a real support agent would.

    So next comes the policy section which outlines the six specific

    10:02

    areas that the assistant must follow. So these include handling refunds, recording complaints, providing accurate product information, maintaining the professional conduct, complying with the policies and data privacy rules, and refusing to answer questions outside

    10:18

    these topics. For example, under refunds, the assistant is allowed to process them as long as it follows any corpse rules and documents everything properly.

    If a customer complains, the assistant should listen carefully, reassure them and escalate the issue if

    10:34

    necessary. So if asked about topics unrelated to these areas, the assistant must politely refuse to answer and guide the user back to the allowed topics.

    The final part is a user query which shows the customer's actual message like hey I

    10:51

    would like to return the bin I bought from you as it was not finded as prescribed. So these falls under the refunds category in the policy.

    Since the assistant is allowed to handle the refund request, it will process this message accordingly possibly by

    11:07

    confirming the issue, starting the return process and recording the complaint. So in short, this structured prompt creates a safe and clear framework for the assistant to operate.

    It tells the AI what kind of support it can offer, how to behave, what topics

    11:23

    are allowed, and how to respond when faced with customer issues. All while making sure the customer feels heard, respected, and healed.

    So let's now print the statement the structure prompt.

    11:39

    Now you can see it is printed. So now this code is from the Python program using the open AI API which allows developers to interact with the AI models like chart GPD.

    So specially this line is used to send a prompt to the AI

    11:55

    and get a response back. The function call is then client.comp completion.create.

    So client is the OpenAI API client that was likely used up earlier in the code. It connects your code to the open AAI

    12:11

    server. So the dot chatcomp completions.create part is how you send a chat message to the AI and ask it to complete the conversation.

    So in other words generating a reply. So inside the

    12:26

    parenthesis the first argument is model is equal to 01 model. So this tells the API which version of the AI model to use.

    for example, GPD4, GPD 3.5 Turbo or some other customer or fine-tuned model

    12:42

    stored in the variable. So this ensures that you are using the right AI brain for your task.

    So the second argument is messages again. So this is where you pass the actual conversation history or input formatted as a list of message objects.

    So in this

    13:00

    case there's only one message in the list and it has two key parts. The role that is the user then the content is a structured prompt.

    The role the user so this tells the AI that the following content is coming from the user so not from the system or assistant and the

    13:17

    content is the combines the earlier structured prompt. So which include instructions and policy rules for the assistant with the user's questions like wanting to return a bean etc.

    This combined content acts as the full message the AI will read before

    13:33

    replying. So in more simpler terms, this line is telling the AI, hey, you're a customer service agent for any clock.

    Here's your policy. Now here's a customer question.

    What will be your reply? The AI will read the whole input,

    13:50

    follow the instructions and policies and then return a customerfriendly response. So the response itself will be saved in the variable called response which you can later print or process in your code.

    So let's run this. So I have this

    14:06

    printed response. So I will just run this.

    You can see the response basically. Then we have this refusal input.

    The variable refusal input contains a user query that asks the AI write me a hack about how reasoning modules are great.

    14:23

    So this is wrapped in XML like tags. This purpose of this input is to test the AI's ability to say no.

    According to the structured prompt and policy you provided earlier, the assistant is only allowed to talk about refunds,

    14:39

    complaints, and product information about the storage solutions. A hiq about reasoning models doesn't fall into any of these topics.

    So when this refusal input is combined with the structured prompt and sent to the AI, it should

    14:55

politely refuse to answer and explain that it can only help with the allowed topics. We will run this.

So again this line is sending a new message to the AI model, but this time using refusal_input instead of user_input. The client.chat

15:12

.completions.create call uses the OpenAI API to get a reply from the AI. The model is o1, and the model argument tells which AI model to use. Then in messages we send a message to the AI.

Here the role is user and the content is the

15:28

structured prompt plus the refusal input. It basically combines the rules of the structured prompt with a new question that may be outside the allowed topics.

Since the structured prompt says the assistant must refuse questions outside refunds, complaints, and product

    15:43

    information. This test checks if the AI correctly refuses to answer.

So in short, this line tests whether the AI follows the policy and politely refuses to answer a question it's not allowed to handle.

    15:58

    So let's run this. So this line basically displays the AI's reply to the refusal input.

You can see that this is the output. So now we are ready to move on to a base prompt and a legal query.

    So let

    16:15

me explain. The variable base_prompt is used to define the role and behavior of the AI assistant. In this case the AI is told to act as a lawyer who specializes in competition law, specifically helping business owners understand legal issues.

    So this

    16:31

    prompt is wrapped in tags like prompt and policy to make it structured and easy for the AI to interpret. So the policy section outlines the important rules the AI must follow.

    It says that the assistant should give clear and accurate information about the

    16:48

competition law, maintain confidentiality and professionalism, and avoid offering specific legal advice unless there's enough context. The legal_query variable contains the actual question from the user.

    The query is a

    17:04

    larger company is offering suppliers incentives not to do business with me. So is this legal?

So this is wrapped inside a query tag, which keeps the input structured and consistent with the rest of the prompt format. In summary, this setup gives the AI a clearly

    17:21

defined role as a legal expert and a policy to follow. We will now run this.

So this line then sends a structured legal question to the AI using the OpenAI API. The goal is to get a helpful and professional response

    17:37

from an AI that's acting as a competition law expert. The method client.chat.completions.create is used to generate a chat response.

As discussed earlier, here the role is user and the content is base_prompt plus legal_query. So we'll now run this.

    So

    17:53

now we are going to visually display the AI's response with styling and formatting, which helps the output look clean and organized for the user. So we'll do this, and you can see it is a step-by-step guide. So this next one is again a

    18:12

bigger query. So here the variable example_prompt is a structured example prompt that shows how an AI assistant acting as a competition law expert should respond to a user's

    18:27

legal question. It's formatted using special tags like prompt, policy, example, question, and response to keep everything organized and machine readable.

So inside the prompt section, the AI is instructed to behave like a lawyer who specializes in

    18:44

    competition law, helping business owners with their legal questions. So this gives the AI its role and sets the stage for how it should respond.

    The policy section defines the boundaries the AI must follow.

    19:00

    Then we have the example section provides a sample legal scenario and how the AI should answer. So then we have the response that follows is a well structured professional reply.

    It begins by referencing the US antitrust laws

    19:17

especially the Sherman Antitrust Act, and explains how some types of collaboration, like price fixing or market division, are automatically illegal. It also warns about information sharing, which can be risky

    19:34

    and potentially lead to antitrust violations. In summary, this example prompt acts as a guide or training example to show how the AI should behave in legal conversations.

So then I'll run this.

    19:49

So here we are going to again display the output in a formatted style, and you can see it is displayed. So let's now move on to the next module and learn about planning with o1.

    20:07

So in the earlier lessons we explored how to structure prompts and guide AI models like o1 and o1-mini to behave in specific roles, whether a customer service agent or a legal advisor. Here in this lesson we will learn how to

    So here in this lesson we will learn how to

    20:24

plan with o1. It introduces a powerful use case where the AI is not just answering a single question but actually creating a full plan to solve a task.

So this kind of problem solving is perfect for o1 models, especially when

    20:42

you are working with a fixed set of tools and some constraints. So instead of asking the model to do every step directly, which could be slow and expensive, we let o1-mini generate the plan and then use a smaller, faster model

    20:57

like GPT-4o mini to execute each step. This two-model approach, planning with o1-mini and executing with GPT-4o mini, helps us work efficiently.

    So please don't worry if your results don't exactly match the ones in the video tutorial. Now that's

    21:14

    totally normal since AI responses can vary slightly every time you run them. So in short this lesson builds on what you have already learned and now takes it a step further like teaching the AI how to think ahead and plan not just react.

    So let's dive in. So we will

    21:31

    start with the first part of the code. This is about suppressing the warning messages as we have discussed earlier in Python.

Python sometimes shows warnings when it detects something that might not cause an error right away but could lead to issues in the

    21:47

future. This handles that, and the second part is focused on importing your OpenAI API key securely.

So instead of hardcoding the key directly into the notebook, which is not safe, the code calls a helper function: from helper

    22:03

import get_openai_api_key brings in a function named get_openai_api_key. We will run this. The first two lines import two standard Python modules, copy and json.

The copy module is used when you want to duplicate Python objects,

    22:20

especially when you need to make changes without affecting the original. The json module is used for working with JSON data, which is a common format for sending and receiving structured information, especially when dealing with APIs.
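A tiny illustration of why both modules matter here; the nested dictionary is an invented example, not data from the course.

```python
import copy
import json

# deepcopy duplicates a nested object so edits don't touch the original.
original = {"inventory": {"X200": 50}}
clone = copy.deepcopy(original)
clone["inventory"]["X200"] = 0  # the original is unaffected

# json round-trips the object to a string and back, as an API would.
payload = json.dumps(original)   # dict -> JSON string
restored = json.loads(payload)   # JSON string -> dict
```

A plain assignment or a shallow copy.copy would have shared the inner dictionary, so the edit would have leaked into the original; deepcopy avoids that.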

    22:35

The next line imports the OpenAI class from the openai package. This is what allows your code to communicate with OpenAI language models like GPT-4o or o1-mini.

Then we import the o1 tools from a separate file named utils.py.

    So this

    22:53

likely contains some common utility functions or tools. Next we set up the connection to the OpenAI API by creating a client instance.

So client = OpenAI(api_key=openai_api_key). Here the OpenAI

    23:09

API key variable, which was retrieved earlier using get_openai_api_key, is passed to authenticate your session with OpenAI. This line sets up the client object, which you will use to send prompts and receive responses from

    23:24

    the models. So finally two variables are defined to specify which models to use.

These are simply model name shortcuts. Instead of writing the full model name each time, you can just refer to o1_model and gpt_model. So o1-


    23:40

mini is used for planning the task, while GPT-4o mini is used for executing the individual steps in the plan. So we will run this.

So here the code's first line initializes an empty list called message_list. This list will likely

    23:56

be used to store the sequence of messages exchanged with the AI model. Next, we define a dictionary named context, which sets up the initial state of a business environment for a planning task. Under the inventory section, we are told that 50 units of

    Under the inventory section, we are told that 50 units of

    24:13

the Smart Home Hub X200 are currently in stock. Then under orders we see a single order, ORD3001, requesting 200 units of this product for a customer with the ID CUST

    24:30

9001, located in Los Angeles. The suppliers section lists two available suppliers and the components they can provide.

    Notably only SUPP1101

    24:46

has the component COMP-X200 required to make the X200 product, and they have 500 units available. The production capacity section defines how many units can be produced: 100 right away and another 150 next week.

    This helps the model

    25:02

    decide how to meet the demand of 200 units. The shipping options provide two choices for delivering to Los Angeles.

    A standard service that cost $1,000 and takes 5 days and express service that cost dollar 1,500 and takes only two

    25:19

days. The model can choose between these based on urgency or budget.

The customers dictionary holds customer details like the name and address of the client placing the order. The products section defines what components are needed to

    25:36

manufacture each product. So in this case, the X200 device requires one unit of COMP-X200 to build each finished product.

The code sets up a realistic business scenario involving products, inventory, orders, manufacturing, suppliers, and shipping.

    25:52

    All the key elements the AI needs to plan how to fulfill the order effectively. So we'll run this.
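The context dictionary narrated above can be sketched roughly as follows. The exact keys and IDs in the course notebook may differ; this mirrors the numbers in the narration (50 in stock, an order for 200, 100 plus 150 production capacity, two shipping options), and the second supplier's details are assumed.

```python
# Hedged reconstruction of the planning context; field names are assumptions.
context = {
    "inventory": {"X200": 50},
    "orders": [
        {"order_id": "ORD3001", "product_id": "X200", "quantity": 200,
         "customer_id": "CUST9001", "destination": "Los Angeles"}
    ],
    "suppliers": {
        "SUPP1101": {"components": {"COMP-X200": 500}},
        "SUPP1102": {"components": {}},  # second supplier, details assumed
    },
    "production_capacity": {"immediate": 100, "next_week": 150},
    "shipping_options": {
        "Los Angeles": [
            {"service": "standard", "cost": 1000, "days": 5},
            {"service": "express", "cost": 1500, "days": 2},
        ]
    },
    # Each finished X200 consumes one COMP-X200 component.
    "products": {"X200": {"components_needed": {"COMP-X200": 1}}},
}
```

Laid out this way, the planning problem is visible at a glance: 200 units ordered, 50 on hand, so the plan must combine allocation, production, and purchasing.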

So this is basically a prompt for planning the task. This block defines a large string called o1_prompt, which acts

    26:08

as an instructional prompt for an advanced reasoning model like o1. The goal of this prompt is to guide the model to act as a supply chain management assistant and generate a detailed, logical plan for handling a complex order fulfillment task.

    It's

    26:25

    essentially the brain that will break down a business challenge and outline the best steps to solve it. The first paragraph in the prompt sets the context.

It tells the AI that it will be given a complex task that involves managing customers, inventory, production

    26:41

supplies, and shipping. The next section lists all the functions available to the executor AI.

The functions allow the executor to do things like check inventory levels, allocate stock to orders, place purchase orders with suppliers, schedule production runs, and

    26:58

    so on. Each function is briefly described along with its required inputs and expected outcomes.

    So this ensures the planning AI knows what tools are available, how they work, and what each one is for. It helps the model build a plan that is realistic and executable by

    27:15

the downstream agent. So in summary, this o1 prompt sets up a rich and highly structured instruction environment for the planning model.

    It tells the AI to think carefully, reason step by step and create a clear, detailed and logical order plan. One that another AI can

    27:33

    confidently execute using the provided tools. It ensures consistency, safety, and clarity across every step of the supply chain process.

    So let's run this. So this is basically the prompt for the execution process by

    27:50

the system. This block defines a variable called gpt4o_system_prompt, which is a string used to guide the behavior of the execution model, GPT-4o mini in this case, while the o1-mini model is responsible for planning. The prompt is

    The prompt is

    28:07

for the model that carries out each step of the plan. The assistant is told to act as a helpful executor whose job is to strictly follow a given policy related to handling incoming orders.

    The prompt defines a step-by-step workflow

    28:22

    for the model to follow. First, the model should read and understand the full policy.

Then, it needs to identify which step in the policy it is currently working on. Before acting, it should give a brief explanation of its decision. Finally, it should perform the action, which typically involves

    So, finally, it should perform the action which typically involves

    28:39

calling one of the available functions with the correct inputs. So in summary, the gpt4o system prompt sets up the mindset and behavior of the GPT-4o mini executor.

It ensures the model works step by step, explains its

    28:56

reasoning, and only takes actions that match the policy, creating a controlled and transparent execution flow for order handling in a supply chain scenario. So let's run this. This code block defines a

    29:11

list named tools, which contains a collection of the functions the AI can use to interact with the supply chain system. These functions act like a toolbox, and each tool performs a specific task related to inventory, production, suppliers, orders,

    29:29

etc. Each tool is defined as a dictionary that includes its type, which is function, and a function section that details the function's name, a description of what it does, and the input parameters it expects.

    Some key functions include get inventory status

    29:46

to check how much of a product is in stock, then update_inventory to add or subtract stock.

    Then fetch new orders to see which orders are pending. Then get product details and check available suppliers to explore what's needed to

    30:02

    make products or find suppliers. Then place purchase order to order components.

    Allocate stock to reserve product for a customer order. Check production capacity and schedule production to plan and carry out manufacturing.

    Then calculate shipping

    30:19

options and book_shipment to handle delivery, send_order_update to notify the customer, and

instructions_complete to mark that all steps in the plan are finished. Each function also defines the parameters it requires, like product ID, order ID, and quantity.

    So this

    30:36

is important because the AI must pass exactly the right data when calling these tools, enforced through required fields and additionalProperties. In summary, the tools list defines a complete set of functions the AI agent can use to operate the supply chain system.
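One entry in that tools list would look roughly like this, following the OpenAI function-calling schema the narration describes. The description text is paraphrased, not copied from the course notebook.

```python
# Hedged sketch of a single tool definition in the OpenAI
# function-calling format; only the schema shape is the point here.
get_inventory_status_tool = {
    "type": "function",
    "function": {
        "name": "get_inventory_status",
        "description": "Check how many units of a product are in stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "product_id": {"type": "string"},
            },
            # required and additionalProperties are what force the model
            # to pass exactly the right arguments, as noted above.
            "required": ["product_id"],
            "additionalProperties": False,
        },
    },
}
```

The full tools list is simply a Python list of dictionaries of this shape, one per function.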

    So

    30:52

    let's now move on to the functions then. So this block defines all the Python functions that simulate the supply chain operations described earlier.

The first few functions deal with inventory management.

31:09

get_product_details fetches the product information. get_inventory_status returns how many units of a specific product are currently in stock. Then update_inventory(product_id, quantity_change) adjusts the inventory count based on a

    31:25

positive or negative change, making sure inventory doesn't go below zero. Then allocate_stock tries to reserve inventory for an order.

    If there's enough stock it reduces inventory and confirms the allocation. If not it

    31:40

allocates whatever is available and marks the result with an insufficient-stock warning. The supplier-related functions start with check_available_suppliers, which returns the list of suppliers the company can order components from.
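The allocate_stock behavior just described can be sketched as below. This is a minimal stand-in, assuming a module-level inventory dict; the course's helper may use different names and return fields.

```python
# Hedged sketch of partial stock allocation with a warning, per the narration.
inventory = {"X200": 50}

def allocate_stock(product_id: str, quantity: int) -> dict:
    """Reserve stock for an order; allocate partially when stock runs short."""
    available = inventory.get(product_id, 0)
    allocated = min(available, quantity)
    inventory[product_id] = available - allocated
    result = {"product_id": product_id, "allocated": allocated}
    if allocated < quantity:
        result["warning"] = "insufficient stock"
    return result

# The course scenario: 200 units requested against 50 in stock.
partial = allocate_stock("X200", 200)
```

Only 50 of the 200 requested units get reserved, and the warning flags the shortfall so the planner knows production or purchasing must cover the rest.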

    Then comes the

    31:56

production-related logic.

check_production_capacity(time_frame) checks how many units can be produced within the given time frame. Then schedule_production_run checks the capacity, schedules production if possible, and updates the inventory.

    For shipping, the

    32:13

function calculate_shipping_options returns a list of available shipping methods and their costs for the specified destination. Lastly, the function send_order_update simulates sending a message to the customer to keep them updated on their order status.

    So, at the very end,

    32:30

there's a dictionary called function_mapping, which links each function name to the actual Python function. So let's run this.
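The function_mapping pattern works like this. The function bodies here are hypothetical stand-ins; the point is the dispatch table that turns a tool name returned by the model into a real Python call.

```python
# Stand-in implementations; the real ones operate on the course's context.
def get_inventory_status(product_id):
    return {"product_id": product_id, "in_stock": 50}

def send_order_update(order_id, message):
    return {"order_id": order_id, "sent": True}

# The dispatch table: tool name (as the model emits it) -> Python function.
function_mapping = {
    "get_inventory_status": get_inventory_status,
    "send_order_update": send_order_update,
}

def dispatch(tool_name, **kwargs):
    """Look up the requested tool by name and call it with its arguments."""
    return function_mapping[tool_name](**kwargs)
```

When the executor model emits a tool call, its arguments arrive as data, so a name-to-function table like this is what bridges the model's text output and actual code execution.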

So guys, this knits together the whole process. Let's now move on to the next module.

    32:47

Hello guys, welcome back to this new module on coding with o1. Initially we will start with the first part, which hides warning messages using warnings.filterwarnings('ignore') as before. So this keeps the

    So this keeps the

    33:04

output clean by preventing non-critical warnings from appearing during execution. Next, the code imports the get_openai_api_key function from the helper file.

    Finally, the function is called and its result is stored in the OpenAI API key variable. So in short,

    33:20

    this setup hides warnings and securely loads your API key for using OpenAI tools. So we'll run this.

So here the first line imports three useful display tools from IPython.display:

    33:36

display, Image, and Markdown. These are commonly used in Jupyter notebooks to visually show images, format output using Markdown, or display objects in a cleaner way.

Next the code imports the OpenAI class from the openai library. This class is

    33:52

used to create a client that communicates with the OpenAI API to generate responses, run models, or perform other AI tasks.

Then client = OpenAI(api_key=openai_api_key) initializes the

    34:07

OpenAI client using the API key we previously stored. This connects your code to the OpenAI services.

So you can start making API calls. Finally, two variables, gpt_model and o1_model, are defined with the model names gpt-4o-mini and

    34:25

o1-mini. These are shorthand labels used in your code to specify which AI model you want to work with when making an API request.

    So we'll run this.

    34:40

So this code defines a function called get_chat_completion, which is used to get a response from an OpenAI chat model. It takes two inputs:

model, the name of the model you want to use, like GPT-4o mini, and prompt, the user's message or

    34:57

question. Inside the function it calls client.chat.completions.create.

This is the OpenAI API method used to send the prompt to the chosen model and generate a chat-based response. The message is passed in the format

    35:13

OpenAI expects: an array of message objects with roles like user and assistant. Finally, the function returns just the content of the model's reply using response.choices[0].message.content.

    So this keeps the output clean

    35:29

    and gives you only the AI's reply text, not the entire API response structure. So we'll run this.
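A reconstruction of get_chat_completion as narrated, with one deliberate deviation: the client is passed in as a parameter (in the course it is a notebook-level variable) so the sketch can be exercised with a stub instead of a live API key. The stub class below is purely illustrative.

```python
from types import SimpleNamespace

def get_chat_completion(client, model, prompt):
    """Send a single user message and return only the reply text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Stub client mimicking the response shape response.choices[0].message.content,
# so the helper can run without contacting the OpenAI API.
class _StubCompletions:
    def create(self, model, messages):
        reply = SimpleNamespace(message=SimpleNamespace(content="stub reply"))
        return SimpleNamespace(choices=[reply])

stub_client = SimpleNamespace(chat=SimpleNamespace(completions=_StubCompletions()))
```

With a real OpenAI client in place of stub_client, the same helper returns the model's actual reply text.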

Now here the code defines a multi-line string variable called react_demo_prompt. It's a prompt written in natural language that will be sent to an

    35:46

AI model like ChatGPT to generate a React component. The prompt asks the AI to create a React component named FeedbackForm for collecting structured interview feedback.

It outlines four main requirements: rating candidates

    36:03

with rubrics, giving specific evidence for each rating, auto-calculating a recommendation score, and guiding the interviewer to provide behavioral feedback.

It also includes additional instructions for the AI, like using intelligent UX

    36:19

    features like thoughtful rubrics, smart validation and helpful prompts. Finally, the prompt includes clear coding instructions.

The component must begin with "use client". In short, this prompt is designed to generate a polished, intelligent interview feedback

    36:34

form using React. So we'll run this.

So this line calls the get_chat_completion function, which was defined earlier to get a response from the OpenAI model. It passes two arguments: gpt_model, which specifies the

    36:51

AI model to use, like GPT-4o mini, and react_demo_prompt, which is the detailed instruction asking the model to generate a React component. The function sends this prompt to the model and returns the AI's response, which will

    37:08

be a block of code as requested. The result is then stored in the variable gpt_code, which now holds the React component code generated by the AI.

    So we'll run this. So let's now simply print the code and you can see

    37:24

the code is printed. This line uses the display function from IPython.display to show an image inside a Jupyter notebook or similar environment.

So with Image, you can see this:

37:40

the call loads an image file named GPT4_app_image.png from the current directory. Together, this displays the image in the notebook output, which can be useful for showing visual results, UI screenshots, or

    37:55

    diagrams alongside your code. So let's now move on.

It now sends the same react_demo_prompt to a different model, the o1 model, likely the lighter o1-mini. It uses the get_chat_completion function

    38:12

to generate the code by calling the OpenAI API. Now o1_code holds the output generated by the o1 model for the same prompt, allowing you to compare the results across models, and you can just

    38:28

print the o1 code, and you can see it is printed, and you can now display it. Next, the function process_orders takes in a

    38:45

list of orders and optional parameters settings, debug, and notify_customer. It processes each order by validating its fields, calculating totals, applying discounts, and queuing a customer notification.

    It collects the results

    39:00

errors, and notifications in separate lists and returns them in a dictionary. However, there are a few issues in the code.

First, using a mutable default argument for settings is risky because it can lead to unexpected behavior. It's better to use None and assign an empty

    39:18

dictionary inside the function. Also, comparisons like if debug == True should be simplified to if debug.

Similarly, if items_valid == True can be just if items_valid. So while the function logic works for

    39:35

basic order processing, it should be cleaned up to follow Python best practices for performance and maintainability. So let's now run this.
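The mutable-default pitfall called out above is easy to demonstrate with a toy function (these names are invented, not the course's process_orders code):

```python
# Buggy: the default list is created once and shared across all calls.
def add_note_buggy(note, notes=[]):
    notes.append(note)
    return notes

# Fixed: the recommended None-sentinel pattern creates a fresh list per call.
def add_note_fixed(note, notes=None):
    if notes is None:
        notes = []
    notes.append(note)
    return notes

first = add_note_buggy("a")
second = add_note_buggy("b")        # surprisingly contains "a" as well
fixed_first = add_note_fixed("a")
fixed_second = add_note_fixed("b")  # a fresh list each call
```

This is why a settings={} default in process_orders is risky: every call that mutates settings would silently share state with every other call.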

So basically it creates a prompt string called prompt using an f-string. It

    39:50

tells the AI to clean up and improve the Python code stored in code_snippet. The prompt specifically asks the AI to return only the updated code.

Next, get_chat_completion sends this prompt to the selected GPT model using the

    40:06

previously defined function. This function calls the OpenAI API and gets a revised version of the code as a response.

Finally, printing gpt_code displays the new version of the code so you can review or use it. Lastly, this

    40:23

line sends the same prompt to a different model, the o1 model, using the get_chat_completion function. This prompt asks the model to clean up and improve the code stored in code_snippet.

    The function processes the request and stores the model's response

    40:40

in the o1_code variable. This response should contain the optimized version of the original code generated by the o1 model.

In the next line, print(o1_code) displays the improved code in the output, allowing you

    40:55

to see and compare the o1 model's version with the GPT model's output. This is useful for evaluating differences in code quality, clarity, and optimization between the models.

    Lastly, the result is stored in the result

    41:12

variable and then displayed using display(Markdown(result)), which renders the output as nicely formatted Markdown, great for readability in Jupyter notebooks.

    In short, this step helps you to use the AI

    41:29

    not just for generating code but also for doing a quick side-by-side code review and analysis as well. So let's now move on to the last module.

    Hello guys, welcome back to this last module. So let's now get into reasoning with

    41:46

images. We will start by importing warnings; this is a built-in module used to manage warning messages during code execution.

The second line uses warnings.filterwarnings to suppress all warnings. This means any non-critical

    42:02

    alerts will be hidden from the output. This is commonly done to keep the output clean especially in the notebooks or scripts where you don't want warning messages to clutter your results.

    So we'll run this. So then here the first line imports the JSON module which is

    42:19

used to work with JSON data. This is helpful for parsing, formatting, or storing structured information.

The second line imports the OpenAI class from the openai package. This allows you to connect to OpenAI's API and make

    42:34

requests to generate text, code, or other outputs from the AI models. Then the line from IPython.display import display, Markdown, Image imports tools to visually display Markdown text, images, or formatted output, commonly used

    42:52

in Jupyter notebooks. After that, the code imports the get_openai_api_key function from a custom file named helper.

    This function retrieves your OpenAI API key securely. So it doesn't need to be hardcoded.

    The function get

    43:08

openai_api_key is called and the returned key is stored in the openai_api_key variable. This key is needed to authenticate with the OpenAI API.

The lines defining gpt_model (gpt-4o-mini) and o1_model (o1) specify

    43:26

    the names of the AI models you want to use for generating responses. So these can later be passed when making the API calls.

    Finally, client is equal to OpenAI creates an OpenAI client instance which you will use to send prompts and

    43:42

receive completions. This should ideally be OpenAI(api_key=openai_api_key) to include authentication, unless your environment handles the key automatically. So we'll now run this.

    include the authentication unless your environment handles the key automatically. So we'll now run this.

    43:59

So this means the image is expected to be located inside a folder named data in your project directory. The second line uses display(Image(image_file_path)) to load and show the image in a Jupyter notebook or similar environment.

    It

    44:15

    visually renders the organization chart image. So you can see directly in your notebook output.

    So this is commonly used when working with the visual data or presenting diagrams, charts or anything like that. So let's now move on to the next one.

    The code starts by

    44:32

importing the base64 module, which is used to encode binary data like image files into a base64 string format. This is necessary because APIs like OpenAI's vision models accept images as base64-encoded strings.

The function encode_image

44:49

(image_path) takes an image file path, reads it in binary mode ('rb'), encodes it using base64, and returns it as a UTF-8 string. This prepares the image so it can be safely passed to an AI model in the required format.
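That helper can be sketched as below; the function name follows the narration, while the self-check with a throwaway file is added purely for illustration.

```python
import base64
import os
import tempfile

def encode_image(image_path: str) -> str:
    """Read an image in binary mode and return it as a base64 UTF-8 string."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

# Quick round-trip check with a throwaway file (illustration only).
with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as tmp:
    tmp.write(b"\x89PNG fake bytes")
encoded = encode_image(tmp.name)
os.unlink(tmp.name)
```

The resulting string is plain text, so it can be embedded directly in the JSON body of an API request alongside the text prompt.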

    The

    45:07

main function o1_vision is designed to send both an image and a text prompt to a vision-enabled OpenAI model. It first encodes the image using the encode_image function.

Inside the function, it checks whether json_mode

    45:23

is true or false. If json_mode is true, the response will be requested in structured JSON format using a response_format of type json_object, which is useful for extracting structured data from the model's reply. If json_mode

    So if JSON mode

    45:39

is false, it simply sends the prompt and image together and expects a regular text response from the model. Both versions use the client.chat.completions.create method to make the API call.

Finally, the function returns the raw response

    45:56

object from the API, allowing the caller to further process or display the model's output. In short, this function lets you send an image along with a custom prompt to a vision-capable OpenAI model, with an option to get the result either as plain text or structured

    46:14

JSON. So this line calls the o1_vision function, which sends both an image and a prompt to the vision model for analysis.

It passes three arguments: file_path, the path to the image file; prompt equal

    46:30

to "What is this?", asking the model to describe the image; and model=o1_model, which specifies the vision-capable model to use, like o1.

    The function processes the image and the prompt, sends them to the model and then stores the model's response in the

    46:46

    variable response. So this response will contain the AI's interpretation of the image which you can later extract and display.

    So this line is used to display the AI models response in a clean and readable format inside a Jupyter notebook. So this response dot choices

    47:06

[0].message.content accesses the actual text content generated by the model in the first response choice. Wrapping it with Markdown allows the output to be rendered with proper formatting such as bold text, bullet points, or headings,

    47:22

if the response includes Markdown syntax. Finally, display renders the formatted Markdown content in the notebook output area, making the model's reply easy to read and visually structured.

    So let's now start with

    47:38

understanding the image. The code defines a variable called structured_prompt, which is a string that will be sent as a prompt to the AI model, most likely a vision-enabled one, to extract structured data from an image.

    The prompt includes special instructions

    47:54

tags to clearly tell the model what to do. It says that the model is acting as a consulting assistant whose job is to process organization chart data from the image.

It then specifies the desired output format: a JSON structure that

    48:09

includes an arbitrary ID for each person, their name and role, an array of IDs they report to, and an array of IDs that report to them. This is a clean and structured way to represent an organization hierarchy.

    48:25

Finally, printing structured_prompt shows the full prompt string so you can review or verify its contents before using it in a function like o1_vision. This code then calls the o1_vision function to send both an image and the structured prompt to the OpenAI vision model o1, asking it

    48:43

to extract the organizational hierarchy data. The arguments passed include file_path=image_file_path, which points to the organization chart image, and the model argument specifying o1 as the model to use.

Then we have prompt=structured_prompt, which

    48:59

contains detailed instructions on the output format, and json_mode=True. Setting json_mode to True tells the model to return the result in structured JSON format

instead of plain text, making it easier to parse and

    49:14

use programmatically. The final line prints the model's JSON-formatted response from the first message choice, which should include the organization structure with IDs, names, roles, and reporting relationships.

    49:31

You can see the output. So this line creates a new variable called clean_json that stores a cleaned-up version of the model's response.

Sometimes when models return JSON, they wrap it in markdown formatting using

    49:47

triple backticks, like a ```json code block. So this line removes those wrappers using two .replace() calls with empty strings, leaving behind only the raw JSON content.

    The result is a plain JSON

    50:04

string that can now be parsed or processed further without formatting issues. So this line attempts to parse the model's response directly using json.loads, which converts a JSON-formatted string into a Python dictionary or list.
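The cleanup-and-parse step described above can be sketched like this. The response text is illustrative stand-in data, not actual model output (the fence string is built programmatically only so this example stays readable):

```python
import json

fence = "`" * 3  # a literal triple backtick
# Stand-in for a model response wrapped in a markdown code block.
raw = (fence + "json\n"
       '{"people": [{"id": 1, "name": "Asha", "role": "CEO"}]}\n'
       + fence)

# Strip the markdown wrappers, leaving only the raw JSON content.
clean_json = raw.replace(fence + "json", "").replace(fence, "").strip()

# Now json.loads succeeds, returning a Python dict.
organization_data = json.loads(clean_json)
print(organization_data["people"][0]["name"])
```

Parsing the uncleaned string directly would raise a json.JSONDecodeError, which is exactly why the cleaned version should be used.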

    So it accesses

    50:21

o1_response.choices[0].message.content, which is expected to contain JSON output describing the organization chart structure. The result is stored in the variable organization_data, making it possible to work with the

    50:37

organization hierarchy as a structured Python object. However, the raw string may still contain the markdown wrappers like ```json.

You should use the previously cleaned version instead for safe parsing. So the code defines a new prompt string

    50:52

called analysis_prompt, which is meant to guide an AI model to answer questions based on a previously extracted organization chart. It starts with special instructions like: you are an organization chart expert assistant.

    Your role is to answer any organization

    51:08

chart questions using the organization data, and we have given that instruction. So this tells the AI what role it plays and how to use the data.

It then embeds the structured organization chart data inside organization_data tags using an f-string. This helps the model clearly

    51:26

    understand that this is the data it should reference when generating answers. So let's now run this here in the code.
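The analysis prompt described above might be assembled with an f-string roughly like this. The tag names and the sample data are illustrative assumptions:

```python
import json

# Illustrative data; in the notebook this comes from the parsed
# model response, not from a hand-written dict.
organization_data = {"people": [{"id": 1, "name": "Asha", "role": "CEO"}]}

analysis_prompt = f"""<instructions>
You are an organization chart expert assistant. Your role is to answer
any organization chart questions using the data provided below.
</instructions>

<organization_data>
{json.dumps(organization_data, indent=2)}
</organization_data>"""

print(analysis_prompt)
```

Embedding the data inside explicit tags makes it unambiguous to the model which part of the prompt is reference data and which part is instruction.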

The first line imports the get_openai_api_key function. This function is typically

    51:42

used to securely retrieve your OpenAI API key without hard-coding it in the script. The second line calls this function and assigns the result, your API key, to the variable openai_api_key so it can be reused throughout

    51:57

your code for authentication. Finally, client = OpenAI(...) creates an instance of the OpenAI client using that key.
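The body of the get_openai_api_key helper isn't shown in the transcript; a common pattern, sketched here as an assumption, is to read the key from an environment variable rather than hard-coding it:

```python
import os

def get_openai_api_key() -> str:
    """Retrieve the OpenAI API key from the environment instead of
    hard-coding it in the script. (Hypothetical implementation of the
    helper the transcript mentions.)"""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first.")
    return key

# openai_api_key = get_openai_api_key()
# client = OpenAI(api_key=openai_api_key)  # requires the openai package
```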

This allows you to send prompts and interact with OpenAI's models through authorized API calls. So as

    52:12

discussed before, these are the messages, with role user and content set to the analysis prompt plus questions like who has the highest-ranking reports and which manager has the most reports, and we will just run this. So then we are finally displaying the

    52:29

markdown, allowing markdown formatting like bold text, bullet points, or headers to be properly rendered. And the first line sets the variable image_file_path to the path of an image file

    52:46

    which is located inside a folder called data. So this image likely contains an entity relationship diagram.

The second line uses the display image function to load and visually display the image within the Jupyter notebook. So we'll just run this

    53:02

and basically you can now see the output: the entities and their fields, such as the product ID, the order, the client ID, the product name, client address, product price, and client credit card. So basically this was all about

    53:19

the course; that's it. So congratulations on completing your journey with Reasoning with o1.

You have gone from understanding the core foundations, what Reasoning with o1 is, why it's crucial in today's AI-driven world, and where it can

    53:35

be applied, to actually putting it into practice. We began by setting the stage with the fundamentals, then moved on to hands-on exploration, crafting effective prompts, practicing logical and structured reasoning, then writing and refining code with o1, and finally

    53:50

understanding how each layer builds toward the AI interaction. You didn't just learn the theory, you experienced it through demos, challenges, and real-world examples.

    So whether it was decoding logic, guiding AI through

    54:06

    structured thought or debugging your own process, you have now got a strong grip on how to think with AI, not just about it. This is just the beginning, guys.

    Keep experimenting, keep questioning. The more you reason, the smarter your results become.

    Welcome to our

    54:22

comprehensive tutorial on creating a fully functional e-commerce application using React, Tailwind CSS, and Redux with the help of ChatGPT. In this tutorial, we are going to leverage the powerful capabilities of ChatGPT to assist us in

    54:38

building a modern, responsive, and feature-rich e-commerce application. You will learn how to set up your development environment.

    We will start by setting up the essential tools and libraries required for our project including React, Tailwind CSS, Redux and

    54:55

    React Router. We are also going to design the application layout using Tailwind CSS.

We will create a visually appealing, user-friendly layout that includes the header, navigation bar, product listings, shopping cart, and checkout pages. We are also going to

    55:11

    implement the state management with Redux. We will use Redux for managing global state, ensuring our application is scalable and maintainable.

    You will learn how to set up slices for handling user authentication, product data, and the shopping cart. We will also create

    55:26

reusable components. Throughout this tutorial, we will build a variety of reusable components such as product cards, form inputs, buttons, and modals to enhance the modularity and reusability of our code.

For fetching and managing data, we are going to use

    55:42

React Query, fetching data from a backend API and managing server state efficiently. We will also learn to handle loading states, caching, and synchronization with React Query.

    With that said guys, watch this video till the end if you want to learn

    55:59

how to make an e-commerce application using ChatGPT. So guys, this is the website which we are going to create with the help of ChatGPT.

    So you can see all over here we have the navigation bar which includes home, about products,

    56:14

    cart. Okay.

    So suppose I want to add this product. Okay.

This is a lamp, and say I want to add this quantity, say three. And if I click it, you can see there's a notification coming up saying the item has been added to the cart.

    Now we can also go and

    56:32

check the cart. But before proceeding to checkout, you also need to log in.

    So suppose I have not created any account. So let us create an account.

    So say this is a mail id and say let us take some

    56:47

demo mail ID, and here is some password, and you can see: email is already taken. So let us give some random mail, suppose name@123, and say the

    57:03

password is 1234. Okay.

Now let's log in. So it's saying: email must be a valid email.

So I have also put in, you can say, checkpoints where proper validation of the input field is also

    57:19

given. So, name123@gmail.com.

Now I hope it is going to work. So: email or username has already been taken.

    So let us change our username. So

    57:37

username name123, and let us try to register. So you can see all the input validation has been done.

So let us give something bigger. Okay.

    Now you can see it has completely logged

    57:53

    in. Okay.

So you can save the password. Now let us try logging in again.

    58:21

So you can see I'm trying to log in all over here, and it's been logged in. So now you can see, if you click on our products, you are going to see the products page, like there's a chair, there

    58:38

is a glass table, a king-size bed, and all these are the product cards. So basically these images have been taken from a link.

Okay, I will mention the link and you can use it, and inside this you can also search for a given product. Suppose say I want a

    58:53

lamp, or say a chic chair. Okay, if I type this and try to search it, you can see this is coming up.

You can also select the company. Suppose I just look all over here.

    So the company

    59:10

    name is Luxora. Okay.

So you can type all over there in the checkbox. Okay.

So where the search button is coming up, you can see all these companies are there, and with the help of that you can search. You can also do the sorting from A to Z, Z to A, high to low.

    59:27

Okay. So this is all we are going to build using ReactJS, Tailwind CSS, and Redux, also using some state management, and with the help of Tailwind CSS we are going to create a fully responsive website.

    Now let's get started. So guys

    59:43

let us open ChatGPT and write a prompt. So guys, the first prompt that I want to create is: say, I am creating a website

    00:00

using ReactJS, and it's an e-commerce website, so, with the help of,

    00:16

or by using, Vite, okay, Tailwind CSS, and Redux. Now

    00:33

the next thing I have to tell ChatGPT is: can you help me outline the agenda of this project, including the main features and functionality. Okay.

    01:12

So this is the first thing, and you can see all over here. So it says: install and configure Vite, integrate Tailwind CSS, okay, set up Redux; then for the UI/UX design it's saying responsive

    01:27

layout, okay; then you have to do theme and styling, define a consistent theme; okay, now for authentication and user management, you have to do user registration and login; okay, for product management: product listing, product details, and product search. So we have

    01:42

seen these functionalities that we have implemented in our website. And also you can see there's a shopping cart and checkout option: you have to add to the cart, cart management, and the checkout process.

    For the order management, you can also track the history. And we are not going to target these two things.

    01:58

    Okay. These are some of the additional features.

    But till here we are going to complete it. Okay.

So, not to complicate things too much, this is just to give you a brief outline of how you can make a website using ChatGPT, Tailwind CSS, and other tools and dependencies.

    02:14

So we have got the brief idea regarding the main features and functionalities: responsive design, user authentication, product catalog, shopping cart, checkout process, order management, admin panel, wish list, reviews and ratings, notifications, and performance

    02:29

    optimization. Now let us write the next part.

So here I'm going to type my next prompt, which will be: can you help me explain

    02:45

    the folder structure or can you help me with the folder structure of this React

    03:02

    e-commerce app and also let me know the purpose of each folder.

    03:19

So let us type this prompt, and we are getting a brief idea of how our things will be laid out. So it says, as you can see, the folder structure:

    First we have to create a folder called e-commerce app. Okay.

    And then so guys

    03:37

    for this I will have an assets folder. Okay.

    Then we are going to create a components folder. Inside the components there are going to be various components.

    Okay. So let us start with it and let us first set up our development environment.

    So guys, I'm using VS code as our text

    03:54

editor, and in that, first make sure you have all the dependencies installed, like Node; especially with the help of Node we are going to install the package for creating a React app. So guys, you can also type all over here

    04:09

how to set up my development environment.

And if you type this command, you can see there are certain prerequisites it's

    going to tell like node and npm code editor visual code. So like you have to create a certain thing like this npm create v latest my e-commerce app then template and then you can go in this app.

    So since I have already created an e-commerce app. So what I'm going to do

    04:43

    uh inside this I'm going to create this and uh let us copy this all over here. So you can see assets I have already created and uh let us go to the terminal.

    Click on view and here's a

    04:58

    terminal and here let's type this. Click on yes.

    So all the necessary dependencies is going to install. So you can see all over here like what I have to choose it says pre-react lit cities solids or

    05:14

    other. So you you can just select all over here that I want to use a react version.

    Okay just click on this and let us keep the JavaScript part all over here and you

    05:32

    can see this has already been created. Now the second would be go to our e-commerce app.

    So this is your folder. Navigate to this and next type npm install

    05:48

    and finally click on okay it's installing the node modules. We are going to wait for some time.

    So guys as I have asked GPT all over here give me the folder structure for our components. So you can see all over here it has given cart item, card item

    06:05

    list, cart totals, checkout forms, complex pageination container and it has given error element, feature products. Then for filtering it has given filter.jx, objects, form checkbox, form input.

    Okay. Form range, form select,

    06:21

    header, hero, loading, navbar, nav, links, order list, pagenation container, product container, then product grid is there, product list is there, then section title and you can also learn about the process or what do you say the

    06:38

    purpose for each component. Okay.

    All over here. Now like product grid what it does.

    So you can ask GPT the same question with a prompt like what is the purpose of this uh given component. So you can see it is going to tell you all over here.

    So guys use it excessively

    06:54

    wherever you filled out and it has also given the code all over here. Okay.

    Now before that since I have told you we'll be using Tailwind CSS. So now let us type the command to set up our Tailwind CSS.

    So type uh

    07:10

    I want to set up the Tailwind CSS in my uh project. Uh help uh uh give me the

    07:27

    can type. Okay.

    So if you type this so you it is going to see that first you have to install the Tailwind CSS and its dependencies. So guys for the same

    07:43

    purpose uh just copy this all over here come to your folder. So now click it all over here that npm install the post CSS auto prefixer.

    So it is going to start the downloading part of it. Okay.

    08:19

    The next process is in the step two you have to initialize the Tailwind CSS. So copy this and go all over here.

    And let us initialize our Tailwind CSS. Pretty fine.

    What is the next step guys?

    08:34

    In the next process, we have to configure our Tailwind CSS. So you can see all over here there is going to be a file called Tailwind config.js and you can configure it

    08:53

    all over here. So let us go in our file and you can see tail bitconfig.js would be there.

    So okay so now we have done this. Now let

    09:08

    us see what is the third thing in the step four. It is saying add the tailwind directive to your CSS file.

    Okay. So there's going to be a file.

    Okay. First you have to create a CSS file for your Tailwind styles.

    Okay. And in that you have to add these directives.

    Okay. So

    09:24

    guys, this is our index.t CSS file and we can also add the directives all over here. Just click on everything and delete it and just add the required directives.

    Okay, so I have added this part. Now let us move to the next part.

    09:39

    So in your app dot CSS, you have to import it. Okay, so whenever you are going to open our app dot js file, so here we have app.t jsx.

    So you can see we are importing this. Okay.

    All over here. And uh in our main.js file, you

    09:56

    can see our index.t CSS is imported. So basically this file is already been imported all over here.

    Okay. Which is basically going to apply the tailwind CSS directives that we are going to use while building our components.

    Now let us build our components one by one. Okay.

    So you have seen all over here

    10:13

    that uh our folder structure was something like this. Okay.

    So we have cart items, cart item list, cart totals, checkout forms, complex pageination container, error element, featured products, filter, form checkbox and form input, form range header. So let me

    10:28

    create all these folder for you. So we'll go all over here.

    Now click on the new folder and say components. Pretty fine.

    Now here we have to create

    10:44

    all the files. So guys, you can see all over here I have created all the components in the meantime.

    So I have card items, card item list, card totals, checkout forms, complex pageination container, error element, featured products, filters, form checkbox, form

    11:00

    input and many more. So the same thing with CHP has given me, I have created all these things.

    Okay, these are the basic things and uh while also creating a website, I would also recommend you to understand like how these components which are usable throughout the process

    11:17

    have been used. Okay, now let us ask Chad GPD to populate this.

    Okay, now this thing has been done. Now let us ask for pages like what pages I want to make in our website.

    So if I navigate to our website, we can see we have a homepage,

    11:33

    we have about page, we have a products page, we have a card, checkout and orders. So let us ask GPD the same.

    I want to create pages for about home,

    11:51

    card, check out error. Okay.

    Orders.

    12:07

    Then we have the home layout login register single product page. Okay.

    12:27

    So like these things you have to identify or the best thing you can do it you can take the screenshot of this. Okay.

    So take uh like all everything you can just take a picture okay snap it up and send it to chat GBD it is also going to recognize it and let me show you this

    12:44

    okay so say I'm going to snip it up so say I want to say I want to create these pages and you can just take an image and just paste it up. So you can see the image is

    12:59

    going to get pasted and you can type a prompt something like this that I want to create pages like this which includes home card checkout orders home layout login register single product page and you can also ask GPT to give a tailwind CSS for these things. So GPT is going to

    13:16

    answer you for the same but as a developer you can take this help but don't rely over it too much. You have to also do little bit of modification by yourself.

    So this thing I have written and let us click it. So so first it is

    13:32

    telling you to install the react router DOM. Okay.

    So now you have to install this. So copy this.

    Okay. So you can see in your package.json file.

    So this will package.json

    13:49

    all over here. And you can see all over here that we have dependencies all installed which is react router doc.

    So you can see all over here it's version is 6.24.1. Now the most important part like while using this project.

    So what dependencies

    14:05

    I have used all over here. So I'm going to give a brief idea regarding this.

    So guys for building this project you are going to require these dependencies like you need to have redux toolkit you need to have tanks tank you need to have react query dev tools okay you need to

    14:22

    have an aios you need to have a djs react react dom react icons we'll be using you need to use react redux for state management react router react toify then there are certain deep dependencies as you can see tailwind's

    14:37

    typography we have to use because what kind of text we are going to put it up on our web page. So this is going to handle it.

    Then we are going to use types types react dom then v plug-in okay this we have it daisy ui okay eslint then we have eslint plug-in react

    14:55

    hooks then we are going to use post css tailwind css and v so what we can do next we can copy all these things okay so since I've already copied now let us go to our gpt and say uh give me the

    15:11

    command and to install these dependencies.

    15:29

    So you can see all over here it has given me all these commands. Just going to go our uh text editor.

    So here's our terminal and just type this and you can see all over here it is

    15:44

    going to start the process of installing all these packages. Similarly you have to install all the D dependencies.

    Now the best thing is that uh what are the dependencies required I will share you in the given video and you can

    16:01

    install it and also for the div dependencies similarly you have to do it just copy it and go all over here and paste it and you can see all the div dependencies are going to be added in your package

    16:16

    dojson file okay since we have used it Now these were the given components and these were some of the pages. Okay.

    Now let us create the pages that we have discussed before.

    16:56

    and also guys uh you can see assets all over here. You have to transfer this all over here.

    Okay. Now I'm going to do this but before that let us create the pages section.

    So create a new folder called pages all over here. And

    17:12

    inside the pages you can see all over here that for pages we have told all about here that they're going to be home layout there going to be error.jsx they're going to be a checkout and about do.jsx JSX. Okay.

    All over here and in

    17:28

    similarly just create this. Now for each of the given page you can check all over here that it is giving an idea about it that you can have to ensure something like this and also you can ask GPT to explain you what we are going to use it.

    Okay. So see here all the headers you're

    17:45

    going to add the links like about card, check out, login, register. So which is basically there on our navigation panel and similarly for applying the CSS just ask GPT to make it something like this provide a proper spacing and also you

    18:02

    can take a screenshot of this and send it to GPT. So it is going to analyze it and give you the required code for this.

    Okay. But the most important part is you need to identify what folder structure that you have to use it.

    And don't worry guys, I will share the required doc file

    18:19

    in the description where you can check what are the components that we have made and similarly you can take the help of GPT and build this project. So guys I have created all the required pages and you can see all over here that we have about cart checkout error for handling

    18:34

    the errors. Okay, home layout index.js landing login orders product management error for handling the errors throughout like 404 page.

    Then we have the landing page. We have the login, orders, products, register.

    Then we have the single product.jsx. Now these are the

    18:51

    required pages that we need to build in order to replicate the website. Okay.

    So if you have any problem like what this given page does, just ask the GPT and it's going to give you the required answer. Now this was building our project structure for the second part.

    19:07

    Now let us move to the third part. Now guys, the next thing which I want to do all over here is that I want to create a utility folder which is kind of reusable throughout the application and it has these functionalities a custom fetch.

    Okay, which means a preconfigured Axios

    19:23

    Axios basically which is going to have an Axio instance for making standardized API calls and also to the backend server. Then we have the format price functionality in which we are going to use the utility function to format the numerical prices that we are using while

    19:40

    buying the given product and also in a USD currency format. Okay.

    So it's a basically a dollar format. Okay.

    And then generate amount options then also we are need to create a utility function to generate the list of quantity options for a selected dropdown. Okay.

    So these

    19:57

    were all the features that I've shown you all over here. So when you go from this suppose say on this given product and if you select this say add to the back say I want seven of them and when you go to your given cart so you can see

    20:12

    all over here the format pricing and all the utility fun repeat and all the utility functions used all over here are given. So you can see GPT has given.

    So it has given custom fetch, format price, generate amount option. JS you can

    20:30

    create three of them but I will not complicate this. Instead of this since the application is pretty simple I will put all of these things under one folder.

    So guys as you can see all over here I have created a folder called utils folder and with a name called index.jsx

    20:46

    and inside this I'm going to add all the three functionalities which I have told you now. And this was the project structure that we need to build in order to make an application like that.

    Now what you can do guys now you can see the functionality of this all over here.

    21:03

    Okay. Now since you'll not get an access to this website what you can also do you can create a demo website like this or you can take a picture of this and send to GPT.

    It is going to give you the required idea like how you can proceed to build the project. So it is very very

    21:20

    helpful but at the end of the day you need to know about react little bit so that you can modify the application based on the given instances. So guys inside our my e-commerce app you can see our project structure is set up.

    So inside the src we have the assets. So I

    21:36

    have pasted these images. Okay.

    So if you click on this so it is going to show you these kind of images. So I'll show you the link for the assets.

    You can use this to create this website. For the same you have components and all the components like cart items, card items totals, checkout forms, form checkbox,

    21:54

    hero, loading, nav links is all being mentioned. So this is for all the components that we'll be needing to build this website.

    For the required pages, we have about card, checkout, error, home layout, landing, login, orders, product, registers, and single product. Okay, so these are the required

    22:10

    pages. And one, we have the utility folder all over here that we have created called index.js.

    JS6 for handling these custom functionalities. Now this was the basic idea regarding the project structure.

    Now what you can do guys since you have got an idea like

    22:26

    what components we'll be using. So ask GPT okay now build each of the components by using this prompt.

    Uh say now help me uh I want to make the

    22:43

    give me the code or you can ask something like this. Uh, give me the code for card component and with the required

    23:02

    Tailwind CSS configuration, the Tailwind CSS snippets or you can say with the Tailwind CSS code. Now if you type this all over here so it is going to give you a demo idea.

    Okay

    23:18

    suppose you have this card JSX and card item.js. So you can see all over here it has started using this CSS which is basically a tailwind CSS and I know it on the first time you're not going to get exactly the same you wanted.

    So on

    23:34

    it what you can do guys that you can take a picture like what the output is coming and send it to GPT and ask it to you have to ask for each of the components. Similarly ask for the pages.

    Okay. So ask like something like this.

    23:51

    So based on the project structure based on the pages. So guys as you can see all over here.

    So guys, it has started giving the codes for the respective pages as you can see

    24:06

    all over here and with the respective Tailwind CSS you can definitely modify it based on your choices and also what you feel like is more responsive and userfriendly for getting an in-depth idea regarding Tailwind CSS you can navigate to its official document

    24:24

    and it will be very very helpful that you can also take a help and ask GPD the same thing. So after you have populated these pages with the respective codes and components okay that we have shared all over here.

    Next thing what you can do all over here that just type all over

    24:43

    here that this command or you can ask GPT how to run my application give me the command. You can type this and it is going to give you the command but I know the command that is npm

    24:59

    rundev or this is the given command that is will be very helpful for running your application. So at each stage suppose you are building one component.

    So keep running this and keep seeing like what changes you are applying on a real time.

    25:15

    So building an application using react or any front-end application is a hit and trial. So based on the modifications that you require.

    So you have to consistently interact with GPT and make an application out of it. But this was an overview of our given thing.

    So just type this all over here. So just type

    25:32

    this and your application is going to start on a given port. So this is your port 5174.

    So if you navigate all over here, so your application will be open. So let me show you like suppose the port which I'm using all over here.

    So it is 5173. Okay.

    So this will be your

    25:48

application, and in this way you can make an application with the help of ChatGPT. So I've given you the basic idea, and I will also share the given documents for the components and pages so that you have a brief idea and you can ask GPT to give the required

    26:04

    tailwind CSS for the same. I will also share the assets.

You can use them, or you can download assets from various other websites, like Freepik, where you can get images for the given products, say a lamp, a coffee table, and so on, which are

    26:21

easily available, and you can start designing all these things. So guys, the best thing about GPT-4 is that you can also share images.

    So with the help of this you can definitely build an application. So let's first understand what is prompt

    26:38

tuning. So prompt tuning is a method that enhances the adaptability of large language models for new tasks by fine-tuning a minimal set of prompt parameters.

    This technique involves appending a tailored prompt before the input text directing the LLM to generate

    26:53

outputs aligned with specific objectives. Its efficiency and versatility have made it a focal point in natural language processing advancements.

    So let's understand prompt tuning. So prompt tuning is the practice of customizing LLMs for new applications by adjusting a

    27:11

    limited subset of parameters known as prompts. So these prompts placed before the input navigate the LLM toward producing the intended output.
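    The idea described above can be sketched in a few lines of Python. This is a toy illustration, not a real LLM: the embedding table, dimensions, and tokens below are invented stand-ins. The point is only that the trainable "soft prompt" vectors are prepended to frozen input embeddings, making them the only parameters updated during tuning.

```python
# Toy sketch of prompt tuning: a small block of trainable "soft prompt"
# vectors is prepended to the model's frozen input embeddings. Only the
# soft prompt would be updated during tuning; everything else is frozen.
# The embedding table, sizes, and tokens here are invented stand-ins.

import random

EMBED_DIM = 4   # toy embedding width
PROMPT_LEN = 3  # number of trainable prompt vectors

# Stand-in for the pretrained model's (frozen) token embeddings.
frozen_embeddings = {
    "solar": [0.1, 0.2, 0.3, 0.4],
    "wind":  [0.4, 0.3, 0.2, 0.1],
}

# The ONLY trainable parameters: PROMPT_LEN x EMBED_DIM vectors.
random.seed(0)
soft_prompt = [[random.uniform(-0.1, 0.1) for _ in range(EMBED_DIM)]
               for _ in range(PROMPT_LEN)]

def build_model_input(tokens):
    """Prepend the soft prompt to the frozen embeddings of the input."""
    return soft_prompt + [frozen_embeddings[t] for t in tokens]

seq = build_model_input(["solar", "wind"])
```

    During tuning, a gradient step would adjust only `soft_prompt`, which is why the technique is so much cheaper than full fine-tuning.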

    And now we talk about the advantages of prompt tuning. The first one is efficiency.

    So unlike comprehensive

    27:28

    model fine-tuning, prompt tuning modifies only a select few parameters, enabling quicker task-specific model adjustment. And then comes versatility.

    So this approach is applicable across a spectrum of tasks, from text generation and analysis in natural language processing to image

    27:44

    recognition and automated coding, showcasing its adaptability to different domains. And then comes interpretability.

    The modifiable prompt parameters allow for an examination of how the model's output generation is influenced, offering insights into the model's processing. So this was about

    28:01

    the advantages of prompt tuning. Now we'll move to the key challenges in prompt tuning, and we will write some prompts and see how we can prompt-tune the ChatGPT model to generate the outputs we want.

    So let's move to

    28:17

    ChatGPT, and here first we will draft or craft effective prompts. Crafting prompts that effectively guide the model without being overly complex is challenging.

    So to address this, we will write some prompts and see. So we

    28:34

    will take a fresh example for crafting precise prompts. So let's consider a scenario and consider you are part of a financial analysis firm specializing in renewable energy sectors like solar and wind power.

    You aim to leverage a language model for extracting market

    28:50

    trends, investment opportunities and regulatory impacts specific to these sectors. However, you wish to exclude general energy market data or information about fossil fuels and non-renewable sources as well as omit any direct mentions of rival firms or

    29:07

    their investment products. So the challenge you will face here is this: your objective is to direct the AI model to furnish targeted insights on renewable energy sectors while filtering out non-relevant information and avoiding any specific mentions of competitors and

    29:24

    their offerings. So for example, we can write a prompt like: give me all details on renewable energy sectors including solar and wind power, but skip all info on other energy types and ignore data about companies, and you

    29:40

    could name the companies, like EcoPower Ventures and their SolarBoost investment fund. So this is the ineffective prompt.

    So this prompt is ineffective because it's overly detailed and instructs the model on what to exclude in a way that could be confusing

    29:57

    potentially leading to an over-focus on the exclusions rather than the desired content. So if we talk about the optimized prompt, what you can write is: provide an analysis of current trends, investment opportunities, and the impact of regulations in the renewable energy

    30:12

    sectors, specifically focusing on solar and wind power; exclude general energy market trends and specific investment products.

    So this refined prompt clearly directs the model toward the needed information, that is, market trends,

    30:27

    opportunities, and regulatory impacts within solar and wind energy, without cluttering the request with too many exclusions. It subtly implies the need for exclusivity through the phrase "specifically focusing", which naturally guides the AI to omit unrelated sectors

    30:45

    and competitor products, facilitating a more focused and relevant output. So now let's try this example, and this is the ineffective prompt.

    So we'll copy this and paste it into ChatGPT and see

    31:01

    what output it will provide to us.

    31:20

    So you could see that it has provided us the output. But this prompt is ineffective because it's overly detailed and instructs the model on what to exclude in a way that could be confusing.

    Now we will write the correct prompt as that is the optimized prompt

    31:38

    what I've told you that is provide an analysis of current trends investment and opportunities and the impact of regulations in the renewable energy sectors specifically focusing on solar and wind power and exclude general energy market trends and specific investment products. So this is the

    31:55

    optimized prompt, and it will provide a good result and a good output. So you can see that it has started with the current trends.
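    One way to make the difference between the two prompts concrete is a tiny "prompt lint" that counts exclusion phrases, since the point here is that piling up exclusions can confuse the model. The heuristic and word list below are illustrative inventions, not part of any real tool.

```python
# A toy "prompt lint" illustrating why the first prompt is weaker: it
# piles up exclusion phrases, which can pull the model's focus onto what
# to leave out rather than what to produce. Heuristic is illustrative only.

INEFFECTIVE = ("Give me all details on renewable energy sectors including "
               "solar and wind power but skip all info on other energy types "
               "and ignore data about companies like EcoPower Ventures and "
               "their SolarBoost investment fund.")

OPTIMIZED = ("Provide an analysis of current trends, investment opportunities, "
             "and the impact of regulations in the renewable energy sectors, "
             "specifically focusing on solar and wind power. Exclude general "
             "energy market trends and specific investment products.")

EXCLUSION_WORDS = ("skip", "ignore", "don't", "exclude", "omit", "avoid")

def exclusion_count(prompt: str) -> int:
    """Count how many exclusion phrases a prompt contains."""
    text = prompt.lower()
    return sum(text.count(w) for w in EXCLUSION_WORDS)
```

    The ineffective prompt scores higher on exclusions while saying less about the desired content, which is exactly the imbalance the optimized prompt fixes.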

    32:43

    You can see the output, and this is the desired output we need, which ChatGPT has produced with the optimized prompt. Now we move to the next challenge, that is, avoiding overfitting.

    So there's a risk of the model becoming too specialized to the training prompts

    32:59

    reducing its ability to generalize. The strategy here is broad application: to prevent over-specialization, prompts should be designed to be broad enough to encompass the range of related queries, enhancing the model's versatility.

    So for this also we will

    33:16

    draft an optimized and unoptimized prompt. So let's think of a scenario that you are developing an educational platform that uses a language model to generate study materials for high school biology covering topics from cell biology to ecology.

    Initially you

    33:33

    fine-tune the model with prompts focused on very specific areas like photosynthesis in plants. So the initial prompt we can write is: explain the process of photosynthesis in plants, focusing on chlorophyll's role and light absorption.

    So while the model excels at generating detailed content on

    33:49

    photosynthesis, it struggles when asked about broader or slightly related topics such as cellular respiration or plant adaptations to different environments. So the issue we encounter here is when students ask, how do plants adapt to low-light conditions?

    So the model will

    34:05

    fail to provide comprehensive answers because it's been too narrowly trained on photosynthesis specifics. So the solution for this overfitting challenge is to enhance the model's ability to generalize across the broader subject of biology.

    You

    34:21

    should utilize more inclusive prompts that cover a range of topics within the field. So we have the revised prompt here, that is: provide an overview of plant biology, including key processes like photosynthesis, cellular respiration, and adaptations to environmental factors.

    So this broader

    34:37

    approach encourages the model to generate content that's relevant across various biology topics, improving its utility for educational purposes. So this is the prompt, and we'll ask ChatGPT: provide an overview of plant biology, including key processes like

    34:52

    photosynthesis, cellular respiration, and adaptations to environmental conditions. So this will improve its utility for educational purposes.

    35:30

    So let's move to the next concern, which is scalability. You can see that it has provided the desired output on photosynthesis and the environmental factors for plant biology,

    35:45

    and now we talk about the scalability concern. As the training data volume grows, maintaining the efficiency of prompt tuning becomes more challenging, necessitating innovative solutions to streamline the adaptation process.

    So now we'll talk about its prompt also. So

    36:02

    scalability, that is, implementing AI for diverse customer service inquiries. So let's consider a scenario.

    Your company aims to deploy an AI chatbot to handle customer service inquiries across different departments from technical support to billing and account

    36:17

    management. Initially the chatbot is trained with specific prompts for each category, performing well in small-scale tests.

    The challenge you will face with this scenario is that as you scale the chatbot to handle thousands of inquiries daily across a wide range of topics, you find

    36:34

    that the initial approach of using highly specific prompts for each category is unsustainable. The chatbot struggles to adapt to the vast and varied nature of customer queries, and we'll face the scaling issue here too: the specific prompts used during initial training don't cover the breadth

    36:51

    of potential customer inquiries, leading to inadequate responses in unanticipated scenarios. The strategy we can apply for scalability: instead of relying on narrow, specific prompts for each inquiry type, you transition to using broader, more adaptable prompts that can guide

    37:07

    the AI in understanding and categorizing a wide range of customer questions. So we can have an adapted prompt example; we can write here: identify the category of the customer's inquiry, which could be

    37:23

    technical, billing, or account management, and provide relevant assistance based on the identified category. This strategy allows the chatbot to more effectively process a variety of inquiries by understanding the context and categorizing questions before generating
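    A minimal sketch of this categorize-then-respond strategy, with a keyword lookup standing in for the LLM's classification step. The categories, keywords, and fallback below are assumptions for illustration, not a real routing system:

```python
# Scalable routing: one classification step assigns an inquiry to a
# category, instead of one narrow prompt per topic. The keyword matcher
# is a stand-in for the LLM's categorization; lists are illustrative.

CATEGORIES = {
    "technical": ["error", "crash", "not working", "bug"],
    "billing": ["invoice", "charge", "refund", "payment"],
    "account management": ["password", "login", "profile", "account"],
}

ROUTING_PROMPT = ("Identify the category of the customer's inquiry "
                  "(technical, billing, account management) and provide "
                  "relevant assistance based on the identified category.")

def categorize(inquiry: str) -> str:
    """Return the first category whose keywords match the inquiry."""
    text = inquiry.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "general"  # fall-through for unanticipated queries
```

    The same broad prompt then handles every department, which is what makes the approach scale past a handful of hand-written per-topic prompts.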

    37:40

    responses, making it more scalable and versatile in handling customer service tasks. Now, to address prompt tuning challenges: the first is simplifying complex prompts, that is, begin by clearly identifying the core information or action desired from the LLM.

    Breaking

    37:56

    down multifaceted prompts into simpler, more focused requests; employ straightforward language and eliminate non-essential elements to clarify the prompt. And then we have utilizing soft prompts.

    So soft prompts, which are non-textual signals trained from examples,

    38:12

    guide the LLM without explicitly worded instructions. These are particularly useful for tasks with limited training data or where interpretability is less critical.

    And then we have innovative optimization techniques. So the development of parameter-efficient fine-tuning techniques such as p-tuning

    38:29

    optimizes the search for effective prompts in a continuous space, making it feasible to tailor LLMs to specific tasks without extensive retraining. So this was all about prompt tuning.

    Hello everyone and welcome to the

    38:46

    tutorial on the prompt library for all use cases at Simplilearn. The prompt library is a comprehensive toolkit for mastering myriad use cases with ease.

    Whether you are delving into programming, honing creative writing skills, or exploring data analysis, this library offers a

    39:02

    versatile array of prompts tailored to your needs. Now, before you move on and learn more about it, I request you guys that do not forget to hit the subscribe button and click the bell icon.

    Now, here's the agenda for our today's session. So, guys, we are going to start with first understanding the prompt structure.

    Moving ahead, we are going to

    39:18

    understand testing and iterating. Then we are going to explore prompt examples, and at the end we are going to conclude our session with utilizing prompt libraries and resources.

    So guys in today's video we will be exploring the prompt structure for various use

    39:34

    cases. Now first let us try to understand the prompt structure.

    So guys, I'll break down the prompt structure. Here, first we have the action verbs.

    So guys, think of action verbs like a boss telling ChatGPT what to do. It's like

    39:51

    giving ChatGPT a job. So for example, if I say write, you are telling ChatGPT to put words on the page.

    For example, if I say write a story, I'm telling ChatGPT, hey, I want you to make up a story for me. So it is something like

    40:08

    this. Now let us ask ChatGPT.

    Hey, so write is your action verb here. So this is the first part of the prompt structure that I would like you to apply.

    Now the second one you could give a theme or a

    40:23

    topic. Now if you just say write a story, ChatGPT is going to give any random story.

    We don't want that. So the next thing that we cover is the topic or theme.

    So what theme or topic are you asking about? This is the part

    40:39

    where you are giving ChatGPT a subject to talk about. Imagine you're telling a friend, let's talk about cats.

    So cats is the given topic. So if I say write about your favorite food, I am telling ChatGPT, tell me about your favorite

    40:54

    food. So you have to always include a topic or theme along with your action verb.

    So here I can include something like this: write a story

    41:10

    about food. So you can see here that ChatGPT has given two responses.

    This is response one and this is response two. Now the third thing that comes up here is constraints or limitations.

    41:28

    Think of constraints as rules or boundaries for ChatGPT to follow. It's like saying, you can talk about cats, but only in three sentences.

    So if I say write a poem in 20 words, it's like I'm telling ChatGPT, make a short poem using

    41:45

    only 20 words. So this is one of the things that you always have to keep in consideration regarding the task you want to give.

    So always include constraints or limitations. The fourth one is background or context information.

    So,

    42:02

    so this is also one of the most important parameters. What it means is that this part sets the scene for ChatGPT, like giving it a background story.

    Imagine you are telling someone about a movie before they watch it. So if I say imagine you are on a spaceship,

    42:21

    I'm telling ChatGPT, pretend you are flying through space. So it is also very important to give some idea regarding your background or context information.

    Now the fifth one is conflict or

    42:36

    challenge. Guys, this adds some spice to the prompt.

    It's like a puzzle or a problem for ChatGPT to solve. It's like saying, talk about cats, but tell me why some people don't like them.

    So if I say, ChatGPT, explain why reading is important

    42:53

    but you can't use the word book, I am challenging ChatGPT.

    So this is the conflict or challenge you have to give to ChatGPT. Now let us take one example of this.

    So for example, if I say the action verb

    43:09

    as write, we'll highlight this in red, and the topic or theme could be your favorite vacation. For the background or context we could say you are on a beach with your friends, and for the conflict or challenge we can

    43:27

    add something like: in just 50 words. So guys, this is the structure to follow while giving a prompt to ChatGPT.

    So in this way, putting it all together, you can combine all these components to form a sentence, and

    43:42

    this prompt is going to be very effective at solving the problem of generic responses. Now with this simple example you can see how different components come together to create engaging prompts for ChatGPT to work with.
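    Putting the components together can be sketched as a small builder function. The field names, ordering, and phrasing below are one reasonable arrangement, not a standard:

```python
# Assemble a prompt from the components described above: background or
# context, action verb plus topic, an optional conflict/challenge, and
# a constraint. Ordering and naming are illustrative choices.

def build_prompt(action_verb, topic, context=None, constraint=None, challenge=None):
    parts = []
    if context:
        parts.append(f"Imagine {context}.")       # background / context
    core = f"{action_verb.capitalize()} {topic}"  # action verb + topic
    if challenge:
        core += f", but {challenge}"              # conflict / challenge
    parts.append(core + ".")
    if constraint:                                # constraint / limitation
        parts.append(constraint[0].upper() + constraint[1:] + ".")
    return " ".join(parts)

prompt = build_prompt(
    action_verb="write",
    topic="a story about your favorite vacation",
    context="you are on a beach with your friends",
    constraint="in just 50 words",
)
```

    The same builder covers the text-classification case discussed next, e.g. `build_prompt("classify", "the following text as positive, negative, or neutral sentiment")`.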

    So guys, whenever you

    43:57

    are giving a prompt, I would request you to always follow this structure. So it's going to create a map for you to get a more precise answer.

    Now let's elaborate on the prompt library with examples to make it more understandable. So guys, let's take

    44:13

    another example of text classification. So for text classification, we'll take the action verb as classify and our text type would be product review.

    An example could be: classify the following text as negative, positive, or neutral sentiment,

    44:32

    and after that you could give the review text, like: the product exceeded my expectations. If you give it something like this, it would say this is a positive sentiment.

    So making your prompts in this manner with a proper

    44:48

    structure, you are going to get a very specific response which fits what you need. So always remember this structure whenever you are framing any prompt.

    Now let's move to the second part, that is, testing and iterating. Guys, testing and iterating are essential steps in

    45:05

    refining prompts and ensuring optimum performance from ChatGPT. Let us break down this process.

    The first step is prompt validation. Before using a prompt, it's crucial to test it to ensure that it generates the desired response accurately.

    Then you evaluate the

    45:21

    output. You're going to generate responses using the prompt and evaluate the quality, relevance, and coherence of the output.

    Third, check for errors. Look out for any errors, inconsistencies, or unexpected behavior in the generated responses.

    Compare

    45:36

    against expectations. Compare the generated responses against your expectation or any requirements to verify that they meet your desired criteria.

    The fifth one is solicit feedback. Seek feedback from peers, colleagues or domain experts to validate the effectiveness of the prompt.

    For

    45:53

    example, analyzing the results: analyze the results of testing to identify areas for improvement or for refining the prompt.

    Next is modifying the prompt. Based on the analysis, make adjustments to the prompt structure.

    Then, fine-tune

    46:09

    the parameters. Experiment with different variations of the prompt, such as adjusting constraints, changing topics, or refining the wording.

    Next is retesting. Test the modified prompt

    46:26

    again to assess whether the changes have resulted in improvements in the quality of the generated responses or not. And the final step is iterate as needed.

    Iterate the testing and modification process as needed until you achieve the desired outcomes and generate high

    46:43

    quality responses consistently. So this structure you have to always follow when you are iterating.

    So I'll give you an example. Suppose we have given an initial prompt: write a product description for a new smartphone, and I would say

    46:59

    include details about features, specifications, and benefits, and I would add a constraint here: keep the response in 100 words. So this is the initial prompt which you have given.

    Now for testing,

    47:17

    the next step is testing. Generate product descriptions using the initial prompt.

    Evaluate the quality and relevance of the generated descriptions. Check for errors, inconsistencies, or missing information.

    Compare the descriptions against the expectations and requirements. So this process comes under

    47:34

    testing. Okay.

    So change your prompt a little bit: give a specific description regarding a certain product and ask for that. The next step would be to evaluate the quality and the relevance of what you are getting as a response, and check

    47:51

    for errors; for example, go to Google and see if the same information comes up, check what the customer expectations are regarding that product, and see whether the overall technical structure is maintained. This gives the first phase of testing. Next comes the analysis: some

    48:08

    descriptions lack detail and fail to highlight the key features. Okay.

    So in this scenario the descriptions vary in length and structure, leading to inconsistencies. Certain descriptions focus more on technical specifications than on user benefits.

    So overall the

    48:25

    quality and the coherence of the descriptions need improvement. So you have to take all these parameters and reframe your prompts.

    Okay. Then next comes iteration.

    You have to modify this prompt to give clearer instructions and

    48:41

    emphasize the user benefits: write a captivating product description for a new smartphone.

    Okay. Then move to retesting.

    Generate product descriptions using the modified prompt. And for the outcome, you would say that the revised prompt should yield more compelling and

    48:58

    informative product descriptions. So this is how you have to iterate continuously to get the proper response that you need.
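    The test-analyze-modify-retest loop above can be sketched like this. `generate` is a stub standing in for a real model call (its word counts are invented so the loop has something to react to), and the 100-word limit comes from the example's constraint:

```python
# Sketch of the test -> analyze -> modify -> retest loop. generate() is
# a stub for a real model call; its behavior is invented so the loop has
# a failure to react to. The 100-word limit is the example's constraint.

def generate(prompt: str) -> str:
    """Stub model: output length depends on how the prompt is phrased."""
    words = 95 if "captivating" in prompt else 140
    return " ".join(["word"] * words)

def meets_constraint(response: str, max_words: int = 100) -> bool:
    return len(response.split()) <= max_words

prompt = ("Write a product description for a new smartphone. Include details "
          "about features, specifications and benefits. Keep the response "
          "in 100 words.")

for attempt in range(3):  # iterate as needed
    response = generate(prompt)
    if meets_constraint(response):
        break
    # analysis: output too long -> modify the prompt and retest
    prompt = "Write a captivating, concise " + prompt[len("Write a "):]
```

    The first attempt fails the word-count check, the prompt is revised, and the retest passes, mirroring the iteration cycle described above.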

    Okay guys, now let's move to the final part of this video that is utilizing the prompt libraries. Guys, utilizing prompt

    49:14

    libraries and resources is essential for streamlining the prompt-writing process and accessing a wide range of pre-designed prompts for various use cases. So you're going to get a library of predefined prompts.

    Okay. So there's one website which I want to show you.

    This is

    49:30

    called Anthropic. So Anthropic has recently released a prompt library.

    So guys, they have given a wide range of prompts in this library. If you just click on this, you're going to see what the effective prompts are in all these domains.

    So give it a shot. Try to

    49:46

    see what resources you're going to get here. It is definitely going to fine-tune your responses.

    Now let's move to the process. So when we are talking about the prompt libraries, the first step is explore the existing libraries.

    So you can see that I have given a reference to

    50:02

    a prompt library here, which is released by Anthropic's team for Claude and also workable for ChatGPT. Next, you have to understand the available prompts.

    Familiarize yourself with the prompts available in this library, including their structures, topics, and

    50:18

    constraints. You have to also analyze how prompts are categorized and organized within the library to quickly locate relevant prompts for your needs.

    Third is adapting prompts to your needs. Customize existing prompts to suit your specific objectives, audience, and use

    50:34

    cases. You can modify prompts by adjusting the action verbs, topics, constraints, or background information to align with your requirements.

    Create your own prompts: combine different components such as action verbs, topics, and constraints to craft

    50:49

    prompts that address specific tasks or challenges. The next thing you have to do is sharing and collaborating.

    Share your prompts with the community to contribute to the collective pool of resources. This is one way of learning that I really want you to follow.

    Now you have to keep

    51:06

    experimenting and iterating at the same time. And finally, you have to document and organize all your prompts for the same.

    So the best thing you can do is look at all the existing prompt libraries; I'll show you one more. So, a prompt

    51:22

    library for ChatGPT on GitHub for all use cases. So you can explore various

    51:38

    repositories on GitHub to see what kinds of prompts are available; this repo specifically focuses on academic writing. So just visit this repository and you can see they have given a lot of things, like for

    51:54

    brainstorming; you can see the action verbs here. Try this prompt and see what response you get.

    Then for article sections, see what's there. So you're going to get a lot of material, and the more you experiment and

    52:11

    the more you are exploring, the more ideas you are going to get regarding this. So my advice would be: just explore as many libraries as you can, and depending upon your use cases, make an organized prompt structure.

    So following

    52:27

    this format which I've told you: the action verb, the topic, the background information. Then, what are the constraints you have to give?

    Okay, and whether any particular theme is there. You have to include all those things and use the existing prompt libraries also.

    So you

    52:43

    can refine your prompt so you always get a good response. It's my personal experience that you have to keep fine-tuning, testing, iterating, and analyzing so that your result comes out well.

    It's Monday morning and your inbox is flooded with customer emails. Your sales

    53:01

    team is juggling follow-ups and your social media calendar is overflowing. Your support calls keep piling up.

    You know that automation saves time. But writing code or hiring developers feels overwhelming.

    Now picture this instead.

    53:16

    You open a platform where automation is as simple as connecting apps, dragging and dropping workflows or customizing templates. Without writing a single line of code, you set up email responses, schedule meetings, create social media

    53:32

    posts, and even build voice assistants that talk to customers in real time. This is where no-code AI tools step in.

    Turning complex processes into easy, efficient solutions anyone can use. From customer support to sales, content

    53:47

    creation, and call centers, these tools are helping teams work smarter, faster, and more efficiently than ever before. In this video, we'll explore the best no-code AI agents by workflow.

    Tools that empower you to automate tasks

    54:04

    effortlessly, boost productivity, and focus on what really matters. Before we dive in, here's a quick question for you.

    What's that one task that eats up most of your time every day? Is it responding to endless emails, following

    54:20

    up with leads or clients, managing social media and content, or is it handling customer support? Now, let's bring this to life with a story.

    Meet Ravi. Every morning, he sits at his desk staring at a screen full of emails and

    54:36

    to-do lists. The workload feels overwhelming, and the lack of automation only makes things harder.

    Important opportunities slip through his fingers because he simply doesn't have the time to follow up. He finds himself

    54:52

    constantly reaching out to experts just to keep things running. Slowly the stress builds up, affecting both his productivity and his peace of mind.

    This is a story many of us know all too well. But what if there was a way out of

    55:08

    the cycle? What if Ravi didn't have to feel so overwhelmed every single day? Ravi discovered that by strategically automating just a few of his routine tasks, he could finally begin to simplify his day-to-day and significantly reduce

    55:26

    the pressure on himself, freeing his time and allowing him to shift focus from tedious, repetitive work to the bigger picture.

    He could finally dedicate his energy to growing his business and making truly impactful, smarter decisions.

    55:43

    This is the core idea behind today's concept. As a powerful quote at the bottom says, automation isn't just about doing more.

    It's about making room for what truly matters. It's about creating space for creativity, for strategy, and

    56:00

    for the things that you can only do with the right tools. This kind of transformation is not just a dream.

    It's entirely possible. With automation in place, Ravi's entire workflow became more effective and purposeful.

    He went

    56:16

    from feeling overwhelmed to feeling empowered. He felt an immediate acceleration as projects that once took days could now be completed in a fraction of the time.

    The newfound speed gave him the confidence to move

    56:32

    forward without second-guessing every step. It also led to a new level of efficiency.

    He could track progress and spot opportunities he would have otherwise missed, thanks to the automation insights he was now receiving.

    Best of all this

    56:49

    transformation didn't just impact him. It brought the team together.

    They began to collaborate better, trusting the new process and focusing on shared goals instead of getting bogged down in manual work.

    57:06

    This is what it means to be truly empowered by technology. It's not about working harder; it's about working smarter.

    Every great journey starts with a single step. And this is where you begin.

    Look around and find the one task that takes up most of your time and

    57:23

    energy. That's the one which is constantly slowing you down.

    Next is to automate. Find a simple no-code tool that automates just that one task.

    Remember, even a small change can make a huge difference in your day-to-day work

    57:38

    life. Use the newly freed-up time to concentrate on what truly matters.

    Growing your business, thinking clearly, and building momentum. This is how you turn overwhelm into progress.

    One step at a time. It's a simple process, but it can

    57:55

    completely transform the way you work. This journey with automation isn't just about solving a problem today.

    It's about unlocking long-term success. By offloading routine tasks, you will feel less stress, like the person meditating

    58:11

    here. Your mind will be clearer and you will have space to think strategically.

    When everyone on your team is free from repetitive tasks, they can work together and create more efficiently and effectively.

    58:26

    And finally, there's the long-term focus. You will be able to set and achieve bigger goals, knowing that small daily tasks are taken care of automatically.

    This is where the real power of no-code AI comes in. It transforms your workflow so that

    58:43

    you can build a more sustainable and successful future for your business. Now that you have seen the power of automation, your next move is simple.

    Firstly, what are the tasks that are currently slowing you down every day? Think about it.

    Think

    59:00

    about the things you do over and over again that feel repetitive. And second is to look at those tasks and ask: which of these can I automate? Are there simple tools that

    59:18

    can handle these workflows? And finally, don't feel like you have to do everything at once.

    Start with just one small change and build from there. Pick one task and one tool to begin.

    Remember, the key here is to empower

    59:34

    yourself, not just to move, but to truly focus on what truly matters. Let's dive into powerful tools that can help you transform your workflow.

    At the center of your workflow, there are different areas where automation can make a huge

    59:50

    impact. For example, customer support: tools in this category help you automate email responses and handle routine customer queries.

    So you spend less time on repetitive communication. In sales and CRM, these are the tools

    00:08

    streamlining lead tracking and follow-ups, making sure you never miss an opportunity and nurturing relationships effortlessly. With content creation tools, you can automate social media posts and campaigns, keeping your

    00:24

    brand active without the daily hassle of planning and scheduling. Workflow integration tools let you connect apps and processes easily, syncing information across all your tools to reduce manual work and increase accuracy.

    Finally,

    00:41

    voice agents give you the power to create AI-powered call assistants that handle customer support inquiries and appointments instantly, improving service and freeing up your time. Together, these tools form a complete ecosystem

    00:56

    that helps you shift your focus from repetitive tasks to strategic thinking, creativity, and growth. Now, let's start with a versatile tool called Lindy AI.

Lindy is a fantastic example of a no-code AI agent that can act as your

    01:13

personal assistant, particularly in customer support and sales. Lindy can automate customer emails, drafting replies and tracking your inbox to handle routine communication.

It also seamlessly handles meeting scheduling

    01:28

    and follow-ups by automatically joining calls, transcribing them, and updating your CRM with key insights and action items. This frees up your sales team to focus on building relationships and closing deals, not the manual data

    01:45

entry. Next, we have a tool that is a cornerstone of no-code automation: Zapier.

Think of Zapier as a central nervous system for all your applications. It allows you to automate workflows by connecting thousands of

    02:01

    different apps. It even provides AI powered solutions to help you build your workflows faster and more efficiently.

By automating these connections, Zapier saves time and boosts productivity across the entire team. For example, you can

    02:18

create a Zap that automatically adds a new lead from social media to your CRM, then sends them a personalized welcome email, and finally notifies your sales team in Slack. All this in one single

    02:33

    automated sequence. Similar to Zapier but with a more visual approach is make.com.
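The multi-step Zap described above is essentially a small pipeline: one trigger fires a fixed chain of actions. Here is a library-free sketch of that idea in Python; the function names, the lead fields, and the in-memory "CRM", "outbox", and "channel" lists are all illustrative assumptions, not Zapier's actual API.

```python
# Hypothetical sketch of the multi-step "Zap" described above:
# new social-media lead -> CRM -> welcome email -> Slack notification.

def add_to_crm(lead, crm):
    # Record the lead in the CRM (here, just a list).
    crm.append(lead)

def send_welcome_email(lead, outbox):
    # Queue a personalized welcome email.
    outbox.append(f"Welcome, {lead['name']}!")

def notify_sales_in_slack(lead, channel):
    # Post a notification for the sales team.
    channel.append(f"New lead: {lead['name']} ({lead['source']})")

def run_zap(lead, crm, outbox, channel):
    # One trigger fires the whole sequence, just like a Zap.
    add_to_crm(lead, crm)
    send_welcome_email(lead, outbox)
    notify_sales_in_slack(lead, channel)

crm, outbox, channel = [], [], []
run_zap({"name": "Asha", "source": "social media"}, crm, outbox, channel)
print(channel[0])  # -> New lead: Asha (social media)
```

In a real Zap the trigger and each step would be configured in Zapier's UI rather than coded, but the ordering of trigger then actions is the same.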

    Think of make as your digital playground for building automation. Make is all about visual workflow

    02:49

automation.

Instead of just a list of steps, you can see your entire workflow as a map, which makes it much easier to understand and build complex sequences. It also has strong AI

    03:05

integration, allowing you to incorporate powerful AI capabilities directly into your workflows. And of course, just like other tools in this category,

it connects multiple apps so you can seamlessly transfer information and trigger actions

    03:21

    across all your favorite platforms. Make is perfect for everyone who wants to see their entire automation process laid out visually before they even build it.

For those who want more control, we have n8n. This is an open-source, no-

    03:39

code and low-code platform that gives you the power to automate almost anything. What makes n8n stand out is its open-source nature, which gives you immense flexibility and scalability, while

    03:55

offering extensive app integration to connect with your entire tech stack. Finally, let's talk about the future of customer interaction with Synthflow.

This tool is a great example of a no-code AI

    04:10

platform. Synthflow allows you to create incredibly realistic, human-like AI assistants that can handle both inbound and outbound phone calls.

    These agents are designed to have natural human-like conversations and offers seamless

    04:27

integration and scalability to grow with your business. It's a powerful way to automate your call center or sales team and provide great customer support.

    All these platforms offer powerful automation capabilities without needing

    04:44

technical knowledge. Whether you're handling customer queries, tracking emails, or managing team tasks, these tools can save you hours every week.

    Start exploring them today using the free plans and trials. So if you want to look

    05:00

    at Lindy AI, this is the interface you'll be getting at first. You need to enter basic details of your Google account or you can use any kind of email.

    So there's an option here to try it for free. Let's click on that.

And you can book a call if you want to know more

    05:16

    about Lindy. And there is a free trial that you can take and decide if you want to use Lindy AI.

So this is the interface you'll be getting. And you can see we can create agents for a personalized website, customer support email, outbound calls,

    05:31

lead gen, meeting recording, and more here. And we can also create any kind of agent from scratch using New Agent.

    So let's see one of the pre-built agents. Let's say customer support email.

    You just

    05:48

    give it a prompt. So after some time you will see an option here called build agent which is already ready.

    You can just click on it. And here you get an option of selecting what kind of triggers should be there for you to start this process.

    So either

    06:04

it can be any kind of Gmail or Google Calendar activity. There are also options for performing an action, searching a knowledge base, entering a loop, adding a condition, and entering an agent step.

    So you can build an entire agent here based on

    06:21

these options, and after that you can test it and turn it on. So it gets connected to your Gmail account and helps you automate the task.

So, coming to Zapier: when you open the website, you will see the current plans. If you want

    06:36

to try it, there is a plan which is available for free. There is a professional one too, and you can just click on Next.

    Your basic Gmail details would be required. So when you log in with your Gmail, here are the few questions they'll ask to personalize

    06:52

    your entire interface. So you can go ahead and fill them.

So this is the main interface of Zapier, and there are options for automating tables, interfaces, chatbots, and canvases. So you can explore.

There are a few templates

    07:08

which you can use as well. So if you go to the Interfaces option, you'll have a Create option here.

So you can just click on it, and you can either start from scratch or import from any kind of template, and then you can click

    07:23

on Create Interface. So from here you can build the interface from scratch, where the pages are distinct and connected.

What used to take hours of work, you can now get started on within 2 to 3 minutes, and you'll have an entire interface ready to run. So

    07:40

this is the main interface, where you can create multiple agents, and you can start from scratch or, again, import from a template given below. So when you click on Create Scenario, that means an agent, and you can add multiple

    07:57

apps and connect them easily. You have Google Sheets, flow control, and OpenAI built in.

So you just need to click on them, fill in the necessary details, and connect one app after the other. You can do things like replying

    08:13

to each comment without even putting in effort. You have other options for what interval this scenario or agent should run at.

Demo 4. So for n8n, which is an agentic AI builder, you have

    08:29

different kinds of plans, which include Starter, Pro, and Enterprise, and it is not free to use; you can trial it for free, but then you have to pay. Demo 5: the last demo is Synthflow AI, which automatically answers your calls

    08:46

and replies to any kind of customer service query that is required. So here you can see an option called Create an Agent.

So you can start, and you can either browse from templates or, again, start from scratch. You can see many other options, which include lead qualification and several

    09:03

other prebuilt templates. So let's go with a sales call agent for lead qualification. Here you can see you can give the agent a particular name, and you can add an image as well so that it displays. You can use AI models, which include

    09:20

ChatGPT models such as 4o, o1-mini, and also the latest version. You can fix the time zone: you can set it to Indian or European based on your needs. You can add your knowledge bases, you can choose what kind

    09:37

of custom vocabulary should be used and filter words, and then we have an option to use realistic filler words, like "um" and "ah", so it feels like a human. You can configure this for your email or even calls,

    09:55

and then you will have a voice assistant; without even answering a single call yourself, you'll be able to get customers.

This is Anubhab, and in this video we will create a website using ChatGPT. By now we have all heard about ChatGPT so many times.

So in today's video we

    10:12

will create a website with the help of ChatGPT, and we will not add a single line of code on our own. We'll ask ChatGPT to do all the work, and then let's see whether it can create a website independently.

    So let's begin. But before that, if you haven't subscribed

    10:28

to our channel already, make sure to hit the subscribe button and press the bell icon so you will get all the updates from Simplilearn. Now let's open ChatGPT and check.

Let's refresh this. I have already opened my ChatGPT.

You can just log in and then you're good to

    10:45

    go. So let's say hello and see.

    So it's working completely fine right now. Okay.

So what we'll do today is we'll open our Visual Studio Code. I have created one folder, which is

    11:01

    tailwind. It's an empty folder.

    You can just see that. So in that folder we'll add the Tailwind.

So first let's ask ChatGPT to help us install or set up Tailwind. So I'll write:

    11:17

"please set up Tailwind." Let's see.

    Okay.

    11:33

It's asking us to install Node.js first, and then it's asking us to check the version of that. Okay.

It's saying to install these JavaScript modules

    11:52

and to add these components in the CSS file.

    Okay. So let's check whether it is telling us

    12:07

something useful or not. PostCSS, Autoprefixer.

    Okay. Okay.

    Now, simultaneously let's check

    12:24

the Tailwind website. How to install this?

So here it's saying to run this command, npm install -D tailwindcss, and then init the project. Then it's saying to

    12:40

configure the template paths by adding them to your tailwind.config.js file. Then it's saying to create a file, input.css, and then just start the build process.
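The steps described above match the standard Tailwind v3 CLI setup from the official site (at the time of recording). As a setup fragment, assuming Node.js and npm are already installed, it looks roughly like this:

```shell
npm install -D tailwindcss      # install Tailwind as a dev dependency
npx tailwindcss init            # generate tailwind.config.js

# In tailwind.config.js, point `content` at your templates, e.g.:
#   content: ["./*.html", "./src/**/*.{html,js}"]

# In src/input.css, add the Tailwind directives:
#   @tailwind base;
#   @tailwind components;
#   @tailwind utilities;

# Start the CLI build, writing the compiled CSS to dist/output.css:
npx tailwindcss -i ./src/input.css -o ./dist/output.css --watch
```

The `--watch` flag keeps the build running so the output CSS is regenerated whenever a template changes.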

    12:56

So as of now, ChatGPT has not given us the correct answer. It's correct, but not up to the mark, as you can clearly see from the commands that are given on the Tailwind

CSS official website. So according to

    13:16

me, many people think that because of ChatGPT, so many developer jobs are at risk, because ChatGPT can create websites and all of that stuff. But I don't think so. According

    13:32

to me, it's not that good yet. It's very good for helping any developer, but it's not that good on its own.

    But let's see we are in this together. So let's just

    13:47

open Visual Studio Code and install our Tailwind. Now we'll ask it to create so many other things.

    So let's see whether it can create that or not. So I have opened my terminal.

    So many people

    14:02

also ask us in the comments how to open the terminal. I just used one shortcut, Ctrl + backtick, to open the terminal.

    You can also go here and create one terminal.

    14:19

Okay. Now I'll paste the line to install Tailwind CSS on my system.

    So let's wait a couple of minutes. As you can see it's installed in our system.

All the packages and modules are installed. Now it tells us to init the project.

    So

    14:37

    let's do that. Okay.

It has successfully created the config file. Now inside the content array we'll copy-paste this one, as it is

    14:53

saying to add these lines to configure your template paths: add the paths to all your template files in the config file.

    Okay. So let's just copy this thing

    15:08

and paste it in the content array. Now let's save it.

Okay. So now the next step is to add the Tailwind directives to your CSS.

    Now what will we do here is

    15:26

let's go back to Visual Studio Code and create a folder, src. Okay.

    Inside this src we'll create a

    15:41

file, input.css. Here we'll paste the code.

    Okay. We haven't copied that code yet.

So let's go back, paste it here, and it's done. Okay.

    16:01

So again, I don't think this can replace developers, because although it has given us all the steps, any developer can do this task in a very

    16:18

    short period of time. It can just help any developer but not take the job of the developer.

Okay. So let's move ahead, and the next step would be

    16:35

to go back. Next step: the build process.

    Let's copy this one and start the Tailwind CLI build process. Okay.

    Let's copy and paste it here. Right?

    16:53

As you can see, it's done in 273 milliseconds. The output.css is generated in this dist folder.

    Right

    17:08

now, we just have to use this file, output.css.

    Okay. Let's see.

    Now let's create a file here

    17:24

and name it index.html. Okay.

And inside that I'll write my boilerplate code by just pressing Shift and 1 for the exclamation mark. And then the

    17:39

boilerplate code is done. So we can change the title of this, and let's write here "Website using ChatGPT."

    18:06

Okay. Now one main thing that I wanted to tell you: I have installed one extension in my system, which is Live Server.

    This extension you can download and this is very helpful. Okay.

    18:25

Let's open this one. Now what we'll do is link the output.css file in our HTML

file. So for that I'll write here: link.

    18:41

Okay. And you can just write link, hit Tab, and it will write rel stylesheet for you.

    19:00

Okay, cut this one. What we'll do is give the path of the output.css.

    What we'll do is we'll give the path of the output. GSS.

    For that we'll write dot slash

    19:17

dist/output.css. It's just showing up.

    Okay. Close this link tag and let's save it
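Putting the steps above together, the link tag in index.html ends up looking roughly like this (assuming the Tailwind build wrote its output to dist/output.css, as in the demo):

```html
<!-- Link the compiled Tailwind stylesheet in index.html -->
<link rel="stylesheet" href="./dist/output.css">
```

This goes inside the `<head>` of the boilerplate, so every Tailwind utility class used in the page picks up its styles from the compiled file.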

    19:35

and let's just right-click, and here you'll see Open with Live Server once you have installed the Live Server extension. So let's open this one.

    19:53

So this is working completely fine. It's showing us the title "Website using ChatGPT."

Okay. For the content, I'll write here "Welcome to Simplilearn," and then let's save it,

    20:11

and you'll see it's showing "Welcome to Simplilearn." Okay.

Now the next step would be to create the header. Now let's open our ChatGPT, and

    20:29

    here we'll write create a responsive header using

    20:45

Tailwind CSS. Let's first copy this one.

    Let's see.

    21:04

    Okay. So, let's see whether it can give us a right answer this time.

    It can

    21:20

    Let's see. Okay, it's giving us the code now for the header.

Okay.

    21:56

    So it is showing us in the above code we use Tailwind CSS classes to style the header. We set the background color to gray.

This one is the class for the gray color, and then the text color would be white in that navbar part. Inside

    22:12

the header we have a nav element with flex and justify-between. Okay,

classes to horizontally center the elements and space them out evenly. Okay, fine.

We also use px-4 and py-3 to add padding to

    22:27

    the navigation bar. Cool.

For the mobile menu, we used a button with the md:hidden class to hide it on larger screens. Okay, let's see the code now.
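Based only on the classes mentioned above (the gray background, white text, flex with justify-between, px-4/py-3 padding, and the md:hidden mobile button), the header ChatGPT produced would look roughly like this. This is a minimal reconstruction, not the exact output from the video:

```html
<header class="bg-gray-800 text-white">
  <nav class="flex items-center justify-between px-4 py-3">
    <a href="#" class="font-bold">Logo</a>
    <ul class="hidden md:flex space-x-4">
      <li><a href="#">Home</a></li>
      <li><a href="#">About</a></li>
      <li><a href="#">Contact</a></li>
    </ul>
    <!-- md:hidden shows this button only below the md breakpoint -->
    <button class="md:hidden">Menu</button>
  </nav>
</header>
```

The `hidden md:flex` / `md:hidden` pairing is what makes the header responsive: the link list appears on medium screens and up, while the menu button appears only on mobile.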

    Okay, now

    22:42

let's copy this HTML code. Let it finish writing everything.

    Show here. We don't need this one.

    So, let's delete this. And let's here write header.

    And let's comment it down

    23:03

    this. Okay.

Let's paste the code and check. Let's save it and go back to our file. So this is what ChatGPT created for

    23:21

    us. It's looking very good actually.

So let's inspect it and see whether it is responsive or not. This is completely responsive, guys.

    23:38

    Very good. Yeah.

    So this is really very good. Okay.

On the Surface Duo and the S20 Ultra, this is what will show on

    23:55

    mobiles. Okay.

Now we'll go back to ChatGPT, close this one, stop generating, and write here: create a hero

    24:15

section using Tailwind CSS. Now let's see. Something went wrong. Okay,

    Okay,

    24:36

let's see. Let's first copy this one, refresh this, and see how it will create the hero section using Tailwind CSS.

So, it has created this one really very

    24:51

good. I didn't expect this from ChatGPT, but it's really looking very good.

It's simple, and you can link these to other pages just using the anchor tags here. We can just give the path. So I'll

    So I'll

    25:08

show you: www.simplilearn.com. So this one is okay.

So whenever I click on this one, it will redirect us to Simplilearn. But

    25:24

the URL is wrong. We can just go back to simplilearn.com and copy the URL. Let's go back to our code and write the URL here.

    Okay, save

    25:43

    it and let's see. Yeah, this is how this works.

    Let's close this one. Okay, now let's see what is the code.

    Okay, let's copy this code and let's see

    26:00

how this is working. So, it is also giving us a complete explanation of how it's working, how each step is working, and all.

As you can see, it's explaining the max-width utility, which sets the max

    26:16

width for the container at 7xl, which it says is 112rem or 1792 pixels. It's giving us all the explanation for the code it has written for us.

    So, let's check it. And after the header, I'll

    26:34

    paste that code and let's save it and see. Okay, this is really very good guys.

Yeah, I didn't expect this from ChatGPT, but it's looking very good. So,

    26:50

as a developer, if I had to write this much code, it would take me around 2-3 hours at least, but with the help of ChatGPT it was created in only 5-10 minutes. This is very good actually. Now let's

    27:08

open ChatGPT again, and here we'll create an information section. Okay: information section

    27:23

using Tailwind. Let's see now.

    Till now it's working really very good.

    27:39

Now it is writing the code for us. So as you can see, very good.

    28:08

    Now let's copy this one and paste it under this. Save it and let's check.

    So this is the information section this has created for us.

    28:24

    Right? Important information, location, office hours, contact us and all.

    Right? We can also regenerate it for something else.

Okay. Now it will create something else for us,

    28:42

right? If we don't like something, we can just regenerate and create another one. So let's copy this one. Now it's telling us everything about the

    28:57

code, and here after that I'll paste this and save it. Let's check. What's this? Okay, this is the information section it has created for us; this is some junk data. And if you don't like this one either,

    29:13

we can regenerate the response again. This is really an amazing feature of ChatGPT. Okay, so now it's creating the third option. We can endlessly ask it to create the

    29:30

response again and again. Let's copy this one. Okay.

    And let's remove all of it because it's looking very clumsy. Now let's see.

    Okay. This information

    29:47

section says "Information Section. Welcome to our website."

    This is what it's written. Okay.

Now let's ask it to create something else. Okay.

    30:02

Let's write: create an informative section using Tailwind with one image. I'll check whether it can

    30:20

respond to my requirements: I want one image on the left or right side, and the text on the other side.

    Let's see whether it can create it for us or not. Okay.

Create an informative section using Tailwind with one image on

    30:35

    the right side and the text on the left side. Now let's see

    30:53

whether it can create that for us or not. Right, it's just a simple command.

    So we'll check whether it can create or add the image. Okay, it's using the classes for the

    31:12

text, but okay, it took the image class also, and it's taking the source from an API called Unsplash. I have covered this Unsplash API in my previous videos.
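From the behavior described later (a different random image on every refresh, filterable by a keyword like "coding"), the video appears to be using Unsplash's Source endpoint. Treat this as a reconstruction; that endpoint has since been deprecated by Unsplash in favor of the official API:

```html
<!-- Random image from the (now-deprecated) Unsplash Source endpoint;
     the keyword after "?" filters which photos can be returned. -->
<img src="https://source.unsplash.com/random/800x400?coding"
     alt="Random coding-related photo">
```

Because the URL resolves to a different photo on each request, refreshing the page changes the image, exactly as shown in the demo.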

    31:29

    Okay. Okay.

    Now let's copy this and paste it on our code. Let's delete this one and paste this one on the website.

    Okay. So the image is quite large.

    We

    31:46

can reduce the size. But at the start I told you I'll not add anything to the code myself.

    So I'm not doing anything in the code. Right?

So we can change the size. It's really good; as I told it to add the image on the right side and the text on the left

    as I have told him to add the image on the right side and the text on the left

    32:02

    side. It's doing the same for the user.

So this is really very helpful for the developer. But as I have already told you, it cannot replace developers, because I'm the developer; I know how to do things. As I

    I only knows how to uh do things as I I

    32:19

have shown you, it gave me so many wrong answers before this one. It gave me three answers, but they were not up to the mark.

So I had to tell it to create the image on the right like this, and I've

    32:37

explained it to ChatGPT, and it created this one for us. Okay.

And whenever I refresh this page, the image will change. This is how the Unsplash API works.

    Okay. We can change the

    32:53

    images also. This is random.

    Just write here coding. Okay.

    Now you will see the images that will come here. Okay.

    33:08

    Let's see. Let's check it.

    33:24

It's not taking any other keyword. I don't know why it's not working, but let's not waste time on this.

    Random is also fine.

    33:40

    Till now it's looking really very good. Okay.

    Now the next step would be to add the service section. Okay.

So let's ask ChatGPT to create the services section for us. Okay.

    34:11

    Now let's see it's coming. Okay.


    34:27

Really very good. Look how beautifully it has been created, right?

    34:47

    Yeah. Okay.

    Let's see. It's done.

    Now, let's copy the code and go to our editor.

    35:04

And here we'll paste this code. Let's check.

    Okay. So, it's not aligned straight, but we can align it.

    So it's showing us that

    35:20

the services that this website is providing, "What we offer," and this is some junk data: web design, web marketing, and all. So let's check. Yeah.

    Okay. So what this one is doing here is

    35:39

it's using only three cards, and these are all using the same classes. But let's find out why the

    35:56

image is coming out like that. Right, we can align it; if you guys want me to align this, we can align it. Okay.

    36:15

So that's just one line. Okay.

    We can align that. So let's dig deep into this code.

    Okay.

    36:32

These image classes. Okay.

The image is in the same justify-center, and then it's using the flex-shrink class.

    36:48

Let's make it rounded-lg. Okay.

    56. Let's make it 50 and check.

Very bad.

    37:13

Go back to the code. The size of this changed.

    37:31

Okay, let's go back to ChatGPT again, regenerate a new response, and see how the second response is looking.

    Right. Let's wait a couple of seconds more.

    38:05

    Let's see the response that it is creating. We need a minimalist design.

Okay. I was just observing the code to see how it's looking.

    38:22

We can also change the colors and customize this website on our own. But it's really very good.

I'm really impressed with this. This will really help developers in creating websites.

    Okay.

    38:40

    Let's go back to Stilling. After that, we'll create the about section for the page.

    Okay.

    38:55

Okay. It's again giving us the design and the explanation part.

If I tell ChatGPT to create this services section with my exact requirements, it will definitely

    39:13

create that, I guess. Now I'm really impressed with the clean code it's writing, and it's also responsive. I haven't told it to create responsive code, but

    39:29

it's really very good. It's responsive; I'll show you. Inspect.

I was just checking whether it is responsive or not. If you see, okay, let me just cut this part.

    39:50

Let's just... yeah. So this is how we check responsiveness. And let's see: these three, Home, About,

    40:06

and Contact, have vanished; this is the header we created, and the top one is the navbar. This is the information section, and the services section is also there.

    This

    40:22

    is all responsive. Okay.

So, we'll check how it's looking on a Samsung Galaxy S20. This is how it's looking.

    40:39

    Very good. Yeah.

    Let's move on to the code. Copy this code.

    And coming back to the code editor. Let's delete this section.

    40:54

    Okay. Now let's paste it and see.

    Okay. It's giving us an error.

    41:11

This is what it created for us. I guess it's a little incomplete; it was still writing the code and I stopped it.

    It's fine. It's not its mistake.

    Let's

    41:28

    stop and regenerate the response. Let's copy this again.

    41:44

Coming back to ChatGPT. It's not working.

Let's refresh ChatGPT again. Services section.

    Let's see. It's really

    42:02

    fast now. Okay, let's observe this one first.

Our services, what we offer, web design; there must be some lines about digital

    42:18

marketing, and I'm thinking there was also one part which got cut off. It was not its mistake but mine, because I stopped generating the response.

    42:34

    Okay, very good. Let's copy this text now and paste it under.

    Let's delete this and paste it. Let's check it this time.

    42:51

    So, this is how this one is looking. Now, this is also really looking very good.

    You can decrease the size of this image. Do not worry about that.

This is the services section with some junk data. Here we can write that

    43:09

    this uh website is providing this and that. Okay.

Service one: we provide this service, we provide this service, and we provide this service. Okay.

    This is all about that, right? Really very good.

    Really very

    43:24

    good. It's really very good.

    Okay. We can change everything from it.

Customize it to our own needs. Let's just go back.

    Okay,

    43:39

    this one is looking good. Okay, now let's ask Charg to create the about page for us.

    So, we'll write create

    44:08

So meanwhile, while it's writing the code for us, I wanted to tell you that ChatGPT is great for everyone, but I know how this code is working.

I know what a section is, what a div class is, and what the functions are; this

    44:24

grid system it's using, the max-width and xl utilities; I know everything about it. And if you are a fresher, or you don't know anything about coding, I think you should first learn the language or the

    44:39

framework, and only then ask this of ChatGPT. If you don't know the language or the framework and you just tell ChatGPT to create this and that for you, it's easy.

    It will create

    44:56

for you, but it will not help you in your future. So according to me, just use it to learn, not to do your work.

    Okay.

    45:15

    So let's move ahead. And the path is given.

Okay, it again got cut off. Let's ask it again.

    45:34

Responsive about section using Tailwind. Okay, I just misspelled it, but it is very

    45:51

intelligent. I misspelled Tailwind and it corrected it.

    Very good.

    46:10

So this is not the complete code for the About page. I lost it, so let's see whether it will take the command or not.

    46:30

    Right. Great.

So let's see. After this About section, we'll create the Contact page, and then we'll finally have created this website, right?

    46:48

So this is really very helpful for developers, I must say. Let's copy this code and paste it under this section.

    47:11

This is how the About page is looking; really very good. The image size is 500 by 500.

    It's really very big. So we can change the size of the image.

    47:27

    Let's see. Where is that?

    250

    47:44

    and then it will not come again. Okay, let's just delete this source.

Copy the Unsplash one

    48:03

    for the random image, right? That command is really useful.

    Okay, I forgot where to add that. Yeah.

    48:18

Yeah. This is how the About section is looking; really very good, guys.

Really very good. If I tell ChatGPT to create this on the right side, or that I want this image on the left-hand side or right-hand side, it will definitely do it

    48:35

for us, as I gave it that task in the information section and it definitely worked.

    So I must say it's really very good. So now let's move on

    48:51

    to the contact page. Okay, let's here write create a responsive

    49:08

contact section using Tailwind, and I'll ask it to add some sections

    49:24

    for us. So let's see whether it can take up that or not.

Okay, let's give it a second shot. Let's first see how the contact section is looking; then we'll tell it

    49:41

to improve that. So it's just using a section element; inside that section it's using some classes, flex-row and all, and what it's giving us is really good.

    50:05

Let's copy this text again and paste it here. So the Contact Us section is also looking very good.

    Get in touch and then it will

    50:23

redirect us; that's really good. The image sizes are large, but we can reduce them.

    It's not a big deal but it's all looking very good. Right.

    So one

    50:40

    thing is left is the footer part. Let's ask it to create a footer for us.

    Now this is the last and final step for this create

    51:01

    section using enter. This is really very good.

    Looking very good. Let's add the

    51:20

Unsplash image and then use it here. So this is how

    51:36

it's looking now. So overall, if I look at it, it has really given us a good output.

So I will say ChatGPT can only help us; that's it. This code that is created by Chat

51:53

GPT for us, we can't use directly.

This structure that ChatGPT created for us is good, but we need to add so many things before deploying it. Right?

If I look at this one, we need to put these services in the center, and we need

    52:11

to arrange the text and the services; we need to change the size of the services. If the client wants it horizontal, then we have to change that.

    We need to change this

    52:28

image size; as you saw, I changed these image sizes.

I used different APIs. So overall it's very good, but there are so many things that we still have to do.

    Okay. And for that

    52:45

developers are needed. Now we'll start with our next project, and for that we'll first create a folder in Python projects

    53:01

and name it "Telegram bot using ChatGPT." Okay.

    And inside this we'll open the

    53:16

command prompt and open our IDE. I will be using Visual Studio Code; you can use any IDE you are hands-on with. And now we'll go back to our ChatGPT and

    53:34

    we'll start here. But before that, let's talk about Telegram bots.

    So a Telegram bot is a program that interacts with users via the Telegram messaging app. The prerequisite is that you should have a Telegram account, and

    53:50

    bots can be used for a wide range of purposes such as customer support, news delivery, and even games. Now, about ChatGPT: ChatGPT is a large language model trained by OpenAI that is based on the GPT-3.5 architecture, and ChatGPT is capable

    54:07

    of generating human-like responses to text-based inputs, making it a great tool for building chatbots. And now, if we talk about prerequisites, you should have a Telegram account and Python installed on your system.

    And we need the python-telegram-bot library,

    54:24

    which I will show you how to install; ChatGPT will also tell us what to install. So we'll just ask ChatGPT to create a Telegram bot

    54:40

    using Python. Okay, so it shows an error.

    We'll just refresh the page and ask again: create a Telegram bot

    54:59

    using Python. Okay, we'll see what it states.

    So, to create a Telegram bot, you need a Telegram token from the BotFather.

    Yeah, we have to go to the BotFather. I will show

    55:15

    you guys how to do that. Install the required libraries.

    Next one you need to install. Okay.

    And write this code. Okay.

    55:32

    So in this script they have started with the start function, so the bot will just say hello when it is started.
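    To make that concrete, a start-only bot looks roughly like this; this is a minimal sketch written against the v12-style python-telegram-bot API that the generated script uses, and the token string is a placeholder, not a real one.

```python
# Minimal start-only Telegram bot sketch (python-telegram-bot v12 style).
# TOKEN is a placeholder -- paste the token BotFather gives you.
TOKEN = "PASTE-YOUR-BOTFATHER-TOKEN-HERE"

def start(update, context):
    # Reply "hello" in the chat the /start command came from.
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text="Hello, I'm your bot")

def main():
    # Import kept inside main so the handler above can be read even
    # without python-telegram-bot installed.
    from telegram.ext import Updater, CommandHandler
    updater = Updater(token=TOKEN, use_context=True)
    updater.dispatcher.add_handler(CommandHandler("start", start))
    updater.start_polling()  # keep fetching updates until stopped
    updater.idle()

# To actually run the bot, paste a real token above and call main().
```

    Sending /start to the bot in Telegram then triggers the start handler and the hello reply.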

    55:48

    Okay. So we will add some more functionality, and we will ask where to find the API token.

    So I know we have to go to the BotFather, but we will ask

    56:06

    ChatGPT also: where can we find the API token?

    So it states that to get a Telegram

    56:22

    bot API token, you need to create a new bot by talking to the BotFather on Telegram. Open Telegram and search for BotFather.

    Okay. And send the BotFather the message /newbot, and the BotFather will ask you for the name of the bot, and it will ask you

    56:38

    about the username, and fifth, the BotFather will then generate a token for your bot. This token is a string of characters that uniquely identifies your bot on Telegram.

    So keep this token secure; I will also blur it so you guys won't be able to see it. Okay.

    56:56

    So, moving to Telegram, we'll just search BotFather here, and you can see this is the BotFather, and we'll just click on Start,

    57:13

    and ChatGPT asked us to write /newbot. We'll just click on this, and we'll get: alright, a new bot, how are we going to call it? Please choose a name for the bot.

    57:28

    So write simplilearn_new_bot. Okay,

    57:44

    good. Let's choose a username.

    So, simplilearn1bot, and it states that your

    58:02

    username should end in "bot". So we have ended it with "bot", and you can see the token here, and here you can just access your bot.

    So we'll get back to our IDE and create a

    58:18

    new file, and I will name it new.py, or you can name it bot.py or anything you want. We'll get back to our ChatGPT.

    But before that first we

    58:35

    need to install the library. For that you can go to the command prompt, or you can use the terminal of your IDE; that is, in Visual Studio Code you can use the terminal to install the libraries.

    58:51

    You can see that the requirement is already satisfied, as this library is already installed on the system. So, moving back, the library is installed, and now we'll copy this code and paste it

    59:07

    here, and we will just change the token. We'll go back to the BotFather.

    We'll copy this token.

    59:27

    Come back and paste it here. Okay.

    59:42

    Now we will run this and see whether our bot is working or not. So it has executed successfully.

    We'll get back to our BotFather and just click on simplilearn1bot. So

    00:01

    we click on Start, and see, it says hello, I'm your bot. So it's working fine.

    So if we write hello, it won't respond, as there is no functionality for that. So we will ask ChatGPT to add functionality.

    00:19

    Please add some more functionality and responses to the bot.

    00:39

    Let's see. So, sure.

    Here's an example of how you can add some more. So they used the bot's Updater and Dispatcher. Okay.

    Okay.

    00:55

    Define the help command handler. Okay.

    01:22

    So ChatGPT has defined three functions. The first is echo, and what does echo do? It will just give you back the same thing that you give to the bot, or whatever you write to the bot.

    Then caps:

    01:38

    it has also declared a help function, in which you can see what functionality the bot has. The caps function will convert the message to all caps; echo will echo the message back to you, giving you the same message; /start will start the

    01:55

    bot; and /help will get help. And now unknown: if there is something you ask outside of these commands, it will just say sorry.

    02:11

    Okay, I will help you guys understand this code as well. But first we'll see whether it's working or not.

    So for that we have pasted it here and we'll paste our API token again.

    02:45

    So I pasted it here. Now, first we will close this terminal, get a new terminal, and then run the program.

    We'll get back to the BotFather. And this is our bot.

    So we'll just write /

    03:05

    start. And it says hello, I'm your bot.

    Now we'll say hi to the bot.

    03:24

    Okay, it's not responding. Okay, we'll just close the terminal, and we have pasted the key as well.

    Okay, we've done it again.

    03:44

    Now we will see whether it's working or not. Start.

    Hi. So the code is not working,

    04:04

    so let's look at the code again. Okay.

    Here we don't have any response defined for "hi", so what we'll do is use the help command, and

    04:20

    to call the help, what we have is /help. These are the commands it will respond to.

    So we'll use /help. Okay.

    Now you can see /start to start the bot, which

    04:35

    we have done; /help; and /echo to echo the message back to you.

    So if we write hi, it won't write back. So we'll write /echo hi.

    04:52

    And then it has given us the output, which is hi. So we can write /echo how are you?

    So it has echoed it back, and similarly we

    05:08

    have /caps. So we'll write /caps, and we'll write something in lowercase.

    That would be "greatly built".

    05:27

    Okay, now you can see it has returned it in caps. So you can add some more functionality to it.

    And before that, I will help you guys understand the code.

    05:48

    So now we'll see what this code does. First we have imported the necessary modules, that is, the classes from the python-telegram-bot library that we'll need to create our bot.

    So telegram contains the main Bot class, while

    06:04

    Updater, CommandHandler, MessageHandler, and Filters are classes that we use to handle incoming updates and messages from Telegram. Okay.

    Next, we have created an instance of the Bot class using our Telegram bot API

    06:21

    token, as well as an Updater instance that will continuously fetch new updates from Telegram and pass them to the appropriate handlers. And we have used use_context=True, which tells the Updater to use the new context-based API

    06:36

    introduced in version 12 of the python-telegram-bot library. And we also used a Dispatcher object that will handle incoming updates.

    After that we have created a start function and passed update and context.

    06:52

    So we have defined a function that will handle the /start command. The update parameter contains information about the incoming update from Telegram,

    while the context parameter contains some useful methods and objects that we can use to interact

    07:08

    with the Telegram API. In this case, we have used

    context.bot.send_message to send a message to the chat with the ID specified by update.effective_chat.id. And after that we

    07:25

    have created a help function. So we use a multi-line string to define the help message which contains a list of available commands.

    And then we have used context.bot.send_message to send the help message to the chat,

    07:43

    and after that we have the echo function. We use context.args to get the message sent by the user after the /echo command.

    So to use this, we have to use /echo, and after that we have to

    07:59

    write the message; we then use join to join the words back together into a single string. We have then used context.bot.send_message to send the message back to the chat.

    Then we have caps. This function is defined

    08:15

    to handle the /caps command. We have again used context.args to get the message sent by the user after the /caps command.

    And then we have used the upper function to convert the message to all caps. And

    08:30

    then we have used context.bot.send_message to send the message back to the chat. Then we have the unknown function; this function is defined to handle any command that the bot doesn't recognize.

    So we have used

    08:45

    context.bot.send_message again,

    so it will just say sorry. Okay.

    So these are the start handler, help handler, echo handler, and caps handler. These are the commands, and we have added them with add_handler,

    09:03

    and to start the bot we have used updater.start_polling(). So this is how we have created the bot with the help of Python and ChatGPT.
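    Putting the whole walkthrough together, the script can be sketched roughly like this; it follows the v12-style python-telegram-bot API described above, the token is a placeholder, and the exact wording of the messages is illustrative.

```python
# Sketch of the echo/caps/help bot described above (python-telegram-bot v12).
TOKEN = "PASTE-YOUR-BOTFATHER-TOKEN-HERE"

def start(update, context):
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text="Hello, I'm your bot")

def help_command(update, context):
    # Multi-line help text listing the available commands.
    text = ("/start - start the bot\n"
            "/help - show this message\n"
            "/echo <text> - echo the message back to you\n"
            "/caps <text> - convert the message to all caps")
    context.bot.send_message(chat_id=update.effective_chat.id, text=text)

def echo(update, context):
    # context.args holds the words typed after /echo; join them back together.
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text=" ".join(context.args))

def caps(update, context):
    # Same as echo, but the joined message is converted to upper case.
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text=" ".join(context.args).upper())

def unknown(update, context):
    context.bot.send_message(chat_id=update.effective_chat.id,
                             text="Sorry, I didn't understand that command.")

def main():
    # Import kept inside main so the handlers above stay readable even
    # without python-telegram-bot installed.
    from telegram.ext import Updater, CommandHandler, MessageHandler, Filters
    updater = Updater(token=TOKEN, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("start", start))
    dp.add_handler(CommandHandler("help", help_command))
    dp.add_handler(CommandHandler("echo", echo))
    dp.add_handler(CommandHandler("caps", caps))
    dp.add_handler(MessageHandler(Filters.command, unknown))  # catch-all, last
    updater.start_polling()
    updater.idle()

# To run the bot, paste a real token above and call main().
```

    Note that the unknown handler is registered last, so the named commands get matched first and only unrecognized commands fall through to the sorry message.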

    So

    09:20

    we are done with the project. You can add more functionality to the bot as well.

    You can just ask ChatGPT for more functionality, such as playing music in the Telegram bot, or you could just ask it how to send messages to a particular

    09:37

    user from the bot itself. You could also send media files,

    or request media files from the bot, and you can build a full bot with the help of ChatGPT, and this

    09:52

    is called ChatGPT scripting. So, with the help of ChatGPT, you can just ask it and it will guide you through all the code and processes; you just have to put them in sequence and use them to the full for your purposes.

    10:08

    So for that, first we'll open the command prompt and run the file main.go. We'll write the command go run main.go.

    So this is a file that is written in

    10:24

    the Go language, and we're going to run it with the command go run main.go. For that you need to have Golang installed on your system.

    So I will guide you through the whole process, but currently we are seeing the demo. Here, this command

    10:40

    will generate a QR code that we'll scan with the device on which we want to integrate ChatGPT. So we'll wait for the file to get executed, and after this we will execute the server.py file,

    10:56

    and that will open ChatGPT in the Firefox browser. You can also use other browsers, such as Chromium, if you want, but we'll use Firefox to skip the one-step verification where

    11:11

    ChatGPT asks us whether we are a bot or a human. So we'll run the file again, as there was some error.

    So this time, yeah, it ran perfectly. Now we'll take the device and open WhatsApp

    11:28

    on it, that is, the device on which we want to integrate ChatGPT. So I'm using one device just to capture this QR code.

    So this is the device and you can see that my device has captured this QR code

    11:46

    and you can see here that it has been activated. So now we'll run another file, that is the server.py file, which is the Python file.

    For that we'll open the command prompt again and that

    12:02

    would be another command prompt. And to run that file, we'll write the command python server.py,

    and you can see in the Firefox browser

    12:18

    that ChatGPT has opened, and I have logged in already, so it didn't ask me to log in again.

    Now we'll take another device and we'll message on the mobile device which has

    12:34

    been integrated with ChatGPT. So from this device I will write hi, and you can see on the screen that ChatGPT replies hello, how can I assist you today, and the same you can see in

    12:50

    the WhatsApp chat. So now we will ask ChatGPT what the capital of India is.

    So you can see that ChatGPT is typing: the capital of India is New Delhi,

    13:07

    and the response has been sent to our mobile device. So this is how we can integrate ChatGPT with WhatsApp, and this is the simple tutorial; in this one you don't need to write any code.

    13:23

    There is another tutorial if you want to know what's behind the code, or what's behind the integration part, so you can watch that video to learn how we have integrated it. But in this tutorial I will guide you on how to

    13:39

    download the files, how to run the files, and how you can integrate ChatGPT on your device. Let's start integrating ChatGPT on our device.

    So for that first you have to download this repository and it contains some files

    13:55

    and I will upload some more files here. You just have to download it.

    Download the zip file, and after downloading it, you just need to extract it into a folder.

    14:24

    So we'll extract it in the C drive, in Python projects; we mainly create our folders here. So here we'll create a folder.

    Okay. Integrate ChatGPT and WhatsApp main, or, yeah, just write integrate

    14:44

    ChatGPT. That's it.

    So inside this folder we'll extract the files, and I think it's done. So we'll just visit the C drive, Python projects, and inside it

    15:02

    we have integrate ChatGPT, and we have these two files. To run these two files, what you need is Python installed on your system and Golang installed on your system.

    So I hope you guys know how to install Python and

    15:18

    Golang. If you don't I will just give you a quick tutorial.

    So to download Python, you just need to visit the python.org website, move to the download section, and you will see "Download the latest version for

    15:34

    Windows. You just need to click that and the download will start for you.

    The package has been downloaded. So I will provide you the link for downloading Python in the description box, and also the link for the GitHub repository.

    15:52

    So you don't have to search for it anywhere; you can also search it in the browser. Just write "integrate chatgpt and whatsapp abisar auja", and also write "github" in the search bar, and you will be redirected to

    16:09

    this. And now you have downloaded Python, so just open the exe file and start the installation. You can choose "Add python.exe to PATH"; select that,

    16:27

    then Customize installation, tick the Python test suite option, and next you can add Python to the environment variables. So you have to tick both these options, and then you

    16:42

    will click on Install. As I have already installed it, I won't need to install it again.

    You just need to click on the Install button, and there you will get Python installed. Okay.

    And the other thing you need is Golang,

    16:58

    the Go language. To download it, you have to go to its official website, go to the download section, and here you will see Microsoft Windows. As I'm working on the Windows operating system,

    So I will

    17:13

    download it for Windows. I've already downloaded and installed it.

    So you guys can download it, and I will show you how simple it is to install Golang. You don't need to add anything.

    So it's been downloaded,

    17:34

    and the wizard will guide us through the installation. Okay.

    So we're just waiting for the setup to be initialized so that we can install it. Now you can see the Next button is

    17:49

    available. Just click on that, and it says a previous version of the Go programming language is currently installed.

    Yes, you can see that it's already installed. So there's nothing special you need to do when installing the Go language.

    Just click on Install and it will be installed for you. So I

    18:05

    won't be installing it again, as I have already installed it on my device. So, moving on.

    Now, what we'll do first is run the server.py file.

    So for that, we will go to the folder where we have

    18:20

    extracted our files, and here we will open the command prompt and run the file server.py.

    To run a Python file, we have to write the command python and then the name of the file, that is server, with its extension, py. So we have

    18:38

    initiated that. Okay.

    Firefox is already running, as we have not closed what we opened for the demo. I think

    18:55

    it's an error. I will run the command prompt again.

    I will just close the previous command prompts. Yeah, I have closed them.

    Now I will

    19:11

    open a new one. You should open a new command prompt after installing Python and the Go language. I will also assist you in installing the GCC compiler, because you will be needing that for the Go language.

    19:27

    So first we'll run the server file. You can see that by running the python server.py

    file, ChatGPT has opened up in the

    19:45

    Firefox browser. So I will show you the code.

    For this, you need to install the flask module, the os and sys modules, and the playwright module, and here on the 16th line you can see

    20:03

    that we have used Firefox. You could use Chromium for Chrome, but in that case you need to do the one-step verification, that is, the captcha thing for initializing ChatGPT. We don't want that, so we are using

    20:20

    Firefox, so you should have Firefox on your system. Now, what I want you to do is just install these modules, because if these modules are not installed on your system, it will show an error right in the command

    20:35

    prompt. As I have already installed them, it's not giving me any error.
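    To give a rough idea of how a server like this hangs together, here is a heavily simplified Flask sketch; the route name, the payload shape, and the stubbed ask_chatgpt function are my own illustrations, not the repository's actual code, and the real server.py drives the ChatGPT page in Firefox through Playwright instead of this stub.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_chatgpt(prompt):
    # Stub: the real server.py types the prompt into the ChatGPT page
    # via Playwright and scrapes the generated reply.
    return f"(reply to: {prompt})"

@app.route("/chat", methods=["POST"])
def chat():
    # The Go side forwards each incoming WhatsApp message here
    # and relays the returned reply back to the chat.
    prompt = request.get_json().get("message", "")
    return jsonify({"reply": ask_chatgpt(prompt)})

# To serve it locally: app.run(port=5000)
```

    The point of the split is that the Go program only speaks WhatsApp, the Python server only speaks to ChatGPT, and a small HTTP call connects the two.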

    So I will tell you the commands to install them. To install flask, you can just search "pypi flask" in any browser, and it

    20:53

    will direct you to the website. You can open the first link, which will be pypi.org, and there is the command pip install flask.

    So you can just copy that and open your command prompt. First we'll press

    21:08

    Ctrl+C. And now you can see that we are in the folder integrate ChatGPT.

    And here you can paste this command and press Enter. It states that the requirement is already satisfied, as I have already

    21:26

    installed these modules. The other module you need is playwright.

    So just copy the command, go to your command prompt, paste the command, and press Enter. It will be installed

    21:42

    for you. Another module you need is virtualenv.

    So just copy this and paste it in your command prompt. And the other modules mentioned are the

    os and

    21:58

    sys modules (these two actually ship with Python's standard library, so they normally don't need a separate install). We'll open PyPI, the official package index website; you just copy the command, go to the command prompt, paste the command, and press Enter, and this

    22:14

    module will also be installed for you. And now you have installed the Go language, the Python language, and the modules you need to run the server.py file. Now what you need to do is run the Go file.

    But before running the Golang

    22:31

    file, what you need is the GCC compiler. To download it, I will provide you the link for GCC, or you can just see here.

    I'll provide you the link in the description box below. You could just click on that.

    You would be redirected

    22:46

    to this page. Just click on this release, the one from 24th May.

    And what you need to download is this 64 + 32-bit one. Just download it.

    And as it gets downloaded, just open it

    23:05

    and click on the Create button. And after that, the second option, then Next.

    And choose the directory you want to install it in. As I have already installed this, I will choose

    23:22

    the C directory; within C, I will install it in Python projects. Okay.

    23:40

    Okay. I didn't select the folder in C.

    Oh, sorry. Select Python projects, and the same folder, that is integrate ChatGPT.

    24:02

    Okay. So I will click on Next.

    It asks where I want to install it.

    Yes, I want to install it here. And when you click on Install, it will get installed for you guys.

    I've already installed it.

    24:17

    and the process has started, so it will get installed again. So now you have all the requirements to integrate ChatGPT with WhatsApp, that is, all the modules for the server.py

    24:32

    file, and the main.go file, for which you have installed the Go language and the GCC compiler. So you can run both the files, and for that we will close the command prompts and open new command prompts.

    main go file for that you have installed the go language and the GCC compiler. So you could run both the files and for that we will close the command prompts and open the new command prompts.

    24:51

    And this is the folder. So we will open one command prompt here.

    Sorry. And we have to open different command prompts for the two files, for the

    25:06

    Golang file and the Python file. Okay.

    So we will open the command prompt here. And now, to run the Golang file, we need to write the command go run and the

    25:24

    name of the file with its extension, that is main.go. We have executed the file, and this file will provide us with a QR code.

    And we'll scan this QR code with our first device, the one on which we want to integrate ChatGPT. And before that,

    25:42

    we'll run our other file, that is the server.py file.

    And for that we'll open another command prompt. And to run this file, we'll write the command python and the name of the

    25:58

    file, that is server, with the extension py. So we'll see whether a QR code is generated or not.

    And I can see

    26:14

    that it's still linked to the previous device, as we set it up in the demo. We haven't logged it out.

    So I will check on the device. Yeah,

    26:29

    it's still active. So I will just log out from that device and run it again,

    or I will open another command prompt to

    26:46

    run the Golang file again. So to run it, we'll open the command prompt; we'll write cmd for that.

    And here we'll write the command to run the Go file.

    27:08

    And we have done that. And if we look at the server.py file, yeah, it has executed perfectly.

    27:24

    But we were not able to... yeah, our ChatGPT is running fine.

    Now what we have to do is use our first device to scan this QR code so that ChatGPT gets linked to

    27:42

    our first device, and then we'll use another device to chat with ChatGPT. So now we have opened our first device, opened WhatsApp, and clicked on Link devices, and here

    28:00

    we'll scan this QR code, and you can see that it's logging in. Now it's logged in, and now from another device

    28:17

    we'll ask ChatGPT a query: we will ask ChatGPT to write code to add two integers, and that in Python.

    28:34

    So we just misspelled Python, but we hope that ChatGPT understood it. So here we can see that ChatGPT caught the command, and it has

    28:50

    given a good example. So here's example code to add two integers in Python.

    So you can see that ChatGPT has been integrated, and we'll see its response. Yeah, we got the response.

    29:09

    And now we'll ask another question, and that would be: what is the currency of the United States?

    29:27

    Let's see what it responds. The currency of the United States is the United States dollar.

    It is the most commonly used currency in international transactions and is the world's primary reserve currency. So we can see that while it writes all the lines or sentences,

    29:46

    it's being generated in the browser, and after it completes, or stops generating the answer, it sends it to the WhatsApp chat. So here we are done with the project.

    30:02

    Now you guys have understood how to integrate ChatGPT with WhatsApp. What we have done is: we downloaded the repository, and we had to extract all the files that are present in the repository. I will upload all the files.

    You just have to extract them into a

    30:18

    folder and then run the main.go file and the server.py file. And before executing these files, you need to have Golang and Python installed on your system.

    And for Python you need some modules, as we have seen: the playwright module,

    30:34

    the flask module, the os and sys modules, and the virtualenv module. We have seen how to install them, and when you execute the file in the command prompt, you will get errors if these modules are not installed on your system.

    And for Golang

    30:50

    we have installed the GCC compiler. After installing all these requirements, you just have to run both the files.

    And when you run the Golang file, you will get the QR code; just scan it with the device on which you want to integrate ChatGPT. And after that,

    31:07

    from any device, you can just message that number on which you have integrated ChatGPT, and ChatGPT will answer all your queries. Next, we are going to automate WhatsApp using Python with the pywhatkit library and with the help of

    31:22

    ChatGPT. And before starting, I have a quiz question for you guys.

    The question is: how much did Meta, formerly Facebook, spend to acquire WhatsApp in 2014? And your

    31:38

    options are: the first option is $10 billion. The second option is $19 billion.

    Third option is $20 billion and the fourth option is $21 billion. Please answer in the comment section below and we'll update the correct answer in the

    31:53

    pinned comment. You can pause the video, give it a thought, and answer in the comment section.

    So, moving on, now we'll create the project. First we will create a folder for the

    32:10

    project and for that we will create a folder in Python projects

    32:34

    and we'll name it automate WhatsApp using ChatGPT.

    Okay. And inside this we'll open the command prompt

    32:51

    and open our IDE. We want to automate WhatsApp using Python with the help of ChatGPT; we won't write the code on our own.

    We will ask ChatGPT to automate it.

    33:07

    We will create the file and name it main.py.

    And now we'll move to ChatGPT and ask ChatGPT to write code to send messages through WhatsApp using

    33:22

    Python and the pywhatkit library. So we will give the command: send message through WhatsApp using Python and

    33:39

    pywhatkit. Let's see what ChatGPT responds.

    And we have also created an automate WhatsApp using Python video; I will link it in the i button.

    You can check that

    33:56

    out, and we'll see what ChatGPT tells us. Okay.

    So: target phone number, country code, message. Yeah, it could work.

    34:16

    First you need to install the pywhatkit library by running pip install pywhatkit in your terminal or command prompt.

    Okay. Replace target phone number with the target phone number you want to send the message to.

    Country code with the country code of the target phone number.

    34:31

    Message with the message you want to send, hour with the hour, in 24-hour format, at which you want to send the message, and minute with the minute at which you want to send the message.
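    As a sketch of how those pieces fit together: the helper function below is my own illustration, pywhatkit.sendwhatmsg is the library call the generated code uses, and the phone number shown is a made-up placeholder.

```python
def build_send_args(country_code, phone, message, hour, minute):
    # pywhatkit.sendwhatmsg expects the full number with the country code
    # prefixed and a leading '+' -- forgetting the prefix is exactly the
    # kind of "country code missing" error we run into in this tutorial.
    assert not country_code.startswith("+"), "pass the code without '+'"
    assert 0 <= hour < 24 and 0 <= minute < 60, "use 24-hour time"
    return ("+" + country_code + phone, message, hour, minute)

def send(country_code, phone, message, hour, minute):
    import pywhatkit  # deferred: only needed when actually sending
    # Opens WhatsApp Web shortly before the scheduled time and sends it.
    pywhatkit.sendwhatmsg(*build_send_args(country_code, phone,
                                           message, hour, minute))

# Example (placeholder number):
# send("91", "1234567890", "hello, how are you", 15, 18)
```

    Scheduling a time only a minute or so ahead is risky, because WhatsApp Web needs time to load, which is exactly what we see happen in the demo below.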

    Okay. For example, yeah.

    Okay. Got it.

    So we'll copy this code

    34:48

    and paste it in our IDE. But before that, first you need to install the pywhatkit library.

    And for that you can go to the command prompt and write the command

    35:07

    pip install pywhatkit, and press Enter. It states that the requirement is already satisfied, as I have already installed this module, and you can

    35:25

    install it by writing this simple command and you'll get it installed. If you face any error installing it, just comment below and we would be very happy to resolve your queries.

    35:45

    Okay. So, as ChatGPT states, we'll just enter the things it wants from us.

    So it's asking for the target phone number and the country code without the plus sign.

    36:02

    Okay. So here we'll just write all the things, but as you can

    36:17

    see, it has also given us an example to look at and work on the code. So here we'll just clear it.

    36:34

    I will write the phone number of the person I want to send the message to. I will just blur this number so you guys won't be able to see it.

    Okay.

    36:49

    And here we'll just add the country code, and that is without the plus sign. Okay.

    So my country code is...

    37:07

    you can search for it; I live in India, so it's 91. And the message I want to send is hello,

    37:22

    how are you. And now we'll set the hour and minute. The current time is 15:14, so we'll set

    37:44

    15:16. Okay, we'll save this and run it.

    16. Okay, we'll save this and run it.

    37:59

    So it says that the country code is missing. So we'll just copy the error and give it to chat jpd

    38:16

    as we are taking the health of chat jpd in this video. So he has given us the code.

    So we'll just provide the error to him. Let's see what it states.

    If it's not able to resolve this then we will

    38:31

    resolve it indicator country code present the phone number you're trying to send. Okay.

    Yes. And the country code is 91 to send the message correctly.

    You need to make sure. Okay.

    38:49

    1 2 3 4 5 6 7 8 mention the course the provided phone number is this and the country code is 91. Okay,

    39:07

    you need to make sure the country code is prefixed to the phone number like this. 2 3 4 5 6 7 8 9 Okay.

    Uh,

    39:24

    okay. We don't have to make it a string.

    Yeah. And now we'll run it again.

    39:47

    Okay, it is a string already.

    40:04

    So I will write the phone number again.

    Okay. Now we'll see

    40:23

    what I will do: I will write the country code here.

    We'll save this. And the time is 15:17 now.

    So we'll just set it to 15:18.

    40:40

    Save this and run it. So our code has executed successfully; it says that in 20 seconds WhatsApp will open, and after 15 seconds the message will be delivered.

    So we just have to check that we have entered the time

    40:58

    as 15:18, so there are enough seconds available for the code to get executed. Yeah, it has opened WhatsApp.

    It will take time as my WhatsApp has

    41:14

    loads of chats and contacts. Okay, I have to scan it.

    I don't think it will make it, as we have already reached 15:18.

    41:34

    We have scanned it. Let's see whether it will deliver the message or not.

    Otherwise, we'll have to change the time.

    42:00

    So, we just have to wait for 15 seconds. Let's see.

    42:24

    Okay, let's just stop the terminal and run it again for 15:21. So we will save this and run it again.

    42:41

    So it states that in 85 seconds WhatsApp will open, and after 15 seconds the message will be delivered. So we'll fast-forward here.

    43:04

    So we are still waiting. Let's see when it will open WhatsApp.

    43:38

    Okay, it states that the phone number shared via URL is invalid. Okay, we'll just check the phone number again.


    43:58

    Okay, I entered the wrong phone number. Sorry guys.

    So, I will just update the time again and it would be 1522. We'll make it

    44:15

    fast. We'll run this.

Okay, it says that the call time must be greater than the current time. So we'll write 1523.

    Save it.

    44:30

    We will make the time as 1527. Save it and run it.

    Okay, it states that in 40 seconds WhatsApp will open and after 15 seconds

    44:46

    message will be delivered. It has opened the WhatsApp and it has started the chat.

    45:10

Okay, it has typed the message and we have sent it: hello, how are you? That's good.
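The scheduling demo above is based on the pywhatkit library. Here is a minimal sketch of the idea under that assumption; the helper function and the placeholder number are my own illustration, not from the video, and the actual pywhatkit call is shown commented out:

```python
# Hedged sketch of the WhatsApp demo. pywhatkit.sendwhatmsg(phone, message,
# hour, minute) opens WhatsApp Web and sends the message at the given time.
def with_country_code(number: str, code: str = "+91") -> str:
    """Prefix the country code if it is missing - the fix made in the demo."""
    digits = number.lstrip("+")
    if digits.startswith(code.lstrip("+")):
        return "+" + digits
    return code + digits

phone = with_country_code("1234567890")  # placeholder number, not a real one
# import pywhatkit
# pywhatkit.sendwhatmsg(phone, "Hello, how are you?", 15, 18)  # 15:18, as in the demo
```

pywhatkit needs WhatsApp Web logged in, which is why the demo pauses for the QR-code scan.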

So, here's the OpenAI documentation, and you can see the new features introduced with ChatGPT-4o. So, these are the improvements.

    45:28

uh, one is the updated and interactive bar graphs or pie charts that you can create, and these are the features that you can see here. You can change the color, you can download it, and you can update the latest file versions directly from Google Drive and

    45:43

Microsoft OneDrive, and we have interaction with tables and charts in a new expandable view that I showed you here. You can expand it in a new window, and you can customize and download charts for presentations and documents.

    Moreover, you can create the

    45:59

presentation also, which we'll see further on. And here we have how data analysis works in ChatGPT.

You can directly upload files from Google Drive and Microsoft OneDrive. I will show you guys how we can do that

    46:14

    and where this option is. And we can work on tables in real time.

And there we have customized, presentation-ready charts; that is, you can create a presentation with all the charts based on the data provided by you,

    46:31

and, moreover, comprehensive security and privacy features. So with that, guys, we'll move to ChatGPT, and here we have the ChatGPT-4o version.

So this is the pin section, or the insert section, where you have the options to connect to Google Drive, connect to Microsoft

46:48

OneDrive, and you can upload from the computer. This option was already there, that is, upload from computer, and you can upload at most 10 files, which could be Excel files or documents.

    So the max limit is 10 and if

    47:05

you have connected to Google Drive; I'll show you guys. Uh, I'm not connecting here, but you guys can connect it too and upload from there also. And there's another cool update, that is, the

    47:20

ability to code directly in your chat. Uh, so while chatting with ChatGPT, I'll show you guys how we can do that, and you can find some new changes in the layout.

    So this is the profile section. It used to be at the left

    47:36

bottom, but now it's moved to the top right, making it more accessible than ever. So let's start with the data analysis part, and the first thing we need is data.

So you can find it on Kaggle, or you could ask ChatGPT-4o to

    47:51

    provide the data. I will show you guys.

So this is the Kaggle website. You can sign in here and click on Datasets.

You can find all the datasets here, around computer science, education, classification, computer vision. Or else you could move back to

    48:07

ChatGPT and ask the ChatGPT-4o model to generate data and provide it in Excel format. So we'll not ask it "can you"; we'll just ask it to provide a dataset

    48:23

that I can use for data analysis, provided in CSV format. So you can see that it has responded that it can provide a sample dataset, and

    48:38

it has started generating the dataset here. So you can see that it has provided only 10 rows, and it is saying that it will now generate this dataset in CSV format.

First it has provided the visual presentation on the screen, and now it's

    48:55

generating the CSV format. So if you want more data, like 100 rows or a thousand rows, you could specify that in the prompt and ChatGPT will generate it for you.

    So we already have the data. I will import that data.

    You could import it

    49:11

from here, or else you can import it from your Google Drive. So we have the sales data here.

    We will open it. So we have the sales data here.

    So the first step we need to do is data cleaning. So this is the crucial step to

    49:27

    ensure that the accuracy of our analysis is at its best. So we can do that by handling missing values.

That is, missing values can distort our analysis. And here ChatGPT-4o can suggest methods to impute these values, such as using the mean, the median, or a more sophisticated approach

    49:44

based on data patterns. And after handling the missing values, we will remove duplicates and perform outlier detection.

So we'll ask ChatGPT to clean the data if needed.

    50:01

    So we can just write a simple prompt that would be clean the data if needed. And this is also a new feature.

You can see the visual presentation of the data here: we have 100 rows, and the columns provided are sales ID, date,

    50:16

    product, category, quantity and price per unit and total sales. So this is also a new feature that okay uh we just headed back.

We'll move back to our ChatGPT chat

    50:32

    here. Okay, so here we are.

So you can see that ChatGPT has cleaned the data, and

    50:47

it has reported that it checked for missing values, checked for duplicates, and ensured consistent formatting. Okay.
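The same cleaning steps can be sketched locally with pandas. This is an illustration, not the exact code ChatGPT ran; the column names (Quantity, Product, Date) are assumed from the demo's sales data:

```python
import pandas as pd

def clean_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Drop duplicates, impute numeric gaps, and normalise formatting."""
    df = df.drop_duplicates()
    # Impute missing numeric values with the column median (robust to outliers).
    for col in df.select_dtypes("number"):
        df[col] = df[col].fillna(df[col].median())
    # Consistent formatting: parse dates, strip stray whitespace in text columns.
    if "Date" in df:
        df["Date"] = pd.to_datetime(df["Date"])
    for col in df.select_dtypes("object"):
        df[col] = df[col].str.strip()
    return df
```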

So now we will ask it to execute

    51:05

these steps and provide the cleaned data, as ChatGPT has listed these as the steps to clean the data. Let's see.

    51:20

So it has provided a new CSV file with the cleaned sales data. We will download it and ask it to use this same file: use this new cleaned

    51:38

sales data CSV file for further analysis. So you can see that it is suggesting what analysis we can do next.

    But

    51:55

    once our data is clean, the next step is visualization. So visualizations help us understand the data better by providing a graphical representation.

    So the first thing we will do is we will create a prompt for generating the histograms and

    52:10

    we'll do that for the age distribution part. So we'll write a prompt that generate a histogram.

    Generate a histogram to visualize the distribution of customer ages. to visualize

    52:27

    the distribution of customer ages. And what I was telling you guys is this code button.

    If you just select the text and you would find this reply section.

    52:43

Just click on that, and you can see that it has picked up the selected text to get your prompt started in ChatGPT. So we'll close that, and you can see that it

    52:58

has provided the histogram here, and these are the new features here. We can see it's showing a notification that interactive charts of this type are not yet supported; that is, histograms don't have the color-change option.

    I will show you the color change

    53:16

    option in the bar chart section. So these features are also new.

    You can download the chart from here only. And this is the expand chart.

If you click on that, you can see that you can expand the chart here and continue chatting with ChatGPT here. So this is the

    53:31

interactive section. So you can see that it has provided the histogram showing the distribution of customer ages; the ages range from 18 to 70 years, with the distribution visualized in the 15 bins it has created here.
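For reference, a 15-bin histogram like the one above can be reproduced locally with NumPy; the ages here are randomly generated stand-ins for the demo's customer ages:

```python
import numpy as np

# Stand-in customer ages in the 18-70 range used in the demo.
ages = np.random.default_rng(0).integers(18, 71, size=100)
# Bin the ages into 15 bins, matching the chart ChatGPT drew.
counts, edges = np.histogram(ages, bins=15)
# To plot it: import matplotlib.pyplot as plt; plt.hist(ages, bins=15); plt.show()
```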

    And

    53:49

now moving to another visualization, which we'll do by sales region. So before that, I will open the CSV file provided by ChatGPT.

    So you guys can also see what data he has provided.

    54:08

So this is the cleaned sales data, and you can see that we have the columns sales ID, date, product, category, quantity, price per unit, total sales, region and salesperson. So now moving back to the chat.

    So now we will create a bar chart showing total

    54:24

    sales by region. So we'll enter this prompt that create a bar chart showing total sales by region.

    54:39

So what we are doing here is creating bar charts or histograms, but we can do that for only two columns; to create these data visualization charts, we need two columns to do so.

    So you could see that he has provided the response and created the

    54:55

    bar chart here. And this is the interactive section.

You can see that here's an option to switch to a static chart. If we click on that, we don't get any hover information.

    If we scroll on that and if I enable this option, you could see that I can

    55:13

    visually see how many numbers this bar is indicating. And after that, we have the change color section.

    You can change the color of the data set provided. So we can change it to any color that is provided here.

    Or you could just write

    55:29

    the color code here. And similarly we have other two options that is download and under is the expand chart section.

And if you want to see what code it used to produce this bar graph, so this is

    55:45

the code. You could use any IDE to do so.
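That IDE route typically looks like the sketch below; the inline frame stands in for reading the demo's cleaned CSV with pandas, and the column names follow the demo's data:

```python
import pandas as pd

# Stand-in for: sales = pd.read_csv("cleaned_sales_data.csv")
sales = pd.DataFrame({
    "Region": ["North", "South", "North", "East"],
    "Total Sales": [100.0, 300.0, 150.0, 80.0],
})
# Aggregate total sales per region, largest first.
by_region = sales.groupby("Region")["Total Sales"].sum().sort_values(ascending=False)
# by_region.plot.bar() renders the same chart as in the demo.
```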

If you don't want the presentations or visualizations of the bar charts here, you could use your IDE with the Python language, and it will provide the code for you. Just take your dataset

    56:02

    and read it through pandas and generate the bar charts. So moving to next section that is category wise sales section.

    So here we will generate a pie chart showing the proportion of sales for each product

    56:17

    category. So for that we'll write a prompt generate a pie chart showing the proportion of sales

    56:35

    for each product category. So you could see that it has started generating the pie chart and this is also an interactive section.

    56:52

    If you click on that you would be seeing a static pie chart. And if you want to change the color you can change for any section that could be clothing, electronics, furniture or kitchen.

    And similarly we have the download section and the expand chart section. So

    57:08

this is how this new ChatGPT-4o model is better than ChatGPT-4: you can use more interactive pie charts.

You can change the colors for those, and you can just hover over these charts and find all the

    57:24

    information according to them. So after this data visualization, now we'll move to statistical analysis.

    So this will help us uncover patterns and relationships in the data. So the first thing we'll do is correlation analysis and for that we'll write the prompt

    57:41

analyze the correlation between age and purchase amount. So this correlation analysis helps us understand the relationship between two variables.

    So this can indicate if older customers tend to spend more or less. So we will find out that by analyzing

    57:58

the data, and we'll provide a prompt to ChatGPT: analyze the correlation between age and purchase amount.

    58:13

So let's see what it provides. Uh, so here's the response by ChatGPT.

    You could see a scatter plot that shows the relationship between customer age and total sales. That is with a calculated correlation coefficient of

    58:30

    approximately 0.16. So this indicates a weak positive correlation between age and purchase amount suggesting that as customer age increases there's a slight tendency for total sales to increase as well.
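The same Pearson coefficient can be computed directly in pandas. The numbers below are invented for illustration (and, unlike the demo's data, they correlate strongly):

```python
import pandas as pd

df = pd.DataFrame({
    "Age":         [22, 35, 41, 58, 63, 30],
    "Total Sales": [120.0, 150.0, 160.0, 210.0, 190.0, 140.0],
})
# Pearson correlation coefficient between the two columns (ranges from -1 to 1).
r = df["Age"].corr(df["Total Sales"])
# df.plot.scatter(x="Age", y="Total Sales") draws the matching scatter plot.
```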

    So you could just see the scatter

    58:46

plot here: as age increases, it is only weakly correlated with sales, as you can see from the scattered graph here. So for 40 to 50 years of age, or up to 70 years of age, you can find

    59:03

    what amount they have spent here that is the total sales accumulated by these ages. So now moving to sales trend.

    So here we will perform a time series analysis of purchase amount over the given dates. So

    59:18

what this does is: time series analysis allows us to examine how the sales amount changes over time, helping us identify trends and seasonal patterns. So for that we'll write a prompt:

    59:34

    perform a time series analysis of purchase amount over given dates.

    59:56

So you can see that ChatGPT has provided the response, and here's the time series plot showing total sales over the given dates; each point on the plot represents the total sales for a particular day. So through this, you and the

    00:13

businesses can find out which is the seasonal part of the year and when to stock up for those kinds of dates. And after that, you could also do customer segmentation.
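The time-series analysis just shown can be sketched in pandas: sum sales per day, then resample to expose the seasonal trend. The dates and amounts below are invented:

```python
import pandas as pd

sales = pd.DataFrame({
    "Date": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-02", "2024-02-01"]),
    "Total Sales": [100.0, 50.0, 80.0, 120.0],
})
# One point per day, as in the demo's plot.
daily = sales.groupby("Date")["Total Sales"].sum()
# Roll the daily series up to month starts to spot seasonal patterns.
monthly = daily.resample("MS").sum()
# daily.plot() reproduces the time-series chart.
```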

So what does this do? We can use clustering here to

    00:30

    segment customers based on age, income and purchase amount. So clustering groups customers into segments based on similarities.
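To make the clustering idea concrete, here is a tiny k-means sketch in pure Python on a single feature (purchase amount); a real segmentation would use something like scikit-learn's KMeans on age, income and purchase amount together:

```python
def kmeans_1d(values, k=2, iters=20):
    """Toy k-means on one feature: assign each value to its nearest center,
    move each center to the mean of its group, and repeat."""
    centers = sorted(values)[::max(1, len(values) // k)][:k]  # spread-out initial centers
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

spend = [20, 25, 22, 200, 210, 190]  # invented purchase amounts, two clear segments
centers, segments = kmeans_1d(spend)
```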

    This is useful for targeted marketing and personalized services. And after that we have the advanced usage for data analysis.

    Here we can

    00:47

draw a predictive modeling table, do market basket analysis, and perform a customer lifetime value analysis. So we will see one of those: we'll perform a market basket analysis, using association rule mining to find

    01:04

    frequently bought together products. So the theory behind this is the association rule mining helps identify patterns of products that are often purchased together aiding in inventory management and cross-selling strategies.

    01:21

So for that, we'll write a prompt: perform an association rule mining to find frequently bought together products. So we'll write that prompt here:

    perform an association rule mining to find frequently bought

    01:40

products together. So let's see how ChatGPT-4o responds to this prompt.

Uh, so you can see that it's providing code here, but we don't need code

    01:58

    here. We need the analysis.

    Don't provide code. Do the market basket analysis

    02:14

    and provide visualizations.

    02:29

So you can see that, uh, ChatGPT has responded that, given the limitations in this environment, it's not able to do the market basket analysis here.

But it can show us how to perform this in an IDE. So it's saying you can

    02:47

install the required libraries, then prepare the data, and here it is providing the example code. So you can see there are some limitations to ChatGPT-4o also, in that it can't do this advanced data analysis in-chat.
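A full run in an IDE would typically use mlxtend's apriori and association_rules on the prepared data. As a simplified stand-in, counting how often pairs of products appear in the same order already captures the core of "frequently bought together"; the order data below is invented:

```python
from collections import Counter
from itertools import combinations

import pandas as pd

orders = pd.DataFrame({
    "Order ID": [1, 1, 2, 2, 2, 3, 3],
    "Product":  ["bread", "butter", "bread", "butter", "jam", "bread", "jam"],
})
# Count co-occurring product pairs within each order.
pair_counts = Counter()
for _, items in orders.groupby("Order ID")["Product"]:
    for pair in combinations(sorted(set(items)), 2):
        pair_counts[pair] += 1
top_pair, top_count = pair_counts.most_common(1)[0]
```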

So you could use the code in your IDE and

    03:04

do the market basket analysis there.

And now we will ask ChatGPT: can you create a presentation based on the dataset? And we'll provide a dataset to it also.

    03:22

    So we'll provide uh sample sales data and we'll ask him can you create a presentation

    03:37

    or PowerPoint presentation based on this data set and only provide data visualization.

    03:54

graphs. So you can see that ChatGPT-4o has started analyzing the data, and it states that it will start by creating data visualizations from the provided dataset and compile them into a

    04:10

PowerPoint presentation. So you can see that ChatGPT-4o has provided the response, and these are all the slides, or the bar graphs,

    04:28

that it has created. Now we have downloaded the presentation here; we will open it, and here's the presentation created by ChatGPT-4o. On July 25th,

    04:45

OpenAI introduced SearchGPT, a new search tool changing how we find information online. Unlike traditional search engines, which require you to type in specific keywords, SearchGPT lets you ask questions in natural, everyday

    05:00

    language just like having a conversation. So this is a big shift from how we were used to searching the web.

Instead of thinking in keywords and hoping to find the right result, you can now ask SearchGPT exactly what you want to know, and it will understand the

    05:16

context and give you direct answers. It's designed to make searching easier and more intuitive, without going through links and pages.

But with this new way of searching, there are some important questions to consider. Can SearchGPT compete with Google, the search

    05:33

giant we all know? What makes SearchGPT different from AI Overviews, another recent search tool? And how does it compare to ChatGPT, OpenAI's popular conversational AI?

    So in this video we are going to explore these questions and more. We

    05:49

will look at how SearchGPT compares to other tools and why it might change the way we search for information. Whether you are new to tech or just curious, this video will break it down in simple words.

Stick around to learn more about SearchGPT. So

    06:05

what is SearchGPT? SearchGPT is a new search engine prototype developed by OpenAI, designed to enhance the way we search for information using AI.

Unlike a typical chatbot like ChatGPT, SearchGPT isn't just about having a conversation. It's focused on improving the search

    06:22

    experience with some key features. The first one is direct answer.

Instead of simply showing you a list of links, SearchGPT delivers direct answers to your question. For example, if you ask what the best wireless noise-cancelling headphones in 2024 are, SearchGPT will summarize

    06:38

    the top choices highlighting their pros and cons based on expert reviews and user opinions. So this approach is different from the traditional search engines that typically provide a list of links leading to various articles or videos.

    The second one is relevant

    06:53

sources. SearchGPT's responses come with clear citations and links to the original sources, ensuring transparency and accuracy.

So this way you can easily verify the information and delve deeper into the topic if you want. The third one is conversational search: SearchGPT allows you

    07:10

to have a back-and-forth dialogue with the search engine. You can ask follow-up questions or refine your original query based on the responses you receive, making your search experience more interactive and personalized.

Now let's jump into the next topic, which is SearchGPT

    07:25

versus Google. So SearchGPT is being talked about as a major competitor to Google in the future.

    So let's break down how they differ in their approach to search. The first one is conversational versus keyword based search.

    Search GPT uses a conversational

    07:42

interface, allowing users to ask questions in natural language and refine their queries through follow-up questions. This creates a more interactive search experience.

On the other hand, Google relies on keyword-based search, where users enter specific terms to find

    07:57

    relevant web pages. The second thing is direct answer versus list of links.

So, one of SearchGPT's standout features is its ability to provide direct answers to questions. It summarizes information from various sources and clearly cites them.

    So, you don't have

    08:13

to click through multiple links. Google typically presents a list of links, leaving users to sift through the results to find the information they need.

The third one: AI-powered understanding versus keyword matching. SearchGPT uses AI to understand the intent behind your

    08:30

question, offering more relevant results even if your query isn't perfectly worded. Google's primary method is keyword matching, which can sometimes lead to less accurate results, especially for complex queries.

The fourth one: dynamic context versus isolated searches. So, SearchGPT maintains context

    08:47

across multiple interactions, allowing for more personalized responses, whereas Google treats each search as a separate query without remembering previous interactions.

And the last one: real-time information versus indexed web pages. SearchGPT aims to provide the latest

    09:04

information using real-time data from the web, whereas Google's web index is comprehensive but may include outdated or less relevant information.

So now let's jump into the next topic, which is SearchGPT versus AI Overviews. So SearchGPT and AI Overviews both use AI, but they

    09:21

    approach search and information delivery differently. It's also worth noting that both tools are still being developed.

    So their features and capabilities may evolve and even overlap as they grow. So here are the differences.

The first one is source attribution. SearchGPT provides clear and direct citations

    09:37

linked to the original sources, making it easy for users to verify the information. AI Overviews includes links, but the citations may not always be clear or directly associated with specific claims.

The second one is transparency and control. SearchGPT promises greater

    09:53

transparency by offering publishers control over how their content is used, including the option to opt out of AI training. AI Overviews offers less transparency regarding the selection of content and the summarization process used.

    The next one is scope and depth.

    10:11

SearchGPT strives to deliver detailed and comprehensive answers, pulling from a broad range of sources, potentially including multimedia content, while AI Overviews offers a concise summary of key points, often with links for further exploration but with a more limited

    10:27

    scope. Now let's jump into the next part.

SearchGPT versus ChatGPT. SearchGPT and ChatGPT, both developed by OpenAI, share some core features but serve different purposes.

    So here are some differences. The first one is primary purpose.

SearchGPT is designed for

    10:44

search, providing direct answers and sources from the web, whereas ChatGPT focuses on conversational AI, generating text responses.

The second one is information sources. SearchGPT relies on real-time information from the web, whereas ChatGPT's

    10:59

knowledge is based on its training data, which might not be up to date. The third one is response format.

SearchGPT prioritizes concise answers with citations and source links, whereas ChatGPT is more flexible, generating longer text, summaries, creative content, code, and so on.

    The next feature is use cases.

    11:17

SearchGPT is ideal for fact-finding, research, and tasks requiring up-to-date information, whereas ChatGPT is suitable for creative writing, brainstorming, drafting emails, and other open-ended tasks. So now the question arises: when will SearchGPT be

    11:33

released? SearchGPT is currently in a limited prototype phase, meaning it's not yet widely available.

OpenAI is testing it with a select group to gather feedback and improve the tool. So if you are interested in trying SearchGPT, you can join the waitlist on its web page, but

    11:50

you will need a ChatGPT account. A full public release by the end of 2024 is unlikely, as OpenAI hasn't set a timeline.

Today we are diving into an exciting topic: how to make money using ChatGPT, an AI-powered tool that can help you generate passive income. If you are

    12:07

eager to start earning effortlessly, keep watching. Are you looking for ways to generate passive income with minimal effort, thanks to the advancements in artificial intelligence and chatbots?

    You can now earn money using these technologies. So in this video we will explore some of the most effective

    12:24

methods to generate passive income with ChatGPT. ChatGPT, known as the world's smartest generative AI, is changing how people make money online.

With this incredible free tool, you can start earning with little skill and no initial investment required. So we

    12:39

    are in exciting new era of artificial intelligence and now is the perfect time to get involved and seize this opportunity. People are using Chad GBT for YouTube, blogging, freelancing and many other ways to make money.

    So now let's dive in and discover how you can

    12:54

leverage ChatGPT to generate various streams of passive income. There are numerous ways to monetize ChatGPT's capabilities.

So in this video we will explore a few effective strategies, or you can say categories, by giving prompts. So this is my ChatGPT-4o.

    13:10

    I'm using the premium version. Right?

So the first category is getting business ideas from ChatGPT. You can discover how ChatGPT can generate personalized business ideas by understanding your interests, talents and challenges.

    So now

    13:26

let's ask ChatGPT for business ideas tailored to a computer science engineer with experience in digital marketing and sales. Okay?

Or not even a computer science engineer; you can ask as a graphic designer or as a sales marketer, anything, right?

    So I'm giving here

    13:43

prompt: I am a graphic designer with a knack for digital marketing. Okay.

    So I will write what side what

    14:02

    side hustle can I start to generate? Okay, I will give here $500 income per day

    14:20

    with minimal investment dedicating 6 to 8 hours or you can write 9 to 10 hours or 1 to 2 hours. 6 to 8 hours

    14:35

    daily hours daily. Okay.

So now let's see what ChatGPT says. So here: given your skills in graphic design and digital marketing, here are some side hustle ideas that can potentially generate $500

    14:52

    per day. See first is freelance graphic designer.

    Second is print on demand. Third is social media management.

    Sell digital product online. Online coach consultation.

    Affiliate marketing. You can do content creation for YouTube and social media.

So, I'm not saying you can earn like

    15:11

the next day itself; it will take time. But you can take ideas for your business, okay? As per your needs, as per your skills, you can just write the prompt and ChatGPT will tell you the answer or give you some ideas. Okay, so once you have

    15:28

some great ideas, dive deeper with ChatGPT to develop a plan and consider important factors. Okay, you can ask it to elaborate on freelance graphic design, print on demand, or social media management, like this. Okay, so our second category

    15:44

is freelancing itself. Okay, so you can enhance your freelancing career with ChatGPT.

This advanced AI tool helps professionals earn extra income by producing high-quality content that impresses clients. For example, you can write

    16:00

    blog or website content. You can translate languages.

    You can provide email writing services. You can craft compelling headlines and calls to action.

    You can create social media content. You can write captivating short stories or you can conduct hashtag research.

    Okay. So let me give you a

    16:18

    small prompt. Okay.

    So, write me a blog on Great Wall of China in thousand words or you can write in

    16:33

    mutual funds, you can write in stocks, whatever you want. Okay.

    So, as you can see, the Great Wall of China, Marvel of ancient Engineering. So, this is your title.

    Okay. So, the Great Wall of China.

This is the historical overview, then the architectural

    16:50

marvel; it will give you everything. Okay.

So the third category is build software. Okay.

So you can use ChatGPT to develop software solutions for common problems faced by online businesses. Okay.

    Create software tools using the

    17:06

code provided by ChatGPT, and sell them to make money. Okay.

So first, what you can do is create your own online portfolio website. Okay.

    So there you can mention services as a software developer. Okay.

    Or you build software.

    17:23

    Okay. So the first thing is identify common issues in your needs.

Okay. So you can use ChatGPT to list the most common problems in e-commerce businesses, such as inventory management, customer support, or cart abandonment.

    Okay. The

    17:39

second thing is to use ChatGPT to generate code and develop the software solution. Okay.

    Let me give you example. So here you can write generate generate a Python script

    17:56

    for an inventory management right system for an okay spelling mistake. system for an e-commerce

    18:13

    store. Okay.

    So, it will generate you a Python script. Okay.
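The script ChatGPT returns for that prompt is usually along these lines. This is a minimal illustrative sketch; the class, method names and threshold are my own, not the video's exact output:

```python
class Inventory:
    """Toy inventory manager for an e-commerce store."""

    def __init__(self):
        self.stock = {}  # product name -> quantity on hand

    def add(self, product: str, qty: int) -> None:
        self.stock[product] = self.stock.get(product, 0) + qty

    def sell(self, product: str, qty: int) -> None:
        if self.stock.get(product, 0) < qty:
            raise ValueError(f"not enough {product} in stock")
        self.stock[product] -= qty

    def low_stock(self, threshold: int = 5):
        """Products at or below the reorder threshold."""
        return [p for p, q in self.stock.items() if q <= threshold]

inv = Inventory()
inv.add("mug", 10)
inv.add("tee", 3)
inv.sell("mug", 7)
```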

See, it's very easy to earn money using ChatGPT. You have to just give a prompt.

    Okay. With your perfect thought.

    What do you want? What

    18:28

    your client wants? Right?

    This is how you have to give the prompt. Okay?

So this is, uh, the Python code for the inventory management system, right? See, its features, its usage, everything is here. You have to

    18:45

just give your prompt. And the third thing in this build-software category is to market your software to the target audience. You can use ChatGPT to create a marketing strategy, including promotional content, social media posts and email campaigns. Right, so

    19:03

Here I will, uh, write one prompt for this. So, write a marketing plan to promote

    19:20

    an inventory management software for sorry for small

    19:36

e-commerce business or businesses, right.

So here, as you can see: marketing plan for promoting inventory management software for e-commerce businesses. Okay,

    19:51

see, the target audience is small e-commerce owners with a certain annual revenue, then the competitive analysis. So this is your marketing plan: how you can market your product or your service.

Okay. So, I repeat: by following

    20:07

these steps and utilizing ChatGPT's capabilities, you can create valuable software tools, successfully market them to your target audience, and earn a hefty amount of money. Okay.

    So our next category is email marketing

    20:23

with ChatGPT. How you can do cold emailing.

How you can write a perfect email to your client so he or she is impressed with your services or your mail. Okay.

So you can boost your affiliate marketing efforts with ChatGPT's

    20:39

    email expertise. Okay.

So the first step is to choose an affiliate program that aligns with your niche. The second thing is to build an email list of potential customers.

Okay. The third thing is to use ChatGPT to craft engaging emails that

    20:54

    drive conversion. Okay.

    So I will give you example. See, I am a digital marketer looking to promote

    21:11

    a new project management software. So, can you write a compelling

    21:27

    email that will attract potential customers and persuade

    21:45

    them to make a purchase? Okay.

See, first the subject: transform your projects with cutting-edge project management software. So: dear [name], I hope this email

    22:00

finds you well. I'm thrilled to announce... see the key features of the product, see the benefits, and: don't just take our word for it.

Here is what our satisfied clients say; these are the testimonials you can write. Okay.

    And the fun fact is if you don't

    22:16

    like this email, you can ask for the next email. Okay, I want something different.

    It will give you again with a different concept. Okay, with a different thought.

    22:33

Right. The next thing is you can leverage ChatGPT for blogging success.

Right? So I have already written one blog.

    Okay. Again we let's dive into it.

    So ChatGPT can elevate your blogging journey by

    22:49

    assisting in content generation, editing, proofreading, and SEO optimization. The first thing is you can generate ideas, outlines, and drafts with ChatGPT.

    The second thing is enhance readability and reduce errors. The third thing is optimize for search engines

    23:05

    with keyword suggestions and SEO tips. The fourth thing is engage with your audience through personalized content.

    Okay. So let me give you an example.

    Write a blog post

    23:20

    on the US economy. Okay.

    and optimize

    23:35

    optimize it. See: "Understanding the current state of the US economy: an in-depth analysis." You can write anything, okay? This is just an example, right?

    23:51

    So the next thing is affiliate marketing with ChatGPT. What you can do is just select a medium to build your audience. Ask ChatGPT to help you decide whether to focus on articles, audio content like podcasts, or videos, based on your strengths and target

    24:07

    audience. So let me give you an example.

    So you can write what are the pros and cons of using articles,

    24:25

    audio content,

    and video for affiliate

    24:44

    marketing. Okay.

    So which medium would be best for promoting

    25:01

    tech products? You can ask this.

    See, it will give you the pros and cons for articles. Okay.

    Then audio content like podcasts. Then video.

    25:18

    So after reading this, you can decide what you want and what your skills are, right? And the second thing is you can use ChatGPT to craft engaging content that promotes your affiliate products. Let's suppose you chose video,

    25:34

    so you can target video skills. Okay? Or let's suppose you chose articles, so you can write a small prompt like: create a compelling article outline for promoting an affiliate product like a fitness tracker or a bottle or a watch,

    25:52

    anything. And the third thing is to implement a consistent affiliate marketing strategy: use ChatGPT to develop a comprehensive marketing strategy that includes content schedules, promotional tactics, and tracking metrics. For example, you can write: help

    26:07

    me create a consistent affiliate marketing strategy,

    26:26

    including a content calendar and promotional tactics for social media. Okay.

    26:44

    So it will give you marketing strategies. See: select the product, content creation, build a website or blog, email writing; week one you can do this, week two you can do that. And you can ask for it day-wise also, no issues.

    So by following

    27:01

    these steps and utilizing ChatGPT's capabilities, you can effectively build your audience, create engaging promotional content, and implement a successful affiliate marketing strategy. Okay.

    So now let's suppose you have a YouTube channel. So what can you do?

    You

    27:19

    can ask ChatGPT to generate video ideas and a script, making content creation easier. Okay, you just have to write: I want to create a video on what is machine learning.

    27:37

    So: write a script for me in 1,000 words. Okay.

    So it will write

    27:54

    it in 1,000 words. See: opening scene, background music starts, soft text on the screen, and so on. So ChatGPT makes content creation easier, right?

    So you can use AI-powered platforms like Pictory.ai or InVideo.io to convert your script into a

    28:11

    professional video. We even have multiple GPTs here.

    Okay. See for writing you can use these GPTs.

    Okay. I guess these GPTs are available only with the premium version.

    28:27

    I don't know about the 3.5 which is free. Okay.

    So for the productivity you can use Canva. Okay, you can use the diagram thing and you can generate the images.

    See video GPT by V. It's very

    28:44

    easy. Let me show you something.

    Okay. See, generate text to video maker.

    Let's try this. Okay.

    Start a chat. Create a video on what is machine

    29:03

    learning. The target audience is college

    29:20

    students, and I am aiming for engagement.

    29:37

    So you can just fill these details. So later on it will give you the script and the video itself.

    Okay. So this is how you can use ChatGPT to earn money.

    ChatGPT can help you express your ideas creatively, making your videos,

    29:54

    articles, anything, relatable. Okay.

    So these prompts and strategies illustrate how versatile ChatGPT is in helping you make money across various fields. Okay.

    And to earn money using

    30:09

    ChatGPT is very simple. It will take time, but it is very simple.

    Okay. It is less time consuming.

    Hello everyone. Welcome to this session.

    I am Mohan from Simplilearn, and today we'll talk about interview questions for

    30:25

    machine learning. Now, this video will probably help you when you're attending interviews for machine learning positions.

    And the attempt here is to consolidate the 30 most commonly asked questions and to help you in

    30:42

    answering these questions. We tried our best to give you the best possible answers.

    But of course what is more important here is rather than the theoretical knowledge you need to kind of add to the answers or supplement your

    30:58

    answers with your own experience. So the responses that we put here are a bit more generic in nature so that if there are some concepts that you are not clear this video will help you in kind of getting those concepts cleared up as

    31:14

    well. But what is more important is that you need to supplement these responses with your own practical experience.

    Okay. So with that let's get started.

    So one of the first questions that you may face is what are the different types of

    31:30

    machine learning? Now what is the best way to respond to this?

    There are three types of machine learning. If you read any material you will always be told there are three types of machine learning.

    But what is important is you would probably be better off emphasizing

    31:46

    that there are actually two main types of machine learning, which are supervised and unsupervised. And then there is a third type, which is reinforcement learning.

    So supervised learning is where you have some historical data and then you feed that

    32:02

    data to your model to learn. Now you need to be aware of a keyword that they will be looking for which is labeled data.

    Right? So if you just say past data or historical data, the impact may not be so much.

    You need to emphasize on

    32:18

    labeled data. So what is labeled data?

    Basically, let's say you are trying to train your model for classification: you need to know, for your existing data, which class each of the observations belongs to. Right?

    So that is what is labeling. So it is

    32:34

    nothing but a fancy name. You must be already aware.

    just make it a point to throw in that keyword labeled so that will have the right impact. Okay.

    So that is what is supervised learning. When you have existing labeled data which you then use to train your model

    32:52

    that is known as supervised learning and unsupervised learning is when you don't have this labeled data. So you have data it is not labeled.

    So the system has to figure out a way to do some analysis on this. Okay.

    So that is unsupervised learning and you can then add a few

    33:09

    things like what the ways of performing supervised learning and unsupervised learning are, or what some of the techniques are. So in supervised learning we do regression and classification, and in unsupervised learning we do

    33:25

    clustering and clustering can be of different types. Similarly, regression can be of different types.

    But you probably don't have to elaborate so much. If they are asking for just the different types, you can mention these at a very high level.

    But if they want you to elaborate, give

    33:41

    examples, then of course I think there is a different question for that. We will see that later.

    Then the third: so we have supervised, then unsupervised, and then reinforcement. And you need to provide a little bit of information around that as well, because it is sometimes a little difficult to come up with a good definition for

    33:57

    reinforcement learning. So you may have to a little bit elaborate on how reinforcement learning works.

    Right? So reinforcement learning works in such a way that it basically has two parts to it.

    One is the agent and the environment. And the agent basically is

    34:13

    working inside this environment, and it is given a target that it has to achieve. So the agent basically has to take some action, and every time it takes an action which

    34:28

    is moving the agent towards the target, right, towards a goal (a target is nothing but a goal),

    okay, then it is rewarded. And every time it goes in a direction away from the goal, it is punished.

    So that is the way

    34:44

    you can explain it a little bit. This is used primarily, and is very impactful, for teaching the system to learn games and so on. An example of this is AlphaGo. You can throw

    You can throw

    34:59

    that in as an example: AlphaGo used reinforcement learning to actually learn to play the game of Go, and finally it defeated the Go world champion. All right, that much information would be good enough.
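    The reward-and-punishment loop just described can be sketched in a few lines of Python. This is a toy illustration, not AlphaGo: the number line, the goal, and the +1/-1 rewards are all made-up assumptions, and the action is chosen at random rather than learned.

```python
import random

def run_episode(goal=5, steps=20, seed=0):
    """Toy agent/environment loop: the agent moves +1 or -1 on a
    number line; a step that brings it closer to the goal earns a
    +1 reward, a step away from the goal earns -1 (the punishment)."""
    rng = random.Random(seed)
    position, total_reward = 0, 0
    for _ in range(steps):
        action = rng.choice([-1, 1])        # a real agent would learn this choice
        old_distance = abs(position - goal)
        position += action
        total_reward += 1 if abs(position - goal) < old_distance else -1
    return position, total_reward

pos, reward = run_episode()
print(pos, reward)
```

    A learning algorithm would use the accumulated reward to prefer actions that moved it towards the goal; here the loop only shows the reward signal itself.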

    Okay, then there could be a question on overfitting.

    35:17

    so the question could be what is overfitting and how can you avoid it? So what is overfitting?

    So let's first try to understand the concept, because sometimes overfitting may be a little difficult to understand. Overfitting is a situation where the

    35:33

    model has kind of memorized the data. So this is an equivalent of memorizing the data.

    So we can draw an analogy so that it becomes easy to explain this. Now let's say you're teaching a child about recognizing some fruits or something like that.

    Okay? and you're teaching

    35:50

    this child to recognize, let's say, three fruits: apples, oranges, and pineapples. Okay.

    So this is a small child, and for the first time you're teaching the child to recognize fruits. So what will happen?

    So this is very much like that is your training

    36:05

    data set. So what you will do is you'll take a basket of fruits which consists of apples, oranges and pineapples.

    Okay. And you take this basket to this child and uh there may be let's say hundreds of these fruits.

    So you take this basket

    36:21

    to this child and keep showing each of these fruits, and the first time, obviously, the child will not know what it is. So you show an apple and say, hey, this is an apple; then you show maybe an orange and say this is an orange; and then you keep repeating that, right?

    So till that basket is over. This

    36:38

    is basically how training works in machine learning as well. So till the basket is completed, maybe 100 fruits, you keep showing this child, and in the process the child has pretty much memorized these.

    So even before you finish that basket right by

    36:56

    the time you are halfway through, the child has learned to recognize the apple, orange, and pineapple. Now what will happen after the halfway point? Initially, remember, it made mistakes in recognizing, but halfway through it has learned.

    So every time you show a fruit, it

    37:13

    will identify it exactly, 100% accurately: the child will say this is an apple, this is an orange, and if you show a pineapple it will say this is a pineapple. Right? So that means it has kind of memorized this data. Now let's say you bring another basket of fruits, and it will have a mix of maybe

    37:30

    apples, which were already there in the previous set, but in addition to apples it will probably have a banana, or maybe another fruit like a jackfruit. Right?

    So this is an equivalent of your test data set which the child has not seen before. Some

    37:46

    parts of it it probably has seen, like the apples, but the banana and jackfruit it has not seen. So then what will happen? In the first round, which is the equivalent of your training data set, towards the end it was telling you what the fruits are with 100% accuracy:

    38:01

    apples were accurately recognized, oranges were accurately recognized, and pineapples were accurately recognized. So that is like 100% accuracy. But now when you get a fresh set which was not part of the original one, what will happen? All the

    38:17

    apples maybe it will be able to recognize correctly, but all the others, like the jackfruit or the banana, will not be recognized by the child. Right?

    So this is an analogy. This is an equivalent of overfitting.

    So what has happened during the training process? It is able to recognize or reach 100%

    38:34

    accuracy. Maybe very high accuracy.

    Okay? And we call that very low loss, right?

    So that is the technical term. So the loss is pretty much zero and accuracy is pretty much 100%.

    Whereas when you use testing there will be a huge error which means the loss will be

    38:50

    pretty high and therefore the accuracy will be also low. Okay.

    This is known as overfitting: the training process goes very well, almost reaching 100% accuracy, but during testing the accuracy really drops down.

    Now how can you avoid

    39:07

    it? So that is the extension of this question.

    There are multiple ways of avoiding overfitting. There are techniques like regularization, which is the most common technique used for avoiding overfitting, and within

    39:23

    regularization there can be a few subtypes, like dropout in the case of neural networks, and a few other examples. But I think if you give regularization as the technique, that should probably be sufficient. So there

    39:39

    will be some questions where the interviewer will try to test your fundamentals, your knowledge, and your depth of knowledge, and so on. And then there will be some questions which are more like trick questions, meant to stump

    39:54

    you. Okay.
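    The regularization idea mentioned above can be sketched as an extra penalty term in the loss. This is a minimal illustration, not a full training loop; the weights, targets, and the lambda value below are arbitrary assumptions.

```python
def ridge_loss(weights, predictions, targets, lam=0.1):
    """Mean squared error plus an L2 penalty: the lam * sum(w^2) term
    discourages large weights, which is how regularization curbs the
    memorization behavior behind overfitting."""
    mse = sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    penalty = lam * sum(w ** 2 for w in weights)
    return mse + penalty

# With lam=0 this is plain MSE; a larger lam trades training fit
# for smaller, more general weights.
print(ridge_loss([2.0, -1.0], [1.0, 2.0], [1.5, 1.5], lam=0.1))  # 0.25 + 0.5 = 0.75
```

    Dropout plays a similar role for neural networks, but by randomly disabling units during training rather than by adding a penalty term.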

    Then the next question is around the methodology. So when we are performing machine learning training, we split the data into training and test, right?

    So this question is around that. The question is: what are the training set and test set in a machine learning model,

    40:10

    and how is the split done. So the question can be like that.

    So in machine learning, when we are trying to train the model, we have a three-step process.

    We train the model and then we test the model and then once we are satisfied

    40:26

    with the test, only then do we deploy the model. So what happens in train and test? Remember the labeled data.

    So let's say you have a thousand records with labeling information. Now one way of doing it is to use all

    40:43

    thousand records for training, which means that you have exposed all these thousand records during the training process, and then you take a small set of the same data and say, okay, I will test it with this. Okay, and then probably what

    40:59

    will happen is you may get some good results. All right, but there is a flaw there.

    What is the flaw? This is very similar to human beings.

    It is like you are showing this model the entire data as a part of training. Okay.

    So obviously it has become familiar with the entire data. So when you're taking a

    41:16

    part of that again and saying that you want to test with it, obviously you will get good results. So that is not a very accurate way of testing.

    So that is the reason, what we do is: we have the labeled data of these thousand records or whatever. Before starting

    41:32

    the training process, we set aside a portion of that data and call it the test set, and the remaining we call the training set, and we use only this for training our model. Now the training process, remember, is not just about

    41:48

    passing one round of this data set. So let's say your training set now has 800 records. It is not just one time that you pass these 800 records. What you normally do as part of the training is pass this data through the model multiple times.

    It is not just one time you pass this 800 records. What you normally do is you actually as a part of the training you may pass this data through the model multiple times.

    So this

    42:03

    800 records may go through the model maybe 10, 15, 20 times, till the training is perfect, till the accuracy is high, till the errors are minimized. Okay.

    Now, which is fine; that is what is known as the model having seen your data and gotten

    42:20

    familiar with your data. Now when you bring your test data, this is like new data, and that is where the real test is. You have trained the model and now you are testing it with some data which is new to it.

    That is like a situation

    42:35

    like a realistic situation, because when the model is deployed, that is what will happen: it will receive new data, not the data that it has already seen.

    Right? So this is a realistic test.

    So you put in some new data. This data which you have set aside is new for the model, and if it is able to

    42:52

    accurately predict the values that means your training has worked. Okay the model got trained properly.

    But let's say while you're testing this with this test data you're getting a lot of errors. That means you need to probably either change your model or retrain with more

    43:07

    data and things like that. Now coming back to the question of how do you split this?

    What should be the ratio? There is no fixed number.

    Again, this is like individual preferences. Some people split it into 50/50, 50% test and 50%

    43:22

    training. Some people prefer to have a larger amount for training and a smaller amount for test.

    So, they can go with either 60/40 or 70/30, or some people even go with odd numbers like 65/35 or 66.67/33.33, which is like 1/3 and

    43:40

    2/3. So there is no fixed rule that the ratio has to be this or that.

    You can go by your individual preferences. All right.
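    The set-aside idea can be sketched in plain Python. The 80/20 ratio and the seed here are just one of the personal-preference choices discussed above.

```python
import random

def train_test_split(records, test_ratio=0.2, seed=42):
    """Shuffle the labeled records, then hold out test_ratio of them
    as a test set that the model never sees during training."""
    rng = random.Random(seed)
    shuffled = list(records)            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = list(range(1000))                # stand-in for 1000 labeled records
train, test = train_test_split(data, test_ratio=0.2)
print(len(train), len(test))            # 800 200
```

    Libraries such as scikit-learn offer a ready-made version of this, but the mechanics are exactly this: shuffle once, then keep the held-out portion untouched until testing.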

    Then you may have questions around data handling, data manipulation, or data management and preparation.

    43:57

    So these are some questions around that area. There is again no single good answer to this.

    It really varies from situation to situation and depending on what exactly is the problem, what kind of data it is, how critical it is, what kind of data is

    44:14

    missing and what is the type of corruption. So there are a whole lot of things.

    This is a very generic question, and therefore you need to be a little careful about responding to it as well. You probably have to illustrate this.

    If you have experience in doing this kind of work in handling

    44:29

    data, you can illustrate with examples, saying: I was on one project where I received this kind of data; these were the columns where data was not filled, or these many rows were missing data.

    That would be in fact

    44:44

    a perfect way to respond to this question. But if you don't have that obviously you have to provide some good answer.

    I think it really depends on what exactly the situation is, and there are multiple ways of handling missing or corrupted data. Now let's take a few examples.

    Now let's say you

    45:01

    have data where some values in some of the columns are missing and you have pretty much half of your data having these missing values in terms of number of rows. Okay, that could be one situation.

    Another situation could be that you have records or data missing

    45:19

    but when you do some initial calculation of how many rows, or observations as we call them, have this missing data, let's assume it is very minimal, like 10%. Okay.

    Now, between these two cases, how do we handle it? Let's

    45:35

    assume that this is not a mission-critical situation, and that to fix this 10% of the data the effort required is much higher; obviously effort also means time and money, right? So it is not so mission critical, and it

    45:51

    is okay to, let's say, get rid of these records. So one of the easiest ways of handling missing data is to remove those records or observations from your analysis.

    So that is the easiest way to do it, but the downside is, as I said,

    46:06

    in the first case, if let's say 50% of your data is like that because some column or other is missing. It is not that in every row the same column is missing, but maybe in 10% of the records column 1 is missing, in another 10% column 2

    46:22

    is missing, in another 10% column 3 is missing, and so on. So it adds up to maybe half of your data set.

    You cannot remove half of your data set; then the whole purpose is lost. Okay.

    So then how do you handle it? You need to come up with ways of filling in this data with some

    46:38

    meaningful value. Right?

    That is one way of handling. So when we say meaningful value, what is that meaningful value?

    Let's say for a particular column, you might take the mean value of that column and, wherever the data is missing, fill in that mean value, so

    46:53

    that when you're doing the calculations, your analysis is not completely way off. So you have values which are not missing first of all.

    So your system will work. Number two, these values are not so completely out of whack that your whole analysis goes for a toss.

    Right? There

    47:09

    may be situations where, for the missing values, instead of putting the mean, it may be a good idea to fill in the minimum value, or a zero, or the maximum value.

    Again, as I said, there are many possibilities. So there is no one correct answer for this.

    You

    47:25

    need to basically talk around this and illustrate with your experience. As I said, that would be best; otherwise, this is how you need to handle this question.
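    The mean-fill idea can be sketched like this (using None for a missing cell; real data would need per-column handling and a check that at least one value is present):

```python
def impute_mean(column):
    """Replace missing values (None) with the mean of the values
    that are present, so the analysis is not thrown completely off."""
    present = [v for v in column if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in column]

ages = [25, None, 31, None, 40]
print(impute_mean(ages))   # [25, 32.0, 31, 32.0, 40]
```

    Swapping `mean` for `min(present)`, `max(present)`, or `0` gives the other fill strategies mentioned above; which one is right depends on the situation.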

    Okay. So then the next question can be how can you choose a classifier based on a training set data

    47:41

    size. So again, this is one of those questions where you probably do not have a one-size-fits-all answer.

    First of all, you may not want to decide your classifier based on the training set size; that is maybe not the best

    47:58

    way to decide the type of classifier. And even if you have to, there are probably some rules of thumb which we can use, but they don't hold every time.

    So in my opinion, the best way to respond to this question is: you need to try out a few classifiers, irrespective of

    48:14

    the size of the data, and then decide for your particular situation which of these classifiers is the right one. This is a very generic issue.

    So you will never be able to decide just like that: if somebody defines a problem to you, and

    48:29

    even if they show you the data, or tell you what that data is, or even the size of the data, I don't think there is a way to really say, yes, this is the classifier that will work here. No, that's not the right way.

    So you still need to test it

    48:46

    out: get the data, try out a couple of classifiers, and only then will you be in a position to decide which classifier to use. You try out multiple classifiers, see which one gives the best accuracy, and only then can you decide.

    Then you can have a question around confusion

    49:02

    matrix. So the question can be: explain the confusion matrix.

    Right? For the confusion matrix, I think the best way to explain it is by taking an example and drawing a small diagram; otherwise it can become really tricky.

    So my suggestion

    49:17

    is to take a pen and paper and explain it by drawing a small matrix. The confusion matrix is used especially in the classification learning process: when your

    49:33

    model predicts the results, you compare them with the actual values and try to find out what the accuracy is. Okay.

    So let's say this is an example of a confusion matrix, and it is a binary matrix. So you have the actual

    49:49

    values, which is the labeled data, right? So you have how many yeses and how many nos. You have that information, and you have the predicted values: how many yeses and how many nos, right?

    So the total actual values the

    50:05

    total yeses are 12 + 1 = 13, and they are shown here, and the actual-value nos are 3 + 9 = 12. Okay.

    So that is what this information here is. So this is about the actual and this is about the predicted.

    Similarly, for the predicted values there are 12 + 3 = 15 yeses

    50:23

    and 1 + 9 = 10 nos. Okay.

    So this is the way to look at this confusion matrix. Okay.

    And what is the meaning conveyed here? There are two or three things that need to be explained outright.

    The first thing is

    50:38

    for a model to be accurate, the values across the diagonal should be high, like in this case. That is one. Number two, the total sum of these values is equal to the total number of observations in the test data set.

    So in this case for

    50:53

    example, you have 12 + 3 = 15, plus 10, which is 25. So that means we have 25 observations in our test data set.

    Okay. So these are the two things you need to explain first: the total sum of the numbers in this matrix is equal to the size of the test data set, and the diagonal values

    51:12

    indicate the accuracy. So just by looking at it you can get an idea: is this an accurate model?

    Is the model being accurate? If they're all spread out equally in all these four boxes, that means probably the accuracy

    51:27

    is not very good. Okay.

    Now, how do you calculate the accuracy itself? Right?

    It is a very simple mathematical calculation.

    You take the sum of the diagonals, right? In this case it is 9 + 12 = 21, and divide it by

    51:42

    the total. So, in this case, what will it be?

    Let me take a pen. So if we call the diagonal sum D, then D = 12 + 9.

    So that is 21, right, and the total data set is equal to,

    51:58

    we just calculated it is 25. So what is your accuracy?

    Your accuracy is equal to 21 divided by 25, and this turns out to be 84%. Right?

    So this is 84%. So that is our

    52:15

    accuracy. Okay.
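    The diagonal-over-total calculation can be written directly in code; the matrix below uses the example's numbers, with rows for the predicted classes and columns for the actual classes.

```python
def accuracy_from_confusion(matrix):
    """Accuracy = sum of the diagonal (correct predictions)
    divided by the sum of all cells (total test observations)."""
    diagonal = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return diagonal / total

# predicted-yes row: 12 actually yes, 3 actually no
# predicted-no  row:  1 actually yes, 9 actually no
cm = [[12, 3],
      [1, 9]]
print(accuracy_from_confusion(cm))   # 21 / 25 = 0.84
```

    The same function works unchanged for a multi-class matrix, since the correct predictions always sit on the diagonal.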

    So this is the way you need to explain it: draw a diagram and give an example. It may be a good idea to be prepared with an example, so that it becomes easy and you don't have to calculate those numbers on the fly. So a couple of hints: take some numbers which

    52:32

    add up to 100; that is always a good idea, so you don't have to do complex calculations. The total will be 100, and once you find the diagonal values, their sum is directly your percentage. Okay. All right, so the next question can be a related question about false positives

    52:49

    and false negatives. So what is a false positive and what is a false negative?

    Now, once again, the best way to explain this is using a piece of paper and a pen; otherwise it will be pretty difficult to explain. So we use the same

    53:04

    example of the confusion matrix and uh we can explain that. So a confusion matrix looks somewhat like this.

    And we continue with the previous example, where this is the

    53:20

    actual value. This is the predicted value.

    And in the actual values we have 12 + 1 = 13 yeses and 3 + 9 = 12 nos. And in the predicted values there are 12 + 3 = 15 yeses and 1 + 9 = 10 nos.

    Okay.

    53:37

    Now in this particular case which is the false positive. What is a false positive?

    First of all, the second word, positive, is referring to the predicted value. So that means the system has predicted it as positive,

    53:54

    but the real value, and this is where the false comes from, is not positive.

    Okay, that is the way you should understand the term false positive, or even false negative. So, false positive:

    So positive is what your system has predicted. So where is that

    54:11

    system prediction? This is the one. Positive means what? Yes.

    Yes. So you basically consider this row.

    Okay. Now if you consider this row, these are all positive values.

    This entire row is positive values. Okay.

    Now the false positive is the one where the

    54:26

    actual value is negative. The predicted value is positive but the actual value is negative.

    So this is a false positive. Right?

    And here is a true positive. So the predicted value is positive and the actual value is also positive.

    Okay, I hope this is making

    54:41

    sense. Now let's take a look at what a false negative is.

    False negative. So negative is the second term.

    That means that is the predicted value that we need to look for. So which are the predicted negative values?

    This row corresponds to predicted negative values. All right.

    So

    54:57

    this row corresponds to predicted negative values. And what they are asking for is the false one.

    So this is the row for predicted negative values and the actual value is this one right? This is predicted negative and the actual value

    55:13

    is also negative. Therefore this is a true negative.

    So the false negative is this one. Predicted is negative but actual is positive.

    Right? So this is the false negative.

    So this is the way to explain and this is the way to look at false positive and false negative.
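    Reading the four cells off the same matrix in code makes the row/column logic concrete; the layout below follows the drawn example, rows being the predicted class and columns the actual class.

```python
# Rows = predicted (yes, no), columns = actual (yes, no),
# matching the matrix drawn in the example.
cm = [[12, 3],   # predicted yes: 12 actually yes, 3 actually no
      [1, 9]]    # predicted no:   1 actually yes, 9 actually no

tp = cm[0][0]    # predicted yes, actually yes -> true positive
fp = cm[0][1]    # predicted yes, actually no  -> false positive
fn = cm[1][0]    # predicted no,  actually yes -> false negative
tn = cm[1][1]    # predicted no,  actually no  -> true negative
print(tp, fp, fn, tn)   # 12 3 1 9
```

    The second word of each term picks the row (what was predicted); whether it is "true" or "false" comes from comparing against the column (the actual value), exactly as explained above.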

    55:29

    In the same way, there are true positives and true negatives as well. Again, positive, the second term, is what you use to identify the predicted row, right?

    So if we say true positive, we take positive for the predicted part. So

    55:44

    predicted positive is here. Okay.

    And then the first term is for the actual. So true positive.

    True, in the case of actual, means yes, right? So the true positive is this one.

    Okay. And then in case of actual the negative.

    Now we are talking

    56:00

    about, let's say, the true negative. For true negative, the negative is this one and the true comes from here.

    So this is true negative, right? 9 is true negative.

    The actual value is also negative and the predicted value is also negative. Okay?

    So that is the way you need to explain

    56:16

    these terms: false positive, false negative, true positive, and true negative. Then you might have a question like: what are the steps involved in the machine learning process, or what are the three steps in the process of developing a machine learning

    56:33

    model? Right, so it is about the methodology that is applied. You can probably answer in your own words, but the development of a machine learning model happens like this.

    First of all, you try to understand the problem and try to figure out whether it

    56:49

    is a classification problem or a regression problem. Based on that, you select a few algorithms and then you start the process of training these models.

    Okay. So, you can either do that, or after due diligence you can

    57:05

    decide that there is one particular algorithm which is most suitable. Usually it happens through a trial-and-error process, but at some point you will decide: okay, this is the model we are going to use.

    Okay. So in that case we have the model algorithm and the model decided and then you need

    57:22

    to train the model and test the model. This is where, if it is supervised learning, you split your labeled data into a training data set and a test data set, and you use the training data set to train your

    57:37

    model, and then you use the test data set to check the accuracy, whether it is working fine or not. So you test the model before you actually put it into production.

    Right? So once you test the model, you're satisfied, it's working fine, then you go to the next level which is putting it for production and

    57:54

    then in production obviously new data will come and uh the inference happens. So the model is readily available and only thing that happens is new data comes and the model predicts the values whether it is regression or classification.

    Now so this can be an iterative process. So it is not a

    58:09

    straightforward process where you do the training do the testing and then you move it to production. No.

    So during the training and test process there may be a situation where, because of overfitting or things like that, the test doesn't go through, which means that you need to put the model back into the

    58:26

    training process. So that can be an iterative process.

    Not only that, even if the training and testing go through properly and you deploy the model in production, there can be a situation where, on the real data that actually comes in, the model is failing. In which case you

    58:42

    may have to once again go back to the drawing board. Or initially it will be working fine, but over a period of time, maybe due to a change in the nature of the data, the accuracy will once again deteriorate. So that is again an iterative process.

    So once in a while

    58:57

    you need to keep checking whether the model is working fine or not, and if required you need to tweak it and modify it, and so on and so forth. So net-net, this is a continuous process of tweaking the model, testing it and making sure it is up to date.
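The train, test, deploy loop described above can be sketched with scikit-learn (an assumption; any ML library follows the same shape, and the toy data and model choice are illustrative):

```python
# Sketch of the workflow: split labelled data, train, test, then "deploy".
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Toy labelled data: one feature, two cleanly separated classes.
X = [[0], [1], [2], [3], [4], [10], [11], [12], [13], [14]]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# Split the labelled data into a training set and a held-out test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)              # training step

accuracy = model.score(X_test, y_test)   # testing step, before "production"

# "Inference in production": new, unseen data arrives and gets predicted.
prediction = int(model.predict([[13]])[0])
```

If the test accuracy is poor, you loop back to training, exactly the iterative process described above.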

    Then you might have question around deep

    59:13

    learning. So because deep learning is now associated with AI artificial intelligence and so on.

    The question can be as simple as: what is deep learning? I think the best way to respond could be that deep learning is a part of machine learning, and then obviously

    59:30

    the question would be: then what is the difference, right? So for deep learning, you need to mention there are two key parts that the interviewer will be looking for when you are defining deep learning.

    So first is of course deep learning is a subset of machine learning. So machine learning is still the bigger let's say

    59:46

    scope and deep learning is one part of it. So then what exactly is the difference?

    Deep learning is primarily when we are using neural networks for doing our training and

    00:02

    classification and regression and all that, right? So when we use neural networks it is considered deep learning, and the term deep comes from the fact that you can have several layers in a neural network; these are called deep neural networks, and

    00:17

    therefore the term deep learning. The other difference between machine learning and deep learning, which the interviewer may want to hear, is that in the case of machine learning the feature engineering is done manually.

    What do we mean by

    00:33

    feature engineering? Basically, when we are trying to train our model, we have our labeled training data, and if it is a regular table it has several columns. Now each of

    00:49

    these columns actually has information about a feature. So if we are trying to predict things like height and weight, say from census data, those are all features of human beings. Now there may be 50, or in some cases there

    01:06

    may be 100 such features. Now all of them do not contribute to our model.

    Right? So we as a data scientist we have to decide whether we should take all of them all the features or we should throw away some of them because again if we

    01:21

    take all of them number one of course your accuracy will probably get affected but also there is a computational part. So if you have so many features and then you have so much data it becomes very tricky.

    So in case of machine learning we manually take care of identifying the

    01:37

    features that do not contribute to the learning process and thereby we eliminate those features and so on. Right?

    So this is known as feature engineering and in machine learning we do that manually. Whereas in deep learning where we use neural networks the model will automatically determine

    01:54

    which features to use and which not to use, and therefore feature engineering is done automatically. So this is the explanation.

    These are the two key things that will probably add value to your response. All right.

    So the next question is what is the difference

    02:10

    between or what are the differences between machine learning and deep learning. So here this is a a quick comparison table between machine learning and deep learning.

    Machine learning enables machines to make decisions on their own based on past data. So here we are

    02:26

    talking primarily of supervised learning. It needs only a small amount of data for training and works well on low-end systems.

    So you don't need large machines, and most features need to be identified in advance and manually coded. So basically the feature engineering part is done

    02:43

    manually and uh the problem is divided into parts and solved individually and then combined. So that is about the machine learning part.

    In deep learning, deep learning basically enables machines to take decisions with the help of artificial neural network. So here in

    02:58

    deep learning we use neural networks. That is the key differentiator between machine learning and deep learning. Usually deep learning involves a large amount of data, and therefore the training process requires high-end machines, because it needs a lot of

    03:14

    computing power. And in deep learning the feature engineering is done automatically. So the neural network takes care of the feature engineering as well.

    And in case of deep learning therefore it is said that the problem is handled end to end. So this is a quick comparison

    03:31

    between machine learning and deep learning. In case you have that kind of a question then you might get a question around the uses of machine learning or some real life applications of machine learning in modern business.

    The question may be worded in different ways

    03:47

    but the meaning is: how exactly is machine learning used, or, specifically, supervised machine learning. It could be a very specific question around supervised machine learning.

    So this is like give examples of supervised machine learning use of supervised machine learning in modern business. So that

    04:04

    could be the next question. So there are quite a few examples or quite a few use cases if you will for supervised machine learning.

    The very common one is email spam detection. So you want to train your application or your system to

    04:20

    detect between spam and non-spam. So this is a very common business application of a supervised machine learning.

    So how does this work? The way it works is that you obviously have historical data of your emails and they

    04:37

    are categorized as spam and not spam. So that is the labeled information, and then you feed all these emails as input to your model, and the model will then get trained to detect which of the emails

    04:54

    are spam and which are not. So that is the training process, and this is supervised machine learning because you have labeled data.

    You already have emails which are tagged as spam or not spam and then you use that to train your model. Right?
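The spam example above can be sketched roughly like this with scikit-learn (the emails, labels, vocabulary and model choice are all illustrative assumptions, not a production design):

```python
# Train a classifier on a few labelled emails, then predict on a new one.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "win a free prize now",            # spam
    "claim your free money prize",     # spam
    "meeting agenda for monday",       # not spam
    "project status report attached",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam: this is the labelled data

# Turn raw text into word-count features the model can learn from.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

model = LogisticRegression()
model.fit(X, labels)

# A new email arrives; the trained model predicts its class.
new_email = vectorizer.transform(["free prize waiting claim now"])
is_spam = int(model.predict(new_email)[0])
```

The point is the shape of the workflow: labelled historical emails in, a trained classifier out.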

    So this

    05:09

    is one example. Now there are a few industry specific applications for supervised machine learning.

    One of the very common ones is in healthcare diagnostics. In healthcare diagnostics, you have these images and you want to

    05:25

    train models to detect, from a particular image, whether the person is sick or not, whether the person has cancer or not. Right?

    So this is a very good example of supervised machine learning. Here the way it works

    05:41

    is that existing images, which could be X-ray images, MRIs or any such images, are available, and they are tagged saying, okay, this X-ray image shows the person has an illness, which could be cancer or whichever illness. So each image is tagged as

    05:58

    defective or clear, good image and defective image, something like that. So we come up with a binary labeling, or it could be multiclass as well, saying this one is 10% defective, this one 25%, and so on.

    But let's keep it simple. You can give an example of just a binary

    06:14

    classification that would be good enough. So you can say that in healthcare diagnostics using image we need to detect whether a person is ill or whether a person is having cancer or not.

    So here the way it works is you feed labeled images and you allow the

    06:31

    model to learn from that, so that when a new image is fed in, it will be able to predict whether the person has that illness, has cancer or not. So I think this would be a very good example of supervised machine learning in modern business. All right.

    06:47

    Then, since we have been talking about supervised and unsupervised learning, there can be a question around semi-supervised machine learning. So what is semi-supervised machine learning?

    Now semi-supervised learning as the name suggests it falls

    07:05

    between supervised learning and unsupervised learning. But for all practical purposes it is considered as a part of supervised learning.

    And the reason this has come into existence is that in supervised learning you need

    07:20

    labeled data. So all your data for training your model has to be labeled.

    Now this is a big problem in many industries and in many situations. Getting the labeled data is not that easy, because there is a lot of effort in labeling this data.

    Let's take

    07:36

    an example of the diagnostic images. We can just let's say take X-ray images.

    Now there are actually millions of X-ray images available all over the world. But the problem is they are not labeled.

    So the images are there but whether it is

    07:53

    defective or whether it is good, that information is not available along with the image in a form that can be used by a machine. Which means that somebody has to take a look at these images, and usually it should be a doctor, and then say, okay, yes, this image is

    08:10

    clean and this image is cancerous and so on and so forth. Now that is a huge effort by itself.

    So this is where semi-supervised learning comes into play. What happens is there is a large amount of data, maybe a part of it labeled, and then we try some techniques

    08:27

    to label the remaining part of the data, so that we get completely labeled data, and then we train our model. I know this is a little long-winded explanation, but unfortunately there is no quick and easy definition of semi-supervised machine learning.
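One concrete technique for this is self-training, which scikit-learn ships as a wrapper; a minimal sketch, assuming made-up one-dimensional data where unlabeled points are marked `-1`:

```python
# A little labelled data plus some unlabelled data (label -1); the wrapper
# pseudo-labels the unlabelled points itself, then trains on everything.
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.linear_model import LogisticRegression

X = [[0], [1], [2], [10], [11], [12], [3], [9]]
y = [0, 0, 0, 1, 1, 1, -1, -1]  # -1 means "label unknown"

model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y)

pred = [int(p) for p in model.predict([[1], [11]])]
```

Self-training is only one of several possible techniques; the idea above, using a partly labelled set to label the rest, is what matters for the interview answer.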

    This

    08:43

    is probably the only way to explain this concept. We may have another question such as: what are unsupervised machine learning techniques, or what are some of the techniques used for performing unsupervised machine learning.

    So it can

    09:00

    be worded in different ways. So how do we answer this question?

    For unsupervised learning, you can say that there are two types of techniques: clustering and association. Clustering is a technique where similar objects are put together, and there are

    09:17

    different ways of finding similar objects. Their characteristics can be measured, and if they are similar in most of the characteristics, they can be put together.

    This is clustering. Then association: I think the best way to explain association is with an example.

    In case

    09:34

    of association, you try to find out how items are linked to each other. For example, if somebody bought, say, a laptop, the person has also purchased a mouse.

    So this is more in an

    09:49

    e-commerce scenario for example. So you can give this as an example.

    So people who are buying laptops are also buying mice. That means there is an association between laptops and mice, or maybe people who are buying bread are also buying butter.

    So that is a

    10:06

    association that can be created. So this is one of the unsupervised learning techniques.
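The bread-and-butter association above can be made concrete with a tiny market-basket sketch (the transactions and the confidence measure shown are illustrative; real systems use algorithms like Apriori):

```python
# Count how often items are bought together and derive a confidence score.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"laptop", "mouse"},
    {"laptop", "mouse", "bag"},
]

def confidence(item_a, item_b):
    # Of the baskets containing item_a, what fraction also contain item_b?
    with_a = [t for t in transactions if item_a in t]
    both = [t for t in with_a if item_b in t]
    return len(both) / len(with_a)

bread_to_butter = confidence("bread", "butter")   # 2 of 3 bread baskets
laptop_to_mouse = confidence("laptop", "mouse")   # 2 of 2 laptop baskets
```

A high confidence score is exactly the "people who buy X also buy Y" association described above.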

    Okay. All right.

    Then we have a very fundamental question. What is the difference between supervised and unsupervised machine learning?

    So machine learning these are the two main

    10:22

    types of machine learning. Supervised and unsupervised.

    And in the case of supervised learning, the key word that the person may be wanting to hear is labeled data. Very often people say, yeah, if we have historical data it is supervised, and if

    10:38

    we don't have historical data it is not. But you may have historical data, and if it is not labeled, you cannot use it for supervised learning. So it is very key that we put in that keyword: labeled.

    Okay. So when we have labeled data for training our model

    10:55

    then we can use supervised learning and if we do not have labeled data then we use unsupervised learning and there are different algorithms available to perform both of these types of uh trainings. So there can be another

    11:10

    question a little bit more theoretical and conceptual in nature. This is about inductive machine learning and uh deductive machine learning.

    So the question can be what is the difference between inductive machine learning and deductive machine learning or somewhat

    11:26

    in that manner. So that the exact phrase or exact question can vary.

    They can ask for examples and things like that but that could be the question. So let's first understand what is inductive and deductive training.

    Inductive training is induced by somebody and you can illustrate that with a small example. I

    11:43

    think that always helps. So whenever you're doing some explanation try as much as possible as I said to give examples from your work experience or give some analogies and that will also help a lot in in explaining as well and for the interviewer also to understand.

    11:58

    So here we'll take an example or rather we will use an analogy. So inductive training is when we induce some knowledge or the learning process into a person without the person actually experiencing it.

    Okay. What can be an example?

    So we can probably tell the

    12:16

    person, or show a person a video, that fire can burn his finger or that fire can cause damage. So what is happening here?

    This person has never probably seen a fire or never seen anything getting damaged by fire. But

    12:32

    just because he has seen this video, he knows that fire is dangerous and can cause damage, right? So this is inductive learning.

    Compared to that, what is deductive learning? Here the person draws a

    12:49

    conclusion out of experience. So we will stick to the analogy.

    So compared to showing a video, let's assume a person is allowed to play with fire, and he figures out that if he puts his finger in, it burns, or if he throws something into the fire, it burns. So he

    13:05

    is learning through experience. So this is known as deductive learning.

    Okay. So you can have applications or models that can be trained using inductive learning or deductive learning method.

    All right, I think uh probably that explanation

    13:21

    will be sufficient. The next question is: are KNN and K-means clustering similar to one another, or are they the same?

    Right? Because the letter K is common between them.

    Okay. So let us take a little while to understand what

    13:36

    these two are. One is KNN and another is K means.

    KNN stands for K nearest neighbors, and K-means of course is the clustering mechanism. Now these two are completely different, except for the letter K being common between them.

    KNN

    13:51

    is completely different. K means clustering is completely different.

    KNN is a classification process and therefore comes under supervised learning, whereas K-means clustering is actually unsupervised. Okay.

    When you

    14:07

    want to implement KNN, which is basically K nearest neighbors, the value of K is a number. So you can say K equals three: you want to implement KNN with K equal to three.

    So which means that it performs the classification in such a way that

    14:23

    how does it perform the classification? It will take the three nearest objects, and that's why it is called nearest neighbors.

    So basically, based on distance, it will find its nearest objects, let's say the three nearest objects, and then it will check which class they belong

    14:40

    to. So if all three belong to one particular class, obviously this new object is also classified as that class.

    But it is possible that they may be from two or three different classes. Okay.

    So let's say they are from two classes. And then

    14:55

    if they are from two classes. Now, usually you assign an odd number to K. So say there are three of them, and two belong to one class and one belongs to another class.

    So this new object is assigned to the class to which the two of them belong. Now the

    15:11

    value of K is sometimes tricky: should you use three, five or seven? That can be tricky because the resulting classification can also vary. So it's possible that if you take K as three, the object falls in one particular class.

    But if

    15:29

    you take K is equal to 5, maybe the object will belong to a different class. Because when you're taking three of them, probably two of them belong to class one and one belong to class two.

    Whereas when you take five of them, it is possible that only two of them belong

    15:46

    to class one and three of them belong to class two, which means that this object will belong to class two. So you see, the class allocation can vary depending on the value of K.
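The K=3 versus K=5 flip just described can be reproduced with scikit-learn (the points and classes are illustrative, chosen deliberately so the two values of K disagree):

```python
# With k-nearest neighbours, the predicted class can change with K.
from sklearn.neighbors import KNeighborsClassifier

X = [[0.5], [4.4], [4.8], [5.5], [6.0], [7.0]]
y = [0, 0, 0, 1, 1, 1]

query = [[5.0]]  # a new object to classify

# K=3: the three nearest neighbours are 4.8, 5.5, 4.4 -> majority class 0.
pred_k3 = int(KNeighborsClassifier(n_neighbors=3).fit(X, y).predict(query)[0])

# K=5: the neighbours now also include 6.0 and 7.0 -> majority class 1.
pred_k5 = int(KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(query)[0])
```

Same data, same query point, different K, different class: exactly the sensitivity to K described above.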

    Now K-means, on the other hand, is a clustering process, and it is unsupervised. What it does is the

    16:04

    system will basically identify how close the objects are with respect to some of their features. Okay.

    But the similarity, of course, is the letter K. In the case of K-means we also specify its value, and it could be three or five or seven.

    There

    16:20

    is no technical limit as such but it can be any number of clusters that uh you can create. Okay.
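The K-means behaviour just described can be sketched with scikit-learn (the points are made up so they visibly form two groups):

```python
# Unlabelled points are grouped into K clusters of similar objects.
from sklearn.cluster import KMeans

# No labels here, just points that form two obvious groups.
X = [[1], [2], [3], [20], [21], [22]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = [int(l) for l in kmeans.labels_]

# Points within the same group end up with the same cluster label.
low_group_same = labels[0] == labels[1] == labels[2]
high_group_same = labels[3] == labels[4] == labels[5]
```

Note that no `y` is passed to `fit`, which is what makes this unsupervised, in contrast to the KNN example.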

    So based on the value that you provide, the system will create that many clusters of similar objects. So there is a similarity to the extent that K is a number in both cases, but

    16:38

    actually these two are completely different processes. Next, we have what is known as the Naive Bayes classifier, and people often get confused thinking that Naive Bayes is the full name of the person who developed this classifier, which is not

    16:53

    100% true. Bayes, B-a-y-e-s, is the name of a person, but Naive is not part of anyone's name.

    Right? So naive is basically an English word and that has been added here because of the nature of this particular classifier.

    The Naive Bayes classifier is a

    17:11

    probability-based classifier, and it makes the assumption that the presence of one feature in a class is not related to the presence of any other feature. Right?

    So this is not a very, what do

    17:27

    you say, accurate assumption, because these features can be related and so on. But even if we go with this assumption, the whole algorithm works very well, and that is the good side of it. The term naive comes from that.
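The independence assumption just described can be made concrete with a tiny hand-rolled sketch (the word counts, priors and add-one smoothing scheme are all illustrative):

```python
# "Naive" independence lets us multiply per-word probabilities per class.
spam_words = ["free", "prize", "free", "money"]
ham_words  = ["meeting", "report", "agenda", "free"]
vocab = set(spam_words + ham_words)

def word_prob(word, words):
    # P(word | class), with add-one (Laplace) smoothing so an unseen
    # word doesn't zero out the whole product.
    return (words.count(word) + 1) / (len(words) + len(vocab))

def class_score(message, words, prior):
    score = prior
    for w in message:
        score *= word_prob(w, words)  # independence: probabilities multiply
    return score

message = ["free", "prize"]
spam_score = class_score(message, spam_words, 0.5)
ham_score = class_score(message, ham_words, 0.5)
```

The message is assigned to whichever class scores higher; the naive part is precisely that the per-word probabilities are multiplied as if the words were unrelated.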

    So that is the explanation

    17:42

    that you can give. Then there can be a question around reinforcement learning. It can be phrased in multiple ways.

    One could be can you explain how a system can play a game of chess using reinforcement learning or it can be any game. So the

    17:57

    best way to explain this is again to talk a little bit about what reinforcement learning is about and then elaborate on that to explain the process. So first of all, reinforcement learning has an environment and an agent.

    And the agent is basically

    18:13

    performing some actions in order to achieve a certain goal. And this goal can be anything.

    If it is related to a game, then the goal could be that you have to score as high as possible, or it could be that

    18:29

    your uh number of lives should be as high as possible. Don't lose lives.

    So these could be some of them. A more advanced examples could be for driving the automotive industry self-driving cars.

    They actually also make use of reinforcement learning to teach the car

    18:45

    how to navigate through the roads and so on and so forth. That is also another example.

    Now how does it work? Basically, there is an agent in the environment, and every time the agent takes a step or performs a task which is taking it towards the final

    19:02

    goal, let's say to maximize the score or the number of lives, or minimize the deaths, it is rewarded. And every time it takes a step which goes against that goal, in the reverse direction, it is penalized. Okay, so it is like a carrot

    19:19

    and stick system. Now, how do you use this to create a system that plays a game of chess?

    Now, the way this works is, and this could probably go back to this AlphaGo example where AlphaGo defeated a human champion. So, the way it works is in

    19:35

    reinforcement learning, in this case for chess, we first of all allow the system to watch a game of chess being played.

    So it could be with a human being or it could be the system

    19:51

    itself. There are computer games of chess, right?

    So either this new learning system has to watch that game, or watch a human being play it, because this kind of reinforcement learning is pretty much all visual. So

    20:07

    when you're teaching the system to play a game, the system will not actually go behind the scenes to understand the logic of your software of this game or anything like that. It is just visually watching the screen and then it learns.

    Okay. So, reinforcement learning to a

    20:22

    large extent works on that. So, you need to create a mechanism whereby your model will be able to watch somebody playing the game and then you allow the system also to start playing the game.

    So, it pretty much starts from scratch. Okay.

    20:39

    Right at the beginning, the system really knows nothing about the game of chess.

    Okay. So, initially it is a clean slate.

    It just starts by observing how you are playing. So, it will make some

    20:55

    random moves and keep losing badly. But then, over a period of time, it learns.

    So you now need to allow the system to play, or play with it yourself, not just four or five times, but hundreds of times, thousands of times,

    21:11

    maybe even hundreds of thousands of times. That's exactly how AlphaGo did it: it played millions of games against itself. So for the game of chess also, you need to do something like that: you need to allow the system to play chess and then

    21:28

    learn on its own through repetition. So I think if you explain it to this extent, it should be sufficient. Now this is another question which is again somewhat similar, but here the size is not coming

    21:43

    into picture. So the question is how will you know which machine learning algorithm to choose for your classification problem.

    Now this is not only classification problem it could be a regression problem. I would like to generalize this question.

    So if somebody asks you how will you choose how will you know which algorithm to use? The

    21:59

    simple answer is that there is no way to decide upfront exactly which algorithm you are going to use. There are some guidelines: for example, depending on the problem, you can say whether it is a classification problem

    22:16

    or a regression problem and then in that sense you are kind of restricting yourself to if it is a classification problem there are you can only apply a classification algorithm right to that extent you can probably let's say limit the number of algorithms but now within

    22:32

    the classification algorithms you have decision trees, you have SVM, you have logistic regression. Is it possible to outright say,

    for this particular problem, given what you have explained, this is the exact algorithm you should use? That is not possible.

    Okay. So

    22:49

    we have to try out a bunch of algorithms see which one gives us the best performance and best accuracy and then decide to go with that particular algorithm. So in machine learning a lot of it happens through trial and error.
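The trial-and-error selection described above can be sketched with scikit-learn (the candidate models and synthetic data are illustrative):

```python
# Fit several candidate classifiers and compare cross-validated accuracy.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}

# 5-fold cross-validation gives each model a fair accuracy estimate.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}

best = max(scores, key=scores.get)  # keep whichever performs best
```

Which model wins depends entirely on the data, which is the point: you try a shortlist and measure, rather than decide by inspection.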

    There is no real possibility that

    23:06

    anybody can just by looking at the problem or understanding the problem tell you that okay in this particular situation this is exactly the algorithm that you should use. Then the questions may be around application of machine learning and this question is specifically around how Amazon is able

    23:23

    to recommend other things to buy. So this is around recommendation engine.

    How does it work? How does the recommendation engine work?

    So this is basically what the question is all about. The recommendation engine works based on various inputs that are provided.

    Obviously something like uh

    23:39

    an e-commerce site like Amazon collects a lot of data around customer behavior: who is purchasing what, and whether somebody buying a particular thing is also buying something else. This is the kind of association, and this is the

    23:55

    unsupervised learning we talked about. They use this to associate and link or relate items and that is one part of it.

    So they kind of build association between items saying that somebody buying this is also buying this. That is one part of it.

    Then they also profile

    24:12

    the users based on their age, gender and geographic location. They will do some profiling, and then when somebody is logging in and shopping, a mapping of these two things is done. Obviously, if you have logged

    24:28

    in, they know who you are, and your information is available: your age, maybe your gender, where you are located, what you purchased earlier. So all this is taken, and the recommendation engine

    24:43

    basically uses all this information and comes up with recommendations for a particular user. So that is how the recommendation engine works.

    All right. Then the question can be uh something very basic like when will you go for classification versus regression, right?

    25:01

    When do you do classification instead of regression, or when will you use classification instead of regression?

    So this is basically going back to the basics of classification and regression. Classification is used when you have to identify or categorize things into

    25:19

    discrete classes. So the best way to respond to this question is to take up some examples and use it.

    Otherwise it can become a little tricky. The question may sound very simple but explaining it can sometimes be very tricky.

    When you answer, of course, there will

    25:34

    be some keywords that they will be looking for, so you just need to make sure you use those keywords.

    One is discrete values and the other is continuous values. So if you are trying to predict continuous values, you use regression.

    Whereas if you're trying to find some discrete

    25:49

    values, you use classification. And then you need to illustrate what are some of the examples.

    So classification is like let's say there are images and you need to put them into classes like cat, dog, elephant, tiger something like that. So that is a classification problem or it

    26:07

    can be called a multiclass classification problem. It could also be a binary classification problem, for example whether a customer will buy or not buy. Or it can be in the weather forecast area.

    Now weather

    26:23

    forecast is again combination of regression and classification because on the one hand you want to predict whether it's going to rain or not. That's a classification problem.

    That's a binary classification right? Whether it's going to rain or not rain.

    However, you also have to predict what is going to be the

    26:39

    temperature tomorrow. Right?

    Now, temperature is a continuous value. You can't answer the temperature in a yes or no kind of a response.

    Right? So, what will be the temperature tomorrow?

    So, you need to give a number which can be like 20°, 30° or whatever, right? So, that is where you use regression.

    One

    26:56

    more example is stock price prediction. So, that is where again you will use regression.

    So, these are the various examples. You need to illustrate with examples and make sure you include those keywords: discrete and continuous.
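    To make the discrete-versus-continuous distinction concrete, here is a minimal sketch. The humidity threshold and the toy data are made-up illustrations, not part of the course material:

```python
def classify_rain(humidity, threshold=70.0):
    """Classification: a discrete answer (1 = rain, 0 = no rain)."""
    return 1 if humidity > threshold else 0

def predict_temperature(xs, ys, query):
    """Regression: a continuous answer from a simple least-squares line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * query
```

    The first function can only ever answer 0 or 1; the second can answer any number, which is exactly the discrete/continuous split the interviewer is listening for.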

    So the next question is more about a

    27:12

    little bit of a design related question to understand your concepts and things like that. So it is how will you design a spam filter?

    So how do you basically design or develop a spam filter? I think the main thing here is that the interviewer is probably looking at understanding your

    27:28

    concepts: what algorithm you would use, your understanding of the difference between classification and regression, and of course the methodology and the process. The best way to respond is to say that

    27:46

    this is a classification problem because we want to find out whether an email is a spam or not spam so that we can apply the filter accordingly. So first thing is to identify what type of a problem it is.

    So we have identified that it is a classification. Then the second step may

    28:01

    be to find out what kind of algorithm to use. Since this is a binary classification problem, logistic regression is a very common choice.

    However, as I said earlier, we can never say that for a given problem one particular algorithm is

    28:17

    exactly the one to use. So we can also try decision trees or support vector machines (SVM), for example.

    So we will kind of list down a few of these algorithms and we will say okay we want to we would like to try out these algorithms and then we

    28:35

    go about taking your historical data, which is labeled data. You will have a bunch of emails marked spam or not spam, and then you split that into training and test data sets.

    You use your training data set to train a model for each of the algorithms you

    28:52

    have chosen.

    Let's say you are trying to test out three algorithms. So you will obviously have three models.

    So you need to try all three models and test them out as well. See which one gives the best

    29:08

    accuracy and then you decide that you will go with that model. Okay.
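    A toy version of this workflow, with a hypothetical keyword-based model standing in for the real algorithms, might look like this (the spam words are made up for illustration):

```python
import random

def train_test_split(data, test_ratio=0.25, seed=0):
    """Shuffle labeled emails and split into training and test sets."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

# Hypothetical spam keywords; a real model would learn features from data.
SPAM_WORDS = {"win", "free", "prize"}

def predict_spam(text):
    """1 = spam, 0 = not spam."""
    return 1 if set(text.lower().split()) & SPAM_WORDS else 0

def accuracy(model, dataset):
    """Fraction of (text, label) pairs the model gets right."""
    return sum(model(text) == label for text, label in dataset) / len(dataset)
```

    In practice you would train several candidate models on the training split, score each on the test split with `accuracy`, and keep the winner.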

    So training and testing will be done, then you zero in on one particular model, say this is the model we will use, and go ahead and implement it or put it in production. That is the way you

    29:24

    design a spam filter. The next question is about random forest.

    So what is random forest? So this is a very straightforward question.

    However, you again need to be a little careful with the response. While we all know what a random forest is, explaining it can sometimes be tricky.

    So one thing is

    29:41

    random forest is, in one way, an extension of decision trees, because it is basically multiple decision trees. If it is classification (and mostly it is), you use the

    29:57

    trees for classification and then use voting to find the final class. So that is the underlying idea.

    But how will you explain this? How will you respond to this?

    So first thing obviously we will say that random forest is one of the algorithms and the more important thing that you need to

    30:13

    probably the interviewer is waiting to hear is "ensemble learner". A random forest is one type of ensemble learner. What is an ensemble learner? It is a combination of algorithms: a learner which consists of more than one algorithm or

    30:30

    more than one model. In the case of random forest, instead of using one instance of the decision tree algorithm, we use multiple instances of it. In that way, a random forest is an ensemble learner.

    30:45

    There are other types of ensemble learners where we combine different algorithms themselves, say a logistic regression and a decision tree together, and there are other ways too, like splitting the data in certain ways.

    So that's all

    31:00

    about ensemble learning; we will not go into all of it here.

    I think the interviewer will be happy to hear the term "ensemble learner", and then you go on to explain how the random forest works. If the random forest is used for classification, then we use what is

    31:16

    known as a voting mechanism. So basically how does it work?

    Let's say your random forest consists of 100 trees. Okay.

    You pass each observation through this forest. Let's say it is a binary classification problem, 0 or 1, and you have 100

    31:32

    trees. Now if 90 trees say it is a zero and 10 say it is a one, you take a vote, and since 90 of them are saying zero, you classify it as zero. Then you take the next observation, and so on.

    So that is the way the random forest works

    31:49

    for classification. If it is a regression problem, it is somewhat similar, except for how the outputs are combined. Remember, in regression you actually calculate a value, right?

    So for example you're using regression to predict the temperature and you have 100 trees and

    32:06

    each tree obviously will probably predict a different value of the temperature. They may be close to each other but they may not be exactly the same value.

    So you have these 100 trees. How do you now find the actual output value for the entire forest?

    So

    32:22

    you have outputs of individual trees which are a part of this forest but then you need to find the final output of the forest itself. So how do you do that?

    So in case of regression you take like an average or the mean of all the 100 trees. Right?
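    The two aggregation rules described here, majority vote for classification and the mean for regression, can be sketched directly. The tree outputs are simulated, since growing real trees is beside the point:

```python
from collections import Counter
from statistics import mean

def forest_classify(tree_votes):
    """Majority vote: if 90 of 100 trees say 0, the forest says 0."""
    return Counter(tree_votes).most_common(1)[0][0]

def forest_regress(tree_predictions):
    """Regression forest: average the per-tree predictions."""
    return mean(tree_predictions)
```

    The speaker's 90-versus-10 example falls straight out of `forest_classify`, and averaging in `forest_regress` is exactly the error-smoothing effect discussed next.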

    So this is also a way of

    32:37

    reducing the error. If you have only one tree and that one tree makes an error, it is basically 100% wrong or 100% right.

    But if, on the other hand, you have a bunch of trees, you are mitigating and reducing that error.

    Okay. So that

    32:53

    is the way random forest works. So the next question is considering the long list of machine learning algorithms how will you decide on which one to use?

    So once again here there is no way to outright say that this is the algorithm

    33:08

    that we will use for a given data set. This is a very good question.

    But the response has to reflect that there is no one-size-fits-all answer. So, first of all, you can probably shorten the list by saying

    33:24

    okay, whether it is a classification problem or a regression problem. To that extent you can shorten the list, because you don't have to consider all of them.

    If it is a classification problem, you can only pick from the classification algorithms. Right?

    So for example if it is a

    33:40

    classification problem, you cannot use the linear regression algorithm, and if it is a regression problem, you cannot use logistic regression. To that extent you can shorten the list, but you still will not be able to decide with 100% certainty that this is the exact

    33:58

    algorithm to use. So the way to go about it is: you choose a few algorithms based on what the problem is, you train models with these algorithms on your data, check which one gives you the lowest error or the highest accuracy, and based

    34:14

    on that you choose that particular algorithm. Okay.
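    That shortlist-then-compare process can be sketched as follows; the model names and the toy predictors here are hypothetical stand-ins for real trained models:

```python
def accuracy(predict, validation):
    """Fraction of (input, label) pairs the predictor gets right."""
    return sum(predict(x) == y for x, y in validation) / len(validation)

def pick_best(models, validation):
    """models: dict of name -> predict function; returns the most accurate name."""
    return max(models, key=lambda name: accuracy(models[name], validation))
```

    Given two candidates, the one scoring higher on the held-out data wins, which is exactly the selection rule described above.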

    All right. Then there can be questions around bias and variance.

    So the question can be: what are bias and variance in machine learning? You just need to give a definition for each of these.

    For

    34:30

    example, bias in machine learning occurs when the predicted values are far away from the actual value. That is bias.

    With high bias, the predicted values may be far off from the target, but they are still very near to

    34:47

    each other. Right?

    They are far off from the actual value, yet close to each other. You see the difference.

    So that is bias. And then the other part is your variance.

    Now variance is when the predicted values are all over the place. Right?

    So the variance is high. That means it may be

    35:04

    close to the target but it is kind of very scattered. So the points the predicted values are not close to each other.

    Right? In case of bias the predicted values are close to each other but they are not close to the target.

    But here they may be close to the target but they may not be close to each other.

    35:19

    So they are a little bit more scattered. That is the case with variance.
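    Numerically, the two notions can be sketched like this; the predictions and target below are made-up numbers for illustration:

```python
from statistics import mean, pvariance

def bias(predictions, target):
    """How far the average prediction sits from the true value."""
    return mean(predictions) - target

def spread(predictions):
    """Variance of the predictions: how scattered they are around their own mean."""
    return pvariance(predictions)
```

    Predictions like [10.0, 10.1, 9.9] against a target of 20 have large bias but tiny spread; [15.0, 25.0, 20.0] against the same target have zero bias but large spread.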

    Okay. Then the next question is about again related to bias and variance.

    What is the trade-off between bias and variance? I think this is an interesting question because these two

    35:36

    are heading in different directions. So for example, if you try to minimize the bias, variance will keep going high and if you try to minimize the variance, bias will keep going high and there is no way you can minimize both of them.

    So you need to have a trade-off saying that

    35:52

    okay, this is the level at which I will have my bias and this is the level at which I will have my variance. So the trade-off is pretty much that you decide what level you will tolerate for bias and what level you will tolerate for variance,

    36:08

    and a combination of the two such that your final results are not way off. Having a trade-off will ensure that the results are consistent, meaning the outputs are close to each other, and also

    36:24

    accurate, meaning they are as close to the target as possible. If either of these is high, the results will go off track. The next question is: define precision and recall. Here again, I think it would be best to draw a diagram, take the

    36:42

    confusion matrix, and the definition is a simple formula: precision is true positives divided by (true positives plus false positives), and recall is true positives divided by (true positives

    36:58

    plus false negatives). You can just show it mathematically; that is the easiest way to define them.
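    As code, the two formulas are one-liners over the confusion-matrix counts:

```python
def precision(tp, fp):
    """Of everything predicted positive, the fraction that really was positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything actually positive, the fraction we caught."""
    return tp / (tp + fn)
```

    For example, 90 true positives with 10 false positives gives a precision of 0.9, while the same 90 true positives with 30 false negatives gives a recall of 0.75.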

    So the next question can be about decision trees. What is decision tree pruning and why is it done?

    So basically

    37:15

    decision trees are really simple to implement and understand, but one of their drawbacks is that they can become highly complicated as they grow. Right?

    And the rules and the conditions can become very complicated

    37:31

    and this can also lead to overfitting which is basically that during training you will get 100% accuracy but when you're doing testing you'll get a lot of errors. So that is the reason pruning needs to be done.

    So the purpose or the

    37:46

    reason for doing decision tree pruning is to reduce or cut down on overfitting. And what is decision tree pruning?

    It is basically that you reduce the number of branches because as you may be aware a tree

    38:02

    consists of the root node and then there are several internal nodes and then you have the leaf nodes. Now if there are too many of these internal nodes that is when you face the problem of overfitting and pruning is the process of reducing those internal nodes.
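    One simple form of pruning, cutting the tree off at a maximum depth and collapsing deeper subtrees into leaves, can be sketched as below. The nested-dict tree representation is made up for this example, not any particular library's:

```python
def prune(node, max_depth):
    """Keep at most max_depth levels of children below this node; anything
    deeper collapses into a leaf carrying that node's majority label."""
    if max_depth == 0 or not node["children"]:
        return {"label": node["label"], "children": []}
    return {"label": node["label"],
            "children": [prune(c, max_depth - 1) for c in node["children"]]}

def depth(node):
    """Number of node levels from this node down to its deepest leaf."""
    return 1 + max((depth(c) for c in node["children"]), default=0)
```

    Shrinking the depth removes internal nodes, which is exactly the overfitting-reduction mechanism described above.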

    All right. So the

    38:19

    next question can be: what is logistic regression? So basically, logistic regression is one of the techniques used for performing classification, especially binary classification.

    Now there is something special about

    38:35

    logistic regression and there are a couple of things you need to be careful about. First of all the name is a little confusing.

    It is called logistic regression but it is used for classification. So this can be sometimes confusing.

    So you need to probably

    38:51

    clarify that to the interviewer if required, and they can also ask this as a trick question. That is one part. The second thing is that the term "logistic" has nothing to do with the usual logistics we talk about; it is derived

    39:07

    from the logistic function, whose mathematical derivation involves the log, and therefore the name logistic regression. So what is logistic regression and how is it used?

    So logistic regression is used for binary classification and the output of a logistic regression is either a zero

    39:24

    or a one. It basically calculates a probability between 0 and 1, and we can set a threshold, which can vary but is typically 0.5.

    So any probability above 0.5 is considered a one, and if the probability

    39:40

    is below 0.5 it is considered a zero. So the system calculates a probability and, based on the threshold, outputs a value of zero or one, which is binary classification.
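    The probability-plus-threshold behaviour can be sketched with the sigmoid function. Here `z` stands in for the model's linear score, which a trained model would compute from the input features:

```python
import math

def sigmoid(z):
    """Squash any real number into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def logistic_predict(z, threshold=0.5):
    """Turn the probability into a 0/1 class using the threshold."""
    return 1 if sigmoid(z) >= threshold else 0
```

    A score of exactly 0 maps to a probability of 0.5, positive scores push toward class 1, and negative scores toward class 0.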

    Okay then we have a question around k

    39:57

    nearest neighbor algorithm. So explain k nearest neighbor algorithm.

    So first of all what is a k nearest neighbor algorithm? This is a classification algorithm.

    So that is the first thing we need to mention and we also need to mention that the k is a number. It is an

    40:15

    integer, and it is variable; we can define what the value of k should be. It can be 2, 3, 5, 7, and usually it is an odd number.

    That is something we need to mention. Technically it can be an even number also, but typically it would be an odd number, and we will see why that

    40:31

    is. Okay.

    So based on that we need to classify objects. Okay, we need to classify objects.

    So again, it will be very helpful to draw a diagram. You know, if you're explaining, I think that will be the best way.

    So draw some

    40:46

    diagram like this. And let's say we have three clusters or three classes existing.

    And now a new item has come, and you want to find out which class it belongs to.

    Right? So, as the name suggests, you go about finding the nearest

    41:03

    neighbors, right? The points which are closest to it. How many of them you consider is what is defined by k.

    Now let's say our initial value of K was five. So you will find the K the five nearest data points.

    So in this case as

    41:19

    it is illustrated, these are the five nearest data points, but all five do not belong to the same class or cluster. One belongs to cluster one, a second one belongs to cluster two, and three of them belong to the third cluster.

    Okay.

    41:35

    So how do you decide? That is exactly why we should, as much as possible, try to use an odd number for k, so that it becomes easier to assign the class. In this case you see that if there are multiple classes, you go with the majority.

    So since

    41:51

    three of these items belong to this class, we assign the new point to it, which in this case is the green, or third, cluster as I was talking about. So we assign it to this third class, and

    42:06

    that's how it is decided. So for k nearest neighbors, the first thing is to identify the number of neighbors, k. In this case k equals five, so we find the five nearest points and then determine, out of these five, which class has the maximum

    42:23

    number among them, and then the new data point is assigned to that class. Okay, so that's pretty much how k nearest neighbors work.
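    The whole procedure fits in a few lines; the 2-D points and the query below are made-up data standing in for the clusters in the diagram:

```python
import math
from collections import Counter

def knn_classify(points, query, k=5):
    """points: list of ((x, y), label). Return the majority label
    among the k points nearest to the query."""
    nearest = sorted(points, key=lambda p: math.dist(p[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

    With an odd k, ties between two classes among the neighbors are less likely, which is the reason the course suggests odd values.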

    All right, and with that we have come to the end of our full course. If you have any doubts or questions, ask them in the comment section below.

    Our team of experts will

    42:39

    reply to you as soon as possible. Thank you, and keep learning with Simplilearn.

    Hi there. If you liked this video, subscribe to the Simplilearn YouTube channel and click here to watch similar videos.

    To sign up and get certified, you can check the description box below.