Day-30 | KUBERNETES IS EASY | INTRODUCTION TO KUBERNETES | #k8s #devopscourse #kubernetes #devops


Category: DevOps Training

Tags: Containers, DevOps, Docker, Kubernetes, Orchestration

Entities: Abhishek, Amazon, Borg, CNCF, DevOps, Docker, Flipkart, Google, Kubernetes, Netflix, Podman, Prometheus


Summary

    Introduction to Kubernetes
    • Kubernetes is a crucial component in DevOps, often appearing in job descriptions as a key skill.
    • It is considered the future of DevOps, especially for those aiming for long-term careers in the field.
    • Kubernetes is a container orchestration platform, distinct from Docker, which is a container platform.
    Docker vs. Kubernetes
    • Docker is a container platform that simplifies container management but lacks advanced orchestration features.
    • Kubernetes addresses several limitations of Docker, such as its single-host scope and its lack of auto-healing and auto-scaling.
    • Kubernetes is designed to support enterprise-level applications, unlike Docker, which is more minimalistic.
    Key Features of Kubernetes
    • Kubernetes operates as a cluster, allowing for better resource management and fault tolerance.
    • It offers auto-healing, automatically restarting containers that fail.
    • Kubernetes supports auto-scaling to handle increased load, unlike Docker.
    • Enterprise-level support in Kubernetes includes features like load balancing and firewall integration.
    Takeaways
    • Learn Kubernetes to stay relevant in DevOps and access better job opportunities.
    • Understand the differences between Docker and Kubernetes to effectively use both.
    • Focus on mastering Kubernetes basics to build a strong foundation for advanced learning.
    • Utilize Kubernetes' auto-scaling and auto-healing features to manage applications efficiently.
    • Explore Kubernetes' enterprise features to enhance application security and performance.

    Transcript

    00:00

    Hello everyone, my name is Abhishek and welcome back to my channel. Today we are at day 30 of our complete DevOps course, and in this class we will start our journey with Kubernetes. So firstly: Kubernetes is easy. Yes, you

    00:16

    heard it right. Many people worry about Kubernetes, and whenever we talk about DevOps, people spend most of their time on CI/CD solutions. I've seen many people focusing a lot on building pipelines and talking about live CI/CD projects, and even if you

    00:33

    search on YouTube you'll find a bunch of CI/CD-related content, and everybody is focused on it. But the key player in the market is Kubernetes. Take 20 resumes, search for DevOps-related jobs, and if you don't find Kubernetes you can definitely come back and, in the

    00:49

    comment section, ask me, "I did not see Kubernetes in one of the job descriptions, but you are saying Kubernetes is important." Trust me, that will never happen, because in all of the job descriptions you will see Kubernetes. Kubernetes is the future.

    01:04

    Kubernetes is the future of DevOps. If you only want to do a short sprint, if you just want to get into DevOps and be around in the field, then yes, you can

    01:20

    survive by just learning simple DevOps topics. You can do projects on CI/CD, and you might also find some simple DevOps job roles that offer you positions, but that's not really DevOps; it is basically build and release engineering, or some

    01:37

    release engineer role that offers you a job on just CI/CD and other DevOps tasks. But if you want to, like I told you, do a marathon in DevOps, if you want a long run in your DevOps journey, Kubernetes is the future. Now, why am I saying this? Because everybody knows that

    01:54

    people have started moving towards microservices; in fact, not just these days, people have been moving towards microservices very actively for the last six to seven years. And you all know that for the last six classes we have been talking about Docker and containers, so I have

    02:11

    explained enough about why containers are very important. If you haven't watched our previous videos, from day 24 to day 29 if I am not wrong, we talked about containers: the evolution of containers, an introduction to containers, the importance of containers, and we also did

    02:27

    some live projects on containers using Docker. So if you haven't watched those videos, I highly recommend you watch them before you jump on to Kubernetes, because the only prerequisite for learning Kubernetes is Docker. Now why do I say Docker and not containers?

    02:43

    Because most people are more acquainted with the term Docker; they relate to it more when we say Docker, but in general I am talking about containers. So if you want to be a key player in this space, you need to understand the concept of containers

    02:59

    very well. When I say understand containers, I don't mean simple Docker commands; I'm not talking about Docker commands at all. Whenever I speak to you in my videos, what I always tell you is to get strong with your basics, so I am only talking

    03:16

    about the basics of containers. That means you need to understand what a container brings into the picture, how it is different from a virtual machine, what networking isolation is, what namespace isolation is, and why containers are very lightweight in nature. You all know that containers are lightweight, but why

    03:32

    are they lightweight? How do you secure your containers? We talked about distroless images, we talked about multi-stage Docker builds; learn about all of these things before you start your journey with Kubernetes. Now, I am assuming all of you have watched my previous videos from day 24 to day 29, or you already know the

    03:48

    concepts of containers and Docker very well. This is the assumption I'm making as I go forward with today's video. So I'm telling you again: if you haven't watched the previous videos, or if you do not have an understanding of containers, then don't go forward; instead, go back and watch

    04:04

    the previous videos. Now, the first question you should ask before learning Kubernetes is: what is the difference between Docker and Kubernetes? Because from day 24 to day 29 I explained to you

    04:20

    how to build projects on Docker. We deployed some real-time applications, we secured the containers using distroless images and multi-stage builds, and we looked at the life cycle of Docker and containers, and at Docker as a container platform. What was

    04:37

    Docker? I also explained in the previous class that Docker is a container platform. That means Docker handles your interaction with containers; it makes your container journey very easy, because it provides a complete container life cycle, whether through the Docker Engine or through the

    04:53

    Docker CLI. But then what is Kubernetes? If you already have a container platform, and the container platform is offering you a lot of things, what is Kubernetes? As a textbook definition, Kubernetes is nothing but a

    05:10

    container orchestration platform. So what is Kubernetes? Kubernetes is a container orchestration platform. This is just the textbook definition; as a one-line answer you can say that Docker is a container platform whereas

    05:27

    Kubernetes is a container orchestration platform. But that by itself does not make you understand anything, right? If I just tell you, "learn Kubernetes, Kubernetes is a container orchestration platform," and close the topic here, you will not understand anything if you are a beginner with Kubernetes. So

    05:44

    let's try to understand the practical implication: what do I mean by a container orchestration platform, and how is it different from Docker or a container platform? Let's try to see. Now, if you have worked with containers before, one thing you will notice is that

    05:59

    containers are ephemeral in nature. I have used this term previously as well; if it hasn't registered with you yet, ephemeral just means something that is short-

    06:16

    lived, short-living in nature. That means containers can die and be revived at any time. So what do I mean by this?

    06:33

    Say you have a host; let's assume this is your host, on top of which you have installed Docker, and on top of which you created, let's say, some 100 containers. Now one of these containers starts taking a lot of memory; all of a sudden it starts consuming a

    06:49

    lot of memory, and this impacts your 99th or your 50th container. Why does it impact them? Because those containers are not getting enough resources, so they will die, or if one is not yet scheduled, it will not get started

    07:07

    at all. So basically the life of a container is very short: if there is a lack of memory or resources, or if there is some issue with the container, like the image not getting pulled, in any of these cases the container will immediately die, because there is only

    07:23

    one host here, and on top of that I installed 100 containers. There is no relation between container number one and container number 100, but still, what is happening here is that container one, which is consuming a lot

    07:38

    of memory, is killing container 100. It is not killing it directly, but because of how Linux works. If you want to understand this in depth: there is a priority allocated to each process in Linux, so what happens is that when

    07:54

    one of the processes is taking a lot of memory, then depending on your Linux kernel rules, the kernel has to decide which process to kill. Let's say there are 100 processes; it cannot randomly kill

    08:09

    process number 50, right? So there is a particular algorithm in the kernel using which it kills one of the processes. So here, in this case, what I am telling you is that container one was taking a lot of resources, because of which container 100 is not getting created at all, or it is

    08:25

    dying outright. So this is one use case. What is the problem here? You have only one host, on top of it you have installed Docker and created 100 containers, and one of the containers is dying because of another one. So what is the one problem that

    08:40

    we have learned here? Single host. This is problem number one. Now let's move to problem number two. But first, what exactly is problem number one? The nature of Docker, of this container platform, is that it is scoped to one single host:

    08:57

    because there is only one single host, the containers on it impact one another, and if one container is impacting another, there is no way the affected container can come up. So this is problem number one. Now, for problem number two, let's say

    09:13

    somebody has killed one of your containers. Now what will happen? If somebody kills the container, immediately the application that is running inside the container will not be

    09:29

    accessible, right? What is the reason it is not accessible? Because the container got killed, and unless there is a user or a DevOps engineer who starts this container again, nothing happens; somebody has to act on the container that got killed. So

    09:46

    the behavior we want is called auto-healing. What is auto-healing? Auto-healing is a behavior where, without the user's manual intervention, your container should start by itself.

    10:04

    But does that happen in Docker? Let's say you are playing around with your Docker containers on your personal laptop; does the container come up automatically? No, it does not. Now, there are hundreds of reasons

    10:22

    why your container can go down. The problem is that a DevOps engineer cannot continuously monitor some ten thousand containers. On your personal laptop you might have just one single container, but when you are working in

    10:38

    production, or working within your organization, you will see some ten thousand containers. You cannot always run docker ps, right? A DevOps engineer cannot constantly perform the docker ps command and check which containers are in a running state. So there has to be a mechanism, which is

    10:55

    called auto-healing. Auto-healing is a very important feature that Docker, or any plain container platform, is missing by itself. So what is problem number two? Auto-healing. What was problem number one? The single-host nature of your Docker containers. Now let's move on and see what

    11:12

    problem number three is. Problem number three is called auto-scaling. Let us try to understand this problem. Again, let's take the same example: you have an

    11:27

    EC2 instance or a physical host on top of which you install Docker, and on top of which, let's assume, you just created one container, just for easy understanding. Normally you would not create just one container, but let's say that you

    11:43

    created one container, and this host has some resources, let's say 4 GB of RAM and 4 CPUs. Now this container can consume at most 4 CPUs and 4 GB of RAM,

    12:01

    because that is the maximum capacity of your host. In general your container will not get all of the host's resources, because the host itself has a lot of processes running, but for easy understanding let's say this container can go up to a maximum of 4 CPUs and 4 GB. Now, for some

    12:19

    reason, what has happened is this: your application has some 10,000 users, but during a festival season, let's say Christmas or

    12:36

    Dussehra or any festival season, your users all of a sudden went from 10,000 to 1 lakh. This happens all the time, right? Let's say a movie is released on Netflix and it is a very popular movie, something like a Marvel or Avengers film or any

    12:53

    popular actor's movie. Usually Netflix might receive load from 10,000 users, but on this particular occasion Netflix will receive some 1 million, or 1 lakh, or however much the load increases. So to satisfy

    13:10

    this increasing load, or for the application's containers to act on the increasing load, you need a specific feature called auto-scaling. Now what is auto-scaling? As soon as the load increases, there are two ways to respond: one is manual,

    13:29

    where you scale the application yourself. There is only one container, right? So manually you increase the container count from one to, let's say, 10, because the load has increased 10 times; I am just giving an example. So manually you go from one to ten containers, similar containers like C1:

    13:46

    you create 10 copies of the C1 container. Or it has to happen automatically: as soon as the platform sees the load, it has to immediately understand that

    14:01

    the load is increasing, so it has to scale up by itself. But Docker supports neither of these things. So let's say you have one container called C1, and manually, say you want to go from 10,000 to 20,000 user requests; what you have

    14:18

    to do is create another C1 container, and apart from this you have to configure load balancing. If there is no load balancing, you cannot tell users, "for the first 10,000 users, access my application at 172.16.3.4, and for the next 10,000

    14:35

    users, access my application on 172.16.3.5." That is not possible; Netflix will never tell you this, right? All that you do as an end user is access netflix.com and watch your favorite movie. So what is

    14:52

    happening under the hood is that there is a load balancer which is actually distributing the load. Whether you are doing it manually or automatically, whether you increase the container count manually from one to ten or your platform increases it from 1 to 10 automatically, under the hood

    15:07

    there is a load balancer. This load balancer basically says, "okay, now instead of one container there are two, or instead of two there are three, or ten, so let me split the load equally." So there has to be this mechanism of a load

    15:23

    balancer which supports your auto-scaling. So this is feature number three, which is missing in Docker. Now, what are the three different things I told you? The first problem we had with containers is the single host:

    15:39

    the Docker platform basically relies on one single host. Whether you install it on your laptop or an EC2 instance, you are installing Docker on that one specific machine and running containers on top of it; whether you have

    15:55

    10 containers or 100 containers, you are simply running them on that specific host and serving the traffic from there. Problem number two is auto-healing: your containers are not able to heal

    16:11

    automatically. If a container dies, then a DevOps engineer has to keep track of these 10,000 or 1 lakh or 1 million containers and restart them by hand, or restart them only when customers report that your application has gone down, which is a very bad user experience. Problem number three,

    16:27

    which I just explained, is auto-scaling. And problem number four, the final problem I want to bring up here, is that Docker is a very minimalistic, very simple platform. What do I mean by a simple platform? By default,

    16:44

    Docker does not provide any enterprise-level support for your applications. Now,

    16:59

    what do I mean by that? Let's say you don't know Docker at all; what are the minimal things you require for running your application, even on a virtual machine? It's not like running your college project; a college project you can just run on your laptop. But when we deal with enterprise applications, or when we

    17:16

    deal with enterprise solutions, there are a lot of things to handle. For example, your application should have a load balancer; let's keep writing these down: a firewall. Without a load balancer your application is not enterprise-ready, without a firewall your

    17:32

    application is not enterprise-ready. There are several such parameters; let's keep writing them down. What else? Your application has to auto-scale, or at least it has to support scaling; your application has to auto-heal, or at

    17:47

    least it has to support healing; and then your application has to support things like API gateways. If you keep writing, this list will keep growing. So what are

    18:02

    these things called? Enterprise-level standards. But Docker does not support these enterprise-level standards by default, and unless you go to Docker Swarm or other higher-level Docker tooling, by default, if

    18:19

    you're just using Docker, none of these things are supported; it's a very simple, minimalistic platform. So who is solving all of these problems? Let's write down all four problems and see how Kubernetes solves each of them. The first problem is single host,

    18:35

    the second problem is auto-scaling, the third problem is auto-healing, and the fourth problem is enterprise-level support.

    18:51

    I'm sorry if there are any typos. These are the four problems we discussed; there are hundreds of problems, but these four are the most important. I cannot cover all of the problems, because we are just starting our journey with Kubernetes. Now, who is solving these problems? As I told

    19:06

    you, the one simple answer is Kubernetes. So now you can answer this question, right? If somebody asks you in an interview who solves the problems of Docker, or what is the difference between Docker

    19:24

    and Kubernetes, you have the answer now. All these five slides, all these 15 minutes, I just talked about this one simple question: the problems with Docker, or the difference between Docker and Kubernetes. So now you should

    19:41

    be able to explain these problems. Let me tell you the solutions right away. Until now I have just explained the problems with Docker and told you that Kubernetes will solve them, but I know you will not simply trust me; you will say, "okay, explain how it solves them," because if

    19:58

    anybody tells you that Kubernetes solves the problem, you should not just trust them; you should ask, "explain how Kubernetes solves this problem." So let's try to understand. By default Kubernetes is a cluster. What is a cluster? A cluster is basically a group of

    20:15

    nodes. Previously, when we installed Docker, we just installed it on one personal laptop or on one simple EC2 instance. Kubernetes, in general, in a production use case, is installed in a master-node architecture.

    20:33

    What is a master-node architecture? Just like Jenkins, we create clusters; that is, whenever we install Kubernetes, we create one master node and multiple worker nodes. Somebody will immediately ask me, "does that mean Kubernetes cannot be installed

    20:49

    on one single node?" You can definitely do that, but only for your development environment. To just practice Kubernetes or to start playing around with it, you can install Kubernetes on one single node, but in general, in production, whether it is

    21:07

    a high-availability or a standalone setup, Kubernetes is generally installed as a cluster. So now, I might be installing it as a cluster, but your question will be: what is the advantage I get from installing it as a cluster? The advantage would be this. In the previous slide, if you go back to the

    21:23

    problem here, I told you that one container is getting affected by another container that is taking a lot of memory. Now, what will happen if you install Kubernetes? In Kubernetes, let's assume there are two nodes. Let's

    21:39

    assume this is the specific node, and here are container one and container 99. If container one is impacting container 99, Kubernetes can immediately put container 99 on a different node, so that container 99 is not affected by container one. In other words,

    21:55

    if there is a faulty node, or one faulty application on a node which is impacting the other applications, then Kubernetes, because it has a multi-node architecture, can immediately place the pods on a

    22:12

    different node, the applications on a different node. Because of this you have a cluster-like architecture, and this is one problem that is already solved by the cluster behavior of Kubernetes; Kubernetes is a cluster by default. Now the

    22:28

    second problem. What is it? Auto-scaling. Kubernetes basically has something called, I don't want to go into the details yet, a replication controller or replica sets. ReplicaSet is the new name, ReplicationController

    22:44

    is the old name; you can consider them like version one and version two. So Kubernetes has something called a ReplicaSet, and with it you don't even have to deploy a new container by hand. Let's say you have C1, and your C1, your application, is

    22:59

    receiving increased load; previously it was receiving 10,000 requests, and during one festival it is receiving 1 lakh, for example, a tenth of a million. What we can simply do is this. Kubernetes is basically driven by YAML files; everything in Kubernetes is all about

    23:14

    YAML files. So in a replication-controller.yaml file, a replica-set.yaml file, or even in a deployment.yaml file, and if you don't know what these terms are, don't worry, all you need to understand is that as a DevOps engineer you can go to one specific YAML file. YAML is basically an

    23:31

    indentation-based format, just like JSON files. So you can simply go to this YAML template file and say: increase my replicas from 1 to 10, because my traffic is increasing and I know that tomorrow is a festival.
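    For illustration only (no manifest is shown in the video; the name and image below are hypothetical), bumping the replica count in a ReplicaSet YAML might look roughly like this:

```yaml
# replica-set.yaml -- hypothetical sketch, not from the video
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: payments-app              # hypothetical application name
spec:
  replicas: 10                    # manually raised from 1 to 10 ahead of the festival traffic
  selector:
    matchLabels:
      app: payments-app
  template:
    metadata:
      labels:
        app: payments-app
    spec:
      containers:
      - name: payments-app
        image: example.com/payments-app:1.0   # hypothetical image
```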

    23:47

    That is the manual way: I went and increased the count from 1 to 10 myself. Kubernetes also supports something called HPA, the Horizontal Pod Autoscaler, using which you can directly say: if one of my containers is reaching a threshold of 80 percent of its capacity,

    24:03

    then just spin up one more container. In such cases it will keep spinning up containers; even if the load goes from 1 million to 10 million, the Horizontal Pod Autoscaler feature of Kubernetes can handle it.
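    As a rough sketch of that HPA idea (again hypothetical names and numbers; the video does not show a manifest, and an HPA usually targets a Deployment rather than a bare ReplicaSet), an autoscaler keyed to 80 percent CPU could look like this:

```yaml
# hpa.yaml -- hypothetical sketch of a Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: payments-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments-app            # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 100                # allow scaling out as load grows
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80    # add pods once average CPU crosses 80%
```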

    24:18

    That is how we achieve auto-scaling; two problems solved. Now let's go to problem number three. What is problem number three? Auto-healing. What is auto-healing? The word heal itself means that whenever there is damage, Kubernetes has to control the damage.

    24:36

    Kubernetes controls and fixes the damage; it will either control it or fix it, and most of the time it will control the damage. Now, what is the meaning of

    24:53

    controlling the damage, what is the meaning of auto-healing? Let's say for some reason one of your containers is going down; there are hundreds of reasons why a container can go down, and I'll explain the classic problems of why a

    25:08

    pod or a container can go down; there are multiple causes, and there are some standard debugging steps you can remember for when a container goes down. But for now, let's assume your container is going down. In the case of Docker, like I told you, you

    25:24

    have to look at the docker ps output, go through the list of containers, and realize, "one of my containers went down, so let me restart or recreate it." Whereas Kubernetes has a feature called auto-healing. Using this auto-healing feature,

    25:40

    whenever a container is going down, even before the container goes down, Kubernetes will start a new container. How does Kubernetes basically do this? In

    25:56

    Kubernetes there is something called the API server. Tomorrow I'll explain the Kubernetes architecture, and at that time I'll explain what the API server is, what the different components are, and how the Kubernetes architecture is laid out. But for now, there is something called the API server, and whenever the API

    26:11

    server understands that one of the containers is going down, whenever it receives a signal that a container is going down, what Kubernetes does is, even before this container goes down, it rolls out a new container. And when it rolls out a new

    26:28

    container, the end user will not even notice that a container has gone down and a new one has come up, unless your application is, let's say, a very heavy application; in some cases that might happen, but I am only talking about the general case.

    26:45

    So even before your container goes down, a new rollout happens, a new container is created, or rather a new pod; in Kubernetes we usually deal in terms of pods, not containers, but for now just understand that even as your container is going down, Kubernetes has already started a new one.
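    To make the auto-healing idea a little more concrete (a minimal sketch under my own assumptions, not shown in the video): the replica count declared in a Deployment or ReplicaSet is what gives Kubernetes a desired state to reconcile against, so when a pod dies the controller brings up a replacement.

```yaml
# deployment.yaml -- hypothetical; the declared replica count is what Kubernetes reconciles against
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-app
spec:
  replicas: 2                     # desired state: if a pod dies, the controller starts a replacement
  selector:
    matchLabels:
      app: payments-app
  template:
    metadata:
      labels:
        app: payments-app
    spec:
      containers:
      - name: payments-app
        image: example.com/payments-app:1.0   # hypothetical image
```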

    27:02

    Using this mechanism, we have achieved the auto-healing feature. So three problems solved; I have explained three problems: auto-healing, auto-scaling, and the single-host nature of Docker, which Kubernetes addresses because, as a cluster, it has the scope to move a container from one specific node to

    27:18

    another specific node. What is the fourth problem? The fourth problem is the enterprise nature: Docker, as I told you, does not have an enterprise nature; it does not have many enterprise support capabilities. It does not

    27:35

    support firewalls, it does not support load balancers by default; it does not support a lot of things unless you go to Docker Swarm. So what did people do? Kubernetes is basically

    27:51

    a tool that originated from Google. People at Google were using a specific tool called Borg, and they say that Kubernetes is, in a sense, one part of Borg; they say that Borg is

    28:07

    an even larger system, and you can consider Kubernetes as one part of Borg, or as a system inspired by Borg. We don't have many details here, but Borg is not

    28:22

    an open source tool. So what the people at Google did is build an enterprise-grade, open source container orchestration platform. Why did they build a container

    28:40

    orchestration platform that supports the enterprise level? Because the Docker platform is just a container platform; Docker was just a container platform, so it does not have all of these capabilities. And to run your application

    28:55

    on a platform which is not enterprise-ready is not advisable, right? That's why nobody uses Docker alone in production; Docker by itself is never used in production. You might use Docker Swarm in

    29:11

    production, but Docker independently is never used in production, because it's not an enterprise-level solution. Docker is basically a container platform which allows you to play with containers on your personal laptop or on your EC2 instances; in general you

    29:27

    can consider that Docker has a container runtime which allows you to run containers and manage their life cycle, but it's not an enterprise solution, because it does not have the list of capabilities like auto-healing, auto-scaling, load balancer support, firewall support, support for API

    29:44

    gateways, whitelisting, blacklisting; these are all features you require to run your application in production. Now, Flipkart or Amazon cannot just say, "the Docker platform runs containers, so let me move to Docker." The first question that organizations, MNCs, or

    30:00

    corporates will have is: "I appreciate a solution like the Docker platform, but is it suitable for our organization? Does it support all of these capabilities? Because I want to blacklist a few clients, or whitelist a few particular IPs; I want to

    30:17

    blacklist somebody who is trying to perform a DDoS, a denial-of-service attack." All of these capabilities are required, Docker does not have them, and Kubernetes is the one aiming to solve this problem. Now, does Kubernetes solve this problem 100

    30:32

    percent? To answer this question in a nutshell: definitely not. You can talk to experts in Kubernetes, to anyone who has been in the Kubernetes world; it is not as simple as what we had in the world of

    30:48

    virtual machines. When we were dealing with virtual machines, like 10 years back, or seven to eight years back when everybody was on virtual machines, the way you could integrate external tools with virtual machines was far easier, and virtual machines offer

    31:03

    far more security when you compare them with containers. But Kubernetes is evolving, and it is backed by wonderful people at the CNCF. There are many contributors to the CNCF; even I am

    31:19

    one of the contributors to the CNCF, where the goal of the community is to make Kubernetes a better place. Kubernetes basically has very good

    31:34

    backing, and every day there are lots of enhancements being made to Kubernetes. You will see that there are many projects in the CNCF, like Podman, Buildpacks, and Prometheus; all of these are CNCF-incubated or CNCF-adopted

    31:50

    projects. They might have been created by someone else, but the CNCF has adopted them. What this means is that there is a community constantly focused on developing the Kubernetes ecosystem, not just Kubernetes itself but the tools around it, because Kubernetes, too, by

    32:06

    default does not provide a lot of capabilities. But Kubernetes provides concepts like custom resources and custom resource definitions, using which you can extend Kubernetes to any level. For example, by default Kubernetes

    32:22

    does not support advanced load balancing capabilities; everybody knows this, and it is a practical truth. By default Kubernetes has Services and kube-proxy, which just give you basic load balancing, like round robin. This is one of the major problems.
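    A minimal sketch of that basic Service-level load balancing (hypothetical names and ports, not from the video; kube-proxy spreads traffic across the pods matching the selector):

```yaml
# service.yaml -- hypothetical; kube-proxy balances traffic across pods matching the selector
apiVersion: v1
kind: Service
metadata:
  name: payments-app
spec:
  selector:
    app: payments-app   # pods labelled like this receive the traffic
  ports:
  - port: 80            # port exposed by the Service
    targetPort: 8080    # hypothetical container port
```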

    32:37

    So how does Kubernetes solve it? Kubernetes introduced custom resources and custom resource definitions, and told load balancer providers like F5 or NGINX: you build a Kubernetes controller, using which people can use your load balancer even inside Kubernetes. This concept is called Ingress controllers.
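    And a minimal sketch of an Ingress resource, which such an Ingress controller (for example the NGINX one) would pick up and turn into real load balancer configuration; the host, names, and class are hypothetical:

```yaml
# ingress.yaml -- hypothetical; requires an Ingress controller to be installed in the cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments-app
spec:
  ingressClassName: nginx          # assumes the NGINX Ingress controller is deployed
  rules:
  - host: payments.example.com     # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: payments-app     # the Service sketched earlier
            port:
              number: 80
```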

    32:54

    Similarly, Kubernetes keeps advancing every day; it is improving and getting close to that 100 percent mark, and we will get near it. That gap is one of the reasons why some companies still hesitate to

    33:11

    implement Kubernetes in production, and why people are migrating to Kubernetes in production slowly, because of all this evolving support. Kubernetes is one such tool that you definitely have to watch out for, and, like I told you on the very first slide, Kubernetes is easy.

    33:27

    Don't worry about Kubernetes. If you understood these four topics that I told you about today, your part is done; let's assume you have already learned five percent of Kubernetes. Now what we will focus on is learning the next 95 percent, and this next 95 percent will completely depend on your

    33:43

    foundations, that first five percent: understanding why you need to learn Kubernetes. If you understand the why, then with your effort in learning Kubernetes you will understand the rest of the 95 percent. So slowly, in

    34:01

    our next classes, step by step, we'll learn about concepts like pods, deployments, and services. Even before all of these things, I'll explain the architecture of Kubernetes, because that is very important. And on the very first day, like tomorrow when I explain the architecture of

    34:16

    Kubernetes, maybe some people will not understand the complete architecture, and you might feel, "oh, there are so many components in Kubernetes, now I have to learn about all of them." On the very first day you will not understand all the components, I'm very sure about it; you might understand the

    34:31

    definitions and feel like you understood it, but practically, gaining an understanding of all the Kubernetes components will take some time, so don't lose hope. In our next videos we'll start learning about pods, then deployments, services, and Ingress controllers; we'll start talking about admission controllers. It is a

    34:47

    long journey, so stay with me; you will definitely learn Kubernetes, because Kubernetes is very easy. I hope you liked this video. If you liked it, click on the like button; if you have any feedback for me, definitely post it in the comment section, and don't forget to share it with your friends and colleagues. Thank you so

    35:03

    much for watching the video. Take care everyone, bye, see you in the next video.