Welcome to this lesson, which is the practical exercise for this course, Deploying with Kubernetes. As we discussed, there are many methods for installing Kubernetes. Kubeadm, which is used in this exercise, is the general method for installing Kubernetes. It can be used on various public and private clouds, and it can also be used on bare metal. Minikube is a virtualized approach for installing Kubernetes. Kops is the utility for installing Kubernetes on Amazon Web Services. The kops utility enforces many of the enterprise standards for Kubernetes, things like disk size, processor, and instance size, so we can't take advantage of the AWS free tier with kops. Kops takes advantage of other AWS resources as well. There is also a utility for installing Kubernetes on the Google Cloud Platform. In this exercise, we're installing Kubernetes on Amazon Web Services, but not with kops, so we are able to use the AWS free tier. Most of these steps work in every environment: AWS, Azure, Google Cloud, bare metal, and more. So we're going to start here at our AWS console, and we're going to click "Launch Instance." We're going to create two instances, one called KubeMaster and one called KubeWrkr. So here, again, we click the "Launch Instance" button. We're going to select Ubuntu Server 18.04 LTS, 64-bit; we can see it here, and continue. In this case, we're going to select the t2.micro instance type, which is not really within the requirements of Kubernetes, but it can be used. So we'll select that, and then we will click the next button to configure instance details. We then click "Add Storage," and then "Configure Security Group." In this case, notice I'm selecting an existing security group, but it's a special security group: it allows all traffic. So please consult the documentation for allowing all traffic.
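If you prefer the command line to the console clicks described above, the same two instances can be launched with the AWS CLI. This is only a sketch of the idea, not a step from the video: the AMI ID, key pair name, and security group ID below are placeholders that you would replace with your own values.

```shell
# Launch one t2.micro Ubuntu 18.04 instance tagged KubeMaster.
# ami-XXXXXXXX, MyKeyPair, and sg-XXXXXXXX are placeholders --
# look up the current Ubuntu 18.04 LTS AMI ID for your region.
aws ec2 run-instances \
  --image-id ami-XXXXXXXX \
  --count 1 \
  --instance-type t2.micro \
  --key-name MyKeyPair \
  --security-group-ids sg-XXXXXXXX \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=KubeMaster}]'

# Repeat the same command with Value=KubeWrkr for the worker node.
```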
For this exercise, we don't need to go through all of the steps and open certain ports for certain applications. We're just allowing all traffic, and you can remove this as soon as you're done with it. Generally, this is a bad idea, but for this exercise, it's best not to get caught up in the details of networking. So for both the inbound rules and the outbound rules, we're allowing all traffic. Click "Review and Launch," right here, review what we have, and click "Launch." Now we have to pick a key pair, so either use an existing key pair or create a new one, acknowledge, then click "Launch Instances," and then "View Instances." The instances will begin to build. Again, one instance is called KubeMaster, which will be our master node, and the worker node is KubeWrkr. Now, when we're ready to connect, we select each instance, click "Connect," and we'll get an opportunity to connect. There are various ways to connect to an instance; we are going to connect using PuTTY. So please consult the documentation right here from this page about PuTTY, or other documentation, but there are other ways to connect as well. Here, we have connected already with PuTTY. What I did was change the color of the text between the master node and the worker node, so we can easily tell them apart. We're going to now install using kubeadm. This link here is the link to the kubeadm documentation, nothing more than that. So what we're going to do now is go to the documentation, and we can see it here. We're installing on Ubuntu, so we want to click this tab; as you can see, there are other tabs. We're going to run these commands. First, we're going to run sudo apt-get update, so we update our instance first. Then we're going to install Docker. We need to run these commands on both the master and the worker nodes. So again, we're installing Docker on both of these nodes.
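For completeness, the allow-all security group described above can also be created from the AWS CLI. This is a sketch under the assumption that you want a throwaway group for the exercise; the group ID is a placeholder, and, as noted, an allow-all rule should be removed as soon as you are done.

```shell
# Create a throwaway security group (VPC ID is a placeholder).
aws ec2 create-security-group \
  --group-name kube-exercise-all-traffic \
  --description "TEMPORARY: allow all traffic for Kubernetes exercise" \
  --vpc-id vpc-XXXXXXXX

# Allow ALL inbound traffic from anywhere -- acceptable only for a
# short-lived lab; delete this group when the exercise is finished.
aws ec2 authorize-security-group-ingress \
  --group-id sg-XXXXXXXX \
  --protocol all \
  --cidr 0.0.0.0/0
```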
So just be aware, and I'll explain why, that we're going to use Docker as our container runtime. Then we're going to switch over to the superuser (sudo su, as it's sometimes written), and then we're going to run an update here, and we're also going to install curl and some other packages. We're going to run this command to bring down the repository key. Then we're going to run this command here, in three steps: we do another update, then we install kubeadm, kubelet, and kubectl, and we finish with this command. All of this needs to be run on both the master and the worker nodes. Here, since we're using Docker, we don't have to go through these steps to configure the cgroup driver used by the kubelet on the control plane. Because we're using Docker, we can move on to initializing the cluster, and here on kubernetes.io is the documentation for initializing the cluster. Also, if you need any additional configuration, again, kubernetes.io is the documentation. So we're moving on to initializing the cluster, and then we'll run that on the master node; we'll run most of everything on the master node. One thing that's important: we've got to go back to our console, because we first need the private IP address of our master node. So we'll select our master node, come down here under the description, and get the private IP. Here is the command we're going to use for initialization: kubeadm init, and you can see the parameters that we're passing in. Because of the network plugin that we're going to be using, we have certain parameters, and also, since we are using the free tier, we will not pass the preflight checks. So what we're going to do is suppress those errors and warnings: we're going to pass in a flag to suppress the number-of-CPUs errors and warnings. So in this case, I found my master node's private IP address.
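For reference, the installation commands from the kubeadm documentation for Ubuntu at the time looked roughly like the following. The repository URL and package names may have changed since this lesson was recorded, so treat this as a sketch and copy the commands from the current documentation.

```shell
# Run ALL of this on BOTH the master and the worker node.
sudo apt-get update

# Install Docker as the container runtime.
sudo apt-get install -y docker.io

# Switch to root for the remaining steps.
sudo su

# Install prerequisites, then bring down the repository signing key.
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

# Add the Kubernetes apt repository.
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Update again, install the three tools, and pin their versions.
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```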
Now, yours will be different. So we're passing in the private IP address and then the rest of these parameters, and also suppressing the errors. We can see the full command when it's built, down here. After the command completes, we need to run some of the commands that it returns. So after the initialization completes, it's going to tell us here that the initialization was successful, and it's going to give us these three commands here. Notice it says to run them as a regular user; we'll get to that. We also need the worker node to join our Kubernetes cluster, so what we need to do is run the join command on the worker node. What I've done here is copied it; I've just selected it, highlighted it, and copied it. I pasted it into a notepad just to make it one full command, and we can see here that we are running this full command, kubeadm join. It contains the private IP of the master node, port 6443, and --token. The token is given to us by the initialization, which took place on the master node, and then we can see this long hash that we get. You can see here how the command was run. It's important, because your private IP is going to be different, your tokens will be different, and your hashes will be different, but the commands are run the same way every time. So there are still three commands that need to be run, and they need to be run as a regular user; you can see this right here. So what we'll have to do is exit root. We've already come back here to the master node, and we're exiting out of su; we're out of sudo, and now we are just at our regular user prompt. Then we're going to run these three commands; you can see them right here. After that's done, we can run kubectl get nodes. At this point, most of our commands from here forward are going to use kubectl, the Kubernetes command line.
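Putting the initialization steps above together, the sequence looks roughly like this. The IP address, token, and hash are placeholders; kubeadm init prints the real join command for your cluster, and the --pod-network-cidr value shown is the one commonly used with Calico.

```shell
# On the MASTER node (as root). <master-private-ip> is a placeholder.
kubeadm init \
  --apiserver-advertise-address=<master-private-ip> \
  --pod-network-cidr=192.168.0.0/16 \
  --ignore-preflight-errors=NumCPU

# Then, on the master, as a REGULAR user (exit root first),
# run the three commands that kubeadm init prints:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the WORKER node (as root), paste the join command that
# kubeadm init printed -- token and hash are placeholders here:
kubeadm join <master-private-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```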
Notice here, after we run kubectl get nodes, we see that the status of the nodes is "NotReady". You can take a look here: these are the private IP addresses of our master node, which you can see here, and our worker node. Again, yours will be different, but you'll get this "NotReady" status. The reason we're getting this "NotReady" status is that we have not yet installed our network plugin. So what we're going to do now is go back to our documentation. We can see here that we're going to use the network plugin Calico. You'll notice there are others, but for this example, we're going to use Calico; you only use one. Again, this is right here in the documentation; just scroll down to the network plugin. All we have to do is take this command and run this exact command on the master node. You can see it right here. So in this example, we went to our master node, we can see its status is still "NotReady", and we can see the master and the worker node listed. In this case, I pasted in the command for our network plugin, Calico, you see it right here, and then ran it, and Calico installs. Then we can go back to our master node and run kubectl get pods --all-namespaces, and we'll see several of the pods that are running here on the Kubernetes cluster. Again, we can run kubectl get nodes, and we'll see the status is now "Ready". So it went from "NotReady" to "Ready". Now, as part of our practical application, let's deploy an application. Nginx is one of the applications that is a learning application for Kubernetes; for this exercise, it will be fine. Nginx is one of the test applications, and it is a web server. So what we're going to do is create a deployment for nginx; the image will be downloaded automatically. We can see here the command to create the deployment. In this case, what was run down here is kubectl create deployment with the nginx image.
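The network plugin installation and the first deployment can be summarized like this. The Calico manifest URL is version-specific, so the version segment below is a placeholder; copy the current URL from the documentation rather than from here.

```shell
# On the master: install the Calico network plugin.
# (v3.x is a placeholder -- use the URL from the current docs.)
kubectl apply -f https://docs.projectcalico.org/v3.x/manifests/calico.yaml

# Watch the system pods come up, then confirm the nodes go Ready.
kubectl get pods --all-namespaces
kubectl get nodes

# Create a deployment named base-nginx from the nginx image.
kubectl create deployment base-nginx --image=nginx
```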
We're going to call the deployment base-nginx. So again, we can see here: create deployment, then --image with the image name, which in this case is nginx, and our deployment name is base-nginx. Then the deployment is created. You can see it run here, and the deployment is created with the deployment name. We can run a command to get the pods, kubectl get pods, and we can see that so far we have one pod running. We can also run get deployments. The deployments are returned; observe the deployment, which we called base-nginx. So with kubectl get deployments, we see here that base-nginx is the deployment that's returned, and you can see how long it's been running. Now, we want to scale up our deployment: we want to create three replicas of the deployment. The command, again, if you remember from earlier, is kubectl scale deployment, then --replicas with the number of replicas, and the deployment name. So in this case, I ran kubectl scale deployment with --replicas=3, and our deployment name is base-nginx. We can see here that the application, or the deployment, has scaled up successfully. To view the pods, run kubectl get pods; we can see that now we have three pods running. We can also describe a pod to list its details, including the node on which the pod is running. When we run this command, we're going to select one of the pods that were listed, and we can actually see that the pod is running on the worker node, because we'll see the worker node's private IP address. So if we look closely here, we run kubectl describe pod, and we're passing in one of the base-nginx pods. We can see here the node it's running on, ip-172-31-23-51, which, if we look here, is the private IP address of the worker node, and that's what we expect. We see its status is "Running", the image name is nginx, and there are some other details here. So the pod is running on the worker node, and the image is nginx. Now, as we discussed before, in order to access the application, we need a service.
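The scaling and inspection commands from this passage, sketched out; the pod name at the end is a placeholder, so substitute one from your own kubectl get pods output.

```shell
# Scale the deployment from one replica to three.
kubectl scale deployment base-nginx --replicas=3

# Confirm three pods are now running.
kubectl get pods

# Describe one pod to see which node it landed on.
# (base-nginx-xxxxxxxxxx-xxxxx is a placeholder pod name.)
kubectl describe pod base-nginx-xxxxxxxxxx-xxxxx
```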
A service is an additional level of abstraction between the user and the running pods. So we're going to expose the deployment using a service, and in this case, it's of type NodePort. We're going to type kubectl create service nodeport, that's the type of the service, then the name of the service (it's a good practice to give the service the same name as your deployment), and then --tcp with the exposed port and the target port. Nginx exposes port 80, and it also runs on port 80, so we want to expose 80, and this application runs on port 80. We can see here kubectl create service of type nodeport; the service name is base-nginx, and we can see here --tcp with the exposed port and the target port. You can see here, with get deployments, we see the deployment name, and we want the service to have the same name, so we can see here that we run the command. Now we can run kubectl get services, and notice the IP addresses: these are the cluster IPs. Also notice the ports here; this is very important. We can see some port mapping: port 80 is now mapped to port 31278. Now, your IP address and your ports are going to be different, but all the steps are the same. What we can do now, just to test that our application is running fine, is use curl. Curl is a way to bring back data from a web page when you don't have a browser. So we pass in the command; we know our cluster IP address is 10.108.51.157. We pass in curl with our cluster IP address, just like we see here, curl http://..., so just like a normal website address, and this is the code for nginx: "Welcome to nginx." It's the HTML for the nginx home page. We can run the same curl command with the same IP address on both nodes, the master and the worker, and we see we get the same data back. So we can see that the nginx web page can be rendered on either the master or the worker. Well, that's all very good for running it with curl.
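The service creation and the curl test, as a sketch; the cluster IP and NodePort shown in the video (10.108.51.157 and 31278) will be different on your cluster, so the address below is a placeholder.

```shell
# Expose the deployment with a NodePort service of the same name.
# --tcp=80:80 means: expose port 80, forward to container port 80.
kubectl create service nodeport base-nginx --tcp=80:80

# Note the CLUSTER-IP and the port mapping (e.g. 80:31278/TCP).
kubectl get services

# Fetch the nginx home page from either node using the cluster IP.
curl http://<cluster-ip>
```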
However, we don't expect any of our users, for business purposes, to ever use curl; at the very least, it would be a rare circumstance. So we want to see that the actual web page comes back. We want to be able to contact the master node and the worker node from our own workstation, our own laptop, using one of our own browsers there, and we want to connect to the master node's and the worker node's public IP address and public DNS. So we can go back to our console, and, let's start with the worker, we can get the worker node's public IP address and public DNS name; it's in several places. We also want the port that our application was mapped to. So if we run the kubectl get services command that we ran earlier, we can find our service, and we can go over and see our port mapping. The port that we're looking for is always to the right of the colon, right before the TCP. So in this example, it's 31278. We want to build a URL. The URL will be the node's public IP address or public DNS name, a colon, and the port, the mapped service port that we just saw. So in this case, again, the port is 31278. So we go out to our Amazon console, we get the worker node's public IP address, and we use port 31278, or we get the worker node's public DNS name and the same port, 31278. We put the URL in a browser, and we can see the home page for nginx. So this is running out of our worker node. You can do the same thing with the worker node's public DNS name and the same port, and we can see the nginx home page. We can even do this for the master node: if we get the master node's public IP address, we can pass that in, and the port will be the same. We can see the application is running, and the same goes for the master node's public DNS name, same port. So next is a practical demonstration of pod failure and self-healing.
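Rather than reading the mapped port off the kubectl get services listing by eye, you can pull it out directly with a JSONPath query. This is an optional shortcut, not a step shown in the video.

```shell
# Print only the NodePort assigned to the base-nginx service.
kubectl get service base-nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Then build the URL by hand:
#   http://<node-public-ip-or-dns>:<nodeport>
# where both values are placeholders taken from your own cluster.
```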
So we know one of the big benefits of Kubernetes is the fact that it will self-repair, or self-heal. So if we run kubectl get replicasets again, we can see here the desired state of the replica set. We first created these replicas; we passed in the number three, for three replicas. So that's our desired state. Our current state is also three, and our ready state is three. So the condition that the desired state equals the current state is true. If we delete a pod, or a replica, it will simulate the loss of a pod, a replica failure. So we run the get pods command again. The replicas are returned as pods, so each one of these pods is a replica. Your pod names will be different, but it's the same command to get the pods, and everything runs the same way. Just for exercise purposes, let's select the number 2 pod that's running right here, right in the center of the listing. In this case, we can see the name here; just remember the ending, qrf4r. Now, we're going to run a command to delete this pod: kubectl delete pod, passing in that full pod name. Now, the condition that the desired state of the replicas equals the current state will be false, since the current state will drop down to two while the desired state is still three. You can see this up here. Even while the delete is going on, when our current state is two and the desired state is still three, our application is still running and can still be contacted; we can still bring it up with the worker node's URL. If we run kubectl get pods, we will see the listing here, and we'll notice we have a new pod name in there. So we lost our one pod, but now we have a new replica; it was brought up automatically. Sometimes this happens so fast that you don't even get a chance to watch the process.
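The self-healing experiment boils down to these commands. Again, the pod name is a placeholder built from the suffix mentioned above; use a full name from your own listing.

```shell
# Desired, current, and ready counts should all read 3.
kubectl get replicasets

# Delete one replica to simulate a pod failure.
# (base-nginx-xxxxxxxxxx-qrf4r is a placeholder pod name.)
kubectl delete pod base-nginx-xxxxxxxxxx-qrf4r

# The ReplicaSet notices current (2) != desired (3) and starts a
# replacement immediately; a new pod name appears in the listing.
kubectl get pods
```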
In an earlier example, we were actually able to watch some of the process for these types of steps, but sometimes it happens so fast, particularly on Amazon Web Services, that it's hard to catch it while it's building new pods. But we can see here very quickly that the deployment's desired state matches the current state for the replicas. Again, if we run kubectl get replicasets, we will see that indeed the desired state and the current state are equal, so the condition is true. We can see here once again that we are able to see the nginx home page both on the worker node and on the master node. So thank you for watching this lesson, the practical application. All of these steps do work; I've tried them several times. So just go through and re-run, or pause this video several times, go through the steps, and take your time. Again, pod names and IP addresses will be different. There are some commands that you can run just as is, but be careful to know when you need to look up your own pod name, replica name, or IP address. Thank you for taking this course, Deploying with Kubernetes. Please leave a comment, and please let us know what you think about this class.