Welcome to this lesson, Kubernetes Overview, part of the course Deploying with Kubernetes. The name Kubernetes comes from a Greek word meaning helmsman or ship's pilot. Soon we'll see that Kubernetes is much like commanding a ship of containers for our applications.

Kubernetes began as a project at Google. It grew out of Google's internal cluster manager, Borg, and was built by many of the engineers who had worked on Borg. In 2015, Google donated the Kubernetes project to the Cloud Native Computing Foundation, or CNCF.

Originally, Kubernetes was designed to solve the problems of the monolith architecture: custom software and hardware stacks that were very difficult to maintain, very difficult to standardize, and very difficult to automate and make repeatable. Kubernetes was also designed to make deployment and rollback both easy and automatable. Monoliths became very difficult to migrate to runtime environments such as the cloud. With the advent of microservices (APIs, REST services, service-oriented architecture), the monolith has become obsolete and unnecessary. So we can see here that the applications interface with standard microservices in either a virtualized or a cloud environment.

Some of the key Kubernetes features include: deployment that can be automated and easily repeated; management of sensitive information such as certificates and passwords; registration and scaling, so our applications can be scaled up using a replicated runtime approach; load balancing, so that once we have replicated runtimes we can divide incoming requests among them; job scheduling and management, a standard, repeatable, automatable DevOps approach; and continuity and fault tolerance.
So if one of our runtimes goes down, we have another runtime to take its place.

Before we go any further, let's discuss container technologies. Kubernetes and other products use containers as the runtimes for applications. Containers are a solution for reliably deploying and running software that comes from, and is intended for, different computing environments. Containers are standardized packages consisting of an application and everything needed to run it, including runtimes, system tools, libraries, settings, and other dependencies. We can see here that in this container we have our application and all of its dependencies.

Multiple containers can run on the same operating system and share the same OS kernel. So we can see here that no matter what the environment is for these applications, all of these containers share the same operating system kernel, or virtualized layer, on the same hardware. Each application and its dependencies are packaged in their own container. The differences in supporting each of these containers are abstracted away by the container itself and by the fact that they all run on the same hardware and operating system layers. The underlying infrastructure only has to be able to run the individual containers.

One of Kubernetes' key features is container management, sometimes called container orchestration. Containers are not unique to Kubernetes. Kubernetes is application-centric: the containers are platform-as-a-service, operating-system-level virtualized packages that include their own dependencies, here indicated by libraries. Each container runs its own set of services and dependencies. A container is not a full virtual machine; containers are part of a virtualized approach that combines with the kernel of the virtualization solution and the hardware, and all containers connect to and run on the same OS kernel.
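As a concrete illustration, a container image is often described by a Dockerfile. Here is a minimal sketch that packages a hypothetical Python application together with its dependencies; the file names `app.py` and `requirements.txt` are assumptions for illustration, not from the lesson:

```dockerfile
# Minimal base image that already provides the language runtime
FROM python:3.12-slim

WORKDIR /app

# Package the application's dependencies inside the container
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Package the application itself
COPY app.py .

# The container runs the app; the host only needs a container runtime
CMD ["python", "app.py"]
```

Everything the application needs travels inside the image, so the same container runs unchanged on a laptop, a bare-metal server, or a cloud VM, as long as a container runtime is present.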
Container orchestration supports fault tolerance for business continuity; optimization of the hardware and the network; real-time scalability, so again we can have replicated runtimes; automatic discovery of other containers; online, live updates of applications; recovery of applications or runtimes that have gone offline, by bringing up new runtimes; and rolling back the application to the last stable version without downtime.

Kubernetes is an open-source container orchestration tool. It does have competitors: Amazon Web Services, Azure, Docker, and other vendors also offer container orchestration solutions. Deployment and installation of a container orchestrator can be performed on many different types of infrastructure. Organizations and users can almost always select the environment with which they are most familiar: bare metal, virtual machines, or public or private clouds. A very popular model is installing Kubernetes on an infrastructure-as-a-service solution like Amazon Web Services, Google, or Azure, where Kubernetes can be installed quite readily with only a few commands.

So the main features are fault tolerance and self-healing: as a multi-node solution, Kubernetes can automatically back up and replace containers and failed nodes. Automatic bin packing maximizes resource utilization. Horizontal scaling: if we need several runtimes, several copies of the application running, that is very readily done with Kubernetes. Kubernetes clusters run pods. Each pod receives its own IP address, and the containers within a pod communicate with one another over localhost; incoming requests are then load balanced across the set of replicated pods. Other features include automatic recovery and rollbacks, which again can be automated quite readily, secret and configuration management, and storage orchestration.
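The horizontal scaling and self-healing described above are expressed declaratively in Kubernetes. A minimal sketch of a Deployment manifest might look like the following; the names `web` and `my-registry/my-app:1.0` are placeholders, not from the lesson:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3               # horizontal scaling: three replicated runtimes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: my-registry/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

If one pod fails, the Deployment controller starts a replacement to keep three replicas running, and a Service placed in front of the pods divides incoming requests among them.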
So if there is data that is key to recovering a container, that data can be kept in external storage: Kubernetes can automatically mount storage that lives outside the containers. That way, data vital to recovering a container is available in the external storage and is not lost when the container goes offline. Also batch execution: Kubernetes supports batch operations, long-running processes, and container failover.

Kubernetes has a flexible, modular architecture that supports extensibility, plugins, and microservice APIs. Kubernetes, again, is extensible: it can be extended and augmented by writing custom software that makes use of the APIs and plugins. Kubernetes is supported by a community of more than 2,000 contributors. The Kubernetes community also includes local groups that meet or collaborate in meet-ups and special interest groups, focusing on scaling, networking, and other features.

The Cloud Native Computing Foundation is about as close as we can get to a vendor for Kubernetes; it is a community. For Kubernetes, CNCF offerings include licensing and proper use, scanning for any proprietary code, marketing and conferences, legal guidance, and certification standards. The CNCF hosts other projects besides Kubernetes, such as Prometheus.

This is the end of module 1 of the Kubernetes Overview. In the next lesson, we'll look at the Kubernetes architecture.