[MUSIC] You've now been learning things like systemd, log management, and several other Linux system administration topics. If you've reached this stage, you must be thinking of all the things you can achieve with Linux in an enterprise environment. It's a proper tool for building a lot of business solutions. Now, I'd like to take you on a small detour and show you how real enterprises manage their Linux machines. It's fun to set up a few Linux virtual machines and configure them, but what if you need to do that for 1,000 virtual machines across your enterprise? And what if there are tens of thousands of users within the enterprise who need access to a subset of those 1,000 virtual machines? I'm talking here about questions of scale. When big tech companies faced this question, they reacted by building proprietary tools, but that caused businesses more trouble, with issues like higher upfront costs, vendor lock-in, and so on.

Now, around 2006, a retail company called Amazon, you may have heard of them, had built a solution which led a lot of businesses to sign up with them, and it was called Amazon Web Services. And a lot of companies were happy to just let their applications run on the cloud. The story goes something like this: internally, teams expected projects to take three months, but it was taking three months just to build the databases and the compute infrastructure. Every team was building its own resources for an individual project without much thought about reuse. This was the case not just at Amazon, but in several large-scale companies. The internal teams at Amazon required a set of common infrastructure services everyone could access without reinventing the wheel every time, and that's precisely what Amazon set out to build. And that's when they began to realize they might have something bigger. They just called it the cloud, meaning anyone with access could run an operating system, or any application for that matter, from the cloud. And since all that mattered was that they had access to the operating system or application, well, everything was great. When you ran the Linux operating system through your web browser in the first course to complete the labs, you didn't care about where it was running, who initially set it up, how much power and cooling it needed to run the machine, and so on. You had access to a machine on the cloud. So what exactly is this cloud?

In 2011, the National Institute of Standards and Technology, or NIST, in the US defined cloud computing, and since then this is the definition everyone has been using. It's easy to just ignore the definition, but let's go through it. They said cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources, for example, networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. So what does this mean? They called it a model for running computing on demand, and the idea was that these computing resources should easily be provisioned over a network with minimal effort. In the same paper, they also went on to define three service models of clouds: infrastructure as a service or IaaS, platform as a service or PaaS, and software as a service or SaaS. The four deployment models they defined were private cloud, community cloud, public cloud, and hybrid cloud.
They also defined the five essential characteristics of any cloud environment. In other words, take any cloud today, IBM Cloud, or Google's, or Amazon's, or any private cloud environment, and it should satisfy these characteristics. First, on-demand self-service. This means that the computing resources should be available on demand and anyone needing them can provision them on the go. Second, broad network access. This means that the services on the network should be accessible over standard clients, for example, mobile phones, tablets, laptops, and workstations. The third characteristic is resource pooling. This means that if you provide these resources over the network, then you should be able to pool your resources, say a storage pool or a computing pool, to serve multiple consumers using a multi-tenant model. The fourth characteristic is rapid elasticity. The idea here is that your cloud platform should scale rapidly outward and inward commensurate with demand. When resources are taken, the pool shrinks, and when resources are no longer needed, the pool grows again. The fifth characteristic, and the most important for businesses, is measured service. This means that your cloud platform should be able to tell you who used how much for how long.

Keep these five characteristics in mind when you, as a system administrator, want to pick a cloud service to run your computing needs. What level of measured service does this cloud offer? What self-service capabilities can I get out of this cloud? These are examples of some good questions that you can ask before deciding to transition to a cloud computing environment. Of course, these days you may never have to ask these questions again, as public cloud environments have become ubiquitously accessible. But still, there may be certain environments where moving to a public cloud may just not be feasible. Does that mean what you learn here is not useful? Not at all, because within a company's infrastructure, you could have a private cloud. And everything we cover in this course and the next can be applied to any cloud environment, private or public.

Now, let's take a quick detour. The cloud is transformational, and it is crucial that today's companies stay ahead of the evolution of computing technologies. These changes have historically followed a path that takes the compute solution from a nice-to-have, cost-cutting technology to one that drives transformation across and within industries. Consider the progression of computing over the last 50 years. In the 1960s, we had centralized computing. This was computing done at a central location using terminals attached to a central computer. The computer itself may have controlled all the peripherals directly, if they were physically connected to the central computer, or they may have been attached through a terminal server. Alternatively, if the terminals had the capability, they could connect to the central computer over the network. The terminals may be text terminals or thin clients. Moving forward into the 80s, we had the client-server architecture. It was optimized for low cost, simplicity, and flexibility. It distributed management across multiple departments and organizations, and this gave birth to a large number of PC-based applications. The cloud has become more prevalent in the last decade, with applications optimized for massive scalability and distribution of services. Clouds support a huge number of mobile devices and sensors, and it all builds on a network-based architecture.
It's driven by global acceptance of the Internet. So what started as a nice-to-have, cost-cutting measure has become the thing that has defined this era of computing. And you could say the same thing about the client-server architecture in the 80s: again, a low-cost, nice-to-have, flexible, and easy-to-use approach became the defining thing of its age. [MUSIC]