As I've done in the past when looking at something complex, I'll approach the PowerVM virtualization tools as a layered stack. Again, remember, this is an overview. No need to stress about the details; just get the overall concepts. This is a great reference for the day you encounter an IBM Systems implementation.

As always, the layers start at the bottom, and in this case that's the hardware. This is the POWER9 hardware I covered earlier. Next up is the hypervisor layer. This is the layer where the actual virtualization of the big box occurs. I'll talk in more detail a little later about the POWER Hypervisor that is part of PowerVM. Here, I want to mention that Linux KVM is also supported on many Power Systems servers. The implementation of KVM on Power Systems looks, feels, and behaves the same way it does on x86 hardware. KVM can be a practical choice in environments where it is a proven part of an x86 landscape, whether replacing or supplementing that landscape. But overall, I think you'll find that PowerVM's resilience, production readiness, and scale make it the wise choice for most implementations on Power Systems.

The POWER Hypervisor is not the entire picture when it comes to PowerVM virtualization. What's popped into the graphic is the Virtual IO server. As I mentioned earlier, the VIOS, or Virtual IO server, is the key component for virtualizing IO adapter processing back to the LPARs. Since this is just an overview, I won't be going into many details on the VIOS. But keep in mind, this is a utility LPAR, installed alongside the Linux LPARs, that provides the intermediary processing needed to take IO requests from a Linux LPAR and pass them through the physical adapter. I'll make many more references to the VIO server in passing, but not in any great detail. The Linux LPAR does not need access to the physical IO adapter; only the VIO server does.
This enables efficient sharing of physical IO capacity. Think throughput here, not raw performance. IO bandwidth that might go unused when dedicated can, once virtualized, be used by other LPARs. There might be slight performance degradation on an individual transaction compared with a dedicated adapter, but the throughput gained by letting multiple LPARs access all the capacity of the IO adapter is generally worth the trade-off. Of course, this isn't always true, but I think you'll find that in the vast majority of cases it is.

Up to this point, I've addressed components that enable the system's processing directly by handling requests or processing those requests. The hardware management layer is handled by, you guessed it, the Hardware Management Console, or as I'll refer to it from now on, the HMC. This is the first component in the stack where the configuration of the system is addressed. The VIOS addresses configuration as well, but only in conjunction with the HMC or NovaLink, which I'll get to in a minute. LPAR configuration is done on the HMC using a robust GUI or a powerful CLI, the latter mostly used in scripting for automation, in my experience. I'll talk more about the HMC later in this module. For now, remember that without the HMC, the LPARs don't exist. It's the key component for creating the configuration information that the hypervisor loads and uses to control each LPAR's access to the physical hardware. Hold onto that thought as I finish up the layers; we'll get back to the HMC shortly with more details.

Next up is the advanced virtualization management layer. In this layer there are options, and the layer itself is optional. You only need it if you plan to integrate your Linux system into a cloud. We plan to do that, so we'll address this layer. Earlier I spoke about the HMC. The HMC was developed in a time before cloud computing.
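To give you a taste of that scripting-friendly CLI, here's a minimal sketch of a few HMC commands run over SSH. The HMC hostname, managed-system name, LPAR name, and profile name are all hypothetical placeholders, and exact attribute lists vary by HMC release:

```shell
# Connect to the HMC's restricted shell (hostname is hypothetical)
ssh hscroot@hmc01.example.com

# List the managed systems this HMC controls
lssyscfg -r sys -F name,state

# List the LPARs on one managed system
lssyscfg -r lpar -m Server-9009-42A-SN1234567 -F name,lpar_id,state

# Activate an LPAR using a named profile
chsysstate -r lpar -m Server-9009-42A-SN1234567 -o on -n linux01 -f default_profile
```

Because the output is flat, comma-separated text, looping over `lssyscfg` results in a script is a common way to automate checks across a whole fleet.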
Because of this, there are aspects of the HMC that make its integration into a cloud more challenging. It's possible, but the primary thing you run into is scaling issues. To solve that problem, IBM offers something called NovaLink to perform the server-level functions in the cloud computing stack. NovaLink is a partition, running RHEL or Ubuntu, that runs on each Power Systems server and handles the configuration tasks that had been handled by the HMC in the past. The LPAR concepts are unchanged. LPAR configuration on NovaLink is done using a CLI, but NovaLink leverages OpenStack. The name NovaLink comes from the OpenStack Nova integration that NovaLink performs, and PowerVC, which is also based on OpenStack, serves as the true graphical interface for NovaLink-enabled systems. I'll leave the NovaLink discussion there, but I encourage you to explore it more deeply.

Now, although you can use the cloud-based Cloud Management Console to manage the hardware platform more efficiently, I'm going to skip that product in favor of highlighting the cloud-enabling product, PowerVC. The third course, later on, will cover PowerVC in detail; I'll just provide an overview of PowerVC here. IBM Cloud PowerVC Manager, or IBM Cloud Power Virtualization Center Manager to use its full but cumbersome name, sits between the Power Systems management console and the IBM Cloud solutions. PowerVC manages PowerVM virtualization environments through a set of application programming interfaces, or APIs, interacting with the HMC or NovaLink. These APIs give the management console the instructions necessary to manage the Power Systems hardware, the POWER Hypervisor, NovaLink, and the VIO server. While you can create and manage resources with both the management console and PowerVC, PowerVC has more features, such as capturing partition and storage configurations and quickly deploying those captured images as copies.
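To make the OpenStack lineage concrete, here's a hedged sketch of both interfaces: first NovaLink's `pvmctl` tool on the server itself, then the standard OpenStack client pointed at a PowerVC host. The hostnames, credentials, project name, identity port, and LPAR name are hypothetical placeholders, and option details vary by release:

```shell
# --- On the NovaLink partition ---
# List the virtual machines (LPARs) and the Virtual IO servers on this host
pvmctl vm list
pvmctl vios list

# Power an LPAR on or off by name (LPAR name is hypothetical)
pvmctl vm power-on --object-id name=linux01
pvmctl vm power-off --object-id name=linux01

# --- Against PowerVC, via its OpenStack-compatible APIs ---
# Point the standard OpenStack client at PowerVC's identity service
export OS_AUTH_URL=https://powervc.example.com:5000/v3   # hypothetical host and port
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_PROJECT_NAME=ibm-default
export OS_IDENTITY_API_VERSION=3

# Nova-backed listings: deployed partitions and captured images
openstack server list
openstack image list
```

The point of the sketch is the lineage: `pvmctl` exposes the server-level Nova functions locally, while PowerVC speaks the same OpenStack APIs at the management layer.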
With PowerVC, you can also use pools of resources and optimize partition placement across Power Systems servers. It's a fully automated, more feature-rich and robust implementation than just doing things on the HMC or NovaLink.

All right. The final layer in my Power Systems management stack is the multi-cloud management layer. As in the advanced virtualization layer, there are options here, and again, the use of products in this layer is optional. Multi-cloud integration in this model means integration with x86-based clouds running VMware, and that integration is achieved through the use of VMware's vRealize products. The implementation and use of these products is for unique, but not unusual, circumstances. For the purposes of this course, it's sufficient for you to understand that this integration is possible and that the products noted are required to achieve it.