What about bandwidth savings? That's another thing we have to worry about. It turns out that middleboxes like web proxies and WAN accelerators are used to limit the WAN bandwidth used by an enterprise. For example, traffic from the enterprise goes through an HTTP proxy, NAT, and a firewall on its way out to the real world. If we want to move these functionalities into the public cloud, we are putting this network chain in the cloud, and that means the enterprise is going to incur high bandwidth consumption on the wide area network, consumption that was not there when these functions were in house. That's the downside of moving certain network functions into the cloud: the WAN bandwidth used by the enterprise goes up. The safest solution is not to migrate those types of services out of the enterprise at all. For instance, an HTTP proxy is not something we may want to move out, because it caches pages from the origin servers inside the enterprise, so that the enterprise doesn't have to waste bandwidth going out on the wide area network. Another clever technique proposed in the literature, one that retains the bandwidth savings even when offloading functions such as an HTTP proxy to the managed cloud, is to apply general-purpose traffic compression at the gateway between the enterprise and the cloud. The idea is to use protocol-agnostic compression techniques, and in many cases these techniques can achieve bandwidth savings comparable to what the original middlebox achieved by staying in house. In other words, even network functions that were designed to reduce wide area network traffic might be offloadable to the cloud, provided good bandwidth compression is employed at the gateway. The next question is deciding which cloud provider to select for hosting the offloaded functions.
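To make the protocol-agnostic compression idea concrete, here is a minimal sketch of one common technique behind it, redundancy elimination: the gateway chunks each payload, fingerprints the chunks, and sends a short reference in place of any chunk the far side has already seen. The chunk size, hash truncation, and function names are illustrative assumptions, not part of any specific WAN optimizer (real products typically use content-defined chunking rather than fixed-size chunks).

```python
import hashlib

CHUNK = 64  # bytes per chunk; illustrative (real systems use content-defined chunking)

def compress(payload: bytes, cache: dict) -> list:
    """Encode payload as a mix of literal chunks and references to cached ones."""
    out = []
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        h = hashlib.sha256(chunk).digest()[:8]
        if h in cache:
            out.append(("ref", h))        # 8-byte token instead of a 64-byte chunk
        else:
            cache[h] = chunk
            out.append(("lit", chunk))
    return out

def decompress(tokens: list, cache: dict) -> bytes:
    """Reassemble the payload on the far side of the WAN link, updating its cache."""
    parts = []
    for kind, val in tokens:
        if kind == "ref":
            parts.append(cache[val])
        else:
            h = hashlib.sha256(val).digest()[:8]
            cache[h] = val
            parts.append(val)
    return b"".join(parts)
```

Because the scheme never looks inside the bytes, it works for HTTP, DNS, or any other protocol crossing the gateway, which is exactly why it can recover much of the savings an in-house HTTP proxy provided: a repeatedly fetched page crosses the WAN as a stream of small references after its first transfer.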
Now, we all know that technology giants like Amazon have a few large Points-of-Presence. On the other hand, CDNs like Akamai have large numbers of small Points-of-Presence. These are the two extremes, and what you see in this graphic, once again from the research literature, is the time it takes to reach Amazon-like versus Akamai-like Points-of-Presence. The blue line is the CDF for Amazon-like Points-of-Presence and the red line is for Akamai-like Points-of-Presence; once again, these are simulated on PlanetLab. You can see that most of the latencies are quite small for an Akamai-like footprint compared to an Amazon-like footprint, in terms of the choice of Points-of-Presence to route the traffic to. Another emerging trend is edge computing. We talked about this in the second mini course, when we covered systems issues in cloud computing and introduced the idea of edge computing. That is another option in terms of where to place these network functions, and we'll come back to it in a minute. As it turns out, telecom providers are ideal for edge computing. Providers like AT&T and Verizon have a geographical footprint that is much denser than that of the large datacenters like Amazon, or even the CDNs like Akamai. Residential broadband service providers are already in the business of providing specialized network functions such as the virtual broadband network gateway, which provides residential broadband users with services like subscriber management, policy and quality-of-service management, DNS, routing capabilities, and so on. Service providers may also offer services such as a Video-on-Demand CDN or virtual set-top boxes to customers who want to use them.
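The intuition behind that CDF comparison can be sketched with a toy simulation (this is an illustration of the geometric effect, not a reproduction of the PlanetLab data): scatter clients on a unit square, place either a few large PoPs or many small ones, and use the distance to the nearest PoP as a latency proxy. The client counts, PoP counts, and seed are arbitrary assumptions.

```python
import random

def nearest_pop_latency(clients, pops):
    """Latency proxy: Euclidean distance from each client to its closest PoP."""
    return [min(((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 for px, py in pops)
            for cx, cy in clients]

def cdf(samples):
    """Return sorted (value, cumulative fraction) pairs, i.e. an empirical CDF."""
    s = sorted(samples)
    n = len(s)
    return [(v, (i + 1) / n) for i, v in enumerate(s)]

random.seed(42)
clients = [(random.random(), random.random()) for _ in range(2000)]
few_large = [(random.random(), random.random()) for _ in range(5)]     # Amazon-like
many_small = [(random.random(), random.random()) for _ in range(200)]  # Akamai-like

lat_few = nearest_pop_latency(clients, few_large)
lat_many = nearest_pop_latency(clients, many_small)
```

Plotting `cdf(lat_many)` against `cdf(lat_few)` reproduces the qualitative picture from the lecture: the dense, Akamai-like footprint's CDF rises much earlier, meaning most clients see a small latency to their nearest PoP.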
So in other words, the telecom providers are already in the game of providing specialized network services for broadband users by hosting them on edge-computing resources that sit close to where the subscribers are. What I'm showing you here is that the telecom providers are using these edge-computing resources for hosting their own network functions, and those same compute resources are obviously potential candidates for offloading enterprise network functions as well. In this context, it's worth mentioning the OpenCORD initiative. This is an attempt by telecom providers, who own central offices containing switching equipment, to host additional compute resources that can be used for offloading network functions. CORD stands for Central Office Re-architected as a Datacenter. The idea is that telecom providers have central offices that are distributed geographically. Just as an example, here is the set of central offices around Atlanta: you can see that they are geo-distributed across the metropolitan area, giving a large number of Points-of-Presence. If you equip those central offices with additional general-purpose servers, they can serve as an infrastructure for deploying the network functions that the service providers themselves have to run, and, in addition, for hosting third-party network functions. That allows enterprises to host their network functions on virtualized hardware that is close to where the enterprises are, colocated with the telecom providers' own network functions, just as I mentioned in the previous slide. This becomes a candidate realization of mobile edge computing.
So there is a good incentive for infrastructure owners to virtualize their infrastructure and make it available for hosting the network functions of enterprises; there is a business opportunity here, and that is what the OpenCORD initiative is trying to exploit. It turns out that organizations like Chick-fil-A or Honeywell have geo-distributed sites, and each site needs multiple services, including firewalls, intrusion detection, deep packet inspection, HTTP proxies, WAN optimizers, and so on. Even though they are geo-distributed, all of these remote sites require the illusion of a homogeneous network. Up until now, the way it has been done is that each premises has custom on-premise hardware implementing the required network services, and those are obvious candidates to be offloaded to a managed service. A solution that provides this illusion of a homogeneous network is for enterprises to create what is called virtualized customer premise equipment, or vCPE. A vCPE instance is associated with every branch of a particular enterprise, so that the branches all have the look and feel of being part of the corporate infrastructure, the enterprise as a whole. The vCPE serves as a gateway for the different parts of an enterprise network to connect to one another, and it can be placed in an edge PoP or a centralized datacenter, so that you don't need on-premise equipment for the functions required at every branch. This is an industry solution for migrating NFV to a cloud service.
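The vCPE idea can be sketched structurally: every branch gets a gateway instance that applies the same ordered chain of network functions, which is what gives geo-distributed sites the illusion of one homogeneous network. This is a hypothetical model for illustration; the function names, the packet representation, and the drop-on-None convention are all assumptions, not any vendor's API.

```python
def firewall(packet):
    """Toy firewall NF: drop telnet traffic (port 23), pass everything else."""
    if packet.get("port") == 23:
        return None
    return packet

def dpi(packet):
    """Toy deep-packet-inspection NF: tag the packet as inspected."""
    return dict(packet, inspected=True)

class VCPE:
    """One virtualized CPE per branch. Every branch runs the same chain,
    hosted in an edge PoP or datacenter instead of on-premise hardware."""
    def __init__(self, branch, chain):
        self.branch = branch
        self.chain = chain

    def process(self, packet):
        for nf in self.chain:
            packet = nf(packet)
            if packet is None:   # some NF in the chain dropped the packet
                return None
        return packet

# The same service chain deployed for three illustrative branch sites.
chain = [firewall, dpi]
branches = [VCPE(site, chain) for site in ("atlanta", "austin", "boston")]
```

The point of the sketch is the deployment model: because the chain is defined once and instantiated per branch in the cloud or edge PoP, adding a branch or updating a network function is a software change rather than a hardware rollout at every site.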