Hi, everybody. This is Enrique. Today I'm going to walk you through the final project of this mini-course, called proactive rules. It integrates the knowledge you gained from the previous workshops and classes. This is a more complex assignment than the previous workshops, and it will probably take some time to complete successfully. The previous workshops gave you the required ammunition to build a more complex system.

There are two objectives for this project. First, you are going to implement a Ryu controller that is able to adapt both to network topology changes and to traffic changes. Second, you are going to implement a test framework that programmatically generates traffic patterns, so that instead of running commands on the command-line interface of each host, you just press enter and it starts sending packets between the hosts, letting you test the rules your controller created.

The first step in building the system is monitoring. What do I mean by monitoring? Basically, you are going to have a monitor that constantly gathers status information from the switches and stores it in the controller. Technically, you are going to spawn a thread on the controller that retrieves switch information every T_1 seconds, as explained in the project description. You can find a lot more information about this in the ryu-book, and you can use its traffic monitor as the base or starting skeleton code for the project (a minimal sketch follows below).

As we mentioned before, we want you to be able to successfully handle changes in the topology. To do that, we want you to create a graph of the topology. Given that Ryu is based on Python, you have two options: you can use either igraph or NetworkX. These are Python libraries that make it much easier to implement graphs and to run algorithms on top of them. Just like the packet-in event we mentioned in the previous workshop, there are Ryu events associated with switches and links that appear; such events are called EventSwitchEnter and EventLinkAdd. You can find more information about them in the Ryu source code. That's the nice thing about open source projects: you can go in and learn the details of how they are implemented. Here we just want to check which events are available for detecting topology changes.

Something you need to remember is to use the --observe-links flag when running Ryu. Otherwise, the Ryu controller will not detect the links, and when you start creating the graph you will see it is empty: you only have nodes but no links at all. So if you see an empty link list, it means you need to include this flag.

Now that we have information about the topology of the system, we can use it to compute shortest paths. What do I mean by shortest path? Basically, we look for the fastest way to go from Host 1 to Host 2. You are going to modify your Ryu controller to use the topology graph you just created, installing rules based on the shortest path that the graph returns. Finally, the corresponding packet-in handler needs to install entries that match packets based on the destination MAC and the Ethernet type.
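To make the monitoring step concrete, here is a minimal sketch in the spirit of the ryu-book traffic monitor. The class name and the T_1 value are placeholders, not part of the assignment spec; the point is the thread spawned with hub.spawn and the periodic statistics requests.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub
from ryu.ofproto import ofproto_v1_3

T_1 = 10  # monitoring period in seconds; placeholder, use the value from the project description


class MonitorApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(MonitorApp, self).__init__(*args, **kwargs)
        self.datapaths = {}
        # The monitoring thread runs next to the regular event handlers.
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Track which switches are currently connected.
        dp = ev.datapath
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[dp.id] = dp
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(dp.id, None)

    def _monitor(self):
        # Every T_1 seconds, ask every known switch for its statistics.
        while True:
            for dp in self.datapaths.values():
                parser = dp.ofproto_parser
                dp.send_msg(parser.OFPFlowStatsRequest(dp))
                dp.send_msg(parser.OFPPortStatsRequest(dp, 0, dp.ofproto.OFPP_ANY))
            hub.sleep(T_1)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        # Store the byte counters here; the adaptive rules will use them later.
        for stat in ev.msg.body:
            self.logger.info('dpid=%016x bytes=%d',
                             ev.msg.datapath.id, stat.byte_count)
```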
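Similarly, here is a minimal sketch of keeping a NetworkX graph in sync with the topology events mentioned above. get_switch and get_link come from ryu.topology.api; everything else (the class name, rebuilding the whole graph on every event) is just one simple way to do it, assuming Ryu was started with --observe-links.

```python
import networkx as nx
from ryu.base import app_manager
from ryu.controller.handler import set_ev_cls
from ryu.topology import event
from ryu.topology.api import get_switch, get_link


class TopologyGraph(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(TopologyGraph, self).__init__(*args, **kwargs)
        self.graph = nx.DiGraph()

    @set_ev_cls(event.EventSwitchEnter)
    def _switch_enter_handler(self, ev):
        self._rebuild_graph()

    @set_ev_cls(event.EventLinkAdd)
    def _link_add_handler(self, ev):
        self._rebuild_graph()

    def _rebuild_graph(self):
        # Rebuild the graph from Ryu's current view of switches and links.
        self.graph.clear()
        for switch in get_switch(self, None):
            self.graph.add_node(switch.dp.id)
        for link in get_link(self, None):
            # Remember the output port so rules know where to forward.
            self.graph.add_edge(link.src.dpid, link.dst.dpid,
                                port=link.src.port_no)
```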
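And to tie the graph to rule installation, here is a hedged sketch of installing a rule on every switch along a shortest path. The install_path_rules name and the priority value are assumptions for illustration; the method is meant to live inside a controller class that has the self.graph and self.datapaths structures from the two sketches above.

```python
def install_path_rules(self, path, dst_mac):
    # path is a list of dpids, e.g. from nx.shortest_path(self.graph, src, dst).
    for here, nxt in zip(path, path[1:]):
        datapath = self.datapaths[here]
        parser = datapath.ofproto_parser
        # The graph edge remembers which port leads to the next switch.
        out_port = self.graph[here][nxt]['port']
        # Match on the Ethernet type and the destination MAC, as described above.
        match = parser.OFPMatch(eth_type=0x0800, eth_dst=dst_mac)
        actions = [parser.OFPActionOutput(out_port)]
        inst = [parser.OFPInstructionActions(
            datapath.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                            match=match, instructions=inst))
```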
Each switch should then send the packet through the port that lies on the shortest path to the destination.

There is no simple way to obtain the latency and bandwidth of the links of a system using only OpenFlow messages. To simplify this assignment, together with the topology you are going to give the controller a JSON file that contains these values keyed on the ports to which the links are connected, but independent of the switches. So each entry in the JSON is going to be something like: port in, port out, the bandwidth, and the latency. What's the idea behind this? We want the flexibility of defining different types of links in your topology, but we don't want you to have to define the whole topology in this file. If you start including the switches here, then you need to define the whole topology before you actually start the system, and that's something we don't want. If you actually want to automate this and not use a JSON file, you can run iperf and ping before you start sending packets, so that you can measure the maximum bandwidth and the latency of each link. But that's not required for this assignment; it's only if you are interested in making this a more complete system. (A sketch of one possible file format follows below.)

The previous step is meant to be used with the latency field of the JSON file, but we also have bandwidth. Sometimes we care more about how much bandwidth we can use than about how long it takes to reach the host. So we are also going to implement static rules that take the widest path. It's pretty much the same idea as the previous code, but you need to use a modified version of Dijkstra that, instead of looking for the shortest path, calculates the widest path. The overall steps are the same: we create a Ryu application that uses the topology graph, we install rules based on the widest path the graph returns, and after installing the rules, each packet goes through a port that is connected to the widest path. Those are the main components of the static rules.

You will notice that the first step of this project was to have a monitoring system. So let's try to use that information to build a system that can cope with changes in the traffic flowing through it, because previously we just took the topology information and used it blindly at each step. We are going to add logic to the widest-path controller so it can adapt to changes in the traffic flowing through the network. For each flow in the network, you are going to maintain a list containing the bandwidth that has been used. The list is going to have a size of S_1, and every T_1 seconds (remember that T_1 is the monitoring period, so every T_1 we get information from the switches) we push the measured bandwidth consumption into the list. We take the average over all the values in the list and use it to estimate how the flow will behave in the future. Each time a new flow needs to be installed, we subtract the averages of the bandwidth-used lists from each of the links in the topology, and then, based on the residual capacity of the system, we calculate the path with the highest available bandwidth. As you can see, the algorithm is pretty similar to the previous one, but before installing a new rule, you need to take into account the bandwidth that was consumed over the previous S_1 monitoring periods.
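To make the link-properties file concrete, here is one possible shape for the JSON along with a loader. The field names (port_in, port_out, bw, latency) are an assumption, since the assignment only fixes what information the file must carry, not its exact layout.

```python
# One possible shape for the link-properties JSON file:
#
# [
#   {"port_in": 1, "port_out": 2, "bw": 100, "latency": 5},
#   {"port_in": 2, "port_out": 1, "bw": 100, "latency": 5}
# ]
#
import json


def load_link_properties(path):
    # Index the entries by (port_in, port_out) so the controller can
    # annotate graph edges without knowing the whole topology in advance.
    with open(path) as f:
        entries = json.load(f)
    return {(e['port_in'], e['port_out']): (e['bw'], e['latency'])
            for e in entries}
```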
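Going back to the widest-path step, here is a sketch of the modified Dijkstra: instead of minimizing the sum of the latencies, it maximizes the minimum bandwidth along the path. It assumes the graph edges carry a 'bw' attribute like the one loaded from the JSON file above, and it raises KeyError if the destination is unreachable.

```python
import heapq


def widest_path(graph, src, dst):
    best = {src: float('inf')}     # widest known bottleneck from src to each node
    prev = {}
    heap = [(-float('inf'), src)]  # max-heap simulated with negated widths
    while heap:
        width, node = heapq.heappop(heap)
        width = -width
        if node == dst:
            break
        if width < best.get(node, 0):
            continue  # stale heap entry
        for nbr in graph.neighbors(node):
            # A path is only as wide as its narrowest link.
            w = min(width, graph[node][nbr]['bw'])
            if w > best.get(nbr, 0):
                best[nbr] = w
                prev[nbr] = node
                heapq.heappush(heap, (-w, nbr))
    # Walk the predecessors back from dst; raises KeyError if unreachable.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```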
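And for the adaptive part, here is a sketch of the per-flow bandwidth window: a fixed-size deque of S_1 samples, filled every T_1 seconds by the monitor, whose average is what you subtract from the link capacities before running the widest-path search. The class and the S_1 value are placeholders.

```python
from collections import deque

S_1 = 5  # window size; placeholder, use the value from the project description


class FlowHistory(object):
    def __init__(self):
        self.samples = {}  # flow id -> last S_1 bandwidth samples

    def push(self, flow_id, bandwidth):
        # Called every T_1 seconds from the flow-stats reply handler.
        self.samples.setdefault(flow_id, deque(maxlen=S_1)).append(bandwidth)

    def average(self, flow_id):
        # The estimate of the flow's future consumption.
        window = self.samples.get(flow_id)
        return sum(window) / len(window) if window else 0.0


# Before computing a widest path for a new flow, subtract the averages
# from every link that the existing flows traverse, e.g.:
# residual_bw = link_bw - sum(history.average(f) for f in flows_on_link)
```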
The following step is redistribution. This is a load-balancing extension to the proactive rule creation. The main idea is that now, every T_2 seconds, we are going to try to redistribute the flows over the available links in the system. To be able to do this, the controller needs to maintain information about the bytes sent between pairs of hosts: the source, the destination, and the bytes that have been sent so far. Every T_2 seconds we use this information to load-balance the system, and it's pretty much the same idea as before. We use the topology graph with the default bandwidth values. We initialize the list of rules to be installed as empty; basically, we are going to try to install all the rules from scratch. We sort the comm_list from the most bytes sent to the fewest, giving higher priority to the flows that have sent more data so they are scheduled first. Then, for each element of this sorted comm_list, we find the widest path, add the required rules to the list to be installed, and reduce the capacity of the corresponding links in the topology. Finally, if all the hosts are still reachable, we apply the generated rules. (A sketch of this loop follows below.)

There is a small drawback to this simple redistribution rule: sometimes the averages can be higher than the available bandwidth. If that's the case, and after subtracting the bandwidth from the links some host becomes unreachable from another host even though there is an actual connection between them, then we fall back to the static rules, because we cannot allow the system to lose connectivity between any pair of hosts. But this only happens in an almost oversubscribed system in which the averages get really high, and where the flow consuming the most bandwidth keeps fluctuating from one flow to another.

Those are the main components of this project. As you can see, there are many steps before you actually reach the redistribution part, but along the way you will learn how to gather information about the network, how to use that information to make more intelligent decisions when creating rules, and you will end up with a really interesting software-defined network for your applications, or just for you to play around with and learn about software-defined networking.
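Here is a sketch of that redistribution pass, reusing the widest_path function from earlier. The comm_list and demands structures and the rule representation are assumptions for illustration; in a real controller the hosts would first be mapped to the switches they attach to, and the returned rules would then be pushed to the switches.

```python
def redistribute(graph, comm_list, demands, static_rules):
    # comm_list: [(src, dst, bytes_sent)]; demands: {(src, dst): average bw}.
    rules = []  # start from scratch, as if no rules were installed
    # Heaviest talkers first, so the biggest flows get the widest paths.
    for src, dst, _ in sorted(comm_list, key=lambda c: c[2], reverse=True):
        try:
            path = widest_path(graph, src, dst)
        except KeyError:
            # The averages oversubscribed the links and some pair became
            # unreachable: fall back to the static rules so every pair of
            # hosts can still talk.
            return static_rules
        rules.append((src, dst, path))
        # Consume this flow's estimated bandwidth along its path.
        for u, v in zip(path, path[1:]):
            graph[u][v]['bw'] = max(0, graph[u][v]['bw'] - demands[(src, dst)])
    return rules
```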