I hope you'll agree that shadow prices have some managerial implications. A shadow price measures how the objective value would change when you modify a right-hand side value. In our previous mathematical examples we saw that increasing a right-hand side by one may change the optimal basis, but I will say that in practice this rarely happens. Practical problems are large in scale, so when you increase the right-hand side of one constraint by one, typically the optimal basis stays where it is. So we do want to know the shadow prices.

One interesting thing is that the sign of a shadow price is determined by how the feasible region changes. The following proposition should be very clear and intuitive for everyone, I guess. It says that for any linear program, the sign of a shadow price follows the rule below. If your objective is a maximization one and your constraint is a less-than-or-equal-to constraint, then when the right-hand side is increased by one, your feasible region becomes larger, so the change can only help you: the amount of increase is positive or at least zero. On the other hand, if it is a greater-than-or-equal-to constraint, then increasing the right-hand side makes the constraint tighter, so the amount of increase is negative or at most zero. If it is an equality constraint and you change the right-hand side, there is really no way to predict the impact on the objective function; it may be positive, negative, or zero. On the contrary, for a minimization problem with a less-than-or-equal-to constraint, when the right-hand side increases the feasible region becomes larger and you may do better; doing better in a minimization problem means the objective value goes down, so the amount of increase is negative or zero. There is no way to get a positive increase in a minimization problem with a less-than-or-equal-to constraint. The other cases work in pretty much the same way.

My point here is that we are talking about the impact on the objective value when a right-hand side is increased by one. Depending on whether the constraint is a less-than-or-equal-to or a greater-than-or-equal-to one, and whether the problem is a maximization or a minimization, increasing the right-hand side by one either tightens or relaxes the constraint, and that may benefit or hurt your objective function. You should be able to understand all these impacts, and to predict what is going to happen. If you look at this table again, maybe you can see why we talk about shadow prices in the lecture on duality: just by looking at the objective function and the constraint signs we already get some idea about the sign of the shadow price, and that has a lot of connections with duality.

Whenever a constraint shifts without affecting an optimal solution, we expect the shadow price to be zero. The shadow price is zero if the constraint is non-binding at your optimal solution. That makes sense: if a constraint is non-binding at your optimal solution, like this one, then making it larger or smaller does not change your optimal solution.
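To see the sign rule in action, here is a minimal sketch in Python using scipy.optimize.linprog. The numbers are made up for illustration and are not from the lecture: it is a maximization problem with a less-than-or-equal-to constraint, so raising the right-hand side by one enlarges the feasible region and the optimal value cannot go down.

```python
# Minimal sketch of the sign rule (made-up numbers, not from the lecture):
# maximize 3*x1 + 2*x2  subject to  x1 + x2 <= b1,  x1 <= 3,  x1, x2 >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])        # linprog minimizes, so negate c to maximize
A_ub = np.array([[1.0, 1.0],      # x1 + x2 <= b1
                 [1.0, 0.0]])     # x1      <= 3

for b1 in (4.0, 5.0):             # increase the first right-hand side by one
    res = linprog(c, A_ub=A_ub, b_ub=[b1, 3.0])
    print(b1, -res.fun)           # maximum goes from 11 to 13: a change of +2, never negative
```

Here the shadow price of the first constraint is +2, consistent with the rule that a maximization problem with a less-than-or-equal-to constraint has a nonnegative shadow price.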
There is no impact on your optimal solution, and therefore none on your objective value. Finding shadow prices allows us to answer questions about an additional unit of each resource. We want to check that for all our resources, but there is no easy way to do that unless we apply duality. We have m constraints, and we don't really want to solve m separate LPs: increase the right-hand side of the first constraint by one and solve, then the second constraint by one and solve, and so on, m times. Instead we apply duality. Let's see how to do that.

The fact is that the dual optimal solution gives exactly your shadow prices. If you go back to the previous two pages and think about how the objective direction and the constraint signs determine the signs of the shadow prices, you already get some ideas about duality. Proposition 8, which says that if a constraint is non-binding then its shadow price must be zero, is just complementary slackness. Proposition 9 should not be too surprising for you: for any linear program, the shadow prices equal the values of the dual variables in a dual optimal solution. Said another way, if you are given a linear program and you want to find all the shadow prices, write down the dual, solve the dual, get the dual optimal solution, and then you have all the shadow prices. You don't need to solve m programs; you just need to solve one, and that's duality.

Very quickly, here is why this is true. Let's say B is the optimal basis of the original problem. Your objective value is z = c_B^T A_B^{-1} b; you know that. Now, for constraint 1, if b_1 becomes b_1 + 1, your new objective value is z' = c_B^T A_B^{-1} (b + e_1), where e_1 is the first unit vector, because the new right-hand side differs from the old one by one unit in the first entry. The first part gives you the original z; the second part is the first element of c_B^T A_B^{-1}. Of course, z' takes this form only if we assume the basis does not change. The key point is that this is the only change, and its impact is exactly the first element of c_B^T A_B^{-1}. That means the shadow price of constraint 1 is the first element of c_B^T A_B^{-1}, and in general the shadow price of constraint i is the ith element of c_B^T A_B^{-1}. You all know that c_B^T A_B^{-1} is indeed the dual optimal solution, so we are done. All you need to do is solve one dual program.

Coming back to this example, suppose we have two constraints and we want to find their shadow prices. We solve the dual program; it is just another linear program, so we are able to solve it, and the dual optimal solution is obtained as (4, 0). What does that mean? It means that for the original problem, the shadow price of the first constraint is four and the shadow price of the second constraint is zero. You may verify this very quickly. The first constraint says x_1 + x_2 should be greater than or equal to two; it's this one. The second constraint says 3x_1 + x_2 should be greater than or equal to one; it's the inner one. The inner one is non-binding, so it has no impact on the optimal solution, and that is why its shadow price is zero.
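To make the algebra concrete, here is a small numerical sketch of the c_B^T A_B^{-1} computation for this example. The lecture does not spell out the objective coefficients, so the cost vector c = (5, 4) below is an assumption chosen to be consistent with the optimal solution (0, 2) and the dual optimal solution (4, 0); everything else follows the argument above.

```python
# Hypothetical data consistent with the lecture's example (the cost vector is an assumption):
# min 5*x1 + 4*x2  s.t.  x1 + x2 >= 2,  3*x1 + x2 >= 1,  x1, x2 >= 0.
# With surplus variables s1, s2, the optimal basis at x = (0, 2) consists of x2 and s2.
import numpy as np

c_B = np.array([4.0, 0.0])            # costs of the basic variables (x2, s2)
A_B = np.array([[1.0,  0.0],          # columns of x2 and s2 in the standard-form constraints
                [1.0, -1.0]])
b   = np.array([2.0, 1.0])

y = np.linalg.solve(A_B.T, c_B)       # shadow prices: y^T = c_B^T A_B^{-1}
print(y)                              # about (4, 0), matching the dual optimal solution

z     = y @ b                         # original objective value, 8
z_new = y @ (b + np.array([1.0, 0.0]))  # increase b_1 by one, assuming the basis is unchanged
print(z_new - z)                      # 4.0: the shadow price of constraint 1
```

The same vector (4, 0) is what you would obtain by solving the dual program directly, which is the point of Proposition 9.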
Then for the first constraint, if you increase the right-hand side by one, what we will do is move the optimal solution from (0, 2) to (0, 3), and that gives you an increase of four. One thing to remind you of: when we talk about shadow prices, we are talking about the change in the objective value itself; it does not matter whether you are solving a minimization problem or a maximization problem.

Now let's conclude our discussion of shadow prices. We have learned how to evaluate a change in the right-hand side values, and all those right-hand side values are subject to change. There is no need to solve m LPs from the primal perspective; all we need to do is solve one dual linear program, and that is not just a coincidence. When your primal problem has many constraints, your dual problem has that many variables, and each primal constraint corresponds to one dual variable. That explains why solving for the dual variables answers all the questions about the primal constraints. Somehow that makes sense.

Knowing the duality theory, at least at this moment, helps us deal with this particular problem, this "what-if" analysis. If we didn't know duality, we really would have no other way: we would need to increase the right-hand side of the first constraint by one and solve, then the second constraint by one and solve, and so on; we would need to solve m LPs, but now we just need to solve one dual LP. That would be amazing. This is one kind of sensitivity analysis: a "what-if" analysis that shows how sensitive our optimal solution is to some small changes. We will have some other examples about this, but anyway, this is something you really need to do in practice. We hope that once you learn linear programming, or anything else in operations research, it has something to do with your practical decisions. But those useful things in many cases come from your understanding of the theories. That's pretty much all I have for now. Thank you.