Hi, everyone. Welcome back to Operations Research. We still need to continue our journey through theory, because we still have nonlinear programming theory to discuss. This is our Week 5, where we talk about the theory of nonlinear programming. For nonlinear programming there is actually a lot to talk about; today, we will focus on convexity. Again, convex and convexity are not terms we face in our daily lives, but for operations research, for nonlinear programming, and for many related applications such as statistics and machine learning, convexity is very important. If you want to do any further study in computer science, information science, information management, operations research, industrial engineering, and so on, convexity is definitely something you need.

We want to cover two ideas in today's lecture: convex sets and convex functions. The reason to talk about them is simple. Whenever you have a mathematical program, say a nonlinear program, to solve, you typically have many constraints, and eventually you have a feasible region. If your feasible region looks weird, say it has a hole in it, like this, you are going to create some difficulties in solving the problem, because you do not have nice properties for identifying optimal solutions. When the region is that weird, the optimal solution may be located anywhere. Somehow we need some regularity conditions on our feasible region, and that is what convex sets provide. We are going to define the idea of convex sets and show, for example, that the feasible region of a linear program is always convex.

It may be even more important to talk about convex functions, which look something like this. What is this? Suppose you have a nonlinear problem whose objective function looks like this.
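To make the claim about linear programs concrete, here is a minimal numerical sketch. The polyhedron and the two points below are hypothetical, not from the lecture: for a feasible region of the form {x : Ax ≤ b}, any convex combination of two feasible points stays feasible, which is exactly what convexity of the set means.

```python
import numpy as np

# A toy LP feasible region {x : A x <= b} (hypothetical example).
A = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([4.0, 0.0, 0.0])

def feasible(x, tol=1e-9):
    """Check the linear constraints A x <= b componentwise."""
    return bool(np.all(A @ x <= b + tol))

x1 = np.array([1.0, 2.0])   # feasible point
x2 = np.array([3.0, 0.5])   # another feasible point

# Every convex combination lambda*x1 + (1-lambda)*x2 is again feasible,
# because each constraint is linear.
for lam in np.linspace(0.0, 1.0, 11):
    assert feasible(lam * x1 + (1 - lam) * x2)
print("all convex combinations are feasible")
```

The same check would fail for a non-convex region, say one with a hole: a line segment between two feasible points could then pass through the hole.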
Then again, it is very difficult to solve this problem, because your optimal solution may be here, here, or here. It is not enough to just look at a local minimum: if you are trying to locate an optimal solution, a global minimum, then a local minimum may not be sufficient. That is why we need a definition of convexity, or of convex functions. A convex function always looks like this; it has a specific shape, so that if you have a local minimum, it is guaranteed to be a global minimum. We are going to give you formal definitions, and the idea is actually not so difficult.

The thing is, if you are only talking about a single-dimensional problem, where you have just one decision variable, then the idea is simple and solving it is not too hard. What we need is to generalize the whole idea to n dimensions, to n variables. For example, if you have two variables, then your curve becomes a surface like this: you have x1 and x2, and plugging them into your function gives you your z or y values. Convex functions then become harder to visualize and harder to analyze. We are going to give you some tools, for example, something called the gradient, something called the Hessian, something called positive semi-definiteness, and so on. If you are a fan of calculus, or if you are a fan of linear algebra, today we will make you happy. That is it for today. Later, you will see how calculus and linear algebra are needed for doing this convex analysis.
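As a preview of those tools, here is a small sketch using a hypothetical two-variable quadratic f(x1, x2) = x1^2 + x1*x2 + x2^2 (my example, not the lecturer's). Its gradient and Hessian are written out by hand, and the Hessian is tested for positive semi-definiteness via its eigenvalues, which is one standard way to certify that a twice-differentiable function is convex.

```python
import numpy as np

# Hypothetical example: f(x) = x1^2 + x1*x2 + x2^2.

def gradient(x):
    """Gradient of f: (df/dx1, df/dx2)."""
    x1, x2 = x
    return np.array([2.0 * x1 + x2, x1 + 2.0 * x2])

def hessian(x):
    """Hessian of f; for this quadratic it is a constant matrix."""
    return np.array([[2.0, 1.0],
                     [1.0, 2.0]])

def is_psd(H, tol=1e-9):
    """A symmetric matrix is positive semi-definite iff all eigenvalues are >= 0."""
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

H = hessian(np.zeros(2))
print("Hessian PSD, so f is convex:", is_psd(H))
```

Because the Hessian here has eigenvalues 1 and 3, both nonnegative, f is convex everywhere, so any point where the gradient vanishes (here the origin) is a global minimum.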