Okay, so let's prove that your program is a convex program. Obviously the only thing we need to worry about is the objective function. So let f be the negation of the objective function: previously we wanted to maximize minus one half of that double sum plus the sum of the lambda_i, and maximizing a concave function is the same as minimizing its negation, which should be a convex function. So to show the original objective is concave, we show that f is convex.

Okay, so given the function f, if we want to show that it is convex, we want to find its Hessian matrix. For the Hessian we take second-order derivatives, so obviously the linear term disappears. Now for the quadratic term, whenever we differentiate it with respect to lambda_i and then lambda_j, those two variables are of course removed, and you may see that the whole thing is really a sum over i and j. So for any i and any j, only the term whose indices match that particular i and j remains. For example, if you differentiate the whole thing with respect to lambda_1 and then lambda_2, what you obtain is just the coefficient of the lambda_1 lambda_2 term. That's how you get your Hessian.

So all we need to do is try to show that this big matrix is positive semidefinite. Well, there are a lot of ways of doing that, but checking leading principal minors or anything like that seems very difficult here, so let's use a slightly different technique. The thing is, you may observe that this is really a very regular kind of matrix. After some careful observation, you may see that whenever you look at an entry, for example this one, you may always split it into y_1 x_1 transpose as one factor multiplied by y_2 x_2 as the other. Eventually that allows us to split, or factorize, the big matrix into a product of two matrices, this one and that one. The first entry here, for example, is y_1 x_1 transpose times y_1 x_1, and so on and so on. So each entry here is actually a vector, a column vector, and that's why this is a column, this is a column, this is a column, which is how Z is defined. Your Z is actually an n-by-m matrix, so you may factorize the Hessian matrix into the product Z transpose times Z.

And once you do that, if we want to show the Hessian is positive semidefinite, we use the definition. It's interesting that here we don't use properties like leading principal minors or eigenvalues; we use the definition. The definition is that a matrix A is positive semidefinite if x transpose A x is non-negative for all x of the appropriate dimension. If that is the case, A is defined as positive semidefinite. So now let's see whether we have that: in x transpose A x, the Hessian can be decomposed into Z transpose Z, and then you see that x transpose Z transpose is indeed (Z x) transpose. That means we are actually taking the vector Z x, taking its norm, and squaring it, and obviously that gives a non-negative term. This is true whatever Z you have and whatever x you have, and that's how we prove that our dual program is indeed convex.
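(For reference, here is a written-out sketch of the algebra the lecture walks through, assuming the standard SVM dual objective, which the recording does not spell out verbatim; the test vector that the lecture calls x is written v here to avoid clashing with the data points x_i.)

```latex
% Sketch only -- assumes the standard SVM dual form.
% Dual (maximization) and its negation f, which we show is convex:
\[
\max_{\lambda}\;\sum_{i}\lambda_i
  - \frac{1}{2}\sum_{i}\sum_{j}\lambda_i\lambda_j\,y_i y_j\,x_i^{\top}x_j
\quad\Longleftrightarrow\quad
\min_{\lambda}\; f(\lambda)=
  \frac{1}{2}\sum_{i}\sum_{j}\lambda_i\lambda_j\,y_i y_j\,x_i^{\top}x_j
  - \sum_{i}\lambda_i .
\]
% Second-order derivatives: the linear term vanishes, and only the
% coefficient of lambda_i lambda_j survives, so
\[
\frac{\partial^2 f}{\partial\lambda_i\,\partial\lambda_j}
  = y_i y_j\,x_i^{\top}x_j ,
\qquad
\nabla^2 f = Z^{\top}Z ,
\quad\text{where the $i$-th column of } Z \text{ is } y_i x_i .
\]
% Positive semidefiniteness directly from the definition:
\[
v^{\top}(\nabla^2 f)\,v
  = v^{\top}Z^{\top}Z\,v
  = (Zv)^{\top}(Zv)
  = \lVert Zv\rVert^2 \;\ge\; 0
\quad\text{for every } v,
\]
% so f is convex and the dual program is indeed a convex program.
```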
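(And if you want to sanity-check the factorization numerically, here is a minimal sketch in Python/NumPy, assuming randomly generated toy data; the names X, y, Z, H and the dimensions chosen are illustrative and not from the lecture.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points x_i with labels y_i in {-1, +1}.
# (Hypothetical example data -- not from the lecture.)
n, d = 8, 3
X = rng.normal(size=(n, d))          # row i is x_i
y = rng.choice([-1.0, 1.0], size=n)  # label y_i

# Z has one column per data point: column i is y_i * x_i.
Z = y[None, :] * X.T                 # shape (d, n)

# The Hessian of the negated dual objective is H = Z^T Z,
# whose (i, j) entry is y_i * y_j * x_i^T x_j.
H = Z.T @ Z

# Check positive semidefiniteness two ways:
# 1) every eigenvalue is (numerically) non-negative,
eigvals = np.linalg.eigvalsh(H)
assert eigvals.min() >= -1e-10

# 2) v^T H v = ||Z v||^2 >= 0 for a random v (the definition used above).
v = rng.normal(size=n)
assert np.isclose(v @ H @ v, np.linalg.norm(Z @ v) ** 2)
assert v @ H @ v >= 0

print("min eigenvalue:", eigvals.min())
```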