Now, remember last time we were considering a setting where we wanted to minimize ||y - X1 beta||^2, where X1 had a first column of J_n1 followed by a bunch of 0s, and a second column of 0s followed by J_n2, where J is again a vector of 1s:

    X1 = [ J_n1   0_n1
           0_n2   J_n2 ]

So we have two groups of data, and we found that our estimate works out to beta-hat = (y1-bar, y2-bar), where beta in this case is (beta1, beta2).

Now consider minimizing ||y - X2 gamma||^2, where X2 has a first column J_{n1+n2}, and a second column that is J_n1 stacked on top of a vector of 0s of length n2, which I'll write 0_n2:

    X2 = [ J_n1   J_n1
           J_n2   0_n2 ],   gamma = (gamma1, gamma2)

Now notice, if I add the two columns of X1, I get the first column of X2 right here. And similarly, if I take the first column of X2 and subtract the second one, I get the second column of X1. So what we see is that X1 and X2 have an identical column space. And what we know from our projection argument is that the fitted values from both models have to be the same.

With the fitted values from model 1, for any observation in group 1 the fitted value is going to be y1-bar, and for any observation in group 2 it's going to be y2-bar. So we know that beta1-hat = y1-bar and beta2-hat = y2-bar; we know that because we worked it out in the last example.

Okay, now look at X2 times gamma-hat. Well, the fitted value for anyone in group 1 is going to be gamma1-hat plus gamma2-hat. And for anyone in group 2, it's got to just be gamma1-hat by itself. So the fitted values have to satisfy these equations, and they have to agree, because the column space of the two is the same. What we know then is:

    beta1-hat = y1-bar = gamma1-hat + gamma2-hat
    beta2-hat = y2-bar = gamma1-hat

We can use that to now solve for gamma1-hat and gamma2-hat without actually having to go through the trouble of inverting the matrix X2'X2. Now it's a 2-by-2 matrix, so it shouldn't be that hard to invert. But suppose you had a harder setting, say 10 columns; then it would be a little bit harder to invert. This is a common trick in these ANOVA-type examples: you reparameterize to the easy case, where you get a bunch of block-diagonal one-vectors as in X1, in which case X'X works out to be a diagonal matrix that's very easy to invert. And then if you want a different parameterization, one whose X'X is hard to invert, you can use the fact that the fitted values have to be identical to convert between the parameters after the fact. Okay?

So in this case, you know that gamma1-hat has to be equal to beta2-hat. And then, just plugging into those two equations, gamma2-hat has to be equal to beta1-hat minus beta2-hat. That gives you a very quick way to go between the parameters of equivalent linear models with different parameterizations. Okay? So it's a useful trick when you're working with these ANOVA-type models.
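To see the trick concretely, here is a minimal numerical sketch of the two parameterizations. The group sizes, data, and variable names are made up for illustration; it just checks that the fitted values agree and that the conversion gamma1-hat = beta2-hat, gamma2-hat = beta1-hat - beta2-hat holds.

```python
import numpy as np

# Hypothetical group sizes and data, just for illustration.
rng = np.random.default_rng(0)
n1, n2 = 5, 7
y = np.concatenate([rng.normal(2.0, 1.0, n1), rng.normal(5.0, 1.0, n2)])

# X1: block-diagonal one-vectors (the easy, group-means parameterization).
X1 = np.zeros((n1 + n2, 2))
X1[:n1, 0] = 1.0
X1[n1:, 1] = 1.0

# X2: an overall column of 1s plus a group-1 indicator.
X2 = np.column_stack([np.ones(n1 + n2), X1[:, 0]])

# Least-squares fits under each parameterization.
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)
gamma_hat, *_ = np.linalg.lstsq(X2, y, rcond=None)

# Same column space, so the fitted values agree.
assert np.allclose(X1 @ beta_hat, X2 @ gamma_hat)

# beta-hat is the pair of group means ...
assert np.allclose(beta_hat, [y[:n1].mean(), y[n1:].mean()])

# ... and the conversion from the argument above holds:
# gamma1-hat = beta2-hat, gamma2-hat = beta1-hat - beta2-hat.
assert np.allclose(gamma_hat, [beta_hat[1], beta_hat[0] - beta_hat[1]])
print(beta_hat, gamma_hat)
```

Note the design choice: everything is solved with ordinary least squares; the point is only that you never need to invert X2'X2 by hand, since equating the fitted values lets you read gamma-hat off from beta-hat directly.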