So, to introduce the specific basis functions that we use for our Spectral Element Method, we take another detour here and look into function interpolation again, as we did in the chapter on the Pseudospectral Method. Again, similar to the Chebyshev method, we have a limited area here because we are inside an element, which is why we want to make use of the Lagrange polynomials, which are defined between -1 and 1. Now, how are these Lagrange polynomials defined? What is the equation describing these polynomials? Here is the definition of the Lagrange polynomials. Actually, for the first time, we encounter a product sign. Our basis functions, phi_i, will now be replaced by Lagrange polynomials that we will always denote by a small letter l, with subscript i, which is the i-th polynomial of order N, given as a superscript: l_i^N(xi) is the product over j from 1 to N+1, with j not equal to i, of the terms (xi - xi_j) / (xi_i - xi_j). The first observation we can make immediately is that even though the product goes over j from 1 to N+1, where N is the maximum polynomial order, j cannot be equal to i, because then we would divide by zero. So that is one of the first important aspects here. We have not talked about the actual points with subscript i or subscript j yet. Actually, you have to define N+1 collocation points xi_i that are used inside the element; we have not specified those yet and will do this later. Still keeping this general, you can write down this product as a long list of terms, as you can see here, and there are a couple of things to note. First of all, this is a continuous function inside the domain from minus one to one. But what are the values at those specific points? You can see here that for Lagrange polynomials of order N, at the points xi_j with j not equal to i, the Lagrange polynomial l_i is actually zero. We will see this graphically in a minute.
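The product formula above can be sketched in a few lines of code. This is a minimal illustration, not the lecture's own sample code; the helper name `lagrange` and the equidistant test points are my assumptions.

```python
import numpy as np

def lagrange(i, x, xi):
    """Evaluate the i-th Lagrange polynomial defined on the
    collocation points xi at position x.
    Skipping j == i avoids the division by zero noted above."""
    num, den = 1.0, 1.0
    for j in range(len(xi)):
        if j != i:
            num *= x - xi[j]        # (xi - xi_j)
            den *= xi[i] - xi[j]    # (xi_i - xi_j)
    return num / den

# Three points in [-1, 1], order N = 2, just for illustration
xi = np.array([-1.0, 0.0, 1.0])
print(lagrange(0, -1.0, xi))   # 1.0 at its own point
print(lagrange(0,  0.0, xi))   # 0.0 at every other point
```

Evaluating the polynomial at its own point gives one, at any other collocation point zero, exactly as described above.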
This is just like saying again that the value is one at one grid point, or collocation point, and zero at all other points; we have seen this before. And of course the proof follows below that: at the point xi_i, the value of the Lagrange polynomial l_i is one. We will look at this in more detail graphically. Another very important point is that the Lagrange polynomials have a kind of delta property: if you evaluate polynomial i at the collocation point xi_j, the result is zero, except if i equals j, in which case you of course recover the value one. This can be expressed using the Kronecker delta delta_ij. If we write down the Lagrange polynomials not in the entire domain, but only ask for the values at the specific collocation points xi_j, we would write it like this: l_i^N at the point xi_j is equal to the product as we have seen before, and that is actually zero, unless i equals j, in which case it is one. So this can also be written using the Kronecker delta: l_i(xi_j) is equal to delta_ij. That is very powerful, and we are going to use it later, in particular for the mass matrix; we will see this later. So now comes the question: which collocation points to use inside each element? We are going to use the so-called Gauss-Lobatto-Legendre points, often referred to as GLL points. Let's first look at them graphically. We start at the bottom with order one, in which case you only have the two end points of the domain, minus one and one. These are the edges of the element, and of course that only allows a linear function to be described; basically, we recover in this case the classic linear finite element method. If you increase the order to N equals two, you will have three points. On a global scale this will actually lead to an equidistant set of collocation points, because you only have one additional point inside the element.
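The delta property l_i(xi_j) = delta_ij can be checked numerically; here is a small sketch (the collocation points are arbitrary distinct values, since the property holds for any set of distinct points, not only GLL points):

```python
import numpy as np

def lagrange(i, x, xi):
    # Product over j != i of (x - xi_j) / (xi_i - xi_j)
    num, den = 1.0, 1.0
    for j in range(len(xi)):
        if j != i:
            num *= x - xi[j]
            den *= xi[i] - xi[j]
    return num / den

xi = np.array([-1.0, -0.4, 0.3, 1.0])   # any distinct points will do
# Evaluating every polynomial at every collocation point gives the identity matrix
M = np.array([[lagrange(i, xj, xi) for xj in xi] for i in range(len(xi))])
print(np.allclose(M, np.eye(len(xi))))  # True: l_i(xi_j) = delta_ij
```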
If the order is the same and the elements are the same size, the global set of points would indeed be equidistant. Then, as the order increases, you can very nicely see that something happens just like in the Chebyshev case: there is a densification of the points as you approach the boundaries. These specific collocation points are, for order N, the roots of the first derivative of the Legendre polynomial of degree N, together with the two end points; with this choice, the Lagrange polynomials themselves are bounded between minus one and one. This minimizes interpolation errors and makes them a very, very good choice for function interpolation. Now, an important point that comes later is that we also use exactly those GLL points to perform the numerical integration inside this domain, and this will actually be the trick that later makes the mass matrix diagonal; we will come to that. So that is a very important step here. For the moment, we can simply take the locations of these GLL points from a table; I will show that later. I can see it in your faces: you have been waiting to see what these Lagrange polynomials look like. So, here we go. We start with the smallest order, which would be one, and this of course will be a linear function. You can see here, in the background, Lagrange polynomials of increasing order. Order two would mean equidistant points inside the element, which of course also translates to the global domain, provided that the elements are of the same size. If you go to higher orders, you can see nicely the form and the shape of these Lagrange polynomials and the densification near the boundaries. Now, something really cool from a simulation point of view is that the order N of the polynomials is a parameter of your simulation. So basically, in principle, you can just change that parameter to run simulations with increasing order without changing anything else.
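Rather than reading the GLL points from a table, they can also be computed: the interior points are the roots of the derivative of the Legendre polynomial of degree N. A sketch using NumPy's Legendre module (the helper name `gll_points` is mine, not from the lecture code):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_points(N):
    """GLL points of order N: the end points -1 and 1 plus the
    roots of P_N', the derivative of the degree-N Legendre polynomial."""
    c = np.zeros(N + 1)
    c[N] = 1.0                                # Legendre coefficients of P_N
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

for N in (1, 2, 4):
    print(N, np.round(gll_points(N), 4))
# N = 4 gives five points, denser toward the element edges
```

For N = 1 and N = 2 this recovers the end points and the equidistant three-point set described above; for N = 4 the interior points cluster toward the edges.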
At least this is the way we have written our sample spectral element code. So, that's very cool. But should we go to higher and higher orders to make our simulations more and more accurate? The answer is no. Why not? Because, as was the case for the pseudospectral Chebyshev method, where we had this densification of grid points at the boundaries, this is a problem if we think of the Courant criterion. The time step also depends on the smallest grid distance, and if that becomes smaller and smaller, you need to make the time step smaller and smaller in your simulation. In that case, it might become too expensive and very inefficient. You can imagine that if you sample a wave, a sinusoidal wave, or even a wave containing different frequencies, you might oversample the wavefield and use many more grid points at the edges of the elements compared to the center of the element. That does not make sense, and that is actually the reason why, in realistic simulations, the order of the polynomials in the Lagrange spectral element method is usually not higher than four. The order N equals four is used, and then you basically have five collocation points and four distances inside your element, where the differences in the distances between the collocation points are basically negligible. So, let's make an example of how to interpolate an arbitrary function using Lagrange polynomials. We seek an approximation, which we call u-bar, as a function of xi: u-bar(xi) is the sum from i equals one to N+1 of u(xi_i) times l_i(xi), where N is the maximum order of our polynomials, u(xi_i) are the displacement values at the points xi_i, and the l_i are continuous functions of xi. As an example function, we use a sum over some sine functions, and you can see the function here.
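This interpolation formula can be sketched directly. The exact sine combination used in the lecture's figure is not specified in the transcript, so the function below is an assumed stand-in, interpolated on the hard-coded N = 4 GLL points (as would be taken from a table):

```python
import numpy as np

def lagrange(i, x, xi):
    # Product over j != i of (x - xi_j) / (xi_i - xi_j)
    num, den = 1.0, 1.0
    for j in range(len(xi)):
        if j != i:
            num *= x - xi[j]
            den *= xi[i] - xi[j]
    return num / den

# Assumed example function: a sum of two sines
f = lambda x: np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)

N = 4
xi = np.array([-1.0, -np.sqrt(3 / 7), 0.0, np.sqrt(3 / 7), 1.0])  # GLL, N = 4
x = np.linspace(-1, 1, 201)

# u_bar(x) = sum over i of u(xi_i) * l_i(x)
u_bar = sum(f(xi[i]) * np.array([lagrange(i, xk, xi) for xk in x])
            for i in range(N + 1))
print(np.max(np.abs(u_bar - f(x))))   # maximum interpolation error
```

By construction the interpolant matches f exactly at the collocation points; in between, the error depends on the order, as the next part of the lecture discusses.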
Now let's look at how well we approximate that function as a function of the order N of our polynomials. We start with N equals two, and you see it is actually not a very good approximation of the original function f. If we increase the order, we approximate that function better and better, until basically we see no difference, at least visually. Of course, in the end it will depend on the specific form of the function up to what order we have to go to get an overall accurate solution. So, this is how the Lagrange interpolation works. What we need to do now is to find an integration scheme that we can use to calculate the many integrals we have to solve to get our mass and stiffness matrices.
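The visual convergence described here can be reproduced numerically. The sketch below (again with an assumed test function, since the lecture's exact sine combination is not given) tabulates the maximum interpolation error as the order grows:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_points(N):
    # End points plus the roots of the derivative of the degree-N Legendre polynomial
    c = np.zeros(N + 1)
    c[N] = 1.0
    interior = np.sort(legendre.legroots(legendre.legder(c)))
    return np.concatenate(([-1.0], interior, [1.0]))

def lagrange(i, x, xi):
    # Product over j != i of (x - xi_j) / (xi_i - xi_j)
    num, den = 1.0, 1.0
    for j in range(len(xi)):
        if j != i:
            num *= x - xi[j]
            den *= xi[i] - xi[j]
    return num / den

f = lambda x: np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)  # assumed example
x = np.linspace(-1, 1, 401)

errs = {}
for N in (2, 4, 8, 16):
    xi = gll_points(N)
    u_bar = sum(f(xi[i]) * np.array([lagrange(i, xk, xi) for xk in x])
                for i in range(N + 1))
    errs[N] = np.max(np.abs(u_bar - f(x)))
    print(N, errs[N])    # the error shrinks rapidly with the order
```

Once the number of points resolves the highest oscillation in f, the error drops quickly, matching the observation that at some order the interpolant and the function become visually indistinguishable.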