At the very beginning of this week, we started with interpolating and approximating functions, and we said there are many different choices for the basis functions. There is one specific set, the Chebyshev polynomials, that is very interesting also in connection with other methods we will encounter later. They are based on the trigonometric relations for cos(nφ) that you see here, and these can be used to define the so-called Chebyshev polynomials. The trigonometric relations give us polynomials in cos φ, and that is exactly how the Chebyshev polynomials are defined: by replacing cos φ with x. You see the definitions here. The first polynomial is a constant, one, and the others are polynomials in x. Let's look at this graphically. Here you can see some of the first Chebyshev polynomials. They are oscillating functions defined on the interval between minus one and one. That is a major difference from what we had before with cosine and sine series, where the functions were defined between minus and plus infinity and were periodic. So the key point is that Chebyshev polynomials allow us to approximate functions defined on a limited domain. That is very important, for example, for implementing boundary conditions. We never talked about this in the context of the Fourier pseudospectral method, but because of the periodicity it is actually very hard to properly implement boundary conditions, such as a free-surface boundary condition, a stress-free condition, and so forth. The Chebyshev method helps us here. So, how can we use these Chebyshev polynomials to approximate functions? Let's start with a continuous description: again, we want to approximate a function f(x) by g_N(x), where g_N(x) this time is a sum over Chebyshev polynomials weighted with coefficients c_k that we don't know yet.
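As a rough illustration of this kind of approximation, here is a minimal sketch using NumPy's Chebyshev module. The test function f and the expansion order N are my own illustrative choices, not taken from the lecture:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate f(x) by g_N(x) = sum_k c_k T_k(x) on [-1, 1].
# f and N are illustrative assumptions for this sketch.
f = lambda x: np.exp(-x**2)
N = 8
x = np.linspace(-1.0, 1.0, 200)

# chebfit returns the coefficients c_k of a least-squares Chebyshev fit
c = C.chebfit(x, f(x), N)
# chebval evaluates the weighted sum of Chebyshev polynomials
g = C.chebval(x, c)

err = np.max(np.abs(f(x) - g))
print(err)  # the smooth function is approximated very accurately
```

For a smooth function like this, the coefficients decay rapidly and even a low order gives a very accurate approximation.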
As was the case for the Fourier method, we want to do this on the computer, so we have to find a discrete set of points, and again we would like to have the exact interpolation property. Looking again at the Chebyshev polynomials, the extrema of these polynomials, where the spatial derivative is zero, have a very important significance. They are called the Chebyshev collocation points, and as the name collocation already says, these are the points where we can exactly interpolate an arbitrary function. So the points x_k = cos(kπ/N) are the Chebyshev collocation points. I want to show you these points as a function of the order of the polynomials; that is given in this graph. Now, something very interesting happens: as the polynomial order increases, the difference between the smallest and the largest distance between neighboring points also increases. So we no longer have regular grid points, as was the case for the discrete Fourier series; here the grid spacing is uneven. This will have important consequences when we do time-dependent calculations because, if you remember the CFL criterion, the time step depends on the smallest grid distance. So we expect some kind of problem here, but we'll talk about that later. Let's just make an example. Take a function like x³ in the domain between minus one and one and approximate it by Chebyshev polynomials. You see this here, and it is perfect: you don't see a difference. It is a very good approximation, and, of course, it exactly interpolates at the Chebyshev collocation points. But what happens if we have a discontinuous function like a Heaviside function? You can see this on the right-hand side: we again get the Gibbs phenomenon, just as we saw when approximating such a function with sine and cosine series.
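The collocation points and the uneven spacing can be sketched in a few lines; the order N below is an illustrative choice:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev collocation points x_k = cos(k*pi/N), k = 0..N
N = 16
k = np.arange(N + 1)
xk = np.cos(k * np.pi / N)

# The spacing is uneven: dense near the boundaries, coarse in the middle.
dx = np.abs(np.diff(xk))
print(dx.min(), dx.max())  # smallest spacing shrinks much faster than the largest

# Interpolating x**3 through the N+1 collocation points with a
# degree-N Chebyshev polynomial reproduces it (to rounding error).
c = C.chebfit(xk, xk**3, N)
x = np.linspace(-1.0, 1.0, 101)
interp_err = np.max(np.abs(C.chebval(x, c) - x**3))
print(interp_err)
```

The shrinking minimum spacing near the boundaries is exactly what will later constrain the time step through the CFL criterion.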
Now, what about the cardinal functions? We can take the same approach as before: approximate a function that is defined to be one at one of the collocation points and zero everywhere else. In this graph, two examples are shown, and it works equally well in this situation. So, again, we can define these so-called cardinal functions and approximate arbitrary functions by simply weighting the cardinal functions with the function values at the individual points, a very powerful technique. Again, we would like to calculate derivatives. Here we encounter yet another mathematical way of calculating the derivative of a vector: a matrix-vector multiplication. The matrix contains the difference operator, the vector contains the function values, and the result is a vector of the same length as the original one, containing the first derivative of the function. Here, without derivation, is the definition of the Chebyshev derivative operator. It is easily implemented in a computer code; you will see that later in the Jupyter Notebooks. But let's compare what these differentiation matrices look like for the various approaches: a finite-difference matrix, a Fourier difference matrix, and a Chebyshev difference matrix, shown graphically here. A major difference is that the finite-difference matrix is banded, while the Fourier and Chebyshev difference matrices are full; they have non-zero values in all of their elements. Now, let's make an example. We again define a function whose analytical derivative is easy to calculate, here a sum over some sine functions, and calculate the numerical derivative of this function using the Chebyshev matrix-vector multiplication. Here you see the results.
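A common explicit form of this differentiation matrix (the one popularized in Trefethen's "Spectral Methods in MATLAB") can be sketched as follows. This is my own sketch, not the code from the course notebooks:

```python
import numpy as np

def cheb_D(N):
    """Chebyshev differentiation matrix on x_k = cos(k*pi/N), k = 0..N."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    # boundary weights c_0 = c_N = 2, interior c_k = 1, with alternating signs
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T          # X[i, j] = x_i
    dX = X - X.T                          # x_i - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))           # diagonal from row sums ("negative sum trick")
    return D, x

# The matrix differentiates polynomials up to degree N exactly:
# d/dx x**3 = 3x**2, so D @ x**3 should match 3x**2 to rounding error.
D, x = cheb_D(16)
err = np.max(np.abs(D @ x**3 - 3 * x**2))
print(err)
```

Note that D is a full (dense) matrix, in contrast to the banded finite-difference operator, which is why the matrix-vector multiplication costs O(N²) per derivative.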
As was the case for the previous pseudospectral approach using Fourier series, we obtain an exact derivative at the Chebyshev collocation points. The error, which you can see here multiplied by 10¹¹, is extremely small; these are only rounding errors coming from the computer. So, again, we have a very powerful method for calculating derivatives, an essential ingredient for partial differential equations, and we are going to apply it again, this time to the one-dimensional elastic wave equation.
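A quick way to check this spectral accuracy yourself is to differentiate a sum of sines, echoing the lecture's example. Here I use NumPy's coefficient-based Chebyshev derivative rather than the matrix-vector form; both are equivalent in exact arithmetic, and the function and order are my own illustrative choices:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Smooth test function with a known analytical derivative
f  = lambda x: np.sin(np.pi * x) + 0.5 * np.sin(2 * np.pi * x)
df = lambda x: np.pi * np.cos(np.pi * x) + np.pi * np.cos(2 * np.pi * x)

N = 40
xk = np.cos(np.arange(N + 1) * np.pi / N)  # collocation points
c = C.chebfit(xk, f(xk), N)                # interpolate at the collocation points
dnum = C.chebval(xk, C.chebder(c))         # differentiate the Chebyshev expansion

deriv_err = np.max(np.abs(dnum - df(xk)))
print(deriv_err)  # down at the level of rounding errors
```

For smooth functions the error decays faster than any power of N, which is the "spectral accuracy" that makes this method so attractive for wave propagation.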