Hello. Welcome back to the course on Audio Signal Processing for Music Applications. We are in the last week of the course, and we are just touching on some small topics to basically wrap up the course and give you some relevant complementary topics that I believe you should be interested in. So, for example, in this lecture I want to review a few things from every week and highlight some of the core topics that we have been going over, so that you can see our perspective on the things that are relevant. So we will basically go through every week. We'll start by describing the general framework of the course and what we consider the basic ideas that we cover. And then we'll go through the different topics, like the first one on sound spectra, basically on the DFT and the STFT. Then we will talk about sinusoids and harmonics. Then we will go over the residual and stochastic components, and the idea of modeling sounds with sinusoidal plus residual components. And then we'll talk about the two applications that we covered: one on transforming sounds, and the other one, from last week, on describing sounds and music in collections. And then, in the previous lecture, we made a short attempt at looking beyond this course and seeing what other topics could be continuations of it. So let's go through everything. First, the idea of the spectrogram: this is a view of the spectrogram of a piano sound that we have seen in the class, and it captures a little bit of the essence of what we are doing. We are basically starting from this spectral representation of a sound. The basic representation of a sound is the time-domain waveform, but for us that's really not that useful, so the first thing we do is go to the frequency domain, to the spectrum.
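[Editor's note: as a concrete illustration of the spectrogram idea described above, here is a minimal sketch of computing a magnitude spectrogram via a short-time Fourier transform in Python. This is not from the lecture or from sms-tools; the function name and the window, FFT size, and hop size are illustrative choices.]

```python
import numpy as np

def stft(x, N=1024, H=256):
    """Short-time Fourier transform: a sequence of Hann-windowed DFT frames.

    x: real input signal; N: window/FFT size; H: hop size.
    Returns a complex array of shape (num_frames, N//2 + 1).
    """
    w = np.hanning(N)
    frames = [np.fft.rfft(w * x[l:l + N])
              for l in range(0, len(x) - N + 1, H)]
    return np.array(frames)

# the spectrogram is the magnitude of the STFT, usually shown in dB
fs = 44100
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 440 * t)          # a 440 Hz test tone
spectrogram_db = 20 * np.log10(np.abs(stft(x)) + 1e-12)
```

Each row of `spectrogram_db` is one spectral frame; stacking them over time gives the time-varying frequency view that the piano spectrogram in the slide shows.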
And this has some perceptual motivation, in the sense that our hearing process does some of that: we are basically doing some of these analyses ourselves in order to better understand what a sound or a piece of music is. So there is some perceptual motivation behind the use of the frequency domain. The spectrum of a sound is a very basic view of a sound, from which we can do a lot of things. And the kinds of things we have been doing can be captured with this diagram. We have been starting from the input signal, x of n, and all the analyses we have done share quite a bit of this structure. Basically, the idea is to start with the Fast Fourier Transform, the FFT, and then obtain the peaks from the spectrum. Out of that we can obtain the partials, the harmonics of a sound. And these can be subtracted from the original signal to obtain the residual. Okay, so this is the basic analysis that a lot of the things we have been doing go through. And out of that we can obtain interesting features. So we can do feature analysis to obtain the fundamental frequency, and we can obtain some ways of describing sounds, but we can also transform these features. We can transform the sinusoids, we can transform the residual, and obtain a new sound, a synthesized sound, that can be a modified version of the input sound. Or, if we don't make any transformations, ideally it should be very similar to the input sound. So this is the basic framework within which we have elaborated all our analysis, description, and synthesis techniques. So let's now go through some individual aspects of this framework. The first one was the idea of the spectrum of a sound, the idea that we start from a sound and obtain a spectrum, and we had two variants. We had the single-frame version, which is what we call the Discrete Fourier Transform, in which we just analyze a fragment of a sound, or a sound that is very short, and obtain a single spectrum.
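[Editor's note: the FFT-then-peaks step of the diagram above can be sketched as follows. This is a simplified illustration, not the sms-tools implementation; the Hann window and the threshold value are assumptions.]

```python
import numpy as np

def spectral_peaks(x, fs, threshold_db=-60.0):
    """Return (frequencies in Hz, magnitudes in dB) of local maxima
    in the magnitude spectrum of one Hann-windowed frame."""
    N = len(x)
    X = np.fft.rfft(x * np.hanning(N))
    mag_db = 20 * np.log10(np.abs(X) + 1e-12)
    # a bin is a peak if it exceeds the threshold and both neighbours
    peaks = [k for k in range(1, len(mag_db) - 1)
             if mag_db[k] > threshold_db
             and mag_db[k] > mag_db[k - 1]
             and mag_db[k] > mag_db[k + 1]]
    return [k * fs / N for k in peaks], [mag_db[k] for k in peaks]

fs = 44100
t = np.arange(2048) / fs
freqs, mags = spectral_peaks(0.8 * np.sin(2 * np.pi * 440 * t), fs)
# one detected peak falls within one bin (fs/N, about 21.5 Hz) of 440 Hz
```

In a full analysis the peak locations would then be refined below bin resolution (for example with parabolic interpolation); this sketch stops at the bin level.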
And then we went over the time-variant version of that, which is the short-time Fourier transform, so that instead of having a single spectrum we have a sequence of spectra, and that's the X sub l of k, which is this idea of a time-varying frequency representation of a sound. And that was the first model that was useful for us, the first analysis-synthesis model that could capture any sound. In fact, this was an identity system, therefore we could analyze and synthesize any sound. Then on top of that we built the idea of sinusoidal and harmonic models: the idea that we could capture the parts of a sound that have a sinusoidal nature. Strictly speaking the DFT is also a sinusoidal model, but this sinusoidal model is a little bit different. This is the idea of stable sinusoids, of sinusoids within a sound that have some stability, some coherence, and that really represent something meaningful from the acoustics of the sound. That was the sinusoidal model. And the harmonic model was a step beyond that, in the sense that there is quite a large family of sounds in which these sinusoids have a harmonic relationship. So harmonic sounds have a series of harmonics, which are multiples of a fundamental frequency. We can use that restriction: if we have sounds that behave that way, then the analysis can be done in a more constrained way and we can obtain a much more powerful representation. The idea of the harmonic model allows us to represent a sound in a very compact way and at the same time have a lot of potential for describing and transforming the sound. Then we saw that these sinusoidal or harmonic components do not capture everything in the sound. There is a part of the sound that is left out, and this is the residual, or the stochastic component.
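[Editor's note: to illustrate the harmonic restriction mentioned above, the synthesis side of a harmonic model reduces to summing sinusoids at integer multiples of the fundamental. This is a minimal sketch, not course code; the fundamental and the amplitude values are made up.]

```python
import numpy as np

def harmonic_synth(f0, amps, fs, dur):
    """Additive synthesis: one sinusoid per harmonic, at h * f0."""
    t = np.arange(int(fs * dur)) / fs
    y = np.zeros_like(t)
    for h, a in enumerate(amps, start=1):   # h = 1, 2, 3, ...
        y += a * np.sin(2 * np.pi * h * f0 * t)
    return y

# a 220 Hz tone with four harmonics of decreasing amplitude
y = harmonic_synth(220.0, [1.0, 0.5, 0.25, 0.125], fs=44100, dur=0.5)
```

The compactness the lecture mentions is visible here: half a second of sound (22050 samples) is described by a fundamental frequency and four amplitudes.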
So once we have analyzed the sinusoids or the harmonics of a sound, we can actually subtract them from the original sound and obtain the residual, which is what is left, and it is sometimes quite relevant. Sometimes it's a very small part of the sound that can be discarded, so it's not perceptually relevant, but in many cases this is an important part of the sound that needs to be preserved, that needs to be captured. And we can just keep it as it is, and that will be the residual component, or we can model it with the stochastic model, with the idea of filtered white noise. So in the bottom representation, the residual is approximated with the idea of a time-varying filter through which we put white noise. So we have this complete model of sinusoidal plus stochastic components, and that captures many sounds. Not all sounds are properly modeled this way, but quite a large family of sounds: either sinusoidal plus stochastic or harmonic plus stochastic can be used to model many sounds. And that yields a lot of potential for capturing the essence of a sound or being able to modify it. And that is what brought us to the idea of transforming sounds. When we have these types of representations, when we have these harmonics or these sinusoids, with their frequencies, their amplitudes, and their phases, they can be processed, they can be manipulated quite a lot. We can change their values quite a bit, and the stochastic component too, and do things like time stretching, or shifting the frequencies, or making arbitrary changes. In fact, in the class we went over some common transformations, but there are many more that we could do that go beyond what we covered. So apart from the transformation of sounds, we also talked about describing sounds. This is a huge field; in fact, we only talked about a small part of it. The concept of describing sounds covers a wide variety of abstractions.
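[Editor's note: the filtered-white-noise idea above can be sketched in the frequency domain: keep only a coarse magnitude envelope of the residual spectrum and replace the phases with random ones. This is an illustrative sketch, not the sms-tools code; the envelope values and frame size are arbitrary.]

```python
import numpy as np

def stochastic_synth(env_db, N, seed=0):
    """Synthesize one frame of 'filtered white noise': a prescribed
    magnitude envelope (in dB) combined with uniformly random phases."""
    rng = np.random.default_rng(seed)
    # interpolate the coarse envelope to the N//2 + 1 spectral bins
    mag = 10 ** (np.interp(np.linspace(0, 1, N // 2 + 1),
                           np.linspace(0, 1, len(env_db)),
                           np.asarray(env_db, float)) / 20)
    phase = rng.uniform(0, 2 * np.pi, N // 2 + 1)
    return np.fft.irfft(mag * np.exp(1j * phase), N)

# one noise frame whose spectrum falls from -20 dB to -80 dB across the band
y = stochastic_synth([-20.0, -40.0, -80.0], N=1024)
```

Because only the envelope is kept, the residual is stored very compactly, and transformations such as time stretching only need to interpolate envelopes between frames.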
And we used this diagram to show the different levels of abstraction in the description of sounds. We stayed very much at the low levels of description, the physical, sensorial, and some perceptual types of descriptors that are useful to describe sounds and music signals. But this is a very interesting area of application of these spectral analyses, toward the much broader field of describing sounds and music, where we deal not just with collections of frames but with complete pieces of music and complete music collections, and the types of problems are quite different from what we had treated before. But anyway, that was a good introduction to describing sounds and music using the techniques we had been talking about in the previous weeks. And then finally, in the last lecture, we just kind of opened up a door, saying, okay, what's beyond that? And beyond that there is a lot. There is a lot within the audio signal processing field. So audio signal processing is much more than what we have been looking at, and you can explore many other methodologies to analyze, describe, and transform sounds. And even more than that: sounds and music, which are our target, the kind of information we are trying to understand, are much more than audio. So we just hinted at the idea that we can use many other sources of information to analyze and describe sounds and music. That's an area that has been explored in the last few years, and it opens up very interesting new application and development areas. And that's all; all the slides are in the SMS Tools. Hopefully this was a useful brief summary of some highlights of the lectures that we went through in the course, and maybe it helps you understand the overall view of the course and how we see the coherence of the topics that we covered. So, thank you very much, and I will see you in the next lecture. Bye bye!