Hello, welcome to the course on Audio Signal Processing for Music Applications. We are in the last week of the course, and in this week we are touching on several small topics with the idea of wrapping up: giving you some views we have not been able to cover, highlighting specific topics, and identifying future directions for the kinds of things we have been doing, or topics you might be interested in exploring after taking this course. In this particular lecture I want to introduce you to my research group, the Music Technology Group at the Universitat Pompeu Fabra, so that you know us better and, especially, so that you know the type of research we do, mainly the research that relates to what we have been talking about these last few weeks.

The Universitat Pompeu Fabra is in Barcelona, in the northeast of Spain, and within the city the university has three campuses. We are on the Communication campus, as part of the engineering department but together with the departments of journalism, audiovisual communication, and linguistics, departments that approach the topic of communication in an interdisciplinary manner. Our group, however, is very much on the engineering side, in what we call the Department of Information and Communication Technologies. Our teaching responsibilities therefore relate to computer science, electrical engineering, and even biomedical engineering, and we teach courses related to music and audio, as well as more generic topics like programming or machine learning, within these undergraduate programs.

At the graduate level we have more specific programs. We have the Master in Sound and Music Computing, which is very much within the scope of our research group and has been running for a few years now. This Coursera course is in fact one of the courses of the master, even though in the master we can complement the online teaching with other types of activities that, I believe, help one better understand some of the things we have been talking about. As I guess you have figured out, online teaching is wonderful, but it is also good to have some human contact from time to time, in a classroom or in a personal situation. So, of course, you are most welcome to apply to this master, in which, apart from this signal processing type of course, we offer courses and activities covering the other areas of music technology that we work on at the Music Technology Group, which I will mention shortly. Then there is the PhD program, which is the department's PhD program and, like all PhD programs, focuses on a thesis carried out under the supervision of a faculty member. At the MTG we have many PhD students working with our faculty on a variety of the topics we cover; again, that is what I will mention next.

So let me tell you a little bit about the research, which is the core of what we do. We can classify it into four areas. One is audio signal processing, and you should know about that by now. Another we call sound and music description; we hinted at it last week, and I will talk a little more about it now. Then we have an area we call musical and advanced interaction, which relates to developing interfaces for music applications. And finally there is one that we also touched on this week, which relates to semantic technologies: all the technologies that sit at a higher level than audio processing, that deal largely with text-related information, and that allow us to make sense of sound- and music-related data, complementing the audio processing approach quite well. Let me go through each of these topics and give you some examples of what we do.

Let's start with audio signal processing. One of the first research lines we started, and still a very active one, is singing voice synthesis. In fact, we began a collaboration with Yamaha quite a long time ago, and one of the things we have been developing is what is now known as Vocaloid, a singing voice synthesizer. People in Japan may know the Hatsune Miku character very well; Hatsune Miku is one of the virtual singers that Vocaloid has, and if you do not know about it you might be interested in learning more. Just search for Vocaloid or Hatsune Miku on YouTube and you will see the kinds of things Hatsune Miku has been doing. The technology behind the singing voice synthesizer is very much related to what we have been doing in class: it is a spectral model of the voice in which we analyze the harmonics, subtract them, and obtain a model similar to the harmonic plus stochastic decomposition we have studied.
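To make that concrete, here is a minimal sketch of a harmonic plus stochastic analysis and resynthesis using the sms-tools package from this course. I am assuming the software/models directory of your sms-tools checkout is on the Python path; the input file name and the parameter values are only illustrative choices, not a recipe.

```python
import sys
sys.path.append('sms-tools/software/models')  # adjust to your sms-tools checkout

from scipy.signal import get_window
import utilFunctions as UF   # sms-tools I/O helpers
import hpsModel as HPS       # harmonic plus stochastic model

# Load a monophonic singing-voice recording (hypothetical file name)
fs, x = UF.wavread('singing-voice.wav')

w = get_window('blackman', 601)   # odd-size analysis window
N = 1024                  # FFT size
t = -100                  # magnitude threshold (dB) for peak detection
nH = 100                  # maximum number of harmonics
minf0, maxf0 = 100, 600   # f0 search range (Hz)
f0et = 5                  # error threshold for the f0 detection
harmDevSlope = 0.01       # allowed deviation of harmonics from multiples of f0
minSineDur = 0.1          # minimum duration of harmonic tracks (s)
Ns = 512                  # synthesis FFT size
H = Ns // 4               # hop size
stocf = 0.1               # decimation factor of the stochastic envelope

# Analyze: harmonic tracks plus a stochastic residual envelope
hfreq, hmag, hphase, stocEnv = HPS.hpsModelAnal(
    x, fs, w, N, t, nH, minf0, maxf0, f0et,
    harmDevSlope, minSineDur, Ns, stocf)

# Resynthesize the full sound and the two components separately
y, yh, yst = HPS.hpsModelSynth(hfreq, hmag, hphase, stocEnv, Ns, H, fs)
UF.wavwrite(yh, fs, 'voice-harmonic.wav')
UF.wavwrite(yst, fs, 'voice-stochastic.wav')
```

Once the voice is represented this way, many transformations become simple manipulations of the analysis data; transposing the pitch, for example, is essentially a matter of scaling hfreq before resynthesis.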
Another topic within audio signal processing, related to that one, is transforming the voice, again mainly the singing voice: developing real-time systems that allow you to change your own voice, as you sing, into another character. We have built several installations of this kind, for example in museums, as well as plug-ins that perform these transformations, again with techniques very similar to the ones we have used in class.

A more recent activity we have started working on is sound source separation, and we have hinted at some of these issues in the course too. Sound source separation is a quite well-defined problem in itself, and it has developed its own set of methodologies, ones that deviate a little from what we have been doing but are a clear continuation of this course. As the name says, the idea is to separate the sources within a polyphonic signal and to make the individual sources sound as good as possible.
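To give a flavor of this family of methods, here is one common textbook approach, not necessarily the one we use at the MTG: factorize the magnitude spectrogram with non-negative matrix factorization (NMF) and recover each component with a soft mask. This is a minimal NumPy/SciPy sketch; the file name and the number of components are arbitrary assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

fs, x = wavfile.read('mixture.wav')   # hypothetical polyphonic mixture
x = x.astype(np.float64)
if x.ndim > 1:                        # fold stereo down to mono
    x = x.mean(axis=1)

_, _, Z = stft(x, fs, nperseg=2048)   # complex spectrogram
V = np.abs(Z)                         # magnitudes, the matrix we factorize

K, n_iter, eps = 4, 200, 1e-9         # components, iterations, safeguard
rng = np.random.default_rng(0)
W = rng.random((V.shape[0], K))       # spectral templates
H = rng.random((K, V.shape[1]))       # time-varying activations

# Multiplicative updates minimizing the Euclidean distance ||V - WH||
for _ in range(n_iter):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# Wiener-style soft mask per component, applied to the complex spectrogram
WH = W @ H + eps
for k in range(K):
    mask = np.outer(W[:, k], H[k]) / WH
    _, yk = istft(mask * Z, fs, nperseg=2048)
    wavfile.write(f'component_{k}.wav', fs, yk.astype(np.float32))
```

A real system still has to decide which components belong to which musical source, for instance by clustering the spectral templates or by learning them beforehand from isolated instruments, and that is where much of the research effort goes.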
Okay, now let's go to the second area into which we classify our research: sound and music description. This is something we talked about last week, the idea of extracting features from a sound that can be used to describe the sound or a piece of music. Essentia is a library that we have been developing and maintaining and are constantly extending, into which we incorporate the new research results we obtain at this level, the level of computing features that are useful for description. Some of these low-level features can be visualized, for example, in Sonic Visualiser, and they can be quite useful for a number of tasks.

In the same area of sound and music description, and related to what we said about different levels of abstraction, the level that starts to be musically interesting is when we can extract features that have some musical significance: things like the harmony or the chords of a piece of music; features that allow us to segment recordings so that we can describe the structure of a piece; or features that characterize melodies, so that we can extract the predominant pitch of a song and then identify notes or melodic landmarks that can be of use. Another example is rhythm-related features. We are interested in finding ways to describe the different levels of rhythmic structure: you can start from the onsets, but you can go higher up, since rhythm is a multilayer kind of concept, and define patterns or structuring units that relate to rhythm. These are very active topics that we keep working on; the sketch below shows what computing a few such descriptors looks like.
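Here is a minimal example using Essentia's Python bindings, covering a low-level descriptor (MFCCs), a melodic one (the predominant pitch), and a rhythmic one (tempo and beats). The file name is a placeholder and the parameter values are the typical ones from the library's documentation, not a recommendation.

```python
import essentia.standard as es

# Load a recording as a mono signal (hypothetical file name)
audio = es.MonoLoader(filename='recording.wav')()

# Low-level description: MFCCs computed frame by frame
w = es.Windowing(type='hann')
spectrum = es.Spectrum()
mfcc = es.MFCC()
mfccs = []
for frame in es.FrameGenerator(audio, frameSize=1024, hopSize=512,
                               startFromZero=True):
    bands, coeffs = mfcc(spectrum(w(frame)))
    mfccs.append(coeffs)

# Melodic description: predominant pitch contour of the mixture
pitch, pitch_conf = es.PredominantPitchMelodia(frameSize=2048,
                                               hopSize=128)(audio)

# Rhythmic description: tempo and beat positions
bpm, beats, beats_conf, _, intervals = \
    es.RhythmExtractor2013(method='multifeature')(audio)

print(f'{bpm:.1f} BPM, {len(beats)} beats, {len(mfccs)} MFCC frames')
```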
Another related topic is moving towards music collections: not just describing individual sounds, but describing collections of sounds or of music recordings. An example is a project that I am leading called CompMusic, in which we are studying several music collections from different music traditions around the world, five traditions in fact: the two Indian classical traditions, Carnatic and Hindustani; one tradition from China, the Beijing opera; one from Turkey, what can be called the Turkish makam tradition; and one from the northern part of Africa, the Maghreb, which is Arab-Andalusian music. These are classical traditions that are very interesting and have specific peculiarities that require particular kinds of analysis, and that is what we are interested in: finding ways to describe the melodic, rhythmic, or semantic aspects of a particular repertoire, of a particular music tradition, in ways that make sense for that tradition. Out of that, we put everything together into a discovery type of front end, which we call Dunya and which I will talk about in a demonstration class; it can be used to navigate through these different music collections. In this case we start both from audio signals and from information related to the audio: all the editorial metadata, plus information about the music that we can extract, for example, from places like Wikipedia. So it is a quite complex project in which we do different types of analysis: it uses signal processing, it uses machine learning, and it uses semantic analysis to make sense of the other types of data involved.

Let's go now to the next area I mentioned, the one that does not relate as much to this course: musical and advanced interfaces. We have a team within the MTG that specializes in developing interfaces and new ways of interacting with music. The Reactable, a quite well-known outcome of that group, has been very popular. It is a table on which a musician can interact with different physical objects and, through them, make music. It is basically a synthesizer, an interactive synthesizer with visual feedback, which is quite interesting and offers a lot of new possibilities for making music.

The last group of topics I mentioned was semantic technologies for sound and music. This is the area in which, apart from the audio content, we are interested in the text-based information, the metadata, that is associated with the audio and that can be treated by itself and processed in different ways. For example, Freesound, the website that you all know, is a great platform for this kind of semantic work: all the work needed to organize tags and to recommend tags when you upload a sound, or the problems of searching for and recommending sounds. Some of this can be done from the audio, and we do it from the audio, but it can also be done from the text, and that requires semantic technologies. So Freesound is a great platform on which to do this semantic analysis research.

Finally, the last project I want to mention is AcousticBrainz, a very new initiative that we started together with MusicBrainz. Even though it involves audio-related work, it allows us to explore semantic analysis and semantic technologies and to develop very interesting research in these new areas. MusicBrainz is a great environment that holds structured information about music, and we will talk about it in a demonstration class this week. We start from the fact that in MusicBrainz every single song and every single artist has an identifier, and we attach to those identifiers the analyses of particular audio recordings that people run and upload to the AcousticBrainz server. In this way we have a huge collection of analyzed music. We do not host the music itself, because of copyright issues; people just upload the analyses of their recordings, computed with an extractor. We will talk about that in the demonstration class too, so you will learn more about it there. Anyway, this is a very new and interesting project that I believe has a lot of future for exploring this combination of music description from audio content with semantic analysis, and for developing new ideas and even new tools for music.

Here are some references to the things I have mentioned. You can go to the websites of the university, of the department, or of the MTG, where you will find much more information about what I have been talking about, and I have added some links that you might be interested in looking at: for example, the Wikipedia entry for Vocaloid, the website of the CompMusic project, where we are studying all these world music traditions, the Wikipedia entry for the Reactable instrument, and the website of the new AcousticBrainz project. And I guess you can find these slides with the sms-tools materials too.

Okay, so I have given you a very brief overview of the kind of research we do at the MTG. We do more, so if you want to learn more you can look at our website and see all the kinds of things we do; we try to be very active at explaining our work, so you will find quite a lot of links and videos about it. I hope you enjoyed it. Thank you very much. See you next lecture, bye-bye.