Welcome to this lecture on advanced techniques. We are entering the last stages of this EdX MOOC. I'm going to miss it, but I wanted to talk about advanced techniques. So far we've mostly been playing with standard devices, standard SDKs, everything standard, and now I want to introduce you to a few techniques that I think are important. In reality, in a lot of my work I actually build a lot of my own stuff: custom displays, and custom controllers when necessary, and I wanted to share some of the more advanced techniques here. We're going to go beyond the standard tools, and I'll bring in some of the devices I have here on the left as examples of how to make things more advanced. Let's get started.

Here is my overview of the advanced techniques. First, a block on topics that I think are mostly relevant for VR: procedural generation, redirected walking, and custom controllers all make a lot of sense for VR. Then we'll do a bit of more advanced AR: 3D reconstruction, object tracking, and custom displays are what I want to talk about there. Finally, I have a list of topics that are really cross-cutting, relevant to both VR and AR. And remember, this is a lecture on advanced techniques. "Michael, why did you put accessibility under advanced techniques?" Well, I wish I didn't have to, but the problem is that accessibility is really difficult to do at the moment. We address the topic and mention it throughout the MOOC, but we have very limited means to make XR truly accessible, simply because the tools, including Unity and Unreal, are not really accessible themselves. The displays are an issue as well: VR is predominantly visual, and AR, while there are a lot of opportunities in research, is not really designed with accessibility in mind in most of the AR devices I know of at this stage.

In the cross-cutting block we're going to talk about text input: how do you actually get things typed in AR and VR? Collaboration, so multi-user experiences, is something I find very interesting; I'll expand on it a little in my XR research lecture that follows this one. We're going to talk about adaptive layout and customization: giving more control to the users, or having the system be smarter about adapting the layout of the scene so that it fits the user's physical environment. That's a key topic and not very well addressed in current implementations. Progressive XR is the idea of building adaptivity into the interfaces themselves, so that an interface can adapt to AR or to VR depending on the task, the user's preference, whatever the user feels like; context could also be a driving factor. We're going to talk about that too. Finally, mixed reality capture and virtual production. This is a topic I'm especially interested in at the moment in my research, and it's something you may be aware of from filmmaking: filmmaking is changing radically by bringing AR/VR techniques into the whole production process, and I think that's why it's very interesting. It's relevant to us because I've been demoing a lot throughout the MOOC, and I wish I had more of these tools available. You've already seen a few examples of mixed reality capture and virtual production, and I'm going to expand on that. That's my brief overview. Now we're going to jump right in; this lecture is really built around these concepts.
Whenever I have a really cool demo or video, I'll plug it in. Otherwise, we're going to spend some time on slides that I spent a lot of time designing to really get these concepts across.

The first concept is procedural generation. You are in the physical world, this is your room, you're looking this way, and in this view you see the virtual content: the virtual reality here in blue, while the gray parts are the physical world. To connect the two, down here is a top-down copy of the same scene. What do you see? This is the top-down view of your headset: if you walk this way in the room, you move that way in the top-down view. Now that we've established that, let's say you're going to interact with this world, look around a little, and come closer. I'll let you walk in a second, but let's say you get to here. The idea of procedural generation is that we load more and more virtual content as you move around or spend more time in the scene. It could be simple lazy loading, but the whole idea is that as the user explores more of the virtual world by moving through the physical world, we can keep adding to it. That's going to be interesting. We could, for example, generate a maze here and have you walk through it.

That brings me to the next concept, redirected walking. Redirected walking starts with this impulse. Did you notice? I'm going to go back and give you the impulse again. That impulse will most likely catch your attention: it's a visual cue, and it will make you adjust your gaze. And people walk where they look, just like you drive where you look (so be careful when you're driving and not looking). It's the same here: we give a small nudge and the user's gaze follows. Now, while you're focused on this, we actually move the table. Did you notice? Just a little bit; I'm clearly exaggerating here, and in practice the shifts would be smaller. This shift is really important. It catches the user's attention, adjusts their gaze, and they walk toward the object. Did you see? Rather than going straight, you actually walked something much closer to a circle. While you feel like, "Wow, I must be at the end of the physical room by now," you actually moved along a curve, and that gives us more virtual space to cover within the same physical space. That is redirected walking. There is actually research showing that people can feel like they are walking along a straight corridor when what they are really doing is walking in a circle, which is just very interesting: a few perceptual tricks here and there.
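To make the procedural generation idea a bit more concrete, here is a minimal sketch in A-Frame, the web framework we've been using in this MOOC. The component name procedural-cells and the spawnCell helper are made up for this example; the point is simply that content only gets generated for regions of the space the user actually reaches.

```js
// Minimal sketch of lazy, chunk-based procedural generation (hypothetical names).
// Attach to the scene; when the camera enters a floor-grid cell for the first
// time, we generate content for that cell.
AFRAME.registerComponent('procedural-cells', {
  schema: { cellSize: { type: 'number', default: 3 } },   // cell size in meters

  init: function () {
    this.visited = new Set();              // cells we have already populated
    this.camPos = new THREE.Vector3();
  },

  tick: function () {
    const cam = this.el.sceneEl.camera;
    if (!cam) { return; }
    cam.getWorldPosition(this.camPos);

    // Quantize the user's position into a 2D grid cell on the floor plane.
    const cx = Math.floor(this.camPos.x / this.data.cellSize);
    const cz = Math.floor(this.camPos.z / this.data.cellSize);
    const key = cx + ',' + cz;
    if (this.visited.has(key)) { return; }
    this.visited.add(key);
    this.spawnCell(cx, cz);                // generate content only once per cell
  },

  // Placeholder generator: one colored box per cell. A real maze generator
  // would decide on walls and corridors based on neighboring cells instead.
  spawnCell: function (cx, cz) {
    const box = document.createElement('a-box');
    box.setAttribute('position', {
      x: (cx + 0.5) * this.data.cellSize,
      y: 0.5,
      z: (cz + 0.5) * this.data.cellSize
    });
    box.setAttribute('color', '#4A90D9');
    this.el.sceneEl.appendChild(box);
  }
});
```

In a real experience, spawnCell is where your maze or level logic would live, and you might also unload cells the user has long left behind.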
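And here is a rough sketch of the redirected walking idea, again as an A-Frame component with a made-up name (redirect-walk). It applies a small curvature gain: for every meter the head physically moves, the virtual content is rotated a couple of degrees around the user, so a straight virtual path corresponds to a gently curved physical one. The gain value here is only illustrative; the thresholds people actually cannot perceive come from the research literature.

```js
// Rough sketch of a curvature gain for redirected walking (hypothetical name).
// Attach to a content root entity that is a direct child of the scene and does
// NOT contain the camera rig.
AFRAME.registerComponent('redirect-walk', {
  schema: { degreesPerMeter: { type: 'number', default: 2 } },

  init: function () {
    this.prevPos = null;
    this.camPos = new THREE.Vector3();
    this.up = new THREE.Vector3(0, 1, 0);
  },

  tick: function () {
    const cam = this.el.sceneEl.camera;
    if (!cam) { return; }
    cam.getWorldPosition(this.camPos);
    if (!this.prevPos) { this.prevPos = this.camPos.clone(); return; }

    // How far did the head move since the last frame (ignoring height changes)?
    const step = this.camPos.clone().sub(this.prevPos);
    step.y = 0;
    const meters = step.length();
    this.prevPos.copy(this.camPos);

    // Rotate the content root around the user's current position by a small
    // angle proportional to the distance walked.
    const angle = THREE.MathUtils.degToRad(this.data.degreesPerMeter) * meters;
    const root = this.el.object3D;
    root.position.sub(this.camPos);
    root.position.applyAxisAngle(this.up, angle);
    root.position.add(this.camPos);
    root.rotateOnWorldAxis(this.up, angle);
  }
});
```

Real implementations also steer the user away from physical walls and combine rotational and translational gains, but the perceptual trick is exactly this kind of small per-frame adjustment.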
Redirected walking, we have that; now we're going to move on to custom controllers. We're going to play virtual ping-pong here. There's going to be a ping-pong ball, and obviously we could simply hand you a normal virtual reality controller; a lot of these experiences are like that. Usually you have some standard controller, this one here, and to be honest, this one feels pretty good. Playing ping-pong with it is not the worst thing I've done so far. But if it were a different experience, like golf or something, this wouldn't feel right. So let's explore what options we have and how we could build custom controllers. One thing we could do, and this is often done, is simply give haptic feedback. We give you the illusion of holding a racket, but what you're actually holding is the controller, represented here by this cylinder in the top-down view. For haptic feedback we could, for example, use the vibration motors that are in most VR controllers, so that whenever you hit the ball there is a little burst of vibration. That makes the user feel like they are using the real object, when in fact they are just holding a virtual reality controller.

The paddle example works quite well, so let's stick with it and say we remove the standard controller, and what we now give you is a real physical controller that actually matches the shape: a custom controller. We actually give you a paddle. Maybe we instrument it in some way so that we can track it, or we do object recognition; I'll talk about that in a few slides as a separate concept. The controller now has the physical affordances that make it look and feel like the real object. It might even be the real object, but maybe it is battery operated with electronics inside, as a custom controller. You need to think about that: it also needs to communicate with the headset somehow, it probably has an inertial measurement unit so that it can detect small tilts, and maybe some lights around it so you can track it. This is how most controllers actually work: we track them visually, the computer sees the lights, and you don't see that frequency of light yourself.

Here is a custom controller. It's not as advanced as what I just described, but it's a project that my student Ruchi and I have been working on. It was an independent study and really a cool little project. What we did here was build a custom 3D controller that we call WatchPod. We take a smartwatch, remove the wristband, use the touchscreen of that little watch, and put it in this pod. As you can see, we're tracking some of the 3D movements with this controller. Because we have a touchscreen, we can also switch into different modes: this is translation mode, and now we can move the cube to the right and to the left. So we have a combination of the touch display (we cut a bit here in the video) and the physical movement of the custom controller. To be honest, just using the inertial measurement unit, the IMU on the smartwatch, wasn't really great: accelerometer and gyroscope data are hard to work with, so building a custom controller is actually quite a lot of work. I think this project could be cool. Did you see how we manipulated the A-Frame scene? Yeah. Unfortunately, Ruchi is now somewhere else in the world, having, like many other students, successfully graduated, so this is an independent study that I'm stuck with. But it was a cool idea, and I hope that one day it will make it into a paper, or maybe into a little project that we can use in my lab. That is an example of a custom controller.
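Going back for a second to the vibration feedback I mentioned at the start of the ping-pong example, here is a minimal sketch of how that could look on the web stack we've been using. The 'ballhit' event is made up (it would come from whatever collision logic your experience uses); the haptics call itself uses the WebXR gamepad's haptic actuator, where the browser and controller expose one.

```js
// Minimal sketch of haptic feedback on contact. 'ballhit' is a hypothetical
// event fired by your collision logic. Uses the WebXR Gamepads module: each
// input source may expose a GamepadHapticActuator we can pulse.
AFRAME.registerComponent('hit-haptics', {
  init: function () {
    this.el.addEventListener('ballhit', () => {
      const session = this.el.sceneEl.renderer.xr.getSession();
      if (!session) { return; }
      for (const source of session.inputSources) {
        const actuator = source.gamepad && source.gamepad.hapticActuators &&
                         source.gamepad.hapticActuators[0];
        if (actuator) {
          actuator.pulse(0.8, 50);   // ~80% intensity for 50 ms: a short "tick"
        }
      }
    });
  }
});
```

A short, strong pulse right at the moment of contact goes a surprisingly long way toward selling the illusion of hitting something solid.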
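And to give a flavor of what feeding a custom controller's sensors into a scene can look like, here is a purely hypothetical sketch: it assumes the watch streams its orientation as JSON over a WebSocket, and both the endpoint and the message format are invented for this example. It is something in the spirit of what the WatchPod had to do to manipulate the A-Frame scene, not the actual implementation.

```js
// Hypothetical sketch of driving an A-Frame entity from a custom controller's IMU.
// Assumes the watch streams orientation as JSON over a WebSocket; the endpoint
// and payload shape below are made up.
AFRAME.registerComponent('watch-controller', {
  schema: { url: { type: 'string', default: 'ws://localhost:8080' } },

  init: function () {
    const socket = new WebSocket(this.data.url);
    socket.onmessage = (msg) => {
      // Expected (invented) payload: { "alpha": deg, "beta": deg, "gamma": deg }
      const o = JSON.parse(msg.data);
      // Raw accelerometer/gyroscope data is noisy; in practice you would fuse
      // and filter it. Here we just map the angles straight onto the entity.
      this.el.setAttribute('rotation', { x: o.beta, y: o.alpha, z: o.gamma });
    };
  }
});
```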
Now we're going to enter a little more AR-oriented content, and I want to talk about 3D reconstruction. Again we have the physical world and virtual content, but now you're seeing it through an AR headset. We have a physical table now, not a virtual one; I actually moved it into the real world, if you noticed, and it's represented here by this box. As you walk through the space, the device scans the environment and does a real-time 3D reconstruction of it. Not just the basic walls and ceiling: increasingly we can build denser point clouds, a 3D point cloud that represents this mesh, and so we also get at least the geometry of this table, the thing you and I call a table.

A computer wouldn't be able to call it that at this stage. With just 3D reconstruction and spatial mapping, we can't actually call this a table yet; we only have a spatial mesh, a scan of the physical world as a dense 3D point cloud. If we want to call it a table, we need an additional, important step: object recognition, together with segmentation and classification, so that we can give it a label and finally call it a table. Most devices only do spatial mapping, if that, and adding this object recognition step isn't really part of current pipelines. It's an expensive process, and it works by matching against large databases. If you're a search engine provider, you actually have a good chance of getting this right; look at Google Lens, for example.

Object tracking is something we can then do as well: if we can do object recognition, we can do object tracking. Let's bring in a ping-pong racket, and this time it's actually a physical one. We're playing ping-pong in AR now, so that's fine. We scan this object and we recognize it. Did you see the blue? I hope you noticed; this was tedious. Anyway, we use object recognition to detect the physical object in the real world. We scan it, and now as the user moves it (I'm going to play and hit the ball), we can register it, meaning we determine its 3D location, and track it. And if we build this up over time, we can determine the velocity at which you hit the ball. Now we hit the ball and the ball bounces off the table. Pretty cool. Remember, it's a virtual ping-pong ball: we used a physical object to hit a virtual ping-pong ball, which then bounced around, using object recognition and then registration and tracking. These are some of the concepts that I think are pretty fundamental but also advanced.

You can build these kinds of things with devices like the ones here. With the Leap Motion, for example, you can track hands that are within its field of view: place it on a table, hold your hands above it, and it can track them. You can start building more virtual controllers that way, and I've also seen it mounted in some interesting ways to scan the hands. We now have built-in hand tracking in a lot of devices, including the Oculus Quest and obviously the HoloLens 2, so this isn't as important anymore. But if you want to build your own custom tracking solutions, you would actually need a depth camera: not the one that's built into the HoloLens, but an Intel RealSense, or the Kinect I was showing off earlier; it's over there. The Kinect would also fit the bill. Then we can actually implement some of the concepts I was just describing conceptually; we can implement them with these technologies. I do this all the time in a lot of my projects, and it's actually quite fun.
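As a small, concrete taste of using the device's reconstruction of the real world, here is a raw WebXR sketch using the optional hit-test module: every frame we ask where a ray straight out of the viewer intersects detected real-world geometry, for example the top of that table. Support varies by browser and device, and the function names wrapping the API calls are just for this sketch.

```js
// Sketch: querying the device's real-world reconstruction with WebXR hit testing.
// The session must be requested with the feature enabled, e.g.:
//   navigator.xr.requestSession('immersive-ar', { requiredFeatures: ['hit-test'] });
let hitTestSource = null;

async function startHitTesting(session) {
  // Cast rays from the viewer's pose, i.e. straight out of the headset/camera.
  const viewerSpace = await session.requestReferenceSpace('viewer');
  hitTestSource = await session.requestHitTestSource({ space: viewerSpace });
}

function onXRFrame(time, frame, referenceSpace) {
  if (!hitTestSource) { return; }
  const results = frame.getHitTestResults(hitTestSource);
  if (results.length > 0) {
    // Pose of the closest intersection with detected real-world surfaces,
    // e.g. a point on the physical table; use it to anchor virtual content.
    const pose = results[0].getPose(referenceSpace);
    console.log('hit at', pose.transform.position);
  }
}
```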
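And going back to the ping-pong example: once an object is registered and tracked, turning its motion into a velocity is just a matter of differencing positions over time. Here is a small A-Frame sketch (the component name is made up); you would attach it to whichever entity your tracker keeps aligned with the physical paddle.

```js
// Sketch: estimate a tracked paddle's velocity from frame-to-frame positions.
AFRAME.registerComponent('paddle-velocity', {
  init: function () {
    this.prevPos = new THREE.Vector3();
    this.currPos = new THREE.Vector3();
    this.velocity = new THREE.Vector3();   // world-space velocity in m/s
    this.el.object3D.getWorldPosition(this.prevPos);
  },

  tick: function (time, dt) {
    if (!dt) { return; }                   // dt is the frame time in milliseconds
    this.el.object3D.getWorldPosition(this.currPos);
    // velocity = (current position - previous position) / elapsed time
    this.velocity.copy(this.currPos).sub(this.prevPos).multiplyScalar(1000 / dt);
    this.prevPos.copy(this.currPos);
  }
});

// Usage idea (assuming a physics component on the ball that accepts a velocity):
// ball.setAttribute('velocity', paddle.components['paddle-velocity'].velocity);
```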
What I don't do that much, though, is custom displays, so I wanted to talk about those as well. Along the reality-virtuality continuum, or the mixed reality spectrum, we can place a number of display types. We have tangible and spatial, so projective, AR. We have hand-held AR, and we have head-worn AR, which sits probably somewhere here. We can have room-sized, so CAVE-style, VR displays if you will, and then head-mounted VR. That's pretty standard. The displays you are probably most familiar with are hand-held AR, since we spent quite some time with them in this course, and then head-mounted VR, whether the Cardboard or maybe an Oculus or HTC Vive headset. What you probably haven't thought so much about is where these other kinds of displays come in, so I wanted to give a few examples.

One is Lightform, which in some ways is a shout-out to a research project called RoomAlive from friends and colleagues at Microsoft Research, and which is now more or less a little spin-off and startup. Lightform is a combination of a projector and a camera, a pro-cam setup, and it lets you build spatial, projective AR experiences. Pretty cool. Then Looking Glass, maybe that's something you've come across. I haven't actually played with it myself; I've seen it in a few installations when visiting friends at Disney, and at Michigan we have a few people experimenting with it. Think of it as a window that comes in different sizes; it's a holographic AR display.

To give you a sense of what this space can look like, I'm going to show you a research prototype, a room-sized AR mirror. It's like monitor-based AR, which sits somewhere here on the spectrum, and doing monitor-based AR at room size is really interesting. I wanted to show you my little project here, AR mirrors. It was one of my before-the-holidays projects: let's build this, I just want to see how it feels. This is me in the lab. I have screens everywhere, which is both a good and a bad decision, because I also have writable walls, but that's a different story. Anyway, at each of these screens I have a Kinect: a Kinect here, a Kinect here. These can track me, and each of the screens shows a mirror image, a virtual copy. We actually have a 3D model of my lab, a virtual copy of what I'm doing in this lab. We can then do skeletal tracking, both of the easy parts, the joints that are highly visible, and of some that are harder to see from this display; for that other display, a different set of joints is harder to see. I just wanted to color-code this a little differently. This is how it looks when I'm in the room. It really works like a mirror now: as you walk toward it, the perspective changes and the scene is rendered differently. If you do this in front of a real mirror, moving to the left or to the right, you get a better angle on what's behind you, and that's what I wanted to mimic here. For that we obviously needed to build on really custom display technology. Maybe you were thinking, "Hey Mike, are you building your own displays?" I have actually done that as well, with 45-degree tilt setups, Pepper's ghost and all this stuff. Pretty cool. But it's hard to demonstrate that kind of thing in a way that looks like a cool experience for you.
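The perspective trick behind the AR mirror is head-coupled rendering: treat the screen as a fixed window (or mirror) into the virtual copy of the room, and recompute an off-axis projection from the tracked head position every frame. Here is a minimal sketch in plain Three.js; it assumes the screen is a w by h meter rectangle centered at the origin in the z = 0 plane, with the tracked head position (from a Kinect skeleton, say) expressed in that same coordinate frame. The function name is just for this sketch, and it is not the actual implementation from my prototype.

```js
// Sketch: head-coupled ("fish tank" / mirror) perspective with an off-axis frustum.
// Call once per frame, before rendering, with the latest tracked head position.
function updateMirrorCamera(camera, headPos, w, h, near = 0.05, far = 100) {
  // Project the screen edges onto the near plane as seen from the head position.
  const s = near / headPos.z;            // headPos.z > 0: head is in front of the screen
  const left   = (-w / 2 - headPos.x) * s;
  const right  = ( w / 2 - headPos.x) * s;
  const top    = ( h / 2 - headPos.y) * s;
  const bottom = (-h / 2 - headPos.y) * s;

  camera.position.copy(headPos);         // the eye sits where the head is
  camera.rotation.set(0, 0, 0);          // view direction stays perpendicular to the screen
  camera.projectionMatrix.makePerspective(left, right, top, bottom, near, far);
  camera.projectionMatrixInverse.copy(camera.projectionMatrix).invert();
}
```

As the viewer steps to one side, the frustum skews so they see more of the virtual room "behind" the opposite edge of the screen, which is exactly the mirror-like behavior in the video.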