Hello, this is week seven of the Interactive Computer Graphics course. The topic this week is real-world interaction, by which we mean user interfaces for computing systems that work in the real world. So far we have discussed the interface between a computer and its user: the user does something, the computer responds, and the interaction happens between the two. Here we extend this interaction into the real world, where computers are connected to physical objects such as robots or home appliances. So this week we discuss interaction with computing systems that operate in the real world, such as robots and home appliances.

Here is the list of topics for this week: a command card interface for home robots, Style-by-Demonstration for teaching robots behavior, an actuated puppet device for posing 3D characters, robotic light systems, and a fur display.

The first topic is the command card interface; this work was published as Magic Cards. The question here is how to command a robot working in your home. Typical approaches fall into two extremes. One extreme is very simplified, easy-to-use control with speech or gestures. This kind of control is very abstract, and in some cases too abstract and too simple: you cannot specify details with a command like "clean here," "do it," or "go there." These commands are also volatile; as soon as you say something, it disappears, so it is difficult to confirm that you actually gave a command. And with speech or gestures you need to remember which commands are available. The other extreme is direct control using a joystick or gamepad: you continuously provide commands and the system continuously follows them. You can control every detail this way. However,
in some cases this control is too low-level, and controlling continuously is very tedious. What we try to do here is hit a middle ground between these two extremes. The method we propose is a paper card interface for giving commands. The user leaves command cards in the environment, giving instructions to the robot such as "clean here" or "deliver this object there." The user puts the instruction cards in the environment, and the robot does the tasks while the user is out. That's the idea.

Let me show you a video. The first part is a futuristic concept video, not a real implementation, but I hope you get the idea from this sequence. Suppose you have a messy room, a messy kitchen, and the trash bins are full, and you want a robot system to take care of it. The user pulls out a command card set, which serves as a convenient list of the operations you can give to the robot system. She picks up the appropriate command cards and leaves them in the environment: wash this kitchen, deliver this to the garbage bin, take out this garbage, make the bed, clean here.

There are a couple of good things about paper cards like these. First, you can see the list of available commands just by looking at the card set, so you do not need to remember the possible commands. Second, leaving a card is a very clear instruction, and if you want to cancel, you just pick the card up. Third, a card is very appropriate for specifying a physical location: in "clean here" or "deliver here," the word "here" is ambiguous, but the card's position specifies the location exactly. After the user goes out, a robot appears and does the tasks. Again, this is a concept video, not a real implementation, so in this sequence a robot simply appears and does the job.
What we try to do here is mimic human-to-human interaction: a person can give a message card to another person, who then does the job. We try to do a similar thing with a computer system. In the evening the user comes back and everything is done. That is the envisioned futuristic view.

Now let me show you the actual implementation. The current implementation is not too fancy, but it does the basic things. We built a system using Roomba robots, with a couple of robots working together, in an environment where everything is observed by a ceiling-mounted camera. The system continuously tracks the command cards and the robot locations. Here is the actual working system. The user gives instructions such as "deliver this box from here to this location" and "vacuum-clean this spot," and the system remembers the tasks. Then the user goes out and the system starts working. The ceiling camera captures all the command cards. First, the system collects the command cards using a card-pickup robot. After that, the system starts to do the jobs. The first request was to deliver the box over there, so the pushing robot starts working and pushes the object to the target location. Next, the user requested the system to vacuum-clean a spot, so the vacuum cleaner appears and cleans it.

Another interesting aspect is error reporting. The user gives instructions to the robots as paper cards, but sometimes the system also has to provide feedback to the user; for example, when the system makes an error, it should present an error report. An interesting point about this system is that the error report is also given as a paper card left in the environment. Let's see it. Suppose the robot fails, for example because its battery runs out.
Then a small mobile printer robot appears and leaves a printed message for the user, such as "I'm sorry, I failed." In this way we close the loop: the user provides commands on paper cards, and the system provides feedback as paper cards. The user can interact with the robotic system without touching any computer, just using paper cards. Again, the environment is like this: we have ceiling-mounted cameras, a central computer, and remotely controlled robots. Each card carries a visual tag, a two-dimensional augmented reality code, and on the back you see instructions for the user. So that's our system.

Now let me briefly describe the pushing algorithm developed for this system. This is an algorithm for non-prehensile pushing, which means pushing without grasping. A naive approach is a two-state approach: first the robot goes behind the object, and after reaching the position behind the object it switches to the pushing state and tries to push. But in the middle of pushing the object may move sideways, in which case the robot returns to the behind state, and after reaching the position behind the object it switches to the pushing state again. In reality the robot has to go back and forth between these two states, which is not desirable: the robot can fall into an unstable status, oscillating between the two states. This approach also requires very careful parameter setting; you need a threshold to distinguish the two states, and this parameter can be very sensitive.

To avoid this, we propose to use a dipole field to achieve pushing behavior. Suppose you have an object here and you want to move it in a given direction. Given the object's position and orientation, we compute a dipole field centered at the object and oriented toward the target direction. After that, we simply make the robot flow along this dipole field.
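For contrast, the naive two-state baseline described above might look like the following minimal sketch. This is my own illustration, not the authors' code; the state names and the threshold value are hypothetical. The hard angular threshold is exactly what makes the controller oscillate.

```python
import math

BEHIND, PUSHING = "behind", "pushing"

def two_state_update(state, robot_xy, object_xy, target_xy, threshold=0.3):
    """Naive two-state pushing controller (the baseline the lecture
    argues against). Returns the next state. The hard angular threshold
    makes the robot flip back and forth between the two states."""
    to_target = math.atan2(target_xy[1] - object_xy[1],
                           target_xy[0] - object_xy[0])
    to_robot = math.atan2(robot_xy[1] - object_xy[1],
                          robot_xy[0] - object_xy[0])
    # Angular offset from the ideal "behind" position (directly opposite
    # the target direction), wrapped into [0, pi].
    diff = to_robot - to_target - math.pi
    offset = abs(math.atan2(math.sin(diff), math.cos(diff)))
    if state == PUSHING and offset > threshold:
        return BEHIND    # object slipped sideways: retreat and realign
    if state == BEHIND and offset <= threshold:
        return PUSHING   # reached the position behind the object: push
    return state
```

A small sideways displacement of the object is enough to kick the controller from PUSHING back to BEHIND, which is the oscillation and parameter sensitivity the lecture points out.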
There are a couple of benefits. First, there is no explicit mode switching: going behind and pushing are smoothly merged, so there is no sudden change between two modes. Another benefit is scale invariance. As you can see, the dipole field has the same shape regardless of scale, which means you can apply the same algorithm with the same parameters to objects of different sizes.

Here is a little more detail. You have the object and the desired pushing direction. You first set up a local coordinate frame defined by the object's position and orientation, and then compute the position of the robot relative to the object in this frame. This gives you cos θ and sin θ, along with the distance r. Once you have the angle θ, the algorithm is very simple: you just compute cos 2θ and sin 2θ, which gives you the direction in which the robot should move. This is just the definition of the dipole field, and you can implement the idea in very few lines of code.

Let me show you a brief video. This is the basic idea: the system consists of a ceiling camera and robots, this is the computed dipole field, and the robot flows along the field. In this example the robot pushes a very small cup; the motion is very smooth, with no two distinct modes, they simply merge together. Here the robot pushes a large dish using exactly the same dipole field. And you can apply the same algorithm to the Roomba robot to push things on the floor. By the way, this robot is very easy to control from a computer, so if you want to try building a robotic system, I recommend trying it.

Another interesting thing is cooperative pushing. If the object is too heavy for one robot to push, two robots can cooperate.
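The direction computation just described fits in a few lines. This is a minimal sketch of the idea, not the authors' implementation; the function and variable names are my own, and it assumes the field direction in the object's local frame is (cos 2θ, sin 2θ) as stated in the lecture.

```python
import math

def dipole_push_direction(robot_xy, object_xy, target_xy):
    """Unit direction a pushing robot should follow, given by a dipole
    field centered at the object and oriented toward the target.
    Sketch of the lecture's formula, with hypothetical names."""
    # Local frame: x-axis points from the object toward the target.
    heading = math.atan2(target_xy[1] - object_xy[1],
                         target_xy[0] - object_xy[0])
    # Angle theta of the robot's position relative to the object,
    # measured in that local frame.
    theta = math.atan2(robot_xy[1] - object_xy[1],
                       robot_xy[0] - object_xy[0]) - heading
    # Dipole field direction (cos 2*theta, sin 2*theta) in the local
    # frame, rotated back into world coordinates.
    phi = 2.0 * theta + heading
    return (math.cos(phi), math.sin(phi))
```

A robot directly behind the object (θ = π) is told to move straight toward the target, while a robot at the side is steered around behind it, so "going behind" and "pushing" emerge from the same formula with no mode switch.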
The traditional approach is parallel pushing: two robots side by side push the object together. Here, instead, we tested single-line serial pushing. The algorithm is very simple: just apply two dipole fields. The first robot tries to push the box, but the box is too heavy, so the second robot pushes the first robot, again using a dipole field. By combining the flows of the two dipole fields, the robots can push a relatively heavy object, and again the motion is very smooth. So that's it.

To summarize, we introduced command-card-based interaction and a pushing algorithm to support it. The original paper was published as "Magic Cards: A Paper Tag Interface for Implicit Robot Control." The visual tag we used is a popular one, a two-dimensional barcode frequently used in augmented reality systems; the original method was published in 1998 as "Matrix: A Realtime Object Identification and Registration Method for Augmented Reality." If you want to use it, there is a popular toolkit called ARToolKit, which I recommend you try. For the pushing algorithm, we published a paper on the dipole field for object delivery by pushing on a flat surface, so if you are interested in the details, please take a look at that paper. Thank you.