This, in the hands of a physician, can be a serious threat to the security of some systems. And whatever way you're thinking that may be, you're probably wrong. The story comes from a hospital that installed computers on wheels: essentially, workstations on wheeled carts that you could move from room to room or bay to bay, and physicians were leaving them logged in. Now, there's a real security problem there, because patient records are visible, and you potentially have access to prescriptions or even the drugs themselves. So it's very important that only authorized people have access to these systems.

The proposed security solution was to put proximity sensors on the carts, so if the doctor walked away for a certain amount of time, they would automatically be logged out. This was so frustrating to physicians that they started putting styrofoam cups over the proximity detectors so the system wouldn't know when they walked away. This actually made the system less secure. And the solutions that security people proposed to this problem included security monitors walking the hospital halls removing the styrofoam cups and, on the technical side, building models of physicians' frustration with automatic logouts to optimize the time before the system logged them out.

Neither of those gets at the real issue, because I can tell you exactly what went wrong with that system. Whoever designed the security did not take the couple of days they should have to follow physicians and nurses around the hospital and see how they were using the carts. Because if a physician wheels a cart into a room, types in some information, and then goes to examine a patient, they shouldn't be logged out. And if they lose all the information they typed in, it's totally reasonable for them to get around an unreasonable security system.

Instead, what we see so often in security is that the person who designs it sits in their office, comes up with how it probably should work and what they think is secure, gives no consideration to human workflows, tasks, or usability, and then imposes this on people and expects them to conform. When people are reasonably trying to get their work done and the security system gets in their way, of course they try to get around it, because it's stopping them from doing their job. What we need to do is make sure that human workflows, capabilities, and tasks are incorporated into the security side, so that things actually work.

Let me give you another example of where this has failed: our pathetically insecure password systems. I am supposed to create passwords that have no dictionary words but do have uppercase, lowercase, numbers, and punctuation; that are at least eight characters long; that don't repeat the same character multiple times; and that don't repeat previous passwords I've used. I have to change them every six months, and I can't reuse them across the 200 or more places where I actually have passwords. Anyone with the most basic understanding of human cognitive abilities knows this is a ridiculous thing to ask people to do. But not only do we ask people to do it, we also get targeted with stories about how we're being insecure or stupid for creating simple passwords, for not changing them enough, or for reusing them across sites.

The field of human-computer interaction has spent decades learning how people think, psychologically and cognitively, building models of that, and showing how those models can be applied to the design of technology.
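As an aside, those password rules are concrete enough to write down as code, which makes the mismatch with human memory easy to see. Here is a minimal sketch in Python; the function name and the exact checks are illustrative assumptions on my part (the dictionary-word and expiry rules are omitted), not any particular system's implementation.

```python
import re
import string

# Hypothetical set of passwords this user has already used.
previous_passwords = {"Summer#2023", "Summer#2024"}

def meets_policy(password: str, previous: set) -> bool:
    """Return True if the password satisfies the composition rules
    described above (dictionary-word check omitted)."""
    if len(password) < 8:                                   # at least 8 characters
        return False
    if not any(c.isupper() for c in password):              # an uppercase letter
        return False
    if not any(c.islower() for c in password):              # a lowercase letter
        return False
    if not any(c.isdigit() for c in password):              # a number
        return False
    if not any(c in string.punctuation for c in password):  # punctuation
        return False
    if re.search(r"(.)\1", password):                       # one reading of "no repeated characters":
        return False                                        # no character repeated back to back
    if password in previous:                                # no reuse of previous passwords
        return False
    return True

print(meets_policy("Tr0ub4dor&3", previous_passwords))  # True
print(meets_policy("password", previous_passwords))     # False
```

Every one of those checks is trivial for a machine to enforce and costly for human memory to satisfy across 200 accounts, which is exactly the problem.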
We've built design methods that allow us to integrate people's tasks and feedback into the systems that we build. And we have lots of ways of evaluating systems for usability: to see how well people can learn them, how well they remember how to use them, and how quickly and efficiently they can accomplish their tasks within those systems. Unfortunately, so much of cybersecurity has completely ignored all of what we know about human-computer interaction, and if any consideration is given to it, it's tacked on at the end. This is a burden that cybersecurity people are actually familiar with, because a lot of the time security itself is tacked onto a system after it's built. But if we want a secure system, the security and the human component need to be integrated from the beginning. You can't build a secure system unless you have talked to people, understood what they're doing, and arranged the security around helping them do their tasks. If the security gets in the way, it's totally reasonable that people are going to find ways around it so they can get their work done. If you design human users into the system from the beginning and build the security around what they're doing, your system ends up being more secure, because people don't have to go around it; it works with them, in parallel with their tasks.

We see the same sorts of problems in privacy. We ran a simulation in my lab where we took a whole bunch of personal data points from Facebook: your name, your private messages, your favorite books. We asked people to go through that list and check off what they thought apps like Candy Crush Saga or FarmVille could access. Everybody underestimated how much data apps could access. So as a second step, we had some people read Facebook's privacy policy and data use policy, and we had others watch an interactive horror movie called Take This Lollipop, which pulls data from your Facebook account the way an app would and then shows a stalker tracking you down. The people who watched the video were significantly better informed afterwards about what data points apps could access than the people who read the data use policy. That says a lot about how well we're conveying to people what the risks of sharing their data are and how to protect against them. The same problems exist in privacy settings and security controls: they're just not designed for human users, and it's extremely hard for people to figure out how to use them. And the same solutions that HCI offers for security apply in exactly the same way to privacy.

So what we're going to cover in this class are the basics of human-computer interaction. How do we understand people's cognitive and psychological abilities? How do we understand their tasks and what they're trying to do? And how do we design that into systems and evaluate how well those systems do? We'll then take those lessons and examine all kinds of different security and privacy problems, from authentication to social media privacy settings, so you can see how you, as a designer, can build in an understanding of humans to make systems that are ultimately more secure.