Hi, folks. Ed Amoroso here. I want to tell you a little bit about some of the early requirements work in cybersecurity built on the reference monitor concept that we showed you in a previous video from James Anderson. I want to tell you a little story first; it kind of tracks my own career, because I was involved in a lot of this. In the early-to-mid 80s, I took a job at a place called Bell Laboratories. If you can imagine the Mecca for technology, particularly computer science, that was the place. The masters who built Unix worked there, and those guys were so smart. They had invented an operating system by coding some very simple routines after leaving a bigger project: they had been working on a big, monstrous project called Multics, decided it was getting too complicated, went back home, and coded a much smaller, simpler operating system capability. They called it Unix, and you would know it as the precursor to Android, Linux, iOS, just about everything. These guys were geniuses, and I remember being a young pup working there; I would literally find excuses to walk the hallways where they worked and hope that some of that genius would come out of their offices and hit me. They looked cool, too: the beards and Birkenstocks, straight from central casting.

But one of the things they were not doing was adding cybersecurity, we called it computer security then, adding computer security mechanisms to the underlying Unix. So some teams started to look at this, and one was led by a guy named Bob Marsh Sr., who in those days was one of the supervisors I was working for, and the idea was: how do we add security to Unix? Now, think about this. If I said to many of you watching, take some system, I want you to add security to it, what would you do? How would you start? What framework would help you decide what to do? I'm not sure you'd have a good one, right? You'd go, well, add something here, add something there. You'd come up with something in your mind that you believed would be the right answer to the question: how do we add security to this system? So, back in the 80s, the question was: how do we add security to Unix?

Now, luckily, there was a document being written by bureaucrats in the government that was awesome. I know some of the more senior folks watching this might hearken back and say, wait a minute, there are a lot of mistakes in that thing. Yeah, in 1983 our view of computing was very different; for example, there was no real developed notion of networking in the document they were working on. But as I reflect back, I think it's one of the best government reports I've ever seen, and here's what they came up with. The cover of the book was orange, so it was called the Orange Book, and it was the TCSEC, the Trusted Computer System Evaluation Criteria; it also had this big, long government reference number, DoD 5200-something, all these different ways of referring to it. But it was a criteria set of requirements where, if you have a system and you want to make it more secure, you want to add security to it, it said: here's our idea of how you do it. If you have a really terrible system, no security at all, then you're in one class, sort of the base class, the kindergarten version of cybersecurity, with almost nothing required.
From there it works like a ladder: if you want to do a little better, then maybe you improve access control this way, improve auditing this way, and demonstrate that you've built a nice system through this thing called assurance, meaning you demonstrate correctness and bug-free operation. And if you want to go a little further, here are some requirements to get you to the next class, where you improve and add more access controls and more auditing. (The full ladder of classes is sketched just below.) I thought it was awesome, and that's what we did for Unix. We thought: look, we have this wonderful operating system, and we want to add security to it; why don't we use this document? It's mind-boggling now when you think about how nicely that document was written, because anyone watching would agree that the image we usually have in our minds of a government document is probably terrible. I hope I'm not offending too many people watching, but government documents are usually written by committee, and what was ever written by a committee that was any good? I would say this is one example.

So, let's talk about a couple of concepts from the Orange Book that I think are really foundational. The functional mechanisms I'm not going to spend time on now, because in subsequent videos we're going to spend hours together on access control and audit and intrusion detection, and certainly authentication, which is a big deal. But the second half of the requirements I do want to spend a minute on; they're called assurance requirements. Here's the idea: in addition to building something correct, you have to provide some evidence that you did a good job. Isn't that weird? In security, it's not enough to just secure something; you actually have to provide demonstrable evidence that you've done it, and done it right. Computer security, for its first 20, 30, 40 years, was as much focused on assurance as it was on functionality, and it's one of my great disappointments in cyber that we don't spend more time on assurance. We just don't. But I think it needs to come back.

Here's one concept invented in the Orange Book that I think is marvelous: the idea of a Trusted Computing Base, or TCB. Here's what that meant: every system should have some minimal set of functionality that, if all goes haywire, I know I can trust, meaning I provide high assurance that that piece of functionality is right. (There's a tiny code sketch of the idea after the ladder below.) Now, it's all but missing in modern computing, with one exception in device and system management. Some of you may know the idea of trusted execution: if you know a little bit about mobile operating system design, you've probably heard of the TEE, the Trusted Execution Environment, and it's exactly that TCB concept, one little embedded piece of functionality that I trust. Sometimes it's hardware; a lot of systems have a thing called a Trusted Platform Module, TPM, that implements that Trusted Computing Base concept, invented in the Orange Book almost 50 years ago. Isn't that incredible? Well, I guess it wasn't 50; 40 years ago. I think it's just absolutely magnificent that a government document had the presence of mind to recognize how important a trusted base is to an overall secure operating system. A really, really nice and elegant concept.
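For reference, since the talk describes the ladder without naming the rungs: these are the seven evaluation classes from the published TCSEC, listed here as a small C program. The one-line summaries are my paraphrases of the published class names and themes, not quotes from the document.

```c
#include <stdio.h>

/* The TCSEC "ladder" of evaluation classes, lowest to highest. Each class
 * adds functional requirements (access control, auditing) and assurance
 * requirements on top of the class below it. */
struct tcsec_class {
    const char *name;
    const char *summary;
};

static const struct tcsec_class ladder[] = {
    { "D",  "Minimal protection: evaluated, but met nothing higher" },
    { "C1", "Discretionary security protection" },
    { "C2", "Controlled access protection: finer-grained DAC plus auditing" },
    { "B1", "Labeled security protection: mandatory access control" },
    { "B2", "Structured protection: formal security policy model" },
    { "B3", "Security domains: small, tamper-resistant reference monitor" },
    { "A1", "Verified design: formal verification methods" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof ladder / sizeof ladder[0]; i++)
        printf("%-3s %s\n", ladder[i].name, ladder[i].summary);
    return 0;
}
```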
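And here is a minimal sketch of the TCB and reference monitor idea in C. Everything in it, the subjects, the objects, the rights table, and the audit format, is invented for illustration; the point is the shape of the thing: one small, central function that mediates every access and audits every decision, and that is small enough that you could plausibly provide assurance it is correct.

```c
/* Toy TCB sketch: one small mediation function that every access
 * request must pass through. All names and data here are invented
 * for illustration only. */
#include <stdio.h>
#include <stdbool.h>
#include <time.h>

enum right { READ = 1, WRITE = 2 };

/* Hypothetical access matrix: rights[subject][object]. */
static const int rights[2][2] = {
    /* object:   file0         file1 */
    { READ | WRITE, READ  },   /* subject 0: alice */
    { 0,            WRITE },   /* subject 1: bob   */
};

static const char *subjects[] = { "alice", "bob" };
static const char *objects[]  = { "file0", "file1" };

/* The entire "trusted computing base" of this toy system: it checks
 * the requested right against the matrix and writes an audit record
 * for every decision, grant or deny. */
static bool mediate(int subj, int obj, enum right want)
{
    bool ok = (rights[subj][obj] & want) == (int)want;

    fprintf(stderr, "AUDIT t=%ld subj=%s obj=%s want=%s decision=%s\n",
            (long)time(NULL), subjects[subj], objects[obj],
            want == READ ? "read" : "write", ok ? "grant" : "deny");
    return ok;
}

int main(void)
{
    /* Untrusted code can only reach objects through mediate(). */
    if (mediate(0, 0, READ))  puts("alice reads file0");
    if (mediate(1, 0, WRITE)) puts("bob writes file0"); /* denied */
    if (mediate(1, 1, WRITE)) puts("bob writes file1");
    return 0;
}
```

The design pressure the Orange Book was applying is visible right there: the smaller and simpler mediate() stays, the more believable your evidence about it becomes. That is assurance in miniature.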
So, what happened? Well, the Orange Book kind of chugged along. It certainly became the basis for much of computer security from the 80s to maybe the 2000s, but it got a little tangled up in bureaucracy, as many government things will. To demonstrate that you actually met a criteria class, there was this long evaluation process where a bunch of auditors would come in. Nobody liked that. It turned out to be expensive, hard to do, bureaucratic, a lot of paperwork, and everybody went, ugh. And I think as computing became more complex and we became more impatient with our ability to do computing, the Orange Book was screaming, "Hey, do it correctly. Slow down. Do it right," while the world was saying, "Go faster. I want more. I need more functionality. I need more capability." And you see the divergence.

By the way, folks, that divergence, from "do it slow, do it right" to "I need more, I need it fast," is the cybersecurity problem that we focus on in this class. That's it in a nutshell. If you're wondering what cybersecurity is about, yes, we have definitions and things that we give in an academic sense, but the way you want to think about it, the way you want to describe it to others, is this: "do it right, do it slow" goes down one path; "do it fast, I need more" goes down a different path. The difference between those paths is what hackers exploit.

So that's why it's so important to make sure we are rooted in an understanding of our history, and the Orange Book is part of our history. As technologists, we hate history; computer scientists in particular hate history. A book that was written a year ago might as well have been written a hundred years ago: it goes in the bargain bin at the local bookstore. My gosh, that's a year old, that's old stuff, pay no attention. We have no sense of history, and we tend to make the same mistakes over and over and over again because we never look back. Particularly in Silicon Valley here in the United States, the culture is characterized by an unwillingness to slow down, because that's not how you do it there; that's not what's made Silicon Valley so exciting and successful. So we have to understand that we're not going to change that. But we can't be silly, either. We have to recognize that a lot of the problems we have in cyber exist simply because we've divorced ourselves from some of the basic foundational concepts, like the trusted computing base, that came to us through the Orange Book. So keep that in mind as we go through this; I think it will be helpful and useful to you as you continue your studies in cybersecurity. We'll see you in the next video.