We can build computers that do amazing things. An exciting recent development is a new field called machine learning. Machine learning aims to create machines that can learn from their experiences. Remarkably, machine learning appears to hold the key to understanding how our own brains work. In this section, we're going to explore this exciting new field and some of the questions that it raises.

Intelligent machines are all around us. They power our search engines. They filter our email for spam. They suggest things that we might like to buy or see. They drive cars, and they even explore other planets for us. None of these tasks is easy. To solve them, a machine needs not only to follow rules, but also to recognize patterns and react quickly to new information. Our machines are able to do this courtesy of a brilliant idea: computation. A computation is a way of solving a problem by following a set of instructions, or a recipe, called an algorithm. An algorithm tells a machine how to solve its problem by taking a series of small steps. The steps are usually very simple, such as adding one or checking whether two numbers are the same, but when many small, simple steps are joined together, the result can be behavior that's both complex and intelligent.

The father of our modern idea of computation is Alan Turing, an English mathematician who lived from 1912 to 1954. Turing was obsessed with the project of trying to create an intelligent machine, and in his most famous mathematical paper he discovered what he thought was the key to it: what's now known as a universal computing machine. A universal computing machine is a machine that, if given the right instructions, can replace any other computing machine. That might sound like an incredible proposal: a single machine that could replace not just every computing machine on the planet now, but any computing machine that could possibly be built. You might guess that if a universal computing machine exists at all, it would have to be a fantastically complex device. Turing showed something that nobody had guessed: it's relatively easy to build a universal computing machine. The machine Turing described is known today as the universal Turing machine. It consists of a long paper tape and a head that can scan along the tape, reading and writing symbols, guided by a simple set of instructions. Given the right set of instructions, the right algorithm, a universal Turing machine can reproduce the behavior of any possible computing machine.

So it seems that the problem of producing intelligent behavior reduces to the problem of finding the right kind of algorithm. But what kind of algorithm produces intelligent behavior? For many years, attention focused on algorithms that involved language-like symbols and rules, the sort of thing you might produce if you tried to write down, in English, instructions for creating intelligent behavior. More recently, attention has shifted to algorithms that do not involve language-like rules and symbols. These algorithms manipulate distributed patterns of activity in networks inspired by the human brain, so-called connectionist networks. Today, the algorithms that hold the most promise for generating intelligent behavior are probabilistic algorithms.
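Before turning to probabilistic algorithms, it helps to make the Turing-machine picture concrete. Below is a minimal sketch in Python, offered purely as an illustration rather than as Turing's original formalism: a tape, a head, and a table of instructions of the form (state, symbol read) -> (symbol to write, head move, next state). The function name and the particular rule table are assumptions made for this example; this table simply adds one to a binary number written on the tape.

```python
# A minimal Turing-style machine, purely for illustration: a tape, a head,
# and a table of instructions of the form
#   (state, symbol read) -> (symbol to write, head move, next state).
# This particular table increments a binary number written on the tape.

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        # Grow the tape with blanks if the head runs off either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        elif head >= len(tape):
            tape.append(blank)
    return "".join(tape).strip(blank)

# Instruction table for binary increment (head starts on the leftmost digit):
# move right to the end of the number, then add one with carries moving left.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry gives 0, carry on
    ("carry", "0"): ("1", "L", "done"),    # 0 plus carry gives 1, stop carrying
    ("carry", "_"): ("1", "L", "done"),    # carry past the left end: new digit
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # 1011 is 11; prints 1100, which is 12
```

A universal Turing machine is the same idea pushed one step further: its rule table is written so that the tape can hold a description of any other machine's rule table, which it then follows step by step.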
Probabilistic algorithms allow a system to represent not only a range of different outcomes, but also the system's uncertainty about those outcomes. The system may represent not only that there's a tiger lurking around the corner, but also how uncertain it is about this. One of the great virtues of probabilistic algorithms, as we'll see later, is that they allow a system to learn from experience using a simple principle called Bayes' rule.

The idea of computing allows us to build intelligent machines, but it also suggests a new way of thinking about ourselves. If performing the right computation is the way to make a machine intelligent, perhaps that's also what makes us humans intelligent. Perhaps computation is not just a useful engineering tool, but also the key to explaining how the human brain works.

Let's unpack the idea that computation could help us to explain the human brain. In the 1970s, a brilliant young cognitive scientist called David Marr argued that computation could help us to answer three different questions about the brain. First, which task does the brain solve? Second, how does the brain solve that task? Third, why is that task important for the brain to solve? Marr organized these questions into three different levels of computational description. Let's look at each level of description in turn.

Rather confusingly, Marr called his first level of description the computational level. The computational level, for Marr, covers two things: first, which task does the brain solve, and second, why is that task important for the brain to solve? Let's use a thought experiment to understand the computational level better.

Imagine that one day you discover, in your granny's attic, a mysterious device. The device has buttons, dials, and levers, and you don't know what any of them does. However, you remember that granny used the device when she was balancing her checkbook. You play around with the device and you notice a pattern: if you dial two numbers into the device, it appears to display something that stands for their sum. Balancing a checkbook requires adding numbers, so it seems reasonable to think that the task granny's device solves is that of computing the addition function. In Marr's terms, this is a computational-level description of granny's device: a description of which mathematical function (addition, subtraction, multiplication) the device computes. Notice that in order to answer this 'which' question, we have to answer a 'why' question: why was granny using the device? Without some guess as to the device's intended purpose, in this case balancing a checkbook, we would have no way of picking out, from the vast number of physical things the device does, the ones that are relevant to solving its task. This is why Marr groups his 'which' and 'why' questions together: both fall under what he calls the computational-level description of the device.

Marr's second level of description is called the algorithmic level. The algorithmic level concerns how the device solves its task. There are many different algorithms that compute the addition function, and without further investigation all we know is that granny's device is using one of them. Different algorithms involve the device taking different steps, or taking its steps in different orders; some algorithms are faster than others, and some use less memory. How do we know which algorithm granny's device is using?
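To make the contrast concrete, here is a small, purely illustrative sketch in Python of two algorithms that compute the very same addition function while taking very different steps. The function names and the step counting are assumptions made for this example, not a claim about how any real adding machine works.

```python
# Two hypothetical ways granny's device might compute the very same addition
# function on non-negative whole numbers. Both give identical answers, but
# they take different steps and different numbers of steps.

def add_by_counting(a, b):
    """Repeatedly take the successor of a, b times: slow when b is large."""
    steps = 0
    total = a
    for _ in range(b):
        total += 1          # one tiny step: "add one"
        steps += 1
    return total, steps

def add_by_columns(a, b):
    """Grade-school column addition with carries: the number of steps grows
    with the number of digit columns, not with the size of the numbers."""
    steps = 0
    result, carry, place = 0, 0, 1
    while a > 0 or b > 0 or carry:
        digit_sum = (a % 10) + (b % 10) + carry    # add one column
        result += (digit_sum % 10) * place
        carry = digit_sum // 10
        a, b, place = a // 10, b // 10, place * 10
        steps += 1
    return result, steps

print(add_by_counting(238, 495))   # (733, 495 steps)
print(add_by_columns(238, 495))    # (733, 3 steps)
```

Both sketches return the same sums, so they share a computational-level description; they differ at the algorithmic level, in which steps they take and in how the number of steps grows with the inputs.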
A good start in answering that question is to try to find out the basic steps that granny's device can take, how long it takes to execute a single step, and how much memory it has. Once we know these basic building blocks, we can start to form hypotheses about which algorithm it's using. We can then probe granny's device by giving it lots of addition problems and looking at its performance profile: how fast it solves problems, and the errors it's prone to make. By comparing that performance profile against the profile each candidate algorithm predicts, we can test our hypotheses about which algorithm the device is using.

Marr's third level is called the implementation level. Even if we were sure which algorithm granny's device was using, we still wouldn't know how the physical parts inside it map onto steps in that algorithm. Imagine we open granny's device up. Inside, we might find all sorts of different things: little gears, cogwheels, pins, and hammers. An implementation-level description would describe the role each of these physical parts plays in implementing the device's algorithm. How do we go about finding this implementation-level description? One strategy would be to keep granny's device open and watch what changes inside it while it solves an addition problem; we could then try mapping those physical changes onto steps in its addition algorithm. Another strategy would be to intervene on the device: we might try rewiring or moving one of its components and seeing how that affects its performance. This would give us a clue as to the role that component plays in implementing the algorithm.

Marr thus provides us with three ways in which computation can help us to explain a mysterious device. We might offer a computational-level description of the device (which function does it compute, and why?), an algorithmic-level description (how does it compute that function?), or an implementation-level description (how do the physical components of the device map onto steps in its algorithm?).

The puzzles we face with granny's device are not a million miles away from the problems that cognitive scientists face when confronted by the human brain. Cognitive scientists want to know which computation the brain performs, which algorithm it uses for performing that computation, and which physical bits of the brain are relevant for implementing that computation. The techniques cognitive scientists use to answer these questions are also structurally similar. They try to understand the purpose of a particular piece of behavior: what does this piece of behavior achieve for the brain? They fit hypotheses about the algorithms the brain is running to data about human reaction times and susceptibility to error (a toy version of this strategy is sketched below). And they watch and intervene on the brain using a variety of experimental techniques, trying to isolate the role that various physical parts of the brain play in generating behavior.

The big difference between granny's device and the human brain is that brains are vastly more complex. The human brain is one of the most complex objects in the universe: it has roughly 100 billion neurons and a mind-bogglingly complex web of close to a quadrillion connections. The brain performs not one but many different computations simultaneously, each one a great deal more complex than the addition function. Unraveling a computational description of the brain is a daunting task, yet it's a project on which significant progress has already been made.
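Here is the toy sketch promised above: a purely illustrative comparison, in Python, of how well two algorithmic hypotheses fit a performance profile. The "observed" solve times are invented for the example, and the two hypotheses simply summarize the counting and column-addition strategies sketched earlier by how much work each predicts for a given problem.

```python
# A toy illustration of testing algorithmic hypotheses against a performance
# profile. The "observed" solve times below are invented; a real study would
# measure them.
from statistics import correlation  # requires Python 3.10+

problems = [(7, 3), (7, 30), (7, 300), (7, 3000)]   # pairs of numbers to add
observed_times = [0.9, 1.1, 1.0, 1.2]               # hypothetical seconds each

# Hypothesis 1 ("counting"): work grows with the size of the second number.
predicted_counting = [b for _, b in problems]
# Hypothesis 2 ("column addition"): work grows with the number of digit columns.
predicted_columns = [len(str(max(a, b))) for a, b in problems]

# Whichever prediction correlates better with the observed times is the
# better-supported hypothesis about which algorithm is being used.
print(correlation(observed_times, predicted_counting))
print(correlation(observed_times, predicted_columns))
```

Whichever hypothesis predicts the observed profile better is the one the evidence favors, and that is essentially the logic cognitive scientists apply when fitting algorithmic hypotheses to reaction-time and error data.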