The big question for this segment is, what is the relationship between statistics and problem solving? [MUSIC] If only problem solving were just a matter of applying logic and data and working out the answer. It rarely is, particularly if we're dealing with complex adaptive systems in which each component seems to have a will of its own. In most problem solving, there are large foggy areas that we know little about. This may be simply because of a lack of information, or it may be because we're dealing with systems so complex that it's impossible to track down causes through multiple networks with complex and unpredictable feedback loops.

Module Four focuses on those problems where we don't have enough information and know we don't. It's about how statistics works, why statistics are important, and also about why statistics can sometimes mislead us. How do you solve a problem if a lot of your information is seriously dodgy? Or if you're trying to predict what voters will do at the next election? Or what should you do if it's just too expensive to collect the necessary data? Do you just guess? Modern statistics provides a remarkable demonstration that we can do better than that, as a result of the careful application of elegant and sophisticated mathematical logic to the problem of uncertainty.

Statistics begins with the mathematics of probability. If you want to figure out whether a die is loaded, you could throw it a few times. If you get a six every time, you'll get suspicious. Why? Because that's not the sort of outcome you expect from a normal die. On the other hand, sometimes even a normal die will yield several sixes in a row. So how likely is that outcome? Statisticians can tell us with great precision whether a particular outcome is very unlikely given certain assumptions, and that may tell us to check our assumptions. Such as: I really thought this casino was honest, but having lost all my money in just one hour, I'm beginning to wonder.
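The loaded-die reasoning can be sketched in a few lines. This is a minimal illustration, not anything from the lecture itself: under the assumption that the die is fair, each roll is independent with a one-in-six chance of a six, so the probability of a whole run of sixes shrinks very quickly.

```python
# Probability that a fair six-sided die shows a six on every one of n rolls.
# Assumes fairness and independence: P(six) = 1/6 on each roll.

def prob_all_sixes(n: int) -> float:
    """P(n sixes in a row) for a fair die: (1/6) raised to the power n."""
    return (1 / 6) ** n

for n in (1, 2, 4, 8):
    p = prob_all_sixes(n)
    print(f"{n} six(es) in a row: probability {p:.8f} (about 1 in {round(1 / p):,})")
```

If the run you observe has a tiny probability under the fairness assumption, that is the cue the lecture describes: go back and check the assumption.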
The mathematics of statistics is elegant and beautiful, and it can lead us to powerful conclusions that we could not possibly reach without statistical methods. Pharmaceutical companies can use tests on small groups of people to estimate the likely outcome of using drugs on much larger groups. Statisticians can help them do so with great, but not total, precision. And that can help them weigh the risks and benefits of introducing a drug to the population as a whole.

But, and there always seems to be a but in problem solving, statistics are also slippery, and easy to misunderstand and misuse. They create a model world, a world based on certain assumptions, like the climate models of meteorologists. And there's always the possibility that the real world is slightly different, perhaps in ways that matter. Was the sample on which you tested your drug typical of the population as a whole? Could there be subtle long-term effects that won't show up in tests conducted over just two years?

In 1954, an American journalist, Darrell Huff, published a wonderful book called How to Lie with Statistics. It was so good that it was used for many years as a basic text, and it's still worth reading. It described many subtle and not-so-subtle statistical traps. For example, the word average should always ring alarm bells, because there are many different types of averages, and they can mean completely different things. If 90% of the population is poor but 10% is very rich, the mean income will tell you very little, because most people will earn a lot less than the mean. The median is better, because half the population earns more than the median and half earns less. Even better would be a graph showing the income of the lowest 10%, the next 10%, and so on. Best of all, of course, would be a list of the incomes of every member of society. But that's too much information. So we need measures of what statisticians call central tendency, but we need to use them carefully and thoughtfully.
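The "average" trap is easy to see with numbers. Here is a toy population, with all figures invented purely for illustration: nine people on a low income and one very rich person, matching the 90%/10% split in the lecture.

```python
# Huff-style illustration: in a skewed distribution, the mean is pulled far
# above what most people earn, while the median stays representative.
# All incomes below are invented for illustration only.

from statistics import mean, median

incomes = [20_000] * 9 + [1_000_000]   # 90% poor, 10% very rich

print(f"mean:   {mean(incomes):,.0f}")    # 118,000 — higher than 9 of the 10 incomes
print(f"median: {median(incomes):,.0f}")  # 20,000 — what a typical person earns
```

Both numbers are "the average", yet they tell completely different stories, which is exactly why the word should ring alarm bells.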
And the trouble is that most of us find it really hard to think statistically. Indeed, as the psychologist Daniel Kahneman has shown with great precision, most people find it very hard to think rationally even about quite simple problems. This is because our minds seem to have many problem-solving tricks, most of which are quick and dirty. We're usually too optimistic in our guesses. We overvalue information we've just learned. These shortcuts work pretty well a lot of the time, but they can also get us into trouble. If you've never had an accident running a red light, or been caught by a speed camera, does that mean you have a good chance of getting away with it one more time?

The fact that there are clear limits to human rationality poses a problem for one of the most important of modern problem-solving disciplines, economics. Economists make the simplifying assumption that most people are rational most of the time. That they take the time to calculate costs, to compare products systematically, and to choose what is best. But evidence has accumulated to show that most people are not this rational, and rarely have the time or information needed to make rational economic choices. If economists assume people are acting rationally when, in fact, most people are not, could this explain why economists so often fail to solve big problems, like how to avoid periodic financial crashes? If economic theory is guiding the policy of most governments, most financial institutions, and most companies, is that a problem? If so, how do we solve it? [MUSIC]
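The red-light question from this segment is really a question about independence, and a short sketch makes the point. The 5% chance of being caught used below is an invented figure for illustration: if each run is independent, a lucky streak does not change the risk of the next run, even though a long streak of luck was unlikely from the start.

```python
# Independence illustration for the red-light question.
# Assumes each run carries the same, independent chance p of being caught.
# The value of p is invented purely for illustration.

def prob_no_catch(p: float, n: int) -> float:
    """Probability of getting away with n independent runs: (1 - p)^n."""
    return (1 - p) ** n

p = 0.05
print(f"Chance the next run is safe:  {1 - p:.2f}")                 # same, whatever your history
print(f"Chance 20 runs are all safe:  {prob_no_catch(p, 20):.2f}")  # much lower than 0.95
```

Past escapes tell you nothing about the next trial; what they may hint at, as with the die, is whether your assumed p was right in the first place.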