Hi. My name's Andy Clark. I'm a professor of philosophy working at the University of Edinburgh in the School of Philosophy, Psychology and Language Sciences. My own particular interests lie in the philosophy of cognitive science, that's to say the sciences and study of the mind, and, within that area, in what's known as the study of embodied cognition. Embodied cognition is all about the huge differences that having an active body, and being situated in a stable environment, make to the kind of tasks that the brain has to perform in order to support adaptive success. This kind of work provides a useful antidote to the increasingly neurocentric vision that we encounter in contemporary media. For example, it often seems as if we've learned all we need to know about spatial navigation, or perhaps about falling in love, once we've learned which bits of the brain are doing what when we exhibit those kinds of capacities. In fact, though, most of the interesting capacities that biological agents exhibit turn upon a complex interplay between what the brain is doing, what the body is doing in bringing new percepts into the processing arena, and what we do out there in the world. Given that my interest lies squarely in embodied cognition in these ways, I'm very fortunate to be working here at the University of Edinburgh, where we have an internationally famous robotics program. Robotics, of course, is one of the key places where brain, body, and world come together. Within that program, a leading figure is my co-presenter in this particular session, Professor Barbara Webb.

>> Hi, I'm Barbara Webb, and I work in the Institute for Perception, Action and Behaviour. My group is interested particularly in minimal minds: in how much or, indeed, how little brain might be needed to make a robot do intelligent things, or to explain adaptive behavior in humans or other animals. And in fact, when we start to look at this from a robot perspective, we sometimes find that you don't need a brain at all. A classic example of this is the passive dynamic walking machines built at Cornell University.

>> This robot has no motors or controllers. Rather, its motion is begun by falling forward on this slight slope. This makes one of its legs swing forward by pendulum motion. The foot contacts the ground, the other leg swings forward, and so on. And as you can see, this produces a very natural walking gait, much more like a human walking gait than that of many well-known humanoid robots. There are other physical tricks that contribute to this smooth walking. Just like in humans, there's a knee structure that stops the leg bending forward but allows it to bend backwards, providing foot clearance during the swing. The feet themselves are curved to make each step smooth. And there is a counterbalancing swing of the arms.
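To make the "pendulum motion" point a little more concrete, here is a small back-of-the-envelope sketch in Python. It is not taken from the Cornell work itself: the uniform-rod model of the leg, the 0.9 m leg length, and the function name are illustrative assumptions. It simply estimates how fast a leg swings when nothing drives it but gravity.

```python
import math

# Swing time of an unpowered leg, modelled (very roughly) as a uniform rod
# pivoting freely at the hip. For a rod of length L swinging about one end,
# the small-angle period is T = 2*pi*sqrt(2L / (3g)); one step takes about
# half a period, since the leg only swings from behind the body to in front.
def passive_step_time(leg_length_m, g=9.81):
    period = 2 * math.pi * math.sqrt(2 * leg_length_m / (3 * g))
    return period / 2

leg = 0.9  # metres: an illustrative adult leg length
t = passive_step_time(leg)
print(f"step time ~ {t:.2f} s, cadence ~ {60 / t:.0f} steps/min")
# -> roughly 0.78 s per step, around 77 steps per minute: an unhurried
#    walking rhythm falls out of gravity and geometry alone.
```

The point this illustrates is the one made in the narration: a sensible step rhythm emerges from the physics of the leg itself, before any motor or controller is added.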
>> So understanding, and actually building, the physical system tells us more about how we walk than trying to introspectively think about how we move our legs, swing the foot forward, and so on. In fact, trying to think about how you walk tends to make you walk worse; you should try that for yourself. So the first message, really, is that taking the physical into account is always important. But on the other hand, obviously, there is something going on inside us when we walk. There's lots of unconscious sensory processing: picking up the visual cues, and the feedback from our feet about what we're walking over, so that we can handle difficult terrain, go up steps, avoid obstacles. But of course, there's also, inside of us, some intention. We're walking somewhere, for some reason. So are we then more than just machines?

>> As a philosopher working in this area, I feel it's my duty at this point just to pause and remind ourselves of how strange and interesting the mind and the mental really are. If you think about a volcano, when it erupts it just does what it does. It's just part of the physical order, behaving in a way that doesn't seem to require us to think about what it believes, what it hopes, what it desires, not even what it perceives. When the volcano erupts and interrupts my family vacation, I don't think that the volcano planned to erupt on that day, believed that I was starting the vacation that day, or desired to interrupt my vacation. On the other hand, when I go to the fridge to get a beer, you may watch my behavior and say: Andy believed there was a beer in the fridge, desired to drink the beer, and so went to the fridge. Similarly, when I observe the behavior of my cat chasing a mouse, I think: hey, my cat perceived the location of the mouse, in some sense desired to catch the mouse, and then engaged in a very successful piece of predatory behavior. So even at those low levels, it seems as if, to understand perception and action in the natural order, we need to understand how creatures can exhibit seemingly sensible bits of behavior on the basis of information that they take in from the world. In that sense they seem to be fundamentally unlike volcanoes, and therefore it's not quite so clear how many unique tricks we need to understand in order to understand how that kind of behavior can be produced.

>> Indeed, most humans, animals, and robots do use perception of the world to perform their complex tasks. But we should resist the urge to assume that this means something really complicated is going on inside the animal, that it has to make an explicit plan about what's out there in the world and what it needs to do. Very often we find that embodiment can solve much of the problem and simplify what the mind is left to solve. So we already have the example that, when walking to the fridge, Andy doesn't have to remember anything about how to swing his legs back and forth or how to get through doorways; that's done mostly by his physical system. And as for the cat, its perceptual system is physically tuned to look for fast, small moving objects, and if you try with a laser pointer you'll find cats find it more attractive than a mouse, even though it's not going to be any good to eat in the end. Another example that we've been looking at in our own research is the behavior of desert ants, which don't follow chemical trails but can nevertheless navigate through complex environments.

>> These ants forage in very hot climates and need to move rapidly. They use a combination of navigation strategies that include dead reckoning and visual memory. In this example, the ant is using its previous experience of returning along this route, through the complex vegetation, to find its way back to its nest, which is just a small hole in the ground that could be tens of meters away. Up till now, it has been assumed that ants and other creatures would have to recall each of the positions along a route to decide where to go next. But in fact, the behavior can be mimicked by a robot that never knows where it is. Instead, it simply alters the size of its zig-zag in proportion to how unfamiliar its current heading appears.
In other words, if things look familiar, it is probably heading the right way and can continue straight on. If they don't look familiar, it should swing around until it sees some view that it saw before, and head that way. In fact, it doesn't even distinguish individual objects as objects, but just processes its low-resolution vision as a direct holistic memory, simply classifying everything it sees at the moment as "yes, I've seen this before" or "no, this is unfamiliar". Note that this whole method exploits the fact that ants walk forwards, so the direction in which the ant views a scene, and stores that scene in memory, is the same direction that it needs to take to repeat the route.

>> Actually, the ant has a number of other adaptations that allow it to find its way efficiently. For example, the top part of its eye is sensitive to polarized light, and this allows it to detect the polarization pattern in the skylight. From this it can deduce the position of the sun, even when the sun is not visible, if it's hidden by clouds for example. And this gives it a very reliable compass cue, so that it can go in the direction that it has recalled and learnt. Biology is full of examples like this, where the physical structure is designed to solve the problem for us, and we can use those things in robotics as well.
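For readers who want to see the shape of the familiarity idea in code, here is a minimal sketch in Python. It is a toy illustration rather than the actual ant or robot controller: the class name, the mean-squared comparison against stored snapshots, and the turn_gain parameter are illustrative assumptions, standing in for whatever learned familiarity measure the real system uses.

```python
import numpy as np

class FamiliarityNavigator:
    """Route guidance by view familiarity alone.

    During a training run, low-resolution views are stored as they are seen.
    When retracing the route, the agent never estimates where it is; it only
    asks how familiar the current view looks and scales its zig-zag turns
    by that unfamiliarity.
    """

    def __init__(self, turn_gain=1.0):
        self.memory = []        # stored low-resolution training views
        self.turn_gain = turn_gain
        self.turn_sign = 1      # alternates to produce the zig-zag

    def store_view(self, view):
        # Training: remember the raw snapshot; no objects are identified.
        self.memory.append(np.asarray(view, dtype=float))

    def unfamiliarity(self, view):
        # Holistic comparison: distance to the best-matching stored view.
        view = np.asarray(view, dtype=float)
        return min(np.mean((view - m) ** 2) for m in self.memory)

    def steering(self, view):
        # Familiar view -> small turn (keep heading); unfamiliar view ->
        # large swing, alternating left and right until something matches.
        self.turn_sign = -self.turn_sign
        return self.turn_sign * self.turn_gain * self.unfamiliarity(view)


# Toy usage: the "views" here are just short 1-D brightness arrays.
nav = FamiliarityNavigator(turn_gain=0.5)
for v in [np.array([0.1, 0.9, 0.3]), np.array([0.2, 0.8, 0.4])]:
    nav.store_view(v)

print(nav.steering(np.array([0.15, 0.85, 0.35])))   # small turn: looks familiar
print(nav.steering(np.array([0.9, 0.1, 0.9])))      # large swing: unfamiliar
```

The design point it preserves is the one made above: the agent never represents where it is, or what objects surround it; a single scalar, how unfamiliar the current view looks, is enough to modulate the zig-zag and keep it on the route.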