Making Robots Walk

This New York Times article on Creepy-Crawly Robotics highlights the ascendancy of “biologically inspired” design ideas in robotics. Some sense of the impact of this idea can be seen in the following sequence of four videos about a challenging mobility problem: bipedal walking. It is very hard to get a two-legged robot to walk about a room, much less step up a curb or down a flight of stairs, without it landing in a heap. Compare this mechanical mummy-walk (exhibit 1) to Martijn Wisse’s elegant design study at Cornell (exhibit 2), based on an old toy patent from 1912, for instance. It is simply gravity and mechanical design that propel Wisse’s design down the ramp; see those same principles applied here to a level-ground walker (exhibit 3) from Jim Collins’s laboratory. Finally, and most recently, this cool dinosaur (exhibit 4) from Gill Pratt’s MIT Leg Lab is simply stunning.

There are two different ideas at play here. The first, which is the approach behind the mechanical mummy clip and older robotic technologies like the arm on the space shuttle, is to view walking (or grabbing stuff in space) as foremost a control problem. Say you’ve got a leg with a hip, knee, and ankle, which you construct from a series of joints that can each be moved to n different positions. This means that the foot attached to the end of your ankle can be in one of n³ different positions with respect to the torso it is attached to. Standing, walking, and running all live in this space, the thinking runs; and if it doesn’t come off in practice, then simply increase the flexibility of your joints by increasing the size of n. The problem is that increasing the size of n increases the complexity of the control problem: there are more options to choose from, which means more processing to be done, and to be done quickly, which translates to more energy and resources required to “think” out each move. This is a theoretician’s approach to robotics problems, and it is often undone by practice. The insight illustrated by these videos is that getting a thing to walk, rather than proving that it should be able to, requires reducing the complexity of the control problem. And this serious constraint on resources changes the nature of the problem.
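
To make the combinatorics concrete, here is a small Python sketch, my own illustration rather than anything taken from the videos, that discretizes a planar hip-knee-ankle leg into n settings per joint; the joint limits and link lengths are invented values. It enumerates the n³ joint configurations a brute-force controller would have to search and counts how many distinct foot positions they actually produce.

    import itertools
    import math

    def leg_configurations(n, joint_limits):
        """Enumerate joint-angle combinations for a discretized 3-joint leg.

        Each joint gets n evenly spaced settings within its (invented) limits,
        so the configuration space has n**3 points.
        """
        settings = [
            [lo + i * (hi - lo) / (n - 1) for i in range(n)]
            for (lo, hi) in joint_limits
        ]
        return itertools.product(*settings)  # hip x knee x ankle

    def foot_position(hip, knee, ankle, thigh=0.45, shank=0.45, foot=0.10):
        """Planar forward kinematics: where the toe ends up relative to the hip."""
        a1 = hip
        a2 = hip + knee
        a3 = hip + knee + ankle
        x = thigh * math.sin(a1) + shank * math.sin(a2) + foot * math.sin(a3)
        y = -(thigh * math.cos(a1) + shank * math.cos(a2) + foot * math.cos(a3))
        return (x, y)

    # Invented joint limits in radians: hip, knee, ankle.
    limits = [(-0.8, 0.8), (0.0, 1.5), (-0.5, 0.5)]

    for n in (5, 10, 20, 40):
        feet = {
            tuple(round(c, 2) for c in foot_position(h, k, a))
            for (h, k, a) in leg_configurations(n, limits)
        }
        print(f"n = {n:3d}: {n**3:6d} leg configurations, "
              f"~{len(feet):5d} distinct foot positions (rounded to 1 cm)")

The point of the printout is only that the space the controller has to reason over grows as n³, while the set of genuinely distinct foot placements levels off once the workspace is covered; the exact figures depend entirely on the invented limits.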

The biologically inspired approach to robotics in this case looks for clever ways to cut down the space of possibilities by studying how biology pulls off similar tricks, rather than by following mechanical or artificial constraints. It should be stressed, too, that one might also radically change scale and look to micro-organisms for inspiration, like this very cool NASA mobility concept study.

In the case of bipedal walking, the interesting insight behind this selection of videos is the benefit of viewing the walking problem as involving more than simply how to orient a pair of three-jointed limbs: walking is instead an activity of a dynamical system in which the control of the moving parts is restricted (and, in part, solved) in virtue of their role within the system. Walking involves swaying hips and moving arms. It is more dance than determination. In the dinosaur example, there are a lot of joints (it is a 16-degree-of-freedom system!), but the movement of these joints is restricted both mechanically and by software to model the skeletal and muscular structure of a dinosaur. On top of this, it’s got a long tail and a heavy head stretched out front to back to help stabilize it, like a tightrope-walker’s pole.
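
As a toy illustration of how this kind of restriction shrinks the control problem, here is a short Python sketch. It is emphatically not the Leg Lab’s actual controller: the sixteen joint names and the coupling numbers are invented. What it shows is that once every joint is slaved to a single gait phase by a fixed amplitude, offset, and bias, the controller only has to advance one scalar per time step instead of choosing sixteen joint positions independently.

    import math

    # A nominally 16-degree-of-freedom "body": one entry per joint.
    # (The names and numbers are invented for illustration only.)
    JOINTS = [
        "neck", "tail_1", "tail_2", "tail_3",
        "l_hip", "l_knee", "l_ankle", "l_toe",
        "r_hip", "r_knee", "r_ankle", "r_toe",
        "spine_1", "spine_2", "l_arm", "r_arm",
    ]

    # Each joint is tied to one shared gait phase by a fixed
    # (amplitude, phase offset, bias) triple: the shape of the gait.
    GAIT = {name: (0.3, i * math.pi / 8, 0.0) for i, name in enumerate(JOINTS)}
    GAIT["l_hip"] = (0.5, 0.0, 0.0)
    GAIT["r_hip"] = (0.5, math.pi, 0.0)       # right leg swings opposite the left
    GAIT["tail_3"] = (0.2, math.pi / 2, 0.1)  # tail counter-sways, like the pole

    def joint_angles(phase):
        """Map one scalar (the gait phase) to all sixteen joint angles."""
        return {
            name: bias + amp * math.sin(phase + offset)
            for name, (amp, offset, bias) in GAIT.items()
        }

    # One stride: a 16-DOF pose driven, at each instant, by a single number.
    for step in range(4):
        phase = 2 * math.pi * step / 4
        pose = joint_angles(phase)
        print(f"phase {phase:4.2f}: l_hip={pose['l_hip']:+.2f} "
              f"r_hip={pose['r_hip']:+.2f} tail_3={pose['tail_3']:+.2f}")

In effect the coupling pattern plays the role that the mechanical linkages play in Wisse’s ramp walker: most of the coordination is fixed in advance, and what is left over for control is low-dimensional.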

Philosophy is a theoretician’s game, but there are branches that purport to address practical matters, such as decision making and reasoning. There are arguments for resource-bounded decision making and rationality that are perfectly analogous to the illustrated story I’ve told here about robot mobility, but my sense is that these arguments have yet to be fully engaged. Perhaps movie clips of mechanical mummies and robotic dinosaurs will soften some readers to the general idea behind these arguments.


Comments

  1. Nice correction, Kenny, thanks. A clarification: the foot itself is only in one of n³ positions, but the leg can be placed in 3ⁿ configurations, which is the problem. The leg is what has to be positioned to place the foot in one of these n³ positions. And foot placement is not independent of how the mass above it is distributed. (We fall over when we act otherwise.)

  2. Gregory,

    Your post reminds me of Dennett’s remarks about traditional philosophical methodology in “Cognitive Wheels: The Frame Problem of AI”:

    Hume, like virtually all other philosophers and “mentalistic” psychologists, was unable to see the frame problem because he operated at what I call a purely semantic level, or a phenomenological level. At the phenomenological level, all the items in view are individuated by their meanings. Their meanings are, if you like, “given”—but this just means that the theorist helps himself to all the meanings he wants. In this way the semantic relation between one item and the next is typically plain to see, and one just assumes that the items behave as items with those meanings ought to behave.

    ….
    That is the mechanical question the philosophers left to some dimly imagined future researcher. Such a division of labor might have been all right, but it is turning out that most of the truly difficult and deep puzzles of learning and intelligence get kicked downstairs by this move. It is rather as if philosophers were to proclaim themselves expert explainers of the methods of a stage magician, and then, when we ask them to explain how the magician does the sawing-the-lady-in-half trick, they explain that it is really quite obvious: the magician doesn’t really saw her in half; he simply makes it appear that he does. “But how does he do that?” we ask. “Not our department,” say the philosophers—and some of them add, sonorously: “Explanation has to stop somewhere.”

    Personally, I have had a difficult time convincing certain epistemologists of the relevance of relatively primitive and abstract computability constraints, let alone computational complexity and other more basic implementation-level concerns. It is my hope that the recent (seeming?) popularity of formal epistemology will catalyze dialogue between epistemologists and AI/machine learning researchers.

  3. Thanks for this, John. I think the problem of bridging traditional epistemology and formal epistemology is difficult for intellectual as well as sociological reasons.

    From the beginning, AI has been more receptive to philosophy than philosophy to AI, in part because of the seminal paper by Pat Hayes and John McCarthy, “Some Philosophical Problems from the Standpoint of Artificial Intelligence” (1969). In that paper, which lays out the situation calculus and introduces the frame problem mentioned in the Dennett quote above, they argue that AI needs philosophy.

    Philosophy hasn’t had a similar argument that it needs AI, however. It has been admonished in the past to pay attention to the computational sciences, and to pay attention to cognitive psychology, but those scoldings have been made to advance particular theses about the mind, about language, about justification. In most cases the science was used as a bludgeon. Naturalists and armchairs, recall.

    What’s new about formal epistemology and, in my view, what needs to be made more explicit, is the observation that several problems that appear in various epistemological theories have analogues within the computational sciences. Formal epistemology is a field rather than a class of theories (cf. reliabilism) or a school (cf. Bayesianism), because it represents an attempt to marshal a treasure trove of methods to explicate epistemological notions and relations. As you noted, there are learning problems that have the very same structure that is described by various epistemic theories. Sometimes, sometimes, there are genuine and deep philosophical insights that come from looking at the computational analogue.
