Introduction

There have been many recent advances in robotics, but robots are still not widely used in natural environments. For small, legged robots to become truly useful in applications such as urban search and rescue, planetary exploration, and de-mining, they will need to traverse rough terrain under conditions in which a human operator cannot observe the entire scene and guide them carefully through it. For practical reasons, it should also be possible for teams of such robots to communicate with each other so that the experience and learning of one can be disseminated to others and the group's collective experience enriched. In this way a flock of robots will become gradually better at traversing various classes of rough terrain without direct human control.

To achieve this goal, advances must be made primarily in combining perception of the environment with robust locomotion strategies. Currently, robots can successfully plan paths or execute difficult climbing maneuvers in simulation, where the environment is completely known. Conversely, the best biologically-inspired robots can navigate outdoor, real-world terrain, but only under the tight control of a skilled operator. In many cases the robots that perform best in real-world outdoor environments do so solely by exploiting principles from biology, such as compliance and clever actuation schemes, but they are not capable of the complicated maneuvers demonstrated by robots navigating obstacles in simulation or in completely known environments. Consequently, these robots cannot negotiate large obstacles or extremely challenging terrain.

We propose to combine biologically-inspired design principles with advances in sensing and perception to create a robot that is able to autonomously navigate difficult real-world environments. We will focus on learning terrain properties and corresponding locomotion tactics that will enable a compliant legged robot platform to function in the real world, performing robustly without an involved human operator. These environments are nothing that an ordinary squirrel, lizard, insect, or other small animal could not traverse quite easily; however, the state of robotics today is such that this task remains a significant challenge for man-made creations. Furthermore, we propose having the robot learn and improve its behavior with real-world experience, since learning in simulation differs significantly from the complexity of the environments it will encounter during expected operation.

We imagine the following scenario taking place at the completion of this project: A small quadrupedal robot trots up a dirt trail, following a basic sequence of waypoints and adjusting automatically for minor variations in terrain. When it encounters a section of the trail covered with medium-size rocks, it picks its way carefully through them. It detects the shape of the rocks, and that they are dusty and therefore likely to be slippery. It decides that it must place its feet on a few flat regions and brace itself against an opposing sloped face, both to avoid slipping and to maneuver over the rocks. Further along the trail, it finds a medium-size branch fallen across the path. It places a foot on top of the branch to climb over it, but the branch suddenly shifts under the robot's weight, causing the robot to tumble sideways. The robot rights itself, checks that there is no damage, and attempts to cross the branch again. This time it knows that thin horizontal objects slightly above the ground may not be able to sustain weight, and moreover that this particular obstacle certainly cannot be stepped on! Instead of stepping on the branch again, the robot steps carefully over it (or goes around it, or anticipates that it will move) and continues along the trail.

Ideas to add:

  • Robustness at multiple levels: the basic robust bio-inspired body can do fairly well without sensing, but with sensing it will be able to climb much harder obstacles. Local sensing and vision form layers that build on top of each other with increasing complexity.
  • The robot will attempt different approaches: robustness in strategies. It will explore and learn motion-planning strategies, so it does not have to rely on having seen a terrain before; it will still behave intelligently.
  • The robot does not necessarily need exact foot-plant locations; it can feel its way around, which also serves as a robust backup strategy. Putting vision on top of a feeling-around strategy avoids duplicated searching and enables better path planning.
  • In the mission scenario, add something about going through grass or other visually difficult terrain.

Elaboration on Introduction summary, including current state-of-the-art research:

The following is a review of current state-of-the-art work in the areas of biologically-inspired robots, terrain navigation, perception, and planning, showing that the technology is well developed in each of these areas individually but that there is little (perhaps no) work combining them.

Current bio-inspired robot platforms are physically robust and relatively simple to control at a low level by virtue of their “preflexes” (a term taken from the biology literature to describe the stabilizing effects of their kinematic configuration and passive stiffness and damping) [cite Loeb?]. Bio-inspiration is found in their basic morphology (overall kinematic design) and in the use of local variations in stiffness and damping to help reject disturbances due to minor variations in terrain, foot slippage, etc. Bio-inspiration is also seen in the use of under-actuated limbs and damage-tolerant materials that can “bend without breaking” [cite Voelb? see 1st Sprawlita IJRR paper for ref.].

These platforms have so far been able to locomote over challenging vertical surfaces (brick and stucco walls) as well as horizontal surfaces and hill slopes with small rocks. Small gaps, ridges, etc. can also be negotiated. They are now poised to tackle more difficult terrain than small legged robots have traversed in the past, and indeed they can (barely) do so under the control of a skilled human operator. A human is needed for gait selection, for choosing a path—even on highly structured surfaces such as brick or stucco building walls—and for ensuring that the robot does not get into situations in which it could tip over or become stuck.

Furthermore, current climbing robots suffer from a limited limb workspace, being specialized for relatively flat walls and floors. RHex, RiSE, Whegs, and other bio-inspired robots (REFERENCES!!) can only cross obstacles up to about one leg height. Their underactuated legs make control easier but prevent them from crossing piles of rubble or similarly difficult terrain where exact foot placement is required. Redesigning the platforms for greater limb mobility will allow them to work better on rough terrain; however, it does not address the more critical problem of making them more autonomous.

A few (non-bio-inspired?) robots do successfully operate in actual outdoor environments, but so far they have used only proprioceptive and inertial sensing and require the supervision of a human operator. \cite{Gassman.01} describe a large legged robot (LAURON) that can climb steeply sloped mountainsides, although it appears simply to plant its feet rather than plan foot placements, and (REFERENCE: BIG DOG) have a quadruped that can walk over rock beds, slopes, and other relatively uniform terrain.

There are several issues involved in making a robot autonomous. The robot needs to be able to sense large obstacles that it cannot get over by any means, so that it can choose a path around them. For large but traversable obstacles, the robot needs to detect the geometry of the world and its surface properties. This will enable the robot to perform exact foot-placement maneuvers and short-range path planning. In real-world environments, robust operation requires more than the open-loop or regular-gaited control strategies used in many bio-inspired robots; in some situations the robot will have to pick its way across the terrain, requiring careful foot placement and planning. Finally, the robot needs to be able to right itself if something goes wrong and it flips over; in general, it must avoid any situation that requires human intervention. [Aside: this last point really relates to the mechanical design of the robot more than to the sensing/control part.]

The sensing required to accomplish these tasks is not currently used on legged robots. They currently know nothing about the environment apart from force sensing at the feet, inertial sensing, and proprioceptive sensing (joint encoders). This sensory input is currently used for force distribution and for simple “reflex”-type behaviors, for example “pawing” to try to attach a foot that is having difficulty supporting a load (Cite: RiSE and Stickybot) (Sal: Another reflex is the local searching from Pearson 1984). Other researchers have implemented further reflexes using such sensors. In contrast, a great deal has been done for autonomous wheeled vehicles in terms of terrain characterization, path planning, etc. [Thrun, Whittaker, Dubowsky, etc.], owing to the reduced dimensionality of a vehicle's configuration space.
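
To make the reflex idea concrete, the following is a minimal sketch, in Python, of how a pawing reflex might be expressed in software. It is an illustrative reconstruction, not the actual RiSE or Stickybot controller; the threshold, retry count, and callback names are hypothetical placeholders.

<verbatim>
# Hypothetical sketch of a "pawing" reflex triggered by foot-force sensing.
LOAD_THRESHOLD_N = 2.0   # assumed minimum normal force for a secure foothold
MAX_PAW_ATTEMPTS = 3     # assumed number of retries before giving up


def pawing_reflex(read_foot_force, retract_foot, replace_foot):
    """Retry foot placement while the sensed normal force is too low.

    read_foot_force: callable returning the normal force (N) at the foot.
    retract_foot / replace_foot: callables commanding a small lift and a
    re-placement at a slightly perturbed location.
    """
    for attempt in range(MAX_PAW_ATTEMPTS):
        if read_foot_force() >= LOAD_THRESHOLD_N:
            return True           # the foothold can support the load
        retract_foot()            # lift the foot slightly off the surface
        replace_foot()            # probe a nearby spot ("pawing")
    return False                  # escalate to a higher-level behavior
</verbatim>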

Yirong: Paragraph about state of the art perception systems. What sort of 3-D geometry can be detected currently using stereo cameras, laser range finders, etc? Can the robot distinguish between solid ground and plants that will not support any weight? Can the robot tell anything about the texture of the surfaces, like if they're slippery or not?

Yirong: I think I've fixed this one, but if there's anything else you think needs to be added, go ahead..

While legged robots operating in real-world environments do not perform any sort of path planning, there has been much work on path planning and locomotion in simulation and in "constructed" environments.

A number of robots have performed effective motion planning in simulated environments. \cite{Hauser.05, Hauser.06} have simulated humanoid and hexapod robots climbing over steeply inclined and rugged environments, including climbing ladders, while \cite{Bai.02} has simulated a quadruped navigating around small obstacles, using a voxel-based description of the terrain to determine which areas are better to step on. There have also been a number of examples in which motion planning has been done in a simulation environment and a robot has then been made to move through an environment set up to match the simulation precisely. \cite{Bretl.06, Bretl.03, Kuffner.02, Kim.05, Lee.06} variously make a robot climb an indoor climbing wall with discrete handholds; have a biped reach around obstacles to retrieve objects; enable a quadruped to climb around convex and concave surfaces, such as from floors to walls; and climb over tall blocks in various orientations. The DARPA LittleDog project (REFERENCES!!) is similar in that near-perfect state estimation of the robot is available and the terrain is completely characterized.

Several robots have used internal sensors to walk over slightly irregular surfaces in laboratory environments: \cite{Fukuoka.03} uses inertial and force sensing to walk stably over somewhat irregular terrain, while \cite{Estremera.05} corrects for body roll and adapts to gently sloping surfaces using only joint-angle sensing. \cite{Estremera.05} can also avoid "forbidden regions" on the ground, the locations of which are pre-programmed into the robot.

A few robots use simple sensors to gain information about the terrain. \cite{Kajita.97} enable a biped robot to walk over sloped terrain by using a sensor that detects the terrain height in a small region in front of the robot, allowing the robot to estimate the terrain geometry. \cite{Lewis.05} use a video camera to detect black and white patches on a flat laboratory surface that indicate regions where their planar biped robot is and is not allowed to step (e.g., "slippery" patches).

ARE THERE ANY OTHER LEGGED ROBOTS THAT USE TEXTURE IN PLANNING?

Yirong: possible paragraph about long-range path planning. Work on this last also.

There has also been much work on long-range path planning … [paragraph about long-range path-planning.. is this necessary even?]

In conclusion, we propose to incorporate sensing into a robust bio-inspired robot so that it can operate autonomously across difficult environments.

Approach

The approach involves three main components:

1. Build a lizard-inspired quadrupedal robot adapted from our previous bio-inspired legged robots. Like its predecessors, it will be physically robust and relatively simple to control by virtue of its passive mechanical behavior. Even in the absence of sensory feedback, this machine will be capable of stable locomotion over terrain with small obstacles. For difficult terrain of the type we will focus on, sensory feedback will be essential.

2. We will make advances in sensing that allow the robot to perceive its environment, including the terrain and obstacles, well enough to choose a short-length-scale path across the terrain. The robot will perceive not only the geometric characteristics of the environment but also other features, such as texture and slipperiness, that determine whether it can find a secure foothold.

3. We will make advances in control that allow the robot to operate without constant human supervision. Much of this will build on current work in Andrew Ng's lab, though it may need to be modified to work with the incomplete knowledge of the environment provided by the robot's sensors.

The parts of the proposal that are weakest at present are:

1. How we will sense and characterize obstacles in a way that is useful to a robot that has to navigate among them. What data will we gather? How will we store and organize it? What features will we define? How will they be represented? What structure will we impose on the information? What value functions will we define? How will they use the features?

2. How we will get the robot to learn useful tactics (I am using the word “tactic” as typically contrasted with “strategy,” which is more long-term) for traversing such terrain.

Methods – have this in three sections to correspond to the Approach sections? For now this is copied from the other TWiki page..

Terrain representation and characterization for planning and learning

We identify three length scales in the traversal and characterization of rough terrain:

  • Long scale (several to many body-lengths)
Long-scale characterization and planning involves distances of more than a few body-lengths ahead. At this range, the characterization of the terrain is relatively coarse. Many local geometric regions and features may be occluded, and the physical properties (e.g., slipperiness) of surfaces may be unknown. At this scale, path planning involves a large search space, which is mitigated by the possibility of reducing the dimensionality so that only the position, and perhaps a gross estimate of body orientation, is needed. Path planning at this scale with legged robots is not very different from path planning for wheeled vehicles on rough terrain, and we can adapt existing methods (see the sketch after this list). In the near term, we can avoid this issue altogether by allowing a human operator to establish the overall path.

  • Intermediate scale (1-3 body-lengths)
At the intermediate scale there is more information available than at greater distances, and the amount of information increases rapidly as the robot starts to contact parts of the surface. At this length scale one is less concerned with the details of motion control (e.g., actuator efforts, impedances, accelerations) than at the immediate scale, but one does need to consider the configuration of the robot, which includes not only the overall position and orientation but also the possible placements of the feet with respect to the body.

  • Immediate scale (<=1 body length)
At this scale we look for stances that provide stability with respect to tip-over and slippage and that also provide freedom of movement to proceed to the next stance.
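
As a concrete illustration of the long-scale case described in the first item above, the following sketch reduces the robot to a 2-D position and plans over a coarse traversability-cost grid. It is ordinary A* search of the kind used for wheeled vehicles, not a method specific to this proposal, and the grid and cost values are hypothetical.

<verbatim>
# Long-scale planning sketch: A* over a coarse 2-D traversability grid.
import heapq

def plan_path(cost, start, goal):
    """cost[r][c] >= 1 is traversal difficulty; None marks an impassable
    cell (e.g., a large obstacle to be circumnavigated)."""
    rows, cols = len(cost), len(cost[0])
    best = {start: 0}
    frontier = [(0, 0, start, [start])]  # (f, g, cell, path)
    while frontier:
        f, g, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if g > best.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                ng = g + cost[nr][nc]
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # admissible: cost >= 1
                    heapq.heappush(frontier, (ng + h, ng, (nr, nc),
                                              path + [(nr, nc)]))
    return None  # no route: the region is impassable at this scale

# Hypothetical 3x4 patch: one blocked cell (None), one high-cost rubble cell.
grid = [[1, 1, 1, 1],
        [1, None, 5, 1],
        [1, 1, 1, 1]]
print(plan_path(grid, (0, 0), (2, 3)))
</verbatim>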

Platform and sensor development

The platform that we will use for terrain navigation is a quadrupedal robot, designed for traversing difficult terrain.

For initial experiments, and for subsequent comparison with the platform we will construct, we will use the existing RiSE platform, reconfigured as a quadruped (see Figure [QuadraRiSE]). The RiSE platform in hexapedal form has already demonstrated impressive climbing capabilities on a variety of vertical surfaces [cite SPIE, ICRA06]. It is also capable of transitions between horizontal and vertical surfaces and of climbing irregular curved surfaces such as tree trunks. However, its climbing demonstrations have always taken place with a skilled human operator specifying the overall trajectory and gait selection. In addition, most of the climbing has taken place on surfaces such as stucco or brick buildings, which are challenging but quite structured. Indeed, the hexapedal version of RiSE probably does not have a large enough configuration space for its feet to do well on rough and irregular terrain. Nonetheless, the RiSE platform is a capable machine for many experiments. It has a powerful on-board processor, three degrees of freedom of force sensing at each foot, and an easily extensible programming and communications environment. It can also easily support the weight of a camera system and additional sensors. For our experiments we will convert the RiSE platform to a quadruped, which climbs less well on vertical surfaces but has more room for the feet to maneuver without interfering.

In parallel with conducting experiments on the RiSE platform, we will develop a new quadruped specialized for climbing over obstacles and negotiating rough terrain. This machine will utilize much of the technology behind the gecko-inspired Stickybot platform [cite Stickybot JEB]. Like Stickybot and the earlier Sprawl robots [cite IJRR] (see Fig. Stickybot+Sprawl), it will be fabricated using Shape Deposition Manufacturing [cite SDMref] from heterogeneous multi-material components with embedded sensors and other discrete components such as bearings. Like Stickybot, it will utilize embedded carbon fiber fabric where high stiffness and strength are required and soft grades of urethane polymer where compliance and structural damping are desired; and like these earlier robots it will be robust with respect to accidental falls. We also expect to reuse the basic Stickybot gait controller and Cartesian stiffness control scheme for regulating external and internal "grasp" forces between pairs of feet.
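
To illustrate the kind of control law we intend to reuse, the following is a minimal sketch of a generic Cartesian stiffness controller with an internal grasp-force bias between opposing feet. The gains and force values are hypothetical placeholders; the actual Stickybot controller is described in the cited work.

<verbatim>
# Generic Cartesian stiffness law for one foot (illustrative only).
import numpy as np

K = np.diag([300.0, 300.0, 150.0])  # hypothetical stiffness (N/m) per axis
D = np.diag([10.0, 10.0, 5.0])      # hypothetical damping (N*s/m) per axis

def foot_force_command(x_des, x, v, f_internal):
    """Spring-damper law about the desired foot position, plus an
    internal-force bias used to squeeze opposing feet toward each other."""
    return K @ (x_des - x) - D @ v + f_internal

# Opposing feet share an equal and opposite internal "grasp" force, so the
# net wrench on the body is unchanged while the contact forces increase.
grasp_axis = np.array([1.0, 0.0, 0.0])
f_grasp = 4.0 * grasp_axis  # hypothetical squeeze force (N)
f_left = foot_force_command(np.zeros(3), np.zeros(3), np.zeros(3), +f_grasp)
f_right = foot_force_command(np.zeros(3), np.zeros(3), np.zeros(3), -f_grasp)
</verbatim>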

However, in contrast to Stickybot, the new platform will not be specialized for climbing smooth vertical surfaces such as glass and will not employ adhesive feet with hyperextending toes. The new platform will have more ground clearance than Stickybot, and its limbs will have a larger workspace in the vertical plane. The wireless communication will also be upgraded to allow higher-bandwidth transmission of sensor data and commands between the robot and a host computer, and the payload capability will be increased to carry a camera and additional sensors on board. Unlike the much heavier RiSE platform, the Stickybot platform has modest processing power on board. Consequently, all learning, motion planning, and vision interpretation would have to be performed off-board on a nearby host computer. We will determine early in the project whether to continue this approach or whether a suitable low-power processor can be found that allows more computation on board.

Be sure this includes:

  • Principles of bio-inspiration that make the robot reliable and its control easier;

  • Advancement in perception

  • That the robot will not be fixed but malleable, as we determine additional sensing tasks or robot morphologies that would be better suited to the task

Terrain sensation learning

(Sal: The ground can be sensed through the force/position profile the robot experiences as the foot contacts and leaves the ground. For instance, hard terrain such as asphalt, dirt trails, or indoor floors will produce a large force impulse and no penetration past the contact point. Softer terrain such as grass and carpet will produce a smaller force impulse. Deformable terrain, like gravel and sand, will show a particular penetration/force profile that might be different for each material. This can be combined with the vision system so that the learning algorithm can associate the various visual stimuli with the different terrains.)
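
As a minimal sketch of how such contact profiles might be turned into terrain labels (which the learning algorithm could then associate with visual stimuli), the code below extracts the features suggested in the note (peak force, impulse, and penetration) and classifies with a nearest-centroid rule. The sampling rate, feature values, and class names are hypothetical, not measured data.

<verbatim>
# Terrain classification from a foot-contact force/position profile.
import numpy as np

def contact_features(force_trace, depth_trace):
    """Reduce one contact event to (peak force, impulse, final penetration)."""
    dt = 0.001  # assumed 1 kHz sampling of the foot force sensor
    return np.array([
        np.max(force_trace),       # peak normal force (N): large on asphalt
        np.sum(force_trace) * dt,  # impulse over the stance (N*s)
        depth_trace[-1],           # penetration past contact (m): sand, gravel
    ])

class NearestCentroidTerrain:
    """Label terrain by distance to per-class mean feature vectors; each
    centroid can later be associated with the visual appearance of its class."""
    def fit(self, X, labels):
        self.classes = sorted(set(labels))
        self.centroids = {c: np.mean([x for x, l in zip(X, labels) if l == c],
                                     axis=0) for c in self.classes}
        return self

    def predict(self, x):
        return min(self.classes,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

# Hypothetical training events: two per class.
X = [np.array([80.0, 4.0, 0.000]), np.array([75.0, 3.8, 0.001]),   # asphalt
     np.array([20.0, 2.0, 0.030]), np.array([25.0, 2.2, 0.025])]   # sand
clf = NearestCentroidTerrain().fit(X, ["asphalt", "asphalt", "sand", "sand"])
print(clf.predict(np.array([78.0, 3.9, 0.0])))  # -> "asphalt"
</verbatim>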

Learning and Perception

Yirong: Section on learning and perception/sensing. What will we do to make the robot capable of sensing what we have said it will need to sense? In particular, I wrote above that we need to detect (1) 3D terrain geometry, (2) Texture of the terrain, (3) anything else it would need to know about the terrain to navigate through it autonomously. I also wrote that the robot will improve with real-world experience. What research will we do to accomplish these things? What is the overall approach and plan of action?

Sample points to include:

  • Use stereo cameras, because they are light and can provide effective 3-D information (references!!) as well as color, from which a sense of texture can be obtained
  • But the sensors will not be fixed; other sensors could be added as necessary
  • Plan: improve algorithms for obtaining 3-D shape from stereo cameras (references!!)

Voxel-based terrain representations (cite Bai.02 and another one) are showing promise for motion planning over complicated terrain. We plan to use such a representation as a terrain feature, as sketched below...
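
A minimal sketch of such a voxel feature, under assumed parameters: stereo points are quantized into vertical columns of voxels, and each column is scored by how well its top surface agrees with its neighbors, a crude stand-in for "good to step on" in the spirit of the cited voxel-based planning. The 2 cm resolution and the scoring rule are hypothetical.

<verbatim>
# Voxel-column terrain feature from a (stereo) point cloud.
from collections import defaultdict

VOXEL = 0.02  # assumed 2 cm voxel edge length

def voxelize(points):
    """Map (x, y, z) points to the highest occupied voxel in each column."""
    top = defaultdict(lambda: float("-inf"))
    for x, y, z in points:
        key = (int(x // VOXEL), int(y // VOXEL))
        top[key] = max(top[key], int(z // VOXEL))
    return top

def step_score(top, key):
    """Fraction of known neighbors whose surface height matches this
    column's (within one voxel): a crude 'flat enough to step on' score."""
    i, j = key
    neighbors = [top.get((i + di, j + dj)) for di in (-1, 0, 1)
                 for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    known = [n for n in neighbors if n is not None]
    if not known:
        return 0.0
    return sum(1 for n in known if abs(n - top[key]) <= 1) / len(known)

# Tiny synthetic cloud, just to show the data flow.
cloud = [(0.01, 0.01, 0.05), (0.03, 0.01, 0.05), (0.01, 0.03, 0.06)]
top = voxelize(cloud)
print({k: step_score(top, k) for k in top})
</verbatim>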

Learning and control integrated with sensing

Yirong: section on Learning and control integrated with sensing. The robot will have to actually navigate through the environment making use of the sensor data it has. How will it do that? I had imagined for this section it would talk about how the LittleDog control schemes would be adapted to this situation. With LittleDog currently, it has excellent knowledge of the terrain and excellent state estimation. With the proposed robot, it would have much-reduced knowledge of the terrain and state. We still want the robot to be able to do semi-complicated maneuvers such as LittleDog does that require exact foot placement and balance. How would we accomplish that? How will it in general do motion-planning on the scale of several body lengths? Following are some things I think should be mentioned in this section:

  • Learning will occur by experience in the real world – as it traverses terrain, it will get better
  • Supervised learning at first

But perhaps more importantly, the robot may not need an accurate representation of the terrain at all times. Several options here:

- The robot can maintain several hypotheses about what the terrain might be. For example, if it sees tall grass ahead, its vision system might assign some probability to the grass covering solid ground, and some probability to its being grass through which the robot could easily walk. The robot could then choose actions based on this probability distribution, so that it does not fall catastrophically if it is wrong.
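
A minimal sketch of this kind of decision-making under a terrain belief follows. The probabilities and costs are hypothetical, chosen only to show how a catastrophic outcome with modest probability can dominate the expected cost and push the robot toward a cautious action.

<verbatim>
# Choosing an action against a belief over terrain hypotheses.
belief = {"solid_surface": 0.6, "penetrable_grass": 0.4}  # hypothetical

# cost[action][terrain]: stepping on top of grass that turns out to be
# penetrable risks a catastrophic fall; probing or detouring is merely slow.
cost = {
    "step_on_top": {"solid_surface": 1.0,  "penetrable_grass": 100.0},
    "probe_first": {"solid_surface": 3.0,  "penetrable_grass": 3.0},
    "go_around":   {"solid_surface": 10.0, "penetrable_grass": 10.0},
}

def best_action(belief, cost):
    """Pick the action with minimum expected cost under the belief."""
    expected = {a: sum(belief[t] * c[t] for t in belief)
                for a, c in cost.items()}
    return min(expected, key=expected.get), expected

action, expected = best_action(belief, cost)
print(action, expected)  # probing wins: cheap insurance against the fall
</verbatim>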

The robot could also perform an experiment to determine whether the grass was in fact grass or an obstacle. The result of this experiment could inform its near-term decisions: if it saw other grass later on, that grass would more likely have properties similar to grass seen an hour ago than to a yellow curb seen a week ago. In this way the robot can learn on a short-term basis as well as improve its knowledge overall.
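
A sketch of how such a probing experiment could update the robot's belief, assuming hypothetical likelihoods for the probe outcomes; the posterior can then seed the prior for visually similar patches encountered soon afterward, giving the short-term learning effect described above.

<verbatim>
# Bayesian update of the terrain belief after a probing experiment.
def bayes_update(belief, likelihood, observation):
    """belief: P(terrain); likelihood[terrain][obs] = P(obs | terrain)."""
    posterior = {t: belief[t] * likelihood[t][observation] for t in belief}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"solid_surface": 0.6, "penetrable_grass": 0.4}  # hypothetical prior
likelihood = {  # hypothetical probe model: a sinking foot suggests grass
    "solid_surface":    {"foot_sinks": 0.05, "foot_resists": 0.95},
    "penetrable_grass": {"foot_sinks": 0.90, "foot_resists": 0.10},
}
belief = bayes_update(belief, likelihood, "foot_sinks")
print(belief)  # belief now heavily favors penetrable grass
</verbatim>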

RL is uniquely suited to these sorts of problems for several reasons:

  • It supports performing experiments and then incorporating the information learned.
  • An MDP includes probabilities over the results of an action (if I step into the grass, will it give way or hold?), which allows for effective decision-making.

We can use RL that weights recent data more heavily than data seen long ago, so that knowledge can be used on a daily basis. This will enable the robot to respond well, for example, to lighting changes that vary slowly over time.
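
One simple way to obtain this recency weighting, sketched below, is a Q-learning update with a constant (non-decaying) step size, which makes the value estimate an exponentially weighted average of past experience; a sample n steps old contributes with weight proportional to (1 - alpha)^n. The states and actions are hypothetical stand-ins for (terrain appearance, foothold choice) pairs.

<verbatim>
# Recency-weighted value learning via a constant step size.
from collections import defaultdict

ALPHA = 0.2   # constant step size => exponential forgetting of old data
GAMMA = 0.9   # discount factor

Q = defaultdict(float)

def q_update(state, action, reward, next_state, actions):
    """Standard Q-learning step; with ALPHA fixed, recent experience
    dominates, giving slow adaptation to drifting conditions (lighting)."""
    target = reward + GAMMA * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# Hypothetical usage: a bad outcome on a rocky patch lowers that choice.
actions = ["flat_step", "high_step"]
q_update("rocky_patch", "high_step", -1.0, "rocky_patch", actions)
print(Q[("rocky_patch", "high_step")])
</verbatim>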

Also, below is a list of ideas that I had originally thought should go in the proposal, but I now realize many of them should not, since the focus has changed somewhat. What I wrote below is roughly about how a LittleDog-like robot could learn to do motion planning, but the focus is now on how to incorporate perception into motion planning, since the "motion planning given perfect environment/state estimation" problem is already being worked on within the LittleDog DARPA project. We want to focus on what is NOT being done by the LittleDog project.

Following is from the other document, needs serious modification:

Learning can also occur at the medium level, for general short-range path planning: if the robot sees a general terrain geometry (with certain surface characteristics), it will learn over time the appropriate sequence of actions to traverse it. The robot will better interpret the immediate terrain and be able to take the next step appropriately, including controlling its body dynamics, balance, energy efficiency, and stability margin.

Further detailed ideas:

- In particular, we assume the robot has incomplete information about the environment. Its sensors are able to determine the terrain immediately around it well, but it cannot resolve obstacles in the distance. It may, however, get a sense of some important properties of the terrain in the distance: the slope of the terrain, how good the footholds might be, and so on. With this information the robot could do path planning.

- Parameters that might be useful in estimating how traversable a stretch of terrain is: 3-D voxelization of the environment, slope of the ground, vertical height of rectangular obstacles, estimated coefficient of friction, estimated roughness (or angle of asperities) (in general, probability of finding a foothold), crumbliness of the materials or how likely they are to tip back and forth or move if a foot is placed on them, waviness of the surface at various length scales (reference: haptics; can measure this using the robot somehow), etc.
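
Gathered into one structure, these parameters might look like the hypothetical feature vector below, which a traversability estimator (or the value functions discussed later) could consume; the field names and hand-set weights are illustrative only and would in practice be learned.

<verbatim>
# Hypothetical traversability feature vector and linear scoring.
from dataclasses import dataclass, field

@dataclass
class TerrainFeatures:
    slope_deg: float           # local ground slope
    obstacle_height_m: float   # vertical height of step-like obstacles
    friction_est: float        # estimated coefficient of friction
    roughness: float           # asperity angle / foothold-probability proxy
    crumbliness: float         # likelihood the material shifts under a foot
    waviness: list = field(default_factory=list)  # per length scale

def traversability(f, w=None):
    """Illustrative linear score; the weights would be learned."""
    w = w or {"slope_deg": -0.02, "obstacle_height_m": -2.0,
              "friction_est": 1.0, "roughness": 0.5, "crumbliness": -1.5}
    return sum(w[k] * getattr(f, k) for k in w)

f = TerrainFeatures(slope_deg=15.0, obstacle_height_m=0.05,
                    friction_est=0.6, roughness=0.4, crumbliness=0.1,
                    waviness=[0.01, 0.05])
print(traversability(f))
</verbatim>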

- To accomplish learning in path planning, it is important for the robot to recognize when it is in an environment it has seen before, or to recognize aspects of a new environment that are similar to something seen in the past. How should this recognition be accomplished? Recognizing an environment and encoding what has been learned is also important for one robot in a fleet to teach the others about an environment and which paths are good or bad for navigating it.

- In Lee et al. 2006 (ICRA), a value function dictates where the next footstep should occur. The value function is based primarily on a low-dimensional representation of the robot body state, which implicitly includes a low-dimensional representation of the environment. The value function generated was able to effectively navigate the robot through a large number of different obstacles, even ones not seen in training. One approach to RI is likewise to have a value function over foot placements based on the immediate terrain. This value function will surely be an approximation to the optimal value function, since it cannot be trained over all possible terrains and is (to start with) a low-dimensional representation. Learning can occur during the robot’s operation by updating the value function over time to remove errors, bringing it closer to the optimal value function (which could then be used to generate a more optimal policy), as sketched below. A possibly analogous situation in nature is a child or animal learning to walk: at first the walking is unsteady, but it improves with experience. Exactly what a child or animal is learning is unclear (perhaps body dynamics, or how leg compliances should change with terrain), but it seems a good model for how our robot should learn.
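
A minimal sketch of this kind of value-function improvement follows: a linear function of a low-dimensional (body state, local terrain) feature vector, nudged by a temporal-difference error after each real-world step. The feature layout and step sizes are hypothetical; the value function of Lee et al. is only the inspiration here, not reproduced.

<verbatim>
# TD(0) improvement of a linear approximate value function.
import numpy as np

class LinearValueFunction:
    def __init__(self, n_features, alpha=0.05, gamma=0.95):
        self.w = np.zeros(n_features)
        self.alpha, self.gamma = alpha, gamma

    def value(self, phi):
        return float(self.w @ phi)

    def td_update(self, phi, reward, phi_next):
        """Move V(phi) toward reward + gamma * V(phi_next)."""
        delta = reward + self.gamma * self.value(phi_next) - self.value(phi)
        self.w += self.alpha * delta * phi
        return delta

# Each real step supplies (phi, reward, phi_next); as errors are corrected,
# the approximation drifts toward the optimal value function, from which a
# better foot-placement policy can be derived.
V = LinearValueFunction(n_features=4)
V.td_update(np.array([0.2, 1.0, 0.0, 0.5]), reward=-0.1,
            phi_next=np.array([0.3, 1.0, 0.1, 0.4]))
</verbatim>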

- Learning body dynamics and how leg compliances (and, in general, the mechanical properties of the robot) should vary with gaits, maneuvers, etc. is both a strength of our labs (since we have both learning and mechanical people) and could be quite interesting from an RI perspective.

- Another way to improve a value function over time is for the robot to do local planning with a search tree that looks several steps into the future, as sketched below. Such a search tree would likely use a value function as a heuristic for the search. As the robot encounters new terrain, it can try different maneuvers and see whether they work. The robot can then prune the search tree to speed up future searches; practically, this corresponds to updating the value function to reduce or enhance the probability of visiting certain states.
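
A sketch of this local look-ahead follows. The one-step outcome prediction, terrain classification, and value function are abstracted behind hypothetical callables, and the learned pruning is represented as a set of (terrain class, maneuver) pairs observed to fail.

<verbatim>
# Depth-limited look-ahead over maneuvers, with a value-function heuristic
# at the leaves and pruning of maneuvers learned to fail on this terrain.
def plan_steps(state, maneuvers, simulate, value, terrain_of, pruned, depth=3):
    """Return (best score, maneuver sequence).

    simulate(state, m) -> (next_state, step_cost) predicts one maneuver;
    value(state) scores leaf states; terrain_of(state) gives a terrain
    class; pruned holds (terrain_class, maneuver) pairs to skip.
    """
    if depth == 0:
        return value(state), []
    best = (float("-inf"), [])
    for m in maneuvers:
        if (terrain_of(state), m) in pruned:
            continue                      # learned failure: skip the subtree
        nxt, step_cost = simulate(state, m)
        score, tail = plan_steps(nxt, maneuvers, simulate, value,
                                 terrain_of, pruned, depth - 1)
        best = max(best, (score - step_cost, [m] + tail), key=lambda t: t[0])
    return best
</verbatim>

When a tried maneuver fails in the real world, its (terrain class, maneuver) pair is added to the pruned set, which has much the same practical effect as lowering the value of the states that maneuver would visit.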

- There are several possible alternate approaches that could be used instead of a value function based on the local environment. One option is to compose advanced maneuvers from primitive maneuvers, as was done by [Reference: ICRA’06 galloping simulation]. The approach they used was somewhat constrained in that they had a very small number of state-based motion primitives from which to build their library; this could be improved upon considerably. By using combinations of motion primitives, one can potentially build up a large library of possible motions in a space-efficient manner. (Also reference the MIT research on how vision builds things up using primitives.)

- Another option is to… something involving body dynamics and inverse dynamics for control. One can specify a desired trajectory for the center of mass, for example, and compute how the legs need to move to accomplish it. This can be combined with estimates of where good footholds are. Such an approach should deal well with new terrain and inputs, which is a major RI program goal.

- Another possibility is to have the robot estimate the roughness of the terrain and choose an appropriate control strategy accordingly, as sketched below. At one end of the spectrum is flat ground, where the robot can execute an open-loop gait with minimal “thinking” involved; at the other extreme is a boulder field, where the robot must do extensive planning to traverse the terrain.
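
A minimal sketch of this continuum, with hypothetical roughness thresholds:

<verbatim>
# Map estimated terrain roughness to a control strategy.
def select_control_mode(roughness):
    """roughness is assumed normalized to [0, 1]."""
    if roughness < 0.2:
        return "open_loop_gait"        # flat ground: minimal "thinking"
    if roughness < 0.6:
        return "reactive_gait"         # moderate terrain: gait plus reflexes
    return "deliberative_planning"     # boulder field: per-step planning

for r in (0.1, 0.4, 0.8):
    print(r, select_control_mode(r))
</verbatim>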

- We must figure out the right way to represent/encode the locomotion pattern so as to support these various gaits and allow learning!

- Integration with sensors. What happens if the sensor data is anomalous? Can we develop a robust way of interpreting sensor data? (Which sensor data should we interpret?) What does the robot do if it has no sensor data? There should be a continuum of optimal actions based on how much of the terrain ahead is visible: if the entire 3-D terrain ahead is known, the robot can do advanced (global) path planning, while if nothing ahead is known, it may have to fall back on more conservative motions. With an intermediate amount of sensory data, it could do something in between.

Milestones, Tasks, Results -- copied from the other TWiki page for now

Alan - I have made a start on what I think will be an important section of the proposal in which, for each year, we summarize:

  • Focus: what will be the thrust of our activities during that year
  • Research Questions: what are the main research questions that we will be trying to answer
  • Activities: what activities will we undertake (things built, experiments performed, tests conducted, software developed...) What outreach (educational or tech. transition) activities will we undertake?
  • Metrics: how will we assess whether we are on target

Yirong: years 2 and 3 could be improved also. Work on this last.

Year One

  • Focus: Basic traversal and characterization of difficult terrain --
Development of a robust crawling platform and sensor suite, and establishment of an initial terrain description and feature set. Examination of strategies for rough-terrain navigation vis-à-vis the feature set.
  • Research Questions:
    • What are the main sources of difficulty in traversing terrain that is near the limits of the capabilities of a bio-inspired climbing platform? How can it be detected when the robot is in one of these difficult situations using a combination of readily available sensors?
    • What initial feature set (for the current state and recent history) is sufficient to allow a skilled human operator to determine whether the robot is in a "good," "bad," or "precarious" condition?
    • What is a sufficiently complete description of terrain at the mid-length scale to avoid problems of tipping, slippage, and getting stuck (e.g., legs jammed or body "over centered" so that forward progress is not possible)?
    • How do we reliably detect the composition of the terrain? For example, how do we discriminate a rock, which can sustain a load, from a large leaf that cannot? Or discriminate between a slippery surface and a high-friction surface?
    • How do we accurately detect the geometry of the terrain at length scales (~1 cm) sufficient to enable the robot to choose foot-plant locations?

  • Activities:
    • Development of the initial robot platform, adapted from existing Stickybot and iSprawl technology, and initial experiments with the existing RiSE platform.
    • Development of an initial terrain description (including a multi-resolution voxel representation and ??? from previous work) for characterizing the terrain at the immediate (<= 1 body-length) and mid-term (1-3 body-length) scales. Terrain features will include surface condition (friction and ease of gripping with microspined feet), roughness at multiple length scales (10^1 to 10^4 body lengths), surface orientations at likely foot locations, and surface composition (load-bearing or not).
    • Testing of the initial sensor suite, including stereo vision, proprioceptive sensing, inertial sensing, and exteroceptive sensors such as antennae and proximity sensors, adapted from previous robot platforms. Tests will be conducted on indoor obstacles, taking advantage of an external vision system (Vicon) for tracking the position and orientation of the robot body and limbs.
    • Integrate motion planning with a crude terrain description: the robot should know where it is relative to the environment and be able to do crude path planning, at least to circumnavigate large impassable obstacles and to identify regions that are easy to cross.

  • Metrics:
    • Successful operation of the robotic platform over terrain that is near the limits of current legged robots such as RHex, RiSE, and Whegs.
    • Demonstration of robustness of the platform with respect to accidents.
    • Successful determination by a human operator of the quality of the immediate terrain using only data in the established feature set.
    • Basic robot/host computer wireless communication and protocols to permit higher level control and learning in the next phase.

NOTE: have milestones for both detecting pure geometrical terrain and doing stuff with that (like cinder cone rubble) and also more terrain-recognition goals like getting it to detect where the ground is in a grassy environment.

Year Two

  • Focus: Improve performance on difficult terrain and refine the terrain description for learning. Apply learning on ...
  • Research Questions:
    • How can we integrate the information from stereo vision with the information obtained by proprioceptive and proximity sensing to create a more detailed characterization of the terrain, with lower uncertainty, as the robot starts to traverse it?
    • Can we define stereotypical primitive operations, such as horizontal-slope transition, safe stance recovery, and self-righting, that can be employed at run time?
    • If a skilled human operator guides the robot through difficult terrain the first time, can enough information be obtained to traverse the same terrain a second time, autonomously? Can this result be generalized to similar terrain?
    • Can it be recognized that two terrains are sufficiently similar that the same primitives for locomotion, recovery, etc. can be pieced together (in varying sequences) for each?
    • Given (only) feature information about the long-term and mid-term terrain, what trajectory does a human operator choose?
  • Activities:
    • Improved feature definitions
    • Value function corresponding to paths and configurations.
    • Terrain matching and recognition experiments.
    • Preliminary outdoor experiments to investigate sensitivity to changes in lighting.

  • Metrics:
    • Robot can match terrains under variations in lighting.

Year Three

  • Focus: Integration of immediate and mid-term planning and motion control.
    • Increased focus on outdoor performance.
    • Learning terrain characteristics and short-term motion strategies for common conditions (e.g., horizontal/slope transitions, steep slopes, valleys, and hills that could cause the robot to become stuck)
  • Research Questions:
    • Can terrain properties be generalized and, if so, what is the appropriate representation?
    • Can learning be transmitted from one robot platform to another?
  • Activities:
    • Outdoor and indoor experiments in difficult terrain traversal.
    • "Challenge" demonstration on an outdoor trail with steep and rocky sections.
  • Metrics:
    • Ability to recover from mishaps.
    • Ability to backtrack and choose an alternate route when stuck.
    • Ability to improve performance when exposed to similar terrain and obstacles multiple times.
    • Ability to learn (learn terrain characteristics and associated short-term locomotion strategies) from supervised trajectory/exploration provided by a skilled human operator.

Current projects

Mark’s DARPA project is about building robust bio-inspired robots that can move (currently) over relatively simple or structured terrain with relatively small obstacles. Its limitations are that a knowledgeable operator must specifically guide the robot over the terrain, and that it (presently) cannot deal with difficult obstacles requiring complex climbing maneuvers, since it uses a regular gait.

Andrew’s DARPA project is all about moving when the terrain is known extremely well and perfect state estimation of the robot is available. Limitations of the current project are that the hardware cannot be modified and that the robot moves over a 1 m^2 patch. More importantly, when generalizing this work to the search and rescue or exploration applications described earlier, the environment and the robot’s state will not be perfectly known in any practical application.

Broader Impacts

  • Small legged robots have the potential to be useful for applications such as search and rescue in the aftermath of an earthquake or explosion, planetary exploration, and de-mining. They can go where it is too dangerous for people, and into spaces where people and large robots cannot fit. Being small, they have an intrinsic advantage in surviving falls (as small animals do). They are also generally less expensive, easier to transport, and quieter than large machines. However, for small legged robots to be practically useful in such applications, it must be possible to field teams or flocks of them, and it should not be necessary for each robot to be carefully and continuously controlled by an expert human operator. This is particularly true when the robots venture into caves, basements, or other situations where a human operator has limited visibility of the robots’ immediate surroundings.

Although the experiments and learning methods developed for these robots will be aimed at traversing obstacles and rough terrain, the same approach -- including the bio-inspired compliant robust platforms, the sensor interpretation methods, and the learning -- will be broadly applicable to other applications in relatively unstructured environments, including household and hospital robots. These are likely to be much larger markets in the future. Can we refer to any specific plans we have for K-12 involvement? At a minimum, we can point out that, as we have done with RiSE, we will make a concerted effort to set aside summer research internships for minority and under-represented students. We had three such students in our lab last year and expect to do similarly.

We will continue our active community outreach that has been in effect for the last few years. In particular,

We will also take advantage of the fact that bio-inspired legged robots are a popular topic that easily captures the imagination of grade-school and high school students.

Cutkosky’s group has been visible in the media, with various appearances in science education programming (e.g., Sangbae’s recent appearance in LA, Science Central, Quantum, …) and in events such as (extract from Biomimetics/Sprawl media + Googlefest, Minnesota Museum of Science exhibition). Mention participation by ?? in FIRST?

Additional interviews have been given to print media including Popular Science, Forbes, the NY Times, Metropolis Magazine, …. (See http://bdml.stanford.edu/twiki/bin/view/Main/WhatsNew, http://bdml.stanford.edu/twiki/bin/view/Main/VideosAndMedia, and http://www-cdr.stanford.edu/biomimetics/sprawlmedia/sprawl-media.html.)

In addition, over the last few years Cutkosky and his students have collectively made over a dozen presentations to high-school groups, either by visiting them or by hosting laboratory tours. This outreach has had direct benefits for the lab as well: high school students have become undergraduate summer researchers in the lab, and Hispanic undergraduate summer researchers have become graduate students in the lab. We are committed to continuing this outreach, and we will continue to capitalize on the public interest in bio-inspired robotics to get grade-school and high-school students enthusiastic about engineering and robotics.

Broader impacts from dictated notes:

The capabilities of legged robots are steadily improving. Important applications will include search and rescue, planetary exploration, de-mining, and other applications involving terrain that is too rugged for small wheeled vehicles to negotiate successfully. However, the current capabilities of legged robots still fall far behind their potential. This proposal marries advances in robust bio-inspired robots with advances in learning to negotiate obstacles, providing small legged robots for the first time with robust intelligence. We argue that the lack of such robust intelligence is currently the key impediment to their being more widely useful.

The principles of marrying robust bio-inspired mechanisms with terrain recognition and planning can be applied to other domains as well, including wheeled vehicles and household robots (projected to be an important market in the future).

Cost will have to come down for household robots to be useful, but after that the market should be large.

Hospital robotics, assisting the elderly, and other areas.

This marriage of technologies is nowhere more compelling than here, where we are trying to get robots to overcome obstacles that they currently cannot; that is why this is a great application to start with.

The big idea is being bio-inspired and being able to cope with uncertainty in the environment.

Collaboration Plan

Results from prior NSF work

References

-- AlanAsbeck - 23 Oct 2006

 