Logistics
Program Description
From the NSF Robust Intelligence Proposal:
The Robust Intelligence (RI) program encompasses all aspects of the computational understanding and modeling of intelligence in complex, realistic contexts. In contrast to systems that use limited reasoning strategies or address problems in narrow contexts, robust intelligence may be characterized by a system's flexibility, resourcefulness, use of a variety of modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. The RI program advances and integrates the research traditions of artificial intelligence, computer vision, human language research, robotics, machine learning, computational neuroscience, cognitive science, and related areas.
Researchers across all areas of RI are addressing progressively richer environments, larger-scale data, and more sophisticated computational and statistical approaches, looking to nature in many cases to model cognitive and computational processes. Interactions across traditional disciplines are also of increasing importance. For example, speech and dialogue research seeks to understand the cognitive psychological underpinnings of conversation that contribute to the robustness of human speech perception and intention understanding. Computer vision is exploring approaches developed in language processing to represent the semantic information in images and video in ways useful for mining, navigation, and robotic interaction, and working with ideas developed in computer graphics and physics-based modeling to understand and depict collections of images. A cognitive architecture may bridge sophisticated planning and problem solving modules with perception and action modules, perhaps accounting for certain human or animal behaviors. Robotic systems need to understand and interact with humans in unfamiliar and unstructured environments. Computational understanding of neurons, networks, and the brain increasingly draws on computer vision, robotics, and machine learning, and provides insights into the coding, representations, and learning underlying intelligent behavior in nature.
These examples are meant to convey the general goals of RI, not to limit its scope. The program supports projects that will advance the frontiers of all RI research areas, as well as those that integrate different aspects of these fields. More information on topics of interest to the RI program is available at: http://www.nsf.gov/cise/iis/ri_pgm.jsp
I think we fit particularly well in these categories:
- Research on intelligent and assistive robotics, healthcare robotics, social robotics, micro- and nano-robotics, marine robotics, mobile robotics, neuro-robotics, rescue robotics, space robotics, humanoid robotics, unmanned aerial vehicles (UAVs), and multi-robot coordination and cooperation with novel approaches to sensing, perception, cognition, actuation, autonomous manipulation, learning and adaptation, haptics, and multi-modal human-robot interaction.
- Synergistic and collaborative research of innovative and emerging technologies to improve the intelligence, mobility, autonomy, manipulability, adaptability, and interactivity of robotic systems operating in unstructured and uncertain environments.
- Fundamental research on innovative and emerging robotic technologies for monitoring and surveillance of our environment, and to improve quality of life.
Proposal Notes
From the Cyber Physical proposal
Context
We have a perching system that can land semi-reliably on a good wall, perform some tasks at that location, and then take off from the wall. Unfortunately, it is hard to remotely detect a "good wall", and the best way to figure it out is still to land on it and see if it works. Also, once landed, the plane is not currently able to reorient or explore the wall locally (i.e. follow a crack system for inspection) and needs to take off, fly away, and land again to change its position on the wall.
Most walls on tall buildings are perfectly suitable for landing. Unfortunately, walls that are only one or two stories high are often surrounded by trees or other features that make it hard to find a suitable landing location. However, in that kind of environment other locations can be suitable for perching, like roofs or wires.
It will be important to make two points to frame our research:
- No adhesion technology performs well on all surfaces (smooth, rough, soft, hard and dirty surfaces... is that true for electro-adhesion?). Thus, failures will happen for an airplane trying to perch, and we had better detect them and be able to recover. It is also quite hard to determine from far away if a surface is suitable for landing (is it dirty? does it have good asperities?)!
- Small UAVs are just starting to be designed for close interaction with the wall/ground. There is a clear lack of lightweight sensors to allow proximity maneuvers (measuring distance and attitude).
Vision
We would like to have a system capable of attempting to land on various vertical surfaces, detecting a failed landing, recovering from it, and determining whether it is worth adapting its approach and trying again. This way, the airplane would be able to "explore around" and gracefully recover from missed attempts, which would make the process of finding a suitable spot for perching much easier (trial-and-error instead of trial-and-failure). Once on the wall, the airplane would use a combination of leg actuation and thrust (hop and cling again, the way birds climb up walls) to reorient or locally move around interesting features on the wall.
To increase the number of available perching locations, we also want to be able to land on horizontal and slanted surfaces (up to 30 deg? More if we use spines...). Furthermore, we are interested in achieving high-precision landing on those surfaces, as landing close to a ledge would provide a better vantage point for observation and ease the takeoff.
What about wires?
Applications
Inspection of concrete buildings or bridges for cracks or surface texture. Automatic deployment of reconfigurable sensor networks (forest fire monitoring, environment monitoring, crowd management during unexpected events, search and rescue, etc.).
Questions
Sensor design for long and short range interaction with the surface
A review of existing sensors can be found there.
Action item #1: we need to research and briefly summarize the state of the art here, say what is missing, and explain how we propose to address it. It is perhaps not the main intellectual thrust of our work, but it is necessary. Part of this discussion should include long-range, medium-range and close-range sensing for the different phases (initial approach, final approach, motion on the wall). We also need to recognize that the orientation of the plane changes, so the sensors need to look in different directions.
The best idea I have so far is to project an array of points onto the wall and detect them using a Wii sensor. This way, we should be able to get position and attitude at about 50-100 Hz, which should be sufficient for perching. It is not yet clear whether we can project points bright enough at a distance of about 7 m (to perform the perching maneuver). Worst case, we could always use some "beacons" on the wall, which would allow us to perform most of the work described in this proposal. I can see three applications of this type of sensor (a rough pose-estimation sketch follows the list below):
- One sensor on the nose of the airplane to detect walls and trigger the maneuver.
- One sensor installed at an angle corresponding to the deep stall flight path, to identify the distance to the roof, the orientation of the roof (walls are always vertical, but roofs have various slope angles), and the orientation of the airplane with respect to the roof.
- One sensor with a shorter range (1-2m) installed on the belly of the airplane to allow maneuvering on the wall by hop, hover and cling.
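To make the sensing idea a bit more concrete, here is a minimal sketch of recovering position and attitude from four tracked points with a standard PnP solver. It assumes the point geometry on the wall is known, which is strictly true only for the "beacons" fallback; for dots projected from the airplane, the spacing on the wall changes with range and attitude, so a triangulation model of the projector geometry would be needed instead. The intrinsics, pattern size, and use of OpenCV are assumptions for illustration, not part of the proposal.

```python
# Sketch: recover plane pose relative to the wall from 4 tracked points seen
# by the Wii camera. Assumes the point pattern geometry on the wall is known
# (beacon case); all numbers below are placeholders, not measured values.
import numpy as np
import cv2

PATTERN_SIDE = 0.20  # m, assumed spacing of the points on the wall (hypothetical)

# 3D point positions in the wall frame (z = 0 on the wall plane)
object_points = np.array([
    [0.0,          0.0,          0.0],
    [PATTERN_SIDE, 0.0,          0.0],
    [PATTERN_SIDE, PATTERN_SIDE, 0.0],
    [0.0,          PATTERN_SIDE, 0.0],
], dtype=np.float32)

# Assumed pinhole intrinsics for the 128x96 Wii sensor (placeholder focal length)
K = np.array([[130.0,   0.0, 64.0],
              [  0.0, 130.0, 48.0],
              [  0.0,   0.0,  1.0]], dtype=np.float32)

def wall_pose(image_points):
    """image_points: 4x2 pixel coordinates of the tracked points."""
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  np.asarray(image_points, dtype=np.float32),
                                  K, None, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)              # rotation wall -> camera
    distance = float(np.linalg.norm(tvec))  # range to the pattern
    return R, tvec, distance
```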
Some text from the proposal with Aaron
To achieve stable hover in close proximity to a wall, it is important to have information not only about the plane's attitude but also about its position with respect to that wall. To obtain this information, we propose to develop a position/attitude sensor based on the Wii optical sensor, a 0.4 g chip that can track up to 4 points at 100 Hz with a native resolution of 128x96 pixels. That sensor has been successfully used by another team to achieve controlled hovering [3], but only at distances shorter than one meter from a platform containing homing LEDs. Our system will include onboard laser diodes with the required optical power to project a 4-point pattern suitable for attitude and position estimation at distances up to 5 m. A single sensor mounted in the front of the plane would also replace the ultrasonic range finder currently used to detect a wall and trigger the pitch-up maneuver. We believe this system will be a more reliable and lighter-weight alternative to the ultrasound sensor (5.9 g), which is susceptible to noise from the propeller.
With this solution, we can leverage the extensive development costs put into the sensor for the gaming industry, and acquire a high-performance, low-weight package with built-in onboard pixel tracking at very low cost. Since the sensors are small and light, we plan to mount several sensors facing outwards along each axis of the fuselage (up to 6 additional) that would allow the VSP-MAV to track its position relative to the environment at 100 Hz. This information, coupled with knowledge of the gravity vector and attitude changes from an IMU, would allow drift-free stable hover.
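As a rough illustration of the "drift-free hover" claim, here is a sketch of fusing the 100 Hz optical position fixes with IMU acceleration along one axis using a simple complementary filter. The gains and the exact filter structure are assumptions, not something we have implemented.

```python
# Sketch: fuse 100 Hz optical position fixes with IMU acceleration for
# drift-free hover, using a simple complementary filter along one axis.
# Gains and update rate are illustrative assumptions, not tuned values.

DT = 0.01          # s, 100 Hz update rate of the optical sensor
K_POS = 2.0        # position-correction gain (assumed)
K_VEL = 1.0        # velocity-correction gain (assumed)

class AxisEstimator:
    def __init__(self):
        self.pos = 0.0   # m, estimated distance to the wall along this axis
        self.vel = 0.0   # m/s

    def update(self, accel_meas, optical_pos):
        """accel_meas: IMU acceleration along the axis (gravity removed);
        optical_pos: position fix from the optical sensor, or None if the
        points were not tracked this frame."""
        # Predict with the IMU (dead reckoning; drifts if left uncorrected)
        self.vel += accel_meas * DT
        self.pos += self.vel * DT
        # Correct with the optical fix whenever it is available
        if optical_pos is not None:
            err = optical_pos - self.pos
            self.pos += K_POS * err * DT
            self.vel += K_VEL * err * DT
        return self.pos, self.vel
```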
Landing failure detection and recovery during wall perching maneuver
I will make a table of failures to clarify everything, but so far, we see failures from:
- Not pitching quickly enough. This is usually caused by a missed detection, as our current sensor only has an update rate of 10 Hz and a 6.5 m maximum range. As the plane is flying at 10-15 m/s, we are 1-1.5 m closer to the wall at each update... We currently need about 5 m to perform the maneuver, so a single missed detection can lead to failure (see the margin calculation after this list)!
- Reaching the wall with touchdown states outside the allowable envelope. Usually caused by control or sensing problems.
- Wall surface not suitable for landing (too few asperities or surface too smooth).
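The margin calculation referenced above, using the numbers from the text (airspeed taken at the high end):

```python
# Back-of-the-envelope check of how little margin the current 10 Hz / 6.5 m
# sensor leaves for the pitch-up decision.
SENSOR_RANGE = 6.5      # m, maximum detection range
UPDATE_RATE = 10.0      # Hz
AIRSPEED = 15.0         # m/s (10-15 m/s in practice)
MANEUVER_DIST = 5.0     # m needed to complete the pitch-up and perch

dist_per_update = AIRSPEED / UPDATE_RATE            # 1.5 m travelled per sample
margin = SENSOR_RANGE - MANEUVER_DIST               # 1.5 m of slack
missed_updates_allowed = margin / dist_per_update   # 1.0 -> one miss eats the margin
print(dist_per_update, margin, missed_updates_allowed)
```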
Q. Can we shorten the maneuver, and thus the required distance to perform it (easing the sensing requirements), by shifting the center of mass, making the airplane less stable and allowing a faster pitch-up?
Q. Can we detect the failure of the plane to cling to the surface?
Q. How early can we detect the failure of the plane to cling?
- After it has failed, by detecting free fall, a large pitch-back, or non-steady states (see the detection sketch after this list)?
- During the failure, by monitoring the airplane motion (is there such a thing as a typical failure trajectory?) or the spine forces (catch and release)?
- Before it impacts the wall, by measuring the approach states?
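The detection sketch referenced above: a guess at how post-impact failure could be flagged from an onboard IMU. The thresholds are illustrative, not measured.

```python
# Sketch of post-touchdown failure detection from onboard sensing, assuming an
# IMU reporting specific acceleration (m/s^2), pitch (deg), and body rates
# (rad/s). All thresholds are illustrative guesses.
import math

FREE_FALL_ACCEL = 2.0     # m/s^2: |specific accel| near zero => falling
PITCH_BACK_LIMIT = 120.0  # deg: pitched past vertical => peeled off the wall
SETTLE_TIME = 0.3         # s by which the plane should be quiet on the wall
STILL_RATE = 0.2          # rad/s: body rate considered "settled"

def clinging_failed(accel_xyz, pitch_deg, gyro_rate, time_since_touchdown):
    """Return True if the plane should declare a failed perch and recover."""
    accel_mag = math.sqrt(sum(a * a for a in accel_xyz))
    if accel_mag < FREE_FALL_ACCEL:
        return True      # near free fall: spines never engaged, or released
    if pitch_deg > PITCH_BACK_LIMIT:
        return True      # pitched back off the wall
    if time_since_touchdown > SETTLE_TIME and abs(gyro_rate) > STILL_RATE:
        return True      # still moving well after touchdown
    return False
```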
Q. When is it best to recover from a failed attempt? Is there such a thing as too late to recover? When does the probability of a successful landing get low enough that we shouldn't bother waiting for complete failure (i.e. as the spines drag down the wall, their velocity increases and at some point there is little chance for the spines to engage)? Is there a period of time during landing that particularly favors recovery (takeoff), and should we take advantage of it?
Q. Can we detect the cause of failure? Different causes could lead the airplane to classify the wall as either unsuitable for perching, or as failed but not because of the surface (i.e. bad approach, failure to catch an asperity in time, etc.). A rebound would clearly indicate a failed approach (too fast), but some spine drag could be caused either by a bad approach (downward velocity too large) or by a lack of asperities.
Q. Can we detect spine release (overload protection)? That would allow us to differentiate between a bad approach (downward velocity too large) and a lack of asperities.
Maybe a binary switch that gets compressed as the overload protection engages?
Robust approach to vertical walls
Q. What sensors? How much control? What kind? We should probably discuss that with Tedrake...
Q. How can we take advantage of the touchdown envelope during the approach? A few ideas:
- Maximize the time spent or distance travelled in the touchdown envelope. To do that, and assuming little control in that region (maybe not true if we have thrust and propwash... but probably not in the last instant before touchdown), we can simulate trajectories starting at various points on the boundary of the touchdown envelope. Once the trajectories are computed, we can evaluate them against some criteria (max time, max forward distance, etc.) and select the best. We can then use an RRT algorithm to connect that optimal trajectory, once inside the landing envelope, to the flying approach (a rough sketch follows this list).
- Evaluate how errors in the wall detection translate in terms of the touchdown envelope (Monte Carlo simulation?)...
- How is the "robust approach" different from a more traditional one?
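A rough sketch of the first bullet, to show the intended computation rather than a real implementation: sample candidate entry states on the touchdown envelope boundary, roll an assumed uncontrolled model forward, and score each by time spent inside the envelope. The dynamics and the envelope test are placeholders for the real perching model and the experimentally measured envelope.

```python
# Placeholder state: [x, z, vx, vz, theta]; ballistic, no control authority.
import numpy as np

DT = 0.005  # s, integration step

def dynamics(state):
    x, z, vx, vz, theta = state
    return np.array([vx, vz, 0.0, -9.81, 0.0])

def in_touchdown_envelope(state):
    """Placeholder membership test for the measured touchdown envelope."""
    x, z, vx, vz, theta = state
    return (vx < 3.0) and (abs(vz) < 2.0) and (60.0 < np.degrees(theta) < 110.0)

def time_in_envelope(state0, horizon=1.0):
    state, t_inside = np.array(state0, dtype=float), 0.0
    for _ in range(int(horizon / DT)):
        if in_touchdown_envelope(state):
            t_inside += DT
        state = state + DT * dynamics(state)
    return t_inside

# Score candidate entry states sampled on the envelope boundary; the best one
# becomes the target the RRT connects back to the flying approach.
candidates = [np.array([0.0, 5.0, vx, vz, np.radians(80.0)])
              for vx in np.linspace(0.5, 3.0, 6)
              for vz in np.linspace(-2.0, 0.0, 5)]
best = max(candidates, key=time_in_envelope)
```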
Q. How can we adapt the controller for various landing conditions?
Would we like to change anything (there are probably different touchdown states that would be better for some specific conditions), or would a single trajectory be the best?
Q. Can we integrate both wall landing and roof landing (i.e. although both have a velocity component normal to the belly, they would have opposite tangential velocity components... pointing toward the nose for roof landing and toward the tail for wall landing)?
Maneuvering on the wall (hop, hover and cling again)
Instead of crawling quasistatically along the wall, we could take a dynamic jumping or hopping approach, the way birds climb trees. Is there anything about that in the literature (points for originality!)?
Q. How does it compare with crawling? Will it be much harder than crawling? How efficient is it in terms of complexity, weight and power consumption (less need for actuation in the legs... but we probably need T/W > 1)?
Q. How big/small of a hop can we do? What is the accuracy? How good is it at reorienting the plane?
Q. How would we control it? What kind of sensors would be needed? Is an IMU enough? Should we also use a Wii sensor with multiple projected dots on the belly of the airplane?
I guess we can get a pretty clean release from the wall by using the spine release mechanism, so the transition from attached to free would be well defined. Then we are basically hovering until we touch the wall again (a rough state-machine sketch follows).
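A very rough state machine for one hop cycle, just to pin down the sequence described above (release, hover, re-approach, cling). The states, thresholds, and sensing inputs are placeholders, not a worked-out design.

```python
# One "hop, hover, and cling" cycle, assuming the clean spine release above
# and a belly-mounted short-range sensor. Everything here is a placeholder.
from enum import Enum, auto

class HopState(Enum):
    CLINGING = auto()
    RELEASED = auto()      # spines released, pushing away from the wall
    HOVERING = auto()      # translating/rotating toward the next target spot
    APPROACHING = auto()   # closing back in on the wall, belly first

APPROACH_RANGE = 0.5   # m, assumed distance at which we commit to re-engaging

def step(state, wall_range, spines_engaged, at_target_pose):
    """One transition of the hop cycle given current sensing."""
    if state == HopState.CLINGING:
        return HopState.RELEASED          # commanded release starts the hop
    if state == HopState.RELEASED:
        return HopState.HOVERING          # free of the wall, thrust carries us
    if state == HopState.HOVERING and at_target_pose:
        return HopState.APPROACHING
    if state == HopState.APPROACHING and wall_range < APPROACH_RANGE and spines_engaged:
        return HopState.CLINGING
    return state
```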
Q. Is it incompatible with crawling?
Q. Is it a good idea to consider in year 3-4?
Deep stall landing on roofs and ledges (flat to slightly slanted surfaces)
The idea here is to perform a deep stall maneuver (see Flight at Supernormal Attitudes or the NASA Schweizer 1-36 Deep Stall Research) to rapidly reach the surface of a roof without being too affected by disturbances. Two options are possible: starting the maneuver just above the roof, or starting higher to allow some control during the descent and the possibility of aborting the landing. Note that only a small modification would be needed for the Flatana to perform the deep stall maneuver: a stabilator (all-moving horizontal tail) with a range of -45 to 90 deg.
Q. What kind of drop rate can we expect? Using the C_L and C_D values from Flight at Supernormal Attitudes, a Flatana glider would drop at a speed of roughly 3 m/s (vertical and horizontal components). Lower values could be expected when using thrust. That is a nice number, because the suspension designed for wall landing can softly absorb impacts up to about 2.5 m/s.
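For reference, the glide relations behind that kind of estimate, with assumed Flatana mass, wing area, and deep-stall coefficients (the ~3 m/s figure in the text comes from the cited report's data, not from these placeholder numbers):

```python
# Steady deep-stall glide: V = sqrt(2 m g / (rho S sqrt(CL^2 + CD^2))),
# glide-path angle tan(gamma) = CD / CL. Mass, wing area, and coefficients
# below are assumptions for illustration only.
import math

M = 0.4        # kg, assumed Flatana mass
S = 0.22       # m^2, assumed wing area
RHO = 1.225    # kg/m^3, sea-level air density
G = 9.81       # m/s^2
CL, CD = 1.0, 1.0   # assumed coefficients at a ~45 deg deep-stall angle of attack

CR = math.hypot(CL, CD)                    # resultant force coefficient
V = math.sqrt(2 * M * G / (RHO * S * CR))  # airspeed along the glide path
gamma = math.atan2(CD, CL)                 # glide-path angle below horizontal
sink_rate = V * math.sin(gamma)            # ~3 m/s with these placeholder values
forward_speed = V * math.cos(gamma)
print(V, math.degrees(gamma), sink_rate, forward_speed)
```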
Q. Can we perform high precision landing by starting the deep stall maneuver just above the roof surface (i.e. 1m)? How repeatable is the transition between normal flight and deep stall?
Q. How much control do we have in a deep stall landing? Is it enough to perform high-accuracy landing? What kind of approach trajectory can we fly? How big of a disturbance can we deal with? How does the controllability change with angle of attack? Would some independent ailerons or a split stabilator (all-moving horizontal tail) be desirable?
Q. What are the different ways to abort a deep stall landing? How long, and how much space, does it take to resume normal flight? Would it be faster to bank (aileron) or steer (rudder) the airplane to abort?
Landing on wires
Q. Can we detect wires using a technique similar to the one used in High speed obstacle avoidance using monocular vision and reinforcement learning?
Q. Other things?
Other cool ideas
Non-contact (visual) classification of landing surfaces
So we always say that the surface roughness is not a good indicator of the perching quality of the surface. Can we build a sensor that would recognize good surfaces? Can we do it for spines? For dry adhesives?
Q. Can we differentiate surfaces using high-res images? Looking at visual features (shadow projection, contrast, feature size and density, color, smoothness, etc.)? (A classifier sketch follows these questions.)
Q. Once we have perched on a surface (or failed to), can we classify good and bad surfaces by how they looked from far away?
Q. Would some specialized filters help (i.e. polarizing filter to remove reflections)?
Q. How successful can such an algorithm be at classifying the quality of a surface?
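A sketch of what such a classifier could look like: cheap texture statistics computed on wall patches, fed to a standard classifier trained on patches labelled by past perching outcomes. The feature choice and the use of OpenCV/scikit-learn are assumptions, not part of the proposal.

```python
# Sketch: texture statistics (edge density, local contrast, fine-scale
# roughness) on grayscale uint8 wall patches, plus a classifier trained on
# patches labelled by whether a past perch attempt succeeded.
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray_patch):
    """Cheap texture descriptors for one grayscale (uint8) wall patch."""
    edges = cv2.Canny(gray_patch, 50, 150)
    edge_density = float(np.count_nonzero(edges)) / edges.size
    contrast = float(gray_patch.std())
    lap = cv2.Laplacian(gray_patch, cv2.CV_64F)
    roughness = float(np.abs(lap).mean())     # proxy for fine-scale texture
    return [edge_density, contrast, roughness]

def train_surface_classifier(patches, labels):
    """patches: list of grayscale images; labels: 1 = perch worked, 0 = failed."""
    X = np.array([texture_features(p) for p in patches])
    return RandomForestClassifier(n_estimators=100).fit(X, labels)

def looks_perchable(clf, patch):
    return bool(clf.predict([texture_features(patch)])[0])
```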
--
AlexisLD - 19 Aug 2010