A vision sensor for asperities

-- MicheleLanzetta? - 23 Jul 2004

New related topics:

An alternative to structured light (as indicated in the project proposal) is directional scattered light at grazing incidence (Ital. radente), very close to the surface. A suitable colour should be chosen to optimize vision on various surfaces and to enhance the contrast with environmental light.

The light source should be coherent (a laser), or a polarizing filter is needed.

The resulting image, with almost no processing (a binarization threshold adaptive to the environmental light), will highlight a map of the larger hooking areas (holes and protrusions). Protrusions in a higher position might occlude the lower ones, but they are also the most favourable to hook onto. A stroboscopic light source allows grabbing two images (with and without lighting) and enhancing the asperities by image subtraction with a monochrome camera. If the robot has moved significantly between the two acquisitions (this depends on frame rate and speed), one of the two images is shifted accordingly. Red(*) light is good for silicon CCD sensors. By geometrical calibration (a lookup table) the corresponding image areas are located; images are grabbed at a constant distance from the surface, so optical calibration is simpler and less time consuming.
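A minimal sketch of the subtraction-and-threshold idea described above; the function name, the motion-compensation `shift` parameter, and the adaptive-threshold rule (mean + 2*std of the difference) are illustrative assumptions, not from the notes:

```python
import numpy as np

def asperity_map(lit, unlit, shift=(0, 0), threshold=None):
    """Binary map of asperities from a lit/unlit stroboscopic pair of
    monochrome frames. All names and the adaptive-threshold rule
    (mean + 2*std of the difference) are illustrative assumptions."""
    # Re-align the unlit frame if the robot moved between the two grabs.
    unlit = np.roll(unlit, shift, axis=(0, 1))
    # Subtract ambient light; asperities lit by the grazing flash remain.
    diff = lit.astype(np.int16) - unlit.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    if threshold is None:
        # Binarization threshold adaptive to the environmental light level.
        threshold = diff.mean() + 2.0 * diff.std()
    return diff > threshold
```

On a synthetic pair (uniform ambient level plus a small bright patch in the lit frame), only the patch survives the subtraction and threshold.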

Two sensors (mirrors) may be necessary, depending on the robot leg configuration (and perhaps a single camera will still suffice).

A sensor will scan the surface preceding a foot and output the position and instant at which to hook.

A scheme of the main vision system elements is shown. A mirror is used to reduce the weight/moment.


The support can also act as a cover, reducing the disturbance from the (changing) environmental light.

(*) Using red light, special care should be taken if the robot has to climb inside discotheques, photographic development labs, clubs, etc. :-)))

Uses of the sensor

Such a sensor can also provide information about the surface type, in case different foot configurations are available.

While moving, the sensor will scan the surface preceding the foot

  • to find the most suitable hooking areas (exact reference) or
  • to provide general indications, such as areas with a higher probability of catching on good asperities

The following feet might use the same information, delayed in time.

For fiber-penetrable surfaces, like wood or cork, the main fiber direction should also affect penetration. This kind of information should be retrieved by the sensor to indicate the easier penetration direction for a claw.
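One hypothetical way the sensor could estimate the main fiber direction is the image structure tensor (dominant texture orientation from intensity gradients); the function name and the approach are assumptions for illustration, not from the notes:

```python
import numpy as np

def dominant_fiber_angle(img):
    """Estimate the dominant fiber direction, in degrees from the
    horizontal image axis, via the structure tensor of the image.
    Hypothetical approach for the wood/cork case; not from the notes."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, cols
    jxx = (gx * gx).sum()
    jyy = (gy * gy).sum()
    jxy = (gx * gy).sum()
    # Orientation of the dominant intensity gradient.
    grad_angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # Fibers run perpendicular to the dominant gradient.
    return (np.degrees(grad_angle) + 90.0) % 180.0
```

On a vertically striped test image this returns 90 degrees (fibers vertical); on its transpose, 0 degrees.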

It can also be used as a relative speed and direction sensor, using the principle of an "optical mouse".
Regular motion and good hooking can also be assessed from the speed profile.
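The optical-mouse principle (estimating frame-to-frame displacement) can be sketched with phase correlation; a real mouse computes this in dedicated silicon, so the function below is only an illustrative assumption:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the (dy, dx) displacement between two frames by phase
    correlation -- the frame-matching principle behind an optical mouse.
    Illustrative sketch; a real mouse does this in dedicated hardware."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-9        # keep phase information only
    corr = np.fft.ifft2(cross).real      # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative shifts.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)
```

Feeding it two frames where the second is a circularly shifted copy of the first recovers the shift exactly.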


Preliminary tests using a wireless optical mouse have shown that it still detects motion at 2-3 mm from the wall surface. The mouse components still need to be installed on a robot for more extensive testing. Optical correction might be necessary to compensate for the larger distance.

A quick prototype will be set up using a low-resolution camera (like a webcam), an LED and Matlab for processing.

Using a 640x480 pixel camera and a field of view of 20x15 mm, we have a resolution of about 30 micron, i.e. a 30 micron asperity can in principle be detected. The field of view should be scanned preceding EACH leg, so we need either at least two cameras for the two front legs, with the following legs operating on the same surface strips (by walking aligned parallel to the robot axis), or a camera for each leg!

We really need to process (= analyse the image of) the surface when the next step has almost been completed. If the robot proceeds at 15 mm/s, we have about 1 second to process each image (the time to traverse one 15 mm field of view).
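The resolution and timing figures above can be checked in a few lines (values taken directly from the notes):

```python
# Figures from the notes: 640x480 camera, 20x15 mm field of view,
# robot speed 15 mm/s.
pixels_x, pixels_y = 640, 480
fov_x_mm, fov_y_mm = 20.0, 15.0

res_um = fov_x_mm / pixels_x * 1000      # micron per pixel
# 31.25 micron per pixel, i.e. a ~30 micron asperity spans about one
# pixel (20/640 and 15/480 give the same pixel pitch).

speed_mm_s = 15.0
budget_s = fov_y_mm / speed_mm_s         # time to traverse one field of view
print(res_um, budget_s)                  # 31.25 1.0
```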

  • Can we use a remote processing unit or should we have a processor on board the robot?

Using an Optical Mouse

See also: OpticalDisplacementSensor.


An optical mouse has lighting, image acquisition and processing embedded; it is miniaturized and cost effective.


The field of view of an optical mouse is optimized to maximize surface texture changes, whether they come from surface colour differences or from roughness; it cannot distinguish between the two. We would need two different light sources and synchronized acquisition to detect roughness (asperities) alone. The light source is not coherent (not a laser) and the lens is not (?) polarized. The optical mouse grabs monochrome images, so it needs pairs of images to detect asperities; with a colour sensor and a colour light source, a single acquisition would suffice. It is not clear whether, by changing the lens, adding a light and doing external image processing, we would be throwing away the most valuable parts of the sensor (we would only be using it as a tiny camera).

An optical mouse (16x16 pixels) only has a 0.48x0.48 mm field of view at the required 30 micron spatial resolution, so one sensor per claw and per leg (given the difficulty of aligning steps) is needed.

  • Is this reasonable?
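The field-of-view arithmetic above, spelled out (values from the notes):

```python
pixels = 16            # optical mouse sensor is 16x16 pixels
res_mm = 0.030         # required 30 micron spatial resolution
fov_mm = pixels * res_mm
print(fov_mm)          # 0.48 -> a 0.48x0.48 mm field of view
```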
