SimSoftwareArch (11 Jun 2003, ShaiRevzen)
-- ShaiRevzen - 11 Jun 2003

Some general thoughts on the simulation issue:

I believe it is important to separate several layers of simulation. By separating them out, we might be able to provide adequate solutions to develop the upper layers in parallel with the lower ones (rather than after the lower ones are ready):

(1) Component simulation -- simulation of specific physical sub-systems, e.g. surfaces, leg segments, etc.

(2) Dynamics simulation -- the physical layer, integrating various control inputs and mechanical constraints into object trajectories punctuated by contact events.

(3) Robot hardware emulation -- providing the same programming environment as the robot hardware, including a simulation of timing and device-related events (interrupts, ADC conversion delays, etc.).

(4) Robot control library simulation -- (e.g. RHexLib or the far simpler SprawlOS) providing a desktop version of the same APIs that are used for programming the actual robot.

(5) Software control loops -- "analog" control loops closed in software. These control loops utilize (4) to stabilize behavioral primitives ("walk", "run", "turn", ...).

(6) High level behavioral code.

I know precious little about how to go about (1) & (2), but I'll hazard public ridicule anyway:

(1) It seems to me that (1) is an open topic for many of the sub-systems of interest (anybody out there with a good physical model of tree bark at the millimeter scale?). It is probably worth defining what physical "primitives" we anticipate needing for various qualities of simulation, and which of these are "research grade" (e.g. the physical model for artificial dry adhesives). We should then specify the mathematical interfaces (e.g. "provides an angle-of-incidence-based stress tensor") for these objects, and define "toy" implementations (e.g. Coulomb friction) to be used for "rough" simulations.
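To make the "mathematical interface plus toy implementation" idea concrete, here is a minimal sketch in Python. The interface and all names in it are hypothetical illustrations, not a proposal for the actual API; the toy implementation is plain Coulomb friction, as mentioned above.

```python
# Sketch: a contact-model interface that a layer-(1) primitive promises
# to layer (2), with a toy Coulomb-friction implementation for "rough"
# simulations. All class and method names here are hypothetical.
from abc import ABC, abstractmethod
import math

class ContactModel(ABC):
    """What a physical primitive exposes to the dynamics layer."""
    @abstractmethod
    def tangential_force(self, normal_force: float, slip_speed: float) -> float:
        """Friction force opposing slip, given normal load and slip speed."""

class CoulombFriction(ContactModel):
    """Toy implementation: |force| = mu * N, directed against the slip."""
    def __init__(self, mu: float):
        self.mu = mu

    def tangential_force(self, normal_force: float, slip_speed: float) -> float:
        if slip_speed == 0.0:
            return 0.0  # toy model: no stiction handling
        return -math.copysign(self.mu * normal_force, slip_speed)

toy = CoulombFriction(mu=0.6)
print(toy.tangential_force(10.0, 0.5))   # -> -6.0 (opposes positive slip)
```

A "research grade" tree-bark or dry-adhesive model would implement the same interface, so layer (2) need not care which quality of model it is talking to.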

(2) The various simulators Uluc has reviewed cover some of the ground for (2). While one could, in principle, implement (2) as a home-grown simulator, the amount of debugging and the level of numerical-computation savvy required seem prohibitive to me (unless we have 4-5 interested computer science grads... do we?). My guess is that we should define the requirements for this layer, and shop for an existing solution that allows us to plug our layer (3) on top, and is versatile enough to accept our anticipated "research grade" layer (1) objects.

(3) For (3) we should probably get some idea of what types of boards we plan to use. My main input here is that the board support and development environment matter a whole lot more than the actual board capabilities. A CPU that is 10 times faster and uses 1/10 the power is still usually worse than a slower powermonger if its board support package is buggy. Probably the best bet is to pick a "real" embedded OS as our target (e.g. QNX, RTLinux, ...). That way we can inherit the interface specification without deciding on a board right away.

(4) I have little to say about the choice of robot control library - I haven't done any real programming for robots (yet...). It is worth noting that an "event log" fed into a simulation of such a library can let us develop a lot of the software for layers (4) and up without any physical hardware (e.g. what I am developing in SprawlOS, where a Python script can inject events for the "robot controller" to react to).
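The "event log" idea can be sketched in a few lines of Python. Everything here is hypothetical illustration -- the event names, the stub controller, and the replay function are not the SprawlOS API -- but it shows how scripted events can drive the same handler code that would run on the robot, with no hardware in the loop.

```python
# Sketch: replay a time-stamped event log into a stand-in for the
# layer-(4) control library. Event names and the controller are
# hypothetical, for illustration only.
import heapq

class StubController:
    """Stands in for the robot control library under test."""
    def __init__(self):
        self.log = []

    def handle(self, t, name, value):
        # A real library would dispatch to registered handlers here.
        self.log.append((t, name, value))

def replay(events, controller):
    """Feed (time, name, value) events to the controller in time order."""
    heapq.heapify(events)
    while events:
        t, name, value = heapq.heappop(events)
        controller.handle(t, name, value)

ctl = StubController()
replay([(0.020, "adc", 512),
        (0.010, "timer", 0),
        (0.030, "contact", "leg3")], ctl)
print([name for (_, name, _) in ctl.log])  # -> ['timer', 'adc', 'contact']
```

The same log can be replayed deterministically as often as we like, which is exactly what makes hardware-free debugging of layers (4) and up practical.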

(5) Layer (5) is a "control-systems"-centric layer. I guess this is what we really want to tune in simulation. If we can make some reasonable (or even conservative) assumptions on the overall "responsiveness" of layer (4), we can try to get some of these algorithms in before a "high quality" physical simulation is available. This is where the templates come into their own, allowing us to build control models independent of the anchor details.
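One way to picture the "conservative responsiveness assumption" is to tune a loop with deliberately stale feedback. This sketch is purely illustrative -- the first-order plant, the gain, and the delay are assumptions, not any of our actual behavioral primitives.

```python
# Sketch: a proportional control loop where the measurement reaches the
# controller several control periods late, modeling a conservative bound
# on layer-(4) "responsiveness". Plant, gain, and delay are illustrative.
from collections import deque

def run(delay_steps, kp=0.5, steps=50, dt=0.01):
    """Drive a first-order plant x' = u toward 0 with delayed feedback."""
    x = 1.0
    history = deque([x] * delay_steps, maxlen=delay_steps or 1)
    for _ in range(steps):
        measured = history[0] if delay_steps else x  # stale measurement
        u = -kp * measured                           # proportional control
        history.append(x)
        x += u * dt
    return x

# A loop tuned to tolerate delay still converges when feedback is stale:
print(run(delay_steps=0), run(delay_steps=5))
```

If the controller stabilizes the plant under the pessimistic delay, it should do no worse once the real layer (4) turns out to be faster -- which is the point of tuning against conservative assumptions.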

-- ShaiRevzen - 11 Jun 2003

(additional clarification) Part of what I am trying to say is that the "knee jerk" reaction of looking for the most exact physical simulation in order to design a system is often wrong. Given a limit on design resources, it is sometimes better to assume that our control (software) is poorly coupled with our hardware (robot+world) -- e.g. events arrive with a large timing variance, measurements are inaccurate, etc. -- and that we therefore have to come up with a more robust controller. This controller will have a better chance of working in the real world because it doesn't depend on hidden assumptions (or bugs) built into our simulation.

Additional points about this:

* In this case we can also use a simpler, rougher simulation. This means we may be able to run it faster than real time, further speeding up the design-build-test cycle.

* A subtle point: the "real world" must lie somewhere within our noisy simulation's distribution. This means that we sometimes want to add noise to the simulation, just to blank out our systematic errors.
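The deliberate-noise point above can be sketched as a wrapper around an idealized sensor model. The sensor, the noise levels, and the notion of a per-run bias are all illustrative assumptions, not a measured characterization of any real device.

```python
# Sketch: inject both a random bias (standing in for unknown systematic
# error) and per-reading noise into an idealized sensor model, so the
# simulation's output distribution covers the real world. All values
# here are illustrative assumptions.
import random

def ideal_range_sensor(true_distance):
    """Deterministic (and therefore optimistically exact) sensor model."""
    return true_distance

def noisy_range_sensor(true_distance, bias_sd=0.02, noise_sd=0.05, rng=random):
    """Same sensor, with random bias and per-reading noise injected."""
    bias = rng.gauss(0.0, bias_sd)   # blanks out systematic model errors
    return true_distance + bias + rng.gauss(0.0, noise_sd)

readings = [noisy_range_sensor(1.0) for _ in range(1000)]
mean = sum(readings) / len(readings)
print(min(readings) < 1.0 < max(readings))  # true distance lies in the spread
```

A controller tuned against the noisy model cannot silently exploit the ideal model's false exactness, which is precisely the robustness argument made above.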

 