Toward a reconciliation of functional and structural approaches to robot autonomy
PhD thesis of Eric Dedieu (1995)
Robot autonomy will be achieved when robots can act in complex environments without the need for human intervention.
However, the traditional methods of robot programming rely on models with very restrictive conditions of validity. The problem of inexpectation arises when these conditions are not met in the actual situation.
We argue that robot autonomy cannot be achieved without a systematic way of taking inexpectation into account, and we explain why the classical hierarchical, behavioural, or adaptive approaches to robotics are too limited to tackle this problem in natural, not carefully controlled environments.
We then suggest three paths for escaping some of these limits.
Our first point is theoretical: a robot should be able to acknowledge and model its own partial ignorance of its environment. To this end we advocate the theory of “probability as logic” (Jaynes 1995) as a fundamental framework.
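The “probability as logic” stance can be illustrated by a minimal Bayesian update, in which a robot makes its partial ignorance explicit as a prior and revises it in the light of a noisy observation. The scenario and numbers below are illustrative assumptions, not taken from the thesis:

```python
# Minimal sketch (illustrative, not from the thesis): a robot reasons
# about whether a door is open from one noisy sensor reading, treating
# probability as an extension of logic under partial ignorance.

def update(prior_open, p_reading_given_open, p_reading_given_closed):
    """Posterior P(open | reading) via Bayes' rule."""
    numerator = p_reading_given_open * prior_open
    evidence = numerator + p_reading_given_closed * (1.0 - prior_open)
    return numerator / evidence

# Total ignorance about the door: a uniform prior.
belief = 0.5
# Hypothetical sensor model: it reports "open" for 80% of open doors,
# but also (falsely) for 10% of closed doors. The sensor says "open".
belief = update(belief, 0.8, 0.1)
# The belief rises well above 0.5, yet remains short of certainty,
# reflecting the residual ignorance the framework keeps explicit.
```

The point of the sketch is that the robot never asserts the door's state outright; its model carries the uncertainty forward, which is what allows unexpected readings to be absorbed rather than to break a brittle assumption.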
Our second point is methodological: we propose an incremental approach to building robots, i.e. a systematic method for structural evolution whose motor is the occurrence of unexpected events. The concern underlying this approach is the origin and genesis of representations rather than their performance.
Our third and last point is conceptual: we propose a notion of “contingent representation”, which defines a representation by its structure rather than by its function. The representational capacity is intrinsic to the structure, but the representational content (its interpretation) is context-dependent. The classical notion of representation has led some authors to reject the very notion of representation – thereby also giving up an indispensable guide for design.
Contingent representation is an attempt to tackle the problem of design within approaches as yet unexploited in AI, such as that of operational closure.