In distributed autonomous robot systems, each robot (predator or prey) must cooperate with the other robots to carry out a given task. Each robot therefore needs the capacity for both learning and evolution in order to adapt to a dynamic environment. This paper develops a pursuit system that applies an emergent characteristic of artificial life to machine learning: a neural network represents each predator and prey robot, and a model is proposed to evolve its structure.
The input to the neural network is determined by the presence of other robots and the distance to them; the output determines the direction in which the robot moves. The connection weights of this network are encoded as genes, and a genetic algorithm evolves them according to their fitness values.
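As a rough illustration of this encoding, the sketch below stores the connection weights of a small feed-forward controller as a flat gene list and decodes it to map sensor inputs (presence of and distance to neighbors) to a movement direction. The layer sizes and activation function are illustrative assumptions, not taken from the paper.

```python
import math
import random

# Illustrative sizes (assumptions): two neighbors, each sensed as a
# presence flag and a distance; the output is a 2-D movement direction.
N_INPUTS = 4
N_HIDDEN = 3
N_OUTPUTS = 2

GENOME_LEN = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def decode(genome):
    """Split the flat gene list into input->hidden and hidden->output weights."""
    w1 = [genome[i * N_INPUTS:(i + 1) * N_INPUTS] for i in range(N_HIDDEN)]
    off = N_INPUTS * N_HIDDEN
    w2 = [genome[off + i * N_HIDDEN: off + (i + 1) * N_HIDDEN]
          for i in range(N_OUTPUTS)]
    return w1, w2

def forward(genome, inputs):
    """Map sensor inputs to a movement direction via one hidden layer."""
    w1, w2 = decode(genome)
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in w1]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in w2]

random.seed(0)
genome = [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
direction = forward(genome, [1.0, 0.5, 0.0, 0.0])
```

In a genetic algorithm, each individual would carry such a genome; mutation and crossover operate directly on the flat weight list, while fitness is evaluated by running the decoded controller in simulation.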
The validity of the system is verified through simulation, and the evolutionary strategies of several selection methods are compared. The author employs tournament selection, which yielded a higher degree of emergent robot behavior than the other methods. These techniques considerably reduce computation time, since no fuzzification is needed, and they can be applied to other complex application domains.
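Tournament selection itself is simple to sketch: sample a small group of individuals at random and keep the fittest. The function below is a generic illustration of the technique, not the paper's implementation; the tournament size `k` is an assumed parameter.

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Return the fittest of k individuals sampled without replacement."""
    contestants = rng.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitness[i])
    return population[best]

random.seed(1)
pop = ["a", "b", "c", "d", "e"]
fit = [0.1, 0.9, 0.4, 0.2, 0.7]
winner = tournament_select(pop, fit, k=3)
```

Larger `k` raises selection pressure toward the fittest individuals; smaller `k` preserves diversity, which is one reason tournament selection is often compared against roulette-wheel and rank-based schemes.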