Abstract
An object, such as a robot, is located at an initial state in a finite state space and moves under the control of the unsupervised neural network model of the invention. The network instructs the object to move in one of several directions from the initial state. Upon reaching another state, the model again instructs the object to move in one of several directions. These instructions continue until the cycle ends in one of two ways: a) the object returns to a state it has already visited during the cycle, or b) the object reaches the goal state. If the object returns to a previously visited state, the neural network model ends the cycle and immediately begins a new cycle from the present location. When the object reaches the goal state, the neural network model learns that the path taken is productive toward reaching the goal state and receives delayed reinforcement in the form of a "reward". Upon reaching each state, the neural network model calculates a level of satisfaction with its progress toward the goal state. If the level of satisfaction is low, the model is more likely to override what it has learned thus far and deviate from a path known to lead to the goal state, in order to experiment with new and possibly better paths.
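The cycle logic described above can be sketched in Python. This is a minimal illustration, not the invention's implementation: the names `neighbors`, `preferred`, and `satisfaction` are hypothetical stand-ins for, respectively, the moves available from a state, the network's learned next move, and its computed satisfaction level.

```python
import random

def run_cycle(start, goal, neighbors, preferred, satisfaction, rng):
    """Run one cycle: move until a state repeats or the goal is reached.

    neighbors(state)    -> list of reachable next states (hypothetical)
    preferred(state)    -> learned next state for this state, if any
    satisfaction(state) -> value in [0, 1]; low values favor exploration
    Returns (path, reached_goal).
    """
    path = [start]
    visited = {start}
    state = start
    while state != goal:
        options = neighbors(state)
        learned = preferred(state)
        # Low satisfaction makes the model more likely to override the
        # learned move and experiment with a new direction instead.
        if learned in options and rng.random() < satisfaction(state):
            nxt = learned
        else:
            nxt = rng.choice(options)
        if nxt in visited:
            # Revisited a state: this cycle ends; a new cycle would
            # begin immediately from the present location.
            return path + [nxt], False
        path.append(nxt)
        visited.add(nxt)
        state = nxt
    # Goal reached: the learned path would receive a delayed "reward".
    return path, True

# Toy example: five states on a line, goal at state 4, with a fully
# learned rightward path and maximal satisfaction at every state.
path, reached = run_cycle(
    0, 4,
    neighbors=lambda s: [max(s - 1, 0), min(s + 1, 4)],
    preferred=lambda s: min(s + 1, 4),
    satisfaction=lambda s: 1.0,
    rng=random.Random(0),
)
# path == [0, 1, 2, 3, 4], reached == True
```

With satisfaction fixed at 1.0 the model always follows the learned path; lowering the satisfaction value makes `rng.choice` fire more often, modeling the abstract's experimentation with new and possibly better paths.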