Abstract
An artificial neuron integrates current and prior information, each of which predicts the state of a part of the world. The neuron's output corresponds to the discrepancy between the two predictions, or prediction error. Inputs contributing prior information are selected so as to minimize the error, which can occur through an anti-Hebbian-type plasticity rule. Inputs contributing current information are selected so as to maximize the error, which can occur through a Hebbian-type rule. This ensures that the neuron receives new information from its external world that is not redundant with the prior information the neuron already possesses. By learning on its own to make predictions, a neuron, or a network of such neurons, acquires the information necessary to generate intelligent and advantageous outputs.
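The two opposing plasticity rules described above can be illustrated with a minimal numerical sketch. All names and parameter values here are illustrative assumptions, not taken from the paper: the neuron's output is modeled as the difference between a weighted "current" prediction and a weighted "prior" prediction, the prior weights adapt to shrink the error (anti-Hebbian in effect, since they enter with a minus sign), and the current weights adapt to grow it (Hebbian).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical weight vectors (illustrative, not from the paper):
w_current = rng.normal(0, 0.5, n)  # weights on current-information inputs
w_prior = np.zeros(n)              # weights on prior-information inputs

def prediction_error(x_cur, x_pri):
    # Neuron output: discrepancy between the two predictions.
    return w_current @ x_cur - w_prior @ x_pri

x = rng.normal(0, 1, n)       # a fixed world state seen through both pathways
eta = 0.1 / (x @ x)           # step size normalized for stable convergence

# Anti-Hebbian-style adaptation of the prior pathway: because the prior
# weights are subtracted in the output, strengthening them in proportion
# to (error * input) reduces the error over repeated presentations.
errors = []
for _ in range(200):
    e = prediction_error(x, x)
    w_prior += eta * e * x
    errors.append(abs(e))

# Hebbian-style adaptation of the current pathway drives the error back
# up: the neuron comes to favor inputs not yet predicted by its prior.
heb_errors = []
for _ in range(50):
    e = prediction_error(x, x)
    w_current += eta * e * x
    heb_errors.append(abs(e))

print(errors[0] > errors[-1])          # prints True: prior learning shrinks the error
print(heb_errors[-1] > heb_errors[0])  # prints True: Hebbian updates regrow it
```

With a fixed input, each anti-Hebbian step multiplies the error by a factor below one and each Hebbian step by a factor above one, which makes the minimize/maximize tension between the two pathways directly visible.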