Abstract
A dual-precision training process reduces a neural network's requirements for storing internodal weight values. During forward propagation of training samples, low-resolution weight values are employed; during back-propagation of errors to train the network, higher-resolution values are used. After training, only the lower-resolution values need be stored for run-time operation, thereby reducing memory requirements.
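The scheme described can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a simple uniform quantizer, a linear single-layer network, and plain gradient descent. The forward pass reads the low-resolution copy of the weights, while gradient updates are accumulated into a separate high-resolution copy; after training, only the quantized copy would be retained.

```python
import numpy as np

def quantize(w, bits=8):
    # Uniform symmetric quantization to the given bit width
    # (an assumed low-resolution format; the source does not specify one).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
# High-resolution master weights, kept only during training.
W = rng.normal(size=(2, 1)) * 0.5

# Synthetic regression data for illustration.
X = rng.normal(size=(64, 2))
y = X @ np.array([[1.5], [-0.7]])

lr = 0.05
for _ in range(500):
    Wq = quantize(W)            # forward pass uses low-resolution weights
    err = X @ Wq - y
    grad = X.T @ err / len(X)   # error back-propagated at full resolution
    W -= lr * grad              # update applied to the high-resolution copy

# After training, only the low-resolution weights are stored for run-time use.
W_deploy = quantize(W)
```

Because the high-resolution copy absorbs small gradient steps that would vanish if applied directly to the coarse weights, training can still converge; the high-resolution copy is then discarded.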