Invention Title |
TRAINING DEVICE, SPEECH DETECTION DEVICE, TRAINING METHOD, AND COMPUTER PROGRAM PRODUCT |
Abstract |
According to an embodiment, a training device trains a neural network that outputs a posterior probability that an input signal belongs to a particular class. An output layer of the neural network includes N units respectively corresponding to classes and one additional unit. The training device includes a propagator, a probability calculator, and an updater. The propagator supplies a sample signal to the neural network and acquires (N+1) input values for each unit at the output layer. The probability calculator supplies the input values to a function to generate a probability vector including (N+1) probability values respectively corresponding to the units at the output layer. The updater updates a parameter included in the neural network in such a manner as to reduce an error between a teacher vector including (N+1) target values and the probability vector. A target value corresponding to the additional unit is a predetermined constant value. |
Publication Number |
US2017076200(A1) |
Publication Date |
2017.03.16 |
Application Number |
US201615257463 |
Filing Date |
2016.09.06 |
Applicant |
Kabushiki Kaisha Toshiba |
Inventor |
NASU Yu |
IPC Classification |
G06N3/08;G06N3/04 |
Main Classification |
G06N3/08 |
Agency |
|
Agent |
|
Principal Claim |
1. A training device configured to train a neural network that outputs a posterior probability that an input signal belongs to a particular class, an output layer of the neural network including N units respectively corresponding to classes and one additional unit, N being an integer of 2 or larger,
the device comprising:
a propagator configured to supply a sample signal to the neural network, and to acquire, for each of the units at the output layer, (N+1) input values that are obtained by connecting signals output from a layer immediately preceding the output layer according to a set parameter;
a probability calculator configured to supply the input values to a function for calculating the posterior probability to generate a probability vector including (N+1) probability values respectively corresponding to the units at the output layer; and
an updater configured to update the parameter included in the neural network in such a manner as to reduce an error between a teacher vector and the probability vector, the teacher vector including (N+1) target values respectively corresponding to the units at the output layer, wherein a target value corresponding to the additional unit is a predetermined constant value. |
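The claimed cycle (propagator → probability calculator → updater) can be sketched in plain NumPy. This is one possible illustrative reading, not the patent's implementation: the softmax as the posterior-probability function, cross-entropy as the error, the constant target `C = 0.1` for the additional unit, the layer sizes, and the plain gradient-descent update are all assumptions introduced here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 3    # number of classes (N >= 2, per the claim)
D = 5    # size of the layer immediately preceding the output layer (assumed)
C = 0.1  # assumed predetermined constant target for the additional unit

# Parameter connecting the preceding layer to the (N+1)-unit output layer.
W = rng.normal(scale=0.1, size=(N + 1, D))
b = np.zeros(N + 1)

def softmax(z):
    # Numerically stable softmax over the (N+1) input values.
    e = np.exp(z - z.max())
    return e / e.sum()

# A toy sample signal x with class label 1.
x = rng.normal(size=D)
label = 1

# Teacher vector: one-hot over the N classes, constant C for the extra unit.
# Note it need not sum to 1 because of the constant entry.
t = np.zeros(N + 1)
t[label] = 1.0
t[N] = C

losses = []
lr = 0.1
for _ in range(100):
    z = W @ x + b                                   # propagator: (N+1) input values
    p = softmax(z)                                  # probability calculator
    losses.append(-np.sum(t * np.log(p + 1e-12)))   # cross-entropy error
    # Gradient of the cross-entropy w.r.t. the logits for a general
    # (not necessarily normalized) target vector t: dL/dz = p * sum(t) - t.
    dz = p * t.sum() - t
    W -= lr * np.outer(dz, x)                       # updater: reduce the error
    b -= lr * dz

print(losses[0], losses[-1])
```

Over the iterations the error decreases toward the configuration `p = t / t.sum()`, so the additional unit absorbs a fixed share of the probability mass while the class units still sum to a usable posterior.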
Address |
Minato-ku JP |