Title of Invention
TOKEN-LEVEL INTERPOLATION FOR CLASS-BASED LANGUAGE MODELS
Abstract
Optimized language models are provided for in-domain applications through an iterative, joint-modeling approach that interpolates a language model (LM) from a number of component LMs according to interpolation weights optimized for a target domain. The component LMs may include class-based LMs, and the interpolation may be context-specific or context-independent. Through iterative processes, the component LMs may be interpolated and used to express training material as alternative representations or parses of tokens. Posterior probabilities may be determined for these parses and used for determining new (or updated) interpolation weights for the LM components, such that a combination or interpolation of component LMs is further optimized for the domain. The component LMs may be merged, according to the optimized weights, into a single, combined LM, for deployment in an application scenario.
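The iterative re-estimation described in the abstract (interpolate component LMs, compute posterior probabilities over the training tokens, then update the interpolation weights) follows the same pattern as standard EM estimation of linear interpolation weights. Below is a minimal sketch of that pattern, assuming per-token probabilities from each component LM are already available; all function and variable names, and the toy probabilities in the usage example, are illustrative assumptions and not taken from the patent.

```python
def estimate_weights(token_probs, iterations=20):
    """EM-style estimation of linear interpolation weights.

    token_probs: one tuple per training token, holding
    p_i(token | history) under each of the k component LMs.
    Returns a list of k weights summing to 1.
    """
    k = len(token_probs[0])
    weights = [1.0 / k] * k  # start from a uniform interpolation

    for _ in range(iterations):
        # E-step: posterior responsibility of each component for each token
        counts = [0.0] * k
        for probs in token_probs:
            mix = sum(w * p for w, p in zip(weights, probs))
            for i in range(k):
                counts[i] += weights[i] * probs[i] / mix
        # M-step: renormalize expected counts into updated weights
        total = sum(counts)
        weights = [c / total for c in counts]

    return weights


# Usage (toy data): two component LMs scored over three tokens.
weights = estimate_weights([(0.5, 0.1), (0.4, 0.2), (0.01, 0.3)])
```

Each iteration provably does not decrease the training-data likelihood under the mixture, which is why this kind of update converges to weights optimized for the target-domain material.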
Publication Number
WO2016144988(A1)
Publication Date
2016.09.15
Application Number
WO2016US21416
Filing Date
2016.03.09
Applicant
MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors
LEVIT, Michael; PARTHASARATHY, Sarangarajan; STOLCKE, Andreas; CHANG, Shuangyu
Classification (IPC)
G10L15/06; G10L15/18; G10L15/197
Main Classification
G10L15/06
Patent Agency

Agent

Main Claim

Address
