    • 2. Granted invention patent
    • Exploiting sparseness in training deep neural networks
    • Publication number: US08700552B2
    • Publication date: 2014-04-15
    • Application number: US13305741
    • Filing date: 2011-11-28
    • Inventors: Dong Yu, Li Deng, Frank Torsten Bernd Seide, Gang Li
    • G06F15/18, G06N3/08
    • G06N3/08
    • Deep Neural Network (DNN) training technique embodiments are presented that train a DNN while exploiting the sparseness of non-zero hidden layer interconnection weight values. Generally, a fully connected DNN is initially trained by sweeping through a full training set a number of times. Then, for the most part, only the interconnections whose weight magnitudes exceed a minimum weight threshold are considered in further training. This minimum weight threshold can be established as a value that results in only a prescribed maximum number of interconnections being considered when setting interconnection weight values via an error back-propagation procedure during the training. It is noted that the continued DNN training tends to converge much faster than the initial training.
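The pruning scheme described in the abstract above can be illustrated with a short sketch. The Python/NumPy code below is a minimal, hypothetical illustration, not the patented implementation: it keeps only a prescribed maximum number of largest-magnitude interconnections (which implicitly fixes the minimum-weight threshold) and lets continued back-propagation update only the surviving weights. Function names and the toy sizes are invented.

```python
import numpy as np

def sparseness_mask(W, max_connections):
    """Keep only the `max_connections` largest-magnitude weights.

    The smallest surviving magnitude plays the role of the minimum weight
    threshold; ties at the threshold may keep a few extra weights.
    """
    flat = np.abs(W).ravel()
    if max_connections >= flat.size:
        return np.ones_like(W, dtype=bool)
    kth = flat.size - max_connections
    threshold = np.partition(flat, kth)[kth]
    return np.abs(W) >= threshold

def masked_sgd_step(W, grad, mask, lr=0.1):
    """One continued-training update: only surviving interconnections move;
    pruned weights stay exactly zero."""
    W = W - lr * grad
    W[~mask] = 0.0
    return W

# Toy demonstration on one hidden-layer weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))          # weights after the initial full sweeps
mask = sparseness_mask(W, max_connections=16)   # prescribed maximum number of connections
W[~mask] = 0.0                                  # prune below-threshold interconnections
fake_grad = rng.normal(size=W.shape)            # stand-in for a back-propagated gradient
W = masked_sgd_step(W, fake_grad, mask)
print("non-zero weights after one continued step:", np.count_nonzero(W))
```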
    • 3. Invention patent application
    • COMPUTER-IMPLEMENTED DEEP TENSOR NEURAL NETWORK
    • Publication number: US20140067735A1
    • Publication date: 2014-03-06
    • Application number: US13597268
    • Filing date: 2012-08-29
    • Inventors: Dong Yu, Li Deng, Frank Seide
    • G06N3/08
    • G06N3/02, G06N3/04, G06N3/0454, G06N3/084
    • A deep tensor neural network (DTNN) is described herein, wherein the DTNN is suitable for employment in a computer-implemented recognition/classification system. Hidden layers in the DTNN comprise at least one projection layer, which includes a first subspace of hidden units and a second subspace of hidden units. The first subspace of hidden units receives a first nonlinear projection of input data to a projection layer and generates the first set of output data based at least in part thereon, and the second subspace of hidden units receives a second nonlinear projection of the input data to the projection layer and generates the second set of output data based at least in part thereon. A tensor layer, which can be converted into a conventional layer of a DNN, generates the third set of output data based upon the first set of output data and the second set of output data.
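As a rough illustration of the layer structure described in the abstract above, the sketch below (Python/NumPy, with invented names and sizes) forms two nonlinear projections of the input, one per hidden-unit subspace, and feeds the vectorised outer product of the two subspace activations through a weight matrix, which is one way a tensor layer can be rewritten as a conventional layer. It is a minimal sketch under those assumptions, not the patented architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def projection_and_tensor_layer(x, U1, U2, W):
    """Hypothetical forward pass through one projection layer and the tensor
    layer that follows it.
    U1, U2: projection weights for the two hidden-unit subspaces.
    W:      tensor-layer weights applied to the vectorised outer product of
            the two subspace activations."""
    h1 = sigmoid(U1 @ x)               # first nonlinear projection / first subspace
    h2 = sigmoid(U2 @ x)               # second nonlinear projection / second subspace
    cross = np.outer(h1, h2).ravel()   # bilinear (tensor) interaction of the subspaces
    return sigmoid(W @ cross)          # third set of output data

rng = np.random.default_rng(1)
x = rng.normal(size=20)                       # input to the projection layer
U1 = rng.normal(scale=0.1, size=(5, 20))      # 5 hidden units in the first subspace
U2 = rng.normal(scale=0.1, size=(4, 20))      # 4 hidden units in the second subspace
W = rng.normal(scale=0.1, size=(10, 5 * 4))   # tensor layer as a conventional layer
print(projection_and_tensor_layer(x, U1, U2, W).shape)   # -> (10,)
```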
    • 7. Invention patent application
    • Noise Suppressor for Robust Speech Recognition
    • Publication number: US20100153104A1
    • Publication date: 2010-06-17
    • Application number: US12335558
    • Filing date: 2008-12-16
    • Inventors: Dong Yu, Li Deng, Yifan Gong, Jian Wu, Alejandro Acero
    • G10L15/20
    • G10L21/0208, G10L15/20
    • Described is noise reduction technology generally for speech input in which a noise-suppression related gain value for the frame is determined based upon a noise level associated with that frame in addition to the signal to noise ratios (SNRs). In one implementation, a noise reduction mechanism is based upon minimum mean square error, Mel-frequency cepstra noise reduction technology. A high gain value (e.g., one) is set to accomplish little or no noise suppression when the noise level is below a threshold low level, and a low gain value set or computed to accomplish large noise suppression above a threshold high noise level. A noise-power dependent function, e.g., a log-linear interpolation, is used to compute the gain between the thresholds. Smoothing may be performed by modifying the gain value based upon a prior frame's gain value. Also described is learning parameters used in noise reduction via a step-adaptive discriminative learning algorithm.
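Below is a minimal sketch of the noise-dependent gain schedule described in the abstract above, assuming linear-power noise thresholds and a simple first-order smoother against the previous frame's gain; the function name, thresholds, and constants are placeholders, not values from the patent.

```python
import numpy as np

def noise_dependent_gain(noise_power, low_thresh, high_thresh,
                         min_gain=0.1, prev_gain=None, smooth=0.5):
    """Gain schedule: unity gain below the low noise threshold, `min_gain`
    above the high threshold, log-linear interpolation in between, then
    optional smoothing with the previous frame's gain."""
    if noise_power <= low_thresh:
        gain = 1.0
    elif noise_power >= high_thresh:
        gain = min_gain
    else:
        # Interpolate log(gain) linearly in log(noise power); log(1.0) == 0
        # at the low end, log(min_gain) at the high end.
        t = (np.log(noise_power) - np.log(low_thresh)) / \
            (np.log(high_thresh) - np.log(low_thresh))
        gain = float(np.exp(t * np.log(min_gain)))
    if prev_gain is not None:
        gain = smooth * prev_gain + (1.0 - smooth) * gain
    return gain

# Per-frame usage: carry the previous gain so suppression changes smoothly.
gains, prev = [], None
for noise_power in (0.5, 2.0, 8.0, 40.0):        # example frame noise powers (linear)
    prev = noise_dependent_gain(noise_power, low_thresh=1.0, high_thresh=20.0,
                                prev_gain=prev)
    gains.append(round(float(prev), 3))
print(gains)
```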
    • 8. Invention patent application
    • PIECEWISE-BASED VARIABLE-PARAMETER HIDDEN MARKOV MODELS AND THE TRAINING THEREOF
    • Publication number: US20100070279A1
    • Publication date: 2010-03-18
    • Application number: US12211114
    • Filing date: 2008-09-16
    • Inventors: Dong Yu, Li Deng, Yifan Gong, Alejandro Acero
    • G10L15/14
    • G10L15/144
    • A speech recognition system uses Gaussian mixture variable-parameter hidden Markov models (VPHMMs) to recognize speech under many different conditions. Each Gaussian mixture component of the VPHMMs is characterized by a mean parameter μ and a variance parameter Σ. Each of these Gaussian parameters varies as a function of at least one environmental conditioning parameter, such as, but not limited to, instantaneous signal-to-noise-ratio (SNR). The way in which a Gaussian parameter varies with the environmental conditioning parameter(s) can be approximated as a piecewise function, such as a cubic spline function. Further, the recognition system formulates the mean parameter μ and the variance parameter Σ of each Gaussian mixture component in an efficient form that accommodates the use of discriminative training and parameter sharing. Parameter sharing is carried out so that the otherwise very large number of parameters in the VPHMMs can be effectively reduced with practically feasible amounts of training data.
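To make the variable-parameter idea concrete, the sketch below models a single Gaussian component whose mean and variance are cubic-spline functions of instantaneous SNR, the kind of piecewise dependence the abstract above describes. The knot values are invented purely for illustration, and the sketch shows only the evaluation of the SNR-dependent parameters, not the patented discriminative training or parameter sharing.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# One Gaussian mixture component whose mean and variance vary with SNR.
# Knot values are illustrative placeholders, not trained parameters.
snr_knots  = np.array([-5.0, 0.0, 5.0, 10.0, 20.0])   # conditioning parameter (dB)
mean_knots = np.array([1.8, 1.2, 0.9, 0.7, 0.6])      # mean at each knot
var_knots  = np.array([0.9, 0.7, 0.5, 0.45, 0.4])     # variance at each knot

mean_of_snr = CubicSpline(snr_knots, mean_knots)      # piecewise (cubic-spline) mean
var_of_snr  = CubicSpline(snr_knots, var_knots)       # piecewise (cubic-spline) variance

def gaussian_loglik(x, snr):
    """Log-likelihood of observation x under the SNR-dependent Gaussian."""
    mu, var = float(mean_of_snr(snr)), float(var_of_snr(snr))
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

for snr in (0.0, 7.5, 15.0):
    mu, var = float(mean_of_snr(snr)), float(var_of_snr(snr))
    print(f"SNR={snr:5.1f} dB  mean={mu:.3f}  variance={var:.3f}  "
          f"loglik(x=1.0)={gaussian_loglik(1.0, snr):.3f}")
```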