    • 3. Invention application
    • SPIKING MODEL TO LEARN ARBITRARY MULTIPLE TRANSFORMATIONS FOR A SELF-REALIZING NETWORK
    • US20150026110A1
    • 2015-01-22
    • US14015001
    • 2013-08-30
    • HRL LABORATORIES, LLC
    • Narayan Srinivasa; Youngkwan Cho
    • G06N3/08
    • G06N3/08; G06N3/049
    • A neural network, wherein a portion of the neural network comprises: a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and a second array having a second number of neurons, wherein the second number is smaller than the first number, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.
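The claimed topology — a large first array of tuning-curve neurons feeding a smaller second array through excitatory STDP synapses, with lateral excitatory STDP synapses among second-array neighbours — can be sketched with a standard pair-based STDP rule. This is a minimal illustrative sketch only, not the patent's implementation; the names (`stdp`, `w_ff`, `w_lat`) and the specific learning-rule constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# First array: many neurons, each tuned to a preferred value of the measured
# parameter (a neuron spikes when the parameter nears its assigned value).
n_first = 16
# Second array: fewer neurons, per the claim (second number < first number).
n_second = 4

# Feedforward excitatory synapses (first array -> second array) and lateral
# excitatory synapses between neighbouring second-array neurons.
w_ff = rng.uniform(0.2, 0.5, size=(n_second, n_first))
w_lat = rng.uniform(0.1, 0.3, size=(n_second, n_second))
np.fill_diagonal(w_lat, 0.0)  # no self-connections

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

# Illustrative update: first-array neuron 3 fires 5 ms before
# second-array neuron 1, so that synapse is potentiated.
pre, post, dt = 3, 1, 5.0
w_ff[post, pre] += stdp(dt)
w_ff = np.clip(w_ff, 0.0, 1.0)  # keep all synapses excitatory (non-negative)
```

The clipping step reflects the claim's restriction to *excitatory* synapses: weights never go negative under depression.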
    • 9. Invention application
    • NEURAL MODEL FOR REINFORCEMENT LEARNING
    • US20140344202A1
    • 2014-11-20
    • US14293928
    • 2014-06-02
    • HRL LABORATORIES LLC
    • Corey M. THIBEAULT; Narayan Srinivasa
    • G06N3/08
    • G06N3/08; G06N3/04; G06N3/049; G06N99/005
    • A neural model for reinforcement-learning and for action-selection includes a plurality of channels, a population of input neurons in each of the channels, a population of output neurons in each of the channels, each population of input neurons in each of the channels coupled to each population of output neurons in each of the channels, and a population of reward neurons in each of the channels. Each channel of a population of reward neurons receives input from an environmental input, and is coupled only to output neurons in a channel that the reward neuron is part of. If the environmental input for a channel is positive, the corresponding channel of a population of output neurons are rewarded and have their responses reinforced, otherwise the corresponding channel of a population of output neurons are punished and have their responses attenuated.
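The channel structure in this abstract — per-channel reward neurons coupled only to their own channel's output population, reinforcing on positive environmental input and attenuating otherwise — can be sketched as a reward-gated weight update. This is an illustrative sketch under stated assumptions, not the patent's model; `apply_reward`, the multiplicative update, and the learning rate are hypothetical.

```python
import numpy as np

n_channels = 3
n_in, n_out = 8, 8
rng = np.random.default_rng(1)

# Per the abstract, each channel's input population projects to each
# channel's output population: weights indexed (in-channel, out-channel).
w = rng.uniform(0.1, 0.3, size=(n_channels, n_channels, n_in, n_out))

def apply_reward(w, channel, env_input, lr=0.05):
    """Reward neurons of one channel act only on that channel's outputs:
    positive environmental input reinforces responses, otherwise they
    are punished and attenuated."""
    w = w.copy()
    if env_input > 0:
        w[:, channel] *= (1.0 + lr)  # reinforce this channel's outputs
    else:
        w[:, channel] *= (1.0 - lr)  # attenuate this channel's outputs
    return w

rewarded = apply_reward(w, channel=0, env_input=+1)   # channel 0 reinforced
punished = apply_reward(w, channel=1, env_input=-1)   # channel 1 attenuated
```

Note that only the targeted channel's column of weights changes, mirroring the claim that each reward population is coupled solely to the output neurons of its own channel.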