    • 4. Granted invention patent
    • High precision computing with charge domain devices and a pseudo-spectral method therefor
    • Publication number: US5680515A
    • Grant date: 1997-10-21
    • Application number: US534537
    • Filing date: 1995-09-27
    • Jacob Barhen, Nikzad Toomarian, Amir Fijany, Michail Zak
    • G06F17/16, G06N3/063, G06F15/00
    • G06F17/16, G06N3/063
    • The present invention enhances the bit resolution of a CCD/CID MVM processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In another aspect of the invention, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. In a further aspect of the invention, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
    • 5. Granted invention patent
    • Signal processing applications of massively parallel charge domain computing devices
    • Publication number: US5952685A
    • Grant date: 1999-09-14
    • Application number: US598900
    • Filing date: 1996-02-09
    • Amir Fijany, Jacob Barhen, Nikzad Toomarian
    • G06F17/16, G06N3/063, H01L29/76, G06G7/00, G11C19/18, H01L27/148
    • G06F17/16, G06N3/063
    • The present invention is embodied in a charge coupled device (CCD)/charge injection device (CID) architecture capable of performing a Fourier transform by simultaneous matrix vector multiplication (MVM) operations in respective plural CCD/CID arrays in parallel in O(1) steps. For example, in one embodiment, a first CCD/CID array stores charge packets representing a first matrix operator based upon permutations of a Hartley transform and computes the Fourier transform of an incoming vector. A second CCD/CID array stores charge packets representing a second matrix operator based upon different permutations of a Hartley transform and computes the Fourier transform of an incoming vector. The incoming vector is applied to the inputs of the two CCD/CID arrays simultaneously, and the real and imaginary parts of the Fourier transform are produced simultaneously in the time required to perform a single MVM operation in a CCD/CID array.
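The two-operator idea in the abstract — one Hartley-derived matrix for the real part of the Fourier transform, a row-permuted counterpart for the imaginary part, both applied to the same input — can be sketched numerically. This is an illustrative reconstruction from the standard Hartley-transform identities, not the patent's actual charge-domain operators, and all names here are invented:

```python
import math

def dft_via_hartley_matrices(x):
    """Build two fixed matrices from the Hartley kernel
    cas(t) = cos(t) + sin(t): one matrix-vector multiply yields the
    real part of the DFT, the other (rows permuted k -> (N - k) mod N)
    yields the imaginary part."""
    N = len(x)
    cas = [[math.cos(2 * math.pi * k * n / N) + math.sin(2 * math.pi * k * n / N)
            for n in range(N)] for k in range(N)]
    cas_perm = [cas[(N - k) % N] for k in range(N)]
    # Re F[k] = (H[k] + H[N-k]) / 2,  Im F[k] = (H[N-k] - H[k]) / 2,
    # folded directly into the two matrix operators.
    re_op = [[0.5 * (cas[k][n] + cas_perm[k][n]) for n in range(N)]
             for k in range(N)]
    im_op = [[0.5 * (cas_perm[k][n] - cas[k][n]) for n in range(N)]
             for k in range(N)]

    def mvm(M, v):
        return [sum(M[k][n] * v[n] for n in range(N)) for k in range(N)]

    return mvm(re_op, x), mvm(im_op, x)
```

Applying both operators to the input vector simultaneously is what lets the device deliver the real and imaginary parts in the time of a single MVM operation.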
    • 6. Granted invention patent
    • High precision computing with charge domain devices and a pseudo-spectral method therefor
    • Publication number: US5491650A
    • Grant date: 1996-02-13
    • Application number: US49829
    • Filing date: 1993-04-19
    • Jacob Barhen, Nikzad Toomarian, Amir Fijany, Michail Zak
    • G06F17/16, G06N3/063, G06J1/00, G06F7/52
    • G06F17/16, G06N3/063
    • The present invention discloses increased bit resolution of a charge coupled device (CCD)/charge injection device (CID) matrix vector multiplication (MVM) processor by storing each bit of each matrix element as a separate CCD charge packet. The bits of each input vector are separately multiplied by each bit of each matrix element in massive parallelism and the resulting products are combined appropriately to synthesize the correct product. In addition, such arrays are employed in a pseudo-spectral method of the invention, in which partial differential equations are solved by expressing each derivative analytically as matrices, and the state function is updated at each computation cycle by multiplying it by the matrices. The matrices are treated as synaptic arrays of a neural network and the state function vector elements are treated as neurons. Further, moving target detection is performed by driving the soliton equation with a vector of detector outputs. The neural architecture consists of two synaptic arrays corresponding to the two differential terms of the soliton equation and an adder connected to the output thereof and to the output of the detector array to drive the soliton equation.
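The pseudo-spectral step — express the spatial derivative as a fixed matrix, then advance the state each cycle by a matrix-vector multiplication — can be sketched with the standard periodic spectral differentiation matrix. This is a generic textbook construction used for illustration, not the patent's specific matrices, and the function names are invented:

```python
import math

def fourier_diff_matrix(N):
    """Periodic spectral differentiation matrix on the grid
    x_j = 2*pi*j/N (N even): multiplying a grid vector u by D
    approximates du/dx at the grid points.  Treating D as a fixed
    'synaptic array' makes each derivative evaluation one MVM."""
    D = [[0.0] * N for _ in range(N)]
    for j in range(N):
        for k in range(N):
            if j != k:
                D[j][k] = 0.5 * (-1) ** (j - k) / math.tan((j - k) * math.pi / N)
    return D

def step(u, D, c, dt):
    """One explicit Euler update of the advection equation
    u_t = -c * u_x: the state vector is advanced by multiplying it
    by the derivative matrix, as the abstract describes."""
    N = len(u)
    du = [sum(D[j][k] * u[k] for k in range(N)) for j in range(N)]
    return [u[j] - c * dt * du[j] for j in range(N)]
```

In the neural-network reading of the abstract, the rows of `D` are the synaptic weights and the entries of `u` are the neuron states, so each time step of the PDE solver maps onto one pass through the CCD/CID array.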
    • 7. Granted invention patent
    • Neural network training by integration of adjoint systems of equations forward in time
    • Publication number: US5930781A
    • Grant date: 1999-07-27
    • Application number: US969868
    • Filing date: 1992-10-27
    • Nikzad Toomarian, Jacob Barhen
    • G06N3/04, G06N3/08, G06F15/18
    • G06N3/049, G06N3/08
    • A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve the two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications also becomes possible. The invention has been applied to two examples of representative complexity recently analyzed in the open literature, demonstrating that a circular trajectory can be learned in approximately 200 iterations, compared to the 12,000 reported in the literature. A figure-eight trajectory was achieved in under 500 iterations, compared to the 20,000 previously required. The trajectories computed using the new method are much closer to the target trajectories than was reported in previous studies.
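The abstract's key point is obtaining the gradient of a trajectory objective while integrating forward in time, rather than via a backward pass. As a minimal stand-in, the sketch below uses the closely related forward-sensitivity formulation on a scalar system x' = -w*x; this is an invented toy problem, not the patent's adjoint construction, but it shows a trajectory gradient produced in a single forward pass:

```python
def trajectory_gradient(w, x0, target, dt=0.01, steps=200):
    """Gradient of J = sum_t (x_t - target)^2 * dt for the Euler-
    discretized dynamics x' = -w * x, computed forward in time by
    integrating the sensitivity s = dx/dw (s' = -w*s - x) alongside
    the state, so no backward sweep over the trajectory is needed."""
    x, s, grad = x0, 0.0, 0.0
    for _ in range(steps):
        grad += 2.0 * (x - target) * s * dt   # accumulate dJ/dw
        x_next = x + dt * (-w * x)            # Euler step of the state
        s = s + dt * (-w * s - x)             # Euler step of dx/dw
        x = x_next
    return grad
```

The forward-sensitivity and adjoint formulations compute the same gradient; the patent's contribution is making the normally backward-in-time adjoint systems solvable forward in time, which is what enables real-time operation with bounded storage.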
    • 8. Granted invention patent
    • Fast temporal neural learning using teacher forcing
    • Publication number: US5428710A
    • Grant date: 1995-06-27
    • Application number: US908677
    • Filing date: 1992-06-29
    • Nikzad Toomarian, Jacob Barhen
    • G06N3/08, G06F15/18
    • G06N3/08
    • A neural network is trained to output a time-dependent target vector, defined over a predetermined time interval, in response to a time-dependent input vector defined over the same interval, by applying corresponding elements of the error vector (the difference between the target vector and the actual neuron output vector) to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
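The feedback mechanism — a fraction of the output error injected back into the output neurons' inputs — can be demonstrated with a one-neuron toy. This is an invented illustration, not the patent's network; `lam` plays the role of the feedback fraction that the abstract says is annealed toward zero as the parameters converge:

```python
import math

def rollout(w, targets, lam):
    """Run a single recurrent tanh neuron over a target trajectory,
    feeding a fraction `lam` of the previous step's output error
    (target - actual) back into the neuron's input.  Returns the
    mean squared tracking error over the trajectory."""
    y, err, total = 0.0, 0.0, 0.0
    for tgt in targets:
        y = math.tanh(w * y + lam * err)  # error feedback = teacher forcing
        err = tgt - y
        total += err * err
    return total / len(targets)
```

With `lam = 0` the neuron runs free and its error is simply the target's energy; with `lam > 0` the forced trajectory hugs the target, so the gradient steps computed along it are far more informative, which is why many fewer training cycles are needed.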