    • 3. Granted invention patent
    • Title: Compact multiport static random access memory cell
    • Publication number: US5754468A
    • Publication date: 1998-05-19
    • Application number: US673732
    • Application date: 1996-06-26
    • Inventor: Richard F. Hobson
    • Applicant: Richard F. Hobson
    • IPC: H01L 27/11; G11C 11/00
    • CPC: H01L 27/1104; Y10S 257/903
    • Abstract: A new static random access memory cell for standard logic CMOS processes with three or more metal layers is detailed. The method uses three P-type and three N-type MOS transistors to form a two-port memory cell, which can be configured to perform as a one-port or a two-port memory cell. In addition to standard memory applications, specialty memories, like a First-In First-Out (FIFO) buffer, which can benefit from the natural 2-port structure of our invention, are particularly appealing. Additional ports can be added for applications like a 3-port microprocessor register array. (See the behavioral FIFO sketch after this entry.)
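The abstract above singles out a FIFO buffer as a natural fit for a memory with independent read and write ports. The C sketch below is only a software analogy of that usage, not the patented transistor-level cell; all names (fifo_t, fifo_write, fifo_read, DEPTH) are illustrative assumptions.

/* Behavioral sketch (assumed model, not the patented circuit): a FIFO built on
 * a storage array with separate read and write ports, which is why a natural
 * 2-port cell is attractive for FIFO buffers. */
#include <stdio.h>
#include <stdbool.h>

#define DEPTH 8

typedef struct {
    int mem[DEPTH];   /* storage array standing in for the 2-port cells */
    int rd, wr;       /* independent read and write pointers            */
    int count;        /* current occupancy                              */
} fifo_t;

/* Write port: enqueue one word if the FIFO is not full. */
static bool fifo_write(fifo_t *f, int data)
{
    if (f->count == DEPTH) return false;
    f->mem[f->wr] = data;
    f->wr = (f->wr + 1) % DEPTH;
    f->count++;
    return true;
}

/* Read port: dequeue one word if the FIFO is not empty.
 * With true 2-port cells a read can overlap a write to another location. */
static bool fifo_read(fifo_t *f, int *data)
{
    if (f->count == 0) return false;
    *data = f->mem[f->rd];
    f->rd = (f->rd + 1) % DEPTH;
    f->count--;
    return true;
}

int main(void)
{
    fifo_t f = { .rd = 0, .wr = 0, .count = 0 };
    for (int i = 0; i < 5; i++)
        fifo_write(&f, i * 10);
    int v;
    while (fifo_read(&f, &v))
        printf("read %d\n", v);
    return 0;
}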
    • 5. Invention application
    • Title: Processor Cluster Architecture and Associated Parallel Processing Methods
    • Publication number: US20110047354A1
    • Publication date: 2011-02-24
    • Application number: US12940799
    • Application date: 2010-11-05
    • Inventors: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • Applicants: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • IPC: G06F 9/38; G06F 15/76; G06F 9/312
    • CPC: G06F 9/5033; G06F 9/445; G06F 9/5061; G06F 9/5066; G06F 2209/5012; G06F 2209/5017
    • Abstract: A parallel processing architecture comprising a cluster of embedded processors that share a common code distribution bus. Pages or blocks of code are concurrently loaded into respective program memories of some or all of these processors (typically all processors assigned to a particular task) over the code distribution bus, and are executed in parallel by these processors. A task control processor determines when all of the processors assigned to a particular task have finished executing the current code page, and then loads a new code page (e.g., the next sequential code page within a task) into the program memories of these processors for execution. The processors within the cluster preferably share a common memory (one per cluster) that is used to receive data inputs from, and to provide data outputs to, a higher-level processor. Multiple interconnected clusters may be integrated within a common integrated circuit device. (See the behavioral dispatch sketch after this entry.)
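The abstract above describes a page-synchronized dispatch: a task control processor broadcasts one code page over the code distribution bus to every processor assigned to a task, waits until all of them have finished, and only then loads the next page. The C sketch below is a minimal behavioral simulation of that loop under assumed names (processor_t, broadcast_page, run_page); it is not the patented hardware, and the conceptually parallel execution is modeled sequentially for clarity.

/* Behavioral sketch of page-by-page dispatch to a processor cluster.
 * All types and functions here are illustrative assumptions. */
#include <stdio.h>
#include <string.h>

#define PROCESSORS 4   /* processors in the cluster            */
#define PAGE_WORDS 8   /* words per code page                  */
#define PAGES      3   /* sequential pages in the example task */

typedef struct {
    int program_memory[PAGE_WORDS];  /* per-processor program memory */
    int done;                        /* finished current page?       */
} processor_t;

/* "Code distribution bus": copy the same page into every assigned processor. */
static void broadcast_page(processor_t *cpu, int n, const int *page)
{
    for (int i = 0; i < n; i++) {
        memcpy(cpu[i].program_memory, page, sizeof(int) * PAGE_WORDS);
        cpu[i].done = 0;
    }
}

/* Stand-in for one processor executing the loaded page. */
static void run_page(processor_t *cpu, int id)
{
    int sum = 0;
    for (int w = 0; w < PAGE_WORDS; w++)
        sum += cpu[id].program_memory[w];
    printf("  processor %d executed page, checksum %d\n", id, sum);
    cpu[id].done = 1;
}

int main(void)
{
    processor_t cluster[PROCESSORS] = {0};
    int pages[PAGES][PAGE_WORDS];

    for (int p = 0; p < PAGES; p++)              /* fabricate some "code" */
        for (int w = 0; w < PAGE_WORDS; w++)
            pages[p][w] = p * 100 + w;

    /* Task control processor: one page at a time, barrier between pages. */
    for (int p = 0; p < PAGES; p++) {
        printf("task control: broadcasting page %d\n", p);
        broadcast_page(cluster, PROCESSORS, pages[p]);

        for (int id = 0; id < PROCESSORS; id++)  /* conceptually parallel */
            run_page(cluster, id);

        int all_done = 1;                        /* wait for every processor */
        for (int id = 0; id < PROCESSORS; id++)
            all_done &= cluster[id].done;
        if (all_done)
            printf("task control: page %d complete on all processors\n", p);
    }
    return 0;
}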
    • 6. Granted invention patent
    • Title: Processor cluster architecture and associated parallel processing methods
    • Publication number: US07840778B2
    • Publication date: 2010-11-23
    • Application number: US11468826
    • Application date: 2006-08-31
    • Inventors: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • Applicants: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • IPC: G06F 9/00
    • CPC: G06F 9/5033; G06F 9/445; G06F 9/5061; G06F 9/5066; G06F 2209/5012; G06F 2209/5017
    • Abstract: A parallel processing architecture comprising a cluster of embedded processors that share a common code distribution bus. Pages or blocks of code are concurrently loaded into respective program memories of some or all of these processors (typically all processors assigned to a particular task) over the code distribution bus, and are executed in parallel by these processors. A task control processor determines when all of the processors assigned to a particular task have finished executing the current code page, and then loads a new code page (e.g., the next sequential code page within a task) into the program memories of these processors for execution. The processors within the cluster preferably share a common memory (one per cluster) that is used to receive data inputs from, and to provide data outputs to, a higher-level processor. Multiple interconnected clusters may be integrated within a common integrated circuit device.
    • 7. Granted invention patent
    • Title: Processor cluster architecture and associated parallel processing methods
    • Publication number: US06959372B1
    • Publication date: 2005-10-25
    • Application number: US10369182
    • Application date: 2003-02-18
    • Inventors: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • Applicants: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • IPC: G06F 12/00
    • CPC: G06F 9/5033; G06F 9/445; G06F 9/5061; G06F 9/5066; G06F 2209/5012; G06F 2209/5017
    • Abstract: A parallel processing architecture comprising a cluster of embedded processors that share a common code distribution bus. Pages or blocks of code are concurrently loaded into respective program memories of some or all of these processors (typically all processors assigned to a particular task) over the code distribution bus, and are executed in parallel by these processors. A task control processor determines when all of the processors assigned to a particular task have finished executing the current code page, and then loads a new code page (e.g., the next sequential code page within a task) into the program memories of these processors for execution. The processors within the cluster preferably share a common memory (one per cluster) that is used to receive data inputs from, and to provide data outputs to, a higher-level processor. Multiple interconnected clusters may be integrated within a common integrated circuit device.
    • 8. Granted invention patent
    • Title: Processor cluster architecture and associated parallel processing methods
    • Publication number: US07210139B2
    • Publication date: 2007-04-24
    • Application number: US11255597
    • Application date: 2005-10-20
    • Inventors: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • Applicants: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • IPC: G06F 9/45
    • CPC: G06F 9/5033; G06F 9/445; G06F 9/5061; G06F 9/5066; G06F 2209/5012; G06F 2209/5017
    • Abstract: A parallel processing architecture comprising a cluster of embedded processors that share a common code distribution bus. Pages or blocks of code are concurrently loaded into respective program memories of some or all of these processors (typically all processors assigned to a particular task) over the code distribution bus, and are executed in parallel by these processors. A task control processor determines when all of the processors assigned to a particular task have finished executing the current code page, and then loads a new code page (e.g., the next sequential code page within a task) into the program memories of these processors for execution. The processors within the cluster preferably share a common memory (one per cluster) that is used to receive data inputs from, and to provide data outputs to, a higher-level processor. Multiple interconnected clusters may be integrated within a common integrated circuit device.
    • 10. Granted invention patent
    • Title: Processor cluster architecture and associated parallel processing methods
    • Publication number: US08489857B2
    • Publication date: 2013-07-16
    • Application number: US12940799
    • Application date: 2010-11-05
    • Inventors: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • Applicants: Richard F. Hobson; Bill Ressl; Allan R. Dyck
    • IPC: G06F 9/00
    • CPC: G06F 9/5033; G06F 9/445; G06F 9/5061; G06F 9/5066; G06F 2209/5012; G06F 2209/5017
    • Abstract: A parallel processing architecture comprising a cluster of embedded processors that share a common code distribution bus. Pages or blocks of code are concurrently loaded into respective program memories of some or all of these processors (typically all processors assigned to a particular task) over the code distribution bus, and are executed in parallel by these processors. A task control processor determines when all of the processors assigned to a particular task have finished executing the current code page, and then loads a new code page (e.g., the next sequential code page within a task) into the program memories of these processors for execution. The processors within the cluster preferably share a common memory (one per cluster) that is used to receive data inputs from, and to provide data outputs to, a higher-level processor. Multiple interconnected clusters may be integrated within a common integrated circuit device.