    • 1. Granted invention patent
    • Title: Multi-threaded processing with hardware accelerators
    • Publication number: US08949838B2
    • Publication date: 2015-02-03
    • Application number: US13474114
    • Filing date: 2012-05-17
    • Inventors: Deepak Mital, William Burroughs, Eran Dosh, Eyal Rosin
    • IPC: G06F9/46, G06F9/48, G06F9/30, G06F9/38, H04L12/933, H04L12/931, H04L12/851
    • CPC: G06F9/4881, G06F9/3009, G06F9/3851, G06F9/3877, H04L47/2408, H04L47/2441, H04L49/109, H04L49/506
    • Abstract: Described embodiments process multiple threads of commands in a network processor. One or more tasks are generated corresponding to each received packet, and the tasks are provided to a packet processor module (MPP). A scheduler associates each received task with a command flow. A thread updater writes state data corresponding to the flow to a context memory. The scheduler determines an order of processing of the command flows. When a processing thread of a multi-thread processor is available, the thread updater loads, from the context memory, state data for at least one scheduled flow to one of the multi-thread processors. The multi-thread processor processes a next command of the flow based on the loaded state data. If the processed command requires operation of a co-processor module, the multi-thread processor sends a co-processor request and switches command processing from the first flow to a second flow.
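The flow-switching scheme in the abstract above is easy to illustrate. The following is a minimal, single-threaded Python sketch of that idea under assumed names (Flow, Scheduler, and the "coproc:" command prefix are inventions for illustration, not the patent's interfaces): flows carry pending commands and per-flow state, and a command that needs a co-processor makes the worker park the current flow and pick up another instead of stalling.

    # Hypothetical simulation of the described flow switching; names are illustrative.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Flow:
        flow_id: int
        commands: deque                               # pending commands for this flow
        state: dict = field(default_factory=dict)     # per-flow state ("context memory")

    class Scheduler:
        def __init__(self, flows):
            self.ready = deque(flows)                 # flows eligible for a processing thread

        def run(self):
            while self.ready:
                flow = self.ready.popleft()           # load this flow's state onto a thread
                while flow.commands:
                    cmd = flow.commands.popleft()
                    if cmd.startswith("coproc:"):
                        # Command needs a co-processor: issue the request and switch
                        # processing to another flow rather than waiting.
                        print(f"flow {flow.flow_id}: co-processor request {cmd!r}, switching")
                        self.ready.append(flow)       # resume this flow later
                        break
                    flow.state["last"] = cmd          # state written back to context memory
                    print(f"flow {flow.flow_id}: processed {cmd!r}")

    Scheduler([
        Flow(0, deque(["parse", "coproc:crc", "forward"])),
        Flow(1, deque(["classify", "count"])),
    ]).run()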
    • 2. Granted invention patent
    • Title: Instruction breakpoints in a multi-core, multi-thread network communications processor architecture
    • Publication number: US08868889B2
    • Publication date: 2014-10-21
    • Application number: US12976045
    • Filing date: 2010-12-22
    • Inventors: Deepak Mital, Te Khac Ma, Narender Vangati, William Burroughs
    • IPC: G06F9/38, G06F15/167, H04L12/873
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, H04L47/522
    • Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters a breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
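The fetch/execute rule in this abstract reduces to a simple check, sketched below in Python with invented names (Instruction, run_thread, the breakpoint field): an instruction's breakpoint bit halts the thread only when the processor-wide breakpoint mode is enabled; otherwise the fetched instruction executes normally. This is an illustration of the stated rule, not the classifier's actual microarchitecture.

    # Hypothetical sketch of per-instruction breakpoints gated by a global mode.
    from dataclasses import dataclass

    @dataclass
    class Instruction:
        opcode: str
        breakpoint: bool = False          # per-instruction breakpoint indicator

    def run_thread(program, breakpoint_mode=False):
        pc = 0
        while pc < len(program):
            instr = program[pc]           # fetch from instruction memory
            if breakpoint_mode and instr.breakpoint:
                print(f"breakpoint hit at pc={pc}, thread halted")
                return pc                 # enter breakpoint mode
            print(f"executing {instr.opcode} at pc={pc}")
            pc += 1
        return pc

    run_thread([Instruction("load"), Instruction("hash", breakpoint=True), Instruction("store")],
               breakpoint_mode=True)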
    • 3. Granted invention patent
    • Title: Thread synchronization in a multi-thread network communications processor architecture
    • Publication number: US08514874B2
    • Publication date: 2013-08-20
    • Application number: US12975880
    • Filing date: 2010-12-22
    • Inventors: Deepak Mital, James Clee
    • IPC: H04L12/56
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, H04L47/2441
    • Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate a thread of contexts for each task received by the packet classifier from a plurality of processing modules of the network processor. The scheduler includes one or more output queues to temporarily store contexts. Each thread corresponds to an order of instructions applied to the corresponding packet, and includes an identifier of a corresponding one of the output queues. The scheduler sends the contexts to a multi-thread instruction engine that processes the threads. An arbiter selects one of the output queues in order to provide output packets to the multi-thread instruction engine, the output packets associated with a corresponding thread of contexts. Each output queue transmits output packets corresponding to a given thread contiguously in the order in which the threads started.
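To make the ordering guarantee concrete, here is a small Python sketch with invented names (OutputArbiter, start_thread, drain): each thread gets its own output queue, queues are kept in thread start order, and the arbiter empties one queue completely before moving to the next, so a given thread's output packets leave contiguously. It is a simplification of the described arbiter, not its implementation.

    # Hypothetical per-thread output queues drained in thread start order.
    from collections import OrderedDict, deque

    class OutputArbiter:
        def __init__(self):
            self.queues = OrderedDict()          # thread_id -> queue, in start order

        def start_thread(self, thread_id):
            self.queues[thread_id] = deque()

        def enqueue(self, thread_id, packet):
            self.queues[thread_id].append(packet)

        def drain(self):
            # Emit every packet of the earliest-started thread before the next one,
            # keeping each thread's output contiguous.
            out = []
            for thread_id, queue in self.queues.items():
                while queue:
                    out.append((thread_id, queue.popleft()))
            return out

    arb = OutputArbiter()
    arb.start_thread(0); arb.start_thread(1)
    arb.enqueue(1, "pkt-B0"); arb.enqueue(0, "pkt-A0"); arb.enqueue(0, "pkt-A1")
    print(arb.drain())        # thread 0's packets first and contiguous, then thread 1's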
    • 5. Invention patent application
    • Title: PACKET ASSEMBLY MODULE FOR MULTI-CORE, MULTI-THREAD NETWORK PROCESSORS
    • Publication number: US20120155495A1
    • Publication date: 2012-06-21
    • Application number: US13405053
    • Filing date: 2012-02-24
    • Inventors: James T. Clee, Deepak Mital, Robert J. Munoz
    • IPC: H04J3/24
    • CPC: G06F15/167, H04L49/101, H04L49/109, H04L49/506
    • Abstract: Described embodiments provide for processing received data packets into packet reassemblies for transmission as output packets of a network processor. A packet assembler determines an associated packet reassembly of data portions and enqueues an identifier for each data portion in an input queue corresponding to the packet reassembly associated with the data portion. A state data entry corresponding to each packet reassembly identifies whether the packet reassembly is actively processed by the packet assembler. Iteratively, until an eligible data portion is selected, the packet assembler selects a given data portion from a non-empty input queue for processing and determines if the selected data portion corresponds to a reassembly that is actively processed. If the reassembly is active, the packet assembler sets the selected data portion as ineligible for selection. Otherwise, the packet assembler selects the data portion for processing and modifies the packet reassembly based on the selected data portion.
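The selection rule in this abstract, skipping data portions whose reassembly is already being processed, can be sketched as below. The data layout (per-reassembly input queues and an "active" set) and every name here are assumptions made purely for illustration.

    # Hypothetical eligibility check for picking the next data portion.
    from collections import deque

    def select_portion(input_queues, active_reassemblies):
        # Scan non-empty input queues and return the first data portion whose
        # reassembly is not currently active; portions of active reassemblies
        # are ineligible on this pass.
        for rid, queue in input_queues.items():
            if queue and rid not in active_reassemblies:
                return rid, queue.popleft()
        return None

    input_queues = {"flowA": deque(["a0", "a1"]), "flowB": deque(["b0"])}
    reassemblies = {"flowA": [], "flowB": []}
    active = {"flowA"}                          # flowA is already being worked on

    choice = select_portion(input_queues, active)
    if choice:
        rid, portion = choice
        reassemblies[rid].append(portion)       # modify the reassembly with the portion
    print(choice, reassemblies)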
    • 6. Invention patent application
    • Title: INSTRUCTION BREAKPOINTS IN A MULTI-CORE, MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    • Publication number: US20110225394A1
    • Publication date: 2011-09-15
    • Application number: US12976045
    • Filing date: 2010-12-22
    • Inventors: Deepak Mital, Te Khac Ma, Narender Vangati, William Burroughs
    • IPC: G06F9/312
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, H04L47/522
    • Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate threads of contexts corresponding to tasks received by the packet classifier from a plurality of processing modules of the network processor. A multi-thread instruction engine processes instructions corresponding to threads received from the scheduler. The multi-thread instruction engine executes instructions by fetching an instruction of the thread from an instruction memory of the packet classifier and determining whether a breakpoint mode of the network processor is enabled. If the breakpoint mode is enabled and the breakpoint indicator of the fetched instruction is set, the packet classifier enters a breakpoint mode. Otherwise, if the breakpoint indicator of the fetched instruction is not set, the multi-thread instruction engine executes the fetched instruction.
    • 7. Invention patent application
    • Title: THREAD SYNCHRONIZATION IN A MULTI-THREAD NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    • Publication number: US20110222553A1
    • Publication date: 2011-09-15
    • Application number: US12975880
    • Filing date: 2010-12-22
    • Inventors: Deepak Mital, James Clee
    • IPC: H04L12/56
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, H04L47/2441
    • Abstract: Described embodiments provide a packet classifier for a network processor that generates tasks corresponding to each received packet. The packet classifier includes a scheduler to generate a thread of contexts for each task received by the packet classifier from a plurality of processing modules of the network processor. The scheduler includes one or more output queues to temporarily store contexts. Each thread corresponds to an order of instructions applied to the corresponding packet, and includes an identifier of a corresponding one of the output queues. The scheduler sends the contexts to a multi-thread instruction engine that processes the threads. An arbiter selects one of the output queues in order to provide output packets to the multi-thread instruction engine, the output packets associated with a corresponding thread of contexts. Each output queue transmits output packets corresponding to a given thread contiguously in the order in which the threads started.
    • 9. Granted invention patent
    • Title: Hash processing in a network communications processor architecture
    • Publication number: US08539199B2
    • Publication date: 2013-09-17
    • Application number: US13046717
    • Filing date: 2011-03-12
    • Inventors: William Burroughs, Deepak Mital, Mohammed Reza Hakami, Michael R. Betker
    • IPC: G06F12/08
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, H04L45/7453
    • Abstract: Described embodiments provide a hash processor for a system having multiple processing modules and a shared memory. The hash processor includes a descriptor table with N entries, each entry corresponding to a hash table of the hash processor. A direct mapped table in the shared memory includes at least one memory block including N hash buckets. The direct mapped table includes a predetermined number of hash buckets for each hash table. Each hash bucket includes one or more hash key and value pairs, and a link value. Memory blocks in the shared memory include dynamic hash buckets available for allocation to a hash table. A dynamic hash bucket is allocated to a hash table when the hash buckets in the direct mapped table are filled beyond a threshold. The link value in the hash bucket is set to the address of the dynamic hash bucket allocated to the hash table.
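The bucket layout described here, a direct-mapped region whose buckets link out to dynamically allocated overflow buckets once they fill up, can be mocked up in a few lines. The toy Python class below uses a fixed per-bucket capacity as a stand-in for the fill threshold; it is a sketch of the idea, not the hash processor's shared-memory layout, and all names are invented.

    # Hypothetical direct-mapped buckets with linked "dynamic" overflow buckets.
    BUCKET_CAPACITY = 2          # assumed stand-in for the fill threshold

    class Bucket:
        def __init__(self):
            self.entries = []    # (key, value) pairs held in this bucket
            self.link = None     # overflow bucket allocated on demand

    class HashTable:
        def __init__(self, num_buckets):
            self.buckets = [Bucket() for _ in range(num_buckets)]   # direct-mapped part

        def insert(self, key, value):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            while True:
                if len(bucket.entries) < BUCKET_CAPACITY:
                    bucket.entries.append((key, value))
                    return
                if bucket.link is None:
                    bucket.link = Bucket()       # allocate a dynamic bucket and link it
                bucket = bucket.link

        def lookup(self, key):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            while bucket is not None:
                for k, v in bucket.entries:
                    if k == key:
                        return v
                bucket = bucket.link
            return None

    table = HashTable(4)
    for i in range(10):
        table.insert(f"key{i}", i)
    print(table.lookup("key7"))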
    • 10. Granted invention patent
    • Title: Memory manager for a network communications processor architecture
    • Publication number: US08499137B2
    • Publication date: 2013-07-30
    • Application number: US12963895
    • Filing date: 2010-12-09
    • Inventors: Joseph Hasting, Deepak Mital
    • IPC: G06F12/00
    • CPC: G06F15/167, G06F9/3851, G06F9/3885, G06F12/023, G06F12/0284, G06F13/1663, H04L49/103, H04L49/901
    • Abstract: Described embodiments provide a memory manager for a network processor having a plurality of processing modules and a shared memory. The memory manager allocates blocks of the shared memory to requesting ones of the plurality of processing modules. A free block list tracks availability of memory blocks of the shared memory. A reference counter maintains, for each allocated memory block, a reference count indicating a number of access requests to the memory block by ones of the plurality of processing modules. The reference count is located with data at the allocated memory block. For subsequent access requests to a given memory block concurrent with processing of a prior access request to the memory block, a memory access accumulator (i) accumulates an incremental value corresponding to the subsequent access requests, (ii) updates the reference count associated with the memory block, and (iii) updates the memory block with the accumulated result.
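A minimal sketch of the described allocator follows, with Python dictionaries standing in for shared-memory blocks and all names invented for illustration: blocks come from a free list, the reference count lives alongside the block's data, and an accumulator method folds several pending reference updates into a single adjustment of that count.

    # Hypothetical free-list allocator with per-block reference counts.
    class MemoryManager:
        def __init__(self, num_blocks):
            self.free_list = list(range(num_blocks))     # indices of free blocks
            self.blocks = [{"refcount": 0, "data": None} for _ in range(num_blocks)]

        def allocate(self, data):
            block_id = self.free_list.pop()
            self.blocks[block_id].update(refcount=1, data=data)
            return block_id

        def accumulate_refs(self, block_id, pending_requests):
            # "Access accumulator": fold several concurrent reference updates into
            # one adjustment of the count stored with the block's data.
            self.blocks[block_id]["refcount"] += pending_requests

        def release(self, block_id):
            block = self.blocks[block_id]
            block["refcount"] -= 1
            if block["refcount"] == 0:
                block["data"] = None
                self.free_list.append(block_id)          # block returns to the free list

    mm = MemoryManager(4)
    b = mm.allocate(b"packet payload")
    mm.accumulate_refs(b, 2)          # two more modules start referencing the block
    mm.release(b); mm.release(b); mm.release(b)
    print(mm.free_list)               # the block is free again once the count hits zero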