    • 1. Granted invention patent
    • Title: Task queuing in a network communications processor architecture
    • Publication number: US08407707B2
    • Publication date: 2013-03-26
    • Application number: US12782411
    • Filing date: 2010-05-18
    • Inventors: David P. Sonnier; Balakrishnan Sundararaman; Shailendra Aulakh; Deepak Mital
    • IPC: G06F9/46
    • CPC: G06F15/167; H04L49/101; H04L49/109; H04L49/506
    • Abstract: Described embodiments provide a method of assigning tasks to queues of a processing core. Tasks are assigned to a queue by sending, by a source processing core, a new task having a task identifier. A destination processing core receives the new task and determines whether another task having the same identifier exists in any of the queues corresponding to the destination processing core. If another task with the same identifier as the new task exists, the destination processing core assigns the new task to the queue containing a task with the same identifier as the new task. If no task with the same identifier as the new task exists in the queues, the destination processing core assigns the new task to the queue having the fewest tasks. The source processing core writes the new task to the assigned queue. The destination processing core executes the tasks in its queues.
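The queue-selection rule in the abstract above (prefer a queue that already holds a task with the same identifier, otherwise pick the least-loaded queue) lends itself to a short illustration. The sketch below is a toy Python model only, not the patented hardware implementation; the class name DestinationCore and its methods are invented for this example.

```python
from collections import deque

class DestinationCore:
    """Toy model of the queue-selection rule described in the abstract."""

    def __init__(self, num_queues):
        self.queues = [deque() for _ in range(num_queues)]

    def select_queue(self, task_id):
        # Prefer a queue that already holds a task with the same identifier,
        # so related tasks stay ordered behind one another.
        for q in self.queues:
            if any(tid == task_id for tid, _ in q):
                return q
        # Otherwise fall back to the queue with the fewest pending tasks.
        return min(self.queues, key=len)

    def enqueue(self, task_id, payload):
        # In the described architecture the source core writes the task to
        # the assigned queue; here the write is modeled as a simple append.
        self.select_queue(task_id).append((task_id, payload))

    def run_one(self, queue_index):
        # The destination core executes tasks from its queues.
        return self.queues[queue_index].popleft()

if __name__ == "__main__":
    core = DestinationCore(num_queues=4)
    core.enqueue(7, "first packet")
    core.enqueue(3, "other flow")   # goes to the emptiest queue
    core.enqueue(7, "second packet")  # lands behind the existing task with id 7
```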
    • 2. Invention patent application
    • Title: TASK QUEUING IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    • Publication number: US20100293353A1
    • Publication date: 2010-11-18
    • Application number: US12782411
    • Filing date: 2010-05-18
    • Inventors: David P. Sonnier; Balakrishnan Sundararaman; Shailendra Aulakh; Deepak Mital
    • IPC: G06F9/46; G06F12/02
    • CPC: G06F15/167; H04L49/101; H04L49/109; H04L49/506
    • Abstract: Described embodiments provide a method of assigning tasks to queues of a processing core. Tasks are assigned to a queue by sending, by a source processing core, a new task having a task identifier. A destination processing core receives the new task and determines whether another task having the same identifier exists in any of the queues corresponding to the destination processing core. If another task with the same identifier as the new task exists, the destination processing core assigns the new task to the queue containing a task with the same identifier as the new task. If no task with the same identifier as the new task exists in the queues, the destination processing core assigns the new task to the queue having the fewest tasks. The source processing core writes the new task to the assigned queue. The destination processing core executes the tasks in its queues.
    • 3. Granted invention patent
    • Title: Multicasting traffic manager in a network communications processor architecture
    • Publication number: US08917738B2
    • Publication date: 2014-12-23
    • Application number: US13232422
    • Filing date: 2011-09-14
    • Inventors: Balakrishnan Sundararaman; Shailendra Aulakh; David P. Sonnier; Rachel Flood
    • IPC: H04L12/28; H04L12/931; H04L12/933
    • CPC: H04L49/506; H04L49/00; H04L49/101; H04L49/109; H04L49/201
    • Abstract: Described embodiments provide a method of processing packets of a network processor. One or more tasks are generated corresponding to received packets associated with one or more data flows. A traffic manager receives a task corresponding to a data flow, the task provided by a processing module of the network processor. The traffic manager determines whether the received task corresponds to a unicast data flow or a multicast data flow. If the received task corresponds to a multicast data flow, the traffic manager determines, based on identifiers corresponding to the task, an address of launch data stored in launch data tables in a shared memory, and reads the launch data. Based on the identifiers and the read launch data, two or more output tasks are generated corresponding to the multicast data flow, and the two or more output tasks are added at the tail end of a scheduling queue.
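As a rough illustration of the unicast/multicast branching described in the abstract above, the Python sketch below fans a multicast task out into one output task per launch record and appends them at the tail of a scheduling queue. The names TrafficManager and launch_tables, and the task dictionary layout, are assumptions made for this example, not the patent's actual data structures.

```python
from collections import deque

class TrafficManager:
    """Toy sketch: branch on unicast vs. multicast and fan out output tasks."""

    def __init__(self, launch_tables):
        # launch_tables stands in for the launch-data tables in shared memory:
        # it maps a flow identifier to a list of per-copy launch records.
        self.launch_tables = launch_tables
        self.scheduling_queue = deque()

    def handle_task(self, task):
        if not task["is_multicast"]:
            # Unicast: a single output task is scheduled as-is.
            self.scheduling_queue.append(task)
            return
        # Multicast: use the task's identifiers to look up launch data, then
        # generate one output task per launch record and append the output
        # tasks at the tail of the scheduling queue.
        for record in self.launch_tables[task["flow_id"]]:
            self.scheduling_queue.append({**task, "destination": record})

if __name__ == "__main__":
    tm = TrafficManager(launch_tables={42: ["port-1", "port-3", "port-7"]})
    tm.handle_task({"flow_id": 42, "is_multicast": True, "payload": b"hello"})
    print(len(tm.scheduling_queue))  # 3 output tasks, one per launch record
```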
    • 8. Granted invention patent
    • Title: Flexible traffic management and shaping processing for multimedia distribution
    • Publication number: US08214868B2
    • Publication date: 2012-07-03
    • Application number: US11409438
    • Filing date: 2006-04-21
    • Inventors: Christopher W. Hamilton; David P. Sonnier; Milan Zoranovic
    • IPC: H04N7/173
    • CPC: H04N7/17318; H04L65/80; H04N21/23805; H04N21/4384; H04N21/472; H04N21/6125; H04N21/6581
    • Abstract: Apparatus for distributing streaming multimedia to at least one end client over a network includes memory and at least one processor operatively connected to the memory. The processor is operative: (i) to receive the streaming multimedia from at least one multimedia source via at least one of a plurality of channels in the network; (ii) when a channel change request generated by the end client for changing a channel and corresponding multimedia content from the multimedia source is not detected, to deliver the at least one multimedia stream to the end client at a first data rate; and (iii) when the channel change request has been detected, to deliver the at least one multimedia stream to the end client at a second rate for a prescribed period of time after receiving the channel change request and, after the prescribed period of time, to deliver the at least one multimedia stream to the end client at the first data rate, wherein the second data rate is greater than the first data rate.
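The behavior described in this abstract (deliver at an elevated rate for a prescribed period after a channel change, then fall back to the normal rate) can be sketched roughly as below. StreamShaper and its parameters are invented for illustration; the patent concerns apparatus-level traffic shaping rather than this simplified rate switch.

```python
import time

class StreamShaper:
    """Toy sketch of the rate boost applied after a channel-change request."""

    def __init__(self, base_rate_bps, boost_rate_bps, boost_period_s):
        assert boost_rate_bps > base_rate_bps  # the second rate exceeds the first
        self.base_rate_bps = base_rate_bps
        self.boost_rate_bps = boost_rate_bps
        self.boost_period_s = boost_period_s
        self._boost_until = 0.0

    def on_channel_change(self, now=None):
        # A detected channel-change request opens a prescribed boost window.
        now = time.monotonic() if now is None else now
        self._boost_until = now + self.boost_period_s

    def current_rate(self, now=None):
        # Inside the boost window, deliver at the higher (second) rate;
        # afterwards, fall back to the normal (first) data rate.
        now = time.monotonic() if now is None else now
        return self.boost_rate_bps if now < self._boost_until else self.base_rate_bps

if __name__ == "__main__":
    shaper = StreamShaper(3_000_000, 8_000_000, boost_period_s=2.0)
    shaper.on_channel_change(now=0.0)
    print(shaper.current_rate(now=1.0))  # 8000000 during the boost window
    print(shaper.current_rate(now=5.0))  # 3000000 after it expires
```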
    • 9. Invention patent application
    • Title: DATA CACHING IN A NETWORK COMMUNICATIONS PROCESSOR ARCHITECTURE
    • Publication number: US20110289180A1
    • Publication date: 2011-11-24
    • Application number: US13192187
    • Filing date: 2011-07-27
    • Inventors: David P. Sonnier; David A. Brown; Charles Edward Peet, JR.
    • IPC: G06F15/167
    • CPC: G06F12/0831; G06F12/0811; G06F12/0813; G06F12/084; G06F12/0884
    • Abstract: Described embodiments provide for storing data in a local cache of one of a plurality of processing modules of a network processor. A control processing module determines presence of data stored in its local cache while concurrently sending a request to read the data from a shared memory and from one or more local caches corresponding to other of the plurality of processing modules. Each of the plurality of processing modules responds whether the data is located in one or more corresponding local caches. The control processing module determines, based on the responses, presence of the data in the local caches corresponding to the other processing modules. If the data is present in one of the local caches corresponding to one of the other processing modules, the control processing module reads the data from the local cache containing the data and cancels the read request to the shared memory.
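A rough software analogy for the lookup order in this abstract: check the local cache, concurrently query peer caches and shared memory, and drop the shared-memory read when a peer cache responds with the data. The function below is a toy sketch using Python threads; read_with_peer_snoop and the dict-based caches are assumptions for illustration, not the network processor's actual mechanism.

```python
import concurrent.futures

def read_with_peer_snoop(address, local_cache, peer_caches, shared_memory):
    """Toy sketch of the described lookup: local cache first, then peer
    caches and shared memory queried concurrently, with the shared-memory
    read cancelled if a peer cache already holds the data."""
    if address in local_cache:
        return local_cache[address]

    with concurrent.futures.ThreadPoolExecutor() as pool:
        # Issue the shared-memory read and the peer-cache queries together.
        shared_read = pool.submit(shared_memory.get, address)
        peer_reads = [pool.submit(cache.get, address) for cache in peer_caches]

        # Each peer responds whether it holds the data; take the first hit.
        for fut in peer_reads:
            value = fut.result()
            if value is not None:
                # A peer cache has the data: cancel the shared-memory read
                # (best effort; it may already be in flight) and return.
                shared_read.cancel()
                return value

        # No cache hit anywhere: fall back to the shared-memory result.
        return shared_read.result()

if __name__ == "__main__":
    value = read_with_peer_snoop(
        0x40,
        local_cache={},
        peer_caches=[{0x40: "payload"}, {}],
        shared_memory={0x40: "payload"},
    )
    print(value)  # "payload", served from a peer cache
```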