    • 2. Granted invention patent
    • Title: Avoiding unfair advantage in weighted round robin (WRR) scheduling
    • Publication number: US08363668B2
    • Publication date: 2013-01-29
    • Application number: US12640417
    • Filing date: 2009-12-17
    • Inventors: Sarin Thomas; Srihari Vegesna
    • IPC: H04L12/56
    • CPC: H04L47/527; H04L47/524
    • Abstract: A network device includes multiple queues to store packets to be scheduled, and a weighted round-robin (WRR) scheduler. The WRR scheduler performs a first WRR scheduling iteration including processing of at least one packet from a particular queue of the multiple queues, identifies the particular queue as an empty queue during the performing of the first WRR scheduling iteration, identifies the particular queue as a non-empty queue after the identifying the particular queue as the empty queue, and performs a second WRR scheduling iteration including processing of only one packet of a group of packets from the particular queue of the multiple queues.
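The rule described in this abstract (a queue that runs empty during one WRR round may send only a single packet in the next round it participates in) can be illustrated with a short sketch. The Python below is a minimal, hypothetical model of that rule only, not the patented implementation; the class name WrrScheduler, the ran_dry flag, and the per-queue weights are illustrative assumptions.

    from collections import deque

    class WrrScheduler:
        """Toy weighted round-robin scheduler that limits a queue which
        emptied mid-round to one packet in its next round."""

        def __init__(self, weights):
            self.weights = list(weights)            # packets allowed per round
            self.queues = [deque() for _ in weights]
            self.ran_dry = [False] * len(weights)   # emptied during last round?

        def enqueue(self, qid, packet):
            self.queues[qid].append(packet)

        def run_round(self):
            """One WRR iteration; returns packets in the order serviced."""
            serviced = []
            for qid, queue in enumerate(self.queues):
                if not queue:
                    continue                        # still empty; flag stays set
                # A queue that ran dry last round re-enters with credit for a
                # single packet instead of its full weight.
                credit = 1 if self.ran_dry[qid] else self.weights[qid]
                self.ran_dry[qid] = False
                while credit and queue:
                    serviced.append(queue.popleft())
                    credit -= 1
                if not queue:
                    self.ran_dry[qid] = True        # emptied during this round
            return serviced

    sched = WrrScheduler(weights=[4, 2])
    for i in range(4):
        sched.enqueue(0, f"a{i}")
    sched.enqueue(1, "b0")
    print(sched.run_round())   # ['a0', 'a1', 'a2', 'a3', 'b0']; both queues drain
    sched.enqueue(1, "b1")
    sched.enqueue(1, "b2")
    print(sched.run_round())   # ['b1']: queue 1 refilled, so it gets only one packet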
    • 4. Granted invention patent
    • Title: Work-conserving packet scheduling in network devices
    • Publication number: US08230110B2
    • Publication date: 2012-07-24
    • Application number: US12835481
    • Filing date: 2010-07-13
    • Inventors: Srihari Vegesna; Sarin Thomas
    • IPC: G06F15/173
    • CPC: H04L12/56
    • Abstract: In general, techniques are described for performing work conserving packet scheduling in network devices. For example, a network device comprising queues that store packets and a control unit may implement these techniques. The control unit stores data defining hierarchically-ordered nodes, which include leaf nodes from which one or more of the queues depend. The control unit executes first and second dequeue operations concurrently to traverse the hierarchically-ordered nodes and schedule processing of packets stored to the queues. During execution, the first dequeue operation masks at least one of the selected ones of the leaf nodes from which one of the queues depends based on scheduling data stored by the control unit. The scheduling data indicates valid child node counts in some instances. The masking occurs to exclude the node from consideration by the second dequeue operation concurrently executing with the first dequeue operation, which may preserve work in certain instances.
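This abstract describes two dequeue operations traversing a scheduling hierarchy at the same time, with the first operation masking the leaf node it has claimed (tracked through scheduling data such as valid-child counts) so the concurrent operation skips it and services a different queue. The Python below is a minimal sketch of that masking step under simplifying assumptions: the hierarchy is reduced to one level of leaf nodes under an implicit root, the two operations are interleaved explicitly rather than run in hardware, and the names SchedulerHierarchy, begin_dequeue, and finish_dequeue are invented for illustration.

    from collections import deque

    class LeafNode:
        def __init__(self, name):
            self.name = name
            self.queue = deque()
            self.masked = False      # claimed by an in-flight dequeue operation

    class SchedulerHierarchy:
        """Toy one-level scheduling hierarchy with masking of claimed leaves."""

        def __init__(self, leaf_names):
            self.leaves = [LeafNode(n) for n in leaf_names]

        def enqueue(self, name, packet):
            next(l for l in self.leaves if l.name == name).queue.append(packet)

        def valid_children(self):
            # Stand-in for the stored scheduling data: how many children are
            # non-empty and unmasked, i.e. still eligible for a dequeue.
            return sum(1 for l in self.leaves if l.queue and not l.masked)

        def begin_dequeue(self):
            """Claim the first eligible leaf and mask it from other operations."""
            for leaf in self.leaves:
                if leaf.queue and not leaf.masked:
                    leaf.masked = True
                    return leaf
            return None

        def finish_dequeue(self, leaf):
            """Pop one packet from the claimed leaf and unmask it."""
            packet = leaf.queue.popleft()
            leaf.masked = False
            return packet

    tree = SchedulerHierarchy(["q0", "q1"])
    tree.enqueue("q0", "p0")
    tree.enqueue("q1", "p1")
    first = tree.begin_dequeue()    # claims and masks q0
    second = tree.begin_dequeue()   # q0 is masked, so this claims q1 instead
    print(tree.valid_children())    # 0: both eligible leaves are currently claimed
    print(tree.finish_dequeue(first), tree.finish_dequeue(second))   # p0 p1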
    • 6. Granted invention patent
    • Title: Virtual output queue allocation using dynamic drain bandwidth
    • Publication number: US08797877B1
    • Publication date: 2014-08-05
    • Application number: US13570419
    • Filing date: 2012-08-09
    • Inventors: Srinivas Perla; Sanjeev Kumar; Avanindra Godbole; Srihari Vegesna; Sarin Thomas; Mahesh Dorai
    • IPC: H04J3/14
    • CPC: H04L49/90
    • Abstract: In general, techniques are described for allocating virtual output queue (VOQ) buffer space to ingress forwarding units of a network device based on drain rates at which network packets are forwarded from VOQs of the ingress forwarding units. For example, a network device includes multiple ingress forwarding units that each forward network packets to an output queue of an egress forwarding unit. Ingress forwarding units each include a VOQ that corresponds to the output queue. The drain rate at any particular ingress forwarding unit corresponds to its share of bandwidth to the output queue, as determined by the egress forwarding unit. Each ingress forwarding unit configures its VOQ buffer size in proportion to its respective drain rate in order to provide an expected delay bandwidth buffering for the output queue of the egress forwarding unit.
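This abstract describes each ingress forwarding unit sizing its virtual output queue in proportion to its drain rate, i.e. its share of the bandwidth toward the shared output queue, so that the buffering across ingress units adds up to the delay-bandwidth target for that output queue. The Python below is a minimal sketch of that proportional allocation; the function name voq_buffer_sizes, the pfe0/pfe1/pfe2 labels, and the 100-microsecond delay target are illustrative assumptions, not values taken from the patent.

    def voq_buffer_sizes(drain_rates_bps, target_delay_s):
        """Size each ingress unit's VOQ as (its drain rate) x (delay target),
        so per-ingress buffers are proportional to their bandwidth shares and
        sum to the delay-bandwidth product of the shared output queue."""
        return {ingress: rate * target_delay_s
                for ingress, rate in drain_rates_bps.items()}

    # Three ingress units sharing one output queue unevenly, buffered for
    # roughly 100 microseconds of delay bandwidth (sizes are in bits).
    sizes = voq_buffer_sizes(
        {"pfe0": 50e9, "pfe1": 30e9, "pfe2": 20e9},   # observed drain rates, bit/s
        target_delay_s=100e-6,
    )
    print(sizes)   # {'pfe0': 5000000.0, 'pfe1': 3000000.0, 'pfe2': 2000000.0}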
    • 10. Published invention application
    • Title: WORK-CONSERVING PACKET SCHEDULING IN NETWORK DEVICES
    • Publication number: US20110216773A1
    • Publication date: 2011-09-08
    • Application number: US12835481
    • Filing date: 2010-07-13
    • Inventors: Srihari Vegesna; Sarin Thomas
    • IPC: H04L12/56
    • CPC: H04L12/56
    • Abstract: In general, techniques are described for performing work conserving packet scheduling in network devices. For example, a network device comprising queues that store packets and a control unit may implement these techniques. The control unit stores data defining hierarchically-ordered nodes, which include leaf nodes from which one or more of the queues depend. The control unit executes first and second dequeue operations concurrently to traverse the hierarchically-ordered nodes and schedule processing of packets stored to the queues. During execution, the first dequeue operation masks at least one of the selected ones of the leaf nodes from which one of the queues depends based on scheduling data stored by the control unit. The scheduling data indicates valid child node counts in some instances. The masking occurs to exclude the node from consideration by the second dequeue operation concurrently executing with the first dequeue operation, which may preserve work in certain instances.