    • 3. Invention application
    • DYNAMICALLY ADJUSTING A DATA COMPUTE NODE GROUP
    • Publication number: US20160094631A1
    • Publication date: 2016-03-31
    • Application number: US14815838
    • Filing date: 2015-07-31
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Mohan Parthasarathy, Allwyn Sequeira, Serge Maskalik, Rick Lund
    • IPC: H04L29/08
    • CPC: H04L47/125, H04L45/24, H04L45/44, H04L47/70, H04L61/2069, H04L61/2521, H04L61/6022, H04L67/1002, H04L67/1017, H04L67/1025, H04L67/1029
    • Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node, and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads the data traffic to balance the load across (e.g., data traffic directed to) the DCNs in the group. When the received data message is not addressed to one of the load balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
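The abstract above (shared by the related filings listed below) packs the whole decision into a few long sentences, so a short sketch may help. The Python below only illustrates the behavior the abstract describes: a message addressed to a load-balanced group's virtual address has its destination rewritten to a chosen group member, while every other message is forwarded unchanged. The class names, field names, and round-robin selection are illustrative assumptions; the abstract does not prescribe an API or a member-selection policy.

```python
# A minimal sketch of the egress-path decision described in the abstract.
# All names are illustrative; round-robin selection is an assumption, since
# the abstract does not say how the member DCN is chosen.
from dataclasses import dataclass, field
from itertools import cycle
from typing import Dict, Iterator, List


@dataclass
class DataMessage:
    src_ip: str
    dst_ip: str
    dst_port: int


@dataclass
class EgressLoadBalancer:
    # Virtual (group) destination address -> addresses of the member DCNs.
    groups: Dict[str, List[str]]
    _pickers: Dict[str, Iterator[str]] = field(default_factory=dict)

    def process(self, msg: DataMessage) -> DataMessage:
        """Called for every data message on the source node's egress path."""
        members = self.groups.get(msg.dst_ip)
        if members is None:
            # Not addressed to a load-balanced DCN group: forward unchanged.
            return msg
        # Addressed to a load-balanced group: pick a member DCN and rewrite
        # the destination address before the message leaves the host.
        if msg.dst_ip not in self._pickers:
            self._pickers[msg.dst_ip] = cycle(members)
        msg.dst_ip = next(self._pickers[msg.dst_ip])
        return msg


lb = EgressLoadBalancer(groups={"10.0.0.100": ["10.0.1.1", "10.0.1.2", "10.0.1.3"]})
print(lb.process(DataMessage("10.0.9.5", "10.0.0.100", 443)).dst_ip)  # 10.0.1.1
print(lb.process(DataMessage("10.0.9.5", "8.8.8.8", 53)).dst_ip)      # 8.8.8.8
```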
    • 5. Invention application
    • INLINE LOAD BALANCING
    • Publication number: US20190288947A1
    • Publication date: 2019-09-19
    • Application number: US16427294
    • Filing date: 2019-05-30
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Mohan Parthasarathy, Allwyn Sequeira, Serge Maskalik, Rick Lund
    • IPC: H04L12/803, H04L29/08, H04L12/721, H04L12/911, H04L12/707
    • Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node, and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads the data traffic to balance the load across (e.g., data traffic directed to) the DCNs in the group. When the received data message is not addressed to one of the load balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
    • 6. Invention grant
    • Distributed load balancing systems
    • Publication number: US10135737B2
    • Publication date: 2018-11-20
    • Application number: US14557290
    • Filing date: 2014-12-01
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Mohan Parthasarathy, Allwyn Sequeira, Serge Maskalik, Rick Lund
    • IPC: H04L12/803, H04L12/721, H04L29/08, H04L12/911, H04L12/707, H04L29/12
    • Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node, and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads the data traffic to balance the load across (e.g., data traffic directed to) the DCNs in the group. When the received data message is not addressed to one of the load balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
    • 7. Invention application
    • Sticky Service Sessions in a Datacenter
    • Publication number: US20160094661A1
    • Publication date: 2016-03-31
    • Application number: US14841654
    • Filing date: 2015-08-31
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Rick Lund, Raju Koganty, Xinhua Hong
    • IPC: H04L29/08, H04L29/06
    • CPC: H04L41/0803, H04L41/00, H04L47/125, H04L47/825, H04L51/18, H04L67/10, H04L67/1002, H04L67/14, H04L67/16, H04L67/327, H04L69/16, H04L69/22, H04W76/12
    • Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-nodes cluster for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
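The abstract shared by entries 7 through 9 describes a policy-match, node-selection, and tunnel step in two long sentences; the sketch below only lays out that control flow. The class names, the random node selection, and the print-based stand-ins for tunneling and forwarding are assumptions, not anything specified in the filings.

```python
# Sketch of the inline service switch control flow from the abstract:
# (1) receive a message on the source node's egress path, (2) find the first
# matching service policy, (3) pick a node in that policy's service cluster,
# (4) send the message to it over a tunnel. Tunnel encapsulation and the
# selection policy are stand-ins; the abstract leaves both open.
from dataclasses import dataclass
from typing import Callable, List
import random


@dataclass
class DataMessage:
    src_ip: str
    dst_ip: str
    dst_port: int


@dataclass
class ServicePolicy:
    match: Callable[[DataMessage], bool]  # e.g. a 5-tuple match condition
    service_cluster: List[str]            # tunnel endpoints of service nodes


@dataclass
class InlineServiceSwitch:
    policies: List[ServicePolicy]

    def process(self, msg: DataMessage) -> None:
        for policy in self.policies:
            if policy.match(msg):
                node = random.choice(policy.service_cluster)
                self._send_over_tunnel(node, msg)
                return
        self._forward(msg)  # no policy matched: normal forwarding

    def _send_over_tunnel(self, endpoint: str, msg: DataMessage) -> None:
        # Stand-in for encapsulating the message and sending it to the
        # chosen service node over a tunnel.
        print(f"tunnel {msg.src_ip}->{msg.dst_ip}:{msg.dst_port} via {endpoint}")

    def _forward(self, msg: DataMessage) -> None:
        print(f"forward {msg.src_ip}->{msg.dst_ip}:{msg.dst_port} unchanged")


switch = InlineServiceSwitch(policies=[
    ServicePolicy(match=lambda m: m.dst_port == 80,
                  service_cluster=["192.0.2.10", "192.0.2.11"]),
])
switch.process(DataMessage("10.0.9.5", "203.0.113.7", 80))  # tunneled to a service node
switch.process(DataMessage("10.0.9.5", "203.0.113.7", 22))  # forwarded unchanged
```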
    • 8. Invention application
    • Inline Service Switch
    • Publication number: US20160094632A1
    • Publication date: 2016-03-31
    • Application number: US14841647
    • Filing date: 2015-08-31
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Mohan Parthasarathy, Allwyn Sequeira, Serge Maskalik, Rick Lund
    • IPC: H04L29/08, H04L12/58, H04L12/911
    • CPC: H04L41/0803, H04L47/125, H04L47/825, H04L51/18, H04L67/10, H04L67/1002, H04L67/14, H04L67/16, H04L67/327, H04L69/16, H04L69/22, H04W76/12
    • Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-nodes cluster for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
    • 9. Invention application
    • Controller Driven Reconfiguration of a Multi-Layered Application or Service Model
    • Publication number: US20160094384A1
    • Publication date: 2016-03-31
    • Application number: US14841659
    • Filing date: 2015-08-31
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Rick Lund, Raju Koganty, Xinhua Hong
    • IPC: H04L12/24, H04L29/06
    • CPC: H04L41/0803, H04L41/00, H04L47/125, H04L47/825, H04L51/18, H04L67/10, H04L67/1002, H04L67/14, H04L67/16, H04L67/327, H04L69/16, H04L69/22, H04W76/12
    • Some embodiments provide novel inline switches that distribute data messages from source compute nodes (SCNs) to different groups of destination service compute nodes (DSCNs). In some embodiments, the inline switches are deployed in the source compute nodes datapaths (e.g., egress datapath). The inline switches in some embodiments are service switches that (1) receive data messages from the SCNs, (2) identify service nodes in a service-node cluster for processing the data messages based on service policies that the switches implement, and (3) use tunnels to send the received data messages to their identified service nodes. Alternatively, or conjunctively, the inline service switches of some embodiments (1) identify service-nodes cluster for processing the data messages based on service policies that the switches implement, and (2) use tunnels to send the received data messages to the identified service-node clusters. The service-node clusters can perform the same service or can perform different services in some embodiments. This tunnel-based approach for distributing data messages to service nodes/clusters is advantageous for seamlessly implementing in a datacenter a cloud-based XaaS model (where XaaS stands for X as a service, and X stands for anything), in which any number of services are provided by service providers in the cloud.
    • 10. Invention grant
    • Inline load balancing
    • Publication number: US11075842B2
    • Publication date: 2021-07-27
    • Application number: US16427294
    • Filing date: 2019-05-30
    • Assignee: Nicira, Inc.
    • Inventors: Jayant Jain, Anirban Sengupta, Mohan Parthasarathy, Allwyn Sequeira, Serge Maskalik, Rick Lund
    • IPC: H04L12/803, H04L29/12, H04L12/741, H04L12/707, H04L12/721, H04L29/08, H04L12/911
    • Some embodiments provide a novel method for load balancing data messages that are sent by a source compute node (SCN) to one or more different groups of destination compute nodes (DCNs). In some embodiments, the method deploys a load balancer in the source compute node's egress datapath. This load balancer receives each data message sent from the source compute node, and determines whether the data message is addressed to one of the DCN groups for which the load balancer spreads the data traffic to balance the load across (e.g., data traffic directed to) the DCNs in the group. When the received data message is not addressed to one of the load balanced DCN groups, the load balancer forwards the received data message to its addressed destination. On the other hand, when the received data message is addressed to one of load balancer's DCN groups, the load balancer identifies a DCN in the addressed DCN group that should receive the data message, and directs the data message to the identified DCN. To direct the data message to the identified DCN, the load balancer in some embodiments changes the destination address (e.g., the destination IP address, destination port, destination MAC address, etc.) in the data message from the address of the identified DCN group to the address (e.g., the destination IP address) of the identified DCN.
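None of these abstracts say how the load balancer chooses a member DCN within the addressed group, and entry 7's title ("Sticky Service Sessions in a Datacenter") suggests that later messages of a flow should keep reaching the same node. The sketch below shows one common way to get that behavior, a connection table keyed on the flow 5-tuple; it is an assumption offered only to make the selection-and-rewrite step concrete, not the mechanism claimed in these patents.

```python
# Illustrative sticky selection: remember, per flow 5-tuple, which member DCN
# was chosen for the first message and reuse it for the rest of the flow.
# This mechanism is an assumption; the abstracts leave the selection open.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import hashlib

# (src_ip, dst_ip, src_port, dst_port, protocol)
FlowKey = Tuple[str, str, int, int, str]


@dataclass
class StickyGroup:
    vip: str                 # the group's virtual destination address
    members: List[str]       # addresses of the member DCNs
    conn_table: Dict[FlowKey, str] = field(default_factory=dict)

    def pick_dcn(self, key: FlowKey) -> str:
        """Return the same member DCN for every message of a given flow."""
        dcn = self.conn_table.get(key)
        if dcn is None:
            # First message of the flow: hash the 5-tuple to a member, then
            # record the choice so the flow stays pinned to that DCN.
            digest = hashlib.sha256(repr(key).encode()).digest()
            dcn = self.members[digest[0] % len(self.members)]
            self.conn_table[key] = dcn
        return dcn


group = StickyGroup(vip="10.0.0.100", members=["10.0.1.1", "10.0.1.2"])
key: FlowKey = ("10.0.9.5", "10.0.0.100", 50000, 443, "tcp")
print(group.pick_dcn(key) == group.pick_dcn(key))  # True: flow stays on one DCN
```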