    • 2. Granted Invention Patent
    • Title: Low latency cluster computing
    • Publication No.: US09560117B2
    • Publication Date: 2017-01-31
    • Application No.: US13994478
    • Filing Date: 2011-12-30
    • Inventors: Mark S. Hefty; Arlin Davis; Robert Woodruff; Sayantan Sur; Shiow-wen Cheng
    • IPC: G06F13/28; H04L29/08; G06F13/14; G06F9/06; G06F11/14
    • CPC: H04L67/10; G06F9/06; G06F11/00; G06F11/1407; G06F11/1438; G06F11/1464; G06F11/1466; G06F11/1471; G06F13/14
    • Abstract: An embodiment includes a low-latency mechanism for performing a checkpoint on a distributed application. More specifically, an embodiment of the invention includes processing a first application on a compute node, which is included in a cluster, to produce first computed data and then storing the first computed data in volatile memory included locally in the compute node; halting the processing of the first application, based on an initiated checkpoint, and storing first state data corresponding to the halted first application in the volatile memory; storing the first state information and the first computed data in non-volatile memory included locally in the compute node; and resuming processing of the halted first application and then continuing the processing of the first application to produce second computed data while simultaneously pulling the first state information and the first computed data from the non-volatile memory to an input/output (IO) node. (See the sketch below.)
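The checkpoint flow described in this abstract can be illustrated with a small, self-contained Python sketch: the application is paused only long enough to write its state and computed data to node-local storage, and a background thread then drains that local copy to an I/O node while computation resumes. All names here (LocalCheckpoint, the temporary directories standing in for node-local non-volatile memory and the IO node) are invented for illustration and are not taken from the patent.

    # Minimal sketch of the two-phase checkpoint pattern: halt briefly,
    # snapshot locally, then drain to the IO node while compute continues.
    import os
    import pickle
    import shutil
    import tempfile
    import threading

    class LocalCheckpoint:
        def __init__(self, local_dir, io_node_dir):
            self.local_dir = local_dir        # stands in for node-local non-volatile memory
            self.io_node_dir = io_node_dir    # stands in for storage on the IO node

        def take(self, step, state):
            # Phase 1 (application halted by the caller): persist state locally.
            path = os.path.join(self.local_dir, f"ckpt_{step}.pkl")
            with open(path, "wb") as f:
                pickle.dump(state, f)
            # Phase 2: drain the local copy to the IO node asynchronously.
            t = threading.Thread(target=shutil.copy, args=(path, self.io_node_dir))
            t.start()
            return t

    if __name__ == "__main__":
        local, io_node = tempfile.mkdtemp(), tempfile.mkdtemp()
        ckpt = LocalCheckpoint(local, io_node)
        state = {"step": 0, "partial_sum": 0}
        drains = []
        for step in range(3):
            state["step"] = step
            state["partial_sum"] += step           # "computed data"
            drains.append(ckpt.take(step, state))  # brief halt, then resume
            # work on the next step overlaps with the background drain
        for t in drains:
            t.join()
        print(sorted(os.listdir(io_node)))         # ckpt_0.pkl, ckpt_1.pkl, ckpt_2.pkl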
    • 3. Invention Patent Application
    • Title: REMOTE DIRECT MEMORY ACCESS WITH REDUCED LATENCY
    • Publication No.: US20140201306A1
    • Publication Date: 2014-07-17
    • Application No.: US13996400
    • Filing Date: 2012-04-10
    • Inventors: Mark S. Hefty
    • IPC: H04L29/08
    • CPC: H04L67/1097; G06F13/28
    • Abstract: The present disclosure provides systems and methods for remote direct memory access (RDMA) with reduced latency. RDMA allows information to be transferred directly between memory buffers in networked devices without the need for substantial processing. While RDMA requires registration/deregistration for buffers that are not already preregistered, RDMA with reduced latency transfers information to intermediate buffers during registration/deregistration, utilizing time that would ordinarily have been wasted waiting for these processes to complete, and reducing the amount of information to transfer while the source buffer is registered. In this way the RDMA transaction may be completed more quickly. RDMA with reduced latency may be employed to expedite various information transactions. For example, RDMA with reduced latency may be utilized to stream information within a device, or may be used to transfer information from an information source external to the device directly to an application buffer. (See the sketch below.)
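As a rough illustration of the staging idea in this abstract, the following Python sketch overlaps a simulated slow buffer-registration step with chunked copies through an intermediate buffer, so that less data remains to transfer once registration completes. The timings, function names, and the "wire" byte buffer are invented for the sketch; no real RDMA verbs API is used.

    # Toy illustration of the latency-hiding idea: while the (simulated) slow
    # registration of the source buffer is in flight, data is pushed through an
    # intermediate copy, so less remains to send once registration completes.
    import threading
    import time

    def register(buf, done):
        time.sleep(0.05)              # stand-in for memory-registration latency
        done.set()

    def transfer_with_staging(source, chunk=4096):
        wire = bytearray()            # stands in for the network / destination buffer
        registered = threading.Event()
        threading.Thread(target=register, args=(source, registered)).start()

        offset = 0
        while not registered.is_set() and offset < len(source):
            staged = source[offset:offset + chunk]   # copy via intermediate buffer
            wire += staged
            offset += len(staged)

        registered.wait()
        wire += source[offset:]       # remainder sent "directly" once registered
        return bytes(wire)

    if __name__ == "__main__":
        payload = bytes(range(256)) * 64
        assert transfer_with_staging(payload) == payload
        print("transferred", len(payload), "bytes")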
    • 4. Granted Invention Patent
    • Title: Method and systems for flow control of transmissions over channel-based switched fabric connections
    • Publication No.: US06735174B1
    • Publication Date: 2004-05-11
    • Application No.: US09537396
    • Filing Date: 2000-03-29
    • Inventors: Mark S. Hefty; Jerrie L. Coffman
    • IPC: H04J1/16
    • CPC: H04L49/552; H04L47/10; H04L47/33; H04L47/39; H04L49/506
    • Abstract: Methods and systems for flow control over channel-based switched fabric connections between a first side and a second side. At least one posted receive buffer is stored in a receive buffer queue at the first side. A number of credits is incremented based on the at least one posted receive buffer. The second side is notified of the number of credits. A number of send credits is incremented at the second side based on the number of credits. A message is sent from the second side to the first side if the number of send credits is larger than or equal to two, or the number of send credits is equal to one and a second number of credits is larger than or equal to one. The second number of credits is based on at least one second posted receive buffer at the second side. Therefore, communication of messages between the first side and the second side is prevented from deadlocking. (See the sketch below.)
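The credit rule quoted in this abstract can be captured in a few lines of Python: a side may spend its last send credit only if it has itself posted at least one receive buffer, so a credit update can always flow back and the connection cannot deadlock. The Endpoint class and its fields below are illustrative placeholders, not the patent's data structures.

    # Sketch of the deadlock-avoiding send rule: send if send_credits >= 2, or
    # send_credits == 1 and we have posted at least one receive buffer ourselves.
    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class Endpoint:
        send_credits: int = 0                       # credits granted by the peer
        posted_receives: deque = field(default_factory=deque)
        inbox: deque = field(default_factory=deque)

        def post_receive(self, buf):
            self.posted_receives.append(buf)

        def can_send(self):
            # The "second number of credits" is our own count of posted receives.
            return self.send_credits >= 2 or (
                self.send_credits == 1 and len(self.posted_receives) >= 1)

    def grant_credits(granter, peer):
        # The peer learns how many receive buffers the granter has posted.
        peer.send_credits = len(granter.posted_receives)

    def send(sender, receiver, msg):
        if not sender.can_send():
            return False
        buf = receiver.posted_receives.popleft()    # consume one receive buffer
        buf[:len(msg)] = msg
        receiver.inbox.append(buf)
        sender.send_credits -= 1
        return True

    if __name__ == "__main__":
        first, second = Endpoint(), Endpoint()
        first.post_receive(bytearray(64))           # first side posts one buffer
        grant_credits(first, second)                # second side now has 1 send credit
        print(send(second, first, b"hi"))           # False: last credit is held back
        second.post_receive(bytearray(64))          # a reply/credit-update path now exists
        print(send(second, first, b"hi"))           # True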
    • 6. Granted Invention Patent
    • Title: GID capable switching in an infiniband fabric
    • Publication No.: US09288160B2
    • Publication Date: 2016-03-15
    • Application No.: US13994154
    • Filing Date: 2011-08-23
    • Inventors: Mark S. Hefty
    • IPC: H04L12/28; H04L12/947; H04L12/701
    • CPC: H04L49/25; H04L45/00
    • Abstract: Methods, systems, and apparatus for extending the size of Infiniband subnets using GID switching in an Infiniband fabric. An Infiniband subnet is defined to include multiple local identifier (LID) domains, each including multiple nodes interconnected via one or more LID switches. In turn, the LID domains are interconnected via one or more GID switches. Messages may be transferred between nodes in a given LID domain using LID switches in the domain. Messages may be transferred between nodes in separate LID domains by routing the messages via one or more GID switches. In various embodiments, GID switches may be implemented to also operate as LID switches and perform routing based on selected packet header fields. (See the sketch below.)
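The two-level routing scheme in this abstract is sketched below in Python: a LID switch forwards within its own LID domain using the destination LID and hands anything else to its uplink, while a GID switch routes between domains on the destination's domain/GID prefix. The packet layout, port names, and table shapes are invented for illustration.

    # Sketch of two-level forwarding: LID switches route within a domain,
    # GID switches route between domains.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Packet:
        dest_domain: int   # stands in for the GID prefix / domain portion
        dest_lid: int      # local identifier within that domain

    class LidSwitch:
        def __init__(self, domain, lid_ports, uplink):
            self.domain = domain
            self.lid_ports = lid_ports       # LID -> local port
            self.uplink = uplink             # port toward a GID switch

        def route(self, pkt):
            if pkt.dest_domain == self.domain:
                return self.lid_ports[pkt.dest_lid]
            return self.uplink               # leave the domain via the GID switch

    class GidSwitch:
        def __init__(self, domain_ports):
            self.domain_ports = domain_ports # domain / GID prefix -> port

        def route(self, pkt):
            return self.domain_ports[pkt.dest_domain]

    if __name__ == "__main__":
        lid_sw = LidSwitch(domain=1, lid_ports={5: "p1", 6: "p2"}, uplink="up0")
        gid_sw = GidSwitch(domain_ports={1: "g1", 2: "g2"})
        print(lid_sw.route(Packet(dest_domain=1, dest_lid=6)))   # p2  (intra-domain)
        print(lid_sw.route(Packet(dest_domain=2, dest_lid=9)))   # up0 (toward GID switch)
        print(gid_sw.route(Packet(dest_domain=2, dest_lid=9)))   # g2  (inter-domain)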
    • 7. Invention Patent Application
    • Title: GID CAPABLE SWITCHING IN AN INFINIBAND FABRIC
    • Publication No.: US20130259033A1
    • Publication Date: 2013-10-03
    • Application No.: US13994154
    • Filing Date: 2011-08-23
    • Inventors: Mark S. Hefty
    • IPC: H04L12/70
    • CPC: H04L49/25; H04L45/00
    • Abstract: Methods, systems, and apparatus for extending the size of Infiniband subnets using GID switching in an Infiniband fabric. An Infiniband subnet is defined to include multiple local identifier (LID) domains, each including multiple nodes interconnected via one or more LID switches. In turn, the LID domains are interconnected via one or more GID switches. Messages may be transferred between nodes in a given LID domain using LID switches in the domain. Messages may be transferred between nodes in separate LID domains by routing the messages via one or more GID switches. In various embodiments, GID switches may be implemented to also operate as LID switches and perform routing based on selected packet header fields.
    • 9. Granted Invention Patent
    • Title: Methods and system for message resource pool with asynchronous and synchronous modes of operation
    • Publication No.: US06553438B1
    • Publication Date: 2003-04-22
    • Application No.: US09556318
    • Filing Date: 2000-04-24
    • Inventors: Jerrie L. Coffman; Mark S. Hefty; Fabian S. Tillier
    • IPC: G06F3/00
    • CPC: G06F9/544
    • Abstract: Methods and system for a message resource pool with asynchronous and synchronous modes of operation. One or more buffers, descriptors, and message elements are allocated for a user. Each element is associated with one descriptor and at least one buffer. The allocation is performed by the message resource pool. The buffers and the descriptors are registered with a unit management function by the message resource pool. Control of an element and its associated descriptor and at least one buffer is passed from the message resource pool to the user upon request by the user. Control of the element and its associated descriptor and at least one buffer is returned from the user to the message resource pool once the user has finished using them. (See the sketch below.)
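A minimal Python sketch of the pool pattern in this abstract follows: elements that tie a descriptor to a buffer are allocated (and optionally "registered") up front, control of an element passes to the user on get(), and it returns to the pool on put(). The register_buffers callback and the field names are hypothetical placeholders, not the patent's interfaces.

    # Sketch of a message resource pool: pre-allocate elements, lend them out,
    # take them back for reuse.
    from collections import deque
    from dataclasses import dataclass

    @dataclass
    class Element:
        descriptor: dict
        buffer: bytearray

    class MessageResourcePool:
        def __init__(self, count, buf_size, register_buffers=None):
            self.free = deque(
                Element({"length": 0}, bytearray(buf_size)) for _ in range(count))
            if register_buffers:                   # e.g. pin/register with a NIC
                register_buffers([e.buffer for e in self.free])

        def get(self):
            return self.free.popleft()             # control passes to the user

        def put(self, elem):
            elem.descriptor["length"] = 0          # reset before reuse
            self.free.append(elem)                 # control returns to the pool

    if __name__ == "__main__":
        pool = MessageResourcePool(count=4, buf_size=64,
                                   register_buffers=lambda bufs: None)
        e = pool.get()
        msg = b"hello"
        e.buffer[:len(msg)] = msg
        e.descriptor["length"] = len(msg)
        pool.put(e)                                # element is available again
        print(len(pool.free), "free elements")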
    • 10. Invention Patent Application
    • Title: EFFICIENT DISTRIBUTION OF SUBNET ADMINISTRATION DATA OVER AN RDMA NETWORK
    • Publication No.: US20130262613A1
    • Publication Date: 2013-10-03
    • Application No.: US13850339
    • Filing Date: 2013-03-26
    • Inventors: Mark S. Hefty
    • IPC: G06F15/167
    • CPC: G06F15/167; G06F15/17331; H04L41/04
    • Abstract: One embodiment provides a method for receiving subnet administration (SA) data using a remote direct memory access (RDMA) transfer. The method includes formatting, by a network node element, an SA data query with an RDMA-capable flag; configuring, by the network node element, a reliably-connected queue pair (RCQP) to receive an RDMA transfer from a subnet manager in communication with the network node element on an RDMA-capable network; and allocating, by the network node element, an RDMA write target buffer to receive the SA data using an RDMA transfer from the subnet manager in response to the SA data query. (See the sketch below.)
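The client-side steps listed in this abstract are mirrored in the schematic Python sketch below: the SA query carries an RDMA-capable flag, a stand-in for the reliably-connected queue pair exposes a preallocated write-target buffer, and a subnet-manager stub fills that buffer as if by a single RDMA write. The SubnetManagerStub, field names, and addresses are all invented; no real verbs API or SA wire format is implied.

    # Schematic sketch: mark the SA query RDMA-capable, expose a write-target
    # buffer on a reliably connected QP, and let the SM "RDMA-write" into it.
    from dataclasses import dataclass, field

    @dataclass
    class SAQuery:
        attribute: str
        rdma_capable: bool = False
        target_addr: int = 0
        target_len: int = 0

    @dataclass
    class ReliablyConnectedQP:
        # Regions the subnet manager is allowed to write into.
        write_targets: dict = field(default_factory=dict)

        def expose_write_target(self, addr, buf):
            self.write_targets[addr] = buf

    class SubnetManagerStub:
        def handle(self, query, qp):
            records = b"path-record-1;path-record-2"   # placeholder SA payload
            if query.rdma_capable and query.target_len >= len(records):
                # One bulk "RDMA write" into the node's preallocated buffer,
                # instead of many small reply messages (non-RDMA path omitted).
                qp.write_targets[query.target_addr][:len(records)] = records

    if __name__ == "__main__":
        qp = ReliablyConnectedQP()
        target = bytearray(4096)                       # RDMA write target buffer
        qp.expose_write_target(0x1000, target)
        query = SAQuery("PathRecord", rdma_capable=True,
                        target_addr=0x1000, target_len=len(target))
        SubnetManagerStub().handle(query, qp)
        print(bytes(target[:32]).rstrip(b"\x00"))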