    • 81. Granted invention patent
    • Title: Method of data communications with reduced latency
    • Publication: US08578068B2 (2013-11-05)
    • Application: US12947520 (filed 2010-11-16)
    • Inventors: Michael A. Blocksome; Jeffrey J. Parker
    • IPC: G06F3/00; G06F15/167
    • CPC: G06F13/28
    • Abstract: Data communications with reduced latency, including: writing, by a producer, a descriptor and message data into at least two descriptor slots of a descriptor buffer, the descriptor buffer comprising allocated computer memory segmented into descriptor slots, each descriptor slot having a fixed size, the descriptor buffer having a header pointer that identifies a next descriptor slot to be processed by a DMA controller, the descriptor buffer having a tail pointer that identifies a descriptor slot for entry of a next descriptor in the descriptor buffer; recording, by the producer, in the descriptor a value signifying that message data has been written into descriptor slots; and setting, by the producer, in dependence upon the recorded value, a tail pointer to point to a next open descriptor slot.
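The descriptor-ring handshake the abstract describes (fixed-size slots, a head pointer for the DMA controller, a tail pointer published by the producer only after the write completes) can be sketched as follows. This is an illustrative Python model under assumed names (`DescriptorBuffer`, `produce`, `consume`), not the patented implementation.

```python
class DescriptorBuffer:
    """Ring of fixed-size descriptor slots shared by a producer and a DMA engine."""

    def __init__(self, num_slots, slot_size):
        self.slots = [None] * num_slots  # allocated memory, segmented into slots
        self.slot_size = slot_size
        self.head = 0  # next slot the DMA controller will process
        self.tail = 0  # next open slot for the producer

    def produce(self, descriptor, message_data):
        """Write a descriptor plus message data into >= 2 slots, then publish via tail."""
        # Split the message payload into slot-sized chunks.
        chunks = [message_data[i:i + self.slot_size]
                  for i in range(0, len(message_data), self.slot_size)] or [b""]
        # Record in the descriptor a value signifying that message data
        # has been written into descriptor slots (here: the slot count).
        descriptor = dict(descriptor, payload_slots=len(chunks))
        slots_needed = 1 + len(chunks)
        self.slots[self.tail] = descriptor
        for i, chunk in enumerate(chunks, start=1):
            self.slots[(self.tail + i) % len(self.slots)] = chunk
        # Only after the data is in place is the tail advanced, in dependence
        # upon the recorded value, to point at the next open descriptor slot.
        self.tail = (self.tail + slots_needed) % len(self.slots)

    def consume(self):
        """DMA side: process the descriptor at head together with its payload slots."""
        if self.head == self.tail:
            return None
        descriptor = self.slots[self.head]
        n = descriptor["payload_slots"]
        payload = b"".join(self.slots[(self.head + i) % len(self.slots)]
                           for i in range(1, n + 1))
        self.head = (self.head + 1 + n) % len(self.slots)
        return descriptor, payload
```

Publishing the tail pointer last is the load-bearing step: the DMA controller can poll `head != tail` and never observe a half-written descriptor.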
    • 82. Granted invention patent
    • Title: Data communications in a parallel active messaging interface of a parallel computer
    • Publication: US08572629B2 (2013-10-29)
    • Application: US12963694 (filed 2010-12-09)
    • Inventors: Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • IPC: G06F9/46
    • CPC: G06F9/546
    • Abstract: Data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.
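A minimal sketch of the endpoint-and-instruction-type dispatch the abstract describes. The `Endpoint` fields mirror the client/context/task specifications; the instruction types (`SEND`, `PUT`) and all names are illustrative assumptions, not IBM's actual PAMI API.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """A PAMI-style endpoint: data-communications parameters for one thread of execution."""
    client: str   # client specification
    context: int  # context specification
    task: int     # task (compute-node rank) specification
    inbox: list = field(default_factory=list)

def transmit(origin, target, instruction):
    """Transfer data from origin to target in accordance with the instruction type."""
    kind = instruction["type"]
    data = instruction["data"]
    if kind == "SEND":   # two-sided send: append payload to the target's inbox
        target.inbox.append(data)
    elif kind == "PUT":  # one-sided put: insert payload at a given offset
        offset = instruction["offset"]
        target.inbox[offset:offset] = [data]
    else:
        raise ValueError(f"unknown instruction type: {kind}")

origin = Endpoint(client="app", context=0, task=0)
target = Endpoint(client="app", context=0, task=1)
transmit(origin, target, {"type": "SEND", "data": "payload"})
```

The point of the dispatch-by-type structure is that the origin endpoint's transmission strategy is chosen per instruction, not fixed per connection.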
    • 87. Granted invention patent
    • Title: Performing a deterministic reduction operation in a compute node organized into a branched tree topology
    • Publication: US08489859B2 (2013-07-16)
    • Application: US12790037 (filed 2010-05-28)
    • Inventors: Charles J. Archer; Michael A. Blocksome; Joseph D. Ratterman; Brian E. Smith
    • IPC: G06F9/00
    • CPC: G06F15/76; G06F15/17318
    • Abstract: Performing a deterministic reduction operation in a parallel computer that includes compute nodes, each of which includes computer processors and a CAU (Collectives Acceleration Unit) that couples computer processors to one another for data communications, including organizing processors and a CAU into a branched tree topology in which the CAU is a root and the processors are children; receiving, from each of the processors in any order, dummy contribution data, where each processor is restricted from sending any other data to the root CAU prior to receiving an acknowledgement of receipt from the root CAU; sending, by the root CAU to the processors in the branched tree topology, in a predefined order, acknowledgements of receipt of the dummy contribution data; receiving, by the root CAU from the processors in the predefined order, the processors' contribution data to the reduction operation; and reducing, by the root CAU, the processors' contribution data.
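The two-phase handshake in the abstract — dummy contributions arriving in any order, acknowledgements (and therefore real contributions) issued in a predefined order — is what makes the reduction deterministic. A small simulation, with hypothetical names and summation standing in for the reduction operator:

```python
import random

def deterministic_reduce(contributions, predefined_order):
    """Simulate the root CAU; contributions maps child id -> value.

    Returns (reduction result, order in which real contributions were received).
    """
    # Phase 1: dummy contribution data may arrive from children in any order.
    arrival = list(contributions)
    random.shuffle(arrival)                 # nondeterministic arrival order
    assert set(arrival) == set(contributions)  # root has seen every child's dummy
    # Phase 2: the root acknowledges receipt in the predefined order; each
    # child is restricted from sending its real contribution until acked,
    # so the receive order below is fixed regardless of phase-1 timing.
    receive_order = []
    total = 0
    for child in predefined_order:
        receive_order.append(child)     # ack child; child sends real data
        total += contributions[child]   # reduce (here: sum) as data arrives
    return total, receive_order
```

With a non-associative operator such as floating-point addition, pinning the receive order is exactly what makes repeated runs bitwise reproducible.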
    • 88. Granted invention patent
    • Title: Performing a local reduction operation on a parallel computer
    • Publication: US08458244B2 (2013-06-04)
    • Application: US13585993 (filed 2012-08-15)
    • Inventors: Michael A. Blocksome; Daniel A. Faraj
    • IPC: G06F15/76; G06F15/16; G06F9/02; G06F12/00
    • CPC: G06F15/17387; G06F15/17318
    • Abstract: A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
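The buffer staging in the abstract can be approximated sequentially (the patent performs these copies and reductions in parallel across the four cores). All names and the chunk size are illustrative assumptions, and the reduction operator is taken to be element-wise addition:

```python
CHUNK = 2  # chunk size used for interleaving (illustrative)

def interleave(buf_a, buf_b, chunk=CHUNK):
    """Copy two equal-length buffers into one shared buffer in alternating chunks."""
    out = []
    for i in range(0, len(buf_a), chunk):
        out.extend(buf_a[i:i + chunk])  # chunk from reduction core A
        out.extend(buf_b[i:i + chunk])  # chunk from reduction core B
    return out

def local_reduce(red_a, red_b, net_write, net_read):
    """Element-wise sum of the four cores' input buffers via shared-memory staging."""
    shared_interleaved = interleave(red_a, red_b)  # interleaved buffer in shared memory
    shared_write = list(net_write)  # copied by one reduction core
    shared_read = list(net_read)    # copied by the other reduction core
    result = []
    for i in range(len(red_a)):
        # Each reduction core would walk every other interleaved chunk;
        # here we index both cores' chunks from one loop.
        block, off = divmod(i, CHUNK)
        a_val = shared_interleaved[2 * block * CHUNK + off]
        b_val = shared_interleaved[(2 * block + 1) * CHUNK + off]
        result.append(a_val + b_val + shared_write[i] + shared_read[i])
    return result
```

Interleaving the two reduction buffers lets each core reduce "every other interleaved chunk" without the two cores contending for the same cache lines.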
    • 90. Granted invention patent
    • Title: Remote direct memory access
    • Publication: US08325633B2 (2012-12-04)
    • Application: US11740361 (filed 2007-04-26)
    • Inventors: Charles J. Archer; Michael A. Blocksome
    • IPC: H04B1/44
    • CPC: G06F13/28
    • Abstract: Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying a data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
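The request-to-send / acknowledge / direct-put sequence can be sketched as below. Class and function names are hypothetical, and the collective acknowledgement is simplified to checking that every target reported itself prepared:

```python
class TargetNode:
    """One target compute node's DMA engine and memory."""

    def __init__(self):
        self.memory = {}
        self.base = None

    def on_request_to_send(self, storage_ref, length):
        """Prepare to store data: assign a base address for the data storage reference."""
        self.base = (storage_ref, 0)  # base storage address (offset 0 here)
        self.memory[storage_ref] = bytearray(length)
        return "prepared"

def rdma_broadcast(data, storage_ref, targets):
    """Origin DMA engine: request-to-send, await acknowledgement, then direct put."""
    # The origin transmits a request-to-send message to every target DMA engine,
    # specifying the data storage reference and data length.
    acks = [t.on_request_to_send(storage_ref, len(data)) for t in targets]
    # The acknowledgement confirms that all targets are prepared to receive.
    assert all(a == "prepared" for a in acks)
    # Single direct put: the origin writes the data straight into each
    # target's prepared storage, with no further per-target negotiation.
    for t in targets:
        t.memory[storage_ref][:] = data
```

Pre-assigning the base storage address is what allows the final transfer to be a single one-sided put rather than a per-chunk send/receive exchange.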