    • 3. Invention Grant
    • Method to determine retries for parallel ECC correction in a pipeline
    • US06654925B1
    • 2003-11-25
    • US09650153
    • 2000-08-29
    • Patrick J. Meaney; Pak-kin Mak
    • G11C29/00
    • G06F12/0855
    • Disclosed is an apparatus and means for searching a cache directory with full ECC support without the latency of the ECC logic on every directory search. The apparatus allows for bypassing the ECC logic in an attempt to search the directory. When a correctable error occurs which causes the search results to differ, a retry will occur with the corrected results used on the subsequent pass. This allows for the RAS characteristics of full ECC but the delay of the ECC path will only be experienced when a correctable error occurs, thus reducing average latency of the directory pipeline significantly. Disclosed is also a means for notifying the requester of a retry event and the ability to retry the search in the event that the directory is allowed to change between passes.
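    • A minimal C sketch of the bypass-then-retry idea in the abstract above (not the patented implementation; the parity check and every name here are hypothetical stand-ins for real SEC/DED ECC logic):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct dir_entry {
          uint32_t tag;   /* address tag held in the cache directory           */
          uint8_t  check; /* check bit(s); real hardware would use SEC/DED ECC */
      };

      /* Stand-in for the ECC syndrome check: even parity over the tag. */
      static bool entry_is_clean(const struct dir_entry *e)
      {
          uint32_t p = e->tag;
          p ^= p >> 16; p ^= p >> 8; p ^= p >> 4; p ^= p >> 2; p ^= p >> 1;
          return (p & 1u) == (e->check & 1u);
      }

      /* Stand-in for the ECC correction path (the slower logic that the
       * first pass bypasses).  A real SEC code would flip the failing bit. */
      static uint32_t corrected_tag(const struct dir_entry *e)
      {
          return e->tag;
      }

      /* First pass compares the raw tag with ECC bypassed; only when the
       * entry fails its check is the corrected tag used on a retry pass.   */
      static bool directory_search(const struct dir_entry *e, uint32_t want,
                                   bool *retried)
      {
          *retried = false;
          bool hit = (e->tag == want);            /* fast path, ECC bypassed     */
          if (!entry_is_clean(e)) {
              *retried = true;                    /* requester is told to retry  */
              hit = (corrected_tag(e) == want);   /* second pass, corrected data */
          }
          return hit;
      }

      int main(void)
      {
          struct dir_entry e = { .tag = 0x1234, .check = 1 }; /* 0x1234 has odd parity */
          bool retried;
          bool hit = directory_search(&e, 0x1234, &retried);
          printf("hit=%d retried=%d\n", hit, retried);
          return 0;
      }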
    • 5. Invention Grant
    • Least recently used (LRU) compartment capture in a cache memory system
    • US08180970B2
    • 2012-05-15
    • US12035906
    • 2008-02-22
    • Arthur J. O'Neill, Jr.; Michael F. Fee; Pak-kin Mak
    • G06F12/00; G06F13/00; G06F13/28
    • G06F12/123; G06F12/0859
    • A two pipe pass method for least recently used (LRU) compartment capture in a multiprocessor system. The method includes receiving a fetch request via a requesting processor and accessing a cache directory based on the received fetch request, performing a first pipe pass by determining whether a fetch hit or a fetch miss has occurred in the cache directory, and determining an LRU compartment associated with a specified congruence class of the cache directory based on the fetch request received, when it is determined that a fetch miss has occurred, and performing a second pipe pass by using the LRU compartment determined and the specified congruence class to access the cache directory and to select an LRU address to be cast out of the cache directory.
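    • A minimal C sketch of the two-pipe-pass flow described above, using a toy 4-way, 16-set directory; the structures and names are hypothetical, not the patented design:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define WAYS 4
      #define SETS 16

      struct dir_set {
          uint32_t tag[WAYS];
          bool     valid[WAYS];
          uint8_t  lru_rank[WAYS];   /* 0 = most recently used ... WAYS-1 = LRU */
      };

      static struct dir_set directory[SETS];

      static unsigned congruence_class(uint32_t addr) { return addr % SETS; }

      /* Pass 1: search the congruence class for the requested tag; on a miss,
       * also capture which compartment (way) is least recently used.          */
      static bool pipe_pass_1(uint32_t addr, unsigned *lru_way)
      {
          struct dir_set *s = &directory[congruence_class(addr)];
          for (unsigned w = 0; w < WAYS; w++)
              if (s->valid[w] && s->tag[w] == addr)
                  return true;                       /* fetch hit                */
          for (unsigned w = 0; w < WAYS; w++)
              if (s->lru_rank[w] == WAYS - 1)
                  *lru_way = w;                      /* LRU compartment captured */
          return false;                              /* fetch miss               */
      }

      /* Pass 2: re-access the same congruence class with the captured
       * compartment and select the address to be cast out.                    */
      static uint32_t pipe_pass_2(uint32_t addr, unsigned lru_way)
      {
          return directory[congruence_class(addr)].tag[lru_way];
      }

      int main(void)
      {
          static const uint8_t rank[WAYS] = { 1, 2, 3, 0 };   /* way 2 is LRU */
          struct dir_set *s = &directory[3];
          for (unsigned w = 0; w < WAYS; w++) {
              s->tag[w]      = 0x100 + w * SETS + 3;  /* all tags map to set 3 */
              s->valid[w]    = true;
              s->lru_rank[w] = rank[w];
          }
          unsigned lru_way = 0;
          uint32_t req = 0x203;                       /* maps to set 3, will miss */
          if (!pipe_pass_1(req, &lru_way))
              printf("miss: cast out 0x%x from way %u\n",
                     (unsigned)pipe_pass_2(req, lru_way), lru_way);
          return 0;
      }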
    • 8. Invention Application
    • Coherency management for a "switchless" distributed shared memory computer system
    • US20060184750A1
    • 2006-08-17
    • US11402599
    • 2006-04-12
    • Michael Blake; Pak-kin Mak; Adrian Seigler; Gary VanHuben
    • G06F12/00
    • G06F12/0813; G06F12/0831
    • A shared memory symmetrical processing system including a plurality of nodes each having a system control element for routing internodal communications. A first ring and a second ring interconnect the plurality of nodes, wherein data in said first ring flows in opposite directions with respect to said second ring. A receiver receives a plurality of incoming messages via the first or second ring and merges a plurality of incoming message responses with a local outgoing message response to provide a merged response. Each of the plurality of nodes includes any combination of the following: at least one processor, cache memory, a plurality of I/O adapters, and main memory. The system control element includes a plurality of controllers for maintaining coherency in the system.
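    • A minimal C sketch of the response-merging idea in the abstract above, assuming a toy ordering of coherency responses; the enum and merge rule are hypothetical, not the patented protocol:

      #include <stdio.h>

      /* Partial coherency responses, ordered so the "strongest" wins a merge. */
      enum resp { RESP_MISS = 0, RESP_SHARED = 1, RESP_EXCLUSIVE = 2 };

      /* Each node merges the incoming partial response with its own local
       * outgoing response before forwarding it to the next node on the ring. */
      static enum resp merge(enum resp incoming, enum resp local)
      {
          return incoming > local ? incoming : local;
      }

      int main(void)
      {
          /* local response each remote node would contribute for some line */
          enum resp local[4] = { RESP_MISS, RESP_SHARED, RESP_MISS, RESP_EXCLUSIVE };

          /* ring 0 carries the message one way round; ring 1 the other way */
          enum resp ring0 = RESP_MISS, ring1 = RESP_MISS;
          for (int n = 1; n <= 3; n++)  ring0 = merge(ring0, local[n]);
          for (int n = 3; n >= 1; n--)  ring1 = merge(ring1, local[n]);

          printf("merged response ring0=%d ring1=%d\n", ring0, ring1);
          return 0;
      }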
    • 9. Invention Grant
    • Bus protocol for a switchless distributed shared memory computer system
    • US06988173B2
    • 2006-01-17
    • US10435878
    • 2003-05-12
    • Michael A. Blake; Steven M. German; Pak-kin Mak; Adrian E. Seigler; Gary A. Van Huben
    • G06F12/00
    • G06F12/0831; G06F12/0813
    • A bus protocol is disclosed for a symmetric multiprocessing computer system consisting of a plurality of nodes, each of which contains a multitude of processors, I/O devices, main memory and a system controller comprising an integrated switch with a top level cache. The nodes are interconnected by a dual concentric ring topology. The bus protocol is used to exchange snoop requests and addresses, data, coherency information and operational status between nodes in a manner that allows partial coherency results to be passed in parallel with a snoop request and address as an operation is forwarded along each ring. Each node combines its own coherency results with the partial coherency results it received prior to forwarding the snoop request, address and updated partial coherency results to the next node on the ring. The protocol allows each node in the system to see the final coherency results without requiring the requesting node to broadcast these results to all the other nodes in the system. The bus protocol also allows data to be returned on one of the two rings, with the ring selection determined by the relative placement of the source and destination nodes on each ring, in order to control latency and data bus utilization.
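    • A minimal C sketch of forwarding partial coherency results along with a snoop request so the final result is known without a separate broadcast; the message layout and placement rule are hypothetical, not the patented bus protocol:

      #include <stdint.h>
      #include <stdio.h>

      #define NODES 4

      /* A snoop message carries the address plus the coherency results
       * accumulated so far, so the final result emerges as it travels.    */
      struct snoop_msg {
          uint32_t addr;
          unsigned hit_vector;   /* bit n set if node n reported a hit     */
      };

      /* Hypothetical per-node lookup: does this node's cache hold the line? */
      static int node_has_line(unsigned node, uint32_t addr)
      {
          return (addr % NODES) == node;   /* toy placement rule for the demo */
      }

      int main(void)
      {
          struct snoop_msg msg = { .addr = 0x1006, .hit_vector = 0 };
          unsigned requester = 0;

          /* The request travels the ring; each node merges its own result
           * into the partial results before forwarding to the next node,
           * so the last hop (and the requester) already see the final word. */
          for (unsigned hop = 1; hop < NODES; hop++) {
              unsigned node = (requester + hop) % NODES;
              if (node_has_line(node, msg.addr))
                  msg.hit_vector |= 1u << node;
              printf("node %u forwards partial results 0x%x\n", node, msg.hit_vector);
          }
          printf("final coherency results back at requester: 0x%x\n", msg.hit_vector);
          return 0;
      }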
    • 10. Invention Grant
    • False exception for cancelled delayed requests
    • US06219758B1
    • 2001-04-17
    • US09047579
    • 1998-03-25
    • Jennifer Almoradie Navarro; Barry Watson Krumm; Chung-Lung Kevin Shum; Pak-kin Mak; Michael Fee
    • G06F12/00
    • G06F12/1054
    • A central processor uses virtual addresses to access data via cache logic including a DAT and ART, while the cache logic accesses data in the hierarchical storage subsystem using absolute addresses; a part of the first level of the cache memory includes a translator of virtual or real addresses to absolute addresses. When a request is sent for a data fetch and the requested data are not resident in the first level of cache, the request for data is delayed and may be forwarded to a lower level of said hierarchical memory, and a delayed request may result in cancellation of any process during a delayed request that has the ability to send back an exception. A delayed request may be rescinded if the central processor has reached an interruptible stage in its pipeline logic, at which point a false exception is forced, clearing all the wait states while the central processor ignores the false exception. Forcing of an exception occurs during dynamic address translation (DAT) or during access register translation (ART). Cancellation of a request-for-data signal to the storage subsystem is settable by the first hierarchical level of cache logic. A false exception signal to the first level cache is settable by the storage subsystem logic.
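    • A minimal C sketch of rescinding a delayed request by forcing a false exception that clears the wait state and is then ignored; the flags and control flow here are hypothetical, not the patented logic:

      #include <stdbool.h>
      #include <stdio.h>

      struct cpu {
          bool waiting;          /* processor is stalled on a delayed request */
          bool false_exception;  /* exception forced only to clear the wait   */
      };

      /* Storage-subsystem side: cancel the outstanding request and signal a
       * false exception back to the requester instead of returning data.     */
      static void cancel_delayed_request(struct cpu *c)
      {
          c->false_exception = true;
      }

      /* Processor side: an exception always clears wait states; a false one
       * is then ignored rather than handled.                                 */
      static void take_exception(struct cpu *c)
      {
          c->waiting = false;                 /* clear all wait states         */
          if (c->false_exception) {
              c->false_exception = false;     /* ignore: no handler is invoked */
              printf("false exception ignored; pipeline free to take interrupt\n");
          }
      }

      int main(void)
      {
          struct cpu c = { .waiting = true, .false_exception = false };
          /* processor reached an interruptible stage, so rescind the request */
          cancel_delayed_request(&c);
          take_exception(&c);
          printf("waiting=%d\n", c.waiting);
          return 0;
      }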