    • 1. Granted invention
    • Method of and system for performing differential lossless compression
    • Publication No.: US07375662B2
    • Publication date: 2008-05-20
    • Application No.: US10725008
    • Filing date: 2003-12-02
    • Inventor(s): Sophie Wilson; John Redford
    • Applicant(s): Sophie Wilson; John Redford
    • IPC: H03M7/00
    • CPC: H03M7/3088; H03M7/30
    • A method of decompressing data words of an instruction set includes: A. filling a primary dictionary with at least one primary data word of the instruction set, each of the at least one primary data word being stored in the primary dictionary in a location associated with a distinct primary dictionary index; B. filling at least one secondary dictionary with at least one difference bit stream, each of the at least one difference bit stream being stored in one of the at least one secondary dictionary in a location associated with a distinct secondary dictionary index; C. receiving a code word, the code word comprising: a. a header which identifies the primary dictionary and a specific one of the at least one secondary dictionary; b. a first bit stream; and c. a second bit stream; wherein the first bit stream comprises the distinct primary dictionary index and the second bit stream comprises the distinct secondary dictionary index; D. retrieving the primary data word stored at the location in the primary dictionary location associated with the distinct primary dictionary index; E. retrieving the difference bit stream stored at the location in the at least one secondary dictionary location associated with the distinct secondary dictionary index; and F. performing a logic operation on the primary data word and the difference bit stream to obtain a resultant data word that is not stored in either the at least one primary dictionary or the at least one secondary dictionary.
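The decompression scheme in the abstract above lends itself to a short illustration. Below is a minimal sketch in Python, assuming the logic operation in step F is XOR and that the header selects a secondary dictionary directly by index; the names (decompress_word, primary_dict, secondary_dicts) are illustrative, not taken from the patent.

```python
# Illustrative sketch of the decompression flow described in the abstract above.
# Assumptions (not stated in the patent text): the logic operation is XOR, and
# the code word header directly selects one secondary dictionary by index.

def decompress_word(code_word, primary_dict, secondary_dicts):
    """Rebuild a data word that is stored in neither dictionary."""
    header, primary_index, secondary_index = code_word

    # The header identifies which secondary dictionary holds the difference bits.
    secondary_dict = secondary_dicts[header]

    # Steps D/E: look up the primary data word and the difference bit stream.
    primary_word = primary_dict[primary_index]
    difference = secondary_dict[secondary_index]

    # Step F: combine them with a logic operation (XOR assumed here) to
    # recover the original data word.
    return primary_word ^ difference


if __name__ == "__main__":
    primary = [0b1010_1100, 0b1111_0000]          # primary dictionary
    secondaries = [[0b0000_0011], [0b0000_1100]]  # one difference stream each

    # Code word: header selects secondary dict 1, primary index 0, secondary index 0.
    word = decompress_word((1, 0, 0), primary, secondaries)
    print(bin(word))  # 0b10100000 -> a word not stored in either dictionary
```

The point of the scheme, as the abstract states, is that the resultant data word itself is never stored: only a shared primary word and a small difference stream are.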
    • 2. Invention application
    • Loader module, and method for loading program code into a memory
    • Publication No.: US20050193384A1
    • Publication date: 2005-09-01
    • Application No.: US10896053
    • Filing date: 2004-07-22
    • Inventor(s): John Redford
    • Applicant(s): John Redford
    • IPC: G06F9/44; G06F9/445
    • CPC: G06F9/44521
    • A loader module for loading program code into a memory is described, whereby the memory may be partially defective, with non-defective parts of the memory being indicated by diagnostic information. The loader module is adapted for loading program code, in accordance with the diagnostic information, into non-defective parts of the memory, and for relinking the program code in accordance with the memory locations it has been loaded to. Furthermore, a method for loading program code into a memory is described. The method comprises the following steps which may be carried out in arbitrary order: loading program code, in accordance with diagnostic information, into non-defective parts of the memory, and relinking the program code in accordance with the memory locations it has been loaded to.
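As a rough illustration of the loading strategy described in the abstract above, the sketch below models the diagnostic information as a list of non-defective address ranges and reduces "relinking" to recording the base address each segment actually received; all names (load_program, good_ranges, the segment list) are hypothetical.

```python
# Illustrative sketch of loading program code only into non-defective parts of
# a partially defective memory, as described in the abstract above.

def load_program(segments, good_ranges):
    """Place each (name, size) segment into a non-defective range and
    return a relocation map of name -> load address."""
    relocation_map = {}
    cursor = 0   # index of the current non-defective range
    offset = 0   # next free byte within that range

    for name, size in segments:
        # Advance past ranges that cannot hold the segment.
        while cursor < len(good_ranges):
            start, end = good_ranges[cursor]
            if start + offset + size <= end:
                break
            cursor += 1
            offset = 0
        else:
            raise MemoryError(f"no non-defective space for segment {name!r}")

        # "Relink": remember where this segment actually ended up.
        relocation_map[name] = good_ranges[cursor][0] + offset
        offset += size

    return relocation_map


if __name__ == "__main__":
    # Diagnostic information: two usable windows in a partially defective memory.
    good = [(0x1000, 0x1400), (0x2000, 0x3000)]
    segs = [(".text", 0x300), (".data", 0x200), (".bss", 0x100)]
    print(load_program(segs, good))
    # {'.text': 4096, '.data': 8192, '.bss': 8704}
```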
    • 3. Granted invention
    • Method of and system for performing differential lossless compression
    • Publication No.: US06720894B2
    • Publication date: 2004-04-13
    • Application No.: US10232728
    • Filing date: 2002-09-03
    • Inventor(s): Sophie Wilson; John Redford
    • Applicant(s): Sophie Wilson; John Redford
    • IPC: H03M7/00
    • CPC: H03M7/3088; H03M7/30
    • A method of decompressing data words of an instruction set includes: A. filling a primary dictionary with at least one primary data word of the instruction set, each of the at least one primary data word being stored in the primary dictionary in a location associated with a distinct primary dictionary index; B. filling at least one secondary dictionary with at least one difference bit stream, each of the at least one difference bit stream being stored in one of the at least one secondary dictionary in a location associated with a distinct secondary dictionary index; C. receiving a code word, the code word comprising: a. a header which identifies the primary dictionary and a specific one of the at least one secondary dictionary; b. a first bit stream; and c. a second bit stream; wherein the first bit stream comprises the distinct primary dictionary index and the second bit stream comprises the distinct secondary dictionary index; D. retrieving the primary data word stored at the location in the primary dictionary location associated with the distinct primary dictionary index; E. retrieving the difference bit stream stored at the location in the at least one secondary dictionary location associated with the distinct secondary dictionary index; and F. performing a logic operation on the primary data word and the difference bit stream to obtain a resultant data word that is not stored in either the at least one primary dictionary or the at least one secondary dictionary.
    • 5. Invention application
    • Microprocessor with integrated high speed memory
    • Publication No.: US20050273577A1
    • Publication date: 2005-12-08
    • Application No.: US10857979
    • Filing date: 2004-06-02
    • Inventor(s): Sophie Wilson; John Redford
    • Applicant(s): Sophie Wilson; John Redford
    • IPC: G06F9/345; G06F9/38; G06F9/40
    • CPC: G06F9/30036; G06F9/30043; G06F9/3455
    • The present invention relates to the field of (micro)computer design and architecture, and in particular to microarchitecture associated with moving data values between a (micro)processor and memory components. Particularly, the present invention relates to a computer system with a processor architecture in which register addresses are generated with more than one execution channel controlled by one central processing unit with at least one load/store unit for loading and storing data objects, and at least one cache memory associated with the processor holding data objects accessed by the processor, wherein said processor's load/store unit contains a high speed memory directly interfacing said load/store unit to the cache. The present invention improves upon architectures with dual ported microprocessor implementations comprising two execution pipelines capable of two load/store data transactions per cycle. By including a cache memory inside the load/store unit, the processor is directly interfaced from its load/store units to the caches. Thus, the present invention accelerates data accesses and transactions from and to the load/store units of the processor and the data cache memory.
    • 6. Granted invention
    • Branching around conditional processing if states of all single instruction multiple datapaths are disabled and the computer program is non-deterministic
    • Publication No.: US06931518B1
    • Publication date: 2005-08-16
    • Application No.: US09724196
    • Filing date: 2000-11-28
    • Inventor(s): John Redford
    • Applicant(s): John Redford
    • IPC: G06F9/44; G06F15/00
    • CPC: G06F8/45
    • A method of determining whether datapaths executing in a computer program should execute conditional processing block includes determining whether processor enable (PE) states of all of the datapaths are disabled, and branching around the conditional processing if the PE states of all of the datapaths are disabled. Branching is not performed, even if the PE states of all of the datapaths are disabled, if the program is determined to be deterministic. That determination is made by evaluating the state of a deterministic bit. Instructions are also provided for carrying out the determining and branching operations. The instructions may also be combined with operations that maintain the PE states during conditional processing.
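A minimal sketch of the branch decision described in the abstract above, with the PE states modelled as a list of booleans and the deterministic bit as a plain flag; the names (run_conditional_block, pe_states, deterministic_bit) are illustrative only.

```python
# Illustrative sketch: branch around a conditional processing block when every
# SIMD datapath's PE (processor enable) state is disabled, unless the
# deterministic bit indicates the block must still be traversed.

def run_conditional_block(pe_states, deterministic_bit, block, *args):
    """Execute `block` per datapath, or skip it entirely when all PE states
    are disabled and the program is non-deterministic."""
    if not any(pe_states) and not deterministic_bit:
        # All datapaths disabled and timing need not be deterministic:
        # branch around the conditional processing block.
        return None

    # Otherwise the block is traversed; disabled datapaths simply produce
    # no result.
    return [block(lane, *args) if enabled else None
            for lane, enabled in enumerate(pe_states)]


if __name__ == "__main__":
    def double_lane(lane, data):
        return data[lane] * 2

    data = [1, 2, 3, 4]
    print(run_conditional_block([False, True, False, True], False, double_lane, data))
    # [None, 4, None, 8]
    print(run_conditional_block([False, False, False, False], False, double_lane, data))
    # None -> branched around the block
    print(run_conditional_block([False, False, False, False], True, double_lane, data))
    # [None, None, None, None] -> deterministic bit set, block still traversed
```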
    • 8. Granted invention
    • Data integrity checking
    • Publication No.: US07415654B2
    • Publication date: 2008-08-19
    • Application No.: US10928961
    • Filing date: 2004-08-30
    • Inventor(s): John Redford
    • Applicant(s): John Redford
    • IPC: G06F11/00; G11C29/00
    • CPC: G11C29/56
    • A tester unit for evaluating data integrity of a block of data is described. The tester unit comprises a checksum determination facility adapted for deriving a checksum value from a block of data stored in a memory, and a checksum evaluation facility adapted for comparing the derived checksum value with a predetermined checksum value, and for initiating a reload of the block in case the derived checksum value differs from the predetermined checksum value.
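A small sketch of the tester-unit behaviour described in the abstract above, assuming CRC-32 as the checksum and modelling the reload as a callback; the names (verify_block, reload_block) are hypothetical.

```python
# Illustrative sketch: derive a checksum for a block of data, compare it with
# the expected value, and initiate a reload when they differ.

import zlib


def verify_block(block, expected_checksum, reload_block):
    """Compare the derived checksum with the expected one and trigger a
    reload of the block when they differ."""
    derived = zlib.crc32(block)
    if derived != expected_checksum:
        return reload_block()   # initiate a reload of the block
    return block                # data integrity confirmed


if __name__ == "__main__":
    original = b"\x01\x02\x03\x04"
    good_checksum = zlib.crc32(original)

    corrupted = b"\x01\x02\xff\x04"
    restored = verify_block(corrupted, good_checksum, reload_block=lambda: original)
    print(restored == original)  # True -> mismatch detected, block reloaded
```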
    • 9. Invention application
    • Memory control system and method in which prefetch buffers are assigned uniquely to multiple burst streams
    • Publication No.: US20050253858A1
    • Publication date: 2005-11-17
    • Application No.: US10846995
    • Filing date: 2004-05-14
    • Inventor(s): Takahide Ohkami; John Redford
    • Applicant(s): Takahide Ohkami; John Redford
    • IPC: G06F12/08; G06F12/14; G06F13/16; G09G5/39
    • CPC: G06F13/1673; G06F12/0862; G06F2212/6022
    • In a prefetch buffering system and method, a pool of prefetch buffers are organized in such a manner that there is a tight connection between the buffer pool and the data streams of interest. In this manner, efficient prefetching of data from memory is achieved and the amount of required buffer space is reduced. A memory control system controls the reading of data from a memory. A plurality of buffers buffer data read from the memory. A buffer assignment unit assigns a plurality of data streams to the plurality of buffers. The buffer assignment unit assigns to each data stream a primary buffer and a secondary buffer of the plurality of buffers, such that upon receiving a data request from a first data stream, the primary buffer assigned to the first data stream contains fetch data of the data request and the secondary buffer assigned to the first data stream contains prefetch data of the data request.
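As a rough sketch of the buffer assignment described in the abstract above: each stream gets its own primary buffer (the demand-fetched line) and secondary buffer (the next line, prefetched), so a sequential request can be served without fetching from memory again. The class and method names (PrefetchBufferPool, request) are illustrative, not from the patent, and memory is modelled as a simple list of lines.

```python
# Illustrative sketch: assign every data stream a dedicated primary/secondary
# buffer pair, keeping demand-fetched data in the primary buffer and the next
# sequential line prefetched in the secondary buffer.

class PrefetchBufferPool:
    def __init__(self, memory_lines):
        self.memory = memory_lines
        # stream id -> {"primary": (addr, data), "secondary": (addr, data)}
        self.buffers = {}

    def request(self, stream_id, addr):
        """Serve a data request from the stream's primary buffer, keeping the
        next sequential line prefetched in its secondary buffer."""
        pair = self.buffers.setdefault(stream_id, {"primary": None, "secondary": None})

        if pair["secondary"] and pair["secondary"][0] == addr:
            # The request hits the prefetched line: promote it to primary.
            pair["primary"] = pair["secondary"]
        else:
            pair["primary"] = (addr, self.memory[addr])       # demand fetch

        # Prefetch the next sequential line into the secondary buffer.
        if addr + 1 < len(self.memory):
            pair["secondary"] = (addr + 1, self.memory[addr + 1])
        else:
            pair["secondary"] = None

        return pair["primary"][1]


if __name__ == "__main__":
    pool = PrefetchBufferPool(memory_lines=["A", "B", "C", "D"])
    print(pool.request("stream0", 0))  # "A" fetched on demand, "B" prefetched
    print(pool.request("stream0", 1))  # "B" served from the prefetched secondary buffer
    print(pool.request("stream1", 2))  # independent stream gets its own buffer pair
```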