    • 5. Granted Patent
    • Microprocessor with multiple operating modes dynamically configurable by a device driver based on currently running applications
    • US08566565B2
    • 2013-10-22
    • US12170591
    • 2008-07-10
    • Rodney E. Hooker; Colin Eddy; G. Glenn Henry
    • G06F17/00; G06F9/445
    • G06F1/3203; G06F9/3017; G06F9/30189; G06F9/3814; G06F9/383; G06F9/3836; G06F9/3842; G06F9/3844; G06F9/3846; G06F9/3855; G06F9/3869
    • A computing system includes a microprocessor that receives values for configuring operating modes thereof. A device driver monitors which software applications currently running on the microprocessor are in a predetermined list and responsively dynamically writes the values to the microprocessor to configure its operating modes. Examples of the operating modes the device driver may configure relate to the following: data prefetching; branch prediction; instruction cache eviction; instruction execution suspension; sizes of cache memories, reorder buffer, store/load/fill queues; hashing algorithms related to data forwarding and branch target address cache indexing; number of instruction translation, formatting, and issuing per clock cycle; load delay mechanism; speculative page tablewalks; instruction merging; out-of-order execution extent; caching of non-temporal hinted data; and serial or parallel access of an L2 cache and processor bus in response to an instruction cache miss.
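A minimal sketch of the driver-side flow described in the abstract above: a hook checks the currently running applications against a predetermined list and writes a per-application configuration word to the processor. The register address, the mode words, the process-enumeration stub, and all names (`kConfigMsr`, `kModeTable`, `reconfigure_processor`) are illustrative assumptions, not the interface defined by the patent.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical configuration register address; the abstract does not name
// the real register or its bit layout.
constexpr uint32_t kConfigMsr = 0x1440;

// Illustrative per-application mode words from the predetermined list: each
// bit would enable or tune one feature (data prefetching, branch prediction,
// cache sizing, and so on).
const std::unordered_map<std::string, uint64_t> kModeTable = {
    {"video_encoder", 0x13ull},
    {"database",      0x2Cull},
};

// Stub for OS process enumeration; a real driver would query the kernel.
std::vector<std::string> running_process_names() {
    return {"shell", "database"};
}

// Stub for the privileged write that pushes a mode word to the processor.
void write_config_register(uint32_t reg, uint64_t value) {
    std::printf("write register 0x%X = 0x%llX\n",
                (unsigned)reg, (unsigned long long)value);
}

// Driver hook: if any currently running application is in the predetermined
// list, write its mode word to the processor; otherwise restore defaults.
void reconfigure_processor() {
    uint64_t value = 0;  // default operating modes
    for (const std::string& name : running_process_names()) {
        auto it = kModeTable.find(name);
        if (it != kModeTable.end()) {
            value = it->second;
            break;  // first listed application wins in this sketch
        }
    }
    write_config_register(kConfigMsr, value);
}

int main() {
    reconfigure_processor();  // would run periodically or on process events
    return 0;
}
```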
    • 6. Granted Patent
    • Overlay instruction accessing unit and overlay instruction accessing method
    • US08286151B2
    • 2012-10-09
    • US12239070
    • 2008-09-26
    • Liang Chen; Kuan Feng; Wang Zheng; Min Zhu
    • G06F3/45; G06F15/00
    • G06F9/3802; G06F8/433; G06F9/3017; G06F9/30178; G06F9/382; G06F9/3846; G06F9/3885; G06F9/3891
    • The present invention provides an overlay instruction accessing unit and method, and a method and apparatus for compressing and storing a program. The overlay instruction accessing unit is used to execute a program stored in a memory in the form of a plurality of compressed program segments, and comprises: a buffer; a processing unit for issuing an instruction reading request, reading an instruction from the buffer, and executing the instruction; and a decompressing unit for reading a requested compressed instruction segment from the memory in response to the instruction reading request of the processing unit, decompressing the compressed instruction segment, and storing the decompressed instruction segment in the buffer, wherein while the processing unit is executing the instruction segment, the decompressing unit reads, according to a storage address of a compressed program segment to be invoked in a header corresponding to the instruction segment, a corresponding compressed instruction segment from the memory, decompresses the compressed instruction segment, and stores the decompressed instruction segment in the buffer for later use by the processing unit.
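A minimal sketch of the overlay flow described in the abstract above, with the hardware concurrency flattened into a sequential loop: while the current decompressed segment is executed, the segment named in its header is decompressed into the buffer for later use. The segment layout, the pass-through `decompress` stub, and all names are illustrative assumptions rather than the patented structure.

```cpp
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

// One compressed program segment as stored in memory. The header carries the
// storage location of the segment it will invoke next, so the decompressing
// unit can fetch that segment ahead of time. All names are illustrative.
struct CompressedSegment {
    int next_segment;              // index of the segment invoked next (-1: none)
    std::vector<uint8_t> payload;  // compressed instruction bytes
};

// Stand-in for the real decompressor: here it just copies the bytes through.
std::vector<uint8_t> decompress(const std::vector<uint8_t>& compressed) {
    return compressed;
}

// Stand-in for the processing unit executing a decompressed segment.
void execute(const std::vector<uint8_t>& instructions) {
    std::printf("executing %zu instruction bytes\n", instructions.size());
}

int main() {
    // "Memory" holding the compressed program: segment 0 invokes 1, 1 invokes 2.
    std::vector<CompressedSegment> memory = {
        {1, {0x10, 0x11}},
        {2, {0x20}},
        {-1, {0x30, 0x31, 0x32}},
    };

    int current = 0;
    std::vector<uint8_t> buffer = decompress(memory[current].payload);
    while (current != -1) {
        // While the current segment is (conceptually) executing, the
        // decompressing unit reads the compressed segment named in the
        // header and prepares it in the buffer for later use.
        int next = memory[current].next_segment;
        std::vector<uint8_t> prefetched;
        if (next != -1) prefetched = decompress(memory[next].payload);

        execute(buffer);  // processing unit runs the current segment
        buffer = std::move(prefetched);
        current = next;
    }
    return 0;
}
```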
    • 8. Patent Application
    • Branch Prediction Mechanisms Using Multiple Hash Functions
    • US20090265533A1
    • 2009-10-22
    • US12493768
    • 2009-06-29
    • Robert E. Cypher; Stevan A. Vlaovic
    • G06F9/38
    • G06F9/3846; G06F9/3848
    • In one embodiment, the branch prediction mechanism includes a first storage including a first plurality of locations for storing a first set of partial prediction information. The branch prediction mechanism also includes a second storage including a second plurality of locations for storing a second set of partial prediction information. Further, the branch prediction mechanism includes a control unit that performs a first hash function on input branch information to generate a first index for accessing a selected location within the first storage. The control unit also performs a second hash function on the input branch information to generate a second index for accessing a selected location within the second storage. Lastly, the control unit further provides a prediction value based on corresponding partial prediction information in the selected locations of the first and the second storages.
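A minimal sketch of the two-storage predictor described in the abstract above: two different hash functions index two tables of partial prediction information (2-bit counters here), and the prediction combines both entries. The table size, the particular hash functions, and the combining rule are illustrative choices, not those of the application.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Two storages of partial prediction information (2-bit counters here),
// indexed by two different hash functions of the incoming branch information.
constexpr std::size_t kEntries = 1024;
std::array<uint8_t, kEntries> table1{};
std::array<uint8_t, kEntries> table2{};

// First hash: fold the branch PC with the global history.
std::size_t hash1(uint64_t pc, uint64_t history) {
    return (pc ^ history) % kEntries;
}

// Second, different hash: rotate the history before folding it in, so the
// two storages alias different sets of branches.
std::size_t hash2(uint64_t pc, uint64_t history) {
    uint64_t rotated = (history << 7) | (history >> 57);
    return ((pc >> 2) ^ rotated) % kEntries;
}

// Combine the two partial entries into one taken / not-taken prediction.
bool predict(uint64_t pc, uint64_t history) {
    uint8_t c1 = table1[hash1(pc, history)];
    uint8_t c2 = table2[hash2(pc, history)];
    return c1 + c2 >= 4;  // each counter saturates at 3
}

// Saturating update applied to one partial entry.
void bump(uint8_t& counter, bool taken) {
    if (taken && counter < 3) ++counter;
    if (!taken && counter > 0) --counter;
}

// Update both partial entries with the resolved branch outcome.
void update(uint64_t pc, uint64_t history, bool taken) {
    bump(table1[hash1(pc, history)], taken);
    bump(table2[hash2(pc, history)], taken);
}

int main() {
    uint64_t pc = 0x400A20, history = 0b1011;
    update(pc, history, true);
    update(pc, history, true);
    std::printf("prediction: %s\n", predict(pc, history) ? "taken" : "not taken");
    return 0;
}
```

Using two different hashes means two branches that collide in one table are unlikely to collide in the other, so the combined prediction degrades gracefully under aliasing.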
    • 9. Granted Patent
    • Speculative execution for java hardware accelerator
    • US07243350B2
    • 2007-07-10
    • US10259704
    • 2002-09-27
    • Menno Menasshe Lindwer
    • G06F9/455; G06F9/30; G06F9/40; G06F15/10; G06F7/38; G06F9/00; G06F9/44
    • G06F9/3846; G06F9/30174
    • Conditional branch bytecodes are processed by a Virtual Machine Interpreter (VMI) hardware accelerator that utilizes a branch prediction scheme to determine whether to speculatively process bytecodes while waiting for the CPU to return a condition control variable. The VMI assumes the branch condition will be fulfilled if a conditional branch bytecode calls for a backward jump and that the branch condition will not be fulfilled if a conditional branch bytecode calls for a forward jump. Alternatively, the VMI makes an assumption only if a conditional branch bytecode calls for a backward jump or the VMI assumes that the branch condition will be fulfilled whenever it processes a conditional branch bytecode. The VMI only speculatively processes bytecodes that are easily reversible, and suspends speculative processing of bytecodes upon encountering a bytecode that is not easily reversible. If a VMI assumption is invalidated, any speculatively processed bytecodes are reversed.
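A minimal sketch of the prediction heuristic and speculation limit described in the abstract above: backward conditional jumps are assumed taken, forward jumps not taken, and speculative processing stops at the first bytecode that is not easily reversible. The bytecode encoding and the `easily_reversible` classification are illustrative assumptions, not the VMI's actual rules.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal model of a conditional branch bytecode: the signed offset tells
// the accelerator whether the jump is backward (typically a loop) or forward.
struct ConditionalBranch {
    int32_t offset;  // negative: backward jump, positive: forward jump
};

// Heuristic from the abstract: assume a backward jump's condition will be
// fulfilled (taken) and a forward jump's will not.
bool predict_taken(const ConditionalBranch& b) {
    return b.offset < 0;
}

// Illustrative classification of "easily reversible" bytecodes; the real
// set is not enumerated in the abstract.
bool easily_reversible(uint8_t bytecode) {
    return bytecode < 0x60;  // assumption: simple loads and stack pushes
}

// Speculatively process bytecodes on the predicted path until one that is
// not easily reversible appears, then suspend speculation. The count is
// returned so the processed bytecodes can be reversed if the CPU later
// returns a condition value that invalidates the prediction.
std::size_t speculate(const std::vector<uint8_t>& predicted_path) {
    std::size_t processed = 0;
    for (uint8_t bc : predicted_path) {
        if (!easily_reversible(bc)) break;  // suspend speculation here
        ++processed;                        // process one reversible bytecode
    }
    return processed;
}

int main() {
    ConditionalBranch loop_back{-24}, early_exit{+8};
    std::printf("backward jump predicted taken: %d\n", predict_taken(loop_back));
    std::printf("forward jump predicted taken:  %d\n", predict_taken(early_exit));

    std::vector<uint8_t> path = {0x10, 0x15, 0x7A, 0x11};
    std::printf("speculatively processed %zu bytecodes before suspending\n",
                speculate(path));
    return 0;
}
```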