    • 5. Invention grant
    • Title: Future execution prefetching technique and architecture
    • Publication number: US07730263B2
    • Publication date: 2010-06-01
    • Application number: US11335829
    • Filing date: 2006-01-20
    • Applicant(s): Martin Burtscher, Ilya Ganusov
    • Inventor(s): Martin Burtscher, Ilya Ganusov
    • IPC: G06F12/00, G06F13/00, G06F13/28
    • CPC: G06F9/3851, G06F9/30047, G06F9/3017, G06F9/3455, G06F9/3832, G06F12/0862
    • Abstract: A prefetching technique referred to as future execution (FE) dynamically creates a prefetching thread for each active thread in a processor by simply sending a copy of all committed, register-writing instructions in a primary thread to an otherwise idle processor. On the way to the second processor, a value predictor replaces each predictable instruction with a load immediate instruction, where the immediate is the predicted result that the instruction is likely to produce during its nth next dynamic execution. Executing this modified instruction stream (i.e., the prefetching thread) in another processor allows computation of the future results of the instructions that are not directly predictable. This causes the issuance of prefetches into the shared memory hierarchy, thereby reducing the primary thread's memory access time and speeding up the primary thread's execution.
    • 6. Invention application (published)
    • Title: Future execution prefetching technique and architecture
    • Publication number: US20070174555A1
    • Publication date: 2007-07-26
    • Application number: US11335829
    • Filing date: 2006-01-20
    • Applicant(s): Martin Burtscher, Ilya Ganusov
    • Inventor(s): Martin Burtscher, Ilya Ganusov
    • IPC: G06F12/00
    • CPC: G06F9/3851, G06F9/30047, G06F9/3017, G06F9/3455, G06F9/3832, G06F12/0862
    • Abstract: Identical to that of the granted patent US07730263B2 above. (A simplified code sketch of the described mechanism follows the listing.)
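The abstracts above describe the future execution (FE) mechanism only in prose. The C sketch below is purely illustrative and not taken from the patent: it shows the core idea under simplifying assumptions, with a tiny stride-based predictor standing in for the patent's value predictor, the "otherwise idle processor" collapsed into a helper code path that shares the same cache, and all names (Insn, train_and_check, predict_nth_next, N_AHEAD) hypothetical.

```c
/*
 * Minimal sketch of the future execution (FE) idea, NOT the patented
 * implementation: committed, register-writing instructions are mirrored
 * into a helper stream, a simple stride predictor replaces predictable
 * results with a load-immediate of the predicted n-th next value, and
 * executing the rewritten stream yields future load addresses to prefetch.
 */
#include <stdio.h>
#include <stdint.h>

#define N_AHEAD   4        /* prefetch distance: the 4th next dynamic execution */
#define ARRAY_LEN 1024

/* Per-register predictor state: last committed value and observed stride. */
static int64_t last[8], stride[8];
static int seen[8];

/* Record a committed result; return 1 if the register shows a stable stride
 * (i.e., its producing instruction is "predictable" in the FE sense). */
static int train_and_check(int reg, int64_t value) {
    int64_t new_stride = value - last[reg];
    int predictable = seen[reg] >= 2 && new_stride == stride[reg];
    stride[reg] = new_stride;
    last[reg]   = value;
    if (seen[reg] < 2) seen[reg]++;
    return predictable;
}

/* Predicted result of the n-th next dynamic execution of the producer. */
static int64_t predict_nth_next(int reg, int n) {
    return last[reg] + (int64_t)n * stride[reg];
}

int main(void) {
    static int64_t data[ARRAY_LEN];
    int64_t regs[8] = {0};
    int64_t sum = 0;

    for (int i = 0; i < ARRAY_LEN; i++) data[i] = i;

    /* Primary thread: a strided loop. reg0 produces the address, reg1 the load. */
    for (int i = 0; i < ARRAY_LEN; i++) {
        /* ---- primary execution -------------------------------------- */
        regs[0] = (int64_t)i * 8;            /* address-producing instruction */
        regs[1] = data[regs[0] / 8];         /* dependent load                */
        sum += regs[1];

        /* ---- future-execution helper stream (conceptually running on a
         *      second core that shares the cache hierarchy) ------------ */
        if (train_and_check(0, regs[0])) {
            /* The value predictor rewrites the address-producing instruction
             * into a load-immediate of its predicted N_AHEAD-th next result. */
            int64_t future_addr = predict_nth_next(0, N_AHEAD);
            /* The dependent load in the rewritten stream then acts as a
             * prefetch, warming the cache before the primary thread arrives. */
            if (future_addr / 8 < ARRAY_LEN)
                __builtin_prefetch(&data[future_addr / 8], 0, 1);
        }
    }

    printf("sum = %lld\n", (long long)sum);
    return 0;
}
```

In the patent's setting the rewritten instruction stream runs on a separate, otherwise idle processor and its prefetches land in the shared memory hierarchy; the sketch folds both roles into one thread only to stay self-contained and compilable with GCC or Clang (which provide __builtin_prefetch).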