    • 4. Published application
      Title: Parallel runtime execution on multiple processors
      Publication no.: US20130055272A1 (published 2013-02-28)
      Application no.: US13597119 (filed 2012-08-28)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F9/46
      CPC: G06F9/445; G06F8/41; G06F8/447; G06F9/44542; G06F9/4843; G06F9/5044; G06F9/541
      Abstract: A method and an apparatus are described that schedule a plurality of executables in a schedule queue for concurrent execution on one or more physical compute devices, such as CPUs or GPUs. One or more executables are compiled online from a source that has an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to the scheduled executables are determined in order to select an executable to be executed by a plurality of threads concurrently on more than one of the physical compute devices. A thread initialized for executing an executable on a GPU of the physical compute devices is initialized for execution on a CPU of the physical compute devices if the GPU is busy with graphics processing threads.
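The scheduling behavior this abstract describes — dependency-ordered dispatch from a queue, with a fallback from GPU to CPU when the GPU is occupied by graphics threads — can be sketched as follows. This is a minimal hypothetical illustration, not the patented implementation; the function `schedule`, its arguments, and the device names are invented for the example.

```python
from collections import deque

def schedule(tasks, deps, gpu_busy):
    """Dispatch queued executables in dependency order.

    tasks: list of task names, in queue order.
    deps: dict mapping a task to the set of tasks it depends on.
    gpu_busy: True when the GPU is occupied by graphics processing threads.
    Returns a list of (task, device) dispatch decisions in execution order.
    Assumes the dependency graph is acyclic.
    """
    done, order, queue = set(), [], deque(tasks)
    while queue:
        task = queue.popleft()
        if deps.get(task, set()) - done:
            # Unmet dependencies: push the task back and try the next one.
            queue.append(task)
            continue
        # Fall back to the CPU when the GPU is busy, as in the abstract.
        device = "cpu" if gpu_busy else "gpu"
        order.append((task, device))
        done.add(task)
    return order
```

For example, a task queued ahead of its prerequisite is deferred until the prerequisite has run, and both land on the CPU while the GPU is busy.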
    • 5. Granted patent
      Title: Shared stream memory on multiple processors
      Publication no.: US08108633B2 (granted 2012-01-31)
      Application no.: US11800256 (filed 2007-05-03)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F12/00
      CPC: G06F9/5016; G06F9/5044
      Abstract: A method and an apparatus are described that allocate a stream memory and/or a local memory for a variable in an executable loaded from a host processor to a compute processor, according to whether the compute processor supports a storage capability. The compute processor may be a graphics processing unit (GPU) or a central processing unit (CPU). Alternatively, an application running on a host processor configures storage capabilities of a compute processor, such as a CPU or GPU, to determine a memory location for accessing a variable in an executable executed by a plurality of threads on the compute processor. The configuration and allocation are based on API calls in the host processor.
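The placement decision in this abstract — choosing local versus stream memory for a variable based on the compute processor's reported storage capability — can be sketched as a small capability check. This is a hypothetical sketch; `place_variable` and the capability field names are invented for illustration and do not come from the patent.

```python
def place_variable(size_bytes, device_caps):
    """Pick a memory region for a kernel variable.

    size_bytes: size of the variable to allocate.
    device_caps: dict describing the compute device, assumed to carry
        'has_local_memory' (bool) and 'local_memory_bytes' (int).
    Returns 'local' when the device advertises a local store large enough
    for the variable, otherwise 'stream' (shared/global memory).
    """
    if device_caps.get("has_local_memory") and size_bytes <= device_caps["local_memory_bytes"]:
        return "local"   # fast on-chip store, as on many GPUs
    return "stream"      # fall back to stream (shared) memory
```

In the abstract's terms, the host application would query these capabilities through API calls before loading the executable.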
    • 6. Granted patent
      Title: Data parallel computing on multiple processors
      Publication no.: US09207971B2 (granted 2015-12-08)
      Application no.: US13614975 (filed 2012-09-13)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F9/54; G06F9/48; G06F9/50
      CPC: G06F9/5044; G06F9/4843; G06F2209/5018
      Abstract: A method and an apparatus are described that allocate one or more physical compute devices, such as CPUs or GPUs, attached to a host processing unit running an application, for executing one or more threads of the application. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
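The allocation step in this abstract — matching attached compute devices against a capability requirement declared by the application and associating an identifier with the resulting group — can be sketched as a simple filter. This is a hypothetical illustration; `allocate_devices`, the `gflops` capability field, and the tuple-of-ids identifier are invented for the example.

```python
def allocate_devices(devices, min_gflops):
    """Allocate physical compute devices meeting a capability requirement.

    devices: list of dicts like {'id': 'gpu0', 'type': 'gpu', 'gflops': 900}
        describing CPUs/GPUs attached to the host.
    min_gflops: the processing capability requirement the application declares.
    Returns a compute-device identifier (here, a tuple of matching device ids)
    that the runtime can later use to schedule the executable, or None when
    no attached device satisfies the requirement.
    """
    group = tuple(d["id"] for d in devices if d["gflops"] >= min_gflops)
    return group or None
```

The returned identifier stands in for the abstract's "compute device identifier": subsequent scheduling calls would refer to the allocated group through it rather than through raw device handles.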
    • 7. Granted patent
      Title: Data parallel computing on multiple processors
      Publication no.: US08276164B2 (granted 2012-09-25)
      Application no.: US11800185 (filed 2007-05-03)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F9/06
      CPC: G06F9/5044; G06F9/4843; G06F2209/5018
      Abstract: A method and an apparatus are described that allocate one or more physical compute devices, such as CPUs or GPUs, attached to a host processing unit running an application, for executing one or more threads of the application. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
    • 8. Granted patent
      Title: Parallel runtime execution on multiple processors
      Publication no.: US08286196B2 (granted 2012-10-09)
      Application no.: US11800319 (filed 2007-05-03)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F9/54; G06F9/46
      CPC: G06F9/445; G06F8/41; G06F8/447; G06F9/44542; G06F9/4843; G06F9/5044; G06F9/541
      Abstract: A method and an apparatus are described that schedule a plurality of executables in a schedule queue for concurrent execution on one or more physical compute devices, such as CPUs or GPUs. One or more executables are compiled online from a source that has an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to the scheduled executables are determined in order to select an executable to be executed by a plurality of threads concurrently on more than one of the physical compute devices. A thread initialized for executing an executable on a GPU of the physical compute devices is initialized for execution on a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables on a plurality of physical compute devices, including the existing executables and executables compiled online from the sources.
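The API-library mechanism added in this abstract — each API function stores both precompiled executables and its source, so a device type without a stored binary can be covered by compiling the source online — can be sketched as a cache with a compile-on-miss path. This is a hypothetical sketch; the class `ApiLibrary`, its method names, and the string-valued "binaries" are invented for illustration.

```python
class ApiLibrary:
    """Library mapping an API function to its source and per-device binaries."""

    def __init__(self):
        # name -> {"source": str, "binaries": {device_type: executable}}
        self._entries = {}

    def register(self, name, source, binaries=None):
        """Store the source and any existing precompiled executables."""
        self._entries[name] = {"source": source, "binaries": dict(binaries or {})}

    def get_executable(self, name, device_type, compiler):
        """Return the executable for device_type, compiling online on a miss.

        compiler: callable (source, device_type) -> executable, standing in
        for the online compiler the abstract describes.
        """
        entry = self._entries[name]
        if device_type not in entry["binaries"]:
            entry["binaries"][device_type] = compiler(entry["source"], device_type)
        return entry["binaries"][device_type]
```

A stored binary is returned as-is, while a request for an uncovered device type triggers a single online compilation whose result is cached for later calls.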
    • 9. Published application
      Title: Parallel runtime execution on multiple processors
      Publication no.: US20080276262A1 (published 2008-11-06)
      Application no.: US11800319 (filed 2007-05-03)
      Inventors: Aaftab Munshi; Jeremy Sandmel
      IPC: G06F9/46; G06F9/45
      CPC: G06F9/445; G06F8/41; G06F8/447; G06F9/44542; G06F9/4843; G06F9/5044; G06F9/541
      Abstract: A method and an apparatus are described that schedule a plurality of executables in a schedule queue for concurrent execution on one or more physical compute devices, such as CPUs or GPUs. One or more executables are compiled online from a source that has an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to the scheduled executables are determined in order to select an executable to be executed by a plurality of threads concurrently on more than one of the physical compute devices. A thread initialized for executing an executable on a GPU of the physical compute devices is initialized for execution on a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables on a plurality of physical compute devices, including the existing executables and executables compiled online from the sources.