    • 1. Patent Application
    • PARTITIONABLE ACCOUNTING OF MEMORY UTILIZATION
    • Publication No.: US20080189502A1
    • Publication Date: 2008-08-07
    • Application No.: US11670412
    • Filing Date: 2007-02-01
    • Inventors: Blake A. Jones; George R. Cameron; Eric E. Lowe
    • IPC: G06F12/00
    • CPC: G06F12/023; G06F9/5016; G06F12/12; G06F2209/504; Y02D10/22
    • Managing physical memory for one or more processes with both a minimum and a maximum amount of physical memory. Memory sets are created, each specifying a number of credits. The total number of credits specified by all memory sets is equal to the total number of pages in physical memory. One or more processes are bound to a memory set. All of the processes bound to a memory set are collectively referred to as the workload of the memory set. Each physical page is accounted for to ensure that each workload can utilize at least as many physical pages as the number of credits in its memory set. Additionally, a workload is permitted to use physical pages that are being explicitly shared by workloads of other memory sets. Accordingly, a workload with both a minimum and a maximum amount of physical memory is specified by its memory set.
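The credit-based accounting described in the abstract above can be sketched in a few lines of Python. This is a hedged illustration, not the patented implementation: `MemorySet`, `TOTAL_PAGES`, and the checks below are invented names standing in for the abstract's memory sets, credits, and per-page accounting.

```python
# Hypothetical sketch of credit-based memory-set accounting: every
# physical page is covered by exactly one memory set's credits, so each
# workload is guaranteed at least `credits` pages while also capped by
# its set. Names and numbers are illustrative, not from the patent.

class MemorySet:
    def __init__(self, name, credits):
        self.name = name
        self.credits = credits      # guaranteed minimum (in pages)
        self.pages_used = 0

    def can_allocate(self):
        # A workload may always use up to its own credits.
        return self.pages_used < self.credits

    def allocate_page(self):
        if not self.can_allocate():
            raise MemoryError(f"{self.name}: credit limit reached")
        self.pages_used += 1

TOTAL_PAGES = 100  # total pages in (toy) physical memory

def create_sets(spec):
    """spec maps set name -> credits; credits must cover all pages."""
    assert sum(spec.values()) == TOTAL_PAGES, \
        "credits across all memory sets must equal total physical pages"
    return {name: MemorySet(name, c) for name, c in spec.items()}

sets = create_sets({"db": 60, "web": 40})
sets["db"].allocate_page()
```

The invariant enforced in `create_sets` mirrors the abstract's statement that the credits of all memory sets sum to the total page count; sharing pages across workloads is elided here.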
    • 2. Patent Application
    • PHYSICAL MEMORY USAGE PREDICTION
    • Publication No.: US20130290669A1
    • Publication Date: 2013-10-31
    • Application No.: US13460681
    • Filing Date: 2012-04-30
    • Inventors: Eric E. Lowe; Blake A. Jones; Jonathan William Adams
    • IPC: G06F12/10
    • CPC: G06F12/0223
    • In general, in one aspect, the invention relates to a system that includes memory and a prediction subsystem. The memory includes a first memgroup and a second memgroup, wherein the first memgroup comprises a first physical page and a second physical page, wherein the first physical page is a first subtype, and wherein the second physical page is a second subtype. The prediction subsystem is configured to obtain a status value indicating an amount of freed physical pages on the memory, store the status value in a sample buffer comprising a plurality of previous status values, determine, using the status value and the plurality of previous status values, a deficiency subtype state for the first subtype based on an anticipated need for the first subtype on the memory, and instruct, based on the determination, an allocation subsystem to coalesce the second physical page to the first subtype.
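The prediction loop in the abstract above can be modeled as a small sketch: keep recent free-page counts in a sample buffer, flag a subtype as deficient when the trend predicts a shortfall, and instruct the allocator to coalesce pages into it. The class names, window size, and low-water heuristic are all invented for illustration.

```python
from collections import deque

# Hedged sketch of the patent's prediction subsystem. The averaging
# threshold below is an invented stand-in for the abstract's
# "anticipated need" determination.

class PredictionSubsystem:
    def __init__(self, window=4, low_water=10):
        self.samples = deque(maxlen=window)  # sample buffer of status values
        self.low_water = low_water

    def record(self, freed_pages):
        # Status value: amount of freed physical pages observed.
        self.samples.append(freed_pages)

    def deficient(self):
        # Simple heuristic: a recent average below the low-water mark
        # means the subtype is anticipated to run short.
        if not self.samples:
            return False
        return sum(self.samples) / len(self.samples) < self.low_water

def allocation_step(pred, coalesce):
    """If the predictor flags a deficiency, instruct the allocation
    subsystem (here just a callback) to coalesce pages into the
    deficient subtype."""
    if pred.deficient():
        coalesce()

pred = PredictionSubsystem()
for freed in (12, 8, 5, 3):        # free-page counts trending down
    pred.record(freed)
actions = []
allocation_step(pred, lambda: actions.append("coalesce"))
```

With the downward trend above, the average (7) falls below the low-water mark, so the allocator is asked to coalesce.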
    • 3. Granted Patent
    • Partitionable accounting of memory utilization
    • Publication No.: US07873801B2
    • Publication Date: 2011-01-18
    • Application No.: US11670412
    • Filing Date: 2007-02-01
    • Inventors: Blake A. Jones; George R. Cameron; Eric E. Lowe
    • IPC: G06F12/00
    • CPC: G06F12/023; G06F9/5016; G06F12/12; G06F2209/504; Y02D10/22
    • Managing physical memory for one or more processes with both a minimum and a maximum amount of physical memory. Memory sets are created, each specifying a number of credits. The total number of credits specified by all memory sets is equal to the total number of pages in physical memory. One or more processes are bound to a memory set. All of the processes bound to a memory set are collectively referred to as the workload of the memory set. Each physical page is accounted for to ensure that each workload can utilize at least as many physical pages as the number of credits in its memory set. Additionally, a workload is permitted to use physical pages that are being explicitly shared by workloads of other memory sets. Accordingly, a workload with both a minimum and a maximum amount of physical memory is specified by its memory set.
    • 4. Patent Application
    • Relocation of active DMA pages
    • Publication No.: US20080005495A1
    • Publication Date: 2008-01-03
    • Application No.: US11451785
    • Filing Date: 2006-06-12
    • Inventors: Eric E. Lowe; Wesley Shao
    • IPC: G06F12/14
    • CPC: G06F12/1081
    • According to one embodiment of the invention, a technique is provided for facilitating the relocation of data from a source page to a destination page in a computing system in which I/O devices may conduct DVMA transactions via an IOMMU. Before the relocation, it is determined whether any devices potentially are accessing the source page. If it is determined that a device potentially is accessing the source page, then the IOMMU's device driver (“bus nexus”) “suspends” the bus. The bus nexus allows any pending memory transactions to finish. While the bus is suspended, the kernel moves the contents of the source page to the destination page. After the kernel has moved the contents, the IOMMU's TLB is updated so that the virtual address that was mapped to the source page's physical address is mapped to the destination page's physical address. The bus nexus “unsuspends” the bus.
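The suspend/drain/copy/remap/unsuspend sequence in the abstract above can be illustrated with a toy model. All names here (`IommuTlb`, `relocate`, the `bus` event log) are hypothetical stand-ins; real bus-nexus code operates on hardware and driver state, not Python dicts.

```python
# Illustrative sequence (not driver code) of the relocation protocol:
# suspend the bus if a device may be accessing the source page, let
# pending DMA transactions drain, copy the page, remap the IOMMU TLB
# entry, then unsuspend the bus.

class IommuTlb:
    def __init__(self):
        self.map = {}               # DVMA virtual addr -> physical addr

def relocate(tlb, vaddr, src_pa, dst_pa, device_active, bus):
    suspended = device_active(src_pa)
    if suspended:
        bus.append("suspend")       # bus nexus suspends the bus
        bus.append("drain")         # pending memory transactions finish
    # Kernel copies the source page to the destination page (elided),
    # then the TLB mapping is updated to point at the destination:
    tlb.map[vaddr] = dst_pa
    if suspended:
        bus.append("unsuspend")     # bus nexus unsuspends the bus

tlb = IommuTlb()
tlb.map[0x1000] = 0xA000            # vaddr currently maps to source page
log = []
relocate(tlb, 0x1000, 0xA000, 0xB000, device_active=lambda pa: True, bus=log)
```

After the call, the same virtual address resolves to the destination page, and the bus saw a suspend/drain/unsuspend bracket around the move.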
    • 5. Granted Patent
    • Method and apparatus for memory management in a multi-processor computer system
    • Publication No.: US07188229B2
    • Publication Date: 2007-03-06
    • Application No.: US10769586
    • Filing Date: 2004-01-30
    • Inventor: Eric E. Lowe
    • IPC: G06F12/00
    • CPC: G06F12/08; G06F12/1027; G06F12/1036; G06F12/1072; G06F2212/682; G06F2212/684
    • Improved techniques and systems for accommodating TLB shootdown events in multi-processor computer systems are disclosed. A memory management unit (MMU) having a TLB miss handler and miss exception handler is provided. The MMU receives instructions relative to a virtual address. The TLB is searched for the virtual address; if the virtual address is not found in the TLB, secondary memory assets are searched for a TTE that corresponds to the virtual address and its associated context identifier. The context identifier is tested to determine if the TTE is available. Where the TTE is available, the TLB and secondary memory assets are updated as necessary and the method initiates memory access instructions. Where the TTE is unavailable, the method either resolves the unavailability or waits until it is resolved, and then initiates memory access instructions, thereby enabling the desired virtual address information to be accessed.
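The miss-handling path described above can be sketched as a toy lookup. The `translate` function, the `UNAVAILABLE` marker, and the dict-based TLB/TTE tables are invented simplifications of the MMU structures in the abstract, shown only to make the control flow concrete.

```python
# Hypothetical sketch of the lookup path: probe the TLB, fall back to
# the secondary-memory TTE table, and honor an "unavailable" context
# marker of the kind used during TLB shootdown.

UNAVAILABLE = object()              # marks a TTE locked by a shootdown

def translate(vaddr, ctx, tlb, tte_table, resolve):
    key = (vaddr, ctx)
    if key in tlb:                  # TLB hit: done
        return tlb[key]
    tte = tte_table.get(key)        # search secondary memory assets
    if tte is None:
        raise KeyError("no translation for %#x" % vaddr)
    if tte["ctx"] is UNAVAILABLE:   # shootdown in progress:
        resolve(tte)                # resolve or wait, then proceed
    tlb[key] = tte["pa"]            # refill the TLB as necessary
    return tte["pa"]                # initiate memory access

tlb = {}
ttes = {(0x2000, 1): {"ctx": 1, "pa": 0xC000}}
pa = translate(0x2000, 1, tlb, ttes, resolve=lambda t: None)
```

The first call misses the TLB, finds an available TTE, and refills the TLB so a repeat lookup hits directly.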
    • 6. Granted Patent
    • Physical memory usage prediction
    • Publication No.: US09367439B2
    • Publication Date: 2016-06-14
    • Application No.: US13460681
    • Filing Date: 2012-04-30
    • Inventors: Eric E. Lowe; Blake A. Jones; Jonathan William Adams
    • IPC: G06F12/00; G06F12/02
    • CPC: G06F12/0223
    • In general, in one aspect, the invention relates to a system that includes memory and a prediction subsystem. The memory includes a first memgroup and a second memgroup, wherein the first memgroup comprises a first physical page and a second physical page, wherein the first physical page is a first subtype, and wherein the second physical page is a second subtype. The prediction subsystem is configured to obtain a status value indicating an amount of freed physical pages on the memory, store the status value in a sample buffer comprising a plurality of previous status values, determine, using the status value and the plurality of previous status values, a deficiency subtype state for the first subtype based on an anticipated need for the first subtype on the memory, and instruct, based on the determination, an allocation subsystem to coalesce the second physical page to the first subtype.
    • 7. Granted Patent
    • Scalable resource allocation
    • Publication No.: US08127295B1
    • Publication Date: 2012-02-28
    • Application No.: US11833907
    • Filing Date: 2007-08-03
    • Inventors: Blake A. Jones; George R. Cameron; Eric E. Lowe
    • IPC: G06F9/46
    • CPC: G06F9/5011; G06F2209/5011
    • A device, system, and method are directed towards managing limited resources in a computer system with multiple processing units. Each processing unit has a corresponding bucket. Each thread executing on a processing unit has a corresponding wallet. Buckets and wallets contain credits corresponding to units of the limited resource. When a request for the resource is made, mechanisms of the invention attempt to fulfill the request by looking in a local wallet, a local bucket, or non-local buckets. In a resource shortage situation, credits may be moved to a primary bucket. A load balancing mechanism may distribute credits among buckets, or move credits from wallets to buckets.
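The wallet-then-local-bucket-then-non-local-bucket search order described above can be modeled directly. This is a minimal sketch under invented names; the patent's mechanism also covers shortage handling (moving credits to a primary bucket) and load balancing, which are elided here.

```python
# Hypothetical model of per-thread wallets and per-CPU buckets holding
# credits for a limited resource. A request is served from the local
# wallet first, then the local CPU's bucket, then any non-local bucket.

def acquire(wallet, local, buckets):
    if wallet["credits"] > 0:                # 1. thread's local wallet
        wallet["credits"] -= 1
        return True
    if buckets[local] > 0:                   # 2. local CPU's bucket
        buckets[local] -= 1
        return True
    for cpu, avail in enumerate(buckets):    # 3. non-local buckets
        if cpu != local and avail > 0:
            buckets[cpu] -= 1
            return True
    return False                             # resource exhausted

buckets = [0, 2]          # per-CPU buckets of credits
wallet = {"credits": 1}   # thread running on CPU 0
ok1 = acquire(wallet, 0, buckets)  # served from the thread's wallet
ok2 = acquire(wallet, 0, buckets)  # wallet and local bucket empty:
                                   # takes a credit from CPU 1's bucket
```

Keeping the fast path in the thread-local wallet is what makes the scheme scalable: most acquisitions touch no shared state.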
    • 8. Granted Patent
    • Relocation of active DMA pages
    • Publication No.: US07721068B2
    • Publication Date: 2010-05-18
    • Application No.: US11451785
    • Filing Date: 2006-06-12
    • Inventors: Eric E. Lowe; Wesley Shao
    • IPC: G06F9/26; G06F13/00
    • CPC: G06F12/1081
    • According to one embodiment of the invention, a technique is provided for facilitating the relocation of data from a source page to a destination page in a computing system in which I/O devices may conduct DVMA transactions via an IOMMU. Before the relocation, it is determined whether any devices potentially are accessing the source page. If it is determined that a device potentially is accessing the source page, then the IOMMU's device driver (“bus nexus”) “suspends” the bus. The bus nexus allows any pending memory transactions to finish. While the bus is suspended, the kernel moves the contents of the source page to the destination page. After the kernel has moved the contents, the IOMMU's TLB is updated so that the virtual address that was mapped to the source page's physical address is mapped to the destination page's physical address. The bus nexus “unsuspends” the bus.