    • 2. Invention Application
    • Title: INFINITE MEMORY FABRIC HARDWARE IMPLEMENTATION WITH ROUTER
    • Publication No.: US20220261168A1
    • Publication Date: 2022-08-18
    • Application No.: US17582416
    • Filing Date: 2022-01-24
    • Assignee: Ultrata, LLC
    • Inventors: Steven J. Frank, Larry Reback
    • IPC: G06F3/06, G06F16/22, G06F12/109
    • Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric. The processing node may include a memory module storing and managing one or more memory objects, the one or more memory objects each include at least a first memory and a second memory, wherein: each memory object is created natively within the memory module, and each memory object is accessed using a single memory reference instruction without Input/Output (I/O) instructions; and a router configured to interface between a processor on the memory module and the one or more memory objects; wherein a set of data is stored within the first memory of the memory module; wherein the memory module dynamically determines that at least a portion of the set of data will be transferred from the first memory to the second memory; and wherein, in response to the determination that at least a portion of the set of data will be transferred from the first memory to the second memory, the router is configured to identify the portion to be transferred and to facilitate execution of the transfer.
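The abstract above describes a memory module whose objects are backed by a first and a second memory, where the module decides at run time that part of a data set should move between the two tiers and a router identifies that portion and carries out the transfer. The following is a minimal Python sketch of that control flow only; every name in it (Router, MemoryObject, rebalance, and so on) is invented here for illustration and does not come from the patent or from any Ultrata implementation.

```python
# Illustrative sketch only: all class and method names are hypothetical,
# not taken from the patent text or any real implementation.

class Router:
    """Interfaces between the module's processor and its memory objects."""

    def identify_portion(self, data, predicate):
        # Identify which keys of the data set should be transferred.
        return [key for key, value in data.items() if predicate(key, value)]

    def transfer(self, portion, src, dst):
        # Facilitate execution of the transfer from src memory to dst memory.
        for key in portion:
            dst[key] = src.pop(key)


class MemoryObject:
    """A memory object backed by a first and a second memory tier."""

    def __init__(self, router):
        self.first_memory = {}   # faster tier, where new data lands
        self.second_memory = {}  # second tier
        self.router = router

    def store(self, key, value):
        # New data is placed in the first memory via an ordinary reference;
        # no explicit I/O step is modeled in this sketch.
        self.first_memory[key] = value

    def load(self, key):
        if key in self.first_memory:
            return self.first_memory[key]
        return self.second_memory[key]

    def rebalance(self, should_demote):
        # The module dynamically determines that part of the data set should
        # move to the second memory; the router identifies the portion and
        # executes the transfer.
        portion = self.router.identify_portion(self.first_memory, should_demote)
        self.router.transfer(portion, self.first_memory, self.second_memory)


if __name__ == "__main__":
    obj = MemoryObject(Router())
    for i in range(8):
        obj.store(f"block-{i}", bytes(64))
    # Demote the odd-numbered blocks as a stand-in for a real policy.
    obj.rebalance(lambda key, _value: int(key.split("-")[1]) % 2 == 1)
    print(sorted(obj.first_memory), sorted(obj.second_memory))
```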
    • 10. Granted Invention
    • Title: Infinite memory fabric hardware implementation with memory
    • Publication No.: US11256438B2
    • Publication Date: 2022-02-22
    • Application No.: US16883701
    • Filing Date: 2020-05-26
    • Assignee: Ultrata, LLC
    • Inventors: Steven J. Frank, Larry Reback
    • IPC: G06F3/06
    • Abstract: Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to a hardware-based processing node of an object memory fabric. The processing node may include a memory module storing and managing one or more memory objects, the one or more memory objects each include at least a first memory and a second memory, wherein the first memory has a lower latency than the second memory, and wherein each memory object is created natively within the memory module, and each memory object is accessed using a single memory reference instruction without Input/Output (I/O) instructions, wherein a set of data is stored within the first memory of the memory module; wherein the memory module is configured to receive an indication of a subset of the set of data that is eligible to be transferred between the first memory and the second memory; and wherein the memory module dynamically determines which of the subset of data will be transferred to the second memory based on access patterns associated with the object memory fabric.
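The granted claim differs from the application above in two respects: the first memory is explicitly the lower-latency tier, and the module is first told which subset of the data is eligible to move and then chooses among that subset based on observed access patterns. Below is a small Python sketch of that selection step under assumed details; the class name, the bookkeeping fields, and the least-recently-used policy are all illustrative choices, since the abstract does not specify a concrete policy.

```python
# Illustrative sketch only: names and the LRU-style policy are assumptions,
# not taken from the patent text.
import time


class TieredMemoryModule:
    """First memory is the lower-latency tier; second memory is slower."""

    def __init__(self):
        self.first_memory = {}
        self.second_memory = {}
        self.last_access = {}   # access-pattern bookkeeping per key
        self.eligible = set()   # keys indicated as eligible to be transferred

    def store(self, key, value):
        self.first_memory[key] = value
        self.last_access[key] = time.monotonic()

    def load(self, key):
        self.last_access[key] = time.monotonic()
        if key in self.first_memory:
            return self.first_memory[key]
        return self.second_memory[key]

    def mark_eligible(self, keys):
        # The module receives an indication of the subset of data that is
        # eligible to move between the two memories.
        self.eligible.update(keys)

    def rebalance(self, max_moves):
        # Among the eligible keys still in the first memory, demote the
        # least recently used ones (one possible "access pattern" policy).
        candidates = [k for k in self.eligible if k in self.first_memory]
        candidates.sort(key=lambda k: self.last_access[k])
        for key in candidates[:max_moves]:
            self.second_memory[key] = self.first_memory.pop(key)


if __name__ == "__main__":
    module = TieredMemoryModule()
    for i in range(6):
        module.store(f"obj-{i}", bytes(32))
    module.load("obj-0")                      # keep obj-0 recently used
    module.mark_eligible(["obj-0", "obj-3", "obj-5"])
    module.rebalance(max_moves=2)             # demotes obj-3 and obj-5
    print(sorted(module.first_memory), sorted(module.second_memory))
```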