    • 2. Invention Grant
    • Method and apparatus for flow control in packet-switched computer system
    • Publication No.: US5907485A
    • Grant date: 1999-05-25
    • Application No.: US414875
    • Filing date: 1995-03-31
    • Inventors: William C. Van Loo, Zahir Ebrahim, Satyanarayana Nishtala, Kevin B. Normoyle, Leslie Kohn, Louis F. Coffin, III
    • IPC: G06F9/46, G06F13/24, G05B15/00
    • CPC: G06F9/546, G06F13/24
    • This invention describes a link-by-link flow control method for packet-switched uniprocessor and multiprocessor computer systems that maximizes system resource utilization and throughput, and minimizes system latency. The computer system comprises one or more master interfaces, one or more slave interfaces, and an interconnect system controller which provides a dedicated transaction request queue for each master interface and controls the forwarding of transactions to each slave interface. Each master interface keeps track of the number of requests in its dedicated queue in the system controller, and the system controller keeps track of the number of requests in each slave interface queue. Both the master interface and the system controller know the maximum capacity of the queue immediately downstream, and neither issues more transaction requests than the downstream queue can accommodate. An acknowledgment from the downstream queue indicates to the sender that there is space in it for another transaction. Thus no system resources are wasted trying to send a request to a queue that is already full.
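The credit-counting scheme in the abstract above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the class and method names (`LinkSender`, `on_ack`, etc.) are invented. A sender tracks how many of its requests occupy the downstream queue and refuses to issue more than that queue's known capacity; an acknowledgment frees one slot.

```python
class LinkSender:
    """Tracks occupancy of the queue immediately downstream and never
    issues more requests than that queue can accommodate."""

    def __init__(self, downstream_capacity):
        self.capacity = downstream_capacity
        self.outstanding = 0          # requests sent but not yet acknowledged

    def can_send(self):
        return self.outstanding < self.capacity

    def send(self, request, downstream_queue):
        if not self.can_send():
            return False              # hold the request; no resources wasted
        downstream_queue.append(request)
        self.outstanding += 1
        return True

    def on_ack(self):
        # An acknowledgment from downstream means one slot has freed up.
        self.outstanding -= 1
```

Because the sender checks its local counter before transmitting, a request is never launched toward a full queue, which is the resource-saving property the abstract claims.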
    • 3. Invention Grant
    • Method and apparatus for implementing non-faulting load instruction
    • Publication No.: US5842225A
    • Grant date: 1998-11-24
    • Application No.: US395579
    • Filing date: 1995-02-27
    • Inventor: Leslie Kohn
    • IPC: G06F9/38, G06F9/312, G06F9/32, G06F12/10, G06F12/14
    • CPC: G06F9/30043, G06F12/1027, G06F12/145, G06F12/1458, G06F2212/684
    • A non-fault-only (NFO) bit is included in the translation table entry for each page. If the NFO bit is set, non-faulting loads accessing the page will cause translations to occur. Any other access to the non-fault-only page is an error and will cause the processor to fault. A non-faulting load behaves like a normal load except that it never produces a fault, even when applied to a page with the NFO bit set. The NFO bit in a translation table entry marks a page that is mapped for safe access by non-faulting loads but can still cause a fault on other, normal accesses. The NFO bit indicates which pages are illegal. Selected pages, such as virtual page 0x0, can be mapped in the translation table. Whenever a null pointer is dereferenced by a non-faulting load, a translation lookaside buffer (TLB) hit will occur, and zero will be returned immediately without trapping to software to find the requested page. A second embodiment provides that when the operating system software routine invoked by a TLB miss discovers that a non-faulting load has attempted to access an illegal virtual page that was not previously translated in the translation table, the operating system creates a translation table entry for that virtual page, mapping it to a physical page of all zeros and asserting the NFO bit for that virtual page.
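The NFO-bit semantics described above can be modeled in a few lines. This is a minimal sketch under assumed names (`TranslationEntry`, `load`, `PageFault` are all invented); it shows the three cases: a non-faulting load of an unmapped or NFO page yields zero, while any normal access to an NFO page faults.

```python
class PageFault(Exception):
    pass

ZERO_PAGE = bytes(4096)          # a physical page of all zeros

class TranslationEntry:
    def __init__(self, phys_page, nfo=False):
        self.phys_page = phys_page
        self.nfo = nfo           # non-fault-only: legal only for non-faulting loads

def load(table, vpage, offset, non_faulting=False):
    entry = table.get(vpage)
    if entry is None:
        if non_faulting:
            return 0             # a non-faulting load never traps
        raise PageFault(vpage)
    if entry.nfo and not non_faulting:
        raise PageFault(vpage)   # normal access to an NFO page is an error
    return entry.phys_page[offset]
```

Mapping virtual page 0x0 to `ZERO_PAGE` with the NFO bit set reproduces the null-pointer case from the abstract: a non-faulting dereference hits the translation and returns zero without any software trap.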
    • 4. Invention Grant
    • Method of protecting high definition video signal
    • Publication No.: US06570990B1
    • Grant date: 2003-05-27
    • Application No.: US09192102
    • Filing date: 1998-11-13
    • Inventors: Leslie Kohn, David A. Barr, Didier Le Gall
    • IPC: H04N7/167
    • CPC: H04N11/002, H04N7/1675, H04N7/1696, H04N21/2347, H04N21/4408
    • A system controls reproduction of a video transmission between a transmitter and a receiver. The system includes an encryptor with an offset generator adapted to receive the encrypted frame key and to generate a sequence of pseudo-random values for the color component, and an adder coupled to the offset generator and to the color component signal for providing an encoded color component signal. The system also includes a decryptor with a decryptor offset generator adapted to receive the encrypted frame key and to generate a decryptor pseudo-random value for the color component, and a subtractor coupled to the offset generator and to the color component signal for subtracting the offset signal from the color component signal.
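The add/subtract structure above is easy to demonstrate. The sketch below is only an analogy: a seeded PRNG stands in for the patent's frame-keyed offset generator, and 8-bit modular arithmetic stands in for the adder and subtractor; all function names are invented.

```python
import random

def offsets(frame_key, n):
    """Stand-in for the offset generator: a pseudo-random sequence
    derived deterministically from the frame key."""
    rng = random.Random(frame_key)
    return [rng.randrange(256) for _ in range(n)]

def encode(component, frame_key):
    # Encryptor side: adder combines each sample with its offset.
    ks = offsets(frame_key, len(component))
    return [(s + k) % 256 for s, k in zip(component, ks)]

def decode(component, frame_key):
    # Decryptor side: subtractor removes the same offsets.
    ks = offsets(frame_key, len(component))
    return [(s - k) % 256 for s, k in zip(component, ks)]
```

Because both ends derive the same offset sequence from the shared frame key, subtraction exactly undoes addition and the color component is recovered.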
    • 5. Invention Grant
    • Out of order instruction processing using dual memory banks
    • Publication No.: US6044206A
    • Grant date: 2000-03-28
    • Application No.: US949991
    • Filing date: 1997-10-14
    • Inventor: Leslie Kohn
    • IPC: G06F9/30, G06F9/38
    • CPC: G06F9/3012, G06F9/30087, G06F9/3824, G06F9/3885
    • A process of synchronizing two execution units sharing a common memory with a plurality of memory banks starts by assigning a first memory bank to one of the two execution units. The other memory bank is assigned to the other execution unit. Then a sequence of operations is processed within one of the execution units while another sequence of operations is processed within the other execution unit. When the first execution unit completes a sequence of operations, a synchronizing operation is performed which causes that execution unit to suspend processing if the corresponding sequence of operations in the other execution unit has not been completed. When both execution units have completed their respective sequences of operations, the assignment of memory banks is swapped between the two execution units, thereby preventing erroneous reads and writes.
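The synchronize-then-swap discipline above maps naturally onto a barrier. The sketch below models each execution unit as a thread that works only in its assigned bank, waits at a two-party barrier (the "synchronizing operation"), and then takes the other unit's bank; names like `make_unit` are invented for illustration.

```python
import threading

def make_unit(barrier, banks, start_bank, work, phases):
    """One execution unit: processes its sequence in its own bank, then
    suspends at the barrier until the other unit is also done, after
    which the bank assignments are swapped."""
    def run():
        bank = start_bank
        for phase in range(phases):
            work(banks[bank], phase)   # this phase's sequence of operations
            barrier.wait()             # suspend if the other unit is not done
            bank ^= 1                  # swap: take the other unit's bank
    return threading.Thread(target=run)
```

Because neither unit crosses the barrier until both sequences are complete, a unit can never read or write a bank the other unit is still using, which is the erroneous-access hazard the abstract describes.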
    • 6. Invention Grant
    • Cachability attributes of virtual addresses for optimizing performance of virtually and physically indexed caches in maintaining multiply aliased physical addresses
    • Publication No.: US6006312A
    • Grant date: 1999-12-21
    • Application No.: US391389
    • Filing date: 1995-02-27
    • Inventors: Leslie Kohn, Ken Okin, Dale Greenley
    • IPC: G06F12/08, G06F12/10, G06F12/00
    • CPC: G06F12/1045
    • A separate cacheable-in-virtual-cache attribute bit (CV) is maintained for each page of memory in the translation table maintained by the operating system. The CV bit indicates whether the memory addresses on the page to which the translation table entry refers are cacheable in virtually indexed caches. According to a first embodiment, when there are two or more aliases which are not offset by multiples of the virtual cache size, all of the aliases are made non-cacheable in virtually indexed caches by deasserting the CV bits for all aliases. With regard to the contents of the translation lookaside buffer (TLB), the translations for all aliases may simultaneously coexist in the TLB because no software intervention is required to ensure data coherency between the aliases. According to second and third embodiments of the present invention, when there are two or more aliases which are not offset by multiples of the virtual cache size, only one of those aliases may remain cacheable in virtual caches. For the other aliases, the CV bits for the translation pages containing those aliases are deasserted. The operating system has the responsibility of flushing data from the virtually indexed internal cache before deasserting the CV attribute for a page. The second embodiment allows the newer mapping to a physical address to remain in the first-level cache, while the third embodiment allows the older alias to remain in the first-level cache when a newer alias is mapped.
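The first embodiment's CV-bit policy reduces to a simple alignment check. The sketch below is an interpretation, not the patent's code: the cache size and all names are assumed, and only the first embodiment (deassert CV for every alias when any pair is misaligned) is shown.

```python
VCACHE_SIZE = 16 * 1024   # assumed size of the virtually indexed cache

def virtually_cacheable(alias_vaddrs):
    """Aliases may stay cacheable in virtually indexed caches only if
    every alias is offset from the others by a multiple of the virtual
    cache size, so they all index the same cache line."""
    base = alias_vaddrs[0]
    return all((va - base) % VCACHE_SIZE == 0 for va in alias_vaddrs)

def assign_cv_bits(alias_vaddrs):
    # First embodiment: one misaligned pair deasserts CV for all aliases.
    cv = virtually_cacheable(alias_vaddrs)
    return {va: cv for va in alias_vaddrs}
```

When the check passes, every alias maps to the same virtually indexed line, so no coherency conflict between aliases can arise inside the virtual cache.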
    • 7. Invention Grant
    • Methods and apparatuses for servicing load instructions
    • Publication No.: US5745729A
    • Grant date: 1998-04-28
    • Application No.: US389636
    • Filing date: 1995-02-16
    • Inventors: Dale Greenley, Leslie Kohn, Ming Yeh, Greg Williams
    • IPC: G06F9/312, G06F9/38, G06F12/08
    • CPC: G06F9/30043, G06F12/0853, G06F9/3824, G06F9/3834, G06F9/3875, G06F12/0859
    • A dual-ported tag array of a cache allows the tag array to be accessed simultaneously by miss data of older LOAD instructions being returned during the same cycle that a new LOAD instruction is accessing the tag array to check for a cache hit. Because a load buffer queues LOAD instructions, the cache tags for older LOAD instructions which missed the cache return later, while new LOAD instructions are accessing the tag array to check for cache hits. A method and apparatus for calculating and maintaining a hit bit in a load buffer determine whether or not a newly dispatched LOAD will hit the cache after it has been queued into the load buffer and has waited for all older LOADs to be processed. A load buffer data entry includes the hit bit and all information necessary to process the LOAD instruction and to calculate the hit bits for future LOAD instructions which must be buffered. A method and apparatus for servicing LOAD instructions, in which access to the data array portion of a cache is decoupled from access to the tag array portion, allow delayed access to the data array after a LOAD has been delayed in the load buffer without reaccessing the tag array. A method and apparatus allow accesses to the first-level cache and the second-level cache to occur simultaneously for two separate LOADs in the load buffer.
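The hit-bit idea above can be illustrated with a toy load buffer. This sketch is deliberately simplified: it checks the tag array once at dispatch and carries the result with the queued entry, but it ignores the patent's refinement that older buffered LOADs can change the cache contents before the entry is serviced. All names are invented.

```python
from collections import deque

class LoadBuffer:
    """Queues LOADs with a precomputed hit bit so the tag array need not
    be re-accessed when the data array is finally read."""

    def __init__(self, tag_array):
        self.tag_array = tag_array            # set index -> stored tag
        self.queue = deque()

    def dispatch(self, index, tag):
        hit = self.tag_array.get(index) == tag   # single tag-array access
        self.queue.append((index, tag, hit))

    def service(self, data_array, next_level):
        index, tag, hit = self.queue.popleft()
        if hit:
            return data_array[index]          # decoupled data-array access
        return next_level[(index, tag)]       # miss: fetch from next level
```

The stored hit bit is what decouples the two array accesses: servicing a delayed LOAD touches only the data array (or the next cache level) and never re-reads the tag array.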
    • 9. Invention Grant
    • Method and apparatus for reducing power consumption in a computer network without sacrificing performance
    • Publication No.: US5692197A
    • Grant date: 1997-11-25
    • Application No.: US414879
    • Filing date: 1995-03-31
    • Inventors: Charles E. Narad, Zahir Ebrahim, Satyanarayana Nishtala, William C. Van Loo, Kevin B. Normoyle, Louis F. Coffin, III, Leslie Kohn
    • IPC: G06F1/32, G06F15/16, G06F15/177
    • CPC: G06F1/3209
    • A method and apparatus for actively managing the overall power consumption of a computer network comprising a plurality of interconnected computer systems. Each computer system in turn has one or more modules. Each computer system of the computer network is capable of independently initiating a transition into a power-conserving mode, i.e., a "sleep" state, while keeping its network interface "alive" and fully operational. Subsequently, each computer system can independently transition back into the fully operational, or "awake", state when triggered by either a deterministic or an asynchronous event. As a result, the sleep states of the computer systems are transparent to the computer network. Deterministic events are triggered internally by a computer system, e.g., an internal timer waking the computer system at midnight to perform housekeeping chores such as daily tape backups. Conversely, the sources of asynchronous events are external in nature and include input/output (I/O) activity. The illusion that the entire network is always fully operational is possible because the system controllers, interconnects, and network interfaces of each computer system remain fully operational while selected modules and peripheral devices are powered down. As a result, each computer system conserves power by powering down selected modules, yet can rapidly awake from the sleep state in response to stimuli, accomplishing power conservation without requiring a static shutdown of the computer network, i.e., without sacrificing the overall performance and response of the computer network.
    • 10. Invention Grant
    • Packet switched cache coherent multiprocessor system
    • Publication No.: US5634068A
    • Grant date: 1997-05-27
    • Application No.: US415175
    • Filing date: 1995-03-31
    • Inventors: Satyanarayana Nishtala, Zahir Ebrahim, William C. Van Loo, Kevin Normoyle, Leslie Kohn, Louis F. Coffin, III
    • IPC: G06F12/08, G06F13/00
    • CPC: G06F12/0822
    • A multiprocessor computer system has a multiplicity of sub-systems and a main memory coupled to a system controller. An interconnect module interconnects the main memory and sub-systems in accordance with interconnect control signals received from the system controller. All of the sub-systems include a port that transmits and receives data as data packets of a fixed size. At least two of the sub-systems are data processors, each having a respective cache memory and a respective set of master cache tags (Etags), including one cache tag for each data block stored by the cache memory. The system controller maintains a set of duplicate cache tags (Dtags) for each of the data processors. The data processors each include master cache logic for updating the master cache tags, while the system controller includes logic for updating the duplicate cache tags. Memory transaction request logic simultaneously looks up the duplicate cache tag corresponding to a memory transaction request in each of the sets of duplicate cache tags. It then determines which one of the cache memories and main memory to couple to the requesting data processor, based on the cache states and the address tags stored in the corresponding duplicate cache tags. Duplicate cache update logic simultaneously updates all of the corresponding duplicate cache tags in accordance with predefined cache tag update criteria.
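The Dtag lookup above can be sketched as a source-selection function. Everything here is an assumption for illustration: the line size, set count, and the convention that state `"M"` marks a dirty copy are not specified by the abstract, and the names are invented.

```python
LINE = 64      # assumed cache-line size in bytes
SETS = 128     # assumed number of sets per cache

def split(addr):
    """Decompose an address into (set index, address tag)."""
    return (addr // LINE) % SETS, addr // (LINE * SETS)

def source_for(addr, dtags, requester):
    """System controller's decision: consult every processor's duplicate
    tags (Dtags) for the request and pick the data source: a cache that
    holds a dirty ('M') copy of the block, or else main memory."""
    index, tag = split(addr)
    for cpu, tags in dtags.items():       # dtags: {cpu: {index: (tag, state)}}
        if cpu == requester:
            continue
        if tags.get(index) == (tag, "M"):
            return cpu                    # dirty copy: source from that cache
    return "memory"
```

Because the controller holds all Dtag sets itself, this lookup happens in one place without interrogating each processor's master tags, which is what lets the real hardware check every set simultaneously.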