    • 1. Granted patent
    • System and method for dynamic remote object activation
    • US07793302B2
    • 2010-09-07
    • US10372464
    • 2003-02-21
    • Prasad Peddada, Adam Messinger, Anno R. Langen
    • G06F9/44; G06F9/54
    • G06F9/548
    • A system and a method for dynamic or as-needed activation of Remote Method Invocation (RMI) layer remote objects in response to a client request. Object activation allows the system to clean up or delete currently unused remote objects, and then reactivate them when a client actually needs them. An object implementation can first be created in response to a client request. The client receives a remote reference (remote ref) and an activation identifier (activation id) identifying that particular implementation. The implementation can subsequently be cleaned up or deleted during garbage collection so as to save server resources, or alternatively the object can be reused if the system is set up to maintain a pool of objects. When the client requests the same object at a later point in time, the system activates an object based on the activation ID previously received from the server.
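The activation pattern this abstract describes can be sketched in plain Java. This is a minimal illustration of the idea only, not the patented RMI implementation; all class and method names here are invented for the example, and the real mechanism works on remote references rather than local objects.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of as-needed ("lazy") remote-object activation: the server hands
// out an activation id, may discard the implementation to free resources,
// and re-creates it from a registered factory when the id is used again.
public class ActivationDemo {
    // Maps activation ids to factories that can (re)create the implementation.
    static final Map<String, Supplier<Object>> factories = new HashMap<>();
    // Currently live implementations; entries may be evicted at any time.
    static final Map<String, Object> live = new HashMap<>();

    static String export(String id, Supplier<Object> factory) {
        factories.put(id, factory);
        return id; // stands in for the remote ref + activation id sent to the client
    }

    static Object activate(String id) {
        // Reuse a live object if present, otherwise re-create it on demand.
        return live.computeIfAbsent(id, k -> factories.get(k).get());
    }

    static void evict(String id) {
        live.remove(id); // simulates cleanup during garbage collection
    }

    public static void main(String[] args) {
        String id = export("cart-42", () -> new StringBuilder("cart"));
        Object first = activate(id);
        evict(id);                    // server frees the implementation
        Object second = activate(id); // a later client request reactivates it
        System.out.println(first != second); // prints true: a fresh instance
    }
}
```

The client never notices the eviction: the activation id it already holds is enough for the server to rebuild (or fetch from a pool) an equivalent implementation.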
    • 2. Granted patent
    • Parallel transaction execution with a thread pool
    • US08375359B2
    • 2013-02-12
    • US12643775
    • 2009-12-21
    • Alexander J. Somogyi, Adam Messinger, Anno R. Langen
    • G06F9/44; G06F9/46
    • G06Q10/063
    • A method for using available server threads to process resources and reduce the overall time of performing XA interactions in two-phase commit protocol implemented by the transaction manager. A TM processing XA interactions dispatches interaction commands for multiple resources to a thread manager, which dispatches the commands to idle server threads. In one embodiment, the TM attempts to dispatch all but one of the interaction commands to separate threads. The primary thread then processes the remaining resource command. Any commands relating to dispatch requests that were unable to be dispatched to separate threads due to unavailability are processed by the primary thread. Once the primary server has processed its interaction commands and received a signal indicating the threads receiving dispatch requests have completed their respective processing of dispatched commands, the next group of commands is processed in a similar manner.
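The dispatch pattern in this abstract (pool threads take all but one resource command, the primary thread takes the remainder, and the next group starts only when all have finished) can be sketched with standard `java.util.concurrent` primitives. This is an illustration under assumed names, not the patented transaction-manager code; `prepare` stands in for an XA interaction with one resource manager.

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelPrepareDemo {
    // Records which resources have been prepared (stands in for XA prepare()).
    static final Set<String> prepared = ConcurrentHashMap.newKeySet();

    static void prepare(String resource) {
        prepared.add(resource);
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> resources = List.of("db", "jms", "ldap");
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CountDownLatch done = new CountDownLatch(resources.size() - 1);

        // Dispatch all but one interaction command to idle pool threads.
        for (String r : resources.subList(0, resources.size() - 1)) {
            pool.execute(() -> { prepare(r); done.countDown(); });
        }
        // The primary thread processes the remaining resource command itself.
        prepare(resources.get(resources.size() - 1));
        // Proceed to the next group of commands only once every thread is done.
        done.await();
        pool.shutdown();
    }
}
```

Running the interactions in parallel bounds the duration of the prepare phase by the slowest resource rather than the sum of all of them.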
    • 3. Granted patent
    • Method for transaction processing with parallel execution
    • US07640535B2
    • 2009-12-29
    • US10762944
    • 2004-01-22
    • Alexander J. Somogyi, Adam Messinger, Anno R. Langen
    • G06F9/44; G06F9/46
    • G06Q10/063
    • A method for using available server threads to process resources and reduce the overall time of performing XA interactions in two-phase commit protocol implemented by the transaction manager. A TM processing XA interactions dispatches interaction commands for multiple resources to a thread manager, which dispatches the commands to idle server threads. In one embodiment, the TM attempts to dispatch all but one of the interaction commands to separate threads. The primary thread then processes the remaining resource command. Any commands relating to dispatch requests that were unable to be dispatched to separate threads due to unavailability are processed by the primary thread. Once the primary server has processed its interaction commands and received a signal indicating the threads receiving dispatch requests have completed their respective processing of dispatched commands, the next group of commands is processed in a similar manner.
    • 4. Published application
    • PARALLEL TRANSACTION EXECUTION WITH A THREAD POOL
    • US20100100624A1
    • 2010-04-22
    • US12643775
    • 2009-12-21
    • Alexander J. Somogyi, Adam Messinger, Anno R. Langen
    • G06F15/16
    • G06Q10/063
    • A method for using available server threads to process resources and reduce the overall time of performing XA interactions in two-phase commit protocol implemented by the transaction manager. A TM processing XA interactions dispatches interaction commands for multiple resources to a thread manager, which dispatches the commands to idle server threads. In one embodiment, the TM attempts to dispatch all but one of the interaction commands to separate threads. The primary thread then processes the remaining resource command. Any commands relating to dispatch requests that were unable to be dispatched to separate threads due to unavailability are processed by the primary thread. Once the primary server has processed its interaction commands and received a signal indicating the threads receiving dispatch requests have completed their respective processing of dispatched commands, the next group of commands is processed in a similar manner.
    • 8. Granted patent
    • Clustered enterprise Java™ in a secure distributed processing system
    • US07334232B2
    • 2008-02-19
    • US11176768
    • 2005-07-07
    • Dean B. Jacobs, Anno R. Langen
    • G06F9/46
    • G06F9/465; G06F9/544; G06F9/546; G06F9/548
    • A clustered enterprise distributed processing system. The distributed processing system includes a first and a second computer coupled to a communication medium. The first computer includes a virtual machine (JVM) and kernel software layer for transferring messages, including a remote virtual machine (RJVM). The second computer includes a JVM and a kernel software layer having a RJVM. Messages are passed from a RJVM to the JVM in one computer to the JVM and RJVM in the second computer. Messages may be forwarded through an intermediate server or rerouted after a network reconfiguration. Each computer includes a smart stub having a replica handler, including a load balancing software component and a failover software component. Each computer includes a duplicated service naming tree for storing a pool of smart stubs at a node.
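The "smart stub" this abstract mentions, a client-side stub whose replica handler load-balances calls and fails over when a server is unreachable, can be sketched as follows. This is only an illustration of the concept under invented names; a real stub would hold remote references into the cluster, and the failure detection would be network-level, not an exception from a local function.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class SmartStubDemo {
    // A replica is modeled as a function from request to reply.
    static class SmartStub {
        private final List<Function<String, String>> replicas;
        private final AtomicInteger next = new AtomicInteger();

        SmartStub(List<Function<String, String>> replicas) {
            this.replicas = replicas;
        }

        String invoke(String request) {
            // Round-robin load balancing, failing over to the next replica
            // when the chosen one throws (i.e., the server is down).
            for (int attempt = 0; attempt < replicas.size(); attempt++) {
                int i = Math.floorMod(next.getAndIncrement(), replicas.size());
                try {
                    return replicas.get(i).apply(request);
                } catch (RuntimeException serverDown) {
                    // fall through and try the next replica
                }
            }
            throw new IllegalStateException("all replicas failed");
        }
    }

    public static void main(String[] args) {
        Function<String, String> dead = req -> { throw new RuntimeException("down"); };
        Function<String, String> alive = req -> "ok:" + req;
        SmartStub stub = new SmartStub(List.of(dead, alive));
        System.out.println(stub.invoke("ping")); // prints ok:ping after failover
    }
}
```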
    • 9. Granted patent
    • Engine near cache for reducing latency in a telecommunications environment
    • US08112525B2
    • 2012-02-07
    • US11748791
    • 2007-05-15
    • Anno R. Langen, Rao Nasir Khan, John D. Beatty, Ioannis Cosmadopoulos
    • G06F15/173
    • H04L67/1095; H04L65/1006; H04L67/1002; H04L67/2842
    • The SIP server can be comprised of an engine tier and a state tier distributed on a cluster network environment. The engine tier can send, receive and process various messages. The state tier can maintain in-memory state data associated with various SIP sessions. A near cache can be residing on the engine tier in order to maintain a local copy of a portion of the state data contained in the state tier. Various engines in the engine tier can determine whether the near cache contains a current version of the state needed to process a message before retrieving the state data from the state tier. Accessing the state from the near cache can save on various latency costs such as serialization, transport and deserialization of state to and from the state tier. Furthermore, the near cache and JVM can be tuned to further improve performance of the SIP server.
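The near-cache check this abstract describes (consult the engine-local copy's version before paying serialization and transport costs to the state tier) can be sketched like this. It is a simplified local model with invented names, not the patented SIP-server code; in particular, a real engine would compare versions with a cheap metadata call rather than reading the full authoritative entry as this demo does.

```java
import java.util.HashMap;
import java.util.Map;

public class NearCacheDemo {
    // Session state tagged with a version so staleness can be detected.
    record Versioned(long version, String state) {}

    static final Map<String, Versioned> stateTier = new HashMap<>(); // authoritative copy
    static final Map<String, Versioned> nearCache = new HashMap<>(); // engine-local copy

    static String getState(String sessionId) {
        Versioned current = stateTier.get(sessionId);
        Versioned cached = nearCache.get(sessionId);
        if (cached != null && cached.version() == current.version()) {
            // Cache hit: skip transport and deserialization of the state.
            return cached.state();
        }
        // Miss or stale copy: refresh the local copy from the state tier.
        nearCache.put(sessionId, current);
        return current.state();
    }

    public static void main(String[] args) {
        stateTier.put("sip-1", new Versioned(1, "INVITE pending"));
        System.out.println(getState("sip-1")); // miss: fetched from the state tier
        System.out.println(getState("sip-1")); // hit: served from the near cache
    }
}
```

The payoff is that repeated messages for the same SIP session, which tend to land on the same engine, are processed without a round trip to the state tier.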