    • 2. Granted Invention Patent
    • Title: Three dimensional user interface effects on a display by using properties of motion
    • Publication No.: US09417763B2
    • Publication Date: 2016-08-16
    • Application No.: US14571062
    • Filing Date: 2014-12-15
    • Assignee: Apple Inc.
    • Inventors: Mark Zimmer; Geoff Stahl; David Hayward; Frank Doepke
    • IPC: G06T15/00; G06F3/0481; G06T15/20; G06F3/0488; G06F3/00; G06F3/0346; G06F3/01
    • CPC: G06F3/04815; G06F3/005; G06F3/013; G06F3/017; G06F3/0346; G06F3/0488; G06F2203/0381; G06T15/20
    • Abstract: The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame, i.e., X- and Y-vectors for the display, and also a Z-vector that points perpendicularly to the display. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time to provide a continuous 3D frame-of-reference. Once this continuous frame of reference is known, the position of a user's eyes may either be inferred or calculated directly by using a device's front-facing camera. With the position of the user's eyes and a continuous 3D frame-of-reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
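The static part of the frame-of-reference inference described in the abstract, before gyrometer tracking takes over, can be derived from just two sensor vectors: gravity gives the display normal, and the compass reading pins down the remaining rotation. A minimal sketch, assuming a face-up device whose accelerometer reports the gravity reaction force along +Z; the function names are illustrative, not from the patent:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def device_frame(gravity, magnetic):
    """Return orthonormal X, Y, Z axes for the display's frame of reference."""
    # Z: display normal — for a face-up device the accelerometer's
    # reaction-force reading points out of the screen.
    z = normalize(gravity)
    # X: "east" direction, perpendicular to both magnetic north and up.
    x = normalize(cross(magnetic, z))
    # Y: completes the right-handed frame.
    y = cross(z, x)
    return x, y, z
```

With gravity straight up and a northward magnetic reading (with downward dip), this yields the identity East-North-Up frame, which the real-time inertial tracking would then keep updated as the device moves.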
    • 3. Invention Patent Application
    • Title: Three Dimensional User Interface Effects On A Display
    • Publication No.: US20150009130A1
    • Publication Date: 2015-01-08
    • Application No.: US14329777
    • Filing Date: 2014-07-11
    • Assignee: Apple Inc.
    • Inventors: Ricardo Motta; Mark Zimmer; Geoff Stahl; David Hayward; Frank Doepke
    • IPC: G06F3/01; G06T15/20
    • CPC: G06F3/04815; G06F3/005; G06F3/012; G06F3/013; G06F3/0346; G06F3/04883; G06F2203/0381; G06T15/20; G06T2200/04; G06T2200/24
    • Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
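The payoff of knowing the user's head position, per the abstract, is a parallax shift of on-screen objects. A minimal pinhole-projection sketch under assumed geometry (screen plane at z = 0, eye in front, virtual object behind, all in the same length units); this is an illustration of the idea, not the patent's actual rendering math:

```python
def parallax_screen_pos(obj_x, obj_depth, eye_x, eye_dist):
    """Project a virtual object lying obj_depth behind the screen plane
    onto the screen, as seen by an eye eye_dist in front of the plane
    and eye_x off-center."""
    # Intersect the eye-to-object ray with the screen plane z = 0
    # (similar triangles): the deeper the object, the more its screen
    # position follows the viewer's head.
    t = eye_dist / (eye_dist + obj_depth)
    return eye_x + (obj_x - eye_x) * t
```

An object at zero depth sits exactly at its own x regardless of head position, while deeper objects slide toward the viewer's side as the head moves, producing the virtual-3D depth cue.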
    • 4. Invention Patent Application
    • Title: Avatars Reflecting User States
    • Publication No.: US20140143693A1
    • Publication Date: 2014-05-22
    • Application No.: US14163697
    • Filing Date: 2014-01-24
    • Assignee: Apple Inc.
    • Inventors: Thomas Goossens; Laurent Baumann; Geoff Stahl
    • IPC: G06F3/0484; G06F3/0488
    • CPC: G06F3/04845; G06F3/04883; G06Q10/10; G06Q50/01
    • Abstract: Methods, systems, and computer-readable media for creating and using customized avatar instances to reflect current user states are disclosed. In various implementations, the user states can be defined using trigger events based on user-entered textual data, emoticons, or states of the device being used. For each user state, a customized avatar instance having a facial expression, body language, accessories, clothing items, and/or a presentation scheme reflective of the user state can be generated. When one or more trigger events indicating occurrence of a particular user state are detected on the device, the avatar presented on the device is updated with the customized avatar instance associated with the particular user state.
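The trigger-event mechanism in the abstract — emoticons, text, or device state mapping to a user state, which in turn selects a customized avatar instance — can be sketched with a simple lookup. The rule names and mappings below are invented for the example, not taken from the patent:

```python
# Hypothetical textual triggers: typed emoticons mapped to user states.
EMOTICON_STATES = {":)": "happy", ":(": "sad", ":D": "excited"}

def detect_user_state(text, battery_low=False):
    """Map trigger events (a device state or typed emoticons) to a user state."""
    if battery_low:                      # device-state trigger
        return "tired"
    for emoticon, state in EMOTICON_STATES.items():
        if emoticon in text:             # textual trigger
            return state
    return "neutral"

def avatar_for(state):
    """Pick the customized avatar instance (expression, accessory) for a state."""
    instances = {
        "happy":   {"expression": "smile", "accessory": None},
        "sad":     {"expression": "frown", "accessory": None},
        "excited": {"expression": "grin",  "accessory": "party-hat"},
        "tired":   {"expression": "yawn",  "accessory": None},
        "neutral": {"expression": "rest",  "accessory": None},
    }
    return instances[state]
```

In the claimed flow, each detected trigger would re-run this mapping and swap the displayed avatar for the instance associated with the new state.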
    • 6. Granted Invention Patent
    • Title: Three dimensional user interface effects on a display
    • Publication No.: US09411413B2
    • Publication Date: 2016-08-09
    • Application No.: US14329777
    • Filing Date: 2014-07-11
    • Assignee: Apple Inc.
    • Inventors: Ricardo Motta; Mark Zimmer; Geoff Stahl; David Hayward; Frank Doepke
    • IPC: G06T15/00; G06F3/01; G06T15/20; G06F3/00; G06F3/0481; G06F3/0346
    • CPC: G06F3/04815; G06F3/005; G06F3/012; G06F3/013; G06F3/0346; G06F3/04883; G06F2203/0381; G06T15/20; G06T2200/04; G06T2200/24
    • Abstract: The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial clues from accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
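The real-time Frenet-frame tracking this grant describes amounts to continuously folding angular-rate readings into the device's axes. A first-order sketch with per-step re-normalization to limit drift; the world-frame rate input and fixed step size are assumptions for illustration, not the patent's filter:

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def step_frame(axes, omega, dt):
    """Advance the device axes (rows X, Y, Z) under angular rate omega
    (rad/s, world frame) for dt seconds: da/dt = omega x a, first order,
    re-normalized each step."""
    out = []
    for a in axes:
        w = _cross(omega, a)
        out.append(_normalize([a[i] + dt * w[i] for i in range(3)]))
    return out
```

Iterating this at sensor rate keeps a continuous 3D frame-of-reference; a production tracker would additionally fuse accelerometer and compass data to correct the gyrometer's long-term drift.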