Title Three dimensional user interface effects on a display
Abstract The techniques disclosed herein may use various sensors to infer a frame of reference for a hand-held device. In fact, with various inertial cues from an accelerometer, gyroscope, and other instruments that report their states in real time, it is possible to track a Frenet frame of the device in real time to provide an instantaneous (or continuous) 3D frame-of-reference. In addition to—or in place of—calculating this instantaneous (or continuous) frame of reference, the position of a user's head may either be inferred or calculated directly by using one or more of a device's optical sensors, e.g., an optical camera, infrared camera, laser, etc. With knowledge of the 3D frame-of-reference for the display and/or knowledge of the position of the user's head, more realistic virtual 3D depictions of the graphical objects on the device's display may be created—and interacted with—by the user.
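The abstract's idea of fusing inertial readings to track the device's orientation can be illustrated with a minimal complementary-filter sketch (not taken from the patent; the function name, single-axis simplification, and `alpha` weighting are illustrative assumptions):

```python
import math

def complementary_filter(angle, gyro_rate, accel, dt, alpha=0.98):
    """One update step of a complementary filter estimating the device's
    tilt (radians) about a single axis.

    angle:     previous tilt estimate (radians)
    gyro_rate: angular rate reported by the gyroscope (rad/s)
    accel:     (a_y, a_z) accelerometer components in the tilt plane (m/s^2)
    dt:        time since the last update (s)
    alpha:     blend weight: trust the gyro short-term, gravity long-term
    """
    gyro_angle = angle + gyro_rate * dt            # integrate the gyro rate
    accel_angle = math.atan2(accel[0], accel[1])   # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

A full implementation would track the complete 3D attitude (e.g., as a quaternion) rather than a single tilt angle, but the same blend of fast gyro integration against a slow gravity reference underlies the real-time frame-of-reference tracking the abstract describes.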
Publication Number US9411413(B2) Publication Date 2016.08.09
Application Number US201414329777 Filing Date 2014.07.11
Applicant Apple Inc. Inventors Motta Ricardo; Zimmer Mark; Stahl Geoff; Hayward David; Doepke Frank
Classification G06T15/00; G06F3/01; G06T15/20; G06F3/00; G06F3/0481; G06F3/0346 Primary Classification G06T15/00
Agency Blank Rome LLP Agent Blank Rome LLP
Principal Claim 1. A graphical user interface method, comprising:
receiving optical data from one or more optical sensors disposed within a device, wherein the optical data comprises one or more of: two-dimensional image data, stereoscopic image data, structured light data, depth map data, and Lidar data;
receiving non-optical data from one or more non-optical sensors;
determining a position of a user of the device's head based, at least in part, on the received optical data and the received non-optical data;
generating a virtual 3D depiction of at least part of a graphical user interface on a display of the device; and
applying an appropriate perspective transformation to the virtual 3D depiction of the at least part of the graphical user interface on the display of the device,
wherein the acts of generating and applying are based, at least in part, on the determined position of the user of the device's head, the received optical data, and the received non-optical data, and
wherein the at least part of the graphical user interface is represented in a virtual 3D operating system environment.
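The head-position-dependent perspective transformation recited in the claim can be sketched as an off-axis projection: a point in the virtual 3D environment is mapped to wherever the line from the viewer's head through that point crosses the display plane. This is an illustrative sketch only (the function, coordinate convention, and units are assumptions, not the patent's method):

```python
def project_point(point, head):
    """Project a 3D scene point onto the z = 0 display plane as seen
    from the viewer's head position (off-axis perspective).

    point: (x, y, z) of the scene point, with z <= 0 "behind" the screen
    head:  (x, y, z) of the viewer's head, with z > 0 in front of the screen
    Returns the 2D (x, y) where the head->point ray intersects z = 0.
    """
    px, py, pz = point
    hx, hy, hz = head
    t = hz / (hz - pz)   # ray parameter at which z reaches 0
    return (hx + t * (px - hx), hy + t * (py - hy))
```

As the head moves laterally, the projected position of a point behind the screen shifts in the opposite direction, producing the motion parallax that makes the depiction read as 3D; for example, a point at `(100, 0, -500)` projects to `(50.0, 0.0)` for a head centered at `(0, 0, 500)` but to `(150.0, 0.0)` when the head moves to `(200, 0, 500)`.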
Address Cupertino, CA, US