Title: Depth-based user interface gesture control
Abstract: Technologies for depth-based gesture control include a computing device having a display and a depth sensor. The computing device is configured to recognize an input gesture performed by a user, determine a depth of the input gesture relative to the display based on data from the depth sensor, assign a depth plane to the input gesture as a function of the depth, and execute a user interface command based on the input gesture and the assigned depth plane. The user interface command may control a virtual object selected by depth plane, including a player character in a game. The computing device may recognize primary and secondary virtual touch planes and execute a secondary user interface command for input gestures on the secondary virtual touch plane, such as magnifying or selecting user interface elements or enabling additional functionality based on the input gesture. Other embodiments are described and claimed.
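The abstract describes a pipeline: measure a gesture's depth relative to the display, quantize that depth into a discrete depth plane, and dispatch a primary or secondary user interface command depending on the plane. A minimal sketch of that idea follows; the plane boundaries, plane names, and command strings are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the abstract's pipeline: map a measured gesture
# depth to a discrete depth plane parallel to the display, then dispatch
# a primary or secondary user interface command. Boundary values are
# illustrative assumptions, not taken from the patent.

PLANE_BOUNDARIES_M = [0.15, 0.40]  # distances from the display, in meters


def assign_depth_plane(depth_m: float) -> str:
    """Assign a depth plane to a gesture as a function of its depth."""
    if depth_m < PLANE_BOUNDARIES_M[0]:
        return "primary"      # primary virtual touch plane, nearest the display
    if depth_m < PLANE_BOUNDARIES_M[1]:
        return "secondary"    # secondary virtual touch plane, farther out
    return "out_of_range"


def execute_command(gesture: str, depth_m: float) -> str:
    """Execute a user interface command based on gesture and assigned plane."""
    plane = assign_depth_plane(depth_m)
    if plane == "secondary":
        # e.g. magnify an element or display a contextual command menu
        return f"secondary:{gesture}"
    if plane == "primary":
        return f"primary:{gesture}"
    return "rejected"
```

The same gesture (e.g. a tap) thus triggers different commands purely as a function of the depth at which it is performed, which is the core of the claimed control scheme.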
Publication number: US9389779 (B2)    Publication date: 2016.07.12
Application number: US201313976036    Filing date: 2013.03.14
Assignee: Intel Corporation    Inventors: Anderson, Glen J.; Reif, Dror; Hurwitz, Barak; Kamhi, Gila
Classification: G06F3/033; G06F3/00; G06F3/0488; G06F3/01; G06F3/048    Main classification: G06F3/033
Agency: Barnes & Thornburg    Agent: Barnes & Thornburg
Main claim:
1. A computing device for depth-based gesture control, the computing device comprising:
    a display to define a surface normal;
    a depth sensor to:
        generate depth sensor data indicative of a depth relative to the display of an input gesture performed by a user of the computing device in front of the display;
        generate second depth sensor data indicative of a second depth relative to the display of a second input gesture performed by a second user of the computing device in front of the display; and
        generate third depth sensor data indicative of a third depth relative to the display of a third input gesture performed by the second user of the computing device in front of the display;
    a gesture recognition module to recognize the input gesture, the second input gesture, and the third input gesture;
    a depth recognition module to:
        receive the depth sensor data, the second depth sensor data, and the third depth sensor data from the depth sensor;
        determine the depth of the input gesture as a function of the depth sensor data;
        determine the second depth of the second input gesture as a function of the second depth sensor data;
        determine the third depth of the third input gesture as a function of the third depth sensor data;
        assign a depth plane to the input gesture as a function of the depth of the input gesture;
        assign a second depth plane different from the depth plane to the second input gesture as a function of the second depth of the second input gesture; and
        assign a third depth plane different from the second depth plane to the third input gesture as a function of the third depth of the third input gesture;
        wherein each depth plane is positioned parallel to the display and intersects the surface normal; and
    a user command module to:
        designate the second depth plane as an accessible depth plane for the second user;
        execute a user interface command based on the input gesture and the assigned depth plane;
        execute a second user interface command based on the second input gesture and the assigned second depth plane;
        determine whether the third depth is associated with the accessible depth plane for the second user; and
        reject the third input gesture in response to a determination that the third depth is not associated with the accessible depth plane for the second user,
    wherein to execute the second user interface command comprises to:
        determine whether the assigned second depth plane comprises a secondary virtual touch plane of the computing device; and
        execute a secondary user interface command in response to a determination that the assigned second depth plane comprises the secondary virtual touch plane, wherein to execute the secondary user interface command comprises to display a contextual command menu on the display.
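Claim 1's user command module adds a per-user access check on top of plane assignment: a depth plane is designated as "accessible" for a given user, and a gesture that user performs on any other plane is rejected rather than executed. A minimal sketch of that check, with hypothetical class and method names (plane indices and command strings are illustrative):

```python
# Illustrative sketch of claim 1's per-user accessible depth plane:
# a plane is designated accessible for a user, and gestures that user
# performs on any other plane are rejected. Names are hypothetical.

class DepthGestureController:
    def __init__(self) -> None:
        # maps a user identifier to that user's accessible depth plane index
        self.accessible_plane: dict[str, int] = {}

    def designate_accessible_plane(self, user: str, plane: int) -> None:
        """Designate a depth plane as the accessible plane for a user."""
        self.accessible_plane[user] = plane

    def handle_gesture(self, user: str, gesture: str, plane: int) -> str:
        """Execute a gesture's command, or reject it if the user's gesture
        was performed on a plane not accessible to that user."""
        allowed = self.accessible_plane.get(user)
        if allowed is not None and plane != allowed:
            return "rejected"  # e.g. the claim's third input gesture
        return f"execute:{gesture}@plane{plane}"
```

This mirrors the claim's scenario: the second user's gesture on the designated second depth plane executes a command, while the same user's third gesture on a different plane is rejected.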
Address: Santa Clara, CA, US