Title: Method and system enabling natural user interface gestures with user wearable glasses
Abstract: User-wearable eye glasses include a pair of two-dimensional cameras that optically acquire information about user gestures made with an unadorned user object in an interaction zone, responsive to displayed imagery with which the user can interact. On-glasses systems intelligently signal-process and map the acquired optical information to rapidly ascertain a sparse set of (x,y,z) locations adequate to identify user gestures. The displayed imagery can be created by the glasses systems and presented on a virtual on-glasses display, or can be created and/or viewed off-glasses. In some embodiments the user can see local views directly, augmented with imagery showing internet-provided tags that identify and/or provide information about viewed objects. On-glasses systems can communicate wirelessly with cloud servers and with off-glasses systems that the user can carry in a pocket or purse.
Publication number: US8836768 (B1)    Publication date: 2014.09.16
Application number: US201313975257    Filing date: 2013.08.23
Applicant: Aquifi, Inc.    Inventors: Rafii, Abbas; Zuccarino, Tony
IPC classes: H04N13/02; H04N7/18; G06T17/00; G09G5/00; G06F3/01    Main class: H04N13/02
Attorney/Agent: Michael A. Kaufman, Esq.
Main claim:
1. A method to enable an unadorned user-object to communicate using gestures made in (x,y,z) space with an eye glasses wearable electronic device coupleable to a display having a display screen whereon user viewable imagery is displayable, the method including the following steps:
(a) providing said eye glasses wearable electronic device with an optical acquisition system operable to capture image data of said unadorned user-object within a three-dimensional hover zone;
(b) defining within said three-dimensional hover zone an interaction subzone including at least one z0 plane disposed intermediate said eye glasses wearable electronic device and a plane at a maximum z-distance beyond which unadorned user-object gestures need not be recognized by said electronic device;
(c) processing image data captured at step (a) representing an interaction of said unadorned user-object with at least a portion of said interaction subzone, defined in step (b), to produce three-dimensional positional information of a detected said interaction;
(d) using said three-dimensional positional information produced at step (c) to determine at least one of (i) when in time, and (ii) where in (x,y,z) space said unadorned user-object interaction occurred;
(e) following determination at step (d), identifying a gesture being made by said unadorned user-object; and
(f) in response to identification of a gesture at step (e), generating and coupling at least one command to said display, said command having at least one characteristic selected from a group consisting of (I) said command causes altering at least one aspect of said viewable imagery, and (II) said command causes alteration of a state of said display regardless of whether an altered said state is user viewable.
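The claimed pipeline (capture from two 2D cameras, restrict to an interaction subzone bounded by a z0 plane and a maximum z-distance, recover sparse (x,y,z) positions, identify a gesture, issue a display command) can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation: all function names, the toy disparity-based triangulation, the zone bounds, and the "push"/"hover" classifier are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class StereoRig:
    """Hypothetical model of the two glasses-mounted 2D cameras,
    assumed rectified with identical focal lengths."""
    focal_px: float    # focal length in pixels
    baseline_m: float  # separation between the two cameras, in meters

def triangulate(rig, x_left, x_right, y):
    """Recover one sparse (x, y, z) landmark from matched pixel
    coordinates in the two camera images (claim steps (a) and (c))."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched landmark must have positive disparity")
    z = rig.focal_px * rig.baseline_m / disparity  # depth from disparity
    return (x_left * z / rig.focal_px, y * z / rig.focal_px, z)

def in_interaction_subzone(z, z0=0.15, z_max=0.60):
    """Claim step (b): only positions between the z0 plane near the
    glasses and a maximum z-distance are treated as gesture input."""
    return z0 <= z <= z_max

def classify_gesture(track):
    """Claim steps (d)-(e): a toy classifier over a short (x, y, z)
    track; monotonically decreasing z is read as a 'push' gesture."""
    zs = [p[2] for p in track]
    if len(zs) >= 2 and all(b < a for a, b in zip(zs, zs[1:])):
        return "push"   # user-object advancing toward the glasses
    return "hover"

def command_for(gesture):
    """Claim step (f): map an identified gesture to a display command."""
    return {"push": "SELECT", "hover": "HIGHLIGHT"}.get(gesture, "NONE")
```

For example, with an assumed 600 px focal length and 6 cm baseline, a landmark seen at x = 320 px in the left image and x = 200 px in the right (disparity 120 px) triangulates to z = 0.30 m, inside the illustrative subzone, and a track of decreasing z values classifies as a "push" that maps to a SELECT command.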
Address: Palo Alto, CA, US