Title of Invention: Apparatus and method for recognizing spatial gesture
Abstract: The present invention relates to an apparatus for recognizing a gesture in a space. In accordance with an embodiment, a spatial gesture recognition apparatus includes a pattern formation unit for radiating light onto a surface of an object required to input a gesture in a virtual air bounce and forming a predetermined pattern on the surface of the object; an image acquisition unit for acquiring a motion image of the object; and a processing unit for recognizing a gesture input by the object based on the pattern formed on the surface of the object using the acquired image. In this way, haptic feedback of different intensities is provided depending on the depth of the object used to input a gesture in the space, so that a user can precisely input his or her desired gesture.
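The abstract describes haptic feedback whose intensity varies with the depth of the object inside the virtual air bounce. The sketch below is a purely hypothetical illustration of that idea (the patent does not disclose this mapping): penetration depth is compared against assumed zone boundaries, and deeper zones select a stronger feedback intensity. All thresholds and intensity levels are invented for illustration.

```python
def feedback_intensity(depth_m, zones=((0.00, 0.2), (0.05, 0.5), (0.10, 1.0))):
    """Hypothetical depth-to-intensity mapping, not the patented method.

    depth_m: how far (in metres) the object has penetrated the air bounce;
             values <= 0 mean the object has not yet entered it.
    zones:   assumed (minimum penetration depth, intensity) pairs, ordered
             from shallow to deep.
    Returns a haptic intensity in [0.0, 1.0].
    """
    intensity = 0.0
    for min_depth, level in zones:
        # The deepest zone whose boundary has been crossed wins.
        if depth_m >= min_depth:
            intensity = level
    return intensity
```

For example, a fingertip hovering outside the bounce yields intensity 0.0, a shallow touch yields the weakest level, and a deep press yields the strongest, giving the user graded cues about how far they have pushed into the gesture volume.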
Publication Number: US9524031 (B2)    Publication Date: 2016-12-20
Application Number: US201314349063    Filing Date: 2013-11-18
Applicant: CENTER OF HUMAN-CENTERED INTERACTION FOR COEXISTENCE    Inventors: Yeom Kiwon; Han Hyejin; You Bumjae
Classification (IPC): G06F3/0346; G06F3/01; G06K9/00    Primary Classification: G06F3/0346
Agency: Wells St. John P.S.    Agent: Wells St. John P.S.
Principal Claim: 1. An apparatus for recognizing a spatial gesture, comprising:
a pattern formation unit for radiating light onto a surface of an object required to input a gesture in a virtual air bounce, and forming a predetermined pattern on the surface of the object;
an image acquisition unit for acquiring a motion image of the object;
a processing unit for recognizing a gesture input by the object based on the pattern formed on the surface of the object using the acquired image; and
a gesture recognition unit for recognizing a motion of the object as a gesture, wherein the gesture recognition unit extracts vector values from the gesture and differentiates the vector values to obtain velocities, thereby eliminating or correcting a meaningless portion of the gesture;
wherein the gesture recognition unit recognizes the motion of the object made in a predetermined area of the air bounce as a gesture based on the calculated depth information of the object;
wherein the gesture recognition unit extracts information about one or more feature vectors from the recognized gesture, and eliminates a ligature from the motion of the object using the extracted feature vectors;
wherein the gesture recognition unit extracts one or more segmentation points using the extracted feature vectors and the calculated depth information of the object, and determines a portion connecting the extracted segmentation points to be a ligature; and
wherein the gesture recognition unit extracts the segmentation points by applying the feature vectors and the depth information of the object to any one of the following feature point extraction techniques: a matching probability technique and a pattern model technique.
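The claim's gesture recognition unit "extracts vector values from the gesture and differentiates velocities of the vector values to eliminate or correct a meaningless portion of the gesture". A minimal sketch of that general idea, under assumptions the patent does not spell out, is to differentiate the sampled position vectors with respect to time and discard low-speed samples as meaningless (e.g. ligature or hesitation). The function name, threshold, and NumPy-based finite differencing are all illustrative choices, not the patented implementation.

```python
import numpy as np

def meaningful_motion_mask(points, timestamps, speed_threshold=0.05):
    """Illustrative sketch: flag the "meaningful" portion of a gesture
    by differentiating its position vectors over time.

    points:          (N, 3) array of sampled 3-D positions of the object
    timestamps:      (N,) array of sample times in seconds
    speed_threshold: assumed minimum speed (units/s) for meaningful motion
    Returns an (N,) boolean mask; False marks samples to eliminate.
    """
    points = np.asarray(points, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    # Finite-difference derivative of the vector values with respect to time
    velocity = np.gradient(points, t, axis=0)
    speed = np.linalg.norm(velocity, axis=1)
    # Slow samples are treated as a meaningless portion of the gesture
    return speed >= speed_threshold
```

In practice such a mask could feed the later claim steps: the transitions between kept and discarded runs are natural candidates for the segmentation points from which ligatures are identified.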
Address: Seoul, KR