Invention Title System and method for determining liveness
Abstract Systems and methods are provided for recording a user's biometric features and generating an identifier representative of the user's biometric features and of whether the user is alive (“liveness”) using mobile devices such as a smartphone. The systems and methods described herein enable a series of operations whereby a user, using a mobile device, can capture imagery of the user's face, eyes, and periocular region. The mobile device is also configured to analyze the imagery to identify low-level features, determine their position spatially within the images, and track the changes in position of the low-level features dynamically throughout the images. Using the spatial and dynamic information, the mobile device is further configured to determine whether the user is alive and/or generate a biometric identifier characterizing the user's biometric features, which can be used to authenticate the user by determining liveness and/or to verify the user's identity.
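For illustration only, the following Python sketch shows one way the kind of analysis described in the abstract could be approximated: detecting a face, selecting low-level features inside it, and tracking how their positions change across frames, with the amount of motion used as a crude liveness cue. It assumes OpenCV (cv2) and NumPy; the helper name liveness_motion_score, the use of Lucas-Kanade optical flow, and the motion_threshold value are assumptions for this sketch, not details taken from the patent.

```python
# Minimal sketch (not the patented method): track low-level features inside a detected
# face region across frames and use their mean displacement as a crude liveness cue.
import cv2
import numpy as np

def liveness_motion_score(frames, motion_threshold=0.5):
    """Return (mean per-frame feature displacement, liveness guess); a flat photo moves ~0."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    prev_gray, points = None, None
    displacements = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if points is None or len(points) == 0:
            # (Re)initialize: find a face and pick trackable low-level features inside it.
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                prev_gray = gray
                continue
            x, y, w, h = faces[0]
            mask = np.zeros_like(gray)
            mask[y:y + h, x:x + w] = 255
            points = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                             minDistance=5, mask=mask)
            prev_gray = gray
            continue
        # Track the features into the current frame and measure how far they moved.
        new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        ok = status.flatten() == 1
        good_new = new_points[ok].reshape(-1, 2)
        good_old = points[ok].reshape(-1, 2)
        if len(good_new) > 0:
            displacements.append(float(np.mean(np.linalg.norm(good_new - good_old, axis=1))))
        prev_gray = gray
        points = good_new.reshape(-1, 1, 2) if len(good_new) > 0 else None
    mean_motion = float(np.mean(displacements)) if displacements else 0.0
    return mean_motion, mean_motion > motion_threshold
```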
Publication Number US9313200(B2) Publication Date 2016.04.12
Application Number US201414201462 Filing Date 2014.03.07
Applicant HOYOS LABS IP, LTD. Inventor Hoyos Hector
IPC Classification G06F21/32;H04L29/06;G06Q20/40;G07F19/00;G06Q20/32 Main Classification G06F21/32
Agency Leason Ellis LLP Agent Leason Ellis LLP
Principal Claim 1. A computer implemented method for authenticating a user according to the user's biometric features, the method comprising:
a) capturing, by a mobile device having a camera, a storage medium, instructions stored on the storage medium, and a processor configured by executing the instructions, a plurality of images depicting at least one facial region of the user,
capturing, by the processor and a microphone communicatively coupled to the processor, an audio data file including a recording of the user's voice, wherein the audio data file is captured concurrent to capturing the plurality of images by the processor using the camera, and
capturing, by the processor using one or more sensors, non-machine-vision based information; and
b) detecting, by the processor from a first image of the plurality of images, a plurality of facial features depicted in the first image;
c) determining, by the processor from the first image, a first position of the facial features, wherein the first position of each of the facial features is relative to a respective position of at least one other facial feature;
d) determining, by the processor from each of at least one other image of the plurality of images, at least one second respective position of each of the facial features;
e) calculating, by the processor as a function of the determining in steps c) and d), changes in position of the facial features;
encoding, by the processor in one or more spatiotemporal histograms:
the changes in position of the facial features as temporal gradients, wherein the temporal gradients correspond to pixels depicting respective facial features detected in the first image, and wherein the temporal gradients represent a difference between a location of the pixels depicting the respective facial features in the first image and respective locations of pixels depicting the respective facial features in the at least one other image, and
intensity values for the pixels depicting the respective facial features in the first image as spatial gradients;
generating, by the processor from the audio data file, a voice print characterizing the user's voice and usable to determine liveness of the user;
determining, by the processor, that the facial features and the audio data file represent live biometric features of the user and thereby indicate liveness of the user as a function of: comparing, by the processor, the changes in position of the facial features to the voice print; and
authenticating, by the processor, the user according to determining that the changes in position of the facial features correspond to the voice print and thereby indicate liveness of the user, and wherein authenticating the user further comprises:
comparing, by the processor, the non-machine vision based information to pre-defined behavioral characteristics stored on the storage medium, and
determining, by the processor, that the non-machine vision based information is consistent with the pre-defined behavioral characteristics to a prescribed degree.
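As a rough, non-authoritative illustration of two elements recited in claim 1 (encoding feature-position changes and pixel intensities into spatiotemporal histograms, and comparing facial-feature motion against a voice-derived signal) the following Python sketch shows one possible approach. The function names, the use of plain fixed-bin histograms, the audio energy envelope as a stand-in for a full voice print, and the correlation threshold are all assumptions for this sketch, not the claimed implementation.

```python
# Illustrative sketch only, not the claimed method: simple histogram encoding of
# per-feature motion ("temporal gradients") and pixel intensities ("spatial gradients"),
# plus a correlation of mouth motion with audio energy as a rough liveness proxy.
import numpy as np

def spatiotemporal_histograms(positions_t0, positions_t1, intensities, bins=16):
    """positions_*: (N, 2) feature coordinates in two frames; intensities: (N,) pixel values."""
    # Per-feature displacement between the first image and a later image.
    temporal = np.linalg.norm(positions_t1 - positions_t0, axis=1)
    temporal_hist, _ = np.histogram(temporal, bins=bins, range=(0.0, temporal.max() + 1e-6))
    # Intensity values at the feature pixels in the first image.
    spatial_hist, _ = np.histogram(intensities, bins=bins, range=(0.0, 255.0))
    return temporal_hist, spatial_hist

def motion_matches_voice(mouth_motion, audio_energy, threshold=0.3):
    """Correlate a per-frame mouth-motion series with a per-frame audio energy envelope
    (both same length); live speech tends to produce correlated motion and sound."""
    m = (mouth_motion - mouth_motion.mean()) / (mouth_motion.std() + 1e-9)
    a = (audio_energy - audio_energy.mean()) / (audio_energy.std() + 1e-9)
    correlation = float(np.mean(m * a))          # normalized cross-correlation at lag 0
    return correlation, correlation > threshold
```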
Address Oxford, GB