Independent Claim
1. A computer-implemented method for authenticating a user according to the user's biometric features, the method comprising:
a) capturing, by a mobile device having a camera, a storage medium, instructions stored on the storage medium, and a processor configured by executing the instructions, a plurality of images depicting at least one facial region of the user;
capturing, by the processor and a microphone communicatively coupled to the processor, an audio data file including a recording of the user's voice, wherein the audio data file is captured concurrently with capturing the plurality of images by the processor using the camera; and
capturing, by the processor using one or more sensors, non-machine-vision based information;
b) detecting, by the processor from a first image of the plurality of images, a plurality of facial features depicted in the first image;
c) determining, by the processor from the first image, a first position of the facial features, wherein the first position of each of the facial features is relative to a respective position of at least one other facial feature;
d) determining, by the processor from each of at least one other image of the plurality of images, at least one second respective position of each of the facial features;
e) calculating, by the processor as a function of the determining in steps c) and d), changes in position of the facial features;
encoding, by the processor in one or more spatiotemporal histograms:
the changes in position of the facial features as temporal gradients, wherein the temporal gradients correspond to pixels depicting respective facial features detected in the first image, and wherein the temporal gradients represent a difference between a location of the pixels depicting the respective facial features in the first image and respective locations of pixels depicting the respective facial features in the at least one other image, and
intensity values for the pixels depicting the respective facial features in the first image as spatial gradients;
generating, by the processor from the audio data file, a voice print characterizing the user's voice and usable to determine liveness of the user;
determining, by the processor, that the facial features and the audio data file represent live biometric features of the user and thereby indicate liveness of the user as a function of comparing, by the processor, the changes in position of the facial features to the voice print; and
authenticating, by the processor, the user according to determining that the changes in position of the facial features correspond to the voice print and thereby indicate liveness of the user, wherein authenticating the user further comprises:
comparing, by the processor, the non-machine-vision based information to pre-defined behavioral characteristics stored on the storage medium, and
determining, by the processor, that the non-machine-vision based information is consistent with the pre-defined behavioral characteristics to a prescribed degree.
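The spatiotemporal-histogram encoding recited in the claim can be illustrated with a minimal sketch: per-landmark displacement between the first image and a later image is binned as a temporal gradient, and the landmark pixels' intensities in the first image are binned as a spatial gradient. All function names, bin counts, and value ranges below are illustrative assumptions, not the claimed implementation.

```python
def spatiotemporal_histogram(first_landmarks, later_landmarks,
                             first_intensities, n_bins=8):
    # Hypothetical sketch of the claim's encoding step; bin counts and
    # ranges are assumptions, not the patented method.
    def bin_index(value, lo, hi):
        # Map a value in [lo, hi) to one of n_bins bins, clamping outliers.
        idx = int((value - lo) / (hi - lo) * n_bins)
        return min(max(idx, 0), n_bins - 1)

    # Temporal gradients: displacement of each landmark's pixel location
    # between the first image and the later image.
    temporal = [0] * n_bins
    for (x0, y0), (x1, y1) in zip(first_landmarks, later_landmarks):
        magnitude = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        temporal[bin_index(magnitude, 0.0, 20.0)] += 1  # assumed 20 px max

    # Spatial gradients: intensity values of the landmark pixels in the
    # first image, binned over the 8-bit intensity range.
    spatial = [0] * n_bins
    for value in first_intensities:
        spatial[bin_index(value, 0, 256)] += 1

    # Concatenate into one spatiotemporal descriptor and normalize.
    hist = temporal + spatial
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

A descriptor like this could then be matched against an enrolled template, though the claim itself does not specify the matching scheme.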
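The claim's liveness determination compares changes in facial-feature position to the voice print. One plausible reading, sketched below purely for illustration, is to correlate per-frame mouth-landmark motion with per-frame audio energy and treat a strong correlation as evidence of a live, speaking user; the threshold and the correlation measure are assumptions, not the patented algorithm.

```python
def liveness_score(mouth_motion, audio_energy):
    # Pearson correlation between two same-length per-frame sequences:
    # mouth-landmark motion magnitudes and audio energy values.
    n = len(mouth_motion)
    assert n == len(audio_energy) and n > 1
    mean_m = sum(mouth_motion) / n
    mean_a = sum(audio_energy) / n
    cov = sum((m - mean_m) * (a - mean_a)
              for m, a in zip(mouth_motion, audio_energy))
    std_m = sum((m - mean_m) ** 2 for m in mouth_motion) ** 0.5
    std_a = sum((a - mean_a) ** 2 for a in audio_energy) ** 0.5
    if std_m == 0 or std_a == 0:
        return 0.0  # no variation in one signal: no evidence either way
    return cov / (std_m * std_a)

def is_live(mouth_motion, audio_energy, threshold=0.5):
    # Assumed decision rule: motion that tracks the audio suggests liveness.
    return liveness_score(mouth_motion, audio_energy) >= threshold
```

A replay attack pairing a still photograph with recorded speech would yield near-zero mouth motion and hence a low score under this sketch.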