Invention Title: 3-D MODEL GENERATION
Abstract: Various embodiments provide for the generation of 3D models of objects. For example, depth data and color image data can be captured from viewpoints around an object using a sensor. A camera having a higher resolution can simultaneously capture image data of the object. Features between images captured by the image sensor and the camera can be extracted and compared to determine a mapping between the camera and the image sensor. Once the mapping between the camera and the image sensor is determined, a second mapping between adjacent viewpoints can be determined for each image around the object. In this example, each viewpoint overlaps with an adjacent viewpoint, and features extracted from two overlapping viewpoints are matched to determine their relative alignment. Accordingly, a 3D point cloud can be generated, and the images captured by the camera can be projected onto the surface of the 3D point cloud to generate the 3D model.
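The camera-to-sensor mapping described in the abstract can be illustrated with a minimal sketch, assuming OpenCV ORB features and a RANSAC-estimated homography as stand-ins for the feature extraction and projective mapping algorithms; the function name and image inputs below are hypothetical, not the patented method.

```python
# Minimal sketch of the abstract's camera-to-image-sensor mapping step, assuming
# ORB features and a RANSAC homography stand in for the described feature
# extraction and projective mapping algorithms (illustrative only).
import cv2
import numpy as np

def estimate_projective_mapping(sensor_img_path, camera_img_path):
    # Load the lower-resolution sensor image and the higher-resolution camera image.
    sensor_img = cv2.imread(sensor_img_path, cv2.IMREAD_GRAYSCALE)
    camera_img = cv2.imread(camera_img_path, cv2.IMREAD_GRAYSCALE)

    # Extract features from both images with the same algorithm.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(sensor_img, None)
    kp2, des2 = orb.detectAndCompute(camera_img, None)

    # Match features between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Estimate a projective mapping (homography) from the sensor image to the camera image.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography
```

Because the depth data is already registered to the sensor's image, such a mapping allows 3D coordinates to be carried over to features in the higher-resolution camera image.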
Publication Number: US2015381968(A1)  Publication Date: 2015.12.31
Application Number: US201414318355  Filing Date: 2014.06.27
Applicant: A9.com, Inc.  Inventors: Arora Himanshu; Dhua Arnab Sanat Kumar; Ramesh Sunil; Fang Chen
Classification: H04N13/02; G06T17/00  Main Classification: H04N13/02
Agency:  Agent:
Main Claim: 1. A non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause a computing device to:
capture, using an image sensor, depth data of an object from a plurality of viewpoints;
capture, using a first camera, first image data of the object from each viewpoint of the plurality of viewpoints, a preregistered process aligning two-dimensional (2D) coordinates of the first image data for the first camera and three-dimensional (3D) coordinates of the depth data for the image sensor;
capture, using a second camera, second image data of the object from each viewpoint of the plurality of viewpoints, the second image data being of a higher resolution relative to the first image data;
extract, using a feature extraction algorithm, first features from the first image data of each viewpoint captured by the first camera;
extract, using the feature extraction algorithm, second features from the second image data of each viewpoint captured by the second camera;
determine matching features between the first features and the second features;
determine, using a projective mapping algorithm, a first mapping between the first image data and the second image data for each viewpoint using the matching features, the first mapping providing 3D coordinates for the second features of the second image data captured by the second camera;
determine, for the second image data of each viewpoint, matching second features between adjacent viewpoints, a first viewpoint having a first field of view at least partially overlapping a second field of view of an adjacent second viewpoint;
determine, for the second image data of each viewpoint, a second mapping between the second image data of adjacent viewpoints using a Euclidean mapping algorithm;
generate, using the depth data, a 3D point cloud for the object;
generate, using a mesh reconstruction algorithm, a triangular mesh of the object from the 3D point cloud; and
generate a 3D model of the object by projecting, based at least in part on the 3D coordinates for the second features from the first mapping, the second image data onto the triangular mesh for each viewpoint of the plurality of viewpoints using the second mapping.
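As a rough illustration of the claim's viewpoint-alignment, point-cloud, and mesh-reconstruction steps, the sketch below uses Open3D's point-to-point ICP (which yields a rigid, i.e. Euclidean, transform) and Poisson surface reconstruction as stand-ins for the claimed Euclidean mapping and mesh reconstruction algorithms; the per-viewpoint point clouds are assumed to come from the preregistered depth data, and texturing with the second image data is omitted.

```python
# Rough sketch of the claim's later steps, assuming Open3D, with ICP and Poisson
# surface reconstruction as stand-ins for the claimed Euclidean mapping and mesh
# reconstruction algorithms (illustrative only, not the patented method).
import open3d as o3d
import numpy as np

def build_mesh_from_viewpoints(viewpoint_points):
    """viewpoint_points: list of Nx3 numpy arrays, one per viewpoint (hypothetical input)."""
    clouds = []
    for pts in viewpoint_points:
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(pts)
        clouds.append(pcd)

    # Align each viewpoint's cloud to the running reconstruction; adjacent
    # viewpoints overlap, so ICP can recover their relative rigid transform.
    merged = clouds[0]
    for pcd in clouds[1:]:
        reg = o3d.pipelines.registration.registration_icp(
            pcd, merged, max_correspondence_distance=0.02,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
        pcd.transform(reg.transformation)
        merged += pcd

    # Reconstruct a triangular mesh from the merged 3D point cloud.
    merged.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
    return mesh
```

In the claim, the final step would then project the higher-resolution second image data onto this mesh using the per-viewpoint mappings; that texturing step is not shown here.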
Address: Palo Alto, CA, US