Title Sensor fusion for depth estimation
Abstract To generate a pixel-accurate depth map, data from a range-estimation sensor (e.g., a time-of-flight sensor) is combined with data from multiple cameras to produce a high-quality depth measurement for pixels in an image. To do so, a depth measurement system may use a plurality of cameras mounted on a support structure to perform a depth hypothesis technique to generate a first depth-support value. Furthermore, the apparatus may include a range-estimation sensor which generates a second depth-support value. In addition, the system may project a 3D point onto the auxiliary cameras and compare the color of the associated pixel in each auxiliary camera with the color of the pixel in the reference camera to generate a third depth-support value. The system may combine these support values for each pixel in an image to determine respective depth values. Using these values, the system may generate a depth map for the image.
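The projection-and-color-comparison step described in the abstract can be illustrated with a minimal sketch. The function names, the pinhole camera model, and the Gaussian color-similarity score below are illustrative assumptions, not the patent's specified implementation:

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into a camera with intrinsic
    matrix K and pose (R, t); returns (u, v) pixel coordinates."""
    x = K @ (R @ X + t)          # homogeneous image coordinates
    return x[:2] / x[2]          # perspective divide

def color_support(ref_color, aux_color, sigma=10.0):
    """Support value in (0, 1]: close to 1 when the reference pixel's
    color agrees with the color at the reprojected location in the
    auxiliary image (Gaussian of the L2 color distance)."""
    d = np.linalg.norm(np.asarray(ref_color, float) - np.asarray(aux_color, float))
    return float(np.exp(-(d ** 2) / (2 * sigma ** 2)))
```

For a hypothesized depth, the pixel in the reference image is back-projected to a 3D point, `project` maps that point into an auxiliary camera, and `color_support` scores how well the two colors agree; a high score supports that depth hypothesis.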
Publication No. US9424650(B2) Publication Date 2016.08.23
Application No. US201313915899 Filing Date 2013.06.12
Applicant Disney Enterprises, Inc. Inventors van Baar Jeroen;Beardsley Paul A.;Pollefeys Marc;Gross Markus
IPC Classes G06T15/10;G06T7/00;G01S17/02;G01S17/89 Primary Class G06T15/10
Agency Patterson & Sheridan LLP Agent Patterson & Sheridan LLP
Claim 1. A method for calculating a depth value for a pixel in a reference image, the method comprising:
receiving the reference image captured by a reference camera and at least one auxiliary image captured by an auxiliary camera;
generating a first support value indicating whether the pixel in the reference image is at a particular depth, relative to the reference camera, based on comparing a region of the auxiliary image captured by the auxiliary camera with a region of the reference image captured by the reference camera;
providing a depth estimate of the pixel from a range-estimation camera;
generating a second support value indicating whether the pixel in the reference image is at the particular depth based on comparing the depth estimate from the range-estimation camera to the particular depth;
generating a third support value indicating whether the pixel is at the particular depth based on projecting a 3D point, corresponding to the pixel in the reference image, onto the auxiliary image; and
fusing, by operation of one or more computer processors, the first, second, and third support values to generate a total support value for the pixel at the particular depth.
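The fusion step in claim 1 can be sketched for a single pixel as follows. The weighted sum, the Gaussian-shaped support curves, and the argmax selection over depth hypotheses are assumptions for illustration; the claim itself only requires that the three support values be fused into a total support value:

```python
import numpy as np

def fuse_supports(stereo, range_sensor, color, weights=(1.0, 1.0, 1.0)):
    """Combine three per-hypothesis support arrays into a total support
    and return it along with the index of the best depth hypothesis."""
    w1, w2, w3 = weights
    total = (w1 * np.asarray(stereo)
             + w2 * np.asarray(range_sensor)
             + w3 * np.asarray(color))
    return total, int(np.argmax(total))

# Candidate depth hypotheses for one pixel, and synthetic support
# values from the three cues (each peaking near its own estimate).
depths = np.linspace(0.5, 5.0, 10)
s1 = np.exp(-(depths - 2.0) ** 2)   # stereo (depth-hypothesis) support
s2 = np.exp(-(depths - 2.2) ** 2)   # range-estimation sensor support
s3 = np.exp(-(depths - 1.9) ** 2)   # color-reprojection support
total, best = fuse_supports(s1, s2, s3)
best_depth = depths[best]           # → 2.0 for this synthetic data
```

Repeating this per pixel yields the depth map described in the abstract; the relative weights would let a system trust the time-of-flight cue more in textureless regions where stereo matching is ambiguous.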
Address Burbank CA US