Title of Invention  METHOD FOR IMAGE AND VIDEO VIRTUAL HAIRSTYLE MODELING
Abstract  The invention discloses a method for image and video virtual hairstyle modeling, comprising: acquiring data of a target subject with a digital device and segmenting the image to obtain a hairstyle region; resolving the orientation ambiguity of the image hairstyle orientation field and thereby obtaining a uniformly distributed static hairstyle model that conforms to the original hairstyle region; and calculating the motion of the hairstyle in a video by tracking the motion of a head model and estimating non-rigid deformation, generating a dynamic hairstyle model at every moment of the motion so that the dynamic hairstyle model naturally fits the real motion of the hairstyle in the video. The method performs physically plausible virtual 3D reconstruction of individual hairstyles from single-view images and video sequences, and can be widely applied to creating virtual characters and to many hairstyle editing applications for images and videos.
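As a rough illustration of the image hairstyle orientation field mentioned in the abstract, the sketch below estimates a per-pixel undirected hair orientation with a bank of oriented (Gabor) filters. The use of OpenCV, the filter parameters, and the function name are illustrative assumptions, not the patent's exact formulation; the resulting angles are only defined up to 180 degrees, which is precisely the ambiguity the disclosed method resolves before combining the field with a spatial hair volume region.

```python
import numpy as np
import cv2

def orientation_field(gray, num_angles=32):
    """Estimate a per-pixel undirected hair orientation in [0, pi)
    by keeping, for each pixel, the angle of the oriented (Gabor)
    filter with the strongest response.  Filter parameters are
    illustrative placeholders, not values taken from the patent."""
    img = gray.astype(np.float32)
    best_resp = np.full(img.shape, -np.inf, dtype=np.float32)
    best_theta = np.zeros(img.shape, dtype=np.float32)
    for i in range(num_angles):
        theta = np.pi * i / num_angles
        kern = cv2.getGaborKernel((17, 17), sigma=2.0, theta=theta,
                                  lambd=4.0, gamma=0.5, psi=0.0)
        resp = np.abs(cv2.filter2D(img, cv2.CV_32F, kern))
        mask = resp > best_resp
        best_theta[mask] = theta
        best_resp[mask] = resp[mask]
    # best_theta is ambiguous: theta and theta + pi describe the same
    # undirected line; the patented method resolves this ambiguity
    # before lifting the field to 3D.
    return best_theta, best_resp
```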
Publication Number  US2015054825(A1)    Publication Date  2015.02.26
Application Number  US201414536571    Filing Date  2014.11.07
Applicant  ZHEJIANG UNIVERSITY    Inventors  WENG YANLIN; CHAI MENGLEI; WANG LVDI; ZHOU KUN
IPC Classification  G06T17/00; G06T7/00    Main Classification  G06T17/00
Agency    Agent
Independent Claim  1. A method for image and video virtual hairstyle modeling, comprising the following steps:
(1) data acquisition and preprocessing of a hairstyle image: performing data acquisition on a target subject with a digital device, wherein the hairstyle part is required to be clear and complete, and segmenting the image with a paint-select tool to obtain a hairstyle region;
(2) image-based calculation of a hairstyle orientation field: resolving the orientation ambiguity of the image hairstyle orientation field, and solving a spatial hairstyle orientation field by combining the unambiguous image hairstyle orientation field with a spatial hair volume region;
(3) iterative construction of a static hairstyle model: taking a scalp region defined on a fitted individual head model as the hair root locations, growing strands from hair root sampling points through the spatial hairstyle orientation field to obtain an initial hairstyle model, and refining the initial result iteratively to obtain a uniformly distributed static hairstyle model that conforms to the original hairstyle region;
(4) video-based dynamic hairstyle modeling: based on the static hairstyle model obtained in step (3), calculating the movement of the hairstyle in a video by tracking the movement of a head model and estimating non-rigid deformation, and generating a dynamic hairstyle model at every moment of the movement, so that the dynamic hairstyle model fits naturally with the real movement of the hairstyle in the video;
(5) exporting of the hairstyle modeling result: exporting and storing the modeling results of the aforementioned steps, wherein the modeling results comprise the static hairstyle model obtained in step (3) and the dynamic hairstyle model obtained in step (4).
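As a minimal sketch of the strand-growing part of step (3) above, the code below traces a single strand from a scalp root sample through a 3D orientation field with fixed-step integration. The callable `orient_field`, the step size, and the sign-consistency check are hypothetical stand-ins; the patent's iterative refinement that makes the result uniformly distributed and conforming to the original hairstyle region is not reproduced here.

```python
import numpy as np

def grow_strand(root, orient_field, step=0.5, max_vertices=100):
    """Grow one hair strand from a scalp root sample by stepping
    through a 3D orientation field (forward Euler integration).
    `orient_field(p)` is a hypothetical callable returning a growth
    direction at point p, or None outside the hair volume region."""
    strand = [np.asarray(root, dtype=np.float64)]
    prev_dir = None
    for _ in range(max_vertices - 1):
        d = orient_field(strand[-1])
        if d is None:                       # left the hair volume
            break
        d = np.asarray(d, dtype=np.float64)
        d /= np.linalg.norm(d)
        # keep growth consistent with the previous step, since the raw
        # orientation field is only defined up to sign
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d
        strand.append(strand[-1] + step * d)
        prev_dir = d
    return np.stack(strand)
```

Repeating this growth for every root sampling point on the fitted scalp region yields the initial hairstyle model that the claimed method then refines iteratively.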
Address  HANGZHOU, CN