Given (a) a single portrait image and a few user strokes, we generate (b) a high-quality 3D hair model whose visual fidelity and physical plausibility enable several dynamic hair manipulation applications, such as (c) physically-based simulation, (d) combing, or (e,f) motion-preserving hair replacement in video. Original images courtesy of Asian Impressions Photography.
This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By explicitly solving for an unambiguous 3D vector field from the image and adopting an iterative hair generation algorithm, we create hair models that not only closely match the visual appearance of the original input but also possess physical plausibility (e.g., strand roots are fixed on the scalp, and the length and continuity of real strands in the image are preserved as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult to achieve from a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
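To make the core idea concrete, the sketch below shows one common way strands can be grown through a 3D vector field: starting from scalp roots and integrating along the field, so strands are rooted and continuous by construction. This is a minimal illustration, not the authors' algorithm; the analytic `orientation_field` stands in for the field the method recovers from the image, and all names here are hypothetical.

```python
import numpy as np

def orientation_field(p):
    # Hypothetical smooth 3D direction field; in the actual method the
    # field would be solved from image orientation measurements.
    d = np.array([np.sin(p[2]), 0.2, 1.0])
    return d / np.linalg.norm(d)

def trace_strand(root, step=0.05, n_steps=100):
    """Grow one strand from a scalp root by forward-Euler integration
    of the direction field, yielding a rooted, continuous polyline."""
    pts = [np.asarray(root, dtype=float)]
    for _ in range(n_steps):
        pts.append(pts[-1] + step * orientation_field(pts[-1]))
    return np.array(pts)

strand = trace_strand([0.0, 0.0, 0.0])
print(strand.shape)  # (101, 3)
```

Because the field is unit-length and integration starts at the root, every generated strand keeps a fixed root and uniform arc-length sampling, which is the kind of physical plausibility the abstract refers to.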
hair modeling, image manipulation, video editing