Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic. Recent volume-rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality. However, they suffer from long optimization times and slow inference speed, and their implicit nature entangles the geometry, materials, and dynamics of humans. Such drawbacks prevent their direct applicability to downstream applications, especially the prominent rasterization-based graphics pipelines. We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars. It jointly optimizes an explicit triangular canonical mesh, spatially varying materials, and motion dynamics via inverse rendering in an end-to-end fashion. Each of these components is derived from a separate neural field, relaxing the requirement for a template or rigging. The mesh representation is highly compatible with efficient rasterization-based renderers, so our method takes only about an hour of training and can render in real time; moreover, only minutes of optimization are enough for plausible reconstruction results. The disentanglement of meshes enables direct downstream applications. Extensive experiments illustrate very competitive performance and a significant speed boost over previous methods. We also showcase applications including novel pose synthesis, material editing, and relighting.

Traditional approaches directly optimize an explicit mesh representation, which suffers from overly smooth geometry and coarse textures. Besides, they require professional artists to design human templates, rigging, and unwrapped UV coordinates. Recently, with the help of volumetric-based implicit representations and neural rendering, another solution is to view the problem as inverse rendering and learn digital humans directly from custom-collected data.
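The joint optimization described above can be sketched, in highly simplified form, as a gradient-descent loop through a differentiable renderer. The snippet below is a hypothetical NumPy toy, not the paper's implementation: a fixed linear map stands in for the differentiable rasterizer, and two plain parameter vectors (`offsets`, `albedo`) stand in for the geometry and material neural fields.

```python
import numpy as np

# Toy sketch of inverse rendering (NOT the paper's code): jointly optimize
# "geometry" (vertex offsets) and "material" (per-vertex albedo) so that a
# linear stand-in for a differentiable rasterizer reproduces a target image.
rng = np.random.default_rng(0)

n_pix, n_verts = 64, 8
B_geo = rng.normal(size=(n_pix, n_verts)) / np.sqrt(n_pix)  # pixel response to geometry
B_mat = rng.normal(size=(n_pix, n_verts)) / np.sqrt(n_pix)  # pixel response to material

def render(offsets, albedo):
    """Toy differentiable 'renderer': a linear map from parameters to pixels."""
    return B_geo @ offsets + B_mat @ albedo

# Ground-truth scene that generated the observed target image.
target = render(rng.normal(size=n_verts), rng.uniform(0.5, 1.5, size=n_verts))

# Randomly initialized parameters, optimized end to end by gradient descent
# on the photometric loss 0.5 * ||render(params) - target||^2.
offsets = np.zeros(n_verts)
albedo = np.ones(n_verts)
lr = 0.3
for _ in range(2000):
    residual = render(offsets, albedo) - target
    offsets -= lr * (B_geo.T @ residual)  # analytic gradient w.r.t. offsets
    albedo -= lr * (B_mat.T @ residual)   # analytic gradient w.r.t. albedo

final_loss = 0.5 * np.sum((render(offsets, albedo) - target) ** 2)
print(final_loss)
```

In the actual method the parameters would come from neural fields and the renderer would be a differentiable rasterizer, but the optimization pattern is the same: backpropagate a photometric loss through rendering to all components jointly.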