Model Transduction with Mean-value Shape Representation
2008
This paper proposes a novel method, called model transduction, to directly transfer pose between different meshes without building skeleton configurations for the meshes. Unlike previous retargeting methods such as deformation transfer, model transduction does not require a reference source mesh to obtain the source deformation, and thus avoids unsatisfactory results when the source and target have different reference poses. Model transduction is based on two components: model deformation and model correspondence. Specifically, based on the mean-value manifold operator, our mesh deformation method produces visually pleasing deformation results under large-angle rotations or large-scale translations of handles. We also propose a novel scheme for shape-preserving correspondence between manifold meshes. With these two components, we present the model transduction technique for directly transferring pose between different mesh models. Moreover, we show that model transduction can also be used for pose correction after various mesh editing operations. Our method fits into a unified framework in which the same type of operator is applied in all phases. The resulting quadratic formulation can be minimized efficiently by solving a sparse linear system. Experimental results show that model transduction successfully transfers both complex skeletal structures and subtle skin deformations.
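The mean-value representation referenced above builds on mean-value coordinates, which express a point as an affine combination of surrounding vertices. A minimal sketch in Python, assuming the classic 2D polygonal setting (the paper's manifold operator generalizes this to meshes; the function name and example polygon are illustrative, not from the paper):

```python
import numpy as np

def mean_value_weights(v, poly):
    """Mean-value coordinates of point v with respect to polygon vertices.

    poly: (n, 2) array of vertices in order; v assumed strictly inside.
    Returns normalized weights that sum to 1 and reproduce v linearly.
    """
    poly = np.asarray(poly, dtype=float)
    d = poly - v                    # vectors from v to each vertex
    r = np.linalg.norm(d, axis=1)   # distances r_i
    n = len(poly)
    # alpha[i]: angle spanned at v by the edge (p_i, p_{i+1})
    alpha = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.dot(d[i], d[j]) / (r[i] * r[j])
        alpha[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / r_i
    w = np.empty(n)
    for i in range(n):
        w[i] = (np.tan(alpha[i - 1] / 2) + np.tan(alpha[i] / 2)) / r[i]
    return w / w.sum()

# Linear reproduction check on a unit square: the weighted sum of the
# vertices recovers the query point.
square = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], dtype=float)
lam = mean_value_weights(np.array([0.3, 0.4]), square)
```

Because each vertex position enters the weights linearly, stacking such constraints over a whole mesh yields the kind of sparse quadratic system the abstract mentions, solvable with a standard sparse linear solver.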
Keywords:
- Correction
- Source