Tracking People on a Torus
Abstract
We model shape deformations corresponding to both viewpoint and body-configuration changes through the motion. The observed shapes form a product space (different configurations × different views) and lie on a low-dimensional manifold in the visual input space. The approach we introduce here learns both the visual observation manifold and the kinematic manifold of the motion in a supervised manner. Instead of learning an embedding of the manifold, we learn the geometric deformation between an ideal manifold (a conceptually equivalent topological structure) and a twisted version of that manifold (the data). We use a torus manifold to represent such data for both periodic and non-periodic motions. Experimental results show accurate estimation of 3D body pose and view from a single camera.
Approach
- Simultaneously infer view and body pose using a torus manifold
- Represent spatio-temporal shape deformations, as view and body configuration change, on a two-dimensional torus manifold, with a nonlinear mapping from the embedding manifold to the visual input space
- Infer view and body pose from a given image by estimating the corresponding embedding point, since every view and body pose has a corresponding embedding point on the torus manifold
- Cast view-variant human motion tracking as tracking on a torus surface (spatio-temporal constraints)
- Two-dimensional torus manifold: a state space combining a one-dimensional body-configuration circle and a one-dimensional view circle
- Learn the manifold deformation from the ideal torus manifold to the actual visual manifold and to the kinematic manifold through two nonlinear mapping functions
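The torus parameterization and one of the nonlinear mappings can be sketched as follows. This is a minimal illustration, not the paper's implementation: the torus radii, the Gaussian RBF kernel, and its width are all assumed choices made here for concreteness.

```python
import numpy as np

# Illustrative torus radii (assumptions, not values from the paper).
R, r = 2.0, 1.0

def torus_point(mu, theta):
    """Map (body-configuration angle mu, view angle theta), both in
    [0, 2*pi), to a point on an ideal torus embedded in R^3."""
    x = (R + r * np.cos(mu)) * np.cos(theta)
    y = (R + r * np.cos(mu)) * np.sin(theta)
    z = r * np.sin(mu)
    return np.array([x, y, z])

def rbf_features(p, centers, sigma=0.8):
    """Gaussian radial basis functions centered on torus points."""
    d2 = np.sum((centers - p) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_rbf_map(angles, observations, centers, sigma=0.8):
    """Least-squares fit of RBF weights for a nonlinear mapping from the
    torus embedding to the visual input space (one of the two mappings)."""
    Phi = np.stack([rbf_features(torus_point(mu, th), centers, sigma)
                    for mu, th in angles])
    W, *_ = np.linalg.lstsq(Phi, observations, rcond=None)
    return W

def predict(mu, theta, W, centers, sigma=0.8):
    """Synthesize the visual observation for a given (pose, view) pair."""
    return rbf_features(torus_point(mu, theta), centers, sigma) @ W
```

The same recipe, with a second weight matrix, would give the mapping from the torus to the kinematic (3D body pose) space.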
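Inference on the torus can also be sketched: since every (pose, view) pair corresponds to an embedding point, estimating pose and view reduces to finding the torus point whose mapped image best matches the observation, and tracking adds a penalty on angular distance to the previous estimate. The forward mapping `f` below is a synthetic stand-in (the ideal torus embedding itself), the grid search replaces whatever optimizer the paper uses, and the smoothness weight is an assumption.

```python
import numpy as np

def f(mu, theta, R=2.0, r=1.0):
    """Stand-in for the learned torus-to-observation mapping: here just
    the ideal torus embedding in R^3 (illustrative radii)."""
    return np.array([(R + r * np.cos(mu)) * np.cos(theta),
                     (R + r * np.cos(mu)) * np.sin(theta),
                     r * np.sin(mu)])

def wrap(a):
    """Wrap an angular difference into (-pi, pi]; the torus has no
    boundary, so distances must respect the wrap-around."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def infer(obs, prev=None, smooth=0.5, n=90):
    """Grid search over the torus for the embedding point that best
    reconstructs obs; an optional penalty on wrapped angular distance to
    the previous estimate acts as the spatio-temporal constraint."""
    grid = np.linspace(0, 2 * np.pi, n, endpoint=False)
    best, best_cost = None, np.inf
    for mu in grid:
        for th in grid:
            cost = np.sum((f(mu, th) - obs) ** 2)
            if prev is not None:
                cost += smooth * (wrap(mu - prev[0]) ** 2
                                  + wrap(th - prev[1]) ** 2)
            if cost < best_cost:
                best, best_cost = (mu, th), cost
    return best
```

Tracking a sequence is then just calling `infer` frame by frame, feeding each estimate in as `prev` for the next frame.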
Related Publications
- Inferring view and body pose:
- Tracking people in view variations from a single camera