Modeling View and Posture Manifold for Tracking

    Abstract

    We consider modeling data lying on multiple continuous manifolds. In particular, we model the shape manifold of a person performing a motion, observed from different viewpoints along a view circle at a fixed camera height. We introduce a model that ties together the body configuration (kinematics) manifold and the visual (observation) manifold in a way that facilitates tracking the 3D configuration under continuous relative view variability. The model exploits the low-dimensional nature of both the body configuration manifold and the view manifold, each of which is represented separately.

    Contributions

  • To model the posture, the view, and the shape manifolds of an observed motion with three separate low-dimensional representations:
    • A view-invariant, shape-invariant configuration manifold;
    • A configuration-invariant, shape-invariant view manifold;
    • A configuration-invariant, view-invariant shape representation.
  • To model the view and posture manifolds in a general setting where the motion is not assumed to be one-dimensional.
    • We show results on complex motions such as ballet and dancing, as well as simple one-dimensional motions.
  • To link the configuration manifold, learned from 3D motion-capture data, with the visual manifold.
    • A distinguishing feature of our work is that we utilize both the input (visual) and output (kinematics) manifolds to constrain the problem. That is, we model the kinematics manifold and the observation manifold, tied together by a parameterized generative mapping function; a sketch of one possible form of such a mapping follows this list.
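    To make the separation of manifolds concrete, the sketch below shows one plausible form of such a parameterized generative mapping: the body configuration and the view are each represented as points on their own low-dimensional manifolds, embedded through radial-basis-function (RBF) kernel vectors, and combined through a coefficient tensor to produce an observation. The function names (rbf_features, generate_observation), the RBF kernel form, and the tensor contraction are illustrative assumptions, not necessarily the exact formulation used in the paper.

        import numpy as np

        def rbf_features(x, centers, width):
            """RBF embedding of a point on a low-dimensional manifold."""
            d2 = np.sum((centers - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * width ** 2))

        def generate_observation(C, x_body, x_view, body_centers, view_centers, width=0.5):
            """Map a (body configuration, view) pair to an observation vector.

            C is a third-order coefficient tensor of shape
            (obs_dim, n_view_centers, n_body_centers); the body and view
            manifolds stay separate and interact only through this mapping.
            """
            psi = rbf_features(x_view, view_centers, width)   # view kernel vector
            phi = rbf_features(x_body, body_centers, width)   # body-configuration kernel vector
            # Contract the coefficient tensor with both kernel vectors.
            return np.einsum('dvb,v,b->d', C, psi, phi)

        # Toy usage: a 1D view circle and a 1D posture cycle, each embedded in 2D.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        s = np.linspace(0, 2 * np.pi, 12, endpoint=False)
        view_centers = np.stack([np.cos(t), np.sin(t)], axis=1)
        body_centers = np.stack([np.cos(s), np.sin(s)], axis=1)
        C = rng.standard_normal((60 * 40, len(view_centers), len(body_centers)))  # e.g. a 60x40 silhouette
        y = generate_observation(C, x_body=np.array([1.0, 0.0]),
                                 x_view=np.array([0.0, 1.0]),
                                 body_centers=body_centers, view_centers=view_centers)
        print(y.shape)  # (2400,)

    Keeping the view and body configuration factors in separate modes of the coefficient tensor is what lets a tracker vary one factor while holding the others fixed, which is the property the contributions above rely on.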