Generalized Separation of Style and Content on Motion Manifolds

    Abstract

    The separation of style and content is an essential task in visual perception and a fundamental mystery of perception. In this paper we address the separation of style and content when the content lies on a low-dimensional nonlinear manifold representing a dynamic object. We show that this setting arises in many human motion analysis problems, and we introduce a framework for learning parameterizations of style and content in such settings. The framework decomposes the style parameters in the space of nonlinear functions that map between a learned unified embedding of multiple content manifolds and the visual input space. We demonstrate the application of the framework to the synthesis, recognition, and tracking of human motions that follow this setting, such as gait and facial expressions.

    Approach

  • Style and content decomposition in dynamic human motion
    • Representation of the intrinsic body configuration on low-dimensional nonlinear manifolds
    • Decomposition of style and content in the nonlinear mapping space using a bilinear model
  • Generalized factorization of style and content on conceptual manifolds
    • A conceptual manifold, homeomorphic to the original manifold, is used as a unified representation of the dynamics of the body configuration
    • Decomposition in the nonlinear mapping space using a multilinear model (see the sketch after this list)
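To make the decomposition in the mapping space concrete, below is a minimal sketch of the idea: fit one nonlinear mapping from a conceptual embedding to the observations for each (style, content) pair, stack the mapping coefficients into a tensor, and factorize that tensor with a higher-order SVD. The specifics here are assumptions for illustration only and not taken from the paper: a unit-circle conceptual embedding, Gaussian RBF mappings, synthetic random data in place of real sequences, and HOSVD computed from mode unfoldings.

```python
# Illustrative sketch (assumed details): unit-circle conceptual embedding,
# Gaussian RBF mapping, synthetic data, HOSVD-style multilinear factorization.
import numpy as np

def rbf_features(theta, centers, width):
    """Gaussian RBF features of phase angles embedded on the unit circle."""
    x = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # (T, 2)
    c = np.stack([np.cos(centers), np.sin(centers)], axis=1)  # (K, 2)
    d2 = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1)       # (T, K)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_mapping(theta, Y, centers, width, reg=1e-6):
    """Regularized least-squares coefficients of the map psi(theta) -> Y."""
    Psi = rbf_features(theta, centers, width)                 # (T, K)
    A = Psi.T @ Psi + reg * np.eye(Psi.shape[1])
    return np.linalg.solve(A, Psi.T @ Y)                      # (K, d)

def unfold(X, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

# Synthetic example: S styles x E contents, d-dim frames, T samples per cycle.
rng = np.random.default_rng(0)
S, E, K, d, T = 3, 2, 8, 30, 64
theta = np.linspace(0, 2 * np.pi, T, endpoint=False)
centers = np.linspace(0, 2 * np.pi, K, endpoint=False)

# One mapping-coefficient matrix per (style, content) pair, stacked as a tensor.
C = np.empty((S, E, K * d))
for s in range(S):
    for e in range(E):
        Y = rng.normal(size=(T, d))   # stand-in for an observed sequence
        C[s, e] = fit_mapping(theta, Y, centers, width=0.5).ravel()

# Higher-order SVD: orthogonal style and content factors from mode unfoldings.
U_style = np.linalg.svd(unfold(C, 0), full_matrices=False)[0]    # (S, S)
U_content = np.linalg.svd(unfold(C, 1), full_matrices=False)[0]  # (E, E)
core = np.einsum('sek,si,ej->ijk', C, U_style, U_content)        # core tensor

# Rows of U_style / U_content act as style and content parameter vectors.
print(U_style.shape, U_content.shape, core.shape)
```

In this sketch the bilinear case corresponds to a single content mode (E = 1 or a two-factor tensor), while adding further modes to the coefficient tensor gives the generalized multilinear factorization described above.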