Automatic Segmentation and Recognition of Human Actions in Monocular Sequences
Journal
Proceedings - International Conference on Pattern Recognition
ISSN
1051-4651
Date Issued
2014
Author(s)
Abstract
This paper addresses the problem of silhouette-based human action segmentation and recognition in monocular sequences. Motion History Images (MHIs), used as 2D templates, capture motion information by encoding where and when motion occurred in the images. Inspired by codebook approaches for object and scene categorization, we first construct a codebook of temporal motion templates by clustering all the MHIs of each particular action. These MHIs capture different actors, speeds and a wide range of camera viewpoints. In this paper, we use a Kohonen Self-Organizing Map (SOM) to simultaneously cluster the MHI templates and represent them in lower-dimensional subspaces. To cope with temporal segmentation, and concurrently carry out action recognition, a new architecture is proposed where the observation MHIs are projected onto all these action-specific manifolds and the Euclidean distance between each MHI and the nearest cluster within each action-manifold constitutes the observation vector of a Markov Model. To estimate the state/action at each time step, we introduce a new method based on Observable Markov Models (OMMs) where the Markov model is augmented with a neutral state. The combination of our action-specific manifolds with the augmented OMM allows us to automatically segment and recognize long sequences of consecutive actions, without any prior knowledge about the initial and ending frames of each action. Importantly, our method allows interpolation between training viewpoints and recognizes actions independently of the camera viewpoint, even from unseen viewpoints. © 2014 IEEE.
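The Motion History Image underlying the templates above follows a standard recursive definition: pixels where motion is detected in the current frame are set to a maximum duration value τ, while all other pixels decay linearly toward zero, so recent motion appears brighter than older motion. A minimal sketch of this update rule (the function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau):
    """One MHI update step.

    mhi         -- float array holding the current motion history
    motion_mask -- boolean array, True where motion occurred this frame
    tau         -- maximum duration value assigned to fresh motion
    """
    # Decay every pixel by one time step, clamped at zero,
    # then overwrite moving pixels with the maximum value tau.
    decayed = np.maximum(mhi - 1.0, 0.0)
    return np.where(motion_mask, float(tau), decayed)

# Toy example: motion in the top-left pixel, then a static frame.
mhi = np.zeros((2, 2))
mhi = update_mhi(mhi, np.array([[True, False], [False, False]]), tau=3)
mhi = update_mhi(mhi, np.zeros((2, 2), dtype=bool), tau=3)
# The once-moving pixel has decayed from 3 to 2; the rest stay 0.
```

In the paper's pipeline, such MHIs would then be clustered per action by the SOM, and the Euclidean distance from a new MHI to the nearest cluster in each action-manifold forms the OMM observation vector.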
