
Automatic Segmentation and Recognition of Human Actions in Monocular Sequences

Journal
Proceedings - International Conference on Pattern Recognition
ISSN
1051-4651
Date Issued
2014
Author(s)
Velastin-Carroza, S  
Abstract
This paper addresses the problem of silhouette-based human action segmentation and recognition in monocular sequences. Motion History Images (MHIs), used as 2D templates, capture motion information by encoding where and when motion occurred in the images. Inspired by codebook approaches for object and scene categorization, we first construct a codebook of temporal motion templates by clustering all the MHIs of each particular action. These MHIs capture different actors, speeds and a wide range of camera viewpoints. In this paper, we use a Kohonen Self-Organizing Map (SOM) to simultaneously cluster the MHI templates and represent them in lower-dimensional subspaces. To cope with temporal segmentation, and concurrently carry out action recognition, a new architecture is proposed where the observation MHIs are projected onto all these action-specific manifolds and the Euclidean distance between each MHI and the nearest cluster within each action-manifold constitutes the observation vector of a Markov Model. To estimate the state/action at each time step, we introduce a new method based on Observable Markov Models (OMMs) where the Markov model is augmented with a neutral state. The combination of our action-specific manifolds with the augmented OMM allows us to automatically segment and recognize long sequences of consecutive actions, without any prior knowledge about the initial and ending frames of each action. Importantly, our method allows interpolation between training viewpoints and recognizes actions independently of the camera viewpoint, even from unseen viewpoints. © 2014 IEEE.
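The Motion History Image the abstract builds on has a simple recursive definition: pixels where motion is detected in the current frame are set to a maximum duration τ, while all other pixels decay by one step toward zero, so recent motion appears bright and older motion fades. The following is a minimal NumPy sketch of that update rule, not the paper's implementation; the motion mask is assumed to come from an earlier silhouette/frame-differencing stage, and `update_mhi` and `tau` are illustrative names.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau):
    """One MHI update step: set moving pixels to tau,
    decay all other pixels by 1 (clamped at 0)."""
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy example: motion sweeps across a 4x4 frame, one column per step.
tau = 3
mhi = np.zeros((4, 4), dtype=np.int32)
for col in range(4):
    mask = np.zeros((4, 4), dtype=bool)
    mask[:, col] = True          # motion detected in this column only
    mhi = update_mhi(mhi, mask, tau)

print(mhi[0])  # most recent motion is brightest: [0 1 2 3]
```

Stacking or flattening such MHIs per action yields the 2D templates that are then clustered with the SOM into action-specific manifolds.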

Universidad de Santiago de Chile
Avenida Libertador Bernardo O'Higgins nº 3363, Estación Central, Santiago, Chile.
ciencia.abierta@usach.cl © 2023
The DSpace-CRIS Project - Modified by VRIIC USACH.
