
Mining Spatial-Temporal Patterns and Structural Sparsity for Human Motion Data Denoising.

Feng, Y., Ji, M., Xiao, J., Yang, X., Zhang, J. J., Zhuang, Y. and Liu, X., 2014. Mining Spatial-Temporal Patterns and Structural Sparsity for Human Motion Data Denoising. IEEE Transactions on Cybernetics, 45 (12), 2693-2706.

Full text available as:

PDF (open access article): Mining spatial.pdf - Published Version, 2MB.
Available under a Creative Commons Attribution license.

DOI: 10.1109/TCYB.2014.2381659

Abstract

Motion capture is an important technique with a wide range of applications in areas such as computer vision, computer animation, film production, and medical rehabilitation. Even with professional motion capture systems, the acquired raw data inevitably contain noise and outliers. Numerous methods have been developed to denoise such data, yet the problem remains challenging because of the high complexity of human motion and the diversity of real-life situations. In this paper, we propose a data-driven, robust human motion denoising approach that mines the spatial-temporal patterns and the structural sparsity embedded in motion data. We first replace the commonly used whole-pose model with a much finer-grained partlet model as the feature representation, to exploit the abundant similarities in local body part posture and movement. Then, a robust dictionary learning algorithm is proposed to learn multiple compact and representative motion dictionaries from the training data in parallel. Finally, we reformulate human motion denoising as a robust structured sparse coding problem in which both the noise distribution and the temporal smoothness of human motion are jointly taken into account. Compared with several state-of-the-art motion denoising methods on both synthetic and real noisy motion data, our method consistently yields better performance than its counterparts, and its outputs are much more stable. In addition, setting up the training dataset is much easier for our method than for other data-driven methods.
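The abstract describes a pipeline in which motion dictionaries are learned and noisy data are then reconstructed by sparse coding over those dictionaries. As a rough, generic illustration of the sparse-coding stage only (not the authors' partlet-based method or their robust structured formulation), the sketch below denoises a synthetic signal by solving an l1-regularized reconstruction with ISTA over a random dictionary; all names, dimensions, and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned motion dictionary: random
# Gaussian atoms normalized to unit length.
n_features, n_atoms = 20, 50
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# A "clean" signal that is exactly sparse in D, plus Gaussian noise.
true_code = np.zeros(n_atoms)
true_code[[3, 17, 42]] = [1.5, -2.0, 1.0]
clean = D @ true_code
noisy = clean + 0.05 * rng.standard_normal(n_features)

def ista(D, x, lam=0.05, n_iter=200):
    """Solve min_c 0.5*||x - D c||^2 + lam*||c||_1 by iterative
    shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)          # gradient of the data term
        c = c - grad / L                  # gradient step
        # Soft-thresholding: the proximal operator of the l1 penalty.
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)
    return c

code = ista(D, noisy)
denoised = D @ code  # reconstruct the signal from its sparse code
```

In the paper's setting, this per-frame coding would additionally be coupled across time (temporal smoothness) and across body parts (structural sparsity), which this minimal lasso sketch omits.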

Item Type: Article
ISSN: 2168-2267
Group: Faculty of Science & Technology
ID Code: 23043
Deposited By: Symplectic RT2
Deposited On: 05 Jan 2016 16:00
Last Modified: 14 Mar 2022 13:54
