Wang, Z., Liu, S., Qian, R., Jiang, T., Yang, X. and Zhang, J. J., 2017. Human motion data refinement utilizing structural sparsity and spatial-temporal information. In: IEEE 13th International Conference on Signal Processing (ICSP), 6-10 November 2016, Chengdu, China, 975-982.
Full text available as:
PDF: ICSP_N0274_revised_2.pdf - Accepted Version (445kB). Available under License Creative Commons Attribution Non-commercial No Derivatives.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1109/ICSP.2016.7877975
Abstract
Human motion capture (MOCAP) techniques are widely applied in many areas such as computer vision, computer animation, digital effects and virtual reality. Even with a professional MOCAP system, the acquired motion data still contain noise and outliers, which makes motion refinement methods essential. In recent years, many approaches for motion refinement have been developed, including signal-processing-based methods, sparse-coding-based methods and low-rank matrix-completion-based methods. However, motion refinement remains a challenging task due to the complexity and diversity of human motion. In this paper, we propose a data-driven human motion refinement approach that exploits the structural sparsity and spatio-temporal information embedded in motion data. First, a partial body model is used in place of the whole-pose model to obtain a better feature representation that exploits the rich local body posture. Then, a dictionary learning scheme tailored to the motion refinement task is designed and applied in parallel. Meanwhile, the objective function is derived by taking the statistical and locality properties of motion data into account. Compared with several state-of-the-art motion refinement methods, experimental results demonstrate that our approach outperforms the competitors.
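The abstract describes refinement built on dictionary learning over sparse representations of motion data. The snippet below is a minimal sketch of that general idea only, assuming clean training motions and a noisy test motion stored as NumPy arrays of shape (frames, joint coordinates); it uses scikit-learn's generic `MiniBatchDictionaryLearning` and `SparseCoder`, and omits the paper's partial body model and spatio-temporal/locality terms. The helpers `build_windows` and `refine` are hypothetical names introduced here, not the authors' implementation.

```python
# Sketch: dictionary-learning-based motion refinement (generic, not the paper's exact objective).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder


def build_windows(motion, window=5):
    """Stack short temporal windows of consecutive poses into feature vectors."""
    frames, dims = motion.shape
    return np.stack([motion[t:t + window].ravel()
                     for t in range(frames - window + 1)])


def refine(noisy_motion, clean_training_motions, window=5, n_atoms=64, alpha=1.0):
    # Learn an over-complete dictionary from windows of clean training motions.
    train = np.vstack([build_windows(m, window) for m in clean_training_motions])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=256, random_state=0).fit(train)

    # Sparse-code the noisy windows and reconstruct them from the learned atoms.
    noisy = build_windows(noisy_motion, window)
    coder = SparseCoder(dictionary=dico.components_,
                        transform_algorithm='lasso_lars', transform_alpha=alpha)
    recon = coder.transform(noisy) @ dico.components_

    # Average the overlapping reconstructed windows back into a motion sequence.
    frames, dims = noisy_motion.shape
    refined = np.zeros_like(noisy_motion, dtype=float)
    counts = np.zeros(frames)
    for i, block in enumerate(recon):
        refined[i:i + window] += block.reshape(window, dims)
        counts[i:i + window] += 1
    return refined / counts[:, None]
```

In the paper, the pose vector is additionally split per body part (the partial model), with dictionaries learned in parallel and statistical/locality regularizers added to the objective; the sketch above keeps only the shared sparse-reconstruction step.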
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Uncontrolled Keywords: | Motion Capture Data; Motion Refinement |
| Group: | Faculty of Media & Communication |
| ID Code: | 29673 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 06 Sep 2017 11:51 |
| Last Modified: | 14 Mar 2022 14:06 |