Du, Y., Wang, Z., Li, Y., Yang, X., Wu, C. and Wang, Z., 2024. Forecasting Distillation: Enhancing 3D Human Motion Prediction with Guidance Regularization. In: 2024 International Joint Conference on Neural Networks (IJCNN). Piscataway, NJ: IEEE.
Full text available as:
PDF: IJCNN_2024 (1).pdf - Accepted Version, 2MB. Available under License Creative Commons Attribution Non-commercial.
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1109/IJCNN60899.2024.10650336
Abstract
Human motion prediction aims to forecast future body poses from historically observed sequences, which is challenging due to the complex dynamics of human motion. Existing methods mainly focus on dedicated network structures that model spatial and temporal dependencies, and the current training pipeline requires predictions to match the training samples strictly under an L2 loss. Notably, most approaches predict the next frame conditioned on the previously predicted sequence, so a small error in the initial frames can accumulate significantly. In addition, recent work has indicated that different prediction stages can play different roles. This paper therefore explores a new direction by introducing a model learning framework with motion guidance regularization to reduce uncertainty. The guidance information is extracted by a designed Fusion Feature Extraction network (FE-Net), while knowledge distillation is conducted through intermediate supervision to improve the multi-stage prediction network during training. Incorporated with baseline models, our guidance design exhibits clear performance gains in terms of 3D mean per joint position error (MPJPE) on the Human3.6M, CMU Mocap, and 3DPW benchmark datasets. Related code will be available at https://github.com/tempAnonymous2024/MotionPredict-GuidanceReg.
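The evaluation metric named in the abstract, 3D mean per joint position error (MPJPE), is a standard measure in this area: the Euclidean distance between each predicted and ground-truth 3D joint position, averaged over joints and frames. A minimal sketch of it (assuming NumPy arrays of shape `(frames, joints, 3)`; the function name and shapes are illustrative, not taken from the paper's code):

```python
import numpy as np

def mpjpe(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean per joint position error in 3D.

    pred, target: arrays of shape (frames, joints, 3),
    holding predicted and ground-truth joint coordinates.
    Returns the per-joint Euclidean error averaged over
    all joints and all frames.
    """
    # Euclidean distance per joint, then average everything.
    per_joint_error = np.linalg.norm(pred - target, axis=-1)
    return float(per_joint_error.mean())
```

For example, if every predicted joint is off by the vector (1, 1, 1), the error for each joint is sqrt(3), and so is the mean.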
Item Type: Book Section
ISBN: 9788350359312
Additional Information: 30 June - 05 July 2024, Yokohama, Japan.
Uncontrolled Keywords: Training; Solid modeling; Three-dimensional displays; Uncertainty; Neural networks; Pipelines; Benchmark testing
Group: Faculty of Media & Communication
ID Code: 40539
Deposited By: Symplectic RT2
Deposited On: 20 Nov 2024 16:51
Last Modified: 20 Nov 2024 16:51