Wang, M., Guo, S., Liao, M., He, D., Chang, J. and Zhang, J. J., 2019. Action snapshot with single pose and viewpoint. The Visual Computer, 35 (4), 507-520.
Full text available as:

PDF
paper.pdf - Accepted Version. Available under License: Creative Commons Attribution Non-commercial No Derivatives. 12MB
Copyright to original material in this document is with the original owner(s). Access to this content through BURO is granted on condition that you use it only for research, scholarly or other non-commercial purposes. If you wish to use it for any other purposes, you must contact BU via BURO@bournemouth.ac.uk. Any third party copyright material in this document remains the property of its respective owner(s). BU grants no licence for further use of that third party material.
DOI: 10.1007/s00371-018-1479-9
Abstract
Many art forms present visual content as a single image captured from a particular viewpoint. Selecting a meaningful representative moment from an action performance is difficult, even for an experienced artist, yet a well-chosen image can often tell a story on its own. This matters for a range of narrative scenarios, such as journalists reporting breaking news, scholars presenting their research, or artists crafting artworks. We address the underlying structures and mechanisms of pictorial narrative with a new concept, the action snapshot, which automates the generation of a meaningful snapshot (a single still image) from an input sequence of scenes. The input dynamic scenes may include several interacting, fully animated characters. We propose a novel method based on information theory to quantitatively evaluate the information contained in a pose. Taking the selected top postures as input, a convolutional neural network is constructed and trained with deep reinforcement learning to select a single viewpoint that maximally conveys the information of the sequence. User studies were conducted to compare the computer-selected poses and viewpoints with those chosen by human participants. The results show that the proposed method can effectively assist the selection of the most informative snapshot from animation-intensive scenarios.
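To give a rough flavour of the entropy-based pose scoring described in the abstract, the sketch below ranks frames of a motion sequence by the Shannon entropy of their quantized joint angles. This is a minimal illustration only: the paper's actual information measure, pose representation, bin count and selection criterion are not given here, so the function names (`pose_entropy`, `top_k_poses`), the joint-angle encoding and the quantization scheme are all assumptions.

```python
# Hypothetical sketch (not the authors' implementation): score candidate poses
# by the Shannon entropy of their quantized joint-angle distribution, as a
# crude proxy for an information-theoretic measure of pose informativeness.
import numpy as np


def pose_entropy(joint_angles: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy (bits) of a pose's quantized joint-angle histogram.

    joint_angles: 1-D array of joint angles in radians (assumed representation).
    """
    hist, _ = np.histogram(joint_angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)          # normalize to a probability distribution
    p = p[p > 0]                           # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())


def top_k_poses(sequence: np.ndarray, k: int = 5) -> list[int]:
    """Indices of the k highest-entropy frames in a (frames, joints) sequence."""
    scores = [pose_entropy(frame) for frame in sequence]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]


# Usage example with synthetic data: 120 frames, 24 joint angles per frame.
rng = np.random.default_rng(0)
seq = rng.uniform(-np.pi, np.pi, size=(120, 24))
print(top_k_poses(seq, k=3))
```

In the paper, the top-ranked postures would then feed a convolutional neural network trained with deep reinforcement learning to pick the viewpoint; that stage is not sketched here.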
| Item Type: | Article |
|---|---|
| ISSN: | 0178-2789 |
| Uncontrolled Keywords: | Action snapshot; Information entropy; Pose; Viewpoint selection |
| Group: | Faculty of Science & Technology |
| ID Code: | 30541 |
| Deposited By: | Symplectic RT2 |
| Deposited On: | 09 Apr 2018 08:57 |
| Last Modified: | 14 Mar 2022 14:10 |