SWTA: Sparse weighted temporal attention for drone-based activity recognition.

Yadav, S. K., Pahwa, E., Luthra, A., Tiwari, K. and Pandey, H., 2023. SWTA: Sparse weighted temporal attention for drone-based activity recognition. In: International Joint Conference on Neural Networks (IJCNN 2023), 18-23 June 2023, Queensland, Australia. (In Press)

Full text available as:

Drone_IJCNN (2).pdf - Accepted Version (PDF, 6MB)
Available under License Creative Commons Attribution Non-commercial.
Official URL: https://2023.ijcnn.org/

Abstract

Drone-camera-based human activity recognition (HAR) has received significant attention from the computer vision research community in the past few years. A robust and efficient HAR system plays a pivotal role in fields such as video surveillance, crowd behavior analysis, sports analysis, and human-computer interaction. The task is made challenging by complex poses, varying viewpoints, and the environmental scenarios in which the action takes place. To address these complexities, this paper proposes a novel Sparse Weighted Temporal Attention (SWTA) module that uses sparsely sampled video frames to obtain global weighted temporal attention. The proposed SWTA has two components. The first is a temporal segment network that sparsely samples a given set of frames. The second is weighted temporal attention, which fuses attention maps derived from optical flow with raw RGB images. This is followed by a basenet, comprising a convolutional neural network (CNN) module and fully connected layers, that performs the activity recognition. SWTA can be used as a plug-in module with existing deep CNN architectures, enabling them to learn temporal information and eliminating the need for a separate temporal stream. It has been evaluated on three publicly available benchmark datasets, namely Okutama, MOD20, and Drone-Action, achieving accuracies of 72.76%, 92.56%, and 78.86%, respectively, thereby surpassing the previous state-of-the-art performances by margins of 25.26%, 18.56%, and 2.94%.
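The sparse sampling component described above follows the temporal-segment idea: rather than processing every frame, the video is divided into a fixed number of equal segments and one frame is drawn from each, giving coarse but global temporal coverage at low cost. The sketch below is a minimal, hypothetical illustration of such segment-wise sampling; it is not the authors' implementation, and the function name and the random-offset/centre-frame convention are assumptions.

```python
import random

def sparse_sample_indices(num_frames, num_segments, train=True):
    """Segment-wise sparse sampling: split a video of `num_frames` frames
    into `num_segments` equal segments and pick one frame index from each
    (a random offset during training, the segment centre at test time)."""
    seg_len = num_frames / num_segments
    indices = []
    for k in range(num_segments):
        start = int(k * seg_len)
        end = max(start, int((k + 1) * seg_len) - 1)
        if train:
            # random frame within the segment adds temporal jitter
            indices.append(random.randint(start, end))
        else:
            # deterministic centre frame for evaluation
            indices.append((start + end) // 2)
    return indices

# e.g. a 100-frame clip with 5 segments at test time
print(sparse_sample_indices(100, 5, train=False))  # → [9, 29, 49, 69, 89]
```

The sampled frames would then be passed, together with optical-flow-derived attention maps, to the weighted-attention fusion and the CNN basenet described in the abstract.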

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Human Activity Recognition; Video Understanding; Drone Action Recognition
Group: Faculty of Science & Technology
ID Code: 38505
Deposited By: Symplectic RT2
Deposited On: 17 May 2023 11:27
Last Modified: 23 Jun 2023 01:08
