
YogNet: A Two-Stream Network for Realtime Multiperson Yoga Action Recognition and Posture Correction.

Yadav, S., Agarwal, A., Kumar, A., Tiwari, K., Pandey, H. and Akbar, S. A., 2022. YogNet: A Two-Stream Network for Realtime Multiperson Yoga Action Recognition and Posture Correction. Knowledge-Based Systems, 250 (August), 109097.

Full text available as:

PDF
1-s2.0-S095070512200541X-main.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.

16MB

DOI: 10.1016/j.knosys.2022.109097

Abstract

Yoga is a traditional Indian exercise. It specifies various body postures called asanas; practicing them is beneficial for physical, mental, and spiritual well-being. To support yoga practitioners, there is a need for an expert yoga asana recognition system that can automatically analyze a practitioner's postures and provide suitable posture-correction instructions. This paper proposes YogNet, a multi-person yoga expert system for 20 asanas using a two-stream deep spatiotemporal neural network architecture. The first stream utilizes a keypoint detection approach to detect the practitioner's pose, followed by the formation of bounding boxes around the subject. The model then applies time-distributed convolutional neural networks (CNNs) to extract framewise postural features, followed by regularized long short-term memory (LSTM) networks to give temporal predictions. The second stream utilizes 3D-CNNs for spatiotemporal feature extraction from RGB videos. Finally, the scores of the two streams are fused using multiple fusion techniques. A yoga asana recognition database (YAR) containing 1206 videos was collected using a single 2D web camera over 367 minutes with the help of 16 participants, and contains four view variations, i.e., front, back, left, and right sides. The proposed system is novel in that it is the earliest two-stream deep learning-based system that can perform multi-person yoga asana recognition and correction in real time. Simulation results reveal that the YogNet system achieved 77.29%, 89.29%, and 96.31% accuracy using the pose stream, the RGB stream, and the fusion of both streams, respectively. These results are impressive and sufficiently high to recommend the system for general adoption.
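The late-fusion step described in the abstract — combining the class scores of the pose stream and the RGB stream — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the choice of softmax normalization, and the specific fusion rules (average, weighted, max) are assumptions about what "multiple fusion techniques" might look like.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_scores(pose_logits, rgb_logits, method="average", w=0.5):
    """Score-level (late) fusion of the two stream outputs.

    pose_logits, rgb_logits: arrays of shape (batch, num_classes),
    e.g. num_classes = 20 asanas as in the paper.
    The three rules below are hypothetical examples of fusion schemes.
    """
    p = softmax(pose_logits)
    r = softmax(rgb_logits)
    if method == "average":
        return (p + r) / 2.0          # equal-weight mean of class probabilities
    if method == "weighted":
        return w * p + (1.0 - w) * r  # trust one stream more than the other
    if method == "max":
        return np.maximum(p, r)       # per-class maximum (unnormalized)
    raise ValueError(f"unknown fusion method: {method}")

# Example: fused prediction over 20 asana classes for one clip
pose_logits = np.random.randn(1, 20)
rgb_logits = np.random.randn(1, 20)
predicted_class = int(np.argmax(fuse_scores(pose_logits, rgb_logits)))
```

Taking the argmax of the fused scores yields the recognized asana; average and weighted fusion keep the output a valid probability distribution, while max fusion does not and would need renormalization if probabilities are required downstream.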

Item Type:Article
ISSN:0950-7051
Uncontrolled Keywords:Action recognition; Computer vision; Posture correction; Yoga and exercise
Group:Faculty of Science & Technology
ID Code:36993
Deposited By: Symplectic RT2
Deposited On:30 May 2022 10:02
Last Modified:26 May 2023 01:08
