Hierarchical Graph Convolutional Networks for Action Quality Assessment

Kanglei Zhou, Yue Ma, Hubert P. H. Shum and Xiaohui Liang
IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2023

Impact Factor: 11.1 · Top 10% Journal in Engineering, Electrical & Electronic · Citations: 55 (according to Google Scholar, 2025)

Abstract

Action quality assessment (AQA) automatically evaluates how well humans perform actions in a given video, a technique widely used in fields such as rehabilitation medicine, athletic competitions, and specific skills assessment. However, existing works that uniformly divide the video sequence into small clips of equal length suffer from intra-clip confusion and inter-clip incoherence, hindering the further development of AQA. To address this issue, we propose a hierarchical graph convolutional network (GCN). First, semantic information confusion is corrected through clip refinement, generating the 'shot' as the basic action unit. We then construct a scene graph by combining several consecutive shots into meaningful scenes to capture local dynamics. These scenes can be viewed as different procedures of a given action, providing valuable assessment cues. The video-level representation is finally extracted via sequential action aggregation among scenes to regress the predicted score distribution, enhancing discriminative features and improving assessment performance. Experiments on the AQA-7, MTL-AQA, and JIGSAWS datasets demonstrate the superiority of the proposed hierarchical GCN over state-of-the-art methods.
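The pipeline described in the abstract — clips refined into shots, shots grouped into a scene graph, scenes aggregated into a video-level representation, and a score distribution regressed from it — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, the pairwise clip averaging, the temporal-neighbour adjacency, and the random weights are all assumptions chosen to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_conv(X, A, W):
    # One GCN layer: add self-loops, symmetrically normalize the
    # adjacency, apply a linear transform, then ReLU.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

# Toy clip-level features: 12 clips, 16-dim each (stand-ins for
# features from a pretrained video backbone).
clips = rng.standard_normal((12, 16))

# 1) Clip refinement: merge adjacent clip pairs into 6 "shots"
#    (here, a simple average stands in for learned refinement).
shots = clips.reshape(6, 2, 16).mean(axis=1)

# 2) Scene graph: connect each shot to its temporal neighbours and
#    run one graph convolution to capture local scene dynamics.
n = shots.shape[0]
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
W = rng.standard_normal((16, 16)) * 0.1
scenes = graph_conv(shots, A, W)

# 3) Sequential aggregation: pool scenes into a video-level vector
#    (mean pooling as a placeholder for learned aggregation).
video_repr = scenes.mean(axis=0)

# 4) Regress a distribution over 10 discrete score bins via softmax.
W_out = rng.standard_normal((16, 10)) * 0.1
logits = video_repr @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

The point of the hierarchy is visible in the shapes: 12 clips become 6 shots, the graph convolution mixes each shot with its temporal neighbours, and a single pooled vector drives the final score distribution.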


Cite This Research

Plain Text

Kanglei Zhou, Yue Ma, Hubert P. H. Shum and Xiaohui Liang, "Hierarchical Graph Convolutional Networks for Action Quality Assessment," IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 12, pp. 7749-7763, IEEE, 2023.

BibTeX

@article{zhou23hierarchical,
 author={Zhou, Kanglei and Ma, Yue and Shum, Hubert P. H. and Liang, Xiaohui},
 journal={IEEE Transactions on Circuits and Systems for Video Technology},
 title={Hierarchical Graph Convolutional Networks for Action Quality Assessment},
 year={2023},
 volume={33},
 number={12},
 pages={7749-7763},
 numpages={15},
 doi={10.1109/TCSVT.2023.3281413},
 issn={1051-8215},
 publisher={IEEE},
}

RIS

TY  - JOUR
AU  - Zhou, Kanglei
AU  - Ma, Yue
AU  - Shum, Hubert P. H.
AU  - Liang, Xiaohui
T2  - IEEE Transactions on Circuits and Systems for Video Technology
TI  - Hierarchical Graph Convolutional Networks for Action Quality Assessment
PY  - 2023
VL  - 33
IS  - 12
SP  - 7749
EP  - 7763
DO  - 10.1109/TCSVT.2023.3281413
SN  - 1051-8215
PB  - IEEE
ER  - 


Similar Research

Kanglei Zhou, Hubert P. H. Shum, Frederick W. B. Li, Xingxing Zhang and Xiaohui Liang, "PHI: Bridging Domain Shift in Long-Term Action Quality Assessment via Progressive Hierarchical Instruction", IEEE Transactions on Image Processing (TIP), 2025
Kanglei Zhou, Liyuan Wang, Xingxing Zhang, Hubert P. H. Shum, Frederick W. B. Li, Jianguo Li and Xiaohui Liang, "MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment", Proceedings of the 2024 European Conference on Computer Vision (ECCV), 2024
Ruisheng Han, Kanglei Zhou, Amir Atapour-Abarghouei, Xiaohui Liang and Hubert P. H. Shum, "FineCausal: A Causal-Based Framework for Interpretable Fine-Grained Action Quality Assessment", Proceedings of the 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2025
Kanglei Zhou, Ruizhi Cai, Liyuan Wang, Hubert P. H. Shum and Xiaohui Liang, "A Comprehensive Survey of Action Quality Assessment: Method and Benchmark", arXiv Preprint, 2024
Kanglei Zhou, Ruizhi Cai, Yue Ma, Qingqing Tan, Xinning Wang, Jianguo Li, Hubert P. H. Shum, Frederick W. B. Li, Song Jin and Xiaohui Liang, "A Video-Based Augmented Reality System for Human-in-the-Loop Muscle Strength Assessment of Juvenile Dermatomyositis", IEEE Transactions on Visualization and Computer Graphics (TVCG) - Proceedings of the 2023 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2023
