Real-time and Controllable Reactive Motion Synthesis via Intention Guidance

Xiaotang Zhang, Ziyi Chang, Qianhui Men and Hubert P. H. Shum
Computer Graphics Forum (CGF), 2025

Impact Factor: 2.9


Abstract

We propose a real-time method for reactive motion synthesis based on the known trajectory of an input character, predicting instant reactions using only historical, user-controlled motions. Our method handles the uncertainty of future movements by introducing an intention predictor, which forecasts key joint intentions from the historical interaction to make pose prediction more deterministic. The intention is then encoded into the latent space of its reactive motion and matched against a codebook that represents mappings between input and output motions. The model samples from a categorical distribution for pose generation and strengthens robustness through adversarial training. Unlike previous offline approaches, the system can recursively generate intentions and reactive motions using feedback from earlier steps, enabling realistic real-time, long-term interactive synthesis. Both quantitative and qualitative experiments show that our approach outperforms other matching-based motion synthesis approaches, delivering superior stability and generalizability. The user can also actively influence the outcome by controlling the moving direction, creating a personalized interaction path that deviates from predefined trajectories.
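
The abstract outlines a recursive pipeline: predict a key-joint intention from the interaction history, match it against a learned codebook, sample a code from a categorical distribution, decode the next reactive pose, and feed the result back for the next step. The sketch below is a minimal, illustrative PyTorch rendering of that loop, not the authors' implementation; all module names (IntentionPredictor, ReactiveSynthesizer), dimensions, and the feedback scheme are assumptions for illustration only.

import torch
import torch.nn as nn


class IntentionPredictor(nn.Module):
    # Forecasts a key-joint intention vector from the historical interaction.
    def __init__(self, feat_dim=64, intention_dim=16):
        super().__init__()
        self.gru = nn.GRU(feat_dim, 128, batch_first=True)
        self.head = nn.Linear(128, intention_dim)

    def forward(self, history):            # history: (B, T, feat_dim)
        _, h = self.gru(history)
        return self.head(h[-1])            # (B, intention_dim)


class ReactiveSynthesizer(nn.Module):
    # Encodes the intention into a latent space, scores it against a codebook,
    # samples a categorical distribution over codes, and decodes a reactive pose.
    def __init__(self, intention_dim=16, code_dim=32, num_codes=256, pose_dim=64):
        super().__init__()
        self.encoder = nn.Linear(intention_dim, code_dim)
        self.codebook = nn.Embedding(num_codes, code_dim)  # learned input-output mappings
        self.decoder = nn.Linear(code_dim, pose_dim)

    def forward(self, intention):
        z = self.encoder(intention)                        # latent intention, (B, code_dim)
        logits = z @ self.codebook.weight.t()              # similarity to each code, (B, num_codes)
        idx = torch.distributions.Categorical(logits=logits).sample()
        return self.decoder(self.codebook(idx))            # next reactive pose, (B, pose_dim)


@torch.no_grad()
def generate(history, predictor, synthesizer, steps=30):
    # Recursive closed-loop generation: each predicted pose is appended to the
    # history window and used as input to the following step. For simplicity the
    # history here holds only the generated character's frames; in the paper the
    # input character's user-controlled motion also drives the prediction.
    poses = []
    for _ in range(steps):
        intention = predictor(history)
        pose = synthesizer(intention)
        poses.append(pose)
        history = torch.cat([history[:, 1:], pose.unsqueeze(1)], dim=1)
    return torch.stack(poses, dim=1)                       # (B, steps, pose_dim)


# Example: one interaction history of 60 frames with 64-D per-frame features.
history = torch.randn(1, 60, 64)
out = generate(history, IntentionPredictor(), ReactiveSynthesizer())
print(out.shape)  # torch.Size([1, 30, 64])

Sampling from a categorical distribution over codebook entries, rather than regressing a single pose, is what lets such a model commit to one plausible reaction per step while remaining stochastic across runs.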



Cite This Research

Plain Text

Xiaotang Zhang, Ziyi Chang, Qianhui Men and Hubert P. H. Shum, "Real-time and Controllable Reactive Motion Synthesis via Intention Guidance," Computer Graphics Forum, pp. e70222, Wiley, 2025.

BibTeX

@article{zhang25realtime,
 author={Zhang, Xiaotang and Chang, Ziyi and Men, Qianhui and Shum, Hubert P. H.},
 journal={Computer Graphics Forum},
 title={Real-time and Controllable Reactive Motion Synthesis via Intention Guidance},
 year={2025},
 pages={e70222},
 numpages={11},
 doi={10.1111/cgf.70222},
 issn={0167-7055},
 publisher={Wiley},
}

RIS

TY  - JOUR
AU  - Zhang, Xiaotang
AU  - Chang, Ziyi
AU  - Men, Qianhui
AU  - Shum, Hubert P. H.
T2  - Computer Graphics Forum
TI  - Real-time and Controllable Reactive Motion Synthesis via Intention Guidance
PY  - 2025
SP  - e70222
EP  - e70222
DO  - 10.1111/cgf.70222
SN  - 0167-7055
PB  - Wiley
ER  - 


Similar Research

Qianhui Men, Hubert P. H. Shum, Edmond S. L. Ho and Howard Leung, "GAN-Based Reactive Motion Synthesis with Class-Aware Discriminators for Human-Human Interaction", Computers and Graphics (C&G), 2022
Ziyi Chang, He Wang, George Alex Koulieris and Hubert P. H. Shum, "Large-Scale Multi-Character Interaction Synthesis", Proceedings of the 2025 ACM SIGGRAPH, 2025
Baiyi Li, Edmond S. L. Ho, Hubert P. H. Shum and He Wang, "Two-Person Interaction Augmentation with Skeleton Priors", Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024
Hubert P. H. Shum, "Simulating Interactions Among Multiple Characters", PhD Thesis from the University of Edinburgh, 2010
Edmond S. L. Ho, Hubert P. H. Shum, Yiu-ming Cheung and P. C. Yuen, "Topology Aware Data-Driven Inverse Kinematics", Computer Graphics Forum (CGF) - Proceedings of the 2013 Pacific Conference on Computer Graphics and Applications (PG), 2013
Hubert P. H. Shum, Taku Komura, Masashi Shiraishi and Shuntaro Yamazaki, "Interaction Patches for Multi-Character Animation", ACM Transactions on Graphics (TOG) - Proceedings of the 2008 ACM SIGGRAPH Asia, 2008
Taku Komura, Hubert P. H. Shum and Edmond S. L. Ho, "Simulating Interactions of Characters", Proceedings of the 2008 ACM International Conference on Motion in Games (MIG), 2008
