The GENEA Challenge 2023:
A large-scale evaluation of gesture generation models in monadic and dyadic settings

Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov,
Gustav Eje Henter

[conference paper (ICMI 2023)]

 

SUMMARY

In the GENEA Challenge 2023, participating teams built speech-driven gesture-generation systems using the same speech and motion dataset, followed by a joint evaluation. This year’s challenge provided data on both sides of a dyadic interaction, allowing teams to generate full-body motion for an agent given its speech (text and audio) and the speech and motion of the interlocutor. Twelve submissions and two baselines were evaluated together with held-out motion-capture data in several large-scale user studies. The studies focused on three aspects: 1) the human-likeness of the motion, 2) the appropriateness of the motion for the agent’s own speech whilst controlling for the human-likeness of the motion, and 3) the appropriateness of the motion for the behaviour of the interlocutor in the interaction, in segments where the interlocutor is speaking, using a setup that controls for both the human-likeness of the motion and the agent’s own speech.

We found a large span in human-likeness between challenge submissions, with a few systems rated close to human mocap. Appropriateness seems far from being solved, with most submissions performing in a narrow range slightly above chance, far behind natural motion. The effect of the interlocutor is even more subtle, with submitted systems at best performing barely above chance. Interestingly, a dyadic system being highly appropriate for agent speech does not necessarily imply high appropriateness for the interlocutor.
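
For context on the comparisons to chance above: the appropriateness studies show raters pairs of stimuli in which one video’s motion is matched to the speech (or to the interlocutor’s behaviour) and the other is mismatched, and responses are summarised as a mean appropriateness score that is zero at chance level. The sketch below is an illustration only, assuming responses are coded simply as matched-preferred, mismatched-preferred, or tie; the paper defines the exact response scale and analysis.

# Illustrative sketch (not the official analysis code): aggregating pairwise
# appropriateness responses into a mean appropriateness score (MAS), where a
# response preferring the matched stimulus counts +1, the mismatched one -1,
# and a tie 0. Chance-level performance then corresponds to MAS = 0.

def mean_appropriateness_score(responses):
    """responses: iterable of 'matched', 'mismatched', or 'tie' labels."""
    weight = {"matched": 1, "mismatched": -1, "tie": 0}
    responses = list(responses)
    return sum(weight[r] for r in responses) / len(responses)

# Hypothetical response counts for one system:
example = ["matched"] * 520 + ["mismatched"] * 430 + ["tie"] * 50
print(f"MAS = {mean_appropriateness_score(example):+.3f}")  # +0.090, slightly above chance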

This page collects papers, videos, and other resources from our challenge.

Open-source materials:


Dataset release DOI: 10.5281/zenodo.8199132

User-study video stimuli DOI: 10.5281/zenodo.8211448

Submitted BVH files DOI: 10.5281/zenodo.8146027

Human-likeness user-study results DOI: 10.5281/zenodo.8434117
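
The dataset and the submitted BVH files above can be opened with any BVH-aware tool; the GENEA visualizer listed below is the official route to rendered video. As a minimal, self-contained sketch with a hypothetical file path, the MOTION section of a BVH file can be read directly to check frame count, frame rate, and channel count:

# Minimal sketch: inspect the MOTION section of a BVH motion file.
# The path is a hypothetical placeholder; skeleton parsing and rendering
# are handled by the GENEA visualizer listed below.

def read_bvh_motion(path):
    """Return (frame_time, frames) from the MOTION section of a BVH file."""
    with open(path) as f:
        lines = f.read().splitlines()
    start = next(i for i, line in enumerate(lines) if line.strip() == "MOTION")
    n_frames = int(lines[start + 1].split(":")[1])
    frame_time = float(lines[start + 2].split(":")[1])
    frames = [[float(v) for v in line.split()]
              for line in lines[start + 3:start + 3 + n_frames]]
    return frame_time, frames

frame_time, frames = read_bvh_motion("path/to/submitted_motion.bvh")
print(f"{len(frames)} frames at {1 / frame_time:.0f} fps, "
      f"{len(frames[0])} channels per frame")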

Code for visualising gesture motion: GENEA visualizer

Code for conducting the user study: HEMVIP

Code for computing the numerical evaluation metrics: GENEA numerical evaluations
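
The numerical-evaluation repository above provides the official objective metrics. As a rough, self-contained illustration of one statistic commonly used in this space, the sketch below computes a Fréchet distance between two sets of motion features (the quantity behind FGD-style metrics); feats_real and feats_gen are hypothetical feature matrices with one row per motion clip.

# Rough sketch of a Fréchet distance between two feature distributions,
# as used in Fréchet-gesture-distance-style metrics. Not the official
# implementation; see the repository above for that.

import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Placeholder features just to show the call; replace with real features.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(200, 32)), rng.normal(size=(200, 32))))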


Challenge papers

The FineMotion entry to the GENEA Challenge 2023: DeepPhase for conversational gestures generation

Vladislav Korzun, Anna Beloborodova, Arkady Ilin [OpenReview] [ACM ICMI]


Gesture Motion Graphs for Few-Shot Speech-Driven Gesture Reenactment

Zeyu Zhao, Nan Gao, Zhi Zeng, Guixuan Zhang, Jie Liu, Shuwu Zhang [OpenReview] [ACM ICMI]


Diffusion-based co-speech gesture generation using joint text and audio representation

Anna Deichler, Shivam Mehta, Simon Alexanderson, Jonas Beskow [OpenReview] [ACM ICMI]


The UEA Digital Humans entry to the GENEA Challenge 2023

Jonathan Windle, Iain Matthews, Ben Milner, Sarah Taylor [OpenReview] [ACM ICMI]


FEIN-Z: Autoregressive Behavior Cloning for Speech-Driven Gesture Generation

Leon Harz, Hendric Voß, Stefan Kopp [OpenReview] [ACM ICMI]


The DiffuseStyleGesture+ entry to the GENEA Challenge 2023

Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai [OpenReview] [ACM ICMI]


Discrete Diffusion for Co-Speech Gesture Synthesis

Ankur Chemburkar, Shuhong Lu, Andrew Feng [OpenReview] [ACM ICMI]


The KCL-SAIR team's entry to the GENEA Challenge 2023: Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker

Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Celiktutan [OpenReview] [ACM ICMI]


Gesture Generation with Diffusion Models Aided by Speech Activity Information

Rodolfo Luis Tonoli, Leonardo Boulitreau de Menezes Martins Marques, Lucas Hideki Ueda, Paula Paro Dornhofer Costa [OpenReview] [ACM ICMI]


Co-Speech Gesture Generation via Audio and Text Feature Engineering

Geunmo Kim, Jaewoong Yoo, Hyedong Jung [OpenReview] [ACM ICMI]


DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models

Weiyu Zhao, Liangxiao Hu, Shengping Zhang [OpenReview] [ACM ICMI]


The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation

Gwantae Kim, Yuanming Li, Hanseok Ko [OpenReview] [ACM ICMI]

Citation format:

@inproceedings{kucherenko2023genea,
  author={Kucherenko, Taras and Nagy, Rajmund
    and Yoon, Youngwoo and Woo, Jieyeon
    and Nikolov, Teodor and Tsakov, Mihail
    and Henter, Gustav Eje},
  title={The {GENEA} {C}hallenge 2023: {A} large-scale
    evaluation of gesture generation models in
    monadic and dyadic settings},
  booktitle={Proceedings of the ACM International
    Conference on Multimodal Interaction},
  publisher={ACM},
  series={ICMI '23},
  year={2023}
}