2023

Taras Kucherenko*, Rajmund Nagy*, Youngwoo Yoon*, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, and Gustav Eje Henter. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI '23). 2023.

2022

Youngwoo Yoon*, Pieter Wolfert*, Taras Kucherenko*, Carla Viegas, Teodor Nikolov, Mihail Tsakov, and Gustav Eje Henter. The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI '22). 2022.

2021

Rajmund Nagy*, Taras Kucherenko*, Birger Moell, André Pereira, Hedvig Kjellström, and Ulysses Bernardet. A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents. In Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). 2021.

Taras Kucherenko*, Patrik Jonell*, Youngwoo Yoon*, Pieter Wolfert, and Gustav Eje Henter. A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020. In Proceedings of the International Conference on Intelligent User Interfaces (IUI '21). 2021.

2020

Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the International Conference on Multimodal Interaction (ICMI '20). 2020.

Best Paper Award

Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, and Jonas Beskow. Let's face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. In Proceedings of the International Conference on Intelligent Virtual Agents (IVA '20). 2020.

Best Paper Award

Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows. Eurographics 2020.

Best Paper Award Nominee

2019

A general overview of my research.

Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and Hedvig Kjellström. Analyzing input and output representations for speech-driven gesture generation. In Proceedings of the International Conference on Intelligent Virtual Agents (IVA '19), Paris, July 2–5, 2019.

Code is publicly available on GitHub.

Below is a demo applying that model to a new dataset (which is in English). To reproduce the results, you can use our pre-trained model.

2018

Taras Kucherenko, Jonas Beskow, and Hedvig Kjellström. A neural network approach to missing marker reconstruction in human motion capture. arXiv preprint arXiv:1803.02665 (2018).

The system above can also perform denoising. Code is publicly available on GitHub.