Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. To appear at the International Conference on Multimodal Interaction (ICMI ‘20).



Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows. Eurographics 2020 (Honourable Mention).


A general overview of my research.



Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and Hedvig Kjellström. Analyzing input and output representations for speech-driven gesture generation. International Conference on Intelligent Virtual Agents (IVA ‘19), Paris, July 2–5, 2019.

Code is publicly available on GitHub.

Below is a demo of this model applied to a new dataset (which is in English). To reproduce the results, you can use our pre-trained model.


Taras Kucherenko, Jonas Beskow and Hedvig Kjellström. A neural network approach to missing marker reconstruction in human motion capture. arXiv preprint arXiv:1803.02665 (2018).

The system above can also perform denoising. Code is publicly available on GitHub.