My publications can also be found on [Google Scholar Citations], [DBLP], and [ResearchGate].

2020

  • Taras Kucherenko, Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, and Gustav Eje Henter. The GENEA Challenge 2020: Benchmarking gesture-generation systems on common data. International Workshop on Generation and Evaluation of Non-Verbal Behaviour for Embodied Agents. 2020. [Paper] [Video]

  • Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, and Hedvig Kjellström. Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation. arXiv preprint. 2020. [Paper] [Code] [Video] [Project Page]

  • Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexanderson, Iolanda Leite, and Hedvig Kjellström. Gesticulator: A framework for semantically-aware speech-driven gesture generation. International Conference on Multimodal Interaction (ICMI ’20). 2020. [Paper] [Code] [Video] [Project Page] Best Paper Award

  • Patrik Jonell^, Taras Kucherenko^, Ilaria Torre, and Jonas Beskow (^ equal contribution). Can we trust online crowdworkers? Comparing online and offline participants in a preference test of virtual agents. International Conference on Intelligent Virtual Agents (IVA ’20). 2020. [Paper] [Video]

  • Patrik Jonell, Taras Kucherenko, Gustav Eje Henter, and Jonas Beskow. Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. International Conference on Intelligent Virtual Agents (IVA ’20). 2020. [Paper] [Code] [Video] [Project Page] Best Paper Award

  • Simon Alexanderson, Éva Székely, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Generating coherent spontaneous speech and gesture from text. International Conference on Intelligent Virtual Agents (IVA ’20). 2020. [Paper] [Project Page]

  • Simon Alexanderson, Gustav Eje Henter, Taras Kucherenko, and Jonas Beskow. Style-controllable speech-driven gesture synthesis using normalising flows. Computer Graphics Forum. 2020. [Paper] [Code] [Video] Eurographics 2020 Honourable Mention Award

2019

  • Pieter Wolfert, Taras Kucherenko, Hedvig Kjellström, and Tony Belpaeme. Should Beat Gestures Be Learned Or Designed? A Benchmarking User Study. ICDL-EPIROB 2019 Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions, Oslo, August 19, 2019. [Paper] [Code] [Poster]

  • Patrik Jonell, Taras Kucherenko, Erik Ekstedt, and Jonas Beskow. Learning Non-verbal Behavior for a Social Robot from YouTube Videos. ICDL-EPIROB 2019 Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions, Oslo, August 19, 2019. [Paper] [Code] [Poster]

  • Taras Kucherenko, Dai Hasegawa, Gustav Eje Henter, Naoshi Kaneko, and Hedvig Kjellström. Analyzing input and output representations for speech-driven gesture generation. International Conference on Intelligent Virtual Agents (IVA ’19), Paris, July 02–05, 2019. [Paper] [Code] [Video] [bib] [Project Page]

  • Taras Kucherenko, Dai Hasegawa, Naoshi Kaneko, Gustav Eje Henter, and Hedvig Kjellström. On the importance of representations for speech-driven gesture generation. 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’19), Extended Abstract, Montreal, May 13–17, 2019. [Paper] [Poster] [bib] [Project Page]

2018

  • Taras Kucherenko. Data-driven non-verbal behavior generation for humanoid robots. International Conference on Multimodal Interaction (ICMI ’18), Doctoral Consortium, Boulder, Oct 12–17, 2018. [Paper]

  • Taras Kucherenko, Jonas Beskow, and Hedvig Kjellström. A neural network approach to missing marker reconstruction in human motion capture. arXiv preprint. 2018. [Paper] [Code] [Video]

2017

  • Patrik Jonell, Joseph Mendelson, Thomas Storskog, Göran Hagman, Per Östberg, Iolanda Leite, Taras Kucherenko, Olga Mikheeva, Ulrika Akenine, Vesna Jelic, Alina Solomon, Jonas Beskow, Joakim Gustafson, Miia Kivipelto, and Hedvig Kjellström. Machine Learning and Social Robotics for Detecting Early Signs of Dementia. arXiv preprint. 2017. [Paper]

  • Taras Kucherenko and Hedvig Kjellström. Towards Context-Preserving Human to Robot Motion Mapping. The First Swedish Symposium on Deep Learning, Stockholm, 2017. [Paper]