I am a Research Scientist at Electronic Arts (EA). My research is on machine learning models for non-verbal behavior generation, such as hand gestures and facial expressions. You can find one of my talks about this work in this video. My favorite project, the GENEA Challenge 2022, can be found on this project page. My thesis is publicly available at this url.
I come from Ukraine, and my country has been invaded by Russia, causing great suffering to its people. If you want to help, you can find many ways to do so on this website.
Awards
- May 2023: CVPR 2023 Outstanding Reviewer Award!
- Oct 2021: ICMI 2021 Best Reviewer Award!
- Sept 2021: Our paper Speech2Properties2Gestures received an Honourable Mention award at IVA 2021.
- Oct 2020: Gesticulator received the ICMI 2020 Best Paper Award!
- Oct 2020: Let’s face it received the IVA 2020 Best Paper Award!
- May 2020: We received an Honourable Mention award at Eurographics 2020 for our paper Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows.
Other news
- May 2024: Thrilled to announce the publication of our paper on the second GENEA Challenge in the esteemed ACM Transactions on Graphics (TOG)! Joint work with Pieter Wolfert, Youngwoo Yoon, Carla Viegas, Teodor Nikolov, Mihail Tsakov and Gustav Eje Henter 🌟
- Feb 2023: Our paper A Comprehensive Review of Data-Driven Co-Speech Gesture Generation was accepted at Eurographics 2023.
- Feb 2023: GENEA Challenge 2023 was accepted at ICMI 2023 as a Multimodal Grand Challenge.
- Nov 2022: Our paper The GENEA Challenge 2022: A large evaluation of data-driven co-speech gesture generation was published at ICMI 2022.
- Nov 2022: The GENEA Challenge and Workshop were presented at ICMI 2022. Here are their websites: the GENEA Challenge 2022 and the GENEA Workshop 2022. GENEA stands for Generation and Evaluation of Non-verbal Behaviour for Embodied Agents.
- July 2022: Our paper Evaluating Data-Driven Co-Speech Gestures of Embodied Conversational Agents through Real-Time Interaction was accepted at IVA 2022.
- May 2022: Our paper Multimodal analysis of the predictability of hand-gesture properties was published at AAMAS 2022.
- Mar 2022: A challenge and a workshop were accepted at ICMI 2022: the GENEA Challenge 2022 and the GENEA Workshop 2022. GENEA stands for Generation and Evaluation of Non-verbal Behaviour for Embodied Agents.
- Feb 2022: I joined SEED, the R&D department of EA Games, as a research engineer.
- Dec 2021: Our paper Multimodal analysis of the predictability of hand-gesture properties was accepted to AAMAS 2022.
- Dec 2021: I defended my thesis. My opponent was Stefan Kopp, and you can find more info under this link.
- Nov 2021: I gave two talks about my research on co-speech gesture synthesis: for the Social AI and Robotics (SAIR) Lab at King’s College London and for the Affective Intelligence and Robotics Laboratory at Cambridge University.
- Oct 2021: Our GENEA (Generation and Evaluation of Non-verbal Behavior for Embodied Agents) Workshop 2021 is happening at ICMI 2021. See the workshop page.
- Sept 2021: Started an internship with the Applied Research team at Microsoft Research UK.
- July 2021: Two papers accepted at ICMI 2021: HEMVIP: Human Evaluation of Multiple Videos in Parallel and To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures.
- June 2021: Our paper Speech2Properties2Gestures: Gesture-Property Prediction as a Tool for Generating Representational Gestures from Speech was accepted to IVA 2021.
- Mar 2021: Our GENEA (Generation and Evaluation of Non-verbal Behavior for Embodied Agents) Workshop 2021 has been accepted as an official workshop at ICMI 2021. The call for papers is out!
- Feb 2021: Our demonstrator A Framework for Integrating Gesture Generation Models into Interactive Conversational Agents was accepted to the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021).
- Feb 2021: Synced wrote an article about Gesticulator, which you can access under this link.
- Jan 2021: Our paper Moving fast and slow: Analysis of representations and post-processing in speech-driven automatic gesture generation got accepted to the International Journal of Human-Computer Interaction.
- Dec 2020: Our paper A large, crowdsourced evaluation of gesture generation systems on common data: The GENEA Challenge 2020 was accepted to the IUI 2021 conference.
- Oct 2020: I will give a talk about my work on gesture generation for the Talking Robotics seminar series on October 30th at 4 pm UTC (GMT). More details under this link.
- Sept 2020: I have an open position for a master’s thesis on benchmarking gesture generation models in interaction. See the proposal for more details.
- Sept 2020: One more paper accepted to IVA 2020: Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings.
- Aug 2020: Our paper Gesticulator: A framework for semantically-aware speech-driven gesture generation got accepted to ICMI 2020!
- July 2020: Two papers accepted for IVA 2020. More details in the publications tab.
- April 2020: The CFP for our IVA’20 Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents is out. As part of this workshop, we are organizing a Gesture Generation Challenge!
- Feb 2020: Our paper Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows was accepted to Eurographics 2020.
- Jan 2020: Sifan Jiang started his master’s thesis with me. He will be extending my gesture generation model to the humanoid robot NAO.
- November 2019: Talk on the general direction of my research.
- October 2019: Master’s students wanted for a master’s thesis project on gesture generation for a humanoid robot.
- September 2019: Talk on How to make your agent gesture in a natural way at the Max Planck Institute for Intelligent Systems, Perceiving Systems Department, in Tübingen.
- August 2019: Our gesture generation model was applied to a new dataset. Now it can gesticulate in both Japanese and English. Check out a short demo video and our code with a pre-trained model.
- June 2019: Two of our papers were accepted for the ICDL-EPIROB 2019 Workshop on Naturalistic Non-Verbal and Affective Human-Robot Interactions.
- May 2019: Talk on my research at Pint of Science in Stockholm.
- April 2019: Our paper Analyzing input and output representations for speech-driven gesture generation was accepted to IVA 2019 for oral presentation (24% acceptance rate).
- Jan 2019: Our paper On the importance of representations for speech-driven gesture generation was accepted at AAMAS 2019 for poster presentation.
- Oct 2018: My project proposal was published at the ICMI Doctoral Consortium 2018.
- April 2018: Joined Social Robotics Sweden (SoRoS) community.
- June 2017: We had a poster at the First Swedish Symposium on Deep Learning (SSDL).
Academic Service
- Co-organizing
- ICMI’23 Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- ICMI’22 Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- ICMI’21 Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- IVA’20 Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- GESPIN 2020 Conference
- SoRoS 2020 Workshop
- NIPS 2018 Workshop on AI for Social Good
- Reviewer for
- FG 2023, IEEE VR 2023, Eurographics 2023, CVPR 2023, SIGGRAPH 2023, ACII 2023, IVA 2023, SIGGRAPH Asia 2023
- HRI 2022, CHI 2022, CVPR 2022, AISTATS 2022, IVA 2022, ICMI 2022, SIGGRAPH Asia 2022
- ACII 2021, ICMI 2021, ROMAN 2021, ICCV 2021 Workshops
- ECAI 2020, IJCAI 2020, SIGGRAPH 2020, IVA 2020, ICMI 2020 LBR
- ACII 2019, ICSR 2019
- NIPS 2018 Workshop on AI for Social Good