Workshop on Machine Learning for Computer Animation and Computer Games
Instructors
Daniel Holden
Bio
Daniel Holden is a Machine Learning researcher at Ubisoft Montréal's Research and Development Lab, "Ubisoft La Forge". He completed his PhD at the University of Edinburgh in 2017, with research focusing on how Neural Networks and Machine Learning techniques can be used to produce state-of-the-art character animation systems. At Ubisoft La Forge he has helped develop and guide several important Machine Learning projects, and he has previously presented research at GDC, SIGGRAPH, and SIGGRAPH Asia.
Workshop Abstract
Game development for AAA productions requires a vast array of tools, techniques, and expertise, ranging from game design and artistic content creation to data management and low-level engine programming. With recent advances in Neural Networks, Machine Learning is becoming increasingly practical and is starting to take the role of a general-purpose tool in any game developer's toolbox. In this workshop I will present some of the Machine Learning applications developed at Ubisoft La Forge and the practical ways it can be thought about and used.
Jungdam Won
Bio
Jungdam Won is a post-doctoral researcher in the Movement Research Lab at Seoul National University. He received his Ph.D. and B.S. in Computer Science and Engineering from Seoul National University, Korea, in 2017 and 2011, respectively. He was awarded a Google PhD Fellowship in Robotics in 2016. In 2013 he worked at Disney Research Los Angeles as a Lab Associate Intern with Jehee Lee, Carol O'Sullivan, and Jessica K. Hodgins. His current research focuses on physics-based control for diverse creatures and collaborative formation among multiple characters, drawing on motion capture, optimization, and various machine learning approaches.
https://sites.google.com/site/jungdampersonal/
Workshop Abstract
Reinforcement learning (RL) is a powerful framework for solving optimal control problems in which an agent interacts with an environment. Recently, RL with deep neural networks has shown great potential in computer animation and games. One example is constructing controllers for physically simulated characters, a promising topic for the future of the animation and game industries because such controllers can generate physically plausible interactions for users.
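To make the agent-environment framing above concrete, below is a minimal, self-contained sketch of the standard RL interaction loop in Python. The toy environment (ToyBalanceEnv) and the random placeholder policy are purely illustrative stand-ins for a physics simulator and a learned controller; they are not part of any system presented in the talk.

import numpy as np

class ToyBalanceEnv:
    """Toy stand-in for a physics simulator: keep a 1-D 'pose' near zero."""
    def reset(self):
        self.state = np.random.uniform(-1.0, 1.0, size=1)
        return self.state.copy()

    def step(self, action):
        # Simple noisy linear dynamics; the reward is higher near the target pose.
        self.state = self.state + 0.1 * action + np.random.normal(0.0, 0.01, size=1)
        reward = float(-np.abs(self.state).sum())
        done = bool(np.abs(self.state)[0] > 2.0)  # episode ends if the pose drifts too far
        return self.state.copy(), reward, done

def random_policy(state):
    # Placeholder for a learned controller (e.g. a neural network policy).
    return np.random.uniform(-1.0, 1.0, size=1)

# Standard RL interaction loop: the agent observes a state, chooses an action,
# and the environment returns the next state and a reward.
env = ToyBalanceEnv()
state = env.reset()
total_reward = 0.0
for t in range(100):
    action = random_policy(state)
    state, reward, done = env.step(action)
    total_reward += reward
    if done:
        break
print("episode return:", total_reward)

An RL algorithm would replace random_policy with a trainable policy and adjust its parameters to maximize the episode return collected by this loop.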
In this talk, I will first introduce the basics of RL, including the underlying optimal control theory, and then present recent RL applications to physics-based character control.
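For reference, the optimal control view of RL that such an introduction typically starts from is the maximization of the expected discounted return (the notation below is the standard one, not taken from the talk itself):

J(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \Big]

where s_t and a_t denote the simulated character's state and the controller's action at time t, r is the reward (for example, how closely the simulated motion tracks a reference motion), \gamma \in [0, 1) is a discount factor, and the expectation is over trajectories \tau generated by the policy \pi interacting with the physics simulation.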