MIG 2019 - ACM SIGGRAPH Conference on Motion, Interaction and Games

Please find the downloadable pocket guide for MIG 2019 here.

Welcome to the 12th annual ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG 2019, formerly known as Motion in Games), held at Northumbria University, Newcastle upon Tyne, United Kingdom, 28-30 October 2019.

This year, authors of selected best papers will be invited to submit extended and significantly revised versions to IEEE Transactions on Visualization and Computer Graphics (TVCG, Impact Factor: 3.078) and the Computers & Graphics journal (C&G, Impact Factor: 1.200).

Motion plays a crucial role in interactive applications such as VR, AR, and video games. Characters move around, objects are manipulated or move due to physical constraints, entities are animated, and the camera moves through the scene. Nowadays, even the motion of the player is used as input to such interactive systems.

Motion is currently studied in many different areas of research, including graphics and animation, game technology, robotics, simulation, computer vision, and also physics, psychology, and urban studies. Cross-fertilization between these communities can considerably advance the state of the art in the area.

The goal of the Motion, Interaction and Games conference is to bring together researchers from this variety of fields to present their most recent results, to initiate collaborations, and to contribute to the establishment of the research area. The conference will consist of regular paper sessions and poster presentations, as well as presentations by a selection of internationally renowned speakers in all areas related to interactive systems and simulation. The conference includes entertaining cultural and social events that foster casual and friendly interactions among the participants.

Best Paper Awards

Best Paper

Volume Maps: An Implicit Boundary Representation for SPH by Jan Bender, Tassilo Kugelstadt, Marcel Weiler, Dan Koschier

Second Best Paper

The Case for Haptic Props: Shape, Weight and Vibro-tactile Feedback by Michael White, James Gain, Ulysse Vimont, Daniel Lochner

Best Student Paper

Data-driven Gaze Animation using Recurrent Neural Networks by Alex Klein, Zerrin Yumak, Arjen Beij, A. Frank van der Stappen

Peer-voted Best Poster

Why Did the Human Cross the Road? by Panayiotis Charalambous, Yiorgos Chrysanthou

Peer-voted Second Best Poster

3D Car Shape Reconstruction from a Single Sketch Image by Naoki Nozawa, Hubert P. H. Shum, Edmond S. L. Ho, Shigeo Morishima

Best Presentation

Animation Synthesis Triggered by Vocal Mimics by Adrien Nivaggioli, Damien Rohmer


Sponsors

SIGGRAPH

Platinum Special

Vicon

Gold

Huawei

Silver

Disney


In Cooperation

Eurographics (EG)


Keynote Speakers


    • Prof. Robert Sumner
      Associate Director of DisneyResearch|Studios and an Adjunct Professor at the Swiss Federal Institute of Technology Zurich (ETH)
    • The Science to Create the Magic
      For more than a decade, DisneyResearch|Studios has been pushing the frontier of scientific and technological innovation to advance entertainment products, experiences and shows. Our research covers a broad spectrum of different fields including graphics, vision, augmented and virtual reality, machine learning and AI, as well as interactive technologies. Our innovations are experienced by hundreds of millions of viewers and customers across the world. In this talk I will give an overview of our core research programs including digital humans, story technology, interactive content creation, video processing, and audience understanding. Furthermore, I will share my insights into the fundamental differences between academic and corporate research and highlight the challenges of transferring technology into products.
      Biography
      Dr Robert Sumner is the Associate Director of DisneyResearch|Studios and an Adjunct Professor at ETH Zurich. At Disney, Robert leads the lab’s research in animation and games. His research group strives to bypass technical barriers in animation and game production pipelines with new algorithms that expand the designer’s creative toolbox in terms of depiction, movement, deformation, stylization, control, and efficiency. Robert received a B.S. degree in computer science from the Georgia Institute of Technology and his M.S. and Ph.D. degrees from the Massachusetts Institute of Technology. He spent three years as a postdoctoral researcher at ETH Zurich before joining Disney.
      At ETH, Robert teaches a course called the Game Programming Laboratory in which students from ETH and the Zurich University of the Arts work in small teams to design and implement novel video games. In 2015, Robert founded the ETH Game Technology Center, which provides an umbrella over ETH research, teaching, and outreach in the area of game technology.
    • Prof. Taku Komura
      Professor of Computer Graphics, University of Edinburgh
    • Learning Neural Character Controllers from Motion Capture Data
      In this talk, I will cover our recent development of neural network-based character controllers. Using neural networks for character controllers significantly increases the scalability of the system: the controller can be trained on a large amount of motion capture data while the run-time memory is kept low. As a result, such controllers are suitable for real-time applications such as computer games and virtual reality systems. The main challenge is in designing an architecture that can produce production-quality movement and also manage a wide variety of motion classes. Our development covers low-level locomotion controllers for bipeds and quadrupeds, which allow the characters to walk, run, side-step and climb over uneven terrain, as well as a high-level character controller for humanoid characters to interact with objects and the environment, which allows the character to sit on chairs, open doors and carry objects (a minimal sketch of the underlying idea appears after the speaker list below). At the end of the talk, I will discuss the open problems and future directions of character animation.
      Biography
      Taku Komura is a Professor at the Institute of Perception, Action and Behaviour, School of Informatics, University of Edinburgh. He leads the Computer Graphics and Visualization Unit, where his research has focused on data-driven character animation, physically-based character animation, crowd simulation, cloth animation, anatomy-based modelling, and robotics. Recently, his main research interests have been the application of machine learning techniques to animation synthesis. He received the Royal Society Industry Fellowship (2014) and the Google AR/VR Research Award (2017).
    • Prof. Karan Singh
      Professor of Computer Science at the University of Toronto
    • Expressive Facial Modeling and Animation
      Humans are hard-wired to see and interpret minute facial detail. The rich signals we extract from facial expressions impose high expectations for computer-generated facial imagery. This talk focuses on the science and art of expressive facial animation. Specifically, aspects of facial anatomy, biomechanics, linguistics and perceptual psychology will be used to motivate and describe the construction of geometric face rigs, and techniques for the animator-centric creation of emotion, expression and speech animation from input images, audio and video.
      Biography
      Karan Singh is a Professor of Computer Science at the University of Toronto. He co-directs a globally reputed graphics and HCI lab, DGP, has over 100 peer-reviewed publications, and has supervised over 40 MS/PhD theses. His research interests lie in interactive graphics, spanning art and visual perception, geometric design and fabrication, character animation and anatomy, and interaction techniques for mobile, augmented and virtual reality (AR/VR). He has been a technical lead for the Oscar-winning software Maya and was the R&D Director for the 2004 Oscar-winning animated short Ryan. He has co-founded multiple companies including Arcestra (architectural design), JALI (facial animation), and JanusVR (Virtual Reality).
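
As a point of reference for Prof. Komura's keynote above: the abstract describes controllers that learn to map recent motion and a user control signal to the character's next pose. The sketch below is a deliberately minimal PyTorch illustration of that idea, not the architecture presented in the talk; the pose and control dimensions, window length, and training loop are placeholder assumptions.

    # Minimal sketch of a data-driven character controller (assumptions
    # throughout): an MLP that predicts the next pose from a window of
    # recent poses plus a user control signal, trained on mocap frames.
    import torch
    import torch.nn as nn

    POSE_DIM = 93     # placeholder: e.g. 31 joints x 3 rotation values
    CONTROL_DIM = 4   # placeholder: desired direction, speed, action flag
    WINDOW = 10       # number of past frames fed to the network

    controller = nn.Sequential(
        nn.Linear(WINDOW * POSE_DIM + CONTROL_DIM, 512), nn.ReLU(),
        nn.Linear(512, 512), nn.ReLU(),
        nn.Linear(512, POSE_DIM),          # predicted next-frame pose
    )
    optimizer = torch.optim.Adam(controller.parameters(), lr=1e-4)

    def train_step(past_poses, control, next_pose):
        """One supervised step on a batch of motion-capture windows.
        past_poses: (B, WINDOW, POSE_DIM); control: (B, CONTROL_DIM)."""
        x = torch.cat([past_poses.flatten(1), control], dim=1)
        loss = nn.functional.mse_loss(controller(x), next_pose)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # At run time the trained network is fed back its own predictions
    # frame by frame, so memory stays small while the variety of motion
    # classes comes from the training corpus.

Production systems in this line of work (e.g. phase-functioned and mode-adaptive networks) additionally condition the network weights on gait phase or motion mode; the plain MLP above is only the smallest possible stand-in for the idea.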

Tutorial Speakers

    • Dr. Nicolas Heess
      Research Scientist at DeepMind, London


    • Deep reinforcement learning for control - algorithms and architectures
      Reinforcement learning algorithms, in combination with powerful function approximators such as neural networks, have achieved impressive results in a number of challenging domains such as Atari games, StarCraft, chess, and Go. The growing interest in (deep) RL has led to significant progress but also to a large space of algorithms to choose from. In my talk I will review some of the principles underlying modern policy search methods and discuss several of the ones that are widely used in practice, with a particular focus on algorithms suitable for high-dimensional continuous action spaces (a minimal sketch follows the biography below). I will also discuss some of the special challenges arising when RL is applied to motor control tasks and present some applications, especially to the control of simulated physical characters.
      Biography
      Nicolas Heess is a Research Scientist at DeepMind, London. He is interested in questions related to artificial intelligence and machine learning, perception, motor control, and robotics. One of his long-term goals is to develop algorithms and architectures that enable embodied agents to learn to intelligently reason about and interact with their physical environment and other agents. He has worked on the theory and applications of reinforcement learning and control, unsupervised learning, probabilistic models, and inference. His current research focuses on the application of these methods at the intersection of perception and control, with a special interest in the acquisition, representation and adaptation of sensorimotor skills. Prior to joining DeepMind, Nicolas was a postdoctoral researcher at the Gatsby Unit (UCL), working with Yee Whye Teh and David Silver. He did his PhD under the supervision of Chris Williams at the University of Edinburgh and also paid several extended visits to Microsoft Research (Cambridge, UK), where he worked with John Winn and others.
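
To give readers a concrete anchor for the tutorial above: the simplest member of the policy-search family it surveys is the score-function (REINFORCE) gradient with a Gaussian policy over continuous actions. The sketch below is an illustrative toy, with made-up state/action sizes and a placeholder one-step reward; the practical algorithms covered in the tutorial add refinements such as trust regions, value-function critics, and off-policy corrections.

    # Minimal REINFORCE sketch for a continuous action space (all sizes
    # and the one-step reward are illustrative assumptions).
    import torch
    import torch.nn as nn

    STATE_DIM, ACTION_DIM = 8, 2

    mean_net = nn.Sequential(                  # state-dependent mean
        nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, ACTION_DIM)
    )
    log_std = nn.Parameter(torch.zeros(ACTION_DIM))  # learned std dev
    optimizer = torch.optim.Adam(list(mean_net.parameters()) + [log_std],
                                 lr=3e-4)

    def toy_reward(state, action):
        # Placeholder task: cancel out the first ACTION_DIM state dims.
        return -((action + state[:, :ACTION_DIM]) ** 2).sum(dim=1)

    for step in range(1000):
        state = torch.randn(256, STATE_DIM)    # batch of sampled states
        dist = torch.distributions.Normal(mean_net(state), log_std.exp())
        action = dist.sample()                 # stochastic exploration
        advantage = toy_reward(state, action)
        advantage = advantage - advantage.mean()   # batch mean baseline
        # Score-function gradient: ascend E[log pi(a|s) * advantage].
        loss = -(dist.log_prob(action).sum(dim=1) * advantage).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
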
Copyright ©2019-2020. All Rights Reserved.