Cengiz Oztireli is an associate professor at the University of Cambridge and a senior researcher at Google. His goal is a form of digital reality that is easy to create, manipulate, and experience. Problems he has been working on toward this goal include understanding the structures learned by neural networks, 3D capture, rendering, stochastic sampling techniques, geometry representations, neural geometry processing, 3D shape and pose estimation, character animation, imaging, and image/video processing.

Clement Fuji Tsang is a research scientist at NVIDIA, where he leads the development of the Kaolin 3D deep learning library.

Katja Schwarz is a PhD candidate in the Autonomous Vision Group at the University of Tübingen. Her research is at the intersection of Machine Learning, Deep Learning, and Computer Vision. She is currently interested in geometric scene understanding in 3D.

Marie-Julie Rakotosaona is a Computer Science PhD student at Ecole Polytechnique in France. Her research focuses on 3D shape analysis and processing. She is particularly interested in geometric deep learning approaches.

I am currently pursuing a PhD in the computer science and electrical engineering department at UC Berkeley. I am advised by Ren Ng and Angjoo Kanazawa, and funded by the National Science Foundation Graduate Research Fellowship Program. Previously, I completed my undergraduate degree and Master of Engineering at MIT in the Camera Culture group. My topics of interest include computer vision, computational imaging, and graphics. I am particularly interested in 3D reconstruction for applications in autonomous vehicles and robotics.

Nikhila Ravi is a Research Engineer working on Computer Vision at Facebook AI Research. She is the lead engineer on the PyTorch3D project, a library of reusable components for deep learning with 3D data. She is currently working on several engineering and research projects in the 3D space. She is also excited by the potential for applying technology and AI to solve social problems.

Tzu-Mao Li is an assistant professor in the CSE department at UCSD, working with the Center for Visual Computing. He explores the connections between visual computing algorithms and modern data-driven methods, and develops programming languages and systems to facilitate this exploration.

Vincent Sitzmann recently finished his PhD in the Stanford Computational Imaging Laboratory and is moving on to a postdoc at MIT’s CSAIL. His research interest lies in neural scene representations - the way neural networks learn to represent information about our world. His goal is to allow independent agents to reason about our world given visual observations, such as inferring a complete model of a scene with information on geometry, material, lighting, etc. from only a few observations, a task that is simple for humans but currently impossible for AI. He has previously worked on differentiable camera pipelines, VR, and human perception.

Wenzel Jakob is an Assistant Professor at EPFL (Lausanne, Switzerland), where he leads EPFL’s Realistic Graphics Lab. His research revolves around inverse graphics, material appearance modeling, and physically based rendering algorithms. He is interested in simulations that produce realistic images of our world, in reconciling the resulting data with physical measurements, and in solving complex inverse problems using differentiable simulations.

His research interests lie in deep learning on geometric and irregular data, geometry processing, computer graphics, discrete differential geometry, optimization, and computational anatomy/biology.