Schedule
Time (EDT) | Time (PDT) | Time (CEST) | Event

1300 | 1000 | 1900 | Opening remarks (Organizers)

1305 | 1005 | 1905 | Differentiable Simulation of Light
Inverse problems involving light abound throughout many scientific disciplines. Typically, a set of images captured by an instrument must be mathematically processed to reveal some property of our physical reality. This talk will provide an introduction to and overview of the emerging field of differentiable physically based rendering, which has the potential to substantially improve the accuracy of such calculations. Methods in this area propagate derivative information through physical light simulations to solve optimization problems. While still very much a work in progress, advances in recent years have led to increasingly efficient and numerically robust methods that can begin to tackle interesting real-world problems. I will give an overview of recent progress and open problems.
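As a toy illustration of the core idea of propagating derivatives through a light simulation (not one of the methods covered in the talk), the sketch below differentiates a Lambertian pixel intensity with respect to the light's elevation angle using forward-mode dual numbers:

```python
import math

# Minimal forward-mode autodiff: a dual number carries a value and a derivative.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def dsin(x):
    # sin with derivative propagation: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def pixel_intensity(theta, albedo=0.8):
    # Lambertian shading of a flat surface lit from elevation angle theta
    # (0 < theta < pi): I = albedo * (n . l) = albedo * sin(theta).
    return albedo * dsin(theta)

theta = Dual(math.pi / 6, 1.0)  # seed: d(theta)/d(theta) = 1
I = pixel_intensity(theta)
print(I.val)  # intensity 0.8 * sin(30 deg) = 0.4
print(I.dot)  # derivative dI/dtheta = 0.8 * cos(30 deg)
```

The same derivative could drive a gradient-based solver that recovers the light angle from observed intensities, which is the essence of an inverse rendering problem.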

1335 | 1035 | 1935 | The TensorFlow Graphics Ecosystem (Cengiz Oztireli, University of Cambridge and Google)

1400 | 1100 | 2000 | Differentiable Meshing (Marie-Julie Rakotosaona)
Triangle meshes remain the most popular data representation for surface geometry. Unfortunately, the combinatorial nature of the triangulation prevents taking derivatives over the space of possible meshings of a given surface. As a result, applying the modern optimization frameworks used in deep learning to the tasks of generating and manipulating meshes has been challenging. This talk will describe a deep learning method for mesh generation and a differentiable mesh representation. My ultimate goal is to show that differentiability is an important key towards constructing more accurate and efficient shape reconstruction methods.
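To make the differentiability gap concrete (a toy sketch, not the method from the talk): vertex positions are continuous and admit gradients, while connectivity choices are discrete and do not. For example, the area of a 2D triangle is a smooth function of its vertex coordinates, so an analytic gradient can be checked against finite differences:

```python
def area(ax, ay, bx, by, cx, cy):
    # Signed triangle area via the shoelace formula.
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

def grad_area_cx(ax, ay, bx, by, cx, cy):
    # Analytic partial derivative of the area with respect to cx.
    return -0.5 * (by - ay)

v = (0.0, 0.0, 2.0, 1.0, 0.0, 1.0)  # triangle A=(0,0), B=(2,1), C=(0,1)
h = 1e-6
fd = (area(v[0], v[1], v[2], v[3], v[4] + h, v[5])
      - area(v[0], v[1], v[2], v[3], v[4] - h, v[5])) / (2 * h)
print(area(*v))          # 1.0
print(grad_area_cx(*v))  # -0.5, matching the finite difference fd
```

No analogous derivative exists for the choice of which vertices to connect, which is why differentiable mesh representations are nontrivial.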

1415 | 1115 | 2015 | GRAF: Generative Radiance Fields for 3D-aware Image Synthesis
In this talk, I will present our work on Generative Radiance Fields (GRAF) for 3D-controllable image synthesis. We make significant headway on synthesizing 3D-consistent images from controllable viewpoints while training on unposed 2D images only. Our key contribution is to combine recent advances in coordinate-based neural representations with generative adversarial networks. In contrast to previous methods, this enables our approach to scale to high image resolutions and high image fidelity while preserving 3D consistency.

1430 | 1130 | 2030 | Why Do We Need Domain-Specific Languages for (Differentiable) Visual Computing?
The success of deep learning can be largely credited to domain-specific deep learning frameworks such as TensorFlow and PyTorch, which have made deep learning accessible to a large community of researchers and practitioners. However, these frameworks are designed with layered neural network architectures in mind and are insufficient for more complex visual computing programs (ray tracing, geometry processing, physics simulation, PatchMatch, Markov random fields, graph cuts, etc.). Differentiable programming requires us to explore the large visual computing toolbox beyond convolution and matrix multiplication layers, and we need new differentiable domain-specific languages to enable this exploration. I believe one way to address the complexity of visual computing programming is to decouple the mathematical complexity from the implementation. I will briefly discuss a few of our works in this direction.

1500 | 1200 | 2100 | Investigating Positional Encodings in Coordinate-Based Networks
Neural Radiance Fields (NeRFs) enable novel view synthesis of complex scenes by optimizing an underlying continuous volumetric scene function from a sparse set of input views. In the past year these representations have drawn interest from the community due to their implementation simplicity and their high-quality results. In this talk I will discuss some of the follow-up projects we have worked on to improve and extend NeRF, focusing on positional encodings. In particular, I will discuss mip-NeRF, which uses anti-aliased positional encodings to improve results.
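For context, the positional encoding that NeRF applies to its input coordinates can be sketched as follows (a minimal scalar version; mip-NeRF replaces it with an integrated encoding over conical frustums that attenuates high frequencies to reduce aliasing):

```python
import math

def positional_encoding(p, num_freqs=4):
    """NeRF-style positional encoding of a scalar coordinate p:
    gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ...,
                sin(2^(L-1) pi p), cos(2^(L-1) pi p)).
    Lifting p to these sinusoidal features lets an MLP represent
    much higher-frequency detail than it could from p alone."""
    feats = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * math.pi
        feats.append(math.sin(freq * p))
        feats.append(math.cos(freq * p))
    return feats

enc = positional_encoding(0.5, num_freqs=4)
print(len(enc))  # 8 features: sin/cos pairs at 4 frequencies
```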

1530 | 1230 | 2130 | Accelerating 3D deep learning with PyTorch3D (Nikhila Ravi)
PyTorch3D is a fast, modular, and differentiable library for 3D deep learning, built to be easily used in deep learning pipelines with 3D data. It features tools for heterogeneous batching of 3D inputs, differentiable rendering of meshes, point clouds, and volumes, and several common 3D operators and loss functions. This talk will cover the key features of the PyTorch3D library, practical examples, and research use cases.
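As a concept sketch of heterogeneous batching (plain Python, not the actual PyTorch3D API, whose batching structures differ): meshes with different vertex counts can be packed into one padded batch plus per-mesh lengths, processed jointly, and unpadded afterwards.

```python
def pad_batch(vert_lists, pad_value=0.0):
    """Pack variable-length per-mesh vertex lists into a padded batch."""
    lengths = [len(verts) for verts in vert_lists]
    max_len = max(lengths)
    padded = [
        verts + [[pad_value] * 3] * (max_len - len(verts))
        for verts in vert_lists
    ]
    return padded, lengths

def unpad_batch(padded, lengths):
    """Recover the original per-mesh vertex lists from the padded batch."""
    return [rows[:n] for rows, n in zip(padded, lengths)]

tri = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
quad = tri + [[1.0, 1.0, 0.0]]
padded, lengths = pad_batch([tri, quad])
print(lengths)                                      # [3, 4]
print(unpad_batch(padded, lengths) == [tri, quad])  # True
```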

1545 | 1245 | 2145 | Light Field Networks: Neural Scene Representation with Single-Evaluation Rendering (Vincent Sitzmann)
Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. Recent progress on 3D-structured neural scene representations suggests a path towards self-supervised learning of such representations. However, current approaches rely on volumetric rendering or sphere tracing, whose cost grows dramatically with the complexity and depth range of the scene. I will discuss our recent work, Light Field Networks, which strikes a different trade-off between strict multi-view consistency and computational cost, offering a path towards scalable self-supervised scene representation learning.
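For context, Light Field Networks parameterize rays with 6D Plücker coordinates, so a single network evaluation maps a ray directly to a color rather than integrating many samples along it. A minimal sketch of that ray parameterization (the function name is my own):

```python
import math

def plucker(origin, direction):
    """Plucker coordinates (d, m) of the ray through `origin` with
    direction `direction`: d is the normalized direction and m = o x d
    is the moment. All points on the same oriented line yield the same
    coordinates, so the 6-vector identifies the ray itself."""
    n = math.sqrt(sum(c * c for c in direction))
    d = [c / n for c in direction]
    o = origin
    m = [
        o[1] * d[2] - o[2] * d[1],
        o[2] * d[0] - o[0] * d[2],
        o[0] * d[1] - o[1] * d[0],
    ]
    return d + m  # 6D ray coordinate fed to the network

r1 = plucker([0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
r2 = plucker([0.0, 0.0, 5.0], [0.0, 0.0, 1.0])  # same line, shifted origin
print(r1 == r2)  # True: both origins lie on the same oriented line
```

Because the representation depends only on the line, the network need not be re-evaluated per sample point along the ray, which is where the single-evaluation speedup comes from.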

1600 | 1300 | 2200 | Invited talk (Yaron Lipman)

1630 | 1330 | 2230 | Kaolin: A suite of tools for 3D deep learning research (Clement Fuji-Tsang)

1645 | 1345 | 2245 | Panel Discussion (Panelists: Vincent Sitzmann, Rana Hanocka, Matthew Tancik)

1715 | 1415 | 2315 | Contributed talks (Presenters)