Augmenting Geometry in Implicit Neural Scene Representations
Doctoral project at a glance
Departments and Institutes
Period
18.01.2021 to 05.04.2026
Doctoral candidate
Supervising professor
Project Description
Computer-generated images are ubiquitous in our modern visual world. Manufacturing, entertainment, education and many other industries rely on virtual 3D models of real or fictional scenarios. Modern computer graphics can produce high-quality, even photorealistic, visual content.
However, this quality comes with two major drawbacks:
First, computing many visual effects with traditional methods is inefficient and requires long calculation times. Second, as image quality increases, so does the demand for extremely fine geometry to represent the desired scene. As a result, time-consuming and tedious manual post-processing is required: whether for individual objects or entire scenes, every little detail, including geometry, light sources and object materials, is positioned by hand.
Doctoral student Daniel Bachmann is taking a deep generative approach with neural networks, in particular neural rendering. Here, features such as the shape or colour of virtual objects or scenes are encoded as learned weights stored in a neural network. This internal representation is called a neural scene representation (NSR), a non-discretised, implicit form of storing scene data.
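To make the idea of an implicit, non-discretised representation concrete, the following is a minimal sketch of one common form of NSR: a small multilayer perceptron that maps a continuous 3D coordinate to a colour. The two-layer architecture, layer sizes and randomly initialised weights are illustrative assumptions, not the project's actual network; in practice the weights would be fitted to a specific scene.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialised weights stand in for learned ones; in a real NSR
# these weights would be optimised so the network reproduces a scene.
W1 = rng.normal(scale=0.5, size=(3, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.5, size=(64, 3))
b2 = np.zeros(3)

def scene(xyz):
    """Query the representation at continuous 3D points: (N, 3) -> RGB (N, 3)."""
    h = np.maximum(xyz @ W1 + b1, 0.0)           # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid keeps colours in [0, 1]

# Because the scene is stored implicitly in the weights, it can be queried
# at arbitrary points and resolutions -- no discretised voxel grid or mesh
# is needed.
points = rng.uniform(-1.0, 1.0, size=(5, 3))
colours = scene(points)
```

The key property illustrated here is that scene data lives entirely in the network parameters: querying a new point costs one forward pass, and memory does not grow with the sampling resolution.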