Continuing humanity’s race towards potential deepfake hell, researchers have developed a way of creating 3D models from 2D images using neural networks. The full title of the project is PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization and here’s some gibberish from the researchers:
Recent advances in image-based 3D human shape estimation have been driven by the significant improvement in representation power afforded by deep neural networks. Although current approaches have demonstrated the potential in real world settings, they still fail to produce reconstructions with the level of detail often present in the input images. We argue that this limitation stems primarily from two conflicting requirements: accurate predictions require large context, but precise predictions require high resolution. Due to memory limitations in current hardware, previous approaches tend to take low resolution images as input to cover large spatial context, and produce less precise (or low resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable. A coarse level observes the whole image at lower resolution and focuses on holistic reasoning. This provides context to a fine level which estimates highly detailed geometry by observing higher-resolution images. We demonstrate that our approach significantly outperforms existing state-of-the-art techniques on single image human shape reconstruction by fully leveraging 1k-resolution input images.
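If the coarse-to-fine idea in that abstract is hard to picture, here's a minimal PyTorch sketch of a two-level pixel-aligned implicit function. To be clear, everything in it (module names, layer sizes, the plain convolutional encoders) is a hypothetical simplification for illustration, not the authors' actual PIFuHD code: a coarse branch embeds a downsampled image for holistic context, and a fine branch samples full-resolution features and refines the prediction.

```python
import torch
import torch.nn as nn

class CoarseToFinePIFu(nn.Module):
    """Toy sketch of a two-level pixel-aligned implicit function.

    Hypothetical simplification: the real PIFuHD uses stacked-hourglass
    image encoders and more elaborate MLPs; plain convs stand in here
    so the coarse-to-fine data flow is easy to follow.
    """

    def __init__(self, coarse_dim=256, fine_dim=16):
        super().__init__()
        # Coarse encoder sees the whole (downsampled) image for context.
        self.coarse_encoder = nn.Conv2d(3, coarse_dim, kernel_size=7,
                                        stride=2, padding=3)
        # Fine encoder sees the full 1k-resolution image for detail.
        self.fine_encoder = nn.Conv2d(3, fine_dim, kernel_size=3, padding=1)
        # Coarse MLP does holistic reasoning, emitting a context feature.
        self.coarse_mlp = nn.Sequential(
            nn.Linear(coarse_dim + 1, 128), nn.ReLU(), nn.Linear(128, 64)
        )
        # Fine MLP predicts occupancy from local detail + coarse context.
        self.fine_mlp = nn.Sequential(
            nn.Linear(fine_dim + 64, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    @staticmethod
    def sample(feat, xy):
        # Bilinearly sample per-pixel features at projected (x, y) points.
        # xy: (B, N, 2) in [-1, 1]; grid_sample wants (B, N, 1, 2).
        out = torch.nn.functional.grid_sample(
            feat, xy.unsqueeze(2), align_corners=True
        )
        return out.squeeze(-1).transpose(1, 2)  # (B, N, C)

    def forward(self, img_lowres, img_highres, xy, z):
        # z: (B, N, 1) depth of each 3D query point along its camera ray.
        coarse_feat = self.sample(self.coarse_encoder(img_lowres), xy)
        fine_feat = self.sample(self.fine_encoder(img_highres), xy)
        # Holistic reasoning at low resolution...
        ctx = self.coarse_mlp(torch.cat([coarse_feat, z], dim=-1))
        # ...provides context for the precise high-resolution prediction.
        occupancy = self.fine_mlp(torch.cat([fine_feat, ctx], dim=-1))
        return torch.sigmoid(occupancy)  # inside/outside probability

model = CoarseToFinePIFu()
low = torch.rand(1, 3, 512, 512)          # downsampled context image
high = torch.rand(1, 3, 1024, 1024)       # full 1k-resolution input
pts_xy = torch.rand(1, 1000, 2) * 2 - 1   # query points in [-1, 1]
pts_z = torch.rand(1, 1000, 1)
occ = model(low, high, pts_xy, pts_z)     # (1, 1000, 1)
```

To actually get a 3D model out of this, the occupancy field would be evaluated over a dense 3D grid and the surface extracted with marching cubes, which is the standard recipe for implicit-function reconstruction methods.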
I’m all for the progression of science and technology, but I honestly have no idea where all this research is heading. If things keep progressing the way they are, eventually researchers will be able to recreate the entire Universe using somebody’s Instagram profile picture.
Keep going for a video detailing the project and showing how it turns 2D video into 3D models.
Source: Geekologie – Creating 3D models of people from 2D images