This Video is not "real"

It's been NeRFed

The video you see above isn’t “real” - it’s pieced together from images taken throughout the house and then NeRFed to create a virtual fly-through, like drone footage.

Have you ever wondered how we can make computer-generated images look more like real life? Neural Radiance Fields (NeRF) is a technique that can do just that. It builds 3D scenes by mapping spatial coordinates to colors and volumetric density. But there's a problem - the rendered images can come out jagged or blurry, an artifact known as aliasing.
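To make that mapping concrete, here's a minimal sketch in JAX of the core NeRF idea - not the paper's architecture, just the general recipe: a small network maps a 3D position and viewing direction to a color and a density, and colors sampled along a camera ray are blended together by standard volume rendering.

```python
# Minimal sketch of the NeRF idea (illustrative only, not the authors' model):
# an MLP maps (position, view direction) -> (rgb, density), and samples along
# a ray are alpha-composited into a single pixel color.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a plain MLP as a list of (weights, biases) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din)
        params.append((w, jnp.zeros(dout)))
    return params

def radiance_field(params, position, direction):
    """Map an (x, y, z) point and a view direction to (rgb, density)."""
    h = jnp.concatenate([position, direction])
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    out = h @ w + b
    rgb = jax.nn.sigmoid(out[:3])      # color in [0, 1]
    density = jax.nn.softplus(out[3])  # non-negative volumetric density
    return rgb, density

def render_ray(params, origin, direction, near=0.0, far=1.0, n_samples=64):
    """Alpha-composite samples along one ray (standard volume rendering)."""
    t = jnp.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgbs, densities = jax.vmap(lambda p: radiance_field(params, p, direction))(points)
    delta = (far - near) / n_samples
    alpha = 1.0 - jnp.exp(-densities * delta)
    trans = jnp.cumprod(1.0 - alpha + 1e-10)
    trans = jnp.concatenate([jnp.ones(1), trans[:-1]])
    weights = alpha * trans
    return (weights[:, None] * rgbs).sum(axis=0)

key = jax.random.PRNGKey(0)
params = init_mlp(key, [6, 64, 64, 4])  # 3 position + 3 direction in; rgb + density out
color = render_ray(params, jnp.zeros(3), jnp.array([0.0, 0.0, 1.0]))
```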

Recent work has tried to fix aliasing using a method called mip-NeRF 360, which reasons about cone-shaped volumes of space rather than infinitely thin rays. However, this method doesn't work well with other popular techniques that speed up training, like grid-based models such as Instant NGP (iNGP).
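For a rough sense of what "grid-based" means here, below is a hedged sketch of the Instant NGP idea: instead of one huge network, learned feature vectors live in a pyramid of coarse-to-fine grids whose entries are found by hashing voxel corners, so most of the model is a fast table lookup. The hash function, table sizes, and interpolation are all simplified for illustration.

```python
# Simplified sketch of an iNGP-style multiresolution hash-grid lookup
# (illustrative only; the real method interpolates between corners, etc.).
import jax.numpy as jnp

PRIMES = jnp.array([1, 2654435761, 805459861], dtype=jnp.uint32)

def hash_corner(corner, table_size):
    """Spatial hash of an integer voxel corner into a feature-table index."""
    h = corner.astype(jnp.uint32) * PRIMES
    return (h[0] ^ h[1] ^ h[2]) % table_size

def grid_features(tables, position, resolutions):
    """Look up nearest-corner features at every pyramid level and concatenate."""
    feats = []
    for table, res in zip(tables, resolutions):
        corner = jnp.floor(position * res).astype(jnp.int32)
        idx = hash_corner(corner, table.shape[0])
        feats.append(table[idx])
    return jnp.concatenate(feats)

tables = [jnp.zeros((2**14, 2)) for _ in range(4)]  # 4 levels, 2 features per entry
resolutions = [16, 32, 64, 128]
f = grid_features(tables, jnp.array([0.3, 0.7, 0.5]), resolutions)  # shape (8,)
```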

In a recent paper, researchers found a way to combine the best parts of mip-NeRF 360 and grid-based models like iNGP. They called this new method "Zip-NeRF." Zip-NeRF trains faster and produces more accurate images: it trains 22 times faster than mip-NeRF 360 and reduces error rates by as much as 76% on certain benchmarks.

In layman’s terms: NeRF builds realistic computer-generated 3D scenes by learning how color and density vary through space, but the results can suffer from aliasing (jagged or shimmering artifacts), and the main anti-aliasing fix, mip-NeRF 360, doesn't play nicely with the grid-based tricks, like Instant NGP, that make training fast.

Zip-NeRF combines the two: it keeps mip-NeRF 360's anti-aliasing while using iNGP-style grids for speed, training about 22 times faster than mip-NeRF 360 and cutting error rates by as much as 76% on certain tests. In practice, that means more accurate, more realistic computer-generated imagery produced much faster, with obvious benefits for applications like virtual reality, video games, and movie special effects.

This breakthrough is exciting for the world of computer-generated imagery, as it allows for faster and more accurate 3D scene rendering. Zip-NeRF has potential applications in view synthesis, generative media, robotics, and computational photography. As technology continues to advance, we can expect to see even more improvements in creating realistic images and scenes.

The researchers implemented Zip-NeRF in JAX, building on the mip-NeRF 360 codebase. They replaced the large MLP used by mip-NeRF 360 with iNGP's pyramid of voxel grids and hashes, along with adjustments for anti-aliasing and other modifications to improve performance. They used separate proposal NGPs and MLPs for each round of proposal sampling, and their NeRF MLP has a much larger view-dependent branch than iNGP's.
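To make that layout easier to picture, here is a hedged structural sketch in JAX of how those pieces could fit together - the component names, sizes, and shapes are invented for illustration, not taken from the authors' code: each round of proposal sampling gets its own small grid and MLP, and the final NeRF model pairs its grid with a wider view-dependent branch that turns features plus the view direction into color.

```python
# Structural sketch only (invented names and sizes, not the authors' code).
import jax
import jax.numpy as jnp

def make_mlp(key, sizes):
    """Tiny helper: build an MLP as a list of (weights, biases) pairs."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) * 0.01, jnp.zeros(dout)))
    return params

def apply_mlp(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, 4)

model = {
    # one small grid + MLP per round of proposal sampling (two rounds here)
    "proposal": [
        {"grid": jnp.zeros((2**18, 4)), "mlp": make_mlp(keys[0], [4, 64, 1])},
        {"grid": jnp.zeros((2**18, 4)), "mlp": make_mlp(keys[1], [4, 64, 1])},
    ],
    # the final NeRF: grid features -> density plus a feature bottleneck,
    # then a larger view-dependent branch that predicts color
    "nerf": {
        "grid": jnp.zeros((2**21, 4)),
        "density_mlp": make_mlp(keys[2], [4, 64, 1 + 15]),     # density + bottleneck
        "view_mlp": make_mlp(keys[3], [15 + 3, 256, 256, 3]),  # wider view branch
    },
}

# e.g. turning one looked-up feature vector into a raw density prediction
feat = model["nerf"]["grid"][0]
raw = apply_mlp(model["nerf"]["density_mlp"], feat)
```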

An essential modification they made to iNGP was imposing a normalized weight decay on the feature codes stored in its pyramid of grids and hashes. This simple trick greatly improved performance compared to no weight decay and significantly outperformed naive weight decay.
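One plausible reading of that trick, sketched below in JAX: naive weight decay sums the squared feature codes over every table entry, so the huge fine-resolution levels dominate the penalty, whereas a normalized version penalizes the mean squared value of each pyramid level so every level contributes comparably. The hyperparameter value and function names here are illustrative, not the authors'.

```python
# Sketch of normalized weight decay over a pyramid of feature grids
# (illustrative reading of the idea; names and weights are assumptions).
import jax.numpy as jnp

def naive_weight_decay(grids):
    """Sum of squared feature codes across all levels (dominated by fine levels)."""
    return sum(jnp.sum(g ** 2) for g in grids)

def normalized_weight_decay(grids):
    """Mean squared feature code per level, summed over levels."""
    return sum(jnp.mean(g ** 2) for g in grids)

grids = [jnp.ones((2**l, 4)) * 0.1 for l in range(14, 18)]  # toy pyramid of hash tables
loss_reg = 0.1 * normalized_weight_decay(grids)             # added to the training loss
```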

The results of the experiments showed that Zip-NeRF is not only faster but also more accurate than previous methods. With these improvements, Zip-NeRF has the potential to revolutionize computer-generated imagery and create even more realistic images and scenes.

Authors
Justin Hart