Gaussian Splatting NeRF
I am absolutely staggered by Gaussian Splatting for NeRFs. Fast and accurate? And it runs on my RTX 2060 mobile GPU (with some minor changes). I'm floored.
Background
I used my Razer Blade 15 2020 with an Intel Core i7-12850H and an Nvidia RTX 2060 mobile GPU with 6GB of VRAM, running Windows 11 v22000.2416. This time, though, I did not use WSL2 as in previous posts about NeRFs. I ended up using CUDA 12.2 with Visual Studio 2019.
Training
I followed the directions in the GitHub README and trained on the Lego NeRF synthetic data I had from running previous NeRFs.
python train.py -s ..\nerf\data\nerf_synthetic\lego --densify_until_iter=7000
Note the `--densify_until_iter=7000`. This comes from the FAQ, which states that there are a couple of knobs to tweak if you don't have 24GB of VRAM. This one worked for me and my measly 6GB of VRAM. 🤷
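If that still doesn't fit, the training script's argument parser exposes a few more knobs in the same spirit. I only needed the densify flag, so treat this combination as an untested sketch rather than a recipe:

```powershell
# Sketch of extra memory-saving options (I have not benchmarked this combination):
#   --densify_until_iter 7000   stop densification (creating new Gaussians) early
#   --data_device cpu           keep the source images in system RAM instead of VRAM
#   -r 2                        load the training images at half resolution
python train.py -s ..\nerf\data\nerf_synthetic\lego --densify_until_iter=7000 --data_device cpu -r 2
```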
Visualizing
This ended up taking longer than I thought it would. In particular, it was the same problem as usual: a mismatched library, in this case CUDA. First, `cmake` was too old, so I used `conda` to install a newer version. That was weird, because `cmake` is listed as a dependency in the `environment.yaml`. Next, the `ninja` executable was missing, which was also weird since it's in the `environment.yaml` file. Nevertheless, I added it through `conda`.
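For the record, the fix was just installing both into the conda environment by hand (the environment name comes from the repo's environment file, and the exact versions are whatever conda-forge resolved for me):

```powershell
# Add a newer cmake plus the missing ninja to the repo's conda environment.
conda activate gaussian_splatting
conda install -c conda-forge cmake ninja
```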
Building `SIBR_gaussianviewer`, though, caused a strange error:
GaussianView.obj : error LNK2019: unresolved external symbol cudaGetDeviceProperties_v2 referenced in function "public: __cdecl sibr::GaussianView::GaussianView(class std::shared_ptr<class sibr::BasicIBRScene> const &,unsigned int,unsigned int,char const *,bool *,int,bool,bool,int)" (??0GaussianView@sibr@@QEAA@AEBV?$shared_ptr@VBasicIBRScene@sibr@@@std@@IIPEBDPEA_NH_N3H@Z) [C:\Users\mark\Projects\gaussian-splatting\SIBR_viewers\wbuild\src\projects\gaussianviewer\renderer\sibr_gaussian.vcxproj]
C:\Users\mark\Projects\gaussian-splatting\SIBR_viewers\install\bin\sibr_gaussian_rwdi.dll : fatal error LNK1120: 1 unresolved externals [C:\Users\mark\Projects\gaussian-splatting\SIBR_viewers\wbuild\src\projects\gaussianviewer\renderer\sibr_gaussian.vcxproj]
Googling for `cudaGetDeviceProperties_v2` turned up very few hits, but a handful of them noted that this symbol is in CUDA 12. But, I was using CUDA 11, specifically CUDA 11.3.
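As a quick sanity check of which toolkit a given shell actually picks up:

```powershell
# Print the CUDA toolkit version of the nvcc on PATH, and what CUDA_PATH points at.
nvcc --version
$env:CUDA_PATH
```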
So, I decided to switch to WSL2 only for the viewer. And it compiled. But, WSL2 doesn't support a high enough version of OpenGL for the Gaussian Viewer.
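If you want to check that for yourself, `glxinfo` run inside the distro reports what WSLg's graphics stack actually exposes (it lives in the `mesa-utils` package on Ubuntu):

```powershell
# Query the OpenGL version available inside the default WSL2 distro.
wsl -e bash -lc "glxinfo -B | grep 'OpenGL version'"
```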
So, I went back to Windows 11, set `$env:CUDA_PATH` to `$env:CUDA_PATH_12_2`, manually edited the `CMakeLists.txt` to change 11.3 to 12.2, and recompiled.
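Roughly, that amounted to the following from a PowerShell prompt in `SIBR_viewers`. This is a sketch from memory: the `wbuild` directory and RelWithDebInfo config come from the paths in the linker error above, and the per-version CUDA variable on your machine may be named differently (the stock installer calls it `CUDA_PATH_V12_2`).

```powershell
# Point CMake at the CUDA 12.2 toolkit for this shell session.
$env:CUDA_PATH = $env:CUDA_PATH_V12_2

# Reconfigure and rebuild the viewer's install target.
# If CMake stubbornly keeps finding 11.3, deleting wbuild\CMakeCache.txt
# before reconfiguring forces it to re-detect the toolkit.
cmake -B wbuild .
cmake --build wbuild --target install --config RelWithDebInfo
```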
And it worked (without the CUDA/OpenGL interop, though):
.\install\bin\SIBR_gaussianViewer_app_rwdi.exe -m C:\Users\mark\Projects\gaussian-splatting\output\e1f47668-a\
And, Gaussian splatting looks fantastic.