3D Gaussian Splatting and NeRFs for Burning Man Art Preservation

Comments: 38

  • @kidocreates
    9 months ago

    I'm so curious to see how Gaussian splats are going to evolve

  • @maxsuica6144
    9 months ago

    From gaussian splats to gaussian shats

  • @Gilotopia
    8 months ago

    Animated splats and AI editing of the splats are probably next

  • @zac2877
    6 months ago

    AI developed interactions and video games

  • @joelface
    9 months ago

    SERIOUSLY, after watching the struggles the previous methods had... the GSPLAT version looked like absolute WIZARDRY! WOW!

  • @jbienz
    9 months ago

    I totally agree. I fully expected better detail in the areas that were covered by other methods, but I didn't expect radically better coverage or new details like the LED lights. This wizardry is absolutely welcome. It will enable people with no experience to reconstruct usable environments, and those with practice to capture pristine moments in time.

  • @joelface
    9 months ago

    @@jbienz I know Gsplats are point clouds, but it seems like they are far more detailed and intricate than any kind of mesh. It's got to be only a matter of time before you can press a button and have all of that detail in mesh form as well, don't you think? Though I guess part of the beauty of the gsplat is also that it can capture reflections and light scatter in really impressive ways? You've got to think cleaned-up gsplats will be immediately usable as digital backgrounds for TV shows, anyway!

  • @jbienz
    9 months ago

    @@joelface The challenge is that Point Clouds, NeRFs and GSplats are like "gas clouds" while traditional models are like "hollow shells". Imagine how hard it is to convert one to the other. Like, what parts of the gas cloud do you draw circles around and make solid? This is generally done by looking at some threshold of density, which is why we end up with holes. The other issue with the "hollow shell" approach of traditional modeling is that it can't accurately represent light transmission effects like refraction, where how much the light bends depends on the density of the material. We're likely just starting to see the emergence of a new rendering pipeline for the future.
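
    The "draw circles around the gas cloud" step described above (density thresholding) can be sketched as a toy example. Everything here is illustrative, not any real meshing pipeline: a single synthetic Gaussian density field sampled on a small grid, with hypothetical threshold values, in pure Python.

    ```python
    import math

    # Toy density field: one 3D Gaussian "blob". Meshing tools decide
    # solid vs. empty by comparing density against a threshold.
    def density(x, y, z, sigma=1.0):
        r2 = x * x + y * y + z * z
        return math.exp(-r2 / (2.0 * sigma * sigma))

    def solid_voxels(threshold, n=16, extent=3.0):
        """Count grid cells whose density clears the threshold."""
        count = 0
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    # map grid indices into [-extent, extent]
                    x = -extent + 2 * extent * i / (n - 1)
                    y = -extent + 2 * extent * j / (n - 1)
                    z = -extent + 2 * extent * k / (n - 1)
                    if density(x, y, z) >= threshold:
                        count += 1
        return count

    # A higher threshold keeps fewer voxels: thin, wispy regions of the
    # "gas cloud" fall below it and become holes in the extracted shell.
    low = solid_voxels(0.05)
    high = solid_voxels(0.5)
    ```

    Raising the threshold shrinks the solid region, which is exactly the trade-off that produces holes in thin or translucent areas.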

  • @GrowthMindsetDigital
    7 months ago

    Great! Thank you.

  • @muraldoctor1
    8 months ago

    Great work! Lovely! I work in preservation, so thank you for that!

  • @zippo340415
    9 months ago

    Great and nice job bro!

  • @jbienz
    9 months ago

    Hey thanks! Are you working on any projects using Splats? If so, anything I can check out online or on GitHub?

  • @SergeyPower
    9 months ago

    pretty thorough video, thanks!

  • @jbienz
    9 months ago

    Thanks Sergey! Let me know if there's anything you're still wondering about this tech. I'm enjoying learning about it. BTW NerfGuru is going to have a video on surface reconstruction soon. I don't want to announce too much ahead of schedule, but definitely keep an eye there too.

  • @TheCynicalNihilist
    9 months ago

    this will be great in the future for VR concerts

  • @conkin998
    9 months ago

    genius idea

  • @b4rtmod
    9 months ago

    That's really interesting.

  • @haikeye1425
    8 months ago

    Welcome to our new UE plugin: "UEGaussianSplatting: 3D Gaussian Splatting Rendering Feature For UE"

  • @jbienz
    8 months ago

    Sounds cool, can you provide a link?

  • @haikeye1425
    8 months ago

    @@jbienz Any link will be removed by YouTube; you can find it by searching Google for: "UEGaussianSplatting: 3D GaussianSplatting Rendering Feature For UE"

  • @pixxelpusher
    9 months ago

    I wonder if, instead of downsampling the 8K portrait images, there's a way to cut them up into multiple images, so you don't lose the resolution and essentially gain more images that get fed into the trainer. You could get around 16 1.6K-sized images out of an 8K. I guess they'd still need to have common elements in them to track, so even downscaling to 4K and then cutting them up into 4 images might work and give a more detailed result.

  • @jbienz
    9 months ago

    Another commenter had a similar idea. I replied to him, but I'll copy it here as well. I divided each 8K image into 6 rows and 6 columns, then added 360 x 640 pixels of overlap. I fed the SFM algorithm 7,346 images, but in the end it was only able to solve for 11 of them. Deeply disappointing. The SFM algorithm is tuned for smaller delta changes between frames and sadly doesn't appear to handle image slices.

  • @pixxelpusher
    9 months ago

    ​@@jbienz Oh well that's a pity! Thanks for trying, would have been great if it had worked.

  • @brettcameratraveler
    9 months ago

    VR + Burning Man + Unreal + Art installations. Definitely my language. Do you have a link to any more info on the VR project these are going into?

  • @jbienz
    9 months ago

    You bet! It's an evolving work in progress. The main website is brcvr.org. The CEO and all-around amazing human being is Athena Demos. linkedin.com/in/athenademos.

  • @brettcameratraveler
    7 months ago

    @jbienz OH yeah, I know Athena. We talked over the phone about the "holodeck" lab I'm building for a school.

  • @alpaykasal2902
    9 months ago

    Unity absolutely does support Gaussian splats... I think it may have had the plug-in and sample scenes before Unreal did.

  • @jbienz
    9 months ago

    Indeed it does. But as of the date this video was produced, there was no open source implementation of Gaussian Splatting for Unity that was compatible with VR. There was one open source implementation that was compatible with 2D screen space rendering. And there was one binary released without any source code. More projects are showing up on Twitter all the time, but I'm still waiting for an open source Unity implementation with VR support. I'd love to see someone post one here.

  • @alpaykasal2902
    9 months ago

    @@jbienz Ahh, I understand, VR support, gotcha. thanks.

  • @DJ-Illuminate
    9 months ago

    So a radiant is similar to a three-dimensional pixel.

  • @jbienz
    9 months ago

    In some ways, yes. But rather than thinking of it like a voxel, I think of it more like a brush stroke that changes based on the angle it's viewed from. From one angle it might be very opaque. From another angle it might be more translucent. I don't know if that's an entirely accurate way of representing it, but that's what my eyes see when I zoom way in and try to imagine what's going on.

  • @bucklogos
    9 months ago

    Damn cool. Split each of your images into 4 tiles of 2k each and run it again, I want to see the difference.

  • @jbienz
    9 months ago

    Huh, I hadn't thought about that, but you're right. There would probably be enough overlap for the SFM algorithm to match the tiles up. I may actually try that!

  • @jbienz
    9 months ago

    So far the results aren't promising. The SFM algorithm (Structure from Motion) didn't like it split that way, I think because there was no overlap in the splits. I'm going to try again tomorrow, overlapping the splits by some proportion.

  • @bucklogos
    9 months ago

    @@jbienz ah yes that makes sense. Shame. Leaving some overlap should help. But it depends on how the algorithm works: if it processes frames in order and expects only minor differences between frames, then the whole idea goes out the window. Probably only a matter of time before something like this is officially supported though.

  • @jbienz
    9 months ago

    @@bucklogos So, I divided each 8K image into 6 rows and 6 columns. That made each cell 1280 x 720. To those cells I added 360 x 640 pixels of overlap, which brought each cell back up to 1080 x 1920. In total, I fed the SFM algorithm 7,346 images. It churned for nearly 10 HOURS looking for poses. In the end, it was only able to solve for 11 (yes, ELEVEN) of the 7,346 images. I was deeply disappointed. The SFM algorithm is clearly looking for small delta changes between frames and is not meant to handle image slices.
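
    For reference, the split described above can be reproduced in a few lines of Python. This is one plausible reading of the numbers (an 8K portrait frame of 4320 x 7680, a 6 x 6 grid of 720 x 1280 base cells, with the 360 x 640 overlap split evenly between adjacent cells), not the actual script that was used:

    ```python
    def tile_boxes(width, height, rows, cols, overlap_x, overlap_y):
        """Return (left, top, right, bottom) crop boxes for a rows x cols
        grid, growing each cell by the per-side overlap and clamping the
        edge cells to the frame boundary."""
        cell_w = width // cols
        cell_h = height // rows
        boxes = []
        for r in range(rows):
            for c in range(cols):
                left = max(0, c * cell_w - overlap_x)
                top = max(0, r * cell_h - overlap_y)
                right = min(width, (c + 1) * cell_w + overlap_x)
                bottom = min(height, (r + 1) * cell_h + overlap_y)
                boxes.append((left, top, right, bottom))
        return boxes

    # 8K portrait frame, 6 x 6 grid, half of the 360 x 640 overlap on
    # each side: interior cells come out 1080 x 1920 as described above.
    boxes = tile_boxes(4320, 7680, 6, 6, 180, 320)
    ```

    Each box could then be passed to an image-cropping call before handing the tiles to the SFM step; the overlap is what gives adjacent tiles shared features to match on.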

  • @bucklogos
    9 months ago

    @@jbienz aw shit so that was a big waste of time. :(