
I3D'24 Technical Paper: Filtering After Shading with Stochastic Texture Filtering

This is a recorded talk accompanying our paper:
"Filtering After Shading with Stochastic Texture Filtering" by Matt Pharr, Bartlomiej Wronski, Marco Salvi, and Marcos Fajardo.
research.nvidia.com/labs/rtr/...
Published at ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2024, where it won the Best Paper Award.
Please see the link above for publication details.
Thank you for watching!

Comments: 16

  • @samson7294 · 2 months ago

    Don't know if anyone will read this but I don't make games, I don't code, I just game! However, I love knowing the intricacies of what makes my hobby possible. I know techs/devs don't get the credit when it's due and get the hate when it's not, but please know that you guys make my life worth living! I am mostly bedridden due to a spinal cord injury that left me paralyzed 12 years ago. Going outside is a chore due to my lack of mobility and constant pain, so exploring the digital worlds and experiencing the stories that are built through your hard work means the world to me! I save up so I can upgrade my PC to get the best experience possible within my budget. It's worth every penny! It keeps me wanting to live just long enough to see what technology will come next!

  • @MinhNguyen-fp2mk · 1 month ago

    This is what the graphics community needs right now.

  • @robinbenzinger5646 · 2 months ago

    Great paper and presentation. Can't wait for these new filtering methods and neural textures to arrive in future games.

  • @PawelStoleckiTechArt · 1 month ago

    Thanks for sharing, great to see advancements in that field!

  • @Atrix256 · 2 months ago

    Great video and paper Bart :)

  • @AuroraLex · 1 month ago

    I hope this gets implemented in Microsoft Flight Simulator for the speed/quality of cloud rendering.

  • @jeffreyzhuang4395 · 1 month ago

    You always amaze ordinary game devs like me. I'm very curious about how you study and what math courses you took in college.

  • @SuperXzm · 1 month ago

    I still don't understand why people try to erase specular dots, because that's how specular objects work in real life. The little glittery sparkles crawling around look natural.

  • @袁军平 · 18 days ago

    I have a question. How do you do 64x filtering when using Stochastic Texture Filtering?

  • @Ricky_Lauw · 1 month ago

    Great paper! We have just made a material system for the Unreal Marketplace that relies heavily on temporally dithered triplanar mapping like you mentioned. I would love to see Unreal implement your novel approach to texture filtering. What are your thoughts on using this technique, or something similar, to sample rough reflections? I believe that currently relies on mipmapping as well. Also, I have been thinking that in future rendering we would probably render at multiple times the screen resolution and downsample to the screen resolution, with all the benefits that could come with that...
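
For context on the technique this comment refers to, here is a minimal numpy sketch of temporally dithered (stochastic) triplanar mapping. It is an illustration under assumptions, not code from the paper: the sample_projection callback stands in for the per-axis texture fetch, and the blend-sharpening exponent is an arbitrary choice.

    import numpy as np

    def triplanar_weights(normal):
        # Standard triplanar blend weights from the absolute surface normal,
        # sharpened with an (assumed) exponent and normalized to sum to one.
        w = np.abs(np.asarray(normal, dtype=float)) ** 4.0
        return w / w.sum()

    def stochastic_triplanar_sample(sample_projection, normal, u):
        # Instead of fetching and blending all three planar projections,
        # pick ONE projection with probability equal to its blend weight.
        # u is a per-pixel random (or blue-noise / temporally dithered)
        # number in [0, 1); averaged over frames the result converges to
        # the full weighted blend, at one texture fetch per pixel.
        w = triplanar_weights(normal)
        cdf = np.cumsum(w)
        axis = min(int(np.searchsorted(cdf, u)), 2)  # 0 = X, 1 = Y, 2 = Z projection
        return sample_projection(axis)

The deterministic baseline this replaces is sum(w[i] * sample_projection(i) for i in range(3)), i.e. three fetches and shades per pixel; the stochastic version trades that cost for noise that a temporal accumulator (TAA/DLSS) resolves.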

  • @Cloroqx · 1 month ago

    Link?

  • @Ricky_Lauw · 1 month ago

    @@Cloroqx kzread.info/dash/bejne/gq5svM6oZMXJiZc.htmlsi=TlEsK0p2V3tWZR69

  • @Ricky_Lauw · 1 month ago

    @@Cloroqx Ah it seems I'm not allowed to add links in comments, but the asset is called NOVA - modular sci-fi kit and there is also a separate kit with materials only. But I do not want to hijack this comment section.

  • @Cloroqx · 1 month ago

    @@Ricky_Lauw Thanks!

  • @Waffle4569 · 1 month ago

    You lost me at DLSS. If the method needs to lean on DLSS to get decent results, that excludes *a lot* of platforms. I'm not yet convinced by temporal filtering; hell, I'm not convinced by TAA, given how brutal its overblurring and artifacts tend to be. I don't trust a compressed video to convey how different it is from traditional filtering.

  • @jacobcrowley8207 · 1 month ago

    This seems like a problem with the neural texture technique itself. It uses downscaled features, but if you can't filter them (e.g. with anisotropic filtering), then it must be relying on aliasing to pass high-frequency information from the lower resolutions. See Alias-Free GAN / StyleGAN3 and its comparisons with StyleGAN2 showing the feature maps. An idea for a simple solution: during optimization of the neural textures, randomly shift the target image over by a pixel or so, and interpolate the low-resolution feature maps in the neural texture accordingly. For example, if the neural texture's features are at 0.25x the resolution, then move the image over 1 pixel and move the features over 0.25 pixels, interpolating between the neighboring values the same way hardware filtering would. Perhaps then it would learn to look good with anisotropic filtering without needing any additional filtering in the neural texture decoder.
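
A rough numpy sketch of the augmentation proposed above. The array shapes, helper names, and border handling are assumptions made for illustration, not anything from the paper or the comment: the full-resolution target is shifted by a whole pixel while the lower-resolution feature grid is shifted by the matching fraction of a texel, interpolated bilinearly the way hardware filtering would resolve an off-texel lookup.

    import numpy as np

    def bilinear_shift(grid, dx, dy):
        # Shift an (H, W, C) feature grid by a fractional texel offset using
        # bilinear interpolation, mimicking how hardware filtering would
        # resolve a lookup that lands between texels (edges are clamped).
        h, w, _ = grid.shape
        xs = np.clip(np.arange(w) + dx, 0.0, w - 1.0)
        ys = np.clip(np.arange(h) + dy, 0.0, h - 1.0)
        x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
        x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
        fx = (xs - x0)[None, :, None]
        fy = (ys - y0)[:, None, None]
        top    = (1 - fx) * grid[y0][:, x0] + fx * grid[y0][:, x1]
        bottom = (1 - fx) * grid[y1][:, x0] + fx * grid[y1][:, x1]
        return (1 - fy) * top + fy * bottom

    def shifted_training_pair(target_rgb, features, feature_scale=0.25, rng=np.random):
        # One augmentation step: shift the full-resolution target by a random
        # whole pixel (wrapping at the border for simplicity) and shift the
        # low-resolution feature grid by the matching fraction of a texel,
        # so the decoder is optimized against interpolated features rather
        # than only texel-aligned ones.
        sx, sy = rng.randint(-1, 2), rng.randint(-1, 2)
        shifted_target = np.roll(target_rgb, shift=(sy, sx), axis=(0, 1))
        shifted_features = bilinear_shift(features, sx * feature_scale, sy * feature_scale)
        return shifted_target, shifted_features

The pair (shifted_target, shifted_features) would then be fed to whatever reconstruction loss the neural texture is trained with, in place of the texel-aligned pair.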