[CVPR2024] HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting
Project Page: nowheretrix.github.io/HiFi4G/
arXiv: arxiv.org/abs/2312.03461
We have recently seen tremendous progress in photo-real human modeling and rendering. Yet, efficiently rendering realistic human performance and integrating it into the rasterization pipeline remains challenging. In this paper, we present HiFi4G, an explicit and compact Gaussian-based approach for high-fidelity human performance rendering from dense footage. Our core intuition is to marry the 3D Gaussian representation with non-rigid tracking, achieving a compact and compression-friendly representation. We first propose a dual-graph mechanism to obtain motion priors, with a coarse deformation graph for effective initialization and a fine-grained Gaussian graph to enforce subsequent constraints. Then, we utilize a 4D Gaussian optimization scheme with adaptive spatial-temporal regularizers to effectively balance the non-rigid prior and Gaussian updating. We also present a companion compression scheme with residual compensation for immersive experiences on various platforms. It achieves a substantial compression rate of approximately 25 times, with less than 2MB of storage per frame. Extensive experiments demonstrate the effectiveness of our approach, which significantly outperforms existing approaches in terms of optimization speed, rendering quality, and storage overhead.
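The abstract mentions a companion compression scheme with residual compensation that reaches roughly 25× compression at under 2 MB per frame. The authors' actual codec is not described here; the sketch below only illustrates the general residual-compensation idea in a minimal, hypothetical form: coarsely quantize per-Gaussian attributes, and keep the residual so the quantization error can be compensated (or dropped for lossy storage). The step size, array shapes, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quantize_with_residual(params, step=0.01):
    """Illustrative residual-compensated quantization (not the paper's codec).

    Coarsely quantizes attributes to int16 codes and returns the residual
    needed to recover the original values.
    """
    q = np.round(params / step).astype(np.int16)          # coarse integer codes
    residual = params - q.astype(np.float32) * step       # compensation term
    return q, residual

# toy data: 1000 Gaussians x 8 float32 attributes (shapes are made up)
params = np.random.default_rng(0).standard_normal((1000, 8)).astype(np.float32)
q, res = quantize_with_residual(params)

# with the residual, reconstruction is essentially exact
recon = q.astype(np.float32) * 0.01 + res

# dropping the residual gives lossy storage: int16 codes alone already
# halve the bytes vs float32; entropy coding the codes would shrink further
ratio = params.nbytes / q.nbytes
print(ratio)  # 2.0 before any entropy coding
```

In a real pipeline the quantized codes would additionally be entropy coded, which is where large compression factors such as the reported ~25× would come from; the ratio printed here reflects only the bit-width reduction.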
Yuheng Jiang, Zhehao Shen, Penghao Wang, Zhuo Su, Yu Hong, Yingliang Zhang, Jingyi Yu, Lan Xu. HiFi4G: High-Fidelity Human Performance Rendering via Compact Gaussian Splatting.
Comments: 20
Holy smokes. It's like an interactive video. Now, imagine recreating famous slo-mo scenes from The Matrix. Instead of a hundred cameras mounted in fixed positions producing a single jittery sequence, we get these Gaussian splats. Incredible.
6 months ago
You still need a lot of cameras for complete 4D 360° capture. GaSP can only process the data it has; it won't generate what wasn't captured.
I predict that this will be a huge technology for the movie industry!
This is awesome! This will remove the requirement for greenscreen, but directors/DPs will still need to match lighting to composite. I can't wait for this to release!
@bause6182
3 months ago
The time savings we could have are incredible: you just shoot a video and incorporate your 3D model directly into the scene, with no more need for motion tracking to integrate a flat video into the scenery. I can't wait to try it.
Very impressive and informative, thank you!
This is the future!
Wow 🥰🥰 this is incredible. When is the public test?
Can't wait for the day we can render it in the cloud like Polycam and export it as a mesh to Blender.
This is amazing! 👏How do you even set something like this up?
Incredible! Will this become publicly accessible soon? Would love to try this out! Are there any limits to the number of cameras you have to use? And do the cameras need to be high-end, or would this work with any camera quality? And what is the processing time for, let's say, 10 seconds of 4D footage?
Are the cameras placed around the subject in 360 degrees?
Can you tell me how many cameras the source material was shot on?
Can these composite into Google's recently announced SMERF?
So what is the driver, image or video?
Is your tool accessible, and how do you use it? I'm really interested.
How can I download the app?
Can we use it in Unreal Engine?
All outputs and no inputs. The input video and how it was captured should be shown first.
Any demo online?