
Unleash the power of 360 cameras with AI-assisted 3D scanning. (Luma AI)

Comments: 74

  • @thatvideoguy4k
    @thatvideoguy4k · 7 months ago

    I got to one of your videos while looking for turntable alternatives, and here I am watching the third one in a row that has nothing to do with what I was originally looking for. Well done mate, you make very engaging and informative videos 👍

  • @AClarke2007
    @AClarke2007 · 10 months ago

    Keeping us all up to date and realising that 360 isn't just a gimmick any more!

  • @johnw65uk
    @johnw65uk · 4 months ago

    Tip: merge the vertices on the model and you can sculpt inside a 3D package without the mesh breaking apart.
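
    A minimal sketch of that vertex-merge tip, assuming Blender's Python API (bpy/bmesh) with the imported scan as the active object; the distance threshold is a guess to tune per model:

    ```python
    import bpy
    import bmesh

    obj = bpy.context.active_object  # the imported scan mesh

    bm = bmesh.new()
    bm.from_mesh(obj.data)

    # Weld vertices that sit closer together than the threshold so the
    # shell behaves as one connected surface while sculpting.
    bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=0.0001)

    bm.to_mesh(obj.data)
    bm.free()
    obj.data.update()
    ```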

  • @Decoii
    @Decoii · 11 months ago

    Thank you for this. Even the rough models will be great references in terms of scale.

  • @ArcticSeaCamel
    @ArcticSeaCamel · a year ago

    Oh wow! Great stuff coming. Once we can turn that into a building's IFC components, we're all set!

  • @ney.j_
    @ney.j_ · a year ago

    Excellent video, appreciate the work you put into it!

  • @camshand
    @camshand · a year ago

    Love the car example for typically "impossible" camera moves through windows. I wonder whether putting the windows up and down as the camera moves through might trick it into keeping the windows up for the NeRF scan, allowing you to move through the passenger windows in the final animation.

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Good idea. You should try that, although I think the car would then need to be scanned twice: once with the windows open and once with them closed, and then those parts of the model combined somehow, for example in Unreal, since NeRF creates a lumpy mesh if anything moves during the scan.
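
    The reply suggests combining in Unreal; purely as a hypothetical illustration, here is a minimal Blender Python sketch of that combining step, assuming both scans are already imported (the object names are placeholders):

    ```python
    import bpy

    # Hypothetical names for the two imported scans.
    closed = bpy.data.objects["car_windows_closed"]
    opened = bpy.data.objects["car_windows_open"]

    # Join the open-window capture into the closed-body scan
    # so both parts live in a single object.
    bpy.ops.object.select_all(action='DESELECT')
    closed.select_set(True)
    opened.select_set(True)
    bpy.context.view_layer.objects.active = closed
    bpy.ops.object.join()
    ```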

  • @tribaltheadventurer
    @tribaltheadventurer · 10 months ago

    This is fantastic work Olli, keep up the good work.

  • @easyweb3056
    @easyweb3056 · 4 months ago

    Excellent content, keep going!

  • @fallogingl
    @fallogingl · 10 months ago

    Unironically the lump looks like the orb from Donnie Darko 😂

  • @Sigurgeir
    @Sigurgeir · 11 months ago

    This is just brilliant, thank you for the great explanation. I wonder if this method would be useful to scan a bigger environment like a whole street from a moving car to use as a backdrop in a studio recording.

  • @user-rv1yo3ww3t
    @user-rv1yo3ww3t · a year ago

    Great work, thank you for the info :) Very interesting!

  • @f1pitpass
    @f1pitpass · 10 months ago

    Thank you Olli!

  • @TheBFHmontage
    @TheBFHmontage · a year ago

    Great informative video, just what I needed, thanks!

  • @mariorodriguez8627
    @mariorodriguez8627 · a year ago

    Great work, thank you for the info :)

  • @smiledurb
    @smiledurb · a year ago

    very interesting!

  • @lobodonka
    @lobodonka · a year ago

    Nicely described video! Your interests match mine, so I just subscribed! Bring us some more goodies. 👍

  • @saemranian
    @saemranian · a year ago

    Thanks for sharing

  • @madedigital
    @madedigital · a year ago

    very good info

  • @notanotherbrick6114
    @notanotherbrick6114 · a year ago

    Fascinating! Can you view the generated models in a VR headset, such as the Quest 2? And in that case, can you walk around inside the model? This would be a perfect application for that!

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Sure, it can be done in Unreal. There is a video from Bad Decision Studio where the guys test how NeRF models run in VR in Unreal Engine. Check it out: kzread.info/dash/bejne/lH-olNGPhNqfYJM.html

  • @o0oo888oo0o
    @o0oo888oo0o · 11 months ago

    Thank you

  • @Mauriliocaracci
    @Mauriliocaracci · 11 months ago

    Great! Thanks

  • @IdahoMthman
    @IdahoMthman · 10 months ago

    I will have to try this with my X3

  • @gaussiansplatsss
    @gaussiansplatsss · a month ago

    Have you tried it? Can you share it with me?

  • @luckybarbieri8533
    @luckybarbieri8533 · 6 months ago

    Great info, thanks. Do you think this setup would be good for creating a 3D model of a large place, like a church for example? Or do you recommend another type of setup? Thank you!

  • @gaussiansplatsss
    @gaussiansplatsss · 4 months ago

    Which is better for you, Postshot or Luma AI?

  • @OlliHuttunen78
    @OlliHuttunen78 · 4 months ago

    I'd say Postshot, because you can train your model more accurately than in Luma AI and you can live-preview the process.

  • @dewanthornberry7938
    @dewanthornberry7938 · 9 days ago

    Please, can this be used for room interiors? And can those then be used as fine-grained data points for comparison?

  • @TrasThienTien
    @TrasThienTien · 4 months ago

    🤗🤗🤗

  • @jasoncow2307
    @jasoncow2307 · 11 months ago

    Hi! I'm wondering: is the video you uploaded the original 360 footage, or re-cut single-lens camera footage?

  • @OlliHuttunen78
    @OlliHuttunen78 · 11 months ago

    Yes, I tested both. The original full equirectangular footage does not give as good a result as video cropped from the full 360 video. Luma works better if you can go around your target.

  • @lennycecile3775
    @lennycecile3775 · 11 months ago

    Hi Olli, great content. I'm curious whether this will work with the Insta360 Sphere, and what kind of results you would get?

  • @OlliHuttunen78
    @OlliHuttunen78 · 11 months ago

    Sure, it works. I have tried that with the Sphere on my drone, but it is not that convincing when rendered out of Luma AI as an equirectangular image. Once they get this new Gaussian Splatting method working for 360 images, though, it will be perfect. We just need to wait a little, because it's a very new technique.

  • @lennycecile3775
    @lennycecile3775 · 11 months ago

    @@OlliHuttunen78 Thank you. It's mind-boggling technology 🔥

  • @sujitchachad
    @sujitchachad · a year ago

    Thanks for the video. I followed your tips, but when I import the model into Blender it only imports a small chunk of the cropped scene. In Luma AI I adjusted the crop to cover the whole geometry, but when I export to .gltf it exports the cropped geometry. Is this a limitation of the free service? I hope I have explained it properly.

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yes, I noticed that Luma currently exports only the cropped model when you export GLB or OBJ. If you export to Unreal you get both versions: the full model with the background and the cropped one. I guess LumaLabs would need to be asked directly whether they could include the full model for mesh exports too.

  • @masanoriito
    @masanoriito · a year ago

    Please let me know how I can get high-quality scans like yours. You mentioned in the middle of the video that you export HD video instead of the 360 video and upload it to Luma AI. However, in the later scene where the two containers are painted, you used equirectangular video. Which video format would you recommend based on your experience so far? Also, did uploading the .insv file directly work for you? I'm using the ONE X2, but it doesn't work because it doesn't have a leveling function.

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yes. I recommend that you always edit your material in Insta360 Studio. I have gotten much more accurate NeRF models when I edit the video so that the target I shot stays in the middle of the picture for the whole video. Then I render it out as a normal MP4 in HD resolution and upload it to the Luma AI service as a normal video. The second option is to upload the full equirectangular video (also in MP4 format), but I have noticed that a NeRF trained from equirectangular video does not produce as accurate a model as one where the target is centered. Perhaps I could make another video where I go into these methods more deeply.
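
    Insta360 Studio is the workflow recommended above; purely as an illustrative alternative, a centered flat HD view can also be pulled out of an equirectangular MP4 with ffmpeg's v360 filter (file names and field-of-view values here are assumptions):

    ```python
    import subprocess

    # Extract a flat 1920x1080 view from the equirectangular export,
    # aimed with yaw/pitch so the scanned target stays centered.
    # Requires ffmpeg on the PATH.
    subprocess.run([
        "ffmpeg",
        "-i", "scan_equirectangular.mp4",  # placeholder input file
        "-vf", "v360=input=equirect:output=flat:"
               "h_fov=100:v_fov=60:yaw=0:pitch=0:w=1920:h=1080",
        "scan_reframed_hd.mp4",            # upload this to Luma AI
    ], check=True)
    ```

    A fixed yaw only holds the framing in one direction, so for a walk around a target, Insta360 Studio's keyframed reframing remains the practical route.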

  • @masanoriito
    @masanoriito · a year ago

    Thank you for your detailed response. Looking forward to another explainer video. When scanning a place, do you scan the same place over and over again at different heights? Or is it a one-time thing?

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yes. When I'm scanning I record everything at once into one video file. With a 360 camera you usually don't need to make many walk-arounds of your object at different heights, because those wide lenses see most of the surroundings at once. With the selfie stick it is very easy to reach and capture all corners of your object.

  • @masanoriito
    @masanoriito · a year ago

    Gotcha! Thanks a lot!

  • @LaurentEgliAdventure
    @LaurentEgliAdventure · 11 months ago

    Great video, thanks for sharing, and congratulations to your partner who puts up with your tests 😂

  • @2imtuan
    @2imtuan · 5 months ago

    What is the accessory you used with the Insta360 camera? I saw a connector attached to a rig.

  • @OlliHuttunen78
    @OlliHuttunen78 · 5 months ago

    It is a power selfie stick. There is a battery in the stick that can give extra power to the 360 camera via USB, and you can also press the record button and control the camera from the stick.

  • @2imtuan
    @2imtuan · 5 months ago

    @@OlliHuttunen78 Oh right!! Thank you so much mate.

  • @robmulally
    @robmulally · 11 months ago

    Thanks for this video. Time to dust off my 3D camera.

  • @michael_knight3457
    @michael_knight3457 · 8 months ago

    Hello! Can the Luma AI phone scanning software scan a given item at a 1:1 ratio, so that it knows the dimensions of the scanned item, e.g. height and width? I want to model a separate part based on the scanned item that would match the first one. Is that possible?

  • @Niberspace
    @Niberspace · 4 months ago

    If this app wasn't cloud-based I would have loved to try it, but

  • @rockbench
    @rockbench · 8 months ago

    Hi, is the final result downloadable?

  • @gaussiansplatsss
    @gaussiansplatsss · 4 months ago

    What are your PC specs, sir?

  • @pietervandervyver516
    @pietervandervyver516 · 11 months ago

    If I take 30 seconds with a 360, does it take up a lot of resolution or memory? I just want to video four people next to each other, similar to your car lady. Thank you.

  • @JAYTHEGREAT355
    @JAYTHEGREAT355 · a year ago

    Hello brother, did you shoot a 360 video, or were you shooting consecutive pictures to then upload to Luma AI?

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    I shot 360 video.

  • @JAYTHEGREAT355
    @JAYTHEGREAT355 · a year ago

    @@OlliHuttunen78 Thank you brother, I will try to replicate it by following your video. I 3D print, so maybe I can scan some figurines and convert them to 3D-printable STLs. Thank you.

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    @@JAYTHEGREAT355 I also recommend checking out the 3Dpresso web service, 3dpresso.ai/. It can make 3D models from video too, and they turn out much more solid and suitable for 3D printing than Luma AI models. When a NeRF model is turned into a polygon model it can be very broken, and it takes a lot of work to make it a solid STL for 3D printing.
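
    As a rough illustration of that STL cleanup, a sketch using the trimesh Python library (file names are placeholders; badly broken meshes will still need manual repair):

    ```python
    import trimesh

    # Placeholder Luma AI export; force="mesh" flattens any OBJ scene
    # into a single Trimesh object.
    mesh = trimesh.load("luma_scan.obj", force="mesh")

    # Weld duplicate vertices and drop degenerate faces left over
    # from the NeRF-to-polygon conversion.
    mesh.merge_vertices()
    mesh.update_faces(mesh.nondegenerate_faces())

    # Close small gaps and make the face windings consistent.
    trimesh.repair.fill_holes(mesh)
    trimesh.repair.fix_normals(mesh)

    print("watertight:", mesh.is_watertight)
    mesh.export("luma_scan_solid.stl")
    ```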

  • @360socialms
    @360socialms · a year ago

    Thank you very much for the tutorial!! I have uploaded a 360 equirectangular video to the Luma web service, filmed with the camera always vertical (the video is not walking around an object; it is a free walk through an outdoor space). Luma processes it and creates the NeRF model, but with significant noise, cuts, and cloud-like artifacts. Likewise, when I create a free-form Reshoot and render it, the results are still of poor quality. Do you have any suggestions to improve this? Does the source 360 video have to meet any requirements? Thank you so much!!

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yes, I have also noticed that Luma does not make great models from 360 equirectangular footage where you just walk in a straight line. It will create something, but Luma is mostly built around circular movement where you move around something. You also should not rely on what you see in the web browser when rotating the model in 3D mode; it is only an approximate preview. A much better result appears when you render videos out of the Luma service. That is when the actual NeRF model can be seen, and it often looks much better than the model in the web browser. Another tip is to download the model into the Unreal game engine and see how the volume model looks there. All the other download options (GLTF, USD, or OBJ format) convert the NeRF volume to polygons, and it loses quality; as polygons the model is not that good. As for 360 camera settings, I have no special tips. Just don't upload clips that are too long, where you walk a route over 100 meters long. Luma works best when the video is shot in a small area.

  • @360socialms
    @360socialms · a year ago

    @@OlliHuttunen78 Thank you very much for the answer, Olli. Yes indeed, it seems Luma responds very well to scanning objects while moving around them, and not on more linear routes. In my case the source video is very short, only 17 seconds, and taken with a Ricoh Theta V camera. The final video with the route animation in the Reshoot and the 3D model (glTF) generated by Luma are both very bad. I'll keep trying different alternatives to see if I can get better results. Your channel is the only one that deals with this important topic. Thank you very much for your help!!

  • @resanpho
    @resanpho · a year ago

    Hi Olli, and thank you for this interesting video. Do I get it right that the objects being recorded should be static, and that the whole thing will not work when you have moving objects? For instance, would it be possible to capture a 360 video of a scene in which people dance? I guess not. Thanks.

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yes, this scanning method works only with static objects and surroundings. If something moves or passes by (like a bike or a car in the background) while you are scanning, the AI tries to ignore it and remove it from the radiance field. It's much the same effect as taking a photo with a very long exposure time. So you cannot make a very good 3D model of a scene where people are dancing with this method.

  • @resanpho
    @resanpho · a year ago

    @@OlliHuttunen78 Thank you for the response. I was thinking about the possibility of 3D modeling important events such as weddings. If every guest plays along, one could create a memorable 3D model of the event. :) Another question: is there a special media player or tool to view the exported 3D model? Can a normal user easily view the model, or do they need to install specific and complex tools?

  • @OlliHuttunen78
    @OlliHuttunen78 · a year ago

    Yeah! It could work for modeling that kind of group picture at a wedding, if everybody can stay in place for a couple of minutes while you scan the moment with the 360 camera. You can easily share a link from Luma AI, and people can watch the rendered NeRF video and rotate the 3D model in a web browser. It works on mobile and on the computer, and you don't have to log in or download any special app or plugin. The model can also be embedded in any webpage. Those are the normal features of this kind of cloud service. Luma AI is a great service.

  • @resanpho
    @resanpho · a year ago

    @@OlliHuttunen78 Thanks a lot mate. Need to test it.

  • @kriptomavi
    @kriptomavi · 10 months ago

    Only iPhone?

  • @Hopp5ann
    @Hopp5ann · 4 months ago

    It has an Android app now.

  • @anthonycampbell7843
    @anthonycampbell7843 · 3 months ago

    kzread.info/dash/bejne/gpeg2aOFgMzXmbQ.html Was your video made before the update that removed the floaters? Or were they still present during your tests at the 6:30 mark?

  • @OlliHuttunen78
    @OlliHuttunen78 · 3 months ago

    My video was made after that Luma AI floaters announcement. But note that I presented the model in preview mode on Luma's web pages, which doesn't tell the whole truth: the final NeRF result only appears when the camera animation is rendered, and there are often significantly fewer floaters to be seen. But this is quite secondary now that Gaussian Splatting technology has replaced everything and the older 3D models produced with NeRF technology are not talked about much anymore. In that sense, much of this video is already outdated.

  • @Mateee.01
    @Mateee.01 · 4 months ago

    If you use a Pro iPhone that has a LiDAR sensor, the result will be much more detailed than Luma AI...

  • @iarde3422
    @iarde3422 · 7 months ago

    I hate it when people put their feet, in dirty shoes, on top of seats where other people are going to sit afterwards and get their pants dirty because of inconsiderate, filthy people who climbed on the seat with their dirty shoes. If such people don't understand that, they should be punished by cleaning the seat every day for a week.
