Depth Camera - Computerphile

Depth can be a useful addition to image data. Mike Pound shows off a RealSense camera and explains how it can help with deep learning.
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments: 261

  • @Pystro (2 years ago)

    4:27 "I should put an artwork up or something." Take a depth-field picture of that wall, print it out and hang it back onto the wall. Now it's a piece of art!

  • @Checkedbox (2 years ago)

    @yefdafad I think you might have forgotten to switch Windows

  • @nikanj (2 years ago)

    Ah the Kinect. Such a massive failure as a gaming peripheral but pivotal in so much computer vision research/DIY projects.

  • @MINDoSOFT (2 years ago)

    And even freelance production projects! As part of a team I've created one game with Kinect v1, and another one with Kinect v2. What a great piece of hardware.

  • @glass1098 (2 years ago)

    @@MINDoSOFT Which ones?

  • @MINDoSOFT (2 years ago)

    @@glass1098 hi! Unfortunately I don't have a portfolio page. But the first one was an air-hockey-style game, where the player held a broom with an IR LED, which was detected via Kinect, and the players had to put the trash in the correct recycling bins. The other game was a penalty shootout game which detected the player's kick. :)

  • @xeonthemechdragon (2 years ago)

    I have three of the v2, and 2 of the v1

  • @JulesStoop (2 years ago)

    Kinect technology became Face ID in the iPhone and iPad. Not a failure at all, but providing very secure and just about invisible biometric authentication to about a billion people on a daily basis.

  • @cussyplays (2 years ago)

    I just LOVE that he talks to the cameraman and not us, makes it so much more candid and easier to watch as a viewer!

  • @oskrm (2 years ago)

    - "Probably have to give it back" - "Oh no, it fell off... my car"

  • @Yupppi (2 years ago)

    Mike always has something exciting.

  • @smoothmarx (2 years ago)

    That comment at 2:41 was magic. Caught me red handed!

  • @ianbdb7686 (2 years ago)

    This channel is insane. Never stop uploading

  • @daltonbrady2492 (2 years ago)

    Mike Pound always has the stuff to really get you going! More Mike Pound!

  • @araghon007 (2 years ago)

    A sidenote to Kinect: The Kinect v2 uses time of flight, which some people like, some people hate. What I find most fascinating is that the Kinect lives on, both as Kinect for Azure, and the depth sensing tech the Hololens has. While not successful as a motion control method, it's still really useful when used with a PC.

  • @TiagoTiagoT (2 years ago)

    Why do people hate it?

  • @MattGriffin1 (2 years ago)

    Another great video from Mike, love computerphile!

  • @rachel_rexxx (2 years ago)

    Thank you this was exactly the breakdown I was hunting for last week!

  • @jenesuispasbavard (2 years ago)

    I still use my Kinect - mostly to just log into Windows with my face, but also as a night camera to keep an eye on our new foster dog when he's home alone. It's amazing that a piece of hardware almost a decade old is still so good at what it does!

  • @stef9019 (2 years ago)

    Always great to learn from Mike Pound!

  • @TheGreatAtario (2 years ago)

    He's a lot better than Mike Pence

  • @kieronparr3403 (2 years ago)

    Entering poundland

  • @maciekdziubinski (2 years ago)

    Alas, Intel discontinued the RealSense line of products. The librealsense library will still be maintained (if I'm correct), but no new hardware is going to be released.

  • @joels7605 (2 years ago)

    I wish they'd maintain the L515 a little better. The 400 series seem to be well supported, but the 500 series is a vastly superior sensor.

  • @arcmchair_roboticist (2 years ago)

    There's still the Kinect, which actually works better in pretty much every way AFAIK.

  • @joels7605 (2 years ago)

    @@arcmchair_roboticist There is some truth to this. KinectV2 and V1 are both excellent. I think it's mostly down to a decade of software refinement though. From a hardware perspective the RealSense L515 should mop the floor with everything. It's a shame it was dropped.

  • @paci4416 (2 years ago)

    Intel has discontinued some of the products, but the stereo cameras would continue to be sold (D415, D435I, D455) for sure. The librealsense library is still maintained (new release today).

  • @CrazyDaneOne (2 years ago)

    Wrong

  • @MmmVomit (2 years ago)

    I wonder what this might do with a mirror. I expect it would see the mirror as a "window" where there's a lot more depth, but I wonder how it would handle the weird reflections of the IR dots.

  • @meispi9457 (2 years ago)

    Wow 🤯 Interesting thought!

  • @FlexTCWin (2 years ago)

    Now I’m curious too!

  • @260Xander (2 years ago)

    Someone needs to do this please!

  • @hulavux8145 (2 years ago)

    It doesn't do well, really. Same thing with transparent objects.

  • @zybch (2 years ago)

    The dots necessarily spread out from the projector, so even if a mirror was placed perfectly perpendicular to their flight path barely any would reflect back in the right way to generate a coherent depth image.

  • @jerrykomas1248 (2 years ago)

    This is really insightful. We are using stereo mapping, similar to the techniques used by Landsat and WorldView satellites, for my Master's thesis! This technology is super cool; glad you are showing folks how it works, because there are so many applications beyond the Kinect!

  • @marioh9926 (2 years ago)

    Exceptional once again, Mike, congratulations!

  • @Sth0r (2 years ago)

    i would love to see this and Intel RealSense LiDAR L515 side by side.

  • @jonva13 (2 years ago)

    Oh, thank you! 🙏 This is exactly the video I've been looking for.

  • @ajv35 (2 years ago)

    I wish he would've done a more in-depth explanation of the device. Like, what data type is used for the depth field? Is it a 2D array of floating-point values, since depth can technically be infinite? Is it calibrated to only detect so far? Or does it use a variable-depth rate with a finite-sized data type (like an integer, as in the other RGB fields) that adjusts the value according to the furthest object it senses?

  • @b4ux1t3-tech (2 years ago)

    So, thinking about it, it's likely that the RGB aspect is an integer or a fraction between 0 and 1. That's pretty common, and for RGB, those two are going to be functionally identical, since a computer is likely only going to be able to display in 24-bit color anyway. So, for the color, it probably doesn't matter, and it could go either way. The depth is probably a fraction between zero and one. That would allow you to map between the visible colors pretty accurately, and display a fine-grained depth map, which we see in the video. After all, you only need 32 million values, and the resolution of a 32-bit floating point between 0 and 1 gives you that reliably. Re: 2d array, I wouldn't be surprised if it's indexable as a 2d array in the API, but it's probably stored as a 1d array, since translating from coordinates to an index (and vice versa) is trivial. I don't know if that's actually what's going on, mind you, just making some assumptions based on similar technologies.

  • @Norsilca (2 years ago)

    I'll bet it's just an extra byte, just like R, G, and B are each 1 byte. 256 integers, maybe in a logarithmic scale so there's more precision for near values than far ones.

  • @b4ux1t3-tech (2 years ago)

    Keep in mind, you don't have to store colors as 24-bit (three byte) colors, that's just a convention because that's what most monitors support. If you're working with optical data, you may or may not be limited to a 24-bit color. For the depth, only having 256 "depth steps" seems _really, really_ restrictive.

  • @Norsilca (2 years ago)

    @@b4ux1t3-tech Yeah, I just meant the common 24-bit RGB format. 8 bits for depth could be too little, though I thought it might be enough to give the extra boost a neural net needs. You could easily do more bits. I was wondering if, instead of inventing a new format, they actually just produce a separate file that's a grayscale image for the depth. Then you can combine them yourself or just use the standard RGB image when you don't need depth.

  • @danieljensen2626 (2 years ago)

    I imagine if you look up a manual it'll tell you.
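The replies above are guessing at the format, so a concrete sketch may help. To the best of my knowledge, RealSense-style cameras report depth as a separate 16-bit unsigned-integer image ("Z16"), with a per-device depth scale converting raw units to metres; the array values and scale below are purely illustrative:

```python
import numpy as np

# Assumed RealSense-style convention: depth is a uint16 image, and
# raw_value * depth_scale gives metres. 0.001 (i.e. millimetres) is a
# common default; a real device reports its own scale.
DEPTH_SCALE = 0.001

# A toy 2x3 "depth frame" of raw uint16 readings (made-up numbers).
raw_depth = np.array([[500, 1200, 0],
                      [800, 65535, 300]], dtype=np.uint16)

# A raw value of 0 conventionally means "no valid reading" for that pixel.
valid = raw_depth > 0
depth_m = raw_depth.astype(np.float32) * DEPTH_SCALE

print(depth_m[0, 0])     # 0.5 (metres)
print(int(valid.sum()))  # 5 valid pixels out of 6
```

So rather than packing depth into an extra RGB-style byte, the depth channel is its own higher-precision image that gets aligned with the colour frame.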

  • @Snair1591 (2 years ago)

    This device, the Intel RealSense D435, and its peers are so underappreciated. The hardware is brilliant, but at the same time the wide range of support its packages offer is amazing. They have regular support with ROS, edge computation platforms like the Jetson Nano, and a standalone RealSense SDK. If more people knew about this and used it, Intel would not have dared to think of shutting it down. There are other similar cameras, like the Zed for example, but the wide array of support RealSense offers has no competition.

  • @thecheshirecat5564 (2 years ago)

    You don’t even need an SDK. If you have a network card, there are devices that run driverless and are compatible with industrial and FOSS software. We are building one of these.

  • @utp216 (2 years ago)

    I loved your video and hopefully you’ll get to hang on to the hardware so you can keep working with it.

  • @Bstrolch (2 years ago)

    MIKE IS BACK

  • @omerfarukpaker7551 (a year ago)

    I am literally enlightened! Thanks ever so much!

  • @AaronHilton (2 years ago)

    For everyone looking for a RealSense alternative, Occipital are still shipping their Structure Sensors and Structure Cores. They work on similar principles.

  • @soejrd24978 (2 years ago)

    Ohh yes! Mike videos are the best

  • @stefanguiton (2 years ago)

    Excellent video!

  • @maxmusterman3371 (2 years ago)

    Its been so long 😭 finally

  • @Hacktheplanet_ (2 years ago)

    I'd like to hear a video with Mike Pound talking about the Oculus Quest 2; I bet that uses a similar method. What a brilliant machine!

  • @Athens1992 (a year ago)

    Very informative!! Would this camera work far better at night in a car than in the morning?

  • @johanhendriks (2 years ago)

    What's the link to the video where the stuff on the whiteboard was written and discussed?

  • @CineGeeks001 (2 years ago)

    I was searching for this yesterday and now you put up a video 😀

  • @sermadreda399 (a year ago)

    Great video, thank you for sharing

  • @adekunleafolabi1040 (2 years ago)

    A beautiful beautiful beautiful video

  • @bluegizmo1983 (2 years ago)

    Image Depth is a quantification of the camera's ability to take a picture that makes a deep philosophical statement! 🤣

  • @astropgn (2 years ago)

    lol I put my finger on my face at the exact instant before the screen said I was looking at my finger

  • @asnothe (2 years ago)

    I have that laptop. Thank you for validating my purchase. ;-)

  • @suryavaraprasadalla8511 (a year ago)

    Great explanation

  • @delusionnnnn (2 years ago)

    I'm reminded of my sadly unsupported Lytro Illum camera, a "lightfield" device. Being able to share "live" images was fun, and it's a shame they didn't release that back-end code as open source so something like flickr or instagram could support it. You can still make movies of it, but the fun of the live images was that the viewer could control the focus view of your photograph.

  • @arash_mehrabi (a year ago)

    Nice explanation. Thanks!

  • @katymapsa (2 years ago)

    More Mike videos, please!!

  • @TiagoTiagoT (2 years ago)

    Is the depth calculated on the hardware itself or on software running on the computer?

  • @Lodinn (2 years ago)

    Ah, just got a couple of D435s for the lab this year. The funniest bit so far is how it sometimes does a perspective distortion of featureless walls much more realistically than Photoshop does :D

  • @nonyafletcher601 (2 years ago)

    We need more cameos of Sean!

  • @DavidLindes (2 years ago)

    Now if we can get IRGBUD (adding (near-)Infrared and Ultraviolet), that'd be cool. (Even cooler would be FIRGBUD, but far-IR tends to require sufficiently different optics that I definitely won't be holding my breath for that one.)

  • @UTVNEPAL (2 years ago)

    Genius idea. A camera with multiple image sensors could capture various channels, especially heat signatures that can see through doors.

  • @functionxstudios1674 (2 years ago)

    Made it early. Computerphile is the Best

  • @lopzag (2 years ago)

    Would be cool to see Mike talk about 'event cameras' (aka artificial retinas). They're really on the rise in machine vision.

  • @ciarfah (2 years ago)

    Agreed. Hoping to work with those soon

  • @elmin2323 (2 years ago)

    Mike needs to have his own channel and do a vlog.

  • @anujpartihar (2 years ago)

    Hit the like button so that Mike can get to keep the camera.

  • @haziqsembilanlima (2 years ago)

    Just a question: is image depth included in regular JPEG? Back in my final year there was a case where I was thinking of adding image depth to improve shape recognition (the dataset was regular JPEGs), but the target object tended to blend with surrounding objects, which made a regular bounding box less accurate. Not to mention I needed the target painted as accurately as possible so I could perform a transformation and finally turn the target object into a scale of sorts (the target object has fixed, defined dimensions).

  • @jeroenkoehorst4056 (2 years ago)

    No, it's a separate image, just like the RGB and IR pictures.

  • @blenderpanzi (2 years ago)

    I thought the Kinect, when announced, promised not to use any processing power of the console, but in the end, because of cuts, actually did? Am I misremembering?

  • @billconiston8091 (2 years ago)

    where do they get the dot matrix printer paper from... ?

  • @sikachukuning2473 (2 years ago)

    I believe this is also how Face ID works. It uses the dot projector and IR camera to get a 3D image of the face and do the authentication.

  • @arkemal (a year ago)

    Indeed, TrueDepth.

  • @Hacktheplanet_ (2 years ago)

    Mike pound the legend 🙌

  • (2 years ago)

    Is there already a video conferencing tool which takes advantage of this? This seems huge for being able to eliminate background and focus on the face.

  • @Garvm (2 years ago)

    I think FaceTime could already be doing that, since iPhones have one of these depth sensors in each of the cameras.

  • @Jacob-yg7lz (2 years ago)

    Could you take one of these, then attach it to a mirror setup which separates each lens's view by far more distance, and then use it for longer-distance range finding (like a WW2 stereoscopic rangefinder)?

  • @Jacob-yg7lz (2 years ago)

    @Pedro Abreu I just meant having the view of each camera be really far away from each other so that there's more parallax

  • @JadeNeoma (2 years ago)

    Interesting, the Ultraleap Leap Motion camera uses three cameras, all near-IR, to try to resolve depth and position.

  • @unvergebeneid (2 years ago)

    I mean, the Kinect 2 did time-of-flight, not structured light like the first one. And it was still pretty cheap, being a mass-market device.
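Structured light triangulates, while time of flight, as in the Kinect 2, instead times a light pulse's round trip. The principle reduces to distance = c·t/2 (halved because the pulse travels out and back); the example timing below is illustrative:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a measured round-trip time of a light pulse."""
    return C * round_trip_s / 2.0

# A return after ~13.34 nanoseconds corresponds to roughly 2 metres,
# which shows how fine the timing electronics have to be.
print(tof_distance_m(13.34e-9))
```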

  • @quanta8382 (2 years ago)

    I wish I had a teacher like him!

  • @troeteimarsch (2 years ago)

    Mike's the best

  • @hexenkingTV (2 years ago)

    But image depth could also lead to poor performance if it catches more noise, leading to a general data shift. I guess the preprocessing step should be done carefully.

  • @threeMetreJim (2 years ago)

    What does it do if you hold a stereogram (SIRDS) picture in front of it?

  • @bryan69087 (2 years ago)

    MORE MIKE POUND!

  • @TheTobias7733 (2 years ago)

    Mr. Pound, I love you

  • @CrystalblueMage (2 years ago)

    Hmm, so the camera can be used to detect color imperfections on supposedly single-colored flat surfaces. Could that be used to detect fungus as it starts to grow?

  • @NoahSpurrier (2 years ago)

    Do open tools support this? OpenCV, UVC, V4L2?

  • @thisisthefoxe (2 years ago)

    Question: *how* is the depth stored? RGB uses values between 0-255 to store the intensity, and you can work out the percentage of that colour in the pixel. How about depth? Does it also have one byte? What does it mean? Can you calculate the actual distance from the camera?

  • @ciarfah (2 years ago)

    I mostly worked with depthimage, which is essentially a greyscale image where lighter pixels are closer and darker pixels are further away. On the other hand there is pointcloud, which is an array of 3D points. Typically that can be structured or unstructured, e.g. a 1000x1000 array of points, or a vector of 1000000 points. Perhaps this isn't as detailed as you'd have liked but this is as in depth as I've gone

  • @ciarfah (2 years ago)

    The handy thing about depthimage is you can compress it like any other image, which is great for saving bandwidth in a distributed system
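The structured point cloud described above (an H×W array of 3D points) is, assuming a simple pinhole model, just a back-projection of the depth image; the intrinsics here are invented for illustration:

```python
import numpy as np

# Hypothetical pinhole intrinsics (a real camera reports its own).
fx = fy = 600.0          # focal lengths in pixels
cx, cy = 320.0, 240.0    # principal point

# Synthetic depth image: a flat wall 2 m away, 640x480.
depth = np.full((480, 640), 2.0, dtype=np.float32)

# Back-project every pixel (u, v) with depth z to a 3D point (x, y, z).
v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

print(points.shape)             # (307200, 3) -- the "unstructured" vector form
print(points[240 * 640 + 320])  # centre pixel maps to (0, 0, 2)
```

Flattening with `reshape(-1, 3)` is exactly the structured-to-unstructured conversion mentioned in the reply.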

  • @LaRenard (2 years ago)

    my professor literally delivered a lecture today regarding image depth, and i see it on Computerphile XD

  • @0thorderlogic (2 years ago)

    Does anyone know the name of the guy featured?

  • @TiagoTiagoT (2 years ago)

    What happened to that time-of-flight RGBD webcam Microsoft bought just a little before they released the Kinect? Did they just buy it out to try to stifle competition and left the technology to rot?

  • @tsunghan_yu (2 years ago)

    7:16 But why does Face ID work under sunlight? Is the laser just stronger in Face ID?

  • @1endell (a year ago)

    You got a like just when you predicted I looked at my finger. Amazing video.

  • @GameOfThePlanets (2 years ago)

    Would adding a UV emitter help?

  • @StuartSouter (2 years ago)

    I'm a simple man. I see Mike Pound, I click.

  • @rustyfox81 (2 years ago)

    Can two cameras close together with different focal lengths detect depth?

  • @teriyakipuppy (2 years ago)

    You get a stereoscopic image, but it doesn't make a depth map.

  • @acegh0st (2 years ago)

    I like the 'Gingham/Oxford shirt with blue sweater' energy Mike projects in almost every video.

  • @GameNOWRoom (2 years ago)

    3:12 The camera knows where it is because it knows where it isn't

  • @rustycherkas8229 (2 years ago)

    So it calculates where it should be... :-)

  • @sanveersookdawe (2 years ago)

    Please make the next one on the time of flight camera

  • @ZandarKoad (2 years ago)

    12:13 "THAT'S A QUANTUM BIT!!! SO IT'S NOT JUST ZERO OR ONE..."

  • @ByteMe1980 (2 years ago)

    @computerphile Just wondering, rather than having the camera figure out depth, why not feed left and right RGB into the network instead?

  • @christophermcclellan8730 (2 years ago)

    The Realsense camera has a left and right infrared camera, but only a single RGB camera.

  • @ByteMe1980 (2 years ago)

    @@christophermcclellan8730 I understand that, my question was why not have a camera with just left and right rgb and let the neural net figure out the depth

  • @christophermcclellan8730 (2 years ago)

    @@ByteMe1980 you could try, but you would still need a labeled dataset for training, which would require a similar setup. There are actually some (non-neural net) algorithms for determining depth from stereoscopic RGB images. They require very precise calibration, which makes it impractical outside of the lab. My team looked into it and determined it was cheaper to just put the more expensive devices into our production run. The point is this technology was too expensive for consumer tech until recently. Now that the price has come down, it’s more accessible for applications, such as liveness detection for biometrics.
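For reference, the classical (non-neural) stereo relation those algorithms rest on is depth = focal length × baseline / disparity; the numbers below are purely illustrative:

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point matched across a calibrated stereo pair.

    f_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal shift of the point between the two images.
    """
    if disparity_px <= 0.0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    return f_px * baseline_m / disparity_px

# e.g. 600 px focal length, 5 cm baseline, 15 px disparity -> 2 m away
print(depth_from_disparity(600.0, 0.05, 15.0))  # 2.0
```

This is also why the calibration mentioned above matters so much: any error in the focal length or baseline scales every depth estimate.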

  • @AcornElectron (2 years ago)

    Heh, Mike is always fun.

  • @silakanveli (2 years ago)

    Mike is too smart!

  • @jms019 (2 years ago)

    So much better when I only saw image death

  • @Amonimus (2 years ago)

    What if you use two of those?

  • @JohnDlugosz (2 years ago)

    I was hoping to learn how the time-of-flight depth sensors work.

  • @bsvenss2 (2 years ago)

    Looks like the Intel RealSense Depth Camera D435. Only 337 GBP (in Denmark). Let's send a couple to Mike. ;-)

  • @PrashantBatule (2 years ago)

    9:20 convolved using a convolution 👍

  • @joechacon8874 (2 years ago)

    Definitely looked at my own finger when you mentioned it, haha. Great info, thank you.

  • @JB-oj7bq (2 years ago)

    What if you did stereo RGBD, using two devices?

  • @rikschaaf (2 years ago)

    We used an Xbox Kinect for this in our robolab.

  • @kaustabhchakraborty4721 (2 years ago)

    Computerphile, a very earnest request: every video you post sparks a hunger for knowledge on that topic. Could you please attach some links or anything from which we can actually learn that stuff? I, and I think many others like me, will be very grateful if you could do such a thing.

  • @carmatic (2 years ago)

    When will they make camera modules which can simultaneously capture RGB and IR through the same lens? That way, we'd have no parallax error between the depth and colour data.

  • @erikbrendel3217 (2 years ago)

    Pretty sure that this is possible. Only problem is that you need two IR cameras to do the stereo matching

  • @marc_frank (2 years ago)

    Lenses for RGB cams usually have an IR filter built in.

  • @ssshukla26 (2 years ago)

    2:42 Yeah 🤦‍♂️ now that's why this video deserves a like.

  • @levmatta (2 years ago)

    How do you get depth for a single RGB image with AI?

  • @antonisvenianakis1047 (2 years ago)

    Check MegaDepth.

  • @srry198 (2 years ago)

    Wouldn’t LiDAR be more accurate/achieve the same thing concerning depth perception for machines?

  • @Phroggster (2 years ago)

    Yes, LiDAR would be way better, but it's going to cost you ten or twenty times more than this device. This is geared more for prosumer tinkering, while LiDAR is more for autonomous driving, or other situations where human lives hang in the balance.

  • @ZT1ST (2 years ago)

    I imagine it would also be more useful in time-based solutions, because lidar has to time the signal's return to do its calculations, while the infrared emitter can get the depth information a little faster: you only wait for the image to come back once, and you get more information on the sensor at once, based on the pattern in the image. You could probably get even more accurate depth perception if you combined lidar with this.

  • @niccy266 (2 years ago)

    @@ZT1ST Also, unless the lidar laser changes direction for each pixel, which would have to happen extremely quickly, you would have to use a number of lidars that probably can't move, and you'd get a much lower-resolution depth channel. Maybe it could supplement the stereo information or help calibrate the camera, but overall not super useful.

  • @thuokagiri5550 (2 years ago)

    Dr Pound is the Richard Feynman of computer science

  • @SimonCoates (2 years ago)

    Coincidentally, Richard Feynman had so many affairs he was known as Dr Pound 😂

  • @ArclampSDR (2 years ago)

    The Kinect v2 has time-of-flight lidar now.

  • @Jacob-yg7lz (2 years ago)

    Do any space rovers have anything like this?

  • @6kwecky6 (2 years ago)

    Huh... thought this was more solved than it is. Even with dedicated hardware, you can only get sub-30fps directly from the camera. I suppose "directly from the camera" and "cheaply" are the key words.

  • @_yonas (2 years ago)

    You can get 30FPS of depth-aligned RGBD Images from the realsense camera with a resolution of 1280x720. Higher than that and it drops to 15, afaik.

  • @ciarfah (2 years ago)

    You can also get 60 Hz at lower res and 6 Hz at higher res IIRC