Swap GPUs at the Press of a Button: Liqid Fabric PCIe Magic Trick!

Science & Technology

Easily allocate hundreds of GPUs WITHOUT touching them!
Check out other Liqid solutions here: www.liqid.com
0:00 Intro
1:00 Explaining the Magic
2:00 Showing the Use Case Set Up
4:41 The Problem with Microsoft
6:07 Infrastructure as Code Hardware Control
10:36 Game Changing our Game Testing and More
16:12 Outro
********************************
Check us out online at the following places!
bio.link/level1techs
IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
-------------------------------------------------------------------------------------------------------------
Music: "Earth Bound" by Slynk
Edited by Autumn

Comments: 265

  • @user-eh8oo4uh8h
    2 months ago

    The computer isn't real. The fabric isn't real. Nothing actually exists. We're all just PCI-express lanes virtualized in some super computer in the cloud. And I still can't get 60fps.

  • @AlumarsX
    2 months ago

    Goddamn Nvidia all that money and keeping us fps capped

  • @gorana.37
    2 months ago

    🤣🤣

  • @jannegrey593
    2 months ago

    "There is no spoon" taken to the extreme.

  • @fhsp17
    2 months ago

    The hivemind secret guardians saw that. They will get you.

  • @nicknorthcutt7680
    1 month ago

    😂😂😂

  • @wizpig64
    2 months ago

    WOW! imagine having 6 different CPUs and 6 GPUs, rotating through all 36 combinations to hunt for regressions! Thank you for sharing this magic trick!

  • @joejane9977
    2 months ago

    imagine if windows worked well

  • @onisama9589
    2 months ago

    Most likely the Windows box would need to be shut down before you switch, or the OS will crash.

  • @jjaymick6265
    2 months ago

    I do this daily in my lab: 16 different servers, 16 GPUs (4 groups of 4), doing fully automated regressions for AI/ML models, GPU driver stacks, and CUDA version comparisons. Like I have said in other posts, once you stitch this together with Ansible / Digital Rebar, things get really interesting. Now that everything is automated, I simply input a series of hardware and software combos to test and the system does all the work while I sleep. I just wake up, review the results, and input the next series of tests. There is no more cost-effective way for one person to test thousands of combinations.
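
    A minimal sketch of what that kind of automated combo sweep can look like, assuming a hypothetical compose_gpu()/release_gpu() pair wrapping whatever fabric-management API or CLI is in use (the actual Liqid calls are not shown) and an Ansible playbook named run_regression.yml:

        import itertools
        import subprocess

        SERVERS = ["node01", "node02", "node03", "node04"]
        GPU_GROUPS = ["gpu-group1", "gpu-group2", "gpu-group3", "gpu-group4"]
        CUDA_VERSIONS = ["12.2", "12.4"]

        def compose_gpu(server: str, gpu_group: str) -> None:
            # Hypothetical placeholder: attach the GPU group to the server via the
            # fabric manager (REST call, CLI, etc.) before the test run starts.
            print(f"composing {gpu_group} -> {server}")

        def release_gpu(server: str, gpu_group: str) -> None:
            # Hypothetical placeholder: detach the GPU group again after the run.
            print(f"releasing {gpu_group} <- {server}")

        for server, gpus, cuda in itertools.product(SERVERS, GPU_GROUPS, CUDA_VERSIONS):
            compose_gpu(server, gpus)
            try:
                # Run the regression playbook against the freshly composed host;
                # results land wherever the playbook writes them.
                subprocess.run(
                    ["ansible-playbook", "run_regression.yml",
                     "--limit", server,
                     "--extra-vars", f"cuda_version={cuda} gpu_group={gpus}"],
                    check=True,
                )
            finally:
                release_gpu(server, gpus)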

  • @formes2388
    1 month ago

    @joejane9977 It does. I mean, it works well enough that few people go through the hassle of consciously switching. It's more a default switch if people start using a tablet as a primary device, due to not needing a full-fat desktop for their day-to-day needs. For perspective on where I am coming from: I have a trio of Linux systems and a pair of Windows systems; one of the Windows systems is also dual-booted to 'nix. I used to have a macOS system but have no need of one, and better things to spend money on. For some stuff Linux is great; thing is, I have better things to do with my time than tinker with configs to get things running, so sometimes a Windows system just works.

  • @ProjectPhysX
    2 months ago

    That PCIe tech is just fantastic for software testing. I test my OpenCL codes on Intel, Nvidia, AMD, Arm, and Apple GPU drivers to make sure I don't step on any driver bugs. For benchmarks that need the full PCIe bandwidth, this system is perfect.

  • @Ultrajamz
    2 months ago

    So I can literally hotswap my 4090s as they melt, like a belt-fed GPU PC?

  • @christianhorn1999
    2 months ago

    That's like a Gatling gun for GPUs. Don't give manufacturers ideas.

  • @TigonIII
    2 months ago

    Melt? Like turning them to liquid, pretty on brand. ;)

  • @BitsOfInterest
    2 months ago

    I don't think 4090s fit in that chassis, based on how much room is left in the front with those other cards.

  • @nicknorthcutt7680
    1 month ago

    Lmao

  • @KD-_-
    1 month ago

    The VHPWR connector might be durable enough to justify hand loading because the belt system could dislodge the connectors from the next one in line. Would need to do analysis.

  • @abavariannormiepleb9470
    2 months ago

    Please Liqid, introduce a tier for homelab users!

  • @popeter
    2 months ago

    Oh yeah, you could do so much: Proxmox systems on dual ITX, all sharing GPU and network off one of these.

  • @marcogenovesi8570
    2 months ago

    I doubt this can be made affordable for common mortals

  • @AnirudhTammireddy
    2 months ago

    Please deposit your 2 kidneys and 1 eye before you make any such requests.

  • @abavariannormiepleb9470
    2 months ago

    My humble dream setup would be a “barebones” kit consisting of the PCIe AIC adapters for the normal “client” motherboard and the “server” board that offers four x16 slots. You’d have to get your own cases and PSU solution for the “server” side.

  • @mritunjaymusale
    2 months ago

    @marcogenovesi8570 You can, though. In terms of hardware it's just a PCIe switch; the hard part is the low-level code to match the right PCIe device to the right CPU, and on top of that, software that connects it to workflows that can understand this.

  • @totallyuneekname
    2 months ago

    Can't wait for the Linus Tech Tips lab team to announce their use of Liqid in two years

  • @mritunjaymusale
    2 months ago

    I mentioned this idea in his comments when Wendell was doing interviews with the liqid guys, but Linus being the dictator he is in his comments has banned me from commenting.

  • @krishal99
    2 months ago

    @mritunjaymusale sure buddy

  • @janskala22
    2 months ago

    LTT does already use Liqid, just not this product. You can see in one of their videos that they have a 2U Liqid server in their main rack. It seemed like a rebranded Dell server, but still from Liqid.

  • @totallyuneekname
    2 months ago

    Ah TIL, thanks for the info @janskala22

  • @tim3172
    2 months ago

    Can't wait for you to type "ltt liqid" into YouTube search and realize LTT has videos from the last 3 years showcasing Liqid products.

  • @chaosfenix
    2 months ago

    I would love this in the home setting. If it is hot-pluggable it is also programmable, which means you could upgrade GPUs periodically, but instead of just throwing the old one away you would push it down the priority list. Hubby and wifey could get priority on the fastest GPU, and if you have multiple kids they would be lower priority. If mom and dad aren't playing at the moment, though, the kids could just get the fastest GPU to use. You could centralize all of your hardware in a server in a closet and then have weaker terminal devices. They could have an amazing screen, keyboard, etc., but cheap out on the CPU, RAM, GPU, etc., because those would just be composed at boot. Similar to how computers switch between an integrated GPU and a dGPU now, you could use the cheap device's iGPU for the basics, but if you opened an application like a game it would dynamically mount a GPU from the rack. No more external GPUs for laptops, and no more insanely expensive laptops with hardware that is obsolete for its intended task in 2 years.

  • @christianhorn1999
    2 months ago

    moooom?! why is my fortnite dropping fps lmao

  • @SK83RJOSH
    2 months ago

    I would have concerns about cross talk and latency from like, signal amplifiers, in that scenario. I could not imagine trying to triage the issues this will introduce. 😂

  • @chaosfenix
    2 months ago

    @SK83RJOSH I think latency would be the biggest one. I am not sure what you mean by crosstalk, though. If you mean signal interference, I don't think that would apply here any more than it applies to any regular motherboard and network. If you mean crosstalk as in Wi-Fi, then this really would not be how I would do it; I would use fiber for all of this. Even Wi-Fi 7 is nowhere near fast enough for this kind of connectivity and would have way too much interference. Maybe if you had a 60 GHz connection, but that is about it.

  • @seanunderscorepry
    2 months ago

    I was skeptical that I'd find anything useful or interesting in this video since the use-case doesn't suit me personally, but Wendell could explain paint drying on a wall and make it entertaining / informative.

  • @nicknorthcutt7680
    1 month ago

    This is absolutely incredible! Wow, I didn't even realize how many possibilities this opens up. As always, another great video man.

  • @Maxjoker98
    2 months ago

    I've been waiting for this video ever since Wendell first started talking about/with the Liqid people. Glad it's finally here!

  • @d0hanzibi
    2 months ago

    Hell yea, we need that consumerized!

  • @cs7899
    2 months ago

    Love Wendell's off label videos

  • @scotthep
    2 months ago

    For some reason this is one of the coolest things I've seen in a while.

  • @andypetrow4228
    2 months ago

    I came for the magic... I stayed for the soothing painting above the tech bench.

  • @shinythings7
    2 months ago

    I was looking at the VFIO stuff to have everything in a different part of the house. Now this seems like just as good of a solution. Having the larger heat-generating components in a single box and having the mobo/CPU/OS where you are would be a nice touch. Would be great for SFF mini PCs as well, to REALLY lower your footprint on a desk or in an office/room.

  • @TheFlatronify
    2 months ago

    This would come in so handy in my small three node Proxmox cluster, assigning GPUs to different servers / VMs when necessary. The image would be streamed using Sunshine / Moonlight (similar to Parsec). I wish there was a 2 PCIe Slot consumer tier available for a price that enthusiasts would be willing to spend!

  • @jjaymick6265
    2 months ago

    I use this every day in my lab running Proxmox / XCP-ng / KVM. Linux hot-plug PCIe drivers work like a champ to move GPUs in and out of hypervisors. If only virtio had reasonable support for hot-plugging PCIe into the VM, so I would not have to restart the VM every time I wanted to change GPUs to run a new test. Maybe someday.

  • @N....
    1 month ago

    A workaround for the lack of hotplug is to just keep all the GPUs connected at once and disable/enable them via Device Manager. Changing the primary display to one connected to the desired GPU works for most stuff, but some games like to pick a different GPU than the primary display's, hence disabling in Device Manager to prevent that.
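
    For scripting that workaround instead of clicking through Device Manager, a sketch along these lines could work, assuming a recent Windows build whose pnputil supports /disable-device and /enable-device, and an instance ID looked up beforehand (e.g. with "pnputil /enum-devices /class Display"); the ID below is a made-up placeholder:

        import subprocess

        # Made-up placeholder; use the real instance ID reported by
        # "pnputil /enum-devices /class Display".
        GPU_INSTANCE_ID = r"PCI\VEN_10DE&DEV_2684&SUBSYS_00000000&REV_A1\0000000000000000"

        def set_gpu_enabled(enabled: bool) -> None:
            # pnputil needs an elevated (administrator) prompt to change device state.
            action = "/enable-device" if enabled else "/disable-device"
            subprocess.run(["pnputil", action, GPU_INSTANCE_ID], check=True)

        set_gpu_enabled(False)  # park the GPU so games can't grab it
        set_gpu_enabled(True)   # bring it back when wanted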

  • @pyroslev
    2 months ago

    This is wickedly cool. Practical or usable for me? Nah, not really. But seeing that messy workshop lived in is as satisfying as the tech.

  • @MatMarrash
    2 months ago

    If there's something you can cram into PCIe lanes, you bet Wendell's going to try it and then make an amazing video about it!

  • @AzNcRzY85
    2 months ago

    Wendell, does it fit in the Minisforum MS-01? It would be a massive plus if it did and worked. The RTX A2000 12GB is already good, but this is a complete game changer for a lot of systems, mini or full desktop.

  • @chrismurphy2769
    2 months ago

    I've absolutely been wanting and dreaming of something like this

  • @DaxHamel
    2 months ago

    Thanks Wendell. I'd like to see a video about network booting and imaging.

  • @mritunjaymusale
    2 months ago

    I really wanted to do something similar on my uni's server for deep learning, since we had 2 GPU-based systems that had multiple GPUs. Using this, we could've pooled those GPUs together to make a 4-GPU system in one click.

  • @ianemptymindtank
    9 days ago

    Thinking about why my workplace needs this

  • @immortalityIMT
    2 months ago

    How do you do a cluster for training an LLM? First, 4x 8 GB in one system, and a second 4x 8 GB over LAN.

  • @ko260
    2 months ago

    So instead of a disk shelf, I could have one of those racks and fill it with HBAs instead of GPUs, or replace them all with M.2 cards. Would that work?! @Level1Techs

  • @AGEAnimations
    2 months ago

    Could this use all the GPUs for 3D Rendering in Octane or Redshift 3D for a single PC or is it just one GPU at a time? I know Wendell mentions SLI briefly but to have a GPU render machine connected to a small desktop pc would be ideal for a home server setup.

  • @christianhorn1999
    2 months ago

    Cool. Is that the same thing notebooks do that have a switchable iGPU and dedicated GPU?

  • @Ben79k
    2 months ago

    I had no idea something like this was possible. Very cool. It's not the subject of the video, but that iMac you were demoing on, is it rigged up to use as just a monitor? Or is it actually running? Looks funny with the glass removed.

  • @solidreactor
    2 months ago

    I have been thinking about this use case for a year now, for UE5 development, testing and validation. Recently I also thought about using image recognition with ML or "standard" computer vision (or a mix) for automatic validation. I can see this being valuable both for developers and for tech media benchmarking. I just need to allocate time to dive into this... or get it served "for free" by Wendell.

  • @ralmslb
    2 months ago

    I would love to see performance tests comparing the impact of the cable length, etc. Essentially, the PCIe speed impact not only in terms of latency but also throughput: the native solution vs. the Liqid fabric products. I have a hard time believing that this solution has zero downsides, so I wouldn't be surprised if the same GPU has worse performance over the Liqid fabric.

  • @MiG82au
    2 months ago

    Cable length is a red herring. An 8 m electrical cable only takes ~38 ns to pass a signal and the redriver (not retimer) adds sub 1 ns, while normal PCIe whole link latency is on the order of hundreds of ns. However, the switching of the Liqid fabric will add latency as will redrivers.
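
    The ~38 ns figure is just propagation delay; a quick back-of-the-envelope check, assuming a typical ~0.7c velocity factor for copper cabling:

        # One-way propagation delay of an 8 m copper link (velocity factor is an assumption)
        c = 3.0e8              # speed of light in vacuum, m/s
        velocity_factor = 0.7  # rough figure for copper cable
        length_m = 8.0

        delay_ns = length_m / (velocity_factor * c) * 1e9
        print(round(delay_ns))  # ~38 ns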

  • @paulblair898
    2 months ago

    There are most definitely downsides. Some PCIe device drivers will crash with the introduction of additional latency, because fundamental assumptions made when writing them don't handle the >100 ns latency the Liqid switch adds well. ~150 ns of additional latency is not trivial compared to the base latency of the device.

  • @_GntlStone_
    2 months ago

    Looking forward to a L1T + GN collaboration video on building this into a working gaming test setup (Pretty Please ☺️)

  • @Mervinion
    2 months ago

    Throw Hardware Unboxed into the mix. I think both Steves would love it. If only you could do the same with CPUs...

  • @cem_kaya
    2 months ago

    this might be very useful with CXL if it lives up to expectations.

  • @jjaymick6265
    2 months ago

    Liqid already has demos of CXL memory pooling with their fabric. I would not expect it to reach production before mid 2025.

  • @hugevibez
    2 months ago

    CXL already goes far beyond this as it has cache coherency, so you can pool devices together much more easily. I see it as an evolution of this technology (and the NVSwitch stuff), which CXL 3.0 and beyond expands on even further with the extended fabric capabilities and PCIe Gen 6 speeds. I think that's where the holdup has been, since it's a relatively new technology and those extended capabilities are significant for hyperscaler adoption, which is what drives much of the industry, and especially the interconnects subsector, in the first place.

  • @dangerwr
    2 months ago

    I could see Steve and team at GamersNexus utilizing this for retesting older cards when new GPUs come out.

  • @stamy
    2 months ago

    Let's say you have a WS motherboard with 4 PCIe x16 expansion slots. Can you dynamically activate/deactivate these PCIe slots in software so that the CPU can only see one at a time? Each of the slots is populated with a GPU, of course. This would then need to be combined with a KVM to switch the video output to the monitor.

  • @dmytrokyrychuk7049
    2 months ago

    Can this work in an internet cafe or would the latency be too big for competitive gaming?

  • @shodan6401
    2 months ago

    I know that GPU riser cables are common, but realistically, how much latency is introduced by having the GPU at such a physical distance compared to being directly in the PCIe slot on the board?

  • @sebmendez8248
    2 months ago

    This could genuinely be useful for massive engineering firms. Most engineering firms nowadays use 3D modelling, so having a server-side GPU setup could technically mean every single computer on site has access to a 4090 for model rendering and creation without buying and maintaining 100+ GPUs.

  • @Jdmorris143
    2 months ago

    Magic Wendell? Now I cannot get that image out of my head.

  • @talon262
    2 months ago

    My only question is how much latency does this add, even in a short run in the same rack?

  • @thepro08
    2 months ago

    so you saying i can do this with my 15 gbs internet, and connect my monitor or pc to a server game and ps5??? just have to pay 20 per month right like netflix?

  • @_neon_light_
    2 months ago

    From where can one buy this hardware? I can't find any info on Liqid's website. Google didn't help either.

  • @Jimster481
    2 months ago

    Wow this is so amazing, I bet the pricing is far out of the range of a small office like mine though

  • @stamy
    2 months ago

    Wow, very interesting! Can you control power on those PCIe devices? I mean, let's say only one GPU is powered on at a time, the one that is currently used remotely. Also, how do you send the video signal back to the monitor? Are you using an extra-long DisplayPort cable, or a fiber optic cable of some sort? Thank you. PS: What is the approximate price of such a piece of hardware?

  • @reptilianaliengamedev3612
    2 months ago

    Hey, if you have to record in that noisy environment again, you can leave about 15 or 30 seconds of silence at the beginning or end of the video to use as a noise profile. In Audacity, use the noise reduction effect to generate the noise profile, then run it on the whole audio track. It should sound about 10x better, or nearly get rid of all the noise.

  • @MartinRudat
    1 month ago

    I'm surprised Wendell isn't using a pair of communication earmuffs; hearing protection coupled with a boom mic (or a bunch of mics and post-processing) possibly being fed directly to the camera. As far as I know a good, comfortable set of earmuffs, especially something like the Sensear brand (which allow you to have a casual conversation next to a diesel engine at full throttle) are more or less required equipment for someone that works in a data center all day.

  • @georgec2932
    2 months ago

    How much worse is performance in terms of timing/latency compared to the slot on the motherboard? I wonder if it would be noticeable for gaming...

  • @spicyandoriginal280
    2 months ago

    Does the card support 2 GPUs at x8 each?

  • @ShankayLoveLadyL
    2 months ago

    Wow... this is truly amazing, impressive, I dunno... like, I usually expect smart stuff on this channel from my list of tech channels, but this time, what Wendell has done is in a completely different league. I bet Linus was thinking about something similar with his tech lab, but now there is someone to hire for his automated mass-testing project.

  • @misimik
    2 months ago

    Guys, can you help me gather Wendell's most used phrases? Like - coloring outside the lines - this is not what you would normally do - this is MADNESS - ...

  • @tim3172
    2 months ago

    He uses "RoCkInG" 19 times every video like he's a tween that needs extra time to take tests.

  • @GameCyborgCh
    2 months ago

    your test bench has an optical drive?

  • @brandonhi3667
    2 months ago

    fantastic video!

  • @chrisamon5762
    2 months ago

    I might actually be able to use all my pc addiction parts now!!!!!

  • @gollenda7852
    2 months ago

    Where can I get a copy of that Wallpaper?

  • @shadowarez1337
    2 months ago

    Have we cracked the code to pass an iGPU to a VM in, say, TrueNAS?

  • @Edsdrafts
    2 months ago

    How about power usage when you have all these GPUs running? Do the rest idle at reasonable wattage/temps when unused? It's also hard doing game testing due to thermals, as you are using a different enclosure from a standard PC, etc. There must be noticeable performance loss too.

  • @jjaymick6265
    2 months ago

    I can't speak for client GPUs but enterprise GPUs have power saving features embedded into the cards. For instance an A100 at idle pulls around 50 watts. At full tilt it can pull close to 300'ish watts. The enclosure itself pulls about 80 watts empty (no GPUs). As far as performance loss. Based on my testing of AI/ML workloads on GPUs inside Liqid fabrics compared with published MLPerf results I would say performance loss is very minimal.

  • @NickByers-og9cx
    2 months ago

    How do I buy one of these switches, I must have one

  • @arnox4554
    2 months ago

    Maybe I'm misunderstanding this, but wouldn't the latency between the CPU and the GPU be really bad here? Especially with the setup Wendell has in the video?

  • @BestSpatula
    2 months ago

    With SR-IOV, could I attach different VFs of the same PCIe card to separate computers?

  • @jjaymick6265
    2 months ago

    Liqid does support SR-IOV, but the VFs are not composable. The way SR-IOV is leveraged today is that a single card that supports SR-IOV is exposed to a host, and the VFs and SR-IOV BAR space are then registered by that host. That host can then present each of those VFs to a VM just as if the card were physically installed in the host.

  • @michaelsdailylife8563
    2 months ago

    This is really interesting and cool tech!

  • @cal2127
    2 months ago

    What's the price?

  • @daghtus
    2 months ago

    What's the extra latency?

  • @4megii
    2 months ago

    What sort of cable does this use? Could this be run over fibre instead? Also, can you have a single GPU box with a few GPUs and then use those GPUs interchangeably with different hosts? My thought process is: GPU box in the basement with multiple PCs connected over fibre cables, so I can just switch the GPU to any device connected to the fibre network.

  • @jjaymick6265
    2 months ago

    The cable is an SFF-8644 (mini-SAS) cable using copper as the medium. There are companies that use optical media, but they are fairly pricey.

  • @lemmonsinmyeyes
    2 months ago

    This could greatly cut down on hardware for render farms in VFX. Neat

  • @fanshaw
    2 months ago

    I just want this inside my workstation - a bank of x16 slots and I get to dynamically (or even statically, with dip switches) assign PCIE lanes to each slot or to the chipset.

  • @shodan6401
    2 months ago

    Man, I'm not an IT guy. I know next to nothing. But I love this sht...

  • @abavariannormiepleb9470
    2 months ago

    …could you hook up a second Liqid adapter in the same client system to a Gen5 x4 M.2 slot to not interfere with the 16 dGPU lanes?

  • @jjaymick6265
    2 months ago

    Liqid does support having multiple HBAs in a single host. Each Fabric device provisioned on the fabric is directly provisioned to a specific HBA so your thinking of isolating disk IO from GPU IO would work.

  • @abavariannormiepleb9470
    2 months ago

    Thanks for that clarification.

  • @kirksteinklauber260
    2 months ago

    How much does it cost?

  • @jayprosser7349
    2 months ago

    The Wizard at Techpowerup must be aware of this.

  • @yttt2220
    20 days ago

    What smartwatch is Wendell wearing?

  • @AdmV0rl0n
    2 months ago

    I like some of this. But let me look at the far end: outside of Parsec or similar, how am I re-routing the video signal or playback? Perhaps there needs to be a wink-wink, nudge-nudge Level One KVM solution. But outside of this, walking down to the basement to re-plumb the video cables old-school to the new/changed host kind of degrades the magic of the idea a bit...

  • @PsiQ
    2 months ago

    I might have missed it... but would/will/could there be an option to shove GPUs (or AI hardware) around on a server running multiple VMs, giving them to the VMs that currently need them and "unplugging" them from idle ones? Well, OK, you would need to run multiple uplinks at some point, I guess... or have all the GPUs directly slotted into your VM server.

  • @jjaymick6265
    2 months ago

    The ability of Liqid to expose single or multiple PCIe devices to single or multiple hypervisors is 100% a reality. As long as you are using a Linux-based hypervisor, hot-plug will just work. You can then expose those physical devices or vGPUs (if you can afford the license) to one or many virtual machines. The only gotcha is that to change GPU types in the VM you will have to power-cycle the VM, because I have not found any hypervisor (VMware / Proxmox / XCP-ng / KVM-QEMU) that supports hot-plugging PCIe into a VM.

  • @PsiQ
    2 months ago

    @jjaymick6265 thanks ;-) you seem to be going around answering questions here 🙂

  • @jjaymick6265
    2 months ago

    @PsiQ I have 16 servers (Dell MX blades and various other 2U servers) all attached to a Liqid fabric in my lab, with various GPUs/NICs/FPGAs/NVMe, that I have been working with for the past 3 years, so I have a fair bit of experience with what it is capable of. Once you stitch it together with some CI/CD stuff via Digital Rebar or Ansible, it becomes super powerful for testing and automation.

  • @abavariannormiepleb9470
    2 months ago

    Thought of another question: Can the box that houses all the PCIe AICs hard-power off/on the individual PCIe slots via the management software in case there is a crashed state? Or do you have to do something physically at the box?

  • @jjaymick6265
    2 months ago

    There are no slot power control features... There is, however, a bus reset feature of the Liqid fabric to ensure that devices are reset and in a good state prior to being presented to a host. So if you have a device in a bad state you can simply remove it from the host and add it back in, and it will get bus-reset in the process. Per-slot power control is a feature being looked at for future enclosures.

  • @abavariannormiepleb9470
    2 months ago

    Again, thanks for that clarification. Would definitely appreciate the per slot power on/off control, would be helpful for diagnosing maybe defective PCIe cards and of course also reduce power consumption with unused cards not just idling around.

  • @Gooberpatrol66
    2 months ago

    This would be great for KVM. Plug USB cards into PCIE, and send your peripherals to all your computers.

  • @ryanw.9828
    2 months ago

    Hardware unboxed! Steve!!!!

  • @Ironic-Social-Phobia
    2 months ago

    Now we know what really happened to Ryan this week, Wendell was practicing his magic trick!

  • @rojovision
    2 months ago

    What are the performance implications in a gaming scenario? I assume there must be some amount of drop, but I'd like to know how significant it is.

  • @Mpdarkguy
    2 months ago

    A few ms of latency I reckon

  • @bluefoxtv1566
    2 months ago

    Such a good thing for cloud computing.

  • @OsX86H3AvY
    2 months ago

    I'd like to be able to hotplug GPUs in my running VMs as well... how nice would it be to have, say, two or three VM boxes for CPU and MEM, one SSD box, one GPU box, and one NIC box, so you could swap any NIC/GPU/disk to any VM in any of those boxes in any combination... that'd be sweet... I definitely don't need it, but that just makes me want it more.

  • @jjaymick6265
    2 months ago

    Over the last couple days I have been working on this exact use case. In most environments this simply is not possible, however in KVM (libvirt) I have discovered the capability to hot-attach a PCIe device to a running VM like this... virsh attach-device VM-1 --file gpu1.xml --current . So with Liqid I can hot attach a GPU to the hypervisor and then hot attach said GPU all the way to the VM. The only thing I have not figured out how to get done is to get the BAR address space for GPU pre-allocated in the VM so the device is actually functional without a VM reboot. As of today the GPU will show up in the VM but drivers cannot bind to it because there is no bar space allocated for it so in lspci the device has a bunch of unregistered memory bars and drivers don't load. Once bar space can be pre-allocated in the VM I have confidence this will work. Baby steps.
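
    For reference, the gpu1.xml in that virsh command would just be a standard libvirt <hostdev> PCI passthrough stanza, and the same hot-attach can be driven from the libvirt Python bindings. A sketch under assumptions (the domain name VM-1 and host PCI address 0000:81:00.0 are hypothetical), with the caveat from the comment above that the guest still needs BAR space pre-allocated before drivers can bind:

        import libvirt

        # Standard libvirt PCI passthrough description (addresses are placeholders).
        HOSTDEV_XML = """
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
        """

        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByName("VM-1")
        # Equivalent to: virsh attach-device VM-1 --file gpu1.xml --current
        dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CURRENT)
        conn.close()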

  • @dgo4490
    2 months ago

    How's the latency? Every PHY jump induces latency, so considering all the hardware involved, this should have at least 3 additional points of extra latency. So maybe like 4-5 times the round trip of native pcie...

  • @jjaymick6265
    2 months ago

    100ns per hop. In this specific setup that would mean 3 hops between the CPU and the GPU device. 1 hop at the HBA, 1 hop at the switch, 1 hop at the enclosure. so 600 nanoseconds round trip.
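
    Worked out with those numbers (the 100 ns-per-hop figure is taken from the comment above):

        hops_one_way = 3                      # HBA, switch, enclosure
        per_hop_ns = 100
        print(hops_one_way * per_hop_ns * 2)  # 600 ns added round trip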

  • @felixspyrou
    2 months ago

    Here, take my money, this is amazing. With a lot of computers, I would be able to use my best GPU on all of them!

  • @annebokma4637
    2 months ago

    I don't want an expensive box in my basement. In my attic high and DRY. 😂

  • @vdis
    2 months ago

    What's your monthly power bill?!

  • @SprocketHoles
    10 days ago

    Imagine this built into a laptop for an external dock. Full-fat GPU running at full speed on the desk.

  • @leftcoastbeard
    2 months ago

    Reminds me of Compute Express Link (CXL)

  • @philosoaper
    2 months ago

    Fun.. not sure it would be ideal for competitive gaming exactly.. but very very cool

  • @LeminskiTankscor
    2 months ago

    Oh my. This is something special.

  • @Dr_b_
    2 months ago

    Do we want to know what this costs?

  • @brianmccullough4578
    2 months ago

    Woooooo! PCI-E fabrics baby!!!

  • @ThatKoukiZ31
    2 months ago

    Ah! He admits it, he is a wizard!

  • @Orochimarufan1900
    1 month ago

    This looks like it might also eventually enable migration of VMs with PCIe passthrough.

  • @hugevibez
    2 months ago

    The real question is, does this support Looking Glass so you can do baremetal-to-baremetal video buffer sharing between hosts? I know it should technically be possible since PCIe devices on the same fabric/chassis can talk to one another. Yes, my mind goes to some wild places, I've also had dreams of Looking Glass over RDMA. Glad you've finally got one of these in your lab. Anxiously awaiting the CXL revolution which I might be able to afford in like a decade.

  • @GorditoCrunchTime
    2 months ago

    Wendell: “you may have noticed..” Me: “I noticed that Apple monitor!”

  • @gh975223
    1 month ago

    How much does this cost? Remember, this is a home-user requirement; enterprise is a distant second use case. The HOME USER is the far more important option!

  • @AnonymousUser-ww6ns
    2 months ago

    I would describe this as SDN (software-defined networking), but for GPUs.

  • @animalfort3183
    2 months ago

    I don't know how to thank you enough without being weird man....XOXO

  • @HumblyNeil
    2 months ago

    The iMac bezel blew me away...

  • @ferox63
    2 months ago

    What kind of latency penalties are inherent with this system?

  • @benny-fo7bd
    2 months ago

    Man, a system that would convert or retrofit PCIe fabric over and/or to InfiniBand would shake up the home lab community if it were somehow compatible with Windows, Windows Server, Proxmox and all that, or if it could somehow be achieved just on the software side using only InfiniBand as the transport layer.

  • @pristine3700
    2 months ago

    This seems tailor-made for Steve from Hardware Unboxed. Shame it doesn't work that well with Windows for hot-plug, but I bet it would simplify benchmarking multiple GPUs on the same CPU platform.

  • @MlnscBoo
    2 months ago

    I love my RX 6800, but with hardware acceleration on in my browser, it pulls 50 watts just to play a video on YouTube! Why can't I use my 11700K's integrated graphics for YouTube?

  • @tim3172
    2 months ago

    You can on Nvidia -> Nvidia settings -> 3D -> Program settings -> select Chrome and set it to "Power Saving (Intel UHD Graphics 750)". I don't have a system with an AMD graphics card and another GPU, but it might be similar on that.
