Trying Out 40GbE: Does It Make Sense in a Homelab?

Science & Technology

In this video I take a look at 40GbE and see what hardware is needed to take full advantage of a 40GbE connection. I try many different servers and configurations to see how speeds change.
Thanks to UnixSurplus for supplying the 40GbE hardware used in this video. Check out the network cards and cables used in this video here: www.ebay.com/itm/355139412067...
unixsurplus.com/
0:00 Intro
0:28 40GbE Hardware Overview
2:45 Filling 40GbE with file copies
7:39 Does 40GbE help with random IO?
8:31 Other notes about 40GbE
9:53 Conclusion

Comments: 50

  • @alexanderg9106 (4 months ago)

    Please keep in mind: these cards need active cooling, a fan that moves some air over them. A 6 cm fan at 5 V will do it. Without airflow the cards will throttle down and can even stop working entirely.

  • @ElectronicsWizardry (4 months ago)

    Thanks for pointing that out. I was doing some testing and had extremely low speeds and dropouts. Then I noticed the syslog messages and the card had an extremely hot heatsink. A fan made the card work as normal again.

  • @esotericjahanism5251 (3 months ago)

    Most 10Gb NICs need active cooling, especially if you don't have them in a rackmount chassis. I stuck some X540s in a few old Lenovo ThinkStation Tiny 1L systems I have running in an HA cluster on Proxmox. I took the blower-fan coolers off a few old Nvidia Quadro cards I had, modified them to fit onto my NICs, and spliced the CPU fan connections to operate the blower fan; that keeps them running cool even under pretty intensive transfers. Did something similar for the NIC in my main desktop: took a single-slot GPU cooler off an old Radeon Pro card and a copper heatsink and modified it to fit. Then I just adapted the JST-XH fan connector to a standard KF2510 4-pin PWM fan connector and hooked it up to my fan hub with a temp probe to set a custom fan curve for it.

  • @TheRealMrGuvernment (29 days ago)

    This! Many don't realise these are cards designed for servers with high-CFM airflow pushing through the entire chassis.

  • @nadtz (4 months ago)

    I got a Brocade ICX 6610 for a song, and after some hunting around found out you can use the stacking 40Gb ports for regular data after some tinkering. Cards and cables will cost a bit, but honestly, considering I got the switch for $120 to add more 10Gb ports, another $120 or so for 40Gb cards and cables to test it with is about the price of a MikroTik CRS309-1G-8S+IN all said and done. It uses more power and makes more noise (and the cards run significantly hotter), but it will be fun to play with.

  • @LUFS (4 months ago)

    Another great video, thank you. I've gotten a lot of inspiration and knowledge from this channel.

  • @mhos5730 (4 months ago)

    Excellent video. Good info!

  • @shephusted2714 (4 months ago)

    Really good content. A workstation connected to a dual NAS with dual-port 40G is a great setup for anybody, but especially the small-business sector: no switch needed, and you can sync the NASes quickly. Maybe in a follow-up you could use NVMe RAID 0 arrays and bonded 40G for effectively 80G between workstation and NAS. The 56G ConnectX cards are around 50 bucks, so this can make for an affordable speed boost, especially if you are moving a lot of video/VMs/backups or other big data around. Good to see this content and how relatively easy it is to get the most performance with SMB multichannel and jumbo frames.

  • @CarAudioInc (4 months ago)

    Great vid as always. I'm running 10Gb, though realistically it only very rarely gets used. Bit of future-proofing I suppose, even though I wonder if that future will ever come lol.

  • @ewenchan1239 (4 months ago)

    Seven things:
    1) I'm currently running 100 Gbps InfiniBand in my homelab. For storage, because my backend servers are all running HDDs, I think the absolute maximum I have ever been able to push was around 16-24 Gbps (block transfers rather than file transfers). In nominal practice it's usually closer to around 4 Gbps max (~500 MB/s). But again, that's because I am using 32 HDDs on my backend with no SSD caching. (SSDs are the brake pads of the computer world -- the faster they go, the faster you'll wear them out.)
    2) For NON-storage related tasks, my HPC FEA application is able to use somewhere around 83 Gbps or thereabouts when doing RAM-to-RAM transfers between the compute nodes in my micro HPC cluster. That's roughly where RDMA lands in an actual RDMA-aware application. (Or really, it's MPI that's able to use the IB fabric.) A long time ago I created four 110 GB RAM disks, built a distributed striped Gluster volume on them, and exported the whole thing back to the IB network as an NFSoRDMA share/export. But if my memory serves, that ended up being quite disappointing as well, as I don't think it even hit 40 Gbps due to the various layers of software that were running.
    3) "RoC over E" is actually just "RoCE" -- RDMA over Converged Ethernet.
    4) If you want to try and push faster speeds, you might want to look into running SMB Direct rather than SMB Multichannel.
    5) For network data transfers, I don't use SMB. I mean, I can, but peak performance usually happens between Linux systems (CentOS 7.7.1908), because one can be the server that creates an NFS export with NFSoRDMA enabled, whilst the Linux client mounts said NFS share/export with the "rdma,port=20049" option to take advantage of NFSoRDMA.
    6) Ever since I went through my mass migration project in Jan 2023, MOST of the time my Mellanox MSB-7890 externally managed 36-port 100 Gbps IB switch is OFF. Since my VMs and containers run off a single server (the result of said mass consolidation project), VM-to-host communication is handled via virtio-fs for the clients that support it, and for the clients that don't support virtio-fs (e.g. SLES12 SP4, out of the box) I use "vanilla" NFS. Again, with HDDs it's going to be slow anyway, so even though the virtio NICs show up as 10 Gbps NICs, I don't need a switch, nor any cables, nor any physical NICs, since all of it is handled via the virtio virtual NICs attached to my VMs and containers in Proxmox. (NFS, BTW, also runs with 8 processes by default.)
    7) The NICs, cables, and switches are only cheaper in absolute $ terms; on a $/Gbps basis they're actually still quite expensive. My Mellanox switch is capable of 7.2 Tbps full-duplex switching capacity, which means that given the price I paid for said switch at the time (~$2270 USD), I am at about $0.31517/Gbps. Pretty much ALL the 10 Gbps switches I've looked at would be a MINIMUM of 4x higher than that; most of the time it's actually closer to 10 TIMES that price. I.e., if you want an 8-port 10 Gbps switch (80 Gbps half-duplex/160 Gbps full-duplex switching capacity), that switch would have to be priced at or below $50.43 USD to be equal in cost in terms of $/Gbps to my IB switch, and 10 Gbps switches still aren't quite there yet. So from that perspective, going 100 Gbps IB was actually a better deal for me.
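
For reference, the $/Gbps comparison in point 7 can be reproduced with a few lines. The only inputs are the figures quoted in the comment (switch price and full-duplex switching capacity); this is just a sketch of the arithmetic, not a pricing claim.

```python
# Cost-per-gigabit comparison using the numbers quoted in the comment above.

def dollars_per_gbps(price_usd: float, capacity_gbps: float) -> float:
    """Price divided by full-duplex switching capacity."""
    return price_usd / capacity_gbps

# 36-port 100 Gbps IB switch: ~$2,270 for 7.2 Tbps (7,200 Gbps) full duplex.
ib = dollars_per_gbps(2270, 7200)
print(f"IB switch: ~${ib:.3f}/Gbps")

# An 8-port 10 GbE switch offers 160 Gbps full-duplex capacity, so to match
# the IB switch on $/Gbps it would have to cost no more than:
print(f"8-port 10 GbE break-even price: ~${ib * 160:.2f}")
```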

  • @OsX86H3AvY (4 months ago)

    Talking SSDs reminded me of a recent issue I had where my ZFS share would start out at like 400 MB/s (10G connections) but would quickly drop BELOW gigabit speeds, and that was with 6 spinning 7200 rpm drives and a 128 GB SSD cache. I mention it because REMOVING the cache actually sped it up to 350 MB/s+ consistent speeds; the SSD was what was bottlenecking my rusty ZFS pool because it was trash flash as an ARC cache. Anyway, not really related, but I thought that was one of those interesting things where I had to tweak/test/tweak/test ad infinitum as well.

  • @LtdJorge (1 month ago)

    Most people don't need L2ARC, and it consumes more RAM for no real benefit. If you only have one SSD for multiple HDDs, you should set it up as a special vdev, although having two mirrored is recommended, since losing data from the special vdev destroys your pool (irrecoverably).

  • @ryanmalone2681 (11 days ago)

    I have 10G on my network and only saw it go above 1G once with actual usage (i.e. not testing), so I’m upgrading to 25G just because I feel like it.

  • @OsX86H3AvY (4 months ago)

    Great vid, keep em coming! Also, I recently went to 10G/2.5G for my home network, so I looked into 40G at the time. I don't think I heard it mentioned, but be careful about going 40G -> 4x10G with breakout cables: to my understanding that only works on switches, and I think it's only for one brand (Mellanox maybe? I forget). I ended up going with an X710-DA4 instead for 40Gb as 4x10G without needing to worry about any issues. I think those cables were specifically made for going from a core switch to your other switches or something like that, but if I'm wrong please correct me; mostly I just remember it seemed like a hassle waiting to happen, so I didn't go that way.

  • @ElectronicsWizardry (4 months ago)

    I don't have a ton of experience with those breakout cables, but I think you're right that they're pretty picky, and you can't just use them for every use case.

  • @ericneo2 (4 months ago)

    I'd be curious to see what the difference would be if you used iSCSI instead of SMB, or, if you could do FC, what the RAM disk to RAM disk speed would be.

  • @ElectronicsWizardry (4 months ago)

    I didn't try iSCSI, but I did try NFS with similar performance as SMB. I might do a video in the future looking at NFS, SMB, iSCSI and more.

  • @silversword411 (4 months ago)

    Love to see: lists of Windows and Linux commands to check SMB configuration, then server and client troubleshooting commands. Break down your process and allow repeatable testing for others! :) Great video.
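
For anyone wanting a starting point on the Linux side, here is a rough sketch that checks whether SMB multichannel is in play; it assumes a Samba server and a Linux cifs client with the usual tools installed (testparm from Samba, and the kernel's /proc/fs/cifs/DebugData). On Windows, the PowerShell cmdlets Get-SmbMultichannelConnection and Get-SmbClientConfiguration cover similar ground.

```python
#!/usr/bin/env python3
"""Rough SMB multichannel sanity checks on Linux (sketch, not a full tool)."""
import shutil
import subprocess
from pathlib import Path

def samba_server_multichannel() -> None:
    # 'testparm -s -v' prints the effective smb.conf including defaults;
    # look for the 'server multi channel support' option.
    if shutil.which("testparm") is None:
        print("testparm not found (is Samba installed?)")
        return
    out = subprocess.run(["testparm", "-s", "-v"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "multi channel" in line.lower():
            print("server:", line.strip())

def cifs_client_sessions() -> None:
    # On a Linux cifs client, /proc/fs/cifs/DebugData lists active sessions
    # and the channels in use when multichannel is negotiated.
    debug = Path("/proc/fs/cifs/DebugData")
    if debug.exists():
        print(debug.read_text())
    else:
        print("No cifs debug data (no cifs mounts, or module not loaded).")

if __name__ == "__main__":
    samba_server_multichannel()
    cifs_client_sessions()
```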

  • @jeffnew1213 (4 months ago)

    I think 40GbE is getting pretty rare these days, replaced by 25GbE. While a 40GbE switch port breaks out into 4 x 10GbE connections, a 100GbE switch port breaks out into 4 x 25GbE connections. Switches with 25GbE and 100GbE ports are available at quite decent pricing for higher-end home labs. Further, the Mellanox ConnectX-3 series of cards has been deprecated or made outright incompatible with some server operating systems and hypervisors in favor of ConnectX-4 cards. I am thinking of ESXi here, which is what I run at home. I have a Ubiquiti USW-Pro Aggregation switch with 4 x 25GbE SFP28 ports, each connected to a Mellanox ConnectX-4 card: two in two Synology NASes, and two more in two Dell PowerEdge servers. Good video. You're a good presenter!

  • @SharkBait_ZA (4 months ago)

    This. Mellanox ConnectX-4 cards with 25Gbps DAC cables are not so expensive.

  • @skyhawk21 (1 month ago)

    I'm finally at 2.5Gb, but an old mix of HDDs on a Windows box using Storage Spaces doesn't max it out. Also, I was going to get the server box a 10Gbps NIC to go into the switch's SFP port.

  • @skyhawk21 (1 month ago)

    Got a Sodola switch with 10Gbps SFP+ ports, but I don't know which cheap compatible adapter to buy, and which is best to go to a future 10Gbps NIC at the server.

  • @shephusted2714 (4 months ago)

    It makes too much sense in this day and age where networking is arguably the weak link, especially for small biz. Dual NAS and workstation connected with 40G (no switch) makes a ton of sense, saves time, and ups productivity and efficiency. Maybe you could make 'the ultimate NAS' with an older platform in order to max out the RAM; a Z420 may be the ticket with a lower-powered CPU? Could be a decent starting point for an experimentation/bench lab.

  • @Aliamus_ (4 months ago)

    Set up 10GbE at home (dual XCAT 3's with a switch, an XGS1210-12, in between to connect everything else: 2 SFP+ 10Gb, 2 2.5GbE RJ45, and 8 1GbE RJ45). I usually get around 600 MB/s when copying files; if I make a RAM share and stop all Dockers and VMs I get roughly 1.1 GB/s depending on file size. Iperf3 tells me I'm at 9.54-9.84 Gb/s.

  • @idle_user (3 months ago)

    My homelab has the issue of different NIC speeds: a Proxmox 3-node cluster with 4x 2.5Gb, a personal PC with 10GbE, and a Synology NAS with 4x 1Gb. I use SMB multichannel on the NAS to get bursts of speed when it can. It helps out quite a bit.

  • @insu_na (4 months ago)

    My servers all have dual 40GbE NICs and they're set up in a circular daisy chain (with spanning tree). It's really great for when I want to live-migrate VMs through Proxmox, because it goes *fast* even if the VMs are huge. Not much other benefit unfortunately, since the rest of my home is connected by 1GbE (despite my desktop PC also having a 40GbE NIC that is unused due to cat🐈🐈‍⬛-based vandalism).

  • @CassegrainSweden (4 months ago)

    One of my cats bit off both fibers in a 10 Gbps LAG :) I wondered why the computer was off the network, and after some investigating noticed bite marks on both fibers. The fibers have since been replaced with DAC cables that apparently do not taste as good.

  • @insu_na (4 months ago)

    @CassegrainSweden Hehe. The cable runs are too far for DAC for me, so I gotta make do with Cat6e. I'm afraid that if I use fiber again, maybe this time when breaking the fiber they might look into it and blind themselves. Glad your cat and your DAC are doing fine!

  • @ninjulian5634 (4 months ago)

    Ok, maybe I'm stupid, but PCIe 3.0 x4 only allows for a theoretical maximum of 4 GB/s, so 32 Gb/s, right? How could you max out one of the 40 Gb/s ports with that connection?

  • @ElectronicsWizardry (4 months ago)

    Yea, you're right, you can't max out 40GbE with a gen3 x4 SSD. I did most of my testing with RAM disks in this video to rule out this factor. You can get a lot more speed over 40GbE than 10GbE with these SSDs though.

  • @ninjulian5634 (4 months ago)

    @ElectronicsWizardry Oh okay, I thought I was going insane for a minute lol. But my comment was in relation to your statement around 0:45 and not SSDs. Great video though. :)

  • @ElectronicsWizardry (4 months ago)

    Oh derp. A gen 3 x4 slot and a gen 2 x8 slot would limit speeds a bit, and I unfortunately didn't do any specific testing. I should have checked a bit more, but I believe I could get close to full speed on those slower link speeds.
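
For the numbers behind this exchange, a quick back-of-the-envelope sketch of the link limits mentioned above; it models only line rate and encoding overhead, so real-world throughput is a bit lower still.

```python
# Rough usable bandwidth of the PCIe links discussed above. Only line rate and
# encoding overhead are modelled; packet (TLP/DLLP) overhead trims a few more
# percent in practice.

def pcie_gbps(gen: int, lanes: int) -> float:
    """Approximate usable PCIe link bandwidth in Gbit/s."""
    rates = {
        1: (2.5, 8 / 10),     # 2.5 GT/s per lane, 8b/10b encoding
        2: (5.0, 8 / 10),     # 5.0 GT/s per lane, 8b/10b encoding
        3: (8.0, 128 / 130),  # 8.0 GT/s per lane, 128b/130b encoding
    }
    gt_per_s, efficiency = rates[gen]
    return gt_per_s * efficiency * lanes

for gen, lanes in [(3, 4), (2, 8), (3, 8)]:
    bw = pcie_gbps(gen, lanes)
    print(f"PCIe gen{gen} x{lanes}: ~{bw:.1f} Gb/s ({bw / 8:.2f} GB/s)")

# gen3 x4 -> ~31.5 Gb/s and gen2 x8 -> ~32.0 Gb/s: both below the 40 GbE line
# rate, matching the point raised above. gen3 x8 -> ~63.0 Gb/s has headroom.
```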

  • @declanmcardle (4 months ago)

    Why not use a different protocol to SMB? NFS/FTP/SFTP/SCP? Host your own speedtest web server and then see what the network speed is?

  • @declanmcardle (4 months ago)

    20 seconds later....yes, jumbo frames too...

  • @ElectronicsWizardry (4 months ago)

    I use SMB as my default as it's well supported. I tried NFS as well and got similar performance, so I didn't mention it in the video. I might look into different protocols for network file copies later on.

  • @declanmcardle (4 months ago)

    @ElectronicsWizardry Also, when striping the volume, make sure you get the full 5 GB/s from hdparm -t or similar, and then maybe express the 3.2 GB/s over the theoretical max to get a percentage.

  • @declanmcardle (4 months ago)

    Also, you probably know this, but use atop or glances to make sure it's not some sort of interrupt bottleneck. And again, Shift+H in top shows threads, to avoid needing htop.
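
Tying together the jumbo-frame and "percentage of theoretical max" suggestions in this thread, a small sketch of the arithmetic; the 3.2 GB/s figure is just the example quoted above, and the overhead numbers assume plain IPv4/TCP with no options.

```python
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS, preamble/SFD, inter-frame gap
IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP headers, no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of on-the-wire bytes that carry TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{tcp_efficiency(mtu) * 100:.1f}% of line rate usable for data")

# Expressing a measured transfer as a percentage of the 40 GbE line rate,
# using the 3.2 GB/s figure from the comment above as the example.
line_rate_gb_per_s = 40 / 8
measured_gb_per_s = 3.2
print(f"3.2 GB/s is {measured_gb_per_s / line_rate_gb_per_s * 100:.0f}% of 40 GbE line rate")
```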

  • @Anonymous______________ (4 months ago)

    Multi-threaded or parallel IO workloads are the only way you're going to realistically saturate a 40GbE NIC. iperf3 is a very useful tool for line testing, as it measures raw TCP performance, and it can also help identify potential CPU bottlenecks. If you're attempting to simply test performance through NFS or SMB, those protocols add significant overhead in and of themselves. Edit: Apparently my previous comment was removed by our YouTube overlords lol.
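
A minimal sketch of that kind of parallel-stream line test, assuming iperf3 is installed on both ends and an iperf3 server ("iperf3 -s") is already running on the other host; the address below is a placeholder.

```python
#!/usr/bin/env python3
"""Run a parallel-stream iperf3 test and report aggregate throughput (sketch)."""
import json
import subprocess

SERVER = "192.168.1.50"   # placeholder address of the iperf3 server
STREAMS = 8               # parallel TCP streams (-P) to help saturate 40 GbE
DURATION = 10             # test length in seconds (-t)

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-P", str(STREAMS), "-t", str(DURATION), "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# 'end.sum_received.bits_per_second' is the aggregate receive-side throughput.
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"{STREAMS} streams: {bps / 1e9:.2f} Gbit/s aggregate")
```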

  • @dominick253 (4 months ago)

    Some of us still struggling to fill a gigabit 😂😂😂

  • @MagicGumable (4 months ago)

    This is actually true, but hey, people also buy sports cars without ever driving them faster than 60 mph.

  • @deeeezel (4 months ago)

    My best switch is 10/100 😅

  • @Mr.Leeroy (4 months ago)

    Modern HDDs do 1.3 - 2 Gbps each though

  • @brainwater (4 months ago)

    I finally utilized my gigabit connection fully for a full backup. I scheduled it for each night, but since I set it up with rsync it now backs up 20 GB+ in a minute, since it doesn't have to transfer much.

  • @ckckck12 (11 days ago)

    Brosef... love your show. How the hell are you talking about 40GbE and then flash me the panties of two PS/2 plugs for keyboard and mouse... This is like a joke or something... lol

  • @ElectronicsWizardry (11 days ago)

    Servers love their old ports. Stuff like VGA is still standard on new servers alongside multi-hundred-gig networking. PS/2 is basically gone on new servers, but I was testing with some older stuff.

  • @pepeshopping (4 months ago)

    When you don't know, it shows. MTU will help, but it can have other issues. Try checking/learning about TCP window size, selective acknowledgments, and fast retransmit, and you won't be as "surprised"...
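
For context on the TCP window size point, a short sketch of the bandwidth-delay product, which is roughly the amount of in-flight data needed to keep a link full; the RTT values are just assumed example numbers.

```python
# Bandwidth-delay product: the amount of data that must be in flight (and so
# roughly the TCP window needed) to keep a link busy. RTTs are example values.

def bdp_bytes(link_gbps: float, rtt_ms: float) -> float:
    """In-flight bytes needed to fill a link of link_gbps at the given RTT."""
    return link_gbps * 1e9 / 8 * (rtt_ms / 1000)

for link_gbps in (10, 40):
    for rtt_ms in (0.1, 0.5):   # assumed LAN-ish round-trip times
        window_kib = bdp_bytes(link_gbps, rtt_ms) / 1024
        print(f"{link_gbps} Gb/s at {rtt_ms} ms RTT -> ~{window_kib:.0f} KiB window")

# If the advertised TCP window (bounded by the kernel's rmem/wmem limits) is
# smaller than this, the link cannot be filled regardless of NIC speed.
```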

  • @zyghom (4 months ago)

    And I just upgraded my 1Gbps to 2.5Gbps... geeeeeez, 40Gbps...
