INSANE PetaByte Homelab! (TrueNAS Scale ZFS + 10Gb Networking + 40Gb SMB Fail)

Science & Technology

Check out our INSANE 1PB network share powered by TrueNAS Scale!
FEATURED GEAR:
NetApp DE6600 geni.us/netapp_de6600_60bay
NetApp DE6600 digitalspaceport.com/netapp-d...
DE6600 SAS2 Modules geni.us/6600_SAS2_EMM
DE6600 PSU X-48564-00-R6 geni.us/DE6600_PSU
DE6600 Replacement Tray X-48566-00-R6 geni.us/DE6600_tray
DE6600 Fan Canister X-48565-00-R6 geni.us/DE6600_fan_canister
👇HOMELAB GEAR (#ad)👇
RACK - StarTech 42U Rack geni.us/42u_Rack
DISK SHELF (JBOD) + CABLE
NetApp DS4246 geni.us/netapp_4246_caddies
NetApp DS4243 geni.us/netapp-ds4243-wCaddy
QSFP to 8088 (SAS Cable needed for 4246 & 4243 JBODs) geni.us/mCZCP
HARD DRIVES shop.digitalspaceport.com
RAM
DDR4 RAM geni.us/DDR4_ECC_8x32GB
SERVER
Dell r720 geni.us/OAJ7Fl
Dell r720xd geni.us/5wG9n6
Dell t620 geni.us/dell_t620_256gb
SERVER RAILS + CABLE MANAGEMENT
APC Server Rails geni.us/APC-SERVER-RAILS
Cable Zip Ties geni.us/Cable_ZipTies
Monoprice 1U Cable Mgmt geni.us/Monoprice_1UCableMgmt
Cable Mgmt Tray geni.us/ServerRackCableMgmt
Dymo Label Maker geni.us/DYMO_LabelMaker
HBA
LSI 9207-8e geni.us/LSI-9207-8e
ENCLOSURE
Leviton 47605-42N geni.us/leviton_47605-42N
SWITCH
Dell 5548 Switch geni.us/Dell_5548
Mellanox sx6036 Switch geni.us/Mellanox_SX6036
Brocade icx6610 Switch geni.us/Brocade_ICX6610
UPS
Eaton 9PX6K geni.us/Eaton9PX6K
Eaton 9PX11K geni.us/Eaton9PX11K
Be sure to 👍✅Subscribe✅👍 for more content like this!
Join this channel to get Store discounts + more perks www.youtube.com/@digitalspace...
Shop our Store (receive 3% or 5% off unlimited items w/channel membership) shop.digitalspaceport.com/
Please share this video to help spread the word and drop a comment below with your thoughts or questions. Thanks for watching!
☕Buy me a coffee www.buymeacoffee.com/gospaceport
🔴Patreon / digitalspaceport
🛒Shop
Check out Shop.DigitalSpaceport.com for great deals on hardware.
DSP Website
🌐 digitalspaceport.com
Chapters
0:00 TrueNAS Scale PetaByte Project
0:48 Unboxing a PetaByte
1:55 Putting drives in NetApp DE6600
4:22 JBOD Power Up
4:47 Wiring Up 40Gb Network
7:00 ZFS SSD Array Install
8:10 TrueNAS Scale Hardware Overview
9:24 Create ZFS Flash Array
10:00 Create PB ZFS Array
11:00 Setup SMB Share TrueNAS Scale
12:30 Map 1PB Network Share
13:05 Moving Files over 40Gb
14:30 40Gb network SMB Windows 11
16:20 Troubleshooting SMB Windows networking performance
19:35 Could it be the EPYC CPU?
#homelab #datacenter #truenas #zfs #homedatacenter #homenetwork #networking
Disclaimers: This is not financial advice. Do your own research to make informed decisions about how you mine, farm, invest in and/or trade cryptocurrencies.
*****
As an Amazon Associate I earn from qualifying purchases.
When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.
Other Merchant Affiliate Partners for this site include, but are not limited to, Newegg, Best Buy, Lenovo, Samsung, and LG. I earn a commission if you click on links and make a purchase from the merchant.
*****

Comments: 174

  • @HomeSysAdmin (a year ago)

    2:36 Ooooh that perfect drive cube stack!! Wow, 1PB in a single array - you're making my 8x 18TB look tiny.

  • @DigitalSpaceport (a year ago)

    I had a hard time committing to taking them and putting them in the JBOD, the stack looked so good.

  • @CaleMcCollough (10 months ago)

    He must be single. There is no way the wife would allow that much server hardware in the house.

  • @quochung9999 (6 months ago)

    He paid off the house, I guess.

  • @GaryFromIT (5 months ago)

    He does in fact have a wife, she even videos him with it for hours at a time.

  • @slug.racing (25 days ago)

    Maybe you should borrow some big boy pants.

  • @BigBenAdv (a year ago)

    You probably need to look into NUMA and QPI bus saturation being the issue on your TrueNAS box, since it's an older dual-socket Xeon setup. Odds are the QPI bus is saturated when performing this test.

    For some context: I've successfully run single-connection sustained transfers up to 93Gbit/s (excluding networking overheads on the link) between two Windows 2012 R2 boxes in a routed network as part of an unpaid POC back in the day (2017). Servers used were dual-socket Xeon E5-2650 v4 (originally) w/ 128GB of RAM, running StarWind RAMdisk (because we couldn't afford NVMe VROC for an unpaid POC). Out of the box without any tuning on W2012R2, I could only sustain about 46-50Gbit/s. With tuning on the Windows stack (RSC, RSS, NUMA pinning & process affinity pinning), that went up to about 70Gbit/s (the QPI bus was the bottleneck here). Eventually, I took out the 2nd socket proc from each server to eliminate QPI bus saturation and the pinning/affinity issues and obtained 93Gbit/s sustained (on the Arista switches running OSPF for routing, the actual utilization with the networking overheads was about 97Gbit/s). The single 12C/24T Xeon was only about 50% loaded with non-RDMA TCP transfers. The file transfer test was done with a Q1T1 test in CrystalDiskMark (other utilities like diskspd or Windows Explorer copies seem to have some other limitations/inefficiencies).

    For the best chance at testing such transfers, I'd say you should remove one processor from the Dell server running TrueNAS:
    1) Processes running on cores on socket 1 will need to traverse the QPI to reach memory attached to socket 2 (and vice versa).
    2) If your NIC and HBA are attached to PCIe lanes on different sockets, that's also traffic that will hit your QPI bus.
    3) Processes on socket 1 accessing either the NIC or HBA attached to PCIe on the 2nd socket will also hit your QPI bus.
    All of these can end up saturating the QPI and 'artificially' limit the performance you could get. By placing all memory, the NIC, and the HBA on only one socket, you can effectively eliminate QPI link saturation issues.
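
As a rough illustration of the affinity-pinning idea described in that comment, here is a minimal Python sketch using psutil; the core list is hypothetical and assumes socket 0 owns logical cores 0-11, so check the real NUMA topology first (lscpu on Linux, Coreinfo on Windows).

```python
# Minimal sketch of process/affinity pinning, assuming socket 0 owns logical cores 0-11.
# psutil.cpu_affinity() works on Windows and Linux; the core IDs here are hypothetical.
import psutil

SOCKET0_CORES = list(range(12))   # hypothetical: first 12 logical cores sit on socket 0

proc = psutil.Process()           # could also be psutil.Process(pid_of_the_copy_tool)
print("affinity before:", proc.cpu_affinity())
proc.cpu_affinity(SOCKET0_CORES)  # keep the workload on the socket that owns the NIC/HBA
print("affinity after: ", proc.cpu_affinity())
```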

  • @DigitalSpaceport (a year ago)

    Incredible info. Thanks for writing this up. I will remove a processor and test this out with the NIC and HBA attached.

  • @punx4life85 (a year ago)

    Awesome vid! Thanks g! Picked up another 66TB for my farm

  • @thecryptoecho (a year ago)

    Love catching up on your build. You never stop building.

  • @DigitalSpaceport (a year ago)

    I'm going to get into software more at some point here, but man, machines are fun!

  • @rodrimora (a year ago)

    I believe that the Windows Explorer copy/paste is limited to 1 core, so that would be the bottleneck. Also, I think at 14:40 you said "write cache", but the RAM in ZFS is not used for write cache as far as I know, only for read cache.

  • @DigitalSpaceport (a year ago)

    Yeah, I'm checking into a robocopy GUI here. I spent a day trying to get SMB multichannel to work with the other 10Gb NIC in the computer, so I hope to be able to track it down soon. In the past my Ryzen 3600 stomped this transfer speed, so you're spot on. I thought RAM buffered writes in ZFS?

  • @xxcr4ckzzxx840 (a year ago)

    @DigitalSpaceport Rodri is right here. Single core only for Windows Explorer copies. SMB Multichannel is a PAIN between Windows and Linux. If you ever get that to work reliably, make a dedicated video about it PLEASE!

    For the buffered writes: you are right here. ZFS buffers ~5s of writes in RAM and then moves them to the disks. That's btw the reason disk benchmarks on ZFS are useless and also why the copying speed fluctuates quite a bit on most spinning-rust setups. There is an option to tune it of course, so that you buffer exactly as much data in RAM as your drives can write before the RAM buffer is full again. If only I could remember the name...

    EDIT: If SMB Multichannel is a no-go, then try NFS v4 with a Linux system. It will perform substantially better, as SMB is single-threaded only too, iirc.

    EDIT2: openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-txg-timeout - Have a look at that. It's the cache thing above and might need tuning in your setup. BEWARE! This is a deep, deep rabbit hole!

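The module parameter that comment is reaching for is zfs_txg_timeout (with zfs_dirty_data_max capping how much dirty write data sits in RAM). A minimal read-only sketch, assuming a Linux/TrueNAS SCALE host with the OpenZFS module loaded:

```python
# Read the OpenZFS write-buffer tunables mentioned above (standard Linux sysfs paths).
# Read-only sketch; changing these requires root and a careful read of the OpenZFS docs.
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")

def read_param(name: str) -> str:
    """Return the current value of a ZFS module parameter."""
    return (PARAMS / name).read_text().strip()

# zfs_txg_timeout: max seconds of buffered async writes per transaction group (default 5)
print("zfs_txg_timeout    =", read_param("zfs_txg_timeout"))
# zfs_dirty_data_max: byte cap on dirty data held in RAM before writes are throttled
print("zfs_dirty_data_max =", read_param("zfs_dirty_data_max"))
```
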
  • @BloodyIron (9 months ago)

    Thanks for showing these examples!

  • @DigitalSpaceport (9 months ago)

    I try, man, I try. It's a lot of guesswork over here, so I appreciate the feedback on things you find valuable.

  • @BloodyIron (9 months ago)

    @DigitalSpaceport Well, I've more been watching a bunch of your eps for IT-specific infra, not the crypto stuff myself. So thanks again!

  • @arigornstrider (a year ago)

    That stack of 20TB drives! 🤤

  • @DigitalSpaceport (a year ago)

    #density

  • @TheSouthernMale (a year ago)

    He is just trying to ensure he keeps ahead of me in the pool 🤣🤣🤣🤣

  • @DigitalSpaceport (a year ago)

    That word you use there, "trying"... are you sure you're using it right?

  • @TheSouthernMale (a year ago)

    @DigitalSpaceport Of course I am, be careful or one day you will feel a strong wind as I pass you by. 😛 Maybe someday you and I will be number 1 and 2 in the pool, you of course being the latter. 😛😛

  • @TheSouthernMale (a year ago)

    @DigitalSpaceport You also seem to forget that while you are using compressed plots to stay ahead of me, I am not so far behind; just imagine that strong wind as I pass you by once I compress them. 🤣🌪🌪🌪🌪

  • @chrisumali9841 (a year ago)

    thanks for the demo and info, MegaUpload lol... Have a great day

  • @laughingvampire7555 (11 months ago)

    that sound of fans is just relaxing to me

  • @DigitalSpaceport (11 months ago)

    10 hour server white noise video hummm yeah

  • @pfeilspitze (9 months ago)

    19:38 "now we have this set up in a much more common-sense [...]" -- I'm a ZFS noob, but is 60 disks in a single Z2 really a good idea? Seems like the odds of losing 3/60 disks would be relatively high, particularly if they all come from one batch of returned drives. What if it was 6x (RaidZ2 | 10 wide) instead, say? Then it could stripe the reads and writes over all those vdevs too...
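
For rough numbers on the two layouts being compared in that comment, a quick sketch (hypothetical 60 x 20 TB drives, ignoring ZFS metadata and slop space):

```python
# Usable-capacity comparison: one 60-wide RAIDZ2 vs six 10-wide RAIDZ2 vdevs.
# Hypothetical drive count/size; real pools lose extra space to metadata and slop.
DRIVES, SIZE_TB = 60, 20

def usable_tb(vdevs: int, width: int, parity: int = 2) -> int:
    """Usable TB for `vdevs` RAIDZ vdevs, `width` drives each, `parity` parity drives per vdev."""
    return vdevs * (width - parity) * SIZE_TB

print("1 x 60-wide RAIDZ2:", usable_tb(1, 60), "TB usable")   # 1160 TB, one huge vdev
print("6 x 10-wide RAIDZ2:", usable_tb(6, 10), "TB usable")   # 960 TB, striped across 6 vdevs
```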

  • @christiandu7771 (a year ago)

    Thanks for your video, can you tell me where you buy these disks (not available in your shop)?

  • @DigitalSpaceport (a year ago)

    Hard drives do sell fast. We notify paid channel members first of new stock as soon as it is posted to the store. Members also receive a 3% or 5% discount code depending on which level they sign up for (a code is generated for each month which has unlimited use while the code is active.) kzread.infojoin Another way you can get notified is to subscribe to the e-mail list on shop.digitalspaceport.com. We send out an e-mail notification when hard drives come back in stock if there are any left after the channel members have been notified.

  • @JasonsLabVideos (a year ago)

    Nice setup!! Looking skookum, man!! Keep going!

  • @DigitalSpaceport (a year ago)

    Thanks, appreciate it 👍

  • @juaorok (a year ago)

    That's awesome. Right now I have a Supermicro SC836 16-bay with 7 x 12TB HDDs and 96GB of RAM. I'm upgrading little by little, saving money to upgrade my network.

  • @DigitalSpaceport (a year ago)

    I'm fairly unhappy with my 40Gb performance but attribute it to the EPYC not having a high core speed + SMB multichannel not working as I was hoping. The 10Gb NIC is an easy win on almost any machine and fully maxes out here, however.

  • @TheSasquatchjones (a year ago)

    My God! It's been a while. Great video!

  • @DigitalSpaceport (a year ago)

    Howdy 🤠

  • @Firesealb99 (a year ago)

    You had me at "caddies not needed"

  • @DigitalSpaceport (a year ago)

    It feels good to not need to screw in caddies for sure

  • @electronicparadiseonline2103 (a year ago)

    That's freakin' insane. You're out of your mind, DS. That's a ton of storage and you look like you just came home from the grocery store or something.

  • @trousersnake1486 (11 months ago)

    This is waaaay above my knowledge base of PC hardware, but it's impressive what I do understand. Looking to upgrade from my Ryzen 5900X X570 system to an EPYC system when finances allow.

  • @DigitalSpaceport (11 months ago)

    It's funny how things get out of control in life lol

  • @samishiikihaku (a year ago)

    Not sure of the differences, but Dell, before EMC, used the same enclosure style as well. PowerVault MD3060E and other varieties. Though the prices may be a bit different.

  • @DigitalSpaceport (a year ago)

    NetApp made both of these variants and there are not meaningful differences that I have seen at all provided you use the SAS controller for the management node. Just stickers and of course the Dell front bezel. I do have some of the Dell branded EMMs and they work out perfect.

  • @samishiikihaku (a year ago)

    @DigitalSpaceport Yep. Just wanted to show another option, in case people can't find the NetApp version.

  • @notmyname1486 (7 months ago)

    Just found this channel, but what is your use case for all of this?

  • @TVJAY (a year ago)

    I am new to your channel, is there any chance you can do an overall tour of your setup and how you got to where you are?

  • @DigitalSpaceport (a year ago)

    What a great idea. I'll get one in the works here. Welcome and thanks!

  • @Mruktz (a year ago)

    I have a humble homelab, but what would you even realistically need a petabyte storage system for?

  • @DigitalSpaceport (a year ago)

    video on that all soon

  • @TannerCDavis (a year ago)

    Aren't you limited to 6Gbps SAS cable connections? Do you have the multipath option on to get above 6? The speeds above 12Gbps are probably due to writing to RAM, which then slows down when writing to disk through the wire connections.

  • @DigitalSpaceport (a year ago)

    SAS2 is pretty decent if you have wide mode running. The DE6600 can do that on the first connected device but not on the second daisy-chained one (that I have been able to figure out, at least).
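
For context on what wide mode is worth, a back-of-envelope sketch of SAS2 wide-port throughput (assumes 6 Gbit/s per lane, 8b/10b encoding, 4 bonded lanes per SFF-8088 port, protocol overhead ignored):

```python
# Rough SAS2 x4 wide-port throughput estimate; assumed nominal values, not a measurement.
lanes, line_rate_gbps, encoding = 4, 6.0, 8 / 10   # SAS2: 6 Gbit/s lanes with 8b/10b coding

payload_gbps = lanes * line_rate_gbps * encoding    # ~19.2 Gbit/s of payload per wide port
print(f"SAS2 x4 wide port: ~{payload_gbps:.1f} Gbit/s (~{payload_gbps / 8:.1f} GB/s)")
```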

  • @mitchell1466 (a year ago)

    Hi, love your videos. I noticed when you were in the iDRAC you were on a Dell 720XD. I am looking at going to 10Gb for my setup and was wondering what 10Gb NIC you have installed?

  • @DigitalSpaceport (a year ago)

    Mellanox ConnectX-2

  • @mitchell1466 (a year ago)

    @DigitalSpaceport Hey, thanks for the reply. Did you have any difficulty getting it to work with Scale or did you just plug it in and TrueNAS picked it up?

  • @LampJustin (a year ago)

    You don't plan on using a single RZ2 in production, right? Right? One RZ2 shouldn't be much wider than 8 drives for optimal performance and redundancy. Recovering from a failure with a 60-drive Z2 would take a freaking long time, and chances are really high that other drives will go boom as well. It has to read all 1PiB after all...
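
To put a very rough number on that resilver concern, a sketch with assumed figures (one 20 TB drive rebuilt at a sustained 150 MB/s, which real pools may not reach under load):

```python
# Back-of-envelope resilver duration; drive size and sustained rebuild rate are assumptions.
drive_tb, rebuild_mb_s = 20, 150

hours = (drive_tb * 1e12) / (rebuild_mb_s * 1e6) / 3600
print(f"~{hours:.0f} hours just to rewrite one {drive_tb} TB drive at {rebuild_mb_s} MB/s")
# A 60-wide RAIDZ2 must also read parity stripes across the other 58 drives while degraded,
# which is the exposure window narrower vdevs (e.g. 6 x 10-wide) are meant to shrink.
```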

  • @DigitalSpaceport (a year ago)

    Nooooo. I have a second video filmed with the same array and it's much more common sense and safe in its layout.

  • @LampJustin (a year ago)

    @@DigitalSpaceport good good, the other setup would have been pure insanity :D

  • @apdewis (a month ago)

    The power bill on that setup must be monstrous. My more modest setup of R630s and 320s costs me enough. I am envious.

  • @DigitalSpaceport (a month ago)

    One would think, but my base rate is 10c/kWh so it isn't that bad. I also shed load dynamically via HA and Proxmox and don't run all the machines at once very often, unless needed. Usually about $250 total with the house for the electric bill. Cooling is consistently the largest user in the garage.

  • @apdewis (a month ago)

    I can only envy that per-kWh rate as well. Mine is somewhere around A$0.22. That said, it still ends up being a lot better value than AWS is at work...

  • @xtlmeth (a year ago)

    What SAS card and cables did you use to connect the JBOD to the server?

  • @DigitalSpaceport (a year ago)

    LSI 9207-8e and 8088-to-8088 cables. Linked in the description to the exact ones I bought.

  • @trillioncap (a year ago)

    Wow, incredible

  • @user-qs6ws3sw5p (a year ago)

    Quick question, can the DE6600 handle SATA HDDs or only SAS? Could I just buy a SAS HBA card and plug it into my Ubuntu server? Thanks

  • @DigitalSpaceport (a year ago)

    It handles SAS or SATA. Get like a 9207-8e vs say a 9205 or 9200

  • @user-qs6ws3sw5p (a year ago)

    @DigitalSpaceport Thanks for the reply. Can one controller handle all 60 drives with an acceptable response time? (Chia farming ;-) )

  • @linmal2242 (11 months ago)

    Does this array run 24/7 or is it powered down most of the time? What is your power bill like?

  • @DigitalSpaceport (10 months ago)

    Runs 24/7 and is around 2.2 amps at 245V, so 540W per JBOD. Per disk that is 9 watts, which is among the best per-tray efficiency of the JBODs I have measured.
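
The arithmetic behind those figures, for reference (values taken from the reply above):

```python
# Sanity check of the quoted JBOD power numbers: 2.2 A at 245 V across 60 drive slots.
amps, volts, drives = 2.2, 245, 60

watts = amps * volts    # ~539 W for the whole shelf
print(f"{watts:.0f} W per JBOD, ~{watts / drives:.1f} W per drive slot")
```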

  • @jonathan.sullivan (a year ago)

    I'm interested in seeing what you can get for performance with multiple vdevs. Tom Lawrence has a good breakdown video on how many disks per vdev one should logically have for performance. 1 RAIDZ2 vdev across 60 disks definitely isn't it, but it's fun to see. #subscribed

  • @DigitalSpaceport (a year ago)

    For sure, it was just to show the size lol, but I am working on a follow-up here. Looks like I have a solution for the pathetic SMB transfer speeds also.

  • @watb8689 (a year ago)

    You have some insane homelab. How is the energy bill coming?

  • @DigitalSpaceport (a year ago)

    Runs $225-250 per month. Not really that bad.

  • @maximloginov (a year ago)

    Hello. What kind of RAID are you finally using for plotting? Stripe? RAIDZ2?

  • @DigitalSpaceport (a year ago)

    RAID0 of 12 disks. So far none have blown out and the performance is awesome. Worst case one blows out and I need to replot it, not a huge deal. I can do that in a few days.

  • @user-ve9ju8zg1q (a year ago)

    Hello, is it possible to update the ESM/EMM firmware?

  • @thumbnailqualityassurance7853 (a year ago)

    How does TrueNAS know how to light the disk failure LED on the NetApp if a disk fails?

  • @ernestoditerribile (a year ago)

    It's not TrueNAS doing that. It's the HBA/RAID/disk controller of the NetApp checking the S.M.A.R.T. status.

  • @fisherbu (a year ago)

    Nice job! How do you make a plot only 74GB?

  • @DigitalSpaceport (a year ago)

    Gotta create it with Gigahorse or Bladebit CUDA. Bladebit isn't farmable yet, but Gigahorse is.

  • @visheshgupta9100 (11 months ago)

    I was wondering if you could do a video on TrueNAS Scale with multiple nodes. There is no video on YouTube that discusses this in detail. In layman's terms, I would like to deploy 3 different servers and control them all from one place. My question is, do we need to install TrueNAS Scale on every server? Or is it that we have 1 TrueNAS Scale server, and the others are TrueNAS Core?

  • @DigitalSpaceport (11 months ago)

    If you need to use TrueCommand (the software that manages multiple nodes) I think you need to contact them about a license. I have heard it's affordable but I don't really know that. You don't mix and match Core + Scale really.

  • @visheshgupta9100 (11 months ago)

    @@DigitalSpaceport Thank you, that is exactly what I was looking for.

  • @visheshgupta9100 (11 months ago)

    @DigitalSpaceport I am planning on building a homelab, and I was thinking of having multiple servers for different kinds of media. For instance, one for game storage, the other for critical data & backup, and so on and so forth. So essentially, I won't need all the NAS systems running at all times. I want to be able to power on just the system I need, so it would work as a standalone system, and also have the ability to control all the systems from one place so that I don't have to configure users/permissions/shares for each and every system individually.

  • @visheshgupta9100 (11 months ago)

    @DigitalSpaceport I was considering TrueNAS at first, but now I am kind of leaning towards UnRaid due to their recent implementation of ZFS. So essentially I could use UnRaid, known for its parity, or use ZFS, known for its speed and reliability, or use both at the same time. In terms of flexibility to mix and match drives, ease of use, and low hardware requirements, I believe UnRaid has the upper hand. What are your thoughts?

  • @MHM4V3R1CK (11 months ago)

    @visheshgupta9100 I think TrueCommand is free now. TrueNAS Scale is the Linux/Debian flavor of TrueNAS and Core is the older but very stable BSD flavor, btw.

  • @gustcol1 (a year ago)

    I have the same problem with my 100Gbps network..

  • @ewenchan1239 (a year ago)

    Unless you're writing to an array of enterprise grade NVMe U.2 SSDs, 100 Gbps for storage for a home lab user -- you'll never be able to hit more than a few percent of the 100 Gbps line speed/capacity. (I have 100 Gbps as well (Infiniband).) Even if you enable NFSoRDMA, if you're going to be using spinning rust, it's not going to make THAT much of a difference. (The highest I've been able to momentarily get is about 32 Gbps kernel/cached writes. More often than not, my system hovers around 16 Gbps nominal max.)

  • @blueprint4221 (a year ago)

    watch that bend radius, sir

  • @DigitalSpaceport (a year ago)

    so about that.... Needed this comment before the video lol

  • @carbongrip2108 (a year ago)

    I hope you enabled Jumbo Frames on all your NICs...

  • @DigitalSpaceport (a year ago)

    Yes I do. I have a home-side segment that bridges to these devices, but the big gear is all on jumbo frames.

  • @philippemiller4740 (4 months ago)

    60-wide RAIDZ2 doesn't make much more sense haha. Try 10-wide RAIDZ2 x 6, that would make much more sense, no? Maybe you're limited by SMB, have you tried using iSCSI or NFS?

  • @rfitzgerald2004 (7 months ago)

    On your shop, do you ship to the UK?

  • @DigitalSpaceport (7 months ago)

    Yes just make sure to add your phone number on the form. Customs will stop the shipment w/o that.

  • @TheDropForged (6 months ago)

    Serious question, I see people with servers equivalent to enterprise. Do people really need that size?

  • @TheDropForged (6 months ago)

    Ah, you do crypto stuff

  • @DigitalSpaceport (6 months ago)

    Yeah, I had a smaller half rack prior, which is more than enough nowadays for most homelabbers.

  • @ewenchan1239 (a year ago)

    I couldn't find the comment where you asked me about SMB Direct in Windows 11, but it does look like that if you go to Control Panel -> Programs and Features -> Turn Windows Features On and Off -> SMB Direct, you can enable it in Windows 11. Just thought that I would pass that along to you.

  • @user-ve9ju8zg1q (a year ago)

    How do you solve the problem that the indicator light of the SATA hard disk is not on?

  • @DigitalSpaceport (a year ago)

    The SATA light is technically on, it's just very, very faint. It does not show well on the JBOD with the camera. If it is important to you to have the full blinking light, then a SATA-to-SAS interposer would be needed.

  • @StenIsaksson (a year ago)

    I heard something about Windows not being supported with EPYC CPUs. Wrong, apparently.

  • @DigitalSpaceport (11 months ago)

    Yeah, so just use the Ryzen Master drivers and it works great. I was discouraged initially also by what I had been seeing others say.

  • @hescominsoon (a year ago)

    Try it over iSCSI instead of SMB. Also, Scale does not perform as well as Core does across the board in my own testing. Unless you want to run VMs, in which case Scale is the way to go. If you want only storage, then Core is your best bet.

  • @DigitalSpaceport (a year ago)

    I've had a lot of folks tell me to go Core for performance and I'm going to check it out. I use Proxmox for VMs also, so no reason really. I will do a video on iSCSI I think, after I learn some more about it, and give a "noob's perspective" on iSCSI. I used it only once in the past via a Dell MD1000i and it was a painful thing as a result. Time for another round!

  • @hescominsoon (a year ago)

    @DigitalSpaceport iSCSI in Core is easy... I do not use Proxmox as my hypervisor so I don't know about setting up iSCSI on that end.

  • @MyersJ2Original (11 months ago)

    Are those drives you sell used or new?

  • @DigitalSpaceport (11 months ago)

    Refurbished from Seagate, but they don't have power-on hours like used pulls do.

  • @contaxHH (10 months ago)

    Will the floor collapse?

  • @DigitalSpaceport (10 months ago)

    No, but I did static load calculations on the 4" 6-slump reinforced slab in the garage and decided to put plate-steel squares under the risers to distribute the load slightly better. So far zero issues. It should be okay even fully loaded, which it's not. Calculated to 800, 1200 and 2000 lbs per rack from left to right, facing from the front. Good question. Also, don't put racks on wooden subflooring if they approach 1000 lbs without additional load-deflection bracing.

  • @GreatVomitto (a month ago)

    You know that your RAID is fast when you are limited by the CPU.

  • @DigitalSpaceport (a month ago)

    A never-ending problem, except I got some RDMA going now!

  • @capnrob97 (9 months ago)

    For a home lab, how could you even begin to use a petabyte worth of storage?

  • @Murr808 (a year ago)

    What do you do about Windows updates?

  • @DigitalSpaceport (a year ago)

    Suffer through them eventually. They only allow you to postpone so much, unfortunately. I try to do them when behind KB.

  • @jondonnelly4831 (a year ago)

    You can block it by adding some entries to the hosts file.

  • @sm8081 (a year ago)

    Envy….Bad feeling, I know…😅

  • @skyhawk21 (a year ago)

    Need help, got a WHS server with 50TB of drives, need a cheap good-quality 2.5Gb switch, maybe with a 10Gb port, and also a cheap quality 10Gb card for the server??? 1Gb doesn't cut it.

  • @DigitalSpaceport (a year ago)

    Skip the 2.5G and just roll out 10Gbit. You will be much happier. This MikroTik switch is great to start with: geni.us/goNi9C

  • @MelroyvandenBerg (3 hours ago)

    Cat door. You get it? 😅

  • @bokami3445 (2 months ago)

    OMG! How long would a scrub take on this monster!

  • @trininox (3 months ago)

    How's the electric bill?

  • @DigitalSpaceport (3 months ago)

    $250/mo

  • @marcelovictor3031 (a year ago)

    How many 20TB HDDs do you have?

  • @DigitalSpaceport (a year ago)

    The real question is how many 22TB do I have 😜

  • @seelook1 (10 months ago)

    I'm going to sound like a kid. THIS IS SO COOL! I WANT IT. My wife will kill me if I ever add something like this lol. 😅

  • @DigitalSpaceport (10 months ago)

    You have 1 life. Live your best one. (advice that gets folks divorced hahaha)

  • @TheSouthernMale (a year ago)

    Considering I have found 8 blocks so far this month and lost a lot of Chia, I am going to try solo farming for a while and see how it goes. Of course, with my luck I will stop winning blocks now. LOL

  • @DigitalSpaceport (a year ago)

    LMK! Chia cat 🔮

  • @jimmerin5034 (a year ago)

    5 axis stabilization makin me dizzy ahaha

  • @DigitalSpaceport (a year ago)

    I'll try to use less 🚁

  • @ewenchan1239 (a year ago)

    The other problem that you might be running into: it looks like, according to wiki, your Intel Xeon E5-2667 v4s are only 8-core/16-thread processors each, which means that having to deal with/manage a 60-wide RAIDZ2 ZFS pool is going to tax the processor quite heavily when it is trying to manage that many drives with only 16 cores/32 threads total (possibly). Keep an eye out on your %iowait.

  • @DigitalSpaceport (a year ago)

    I have part 2 filmed here of me trying out other, more logical configurations than 60-wide. Out this week!

  • @ewenchan1239 (a year ago)

    @@DigitalSpaceport I look forward to watching that video. (The reason why I mention to keep an eye out on your %iowait is because my server has 36 drives, 32 of which are handled by ZFS under Proxmox, and under heavy disk load tasks, the %iowait will jump/start to climb until those tasks are finished, and then the %iowait will fall back down.)

  • @pudseugenio4118 (a year ago)

    looks like a gold bar from a cave

  • @DigitalSpaceport (a year ago)

    YARRRR 🏴‍☠️🦜

  • @Alan.livingston (a year ago)

    That’s a lot of adult material you seem to be hoarding there, sir.

  • @DigitalSpaceport (a year ago)

    The backup's backup has backups

  • @nexovec (10 months ago)

    Uhh 60 drives in one vdev :D this video is so wrong :D Good job.

  • @franciscooteiza (6 months ago)

    Not if you are using dRAID (not the case in this video).

  • @leo_craft1 (7 months ago)

    They cost 1000€+

  • @TheSouthernMale (a year ago)

    Hey, if you see me leave the pool it's because I am losing money on it. I calculated that if I was solo farming I would have earned 4.5 additional XCH in the last 10 days. I will wait and see if the winning trend continues.

  • @DigitalSpaceport (a year ago)

    I have gone that path indeed sir. GL to you, if you show back up I'll know why 😂

  • @TheSouthernMale (a year ago)

    @DigitalSpaceport I am not saying I will leave yet, but if I do leave and then come back it will only be to jump ahead of you in the pool. 😛

  • @DennisJlg-cu1vw (a year ago)

    Hello, my tip: do not use Win 10 or 11 but a Windows Server. Why? Alex from The Geek Freak found out that there is a limit on the network in normal systems and it is not active on servers. For info.

  • @DigitalSpaceport (a year ago)

    This is easy to test out! Will toss Server 2022 on and see what kind of perf benefits I can get.

  • @linegik (a year ago)

    Chia dump

  • @Paberu85 (a year ago)

    I wonder why somebody would do such a thing to his own house, and wallet?...

  • @ajandruzzi (a year ago)

    I assume you’re looking for an answer better than “because he can”.

  • @DigitalSpaceport (a year ago)

    you and me both

  • @jondonnelly4831 (a year ago)

    Why do you need a petabyte as a home user / small office user?

  • @DigitalSpaceport (a year ago)

    I have a lot of ISOs

  • @rsqq8 (a year ago)

    This man ISO’s 😂

  • @FakeName39 (a year ago)

    This dude is running Amazon from his home

  • @charliebrown1947 (10 months ago)

    You need a higher MTU. You're breaking these files into too many 1500-byte packets (think DDoS attack). Btw, I hope you didn't keep that 60-wide RAIDZ2 configuration... that's silly. Oh, you also have your ARC set to use 50% of your RAM (the default), so half your RAM is literally doing absolutely nothing at all. Change the default to allow it to use 90% or even more.
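
Two quick sketches of the points in that comment, with assumed numbers (a 100 GB transfer, headers and segmentation offload ignored; the ARC cap is the zfs_arc_max module parameter and the 256 GB RAM figure is hypothetical):

```python
# Packet-count difference between a 1500-byte MTU and 9000-byte jumbo frames.
file_bytes = 100e9                      # hypothetical 100 GB copy
for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{file_bytes / mtu:,.0f} packets")

# Raising the ARC cap means setting zfs_arc_max (bytes); 0.9 mirrors the "90%" suggestion.
ram_gb = 256                            # hypothetical amount of RAM in the NAS
print("zfs_arc_max for ~90% of RAM:", int(ram_gb * 0.9 * 2**30), "bytes")
```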

  • @InSaiyan-Shinobi (a year ago)

    Is this at your house? Wtf?

  • @jj-icejoe6642 (a year ago)

    All that wasted money…

  • @16VScirockstar (a year ago)

    The throughput is constrained by the PCIe 3.x bottleneck: de.wikipedia.org/wiki/PCI_Express

  • @DigitalSpaceport (a year ago)

    I think SAS2 is bottlenecking even before PCIe3 here. I have another video in the works that will check into this, but pushing the limits of my spindles is a topic I am diving more and more into.
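
A quick check on the PCIe 3.x claim versus a single 40Gb link, using rounded nominal figures:

```python
# PCIe 3.0 x8 (typical 40GbE NIC slot) vs one 40 Gbit/s link; rounded, assumed values.
pcie3_per_lane_gbs = 0.985              # ~GB/s usable per PCIe 3.0 lane (128b/130b coding)
pcie3_x8_gbs = pcie3_per_lane_gbs * 8   # ~7.9 GB/s
forty_gbe_gbs = 40 / 8                  # 5.0 GB/s line rate

print(f"PCIe 3.0 x8: ~{pcie3_x8_gbs:.1f} GB/s vs 40GbE: {forty_gbe_gbs:.1f} GB/s")
# A 3.0 x8 slot still has headroom over one 40Gb link, which is why the reply above points
# at SAS2 (and CPU/SMB limits) before PCIe 3.x itself.
```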

  • @16VScirockstar (a year ago)

    @DigitalSpaceport Nice! Just out of curiosity, what do you need this setup for? It must be immensely expensive. As a former SAN admin, I can tell you, striping over more disks doesn't give you linearly more performance. I remember the threshold, 17 years ago, was at around 10 disks per stripeset.
