Yes, It’s Real: PCI Express x32
Science & Technology
Check out the MSI MPG GUNGNIR 300R AIRFLOW at lmg.gg/zCGkN
You've heard of PCI Express x16, but did you know there's such a thing as x32?
Leave a reply with your requests for future episodes.
► GET MERCH: lttstore.com
► GET EXCLUSIVE CONTENT ON FLOATPLANE: lmg.gg/lttfloatplane
► SPONSORS, AFFILIATES, AND PARTNERS: lmg.gg/partners
FOLLOW US ELSEWHERE
---------------------------------------------------
Twitter: / linustech
Facebook: / linustech
Instagram: / linustech
TikTok: / linustech
Twitch: / linustech
Comments: 490
Scooby Doo and the gang unmasking this ghost as SLI/Crossfire
@tech-wondo4273
15 days ago
Ikr? idk why it had to be a whole vid
@BlueEyedVibeChecker
15 days ago
@tech-wondo4273 "Money! Ak yakyakyakyak"
@prawny12009
15 days ago
Aren't those limited to x8 x8?
@SterkeYerke5555
15 days ago
@@prawny12009 Not necessarily. It depends on your motherboard
@brovid-19
15 days ago
I award you seven ahyuks and a guffaw.
x16 is all you’ll ever need - bill gates, probably
@buff9267
15 days ago
turns out bill is lame
@darrengreen7906
15 days ago
hahaha
@Nightykk
15 days ago
Based on a quote he never said - possibly, probably.
@lexecomplexe4083
15 days ago
PCI didn't even exist yet, let alone PCIe.
@chovekb
15 days ago
Sure, like 16 x x16 it's like a 16 core CPU LOOOL
PCI-E does support x32 single-link devices, even if it does not use a single socket; it is specified in the PCI Express capability. There is also x12.
@steelwolf411
15 days ago
Also x24 in some high end stuff.
@shanent5793
15 days ago
No one ever used it, hence the removal from the latest revision
@steelwolf411
15 days ago
@@shanent5793 It was used in Cisco UCS for some VICs as well as other things. Also I believe it was used by IBM for specific cryptography accelerators.
@cameramaker
15 days ago
@@steelwolf411 there is no x24 in the spec. Some PHI MXM cards claimed x24 but it was running in either 2x12 / 3x8 / 6x4 mode.
@bootchoo96
14 days ago
I'm just waiting on x64
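For the curious, the maximum width a device advertises lives in the Link Capabilities register of its PCI Express capability structure, in bits 9:4. A minimal sketch of decoding that field from a raw register value (the example values are made up):

```python
# Decode the Maximum Link Width field from a PCIe Link Capabilities
# register value. Bits 9:4 encode the width directly: 1, 2, 4, 8, 16,
# plus 12 and 32 in revisions before 6.0.
def max_link_width(link_cap: int) -> int:
    return (link_cap >> 4) & 0x3F

# A value whose width field encodes x32
print(max_link_width(32 << 4))  # -> 32
# A value whose width field is 0b001000, i.e. x8
print(max_link_width(0x0083))   # -> 8
```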
Thank you Mr. Handsome Mustache man
@drummerdoingstuff5020
15 days ago
Grinder called…Jk😂
@realfoggy
15 days ago
His wife would agree
@ImMadHD
15 days ago
He really is so cute 🥰
@Blox117
15 days ago
i was thinking of the other guy when you said mustache man
@Fracomusica
15 days ago
Lmao
This just sounds like SLI/Crossfire with extra steps
@Arctic_silverstreak
15 days ago
Well, SLI is used for synchronizing GPUs, while this is just a fancy way to aggregate high-speed network cards
@StrokeMahEgo
15 days ago
Don't forget nvlink haha
Riley Yoda needs to be a regular thing.
Back in the day X32 meant something different to us entry level audio production folks 😂
@ToastyMozart
14 days ago
And Sega fans.
@SuperS05
12 days ago
I still use a Behringer X32 ♥️
x32 and beyond are very common in ultra high end modular servers. If you look at the server manufacturer Trenton Systems, they have massive PCI-E array capability. Of course it's still PCI-E, a migration from PCI, and that has its bottlenecks, but when you want parallelism they do it very well. (Not affiliated; just impressed)
@shanent5793
15 days ago
You are mistaken, there has never been an implementation of x32, which is why it was deleted from PCIe 6.0
@sakaraist
15 days ago
@@shanent5793 Weird, then why do I have an x32 NIC on my desk? It just wasn't used in consumer boards; it very much exists in the commercial space. You often find them as riser cards; x48 is the highest I've personally dealt with. I've also got an x32 FPGA dev kit sitting at my bench.
@shanent5793
14 days ago
@@sakaraist If they were referring to the total number of lanes, then this wouldn't be noteworthy because RYZEN Threadripper consumer boards have had more than 32 lanes for several years already, but they're never referred to as PCIe x32 devices. Riser cards are just glue, not end devices and are out of scope. In the case of NICs, they may have two x16 ports that can be connected to different sockets in a system to save inter-socket bandwidth, but PCIe will still treat them as two separate devices. FPGAs could of course be programmed to implement PCIe x32, but if you want to use the hardened PCIe IP it will still be x16. If your devices have actually negotiated a PCIe x32 link at the hardware level, I would love to know the part numbers because even PCI-SIG doesn't know about them and they're definitely not off-the-shelf
@jnharton
10 days ago
@@shanent5793 This needs more upvotes, honestly. Just because the slot can carry 32 lanes doesn't mean there must be any true 32-lane devices. Makes perfect sense that you might make a single board that is a carrier for more than one device and use a single slot. Especially in an industrial context, where one larger slot might be better than a bunch of extra slots and little cards everywhere. Kind of a throwback to the days of large card edge connectors for parallel buses, only using each signal line as a separate communications lane.
@robertmitchell5019
10 days ago
@@shanent5793 Wow, did you watch the same video I did? At 3:30 they show Nvidia cards using x32 (off the shelf, BTW). They call it InfiniBand because NVIDIA. And yes, I know InfiniBand is the communication standard that uses the PCIe x32 specs, just like NVMe is the communication standard that uses PCIe x4.
A pretty good slot to put in your Virtua fighter cartridge
@thepinktreeclub
15 days ago
ha! good one
1:30 the binary says: "Robert was herrr" 🤓
@robertm1112
15 days ago
nice
@DodgerX
15 days ago
Hey robert @@robertm1112
@Trident_Euclid
15 days ago
🤓
@carabooseOG
14 days ago
How do you have that much free time?
@CoopersHyper
14 days ago
@@carabooseOG i dont lol, i just put it in a binary to text translator lol
0:18 You guys worked really hard on this shot; you probably should’ve stayed on it longer. 😂
SLI and crossfire failed back then, but with modern high speed interconnect tech, I think we can bring it back.
@zozodj2r
15 days ago
When it comes to gaming, it wasn't about the interconnect. It was about the sync between the two which had frame lag.
@christophermullins7163
15 days ago
SLI or Crossfire will never make sense. It didn't back then, as it's difficult to get working, much less working smoothly. Best case is to have all chips and memory as close to one another as physically possible. Considering we regularly see 30-70% uplift in GPUs just 1.5 years later, you're better off throwing out your old flagship and getting the new one than trying to mate two together. It will use more than 2x power and deliver much less than 2x performance. I get that this was probably mostly a joke, but I'm just here to bring the real world to the discussion.
@illustriouschin
15 days ago
Marketing just needs a way to spin it and we'll be buying 2-4 cards again for no reason.
@WilliamNeacy
15 days ago
Yes, I'm just not happy buying one $1000+ GPU. I want to have to buy multiple $1000+ GPU's!
@shanthoshravi5073
15 days ago
Nvidia would much rather you buy a 1200 dollar 4080 than two 300 dollar 4060s
That just sounds like SLI with extra steps.
@TheHammerGuy94
15 days ago
Without the proprietary connector
@eliadbu
15 days ago
why do you people keep comparing it to SLI? It has nothing to do with SLI. It is more like link aggregation.
@TheHammerGuy94
11 days ago
@@eliadbu SLI needs both the PCIe lanes and an extra SLI bridge to enable faster data transfer between the cards. But that was from a time when PCIe wasn't fast enough for Nvidia's standards. Now, with PCIe 4 and 5 being as fast as they are, we mostly don't need the SLI bridge anymore. Keyword: mostly. But in simpler terms, x32 lanes is more like using RAID 0 on storage.
@eliadbu
11 days ago
@@TheHammerGuy94 In SLI, PCIe is used to communicate with both devices at the same time as they work in unison to render alternating frames. This is more like having a second card whose whole purpose is to pass communication to the main card, so it is not like RAID 0, where both devices are part of an array and are the same. We don't need the SLI bridge anymore because SLI is pretty much a dead technology.
So... This is just SLI?
@Dr2Chainz
15 days ago
Had the same thought hah!
@sussteve226
15 days ago
No, it is not
@raycert07
15 days ago
Sli for non gpus
@Arctic_silverstreak
15 days ago
Not really, just a fancy name for link aggregation for, mostly, network cards
@ManuFortis
15 days ago
Kind of, but not really. It is using similar methods, but it's not exactly the same. This is probably closer to what is done on AMD's workstation cards, where you can attach a display sync module between multiple workstation GPUs to output a single monitor signal, with something like the AMD FirePro™ S400 Sync Module. (Nvidia has their own version, but I don't know the details.) If you look at the card shown by Riley in the video, you'll see that cable connecting them. I'm not sure of its exact connector specs, but it will be somewhat similar in nature to the JU6001 connector found on the AMD WX series cards. Sometimes it's populated with an actual socket/port, sometimes not. Essentially, if I understand correctly, instead of the cards sharing the workload between them, they are all doing their own work, or perhaps shared work in some cases, and outputting it all to the same monitor. It's a subtle but important difference: SLI/Crossfire is typically used for splitting workloads between GPUs to get a better end result, whereas display sync (as I will call it for now) is more about combining separate or even shared workloads into a single tangible visual result. That sync card is effectively doing what Riley explained about the x32 setup and the asynchronous data streams typical of PCI, compared to when they are... well... synced. Maybe not the world's best explanation, but I hope it helps.
MSI tech support are the worst in the industry. You know what they told me? This is verbatim: “We don’t troubleshoot incompatibility”
@vickeythegamer7527
12 days ago
😂
@maxstr
7 days ago
Really?? In the past, MSI has always had the best warranty and repair service. I had a video card that was displaying weird corrupt garbage after like 6 months, and they replaced it at no cost. I had an MSI laptop that I smashed the screen by shutting the lid on a pencil, and MSI replaced the screen under their one-time replacement warranty. But that was years ago, so I'm guessing things have changed?
@simongreen9862
7 days ago
I don't know; my 2017 AM4 motherboard is still getting BIOS updates as of January 2024, which was necessary for me to swap the original 1080Ti with a new 4070 I got last month.
@5urg3x
6 days ago
@@simongreen9862Can we take a moment and ask the question why the fuck isn’t UEFI/BIOS firmware open source? Really should be.
@simongreen9862
6 days ago
@@5urg3x I agree with you there!
... You telling me I don't need it? I'm an American. I don't need multiple 64-thread EPYC servers. But I got 'em, and they got 128 PCIe lanes each!
The Mellanox NICs also allow them to be connected to PCIe lanes from both CPUs. It levels out the network latency by not requiring ½ of the traffic to jump an interprocessor link to get to the NIC.
Simplifying complex tech stuff like PCI Express x32 - just brilliant. Keep up the informative and clear tech explanations.
GPUs are getting so wide these days, they might as well support PCIe x32
Now let's wait for x64
@fujinshu
13 days ago
And then maybe x86?
In FPGAs it's fairly common to see x32. Microsoft had a board that allowed you to control two FPGAs with these lanes; the trick was that even though it was an x32, it was actually emulating the connection between two x16 links by readdressing the lanes.
A 32 lane PCI bus, awesome! GPU card makers can use it for their premium cards...and only use four lanes. Awesome....
@jamegumb7298
15 days ago
Each time someone buys any current Intel 1700 board and adds an SSD, it gets bumped down to ×8 anyway, leaving 4 of the very few lanes you have useless. AMD has the same thing; in practice, expect a card to always run ×8. Then any bench you see where they compare ×8 to ×16 shows minimal to no difference unless you go down a generation. Just make the GPU link on desktop ×8 by default and make room for 2 more NVMe slots.
@commanderoof4578
15 days ago
@@jamegumb7298 AMD does not have the same thing. Unless it's a dogshit motherboard, you can have 2 NVMe slots at full speed and an x16 slot at the same time. It's when you go past 2 that you run into issues, as you're either adding multiple drives via the chipset or you start stealing lanes. Without any performance loss from conflicts, you can have these configurations on AM5: 2x NVMe + 1 x16, or 4x NVMe + 1 x16.
@DeerJerky
15 days ago
@@jamegumb7298 eh, on AM4 I have 2 NVMes, one on gen 4 and the other on gen 3. My GPU is still on 16 gen 4 lanes, and iirc AM5 only increased the lane count
Doesn't the most recent Mac Pro have a double PCIe x16 link to their custom AMD GPU?
LTT is like the MCU where this video is just setting up the next home server episode.
This video reminded me of SLI. The physical setup looks identical, you got two devices occupying two PCI Express x16 slots and have an extra cable/connection between the devices.
Cisco iirc has a PCI-E x24 for their MLOM + NIC (they may call it a VIC) on some of their stuff.
"I'm sure some of you are already thinking of ways you can justify your purchase." Wow, calling me out just like that?
lmao 2 4090s on one card would be absolutely insane
@PixyEm
15 days ago
Nvidia Titan Z 2024 Edition
@benwu7980
15 days ago
There was a time when stuff like that did get made. I bought a Dell that was meant to have a 7950 GX2, but it arrived with an ATI card.
@jondonnelly4831
13 days ago
The cooling would be problematic; it would need a 360mm radiator, maybe a 420mm. Though I guess if you can afford one, the cooling and power costs won't matter! The big problem with SLI is that memory becomes a bottleneck. The two cards' VRAM doesn't add together, so 2 x 24 is still just 24. It would need something like 2 x 48, which would be insanely expensive.
In the early 90s even simple sound cards needed the ISA slot... and were long and beefy.
The last Intel Mac Pro had two x16 slots combined for dual-GPU AMD cards. I guess technically that's x32.
@cameramaker
15 days ago
It's not. Many servers have a long slot for holding riser boards (e.g. 3 cards in a 2U rack mount server), but those are NOT SINGLE DEVICE slots. Same as a dual x16 for a dual GPU is not a single PCIe device.
Just a thought 💭🤔: if you want a super small SFF build, the motherboard usually has only 1 PCI Express slot plus some NVMe slots. So if you had one x32 PCI Express slot, you could have 1 card that carries the GPU, SSDs, a dedicated NPU, a 10Gb NIC, etc., all in one expansion card, especially if you used one side of the PCB for the GPU and the other side for the NPU, SSDs, NIC and whatever other hardware you want. It would make for very capable SFF builds, or very tidy full-size builds with only the motherboard, CPU, cooler, RAM and one expansion card that's a mix of all kinds of hardware. So as much as we don't need x32 PCI Express lanes for general hardware, the idea and the x32 slot could definitely be put to use.
Reminds me of those old school gargantuan 16bit ISA slots used to overcome speed limits.
x64 just needs 5 more to work properly...😏
In the early 90s, I had a custom Orchid super board with an Orchid Fahrenheit 1280. It's a 32-bit VESA Local Bus. All my friends were jealous of its gaming performance. But it didn't get accepted mainstream.
3:58 I like that the connector is labeled as "black cable" even though it's not black.
I've seen server motherboards with x24 physical slots that just connect to existing PCIe switches.
My god that segue reminded me of STEFON in SNL... The MSI MPG Gungnir 4000 Battleflow Monster Extreme has EVERYTHING!
0:00 woah MSI Z68A-GD80 that was my first ever gaming motherboard that baby Linus is showing.
Wait a minute. That binary looks suspicious, all of it starting with 01 or 011. It's ASCII! Quickly, someone translate it! Edit: I've noticed some binary as 01000000, which isn't a letter, but it is 1 away from capital A. But a huge majority of the stuff looks like readable letters.
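The giveaway the commenter spotted is that ASCII letters fall in 0x41-0x7A, so their binary starts with 010 or 011. A quick decoding sketch (with a made-up bit string, not the one from the video):

```python
def bits_to_text(bits: str) -> str:
    # Split on whitespace, parse each 8-bit group as base 2, map to a character
    return "".join(chr(int(group, 2)) for group in bits.split())

print(bits_to_text("01001000 01101001 00100001"))  # -> Hi!
```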
"Not fast enough? Just add more lane!"
I would love it if using one PCIe slot didn't disable another. I don't think we're ready for the jump to x32 until this bandwidth limitation on lanes is addressed.
@rightwingsafetysquad9872
15 days ago
That limitation doesn't exist in the products that use x32. Desktop CPUs may only have 8-24 lanes, but server chips have hundreds.
@alexturnbackthearmy1907
15 days ago
@@rightwingsafetysquad9872 True. Old server processors have WAY more PCIe lanes than even top-of-the-line modern desktop processors (PCIe 3.0, though), and if that isn't enough, just get yourself a dual-CPU system.
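On Linux you can see what width a slot actually negotiated with `lspci -vv`, in the `LnkSta` line. A small sketch that parses such a line; the sample string below is illustrative:

```python
import re

def link_width(lnksta_line: str):
    # Pull the negotiated width (e.g. 8 from "Width x8") out of an lspci LnkSta line
    match = re.search(r"Width x(\d+)", lnksta_line)
    return int(match.group(1)) if match else None

sample = "LnkSta: Speed 16GT/s, Width x8 (downgraded)"
print(link_width(sample))  # -> 8
```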
I glanced at the thumbnail and thought it was about a new longer x series barrel for p320 for some reason.
I KNEW IT, i was sure I've seen an oversize PCIe somewhere!
I'm still surprised optical connections aren't used yet (again? (S/PDIF)). I'm expecting USB (or whatever Apple calls it next) to have a fibre down the middle in that tiny blank part of the C connector at some point. Bend-insensitive SMOF is cheap enough now that it's plausible at scale. SFPs are getting there too.
The ending reference to Linus got me dead 🤣🤣
I am glad that we got SLI PCIE before GTA6.
The bits need to arrive on all the lanes at the same time. Routing 32 x 2 (each lane is a differential pair), so 64 traces, all to the same chip is very hard; all traces must have nearly the same length, or there will be delay penalties.
I feel the bandwidth could be used by an SFF with some sort of breakout expansion slots.
wonder how many years it'll be before PCIE X16 is phased out..... remember how long AGP slots lasted for...... only time will tell..... and who knows what it'll be replaced by....
@Arctic_silverstreak
15 days ago
I mean, physically the connector may be phased out, but I think it's very unlikely that PCIe itself will be phased out too
@chrisbaker8533
15 days ago
AGP only lasted for about 13 years, 1997 to 2010. PCIe is currently at 22 years, launched in 2002. As far as when it might get phased out: whenever it stops being able to handle the data we need to transfer. Maybe 10 to 15 years on the current trajectory. Or it may wind up like USB and never die. lol
@sakaraist
15 days ago
@@chrisbaker8533 On desktops possibly, However PCIE is a core component of a metric shitload of embedded systems and fpga dev boards.
Thanks for the video!
that was the best quickie I've had in years.
The last time I saw a product with an x and a 32 next to it was in 1994. That didn't go well! Here is hoping this is not a gimmicky in-between product and is an actual leap into the future. #SEGA #32x
1:47 The way you explain that sounds a lot like SLI graphics cards.
*Looks over at the EDSFF 4C+ slot, a PCIe x32 slot in wide use in server PCIe cards.* I guess we won't tell him about you.
Miss my A8N32-SLI; the FSB would post above 340. Nuts board, handled anything I tossed at it back then. Will be missed (oh, it's in a box still, memories)
With 3kg+ graphics cards a longer slot would be a good idea, as long as it can still accept x16. Extra power too; maybe a 4050 could work without cables, and no card droop.
Of course I knew, the server in my basement has two of them... although it just uses them for risers with different slot setups.
Would a dual-GPU card benefit from the x32, possibly allowing for more GPUs in a smaller space in a server?
There are also OCP ports
0:05 was the only B-roll available of a motherboard with PCI and PCIe slots?
After you said "beyond 16 lanes..." my pc froze for a moment. LOL!
So, is this the future of SLI? 16 lanes talking between the GPUs on the board and 16 lanes talking to the CPU, from each GPU?
need all that sweet x32 for the next great A.I. film, music, art and book creation app / bit miner.
I would like to see an update to the actual PCIe slot standard. It doesn't have to be exactly like this, in that I don't care about specifics like the connector type, but I think I would like this architecture. It would be something like an MCIO connector with only 4 lanes by default; no more than that would be allowed in the connector. Each individual connector would be specced to provide power between 50-100W. I don't care about the specific range, just that it should be able to provide up to a specified power. You would still support PCIe bifurcation, which means you could turn a 4-lane port into a 2x2, a 2x1x1, or a 1x1x1x1. This could be amazing for add-in cards: if you wanted to add a bunch of PCIe devices, you would simply assign a single PCIe lane to each. Honestly, not too different from the current spec.

Here is where it would get spicy, though. Part of the spec would be the spacing between each individual MCIO connector, because you would allow not only bifurcation of a slot but combination of slots as well. Maybe devices going beyond 4 lanes would simply use driver binding like the video said; I imagine it would be relatively easy to bind 2-4 4-lane PCIe connections. Single mode would be the default, but you could choose to combine up to 4 of the slots in the BIOS. This would mean devices could still connect to up to 16 PCIe lanes if you wanted, but if you didn't, you would simply have 4 individual MCIO connectors you could direct-attach to instead. It would be hugely more versatile. You would also get greater power delivery, in that a power-hungry device using all 4 connectors would be supplied with up to 200-400W directly. Sure, you are going to have devices, especially GPUs, that still need additional power, but that should be rare if they could work with up to 400W.

I think you could even allow some backwards compatibility if you made an adapter to go from the 4 MCIO connectors to PCIe. Then you would just need to provide cheap standoffs for the screws at the back. This wouldn't be a problem for most cards, but if you had a chonker like a 4090 you could have z-height issues in the case. For most regular cards it wouldn't be an issue, and the issue would go away eventually as people switched to the new standard.
@drummerdoingstuff5020
15 days ago
👀
Sooo, if on x32 the PCIe devices talk with each other, couldn't you do SLI with it? Wasn't the problem that NVLink was too slow and they couldn't really communicate?
0:26 that’s what he said.
X32 is often used to connect 2 server nodes together
Yup. And I also just hinted at an idea in a recent Digital Foundry video's comment section: independent GDDR memory modules/sticks in M.2 form factor (could come in any format: 2230, 2242 or 2280) for multiple use cases, from adding more VRAM to both iGPUs and discrete GPUs, to actually adding fast remote CPU cache (extra huge L4 cache, anyone?). I see a market for that, and I hope somebody picks the idea up. Imagine you have an iGPU and simply plug an M.2 16GB GDDR6X memory stick into one of your M.2 slots; the iGPU driver recognizes it and has instructions to use it, giving your iGPU 16GB of GDDR6X VRAM. And your CPU could steal some of its paging for a theoretical L4 cache. Your OS would simply need instructions to use it if detected.
@alexturnbackthearmy1907
15 days ago
Good idea... but it already exists: RAM sticks. They are also much faster than an M.2 device will ever be. You can even use it as a very fast SSD (a volatile one, so don't store anything important in RAM).
Riley sounds like the announcer from the price is right when he does his sponsor bit
Kind of ironic, since Intel's current LGA 1700 platform is pretty bad regarding PCIe flexibility, for example not being able to do PCIe bifurcation.
so... every SLI rig was running x32 all along?
1:47 Isn't this exactly what old GPU's with an SLI bridge did? Or am I not understanding correctly?
it still cracks me up whenever someone says dada instead of data
So using this system it would be theoretically possible to have two linked x16 slots with an RTX 4090 in each; this would give a backdoor form of SLI...
Every time you say 'Dad a Center', a piece of my soul dies.
Imagine they bring back SLI/Crossfire via PCIE-6.0 x32. Imagine 2-4 5090s or 8950 XTXs in one rig pushing 8k 4+ rays and 4+ bounces path tracing at 120fps.
Technically, a pcie gen 5x16 slot is like a pcie gen 1x256 slot
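The arithmetic roughly checks out: per lane, Gen 1 moves 2.5 GT/s with 8b/10b encoding (250 MB/s), while Gen 5 moves 32 GT/s with 128b/130b (about 3938 MB/s), so a Gen 5 x16 link carries about as much as ~252 Gen 1 lanes. A quick sketch of the calculation:

```python
def lane_mb_per_s(gt_per_s, payload_bits, line_bits):
    # 1 GT/s = 1e9 bits/s per lane before encoding overhead;
    # scale by the encoding's payload fraction, /8 for bytes, /1e6 for MB
    return gt_per_s * 1e9 * payload_bits / line_bits / 8 / 1e6

gen1 = lane_mb_per_s(2.5, 8, 10)    # 8b/10b  -> 250 MB/s per lane
gen5 = lane_mb_per_s(32, 128, 130)  # 128b/130b -> ~3938 MB/s per lane
print(round(16 * gen5 / gen1))      # -> 252 (Gen 1 lanes' worth in a Gen 5 x16)
```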
I was just wondering this yesterday
Didn't the Intel cheese grater Mac Pro have a dual x16 for a dual FirePro thing? I remember seeing their fancy slot that was supposed to be for high bandwidth GPU stuff
Why wasn't the number of available CPU lanes mentioned? Mine has 24 lanes; there's no x32 possible, 2x x16 or not. Am I wrong? One will be x16 and the other x8.
I'm curious, why isn't there a PCIe x3, x6, or x12? (1+2, 2+4, 4+8)
@shanent5793
15 days ago
Because that's not how it works, the video is incorrect and PCIe link widths are hardware and have nothing to do with software. The fundamental reason for preferring x2 and x4 has to do with efficient clock division. Links larger than x4 are built up from multiple x4 (still at the hardware level) so x12 was actually in the spec but was recently removed from Gen 6 because it was never used. The x3 and x6 connections would be asymmetric and were not included because they complicate the lane reversal feature which allows designers to flip the order of the lanes for design convenience
@alexturnbackthearmy1907
15 days ago
@@shanent5793 Well, unless you replace a 1+2 connection with 1+1+1... at which point you may start asking yourself, wouldn't it be easier and more compact to just use an x4 slot to begin with?
@shanent5793
15 days ago
@@alexturnbackthearmy1907 they would still be three separate links from a hardware perspective, so the OS would have to manage three devices. In principle there could be multiple devices on one card, as long as the host supports it and they are aligned in a binary sequence eg. 1+1+1 or 2+1, but not 1+2
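The point about fixed, power-of-two-ish widths also shows up in link training: both ends advertise a maximum width and the link trains to the widest commonly supported value. A simplified model (real training also handles lane reversal, degraded lanes, and per-device width support, which this sketch ignores):

```python
# Widths defined across PCIe revisions (x12 and x32 were dropped in 6.0)
WIDTHS = (1, 2, 4, 8, 12, 16, 32)

def negotiate(a_max: int, b_max: int) -> int:
    # Train to the largest defined width both link partners can support
    return max(w for w in WIDTHS if w <= min(a_max, b_max))

print(negotiate(16, 8))   # -> 8  (an x16 card in an electrically x8 slot)
print(negotiate(32, 16))  # -> 16
```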
0:17 _But there is another_ Shouldn't Yoda have said _But, another one there is_?
Can someone explain why no one has used SLI to boost VR? Seems like a no brainer to have each eye run by an individual GFX card, given that modern VR headsets already use 2 screens instead of just 1 tablet glued into ski goggles.
Aha, the last time I did driver binding was to bond 2 56k dial-up modems together into 1... in 1999.
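The modern descendant of that modem bonding is NIC link aggregation, which is closer to what the x32 driver binding described in the video does than SLI is. As a sketch, an 802.3ad (LACP) bond in netplan might look like this; the interface names are placeholders:

```yaml
# Hypothetical netplan config bonding two ports into one logical link
network:
  version: 2
  ethernets:
    enp1s0f0: {}
    enp1s0f1: {}
  bonds:
    bond0:
      interfaces: [enp1s0f0, enp1s0f1]
      parameters:
        mode: 802.3ad    # LACP aggregation
        lacp-rate: fast
```

The switch on the other end has to be configured for LACP as well, or the bond won't carry traffic.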
How many PCIe lanes do you need to play Crysis?
@gamecubeplayer
15 days ago
probably 4096
@alexturnbackthearmy1907
15 days ago
All of them.
Now we just need Desktop Chips to actually provide a reasonable amount of lanes so we can have 4 or more X16 slots
Video idea- Usb-c Explained: Everything about the Usb-c type and all its types!
400 Gigabit huh? And I am here with my mobile-DSL hybrid connection that maxes out at a 150mbit connection and that will never ever be more than that in this skyscraper.
this will be useful for the upcoming Intel CPUs and Nvidia GPUs
So it's Crossfire/SLI for server network cards basically.
Pour one out for the man-hours spent on the 1-second star wars clip at 0:18. Worth it.
2:20 But seriously, at this point you won't be using consumer grade hardware; this is in the area of high end servers and custom equipment.
How come we can have more than 2 or 3 M.2 SSDs, if each of them uses 4 PCIe lanes and the graphics card uses 16, when the CPU only supports 20 lanes?
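The usual answer is that only some slots hang off the CPU; the rest are fanned out by the chipset behind a shared uplink, so they share that uplink's bandwidth instead of each getting dedicated CPU lanes. A rough lane-budget sketch with assumed, illustrative numbers (exact routing varies by board):

```python
# Hypothetical split for a 20-lane desktop CPU; figures are illustrative only
cpu_direct = {"x16 GPU slot": 16, "M.2 slot #1": 4}
chipset = {"M.2 slot #2": 4, "M.2 slot #3": 4, "SATA/USB/LAN": 8}

print(sum(cpu_direct.values()))  # -> 20, the CPU's direct budget
# Chipset devices can total more lanes than the uplink provides
# because they contend for its bandwidth rather than owning lanes.
print(sum(chipset.values()))     # -> 16 lanes behind one shared uplink
```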
my mum walked past and asked if i was watching something with steve carrell and i'm never going to unhear that.
yeah but how do we get GAMES that can multithread batchcalls, shadows, physics etc. that are all crushing my gameplay since they are all singlethreaded. ?? I'm sure there are good reasons for the single-threadedness but I need to have greater render distance of the objects which means having hundreds more objects which means thousands more batch-calls which means 10fps because it's all on one thread because of well it needs to be otherwise the game gets ahead of itself but there's really no way to use all of this "80+ cores" ??
@tsmspace
15 days ago
you can x32 my ____ if it's not going to make the game work better. No one cares how it looks once they want to really play, they want it to be capable.
All I heard was it was not SLI/Crossfire's fault it died. Twas complex graphics drivers that killed the beast.
@alexturnbackthearmy1907
15 days ago
And lack of support. Of the handful of games supporting SLI, only a select few actually scale with the number of cards, and most "SLI supported" games have a completely messed up implementation, so it's very laggy and the second, third and fourth cards aren't even doing anything.
@kevinerbs2778
15 days ago
Drivers are even more complex now with DLSS, and just looking at Intel Arc cards, we're still stuck with drivers being the limiting factor even on DX12. Look at the insane performance increases they're still gaining just from driver fixes for Arc cards.
@kevinerbs2778
15 days ago
@@alexturnbackthearmy1907 There are 1,064 games for DX11 that support or can use SLI; that's 20% of the 5,889 games for DX11. Here's something that really disappointed me: it's taking 8 times longer for games to come out on DX12 than DX11. In 10 years, DX12 has barely put out as many games as DX11 was releasing per year. There are 415 DX12 games out now, around 50% of which support some sort of ray tracing, in the 8 years and 9 months DX12 has been out; 347 games were released per year on DX11 over a ten-year span.
x32? Longboi slot. I do know what I'd do with one (if I needed it): network and disk transfers.
I think a double-length connector would help with sag, no? Just don't wanna have to plug that thing in.
Sheesh… I was thinking I can finally get the full use of my gpu but it’s fine, I’ll stay on 8x8 for a while longer.
You could just make the video card with a ribbon to another PCIe slot. GPUs are already double wide.