State of ROCm 5.3 in 2022: 6x MI210, 1 petaflop, in the 2U Supermicro AS-2114GT-DNR

Science and technology

Wendell dives back into his new 2U Supermicro AS-2114GT-DNR server to talk more specifically about the six AMD Instinct MI210s held within! So many flops!
Thanks to EnGenius for sponsoring this video!
Check out the ECW336 here: www.engeniustech.com/engenius...
**********************************
Check us out online at the following places!
linktr.ee/level1techs
IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
-------------------------------------------------------------------------------------------------------------
Intro and Outro Music: "Earth Bound" by Slynk
Other Music: "Lively" by Zeeky Beats
Edited by Autumn

Comments: 189

  • @jd_flick
    @jd_flick Жыл бұрын

    I am really hoping AMD can make not using CUDA a reality

  • @harryshuman9637

    @harryshuman9637

    Жыл бұрын

    It's all up to the devs really.

  • @zacker150

    @zacker150

    Жыл бұрын

    I've given up on AMD gpus ever competing in compute. Hopefully Intel's OneAPI works out.

  • @RaaynML

    @RaaynML

    Жыл бұрын

    @@zacker150 It's so weird to comment this on the very video where you heard that they're currently competing in several of the top supercomputers.

  • @LeDabe

    @LeDabe

    Жыл бұрын

    @@RaaynML The AMD environment lacks tooling. Though a new tool, MIPerf, is coming and should play a similar role to the Nsight Compute tool Nvidia provides.

  • @youkofoxy

    @youkofoxy

    Жыл бұрын

    They are trying hard, very hard; however, the curse of Ctrl-C Ctrl-V runs too strong in the programming community.

  • @kamrankazemi-far6420
    @kamrankazemi-far6420 Жыл бұрын

    Being able to write code once and run it on either platform is so huge.

  • @ramanmono

    @ramanmono

    Жыл бұрын

    Yes, Java promised to do this a gabillion years ago. Sadly I don't see any new tool getting any closer.

  • @AI-xi4jk
    @AI-xi4jk Жыл бұрын

    It would be cool to see just some torch benchmarks of some regular ML models vs 3090 and other Nvidia cards.

  • @markpoint1351
    @markpoint1351 Жыл бұрын

    My god Wendell, you really made my day with that Shining meme 🤣!!! Thank you.

  • @iyke8913
    @iyke8913 Жыл бұрын

    Wendell flips heavy server gear with ease and grace, meanwhile, ....... Linus drops everything.

  • @CycahhaCepreebha

    @CycahhaCepreebha

    Жыл бұрын

    The virgin tech enthusiast vs. the chad IT professional.

  • @stuartlunsford7556
    @stuartlunsford7556 Жыл бұрын

    I really hope everyone starts pronouncing it Rock'em, like Rock'em Sock'em Robots. It's much more funner that way.

  • @scott2100

    @scott2100

    Жыл бұрын

    Same here, I thought that was just how it was pronounced

  • @bakedbeings

    @bakedbeings

    Жыл бұрын

    People pronouncing it another way hadn't occurred to me.

  • @jadesprite

    @jadesprite

    Жыл бұрын

    Small m implies that it should be pronounced this way!

  • @myselfremade

    @myselfremade

    Жыл бұрын

    Always have, always will.

  • @vtheofilis
    @vtheofilis Жыл бұрын

    That Shining meme was pure gold. So, ROCm can help port CUDA stuff to OpenMP or whatever the open standard is called, on the data center side. I hope it also becomes easier for desktop CUDA code to be ported, so that, for example, ANSYS can support AMD GPUs more easily.

  • @brenj

    @brenj

    Жыл бұрын

    👍🏻

  • @hammerheadcorvette4

    @hammerheadcorvette4

    Жыл бұрын

    ROCm (formerly HSA) has had tools to port CUDA workloads for years, but the presence and convenience of CUDA has been too strong for people to care. All it takes is an open-source project and a company willing to change from the norm for whatever reason.

  • @Richardus33
    @Richardus33 Жыл бұрын

    Love this channel, learned a lot over the years. Thanks Wendell!

  • @stranglehold4713
    @stranglehold4713 Жыл бұрын

    I regard you and Steve Burke as the two best voices in the computer hardware space. Your channel is a treasure trove of information.

  • @crookedtuna
    @crookedtuna Жыл бұрын

    Been using ROCm on a 6700 XT for Stable Diffusion and I'm shocked how well it performs considering it's not even a CDNA GPU.

  • @andrew_hd

    @andrew_hd

    Жыл бұрын

    It's really cool tech to tinker with. I'm also using a 6700 XT for SD. It's so nice to have 12 GB of VRAM.

  • @zabique

    @zabique

    Жыл бұрын

    Could you recommend any tutorial on how to make it work?

  • @chriswright8074

    @chriswright8074

    Жыл бұрын

    Most recent AMD consumer GPUs have support for it.

  • @Ronoaldo

    @Ronoaldo

    Жыл бұрын

    Do you happen to have any tutorials on running such models with consumer GPUs? I have a 6800 XT and would love to work on it. The farthest I got was using the default Docker container with TensorFlow; not sure if I'm on the right track? Thanks for any input.
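
A minimal sketch of the kind of check this thread is after, assuming a ROCm build of PyTorch and the commonly used HSA_OVERRIDE_GFX_VERSION=10.3.0 workaround for gfx103x consumer cards (6700 XT / 6800 XT); whether the override is still needed depends on the ROCm release, so treat it as a starting point rather than official guidance:

    # Minimal sanity check: does the ROCm build of PyTorch see the card?
    import os
    os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")  # common RDNA2 workaround; set before torch touches the GPU

    import torch

    # ROCm builds reuse the torch.cuda namespace, so the usual checks apply unchanged.
    print("GPU visible:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        x = torch.randn(1024, 1024, device="cuda")
        print("Matmul OK, norm:", (x @ x).norm().item())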

  • @Gastell0
    @Gastell0 Жыл бұрын

    12:47 - MI25 also supports SR-IOV, but there's no public documentation on how to actually utilize it

  • @wayland7150

    @wayland7150

    Жыл бұрын

    Tell us more please.

  • @2megaweeman

    @2megaweeman

    Жыл бұрын

    @@wayland7150 I think @antonkovalenko is referencing the way you can flash the vBIOS of a WX 9100 onto an MI25 and use it for GPU tasks. I think the only way right now to do it after you flash is to use GPU-P (Hyper-V). Look for Vega 64 GPU virtualization.

  • @wayland7150

    @wayland7150

    Жыл бұрын

    @@2megaweeman Yeah, unfortunately the MI25 does not make sense for the homelab at the current price. Really wanting SR-IOV; it would make these cards worth a lot more than Vega if someone smart could show us how to do that.

  • @TheDoubleBee
    @TheDoubleBee Жыл бұрын

    I work in the field of photogrammetry, a subset of computer vision, and I'm praying to whatever deity is willing to listen to make CUDA obsolete, but everything is moving so, so slow. Quite a while back I came across SYCL and I was mightily impressed, but it was in super early stages and I haven't checked back recently. Nvidia has had a horrible stranglehold on the whole computer vision industry for quite a while, but there might be some cracks showing given their recent open-sourcing of CV-CUDA libraries, which, you don't need me to point out, is an incredibly un-Nvidia move to pull - following their earlier and also un-Nvidia move of sort-of open-sourcing their driver for Linux.

  • @Pheatrix

    @Pheatrix

    Жыл бұрын

    Nvidia has also started updating their support for OpenCL. You're no longer stuck forever on version 1.2 if you have an Nvidia GPU; you can now use 3.0! Maybe you should have a look at OpenCL. It's pretty much CUDA, but as an open standard with support from all major vendors (for both GPU and CPU). It just needs publicity...

  • @ll01dm
    @ll01dm Жыл бұрын

    It's good to hear ROCm has gotten easier to install. Back when I was using a Vega 56 I tried installing it. It was a nightmare. I gave up and just used a Docker image.

  • @jonteno
    @jonteno Жыл бұрын

    Going to be so fun watching you do vids on these! The enterprise side is so interesting atm!

  • @paxdriver
    @paxdriver Жыл бұрын

    You should just try out Stable Diffusion making 4K images instead of 1024x1024. The processing requirements scale quadratically with pixel density for larger text-to-image generation, so it's not feasible on a normal human's system, but the algorithm and walkthroughs are so well organized that anyone should be able to download the weights, set it up, and get it running. You'd be the first with 4K diffusion, and you could even try training it up to get better at faces and hands using that 2U-sized sweet, sweet top-rack candy 😍
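
Rough numbers behind the quadratic-scaling claim above, taking it at face value (real diffusion models mix convolution and attention, so this is only a back-of-the-envelope upper bound):

    # Pixel-count ratio between a 1024x1024 generation and a 4K (UHD) one
    base = 1024 * 1024            # 1,048,576 pixels
    target = 3840 * 2160          # 8,294,400 pixels
    ratio = target / base
    print(f"pixel ratio: {ratio:.1f}x")              # ~7.9x more pixels
    print(f"if cost is quadratic: {ratio**2:.0f}x")  # ~63x the work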

  • @Nobe_Oddy
    @Nobe_Oddy Жыл бұрын

    OMG WENDELL!!!! @ 3:00 Is that Betty White as a ZOMBIE on your desk?!?!?!?! THATS AWESOME!!!! lmao!!!

  • @tanmaypanadi1414
    @tanmaypanadi1414 Жыл бұрын

    16:47 🤣 relentless execution

  • @WolfgangWeidner
    @WolfgangWeidner Жыл бұрын

    Important stuff, thank you!

  • @spuchoa
    @spuchoa Жыл бұрын

    Great video!

  • @CycahhaCepreebha
    @CycahhaCepreebha Жыл бұрын

    I like these little looks into Wendell's server room. It's basically my dream home setup. I've no clue what I'd do with it all, probably waste time putting Pi-hole on Kubernetes or something, but still.

    I'm actually really excited about the new, improved ROCm. I've got torch running on a 6900 XT so I can sort of do CUDA through ROCm already, but it's still missing an awful lot of features and performance compared to the Nvidia version; 99% of the time I'm better off just using an Nvidia card, even though my best Nvidia stuff is two generations behind RDNA2. I think consumer-accessible and actually fun machine learning things like Stable Diffusion are great for this field: the more people who get into CUDA and ROCm, the more emphasis will be placed on accessible hardware with >8GB of GDDR and decent compute capabilities that are easy enough to use that even I could set it up.

    Unfortunately the reality is that, despite the advances they've made, AMD aren't really a competitor yet. Nvidia still has an enormous head start, and breaking the "vendor lock-in" that CUDA so effectively creates is only the first step. AMD need to actually deliver competitive performance. They're in a good position to do that: chiplets are the future and Nvidia's monolithic dies are getting truly ridiculous (>600mm²!); AMD's yields are going to be far higher, which means they should be able to afford to stuff more cores into their products. That they aren't is somewhat baffling to me.

  • @bobsyouruncle1574
    @bobsyouruncle1574 Жыл бұрын

    Oh your server room sounds niiiice. :))

  • @KKolbet
    @KKolbet Жыл бұрын

    The title is as appealing as the scientific names of most plants.

  • @randomhkkid
    @randomhkkid Жыл бұрын

    Would love to see stable diffusion performance on this machine. How large an image can you generate with the pooled gpu memory?

  • @chooka003
    @chooka003 Жыл бұрын

    I'd LOVE this for BOINC!!! "Drool"

  • @NaumRusomarov
    @NaumRusomarov Жыл бұрын

    Modern Fortran is still used even today for scientific computing. If you're a scientist who doesn't have time to deal with the quirks of C-family languages, then Fortran is really the best choice for you.

  • @mvanlierwalq
    @mvanlierwalq Жыл бұрын

    Perhaps not the only reason, but the DOE's Energy Exascale Earth System Model (E3SM, the DOE climate model), requires big-time FP64 flops. AMD is, and has been for a while, WAY ahead of NVIDIA when it comes to FP64. Btw, running E3SM might be a good test. As far as I know, DOE has developed containerized versions of E3SM, and you should be able to download and run it (or a small chunk of it) on that machine.

  • @mvanlierwalq

    @mvanlierwalq

    Жыл бұрын

    I'll add that traditionally climate and weather models have been written in Fortran. DOE has sunk a lot of effort into getting code refactored into C++ to be able to use GPUs. NASA instead has just stuck with CPUs in their machines. Big question where the field as a whole goes from here.

  • @ChristianHowell
    @ChristianHowell Жыл бұрын

    Very good video... I think I know why everyone is rushing to support AMD... About 3 months or so ago I was watching a tech video about self-driving, and the gist was that full self-driving will require around 2 PF of BF16, and if AMD hits their target with the MI300 it will have around 2.5 PF (QOPS?), as the MI250X has 383 TOPS with the MI300 aiming for 8x the AI perf (from AMD's presentation)... That's exciting AF...

  • @Marc_Wolfe
    @Marc_Wolfe Жыл бұрын

    Maybe in the future we can see what we poor people can still do with an MI25. I struggled for a little bit to get ROCm installed (apparently Vega support ended after ROCm 5.0, I think it was, and it needs specific versions of Linux too), then I gave up and flashed its vBIOS to a WX 9100... after bashing my head on my keyboard to figure out the right buttons to press to get the flash to work... and realizing there were 2 BIOS chips that needed to be flashed.

  • @ewilliams28

    @ewilliams28

    Жыл бұрын

    I've seen those for less than $100 on eBay. I would really love to get one or two of those working for a VDI project that I'm working on. I really hate GRID.

  • @Marc_Wolfe

    @Marc_Wolfe

    Жыл бұрын

    @@ewilliams28 Paid $80 plus tax for mine. I'd love a good excuse to use it for more than just gaming, but that was my main goal; so not a big concern, just nerd desires.

  • @Ronoaldo
    @Ronoaldo Жыл бұрын

    16:41 This was amazing!!!😂

  • @joshhua5
    @joshhua5 Жыл бұрын

    I'll set this up on my desktop tonight, been watching ROCm for a while. Maybe I can finally retire the M40.

  • @LA-MJ
    @LA-MJ Жыл бұрын

    N00b question: can one test ROCm on consumer RDNA2?

  • @tanmaypanadi1414

    @tanmaypanadi1414

    Жыл бұрын

    Asking the real questions. As far as I know, no, but I'm sure someone will figure it out.

  • @linuxgeex
    @linuxgeex Жыл бұрын

    ROCm is great because you can have the same machine learning setup on your workstation as on the supercomputer. This will succeed for the same reason that x86 succeeded and the same reason that Linux succeeded - accessibility by the masses. I believe the popular term these days is Democratisation.

  • @garytill
    @garytill Жыл бұрын

    Let's get that onto a 1ru tray.. nice.

  • @ewilliams28
    @ewilliams28 Жыл бұрын

    I would love to be able to use Instinct cards and be able to get rid of GRID as well.

  • @spinkey4842
    @spinkey4842 Жыл бұрын

    0:48 AAAAHHHHHHHHHH!!!!!!!! him no want things plugged in his body

  • @matiasbrandolini
    @matiasbrandolini Жыл бұрын

    Level 1? More like level 2000. I didn't understand a word until I heard Fortran... maybe because I'm a COBOL programmer :)

  • @sailorbob74133
    @sailorbob74133 Жыл бұрын

    I'd love to see some follow up on this one.

  • @gsedej_MB
    @gsedej_MB Жыл бұрын

    Great video. I would just like broader (Radeon card) support. I was playing with ROCm since its release on the RX 480, but totally lost interest with the lack of RDNA(1) support, and even the RX 480 lost its official support. And all the details with PCIe atomics and almost no laptop dGPU or APU support. But again, nice that they at least have enterprise support.

  • @builtofire1
    @builtofire1 Жыл бұрын

    I guess Wendell has electricity bills.

  • @Mr_Wh1
    @Mr_Wh1 Жыл бұрын

    4:20 - A little server room ASMR for us all.

  • @RaspyYeti
    @RaspyYeti Жыл бұрын

    Would it be possible for AMD to create its own Titan by having an RDNA die and a CDNA die in an SoC? Would they be able to use async compute to feed the CDNA die and boost ray tracing calculations?

  • @denvera1g1
    @denvera1g1 Жыл бұрын

    Get this man some MI250Xs.

  • @DarkReaper10
    @DarkReaper10 Жыл бұрын

    Hi Wendell, I think you mistook Fortran for Cobol here. Fortran is used in science applications that get sent to HPC clusters, not really useful for finance.

  • @OGBhyve

    @OGBhyve

    Жыл бұрын

    He definitely means Fortran here. Fortran, C, and C++ are the best supported languages for GPU programming. Those languages also have the OpenMP support he mentioned.

  • @DarkReaper10

    @DarkReaper10

    Жыл бұрын

    @@OGBhyve I know but his explanation that Fortran exists because of legacy finance applications is a Cobol backstory. I am a fellow HPC guy, I know Fortran very well.

  • @OGBhyve

    @OGBhyve

    Жыл бұрын

    @@DarkReaper10 It's used in Finance too, but I see your point that it is more popular in scientific applications.

  • @landwolf00
    @landwolf00 Жыл бұрын

    Hi Wendell. Do you intend to benchmark ROCm for PyTorch? I'm very interested in this and it seems like it doesn't really exist on the web. As others have said, CUDA dependence is scary!
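
A minimal, hypothetical timing script of the sort asked about here, assuming a ROCm (or CUDA) build of PyTorch; because the ROCm build exposes the GPU through the same torch.cuda API, the identical script runs on an Nvidia card for a direct comparison:

    import time
    import torch

    assert torch.cuda.is_available(), "no ROCm/CUDA device visible to torch"
    n, iters = 4096, 20
    x = torch.randn(n, n, device="cuda")

    for _ in range(3):            # warm-up
        x @ x
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        y = x @ x
    torch.cuda.synchronize()      # wait for the GPU before reading the clock
    elapsed = time.perf_counter() - start

    flops = 2 * n**3 * iters      # floating-point operations in iters n x n matmuls
    print(f"{elapsed / iters * 1e3:.2f} ms/iter, ~{flops / elapsed / 1e12:.1f} TFLOPS")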

  • @hedrickwetshaves1997
    @hedrickwetshaves1997 Жыл бұрын

    @Level1Techs Could you please explain all the different formats (FP64, FP32, FP16, INT8), and is there any way to compare them with each other?
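
One simple way to compare those formats is to ask NumPy for the range and precision of each type; the sketch below shows only the numeric properties, not how fast a given GPU executes each format (that is what the FP64/FP32/FP16/INT8 throughput figures on a spec sheet describe):

    import numpy as np

    # Range and precision of the floating-point formats
    for t in (np.float64, np.float32, np.float16):
        info = np.finfo(t)
        print(f"{t.__name__:>8}: {info.bits:2d} bits, max ~{info.max:.3g}, "
              f"~{info.precision} significant decimal digits")

    # INT8 is an integer type, commonly used for quantized inference
    i8 = np.iinfo(np.int8)
    print(f"    int8:  8 bits, integer range {i8.min}..{i8.max}")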

  • @cedrust4111
    @cedrust4111 Жыл бұрын

    @Level1Techs Does Nvidia or Intel have a direct competitor to the Instinct accelerators?

  • @owlmostdead9492
    @owlmostdead9492 Жыл бұрын

    The day CUDA is not the only option will be a good day

  • @justwhyamerica
    @justwhyamerica Жыл бұрын

    Patrick Boyle runs a finance channel and might be willing to work with you on actually using OpenBB.

  • @ramanmono
    @ramanmono Жыл бұрын

    So what's oneAPI and HIP? Now we need 5 APIs, for example, to run ray tracing on the GPU in Blender (Nvidia OptiX and CUDA, AMD HIP, Intel oneAPI, and Mac Metal). How will a small team or an individual working on a piece of software that needs GPU acceleration get it to work (decently optimized) on all mainstream platforms?

  • @Pheatrix

    @Pheatrix

    Жыл бұрын

    They could use OpenCL, an already existing API with support from all major vendors for CPU and GPU computation (and everything else that implements it, e.g. FPGAs). It also supports all major OSes (Windows, Linux, Mac, and even Android, just to name a few). I just don't get why we need another standard that does the exact same thing.

  • @ramanmono

    @ramanmono

    Жыл бұрын

    @@Pheatrix Yeah, but it's badly buggy and you can never get close to the performance of CUDA. That is why it is abandoned. So seriously, no dev is gonna use OpenCL for high-performance GPGPU. Apple also completely removed support for it in favor of their own, way better performing Metal API.

  • @Pheatrix

    @Pheatrix

    Жыл бұрын

    @@ramanmono BOINC, pretty much every cryptominer, and a lot of other programs use OpenCL. The performance gap between CUDA and OpenCL is there because Nvidia decided to only support up to OpenCL 1.2, while there are a lot of features that require at least 2.0. Recently Nvidia bumped the supported version up to 3.0, so the performance gap should no longer be there. And the bugs: well, every vendor has to implement their own driver and compiler. AMD is known for buggy drivers, and as I already said, Nvidia pretty much abandoned OpenCL in favor of their proprietary solution. All of these problems are solvable, and with way less work than creating a completely new solution that solves the exact same problem.

  • @philhacker2405
    @philhacker2405 Жыл бұрын

    Blender would be Fun.

  • @ChinchillaBONK
    @ChinchillaBONK Жыл бұрын

    Hi, is it possible to do a basics video about ROCm? Sorry to bother you, and thanks. Also, what are the differences in use between EPYC and Threadripper CPUs, and between the many different GPUs like the AMD Instinct ones vs. the Nvidia A6000?

  • @Marc_Wolfe
    @Marc_Wolfe Жыл бұрын

    17:02 Doom 2016 LOL

  • @myselfremade
    @myselfremade Жыл бұрын

    TIM ALLEN NOISES

  • @mrfilipelaureanoaguiar
    @mrfilipelaureanoaguiar Жыл бұрын

    250v 20 Amps, at some point that could cook food or boil big amounts of water, that's super serial seriasly serial

  • @Veptis
    @Veptis Жыл бұрын

    I suppose in the future we will look at Intel, their accelerator hardware (GPU Max?) and software stack (oneAPI), which includes all kinds of solutions. None of which seem finished, though.

  • @jannegrey593
    @jannegrey593 Жыл бұрын

    OK. I hope to also see some more modern Radeon Instincts here, unless the MI210 is one. IDK if AMD changes their names for those cards honestly, but I did hear about the MI250 and MI300, the latter of which probably isn't out yet. I hope someone will educate me on this, because honestly a quick Google search turns up a lot of sources that IDK if I should trust.

  • @KL-ky8fy

    @KL-ky8fy

    Жыл бұрын

    It's the same architecture as the MI250; they are both CDNA2, launched in March this year.

  • @samuelschwager

    @samuelschwager

    Жыл бұрын

    MI250 was launched 11/2021, MI210 03/2022, MI3xx is expected for 2023.

  • @dracleirbag5838
    @dracleirbag5838 Жыл бұрын

    What does it cost?

  • @danielsmith6834
    @danielsmith6834 Жыл бұрын

    As for why Oak Ridge chose AMD for Frontier -- my guess is that Nvidia has massively optimised their silicon for AI workloads, where AMD has targeted more general GPGPU compute workloads. For a general purpose HPC system, FP64 is critical. Looking at the relative FP64 performance (especially FP64/W) shows how wide the gap is. Why Facebook/Meta are looking to switch? Given I'd imagine most of their workload is AI/ML, that's a much tougher puzzle.

  • @duckrutt

    @duckrutt

    Жыл бұрын

    I don't see Meta swapping vendors but I can see them bringing up their cool new software every time they need to buy a batch of Tesla cards.

  • @Jack-qj2pr
    @Jack-qj2pr Жыл бұрын

    One bug I found with ROCm is that it just doesn't work at all if you mix a Radeon Pro Duo Polaris with an RX Vega 64. It just doesn't detect anything if you mix cards. Pretty frustrating.

  • @TheKazragore

    @TheKazragore

    Жыл бұрын

    I mean is mixing cards any sort of norm? Not making excuses (it not working sucks), merely pointing out that may not exactly be a priority usecase for fixes.

  • @Jack-qj2pr

    @Jack-qj2pr

    Жыл бұрын

    @@TheKazragore I agree. I'd imagine with it being a relatively niche scenario, nobody would've tested it or even considered it. I just compiled ROCm again yesterday and my issue seems to have been fixed now, so happy days :)

  • @kortaffel
    @kortaffel Жыл бұрын

    Why are they only supporting OpenCL on Instinct? Why don't we have Vulkan or a new VulkanCompute version available? I heard OpenCL is stuck

  • @Yandarval
    @Yandarval Жыл бұрын

    Every time I see Wendell go into the server room, all I can think of is: where is your hearing protection, Wendell?

  • @thesunexpress
    @thesunexpress Жыл бұрын

    Do a dnetc run on it?

  • @LeDabe
    @LeDabe Жыл бұрын

    rocprof is soon to be hidden under a GUI called MIPerf that has yet to be released by AMD but is available on Crusher (a TDS of Frontier).

  • @LeDabe

    @LeDabe

    Жыл бұрын

    It will provide information similar to what Nsight Compute does. IMO tooling was one of the last big problems with working with AMD cards.

  • @Pheatrix
    @Pheatrix Жыл бұрын

    There already is an open standard for this: OpenCL! It runs on pretty much everything (including CPUs, FPGAs, and GPUs), and with OpenCL 3 you also get a newer version than 1.2 on Nvidia devices. Why do we need a new standard if we can just use the one that already exists and has support from every major vendor?
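
To make that concrete, a minimal vector-add sketch using the PyOpenCL bindings; the same kernel source should run on whichever OpenCL platform the installed drivers expose (AMD, Nvidia, Intel, or a CPU runtime), with the package and a working driver assumed on the reader's side:

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()          # picks an available OpenCL device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel is plain OpenCL C and is compiled at runtime for whatever device was picked
    program = cl.Program(ctx, """
    __kernel void add(__global const float *a, __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    print("max error:", np.abs(result - (a + b)).max())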

  • @Misiek-oc7bu
    @Misiek-oc7bu Жыл бұрын

    but can it run crysis

  • @zyxwvutsrqponmlkh
    @zyxwvutsrqponmlkh Жыл бұрын

    But can it run Cuda?

  • @BogdanTestsSoftware
    @BogdanTestsSoftware Жыл бұрын

    Could you tell the wire monkey to wear hearing protection, so that they don't get hearing damage? You got me laughing with tears about the #shining and AMD's relentless execution!

  • @bryantallen703
    @bryantallen703 Жыл бұрын

    but, can 1 MI250 run 64 instances of CRYSIS 64-bit

  • @luridlogic
    @luridlogic Жыл бұрын

    Can ROCm be set up on Debian rather than Ubuntu?

  • @squirrel6687

    @squirrel6687

    Жыл бұрын

    Anything can. I run Debian Bookworm with everything from PerceptiLabs, Anaconda with all the juices even with SecureBoot and Nvidia and their CUDA Toolkit. Once up and running, no upgrade hell as with Ubuntu.

  • @scottxiong5844
    @scottxiong5844 Жыл бұрын

    MM laser...it is fine. :D xD

  • @Nec89
    @Nec89 Жыл бұрын

    IM SUPER SERIAL GUYS! CONSOLE CABLES ARE REAL!!1!

  • @Level1Techs

    @Level1Techs

    Жыл бұрын

    IM SERIAL! :D

  • @camofelix
    @camofelix Жыл бұрын

    All they have to do is make HIP *checks notes* not shit. It's still a PITA to work with.

  • @NKG416
    @NKG416 Жыл бұрын

    I don't know shit about HPC, but it seems everyone likes open source. It kind of felt like I bought stuff from the right company.

  • @maximusoptimus2000
    @maximusoptimus2000 Жыл бұрын

    Just compare it with supercomputers from about 20 years ago

  • @engineeranonymous
    @engineeranonymous Жыл бұрын

    In my humble opinion, AMD should focus on a unified memory architecture like Apple's M-series CPUs. You cannot offload a lot of computations to the GPU because the memory transfer requirement simply kills your gains. A unified architecture would make every operation a target for acceleration, and Nvidia has no answer for this since they only make GPUs. AMD CPUs with built-in GPUs could break benchmarks for both Intel and Nvidia. Correction: I'm such a fool. HBM unified memory will come to AMD datacenters with the MI300 in 2023. They announced it at Financial Analyst Day 2022. I can't believe I missed it.

  • @tanmaypanadi1414

    @tanmaypanadi1414

    Жыл бұрын

    Xilinx might be able to help with accelerators, but it's a few years off before we see any applications in the consumer realm.

  • @jesh879

    @jesh879

    Жыл бұрын

    You realize AMD was the one who created the HSA Foundation, right? HSA was demonstrated before Zen 1 existed. When AMD moves on this, no one will be doing it better.

  • @engineeranonymous

    @engineeranonymous

    Жыл бұрын

    @@jesh879 Yeah, I know, but HSA only includes cache coherency (that's what I understand from v1.2 of the standard), while Apple's implementation goes beyond what AMD or Intel called UMA. In the M1, the CPU and GPU share the same RAM and can utilize it as needed.

  • @garrettkajmowicz
    @garrettkajmowicz Жыл бұрын

    Why hasn't AMD upstreamed their TensorFlow support?

  • @intoeleven

    @intoeleven

    Жыл бұрын

    ROCm has a supported TensorFlow repo on their GitHub.

  • @garrettkajmowicz

    @garrettkajmowicz

    Жыл бұрын

    @@intoeleven Yes. They have a fork of TensorFlow. Which is why I've asked why they haven't upstreamed it. If it isn't mainline, it doesn't really matter that much.

  • @intoeleven

    @intoeleven

    Жыл бұрын

    @@garrettkajmowicz They are upstreaming and syncing it constantly. Their own fork is for customers.

  • @NavinF
    @NavinF Жыл бұрын

    No mention of consumer AMD GPUs? It kinda feels like AMD doesn't care about ML. Researchers use CUDA because it's officially supported on their desktops.

  • @Cooe.

    @Cooe.

    8 ай бұрын

    They aren't going after individual researchers... 🤦 They want super computers, data centers, and multinational companies where it's MUCH easier, more efficient, and more profitable to gain market-share. And it's working. And RDNA cards did eventually get ROCm support though.

  • @NavinF

    @NavinF

    8 ай бұрын

    @@Cooe. Meh. Many off the shelf models require CUDA for at least one layer. Still makes no sense to use AMD for machine learning

  • @Cooe.

    @Cooe.

    8 ай бұрын

    @@NavinF Massive data centers aren't using off the shelf models, ya dingleberry... 🤦

  • @Cooe.

    @Cooe.

    8 ай бұрын

    @@NavinF Also, ROCm lets you run CUDA code anyways even if you're lazy (even though you won't get quuuuuuite the performance you would running it natively w/ the same FLOPS on Nvidia).

  • @linuxgeex
    @linuxgeex Жыл бұрын

    Cloud-managed IoT can go straight to hell. They should ship an app that runs on your phone and provides an API that the IoT gear detects, and let you pair over Bluetooth or with a button at extreme close range (easy to detect with the WiFi or BT hardware). After that you should be able to manage it from the same app running on your PC, and you should be able to install a PKI signature onto the IoT device which forever locks it to a cert under your control, so it can't be hijacked, not even by your child/spouse/roommate/landlord, etc.

  • @snowwsquire

    @snowwsquire

    Жыл бұрын

    IoT is dumb; Internet Protocol is overkill for a lightbulb. Matter over Thread is the future, Z-Wave/Zigbee for right now.

  • @Nobe_Oddy
    @Nobe_Oddy Жыл бұрын

    Wendell is gonna suddenly disappear and we won't hear from him for 6 months, and it'll turn out that while making his video about using the Supermicro on the stock market, within 5 minutes of turning it on he managed to become the 3rd richest man on the planet and spent the last 6 months on HIS private private island LOL :D

  • @synt4x.93
    @synt4x.93 Жыл бұрын

    Did the title change? Or am I high?

  • @Level1Techs

    @Level1Techs

    Жыл бұрын

    Title changed. Views are low and we're hoping the title change will fix it ~Editor Autumn

  • @synt4x.93

    @synt4x.93

    Жыл бұрын

    @@Level1Techs Great video, as always.

  • @Level1Techs

    @Level1Techs

    Жыл бұрын

    Thanks!

  • @tanmaypanadi1414

    @tanmaypanadi1414

    Жыл бұрын

    Let the clicks and engagement rise up.

  • @tanmaypanadi1414

    @tanmaypanadi1414

    Жыл бұрын

    @@Level1Techs Is there any way to get notifications as soon as the video drops? Discord notifications work for me for some channels; is there something similar on the forums for us free-tier folks, other than YouTube?

  • @dgo4490
    @dgo4490 Жыл бұрын

    Come on, trading? Is that the best usage for this hardware?

  • @Rintse
    @Rintse Жыл бұрын

    This title will get clicked by no one who is not a serious enthusiast/nerd.

  • @WiihawkPL
    @WiihawkPL Жыл бұрын

    Now they should make an AI accelerator that doesn't cost a kidney.

  • @Jake9066

    @Jake9066

    Жыл бұрын

    Sorry, "AI accelerator" contains two $-add words, so $$$ instead of $

  • @johnferrell1962
    @johnferrell1962 Жыл бұрын

    Should I get this or the 4090?

  • @dawwdd
    @dawwdd Жыл бұрын

    Intel CPUs work excellently with PyTorch, and it should be easy to add the new GPUs considering oneAPI; AMD, not so much. Let's hope that changes in the near future and AMD's software gets better performance and some stability. I don't know anyone who uses AMD over Nvidia in machine/deep learning right now because of ROCm's extremely poor quality and the problem of consumer GPUs not working with ROCm at all, so you can't develop locally. But there are a few folks doing scientific computation, focused mostly on HPC, who use AMD for float64 calculations.

  • @RobHickswm

    @RobHickswm

    Жыл бұрын

    I use ROCm over CUDA sometimes. I've benchmarked a fair amount of TensorFlow code for my research, and it is neck and neck with last-gen hardware (Radeon VIIs vs. A100s/P100s). It is very easy to get it running, particularly if you use the ROCm Docker images for your tool of choice. And the TensorFlow/JAX code just runs with no modifications.

  • @dawwdd

    @dawwdd

    Жыл бұрын

    @@RobHickswm Cool, but TensorFlow isn't PyTorch. I tested a 3090 against Radeons at close price points and they are always a few times slower; maybe the extremely high-end datacenter cards are close enough, but I don't have any AMD card to test.

  • @RobHickswm

    @RobHickswm

    Жыл бұрын

    @@dawwdd I've only tested the Radeon VII (which uses HBM2 memory like the datacenter cards), and for the things I'm doing (not canned ML benchmarks) it is as fast as or faster than the Nvidias, with a few exceptions here and there depending on the op. You're right, not PyTorch, just JAX and TensorFlow.
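
A tiny, hypothetical illustration of what "just runs with no modifications" means in practice: nothing below is vendor-specific, so the same script runs on a CUDA build of TensorFlow or on the ROCm build (published as the tensorflow-rocm package) without changes:

    import tensorflow as tf

    # Lists whatever accelerator the installed build exposes (ROCm or CUDA)
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

    x = tf.random.normal([2048, 2048])
    y = tf.linalg.matmul(x, x)     # placed on the GPU automatically if one is visible
    print("checksum:", float(tf.reduce_sum(y)))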

  • @starfleetactual1909
    @starfleetactual1909 Жыл бұрын

    Greek

  • @evrythingis1
    @evrythingis1 Жыл бұрын

    Maybe Intel and Nvidia will learn that they shouldn't rely on being a monopoly for their success.

  • @HellsPerfectSpawn

    @HellsPerfectSpawn

    Жыл бұрын

    What are you blabbering about? Intel provides more open-source code to Linux than all the other PC players combined.

  • @evrythingis1

    @evrythingis1

    Жыл бұрын

    ​@@HellsPerfectSpawn Yeah, totally of their own accord, not because their monopoly was so severe that that literally had to after years of ILLEGALLY doing MSFT's bidding.

  • @HellsPerfectSpawn

    @HellsPerfectSpawn

    Жыл бұрын

    @@evrythingis1 ??? What mental gymnastics are you jumping through mate?

  • @evrythingis1

    @evrythingis1

    Жыл бұрын

    @@HellsPerfectSpawn Do you not know anything at all about Intel's history of Antitrust violations!?

  • @HellsPerfectSpawn

    @HellsPerfectSpawn

    Жыл бұрын

    @@evrythingis1 Again what kind of mental hoops are you jumping through. Are you trying to suggest that because Intel got sued in Europe it suddenly found a reason to go open source??

  • @codejockey216
    @codejockey216 Жыл бұрын

    Second, haha

  • @stephenreaves3205
    @stephenreaves3205 Жыл бұрын

    first?

  • @mostwanted002

    @mostwanted002

    Жыл бұрын

    yup

  • @benjaminmujakovic2664

    @benjaminmujakovic2664

    Жыл бұрын

    good boy

  • @rtkevans
    @rtkevans Жыл бұрын

    Dude wth is that framed picture on your desk??? Looks satanic…

  • @marcusaurelius6607
    @marcusaurelius6607 Жыл бұрын

    And now it's May 2023 and nobody cares about ML on AMD cards. Unless it's a drop-in replacement, nobody will migrate their massive ML tech stacks to, eh, what do you call it... Radeon?

  • @FLOODOFSINS
    @FLOODOFSINS Жыл бұрын

    It's a shame this guy doesn't have any kids. He has so much knowledge crammed inside his head.

  • @tanmaypanadi1414

    @tanmaypanadi1414

    Жыл бұрын

    The YouTube channel is his baby.

  • @nathanlowery1141

    @nathanlowery1141

    Жыл бұрын

    We are his spawn

  • @Onihikage

    @Onihikage

    Жыл бұрын

    He doesn't need children to leave a legacy. _We_ are his legacy.

  • @Blacklands

    @Blacklands

    Жыл бұрын

    Well, he has a forum and a YouTube channel...! He's teaching many more people than just the kids he doesn't have!

  • @FLOODOFSINS

    @FLOODOFSINS

    Жыл бұрын

    @@Blacklands A forum is way better than having your own child and seeing your legacy live on, along with everything you can pass on to him besides tech stuff. You're so wise; maybe he can put that on his tombstone: "I have a forum".
