AMD Reveals MI300X AI Chip (Watch It Here)
Science & Technology
At AMD's most recent product reveal event, CEO Lisa Su unveiled the company's new generative AI chip for the data center: the MI300X.
Comments: 303
Lisa Su shows how much better it is to have a CEO that has an engineering background opposed to a some turkey with a MBA.
@futureshocked
A year ago
true
@Wooster77
A year ago
an mba, not a mba
@ingodwetrust304
A year ago
Bingo, that's why I like Elon's, Jensen's, and Lisa's companies much, much better than others...
@HiddenAgendas
A year ago
facts
@TheLazyVideo
A year ago
Not just an engineer, she’s a PhD engineer
I always supported AMD as the first desktop I built was running on an Athlon and an AMD graphics card. Keep going Lisa you are killing it.
@Shannon-ul5re
A year ago
Me too, all the way back to the K6 😂
@awesomegmg956
A year ago
Me too. Although my first CPU was Cyrix 486, my first DIY upgrade was AMD 5X86 133. Also my first CPU with fan.
@Pixel_FX
A year ago
@@istealpopularnamesforlikes3340 King? yeah king of power consumption lmao. almost 300W at stock. slight OC will make it guzzle 400W+
@BenedictPenguin
A year ago
@@Pixel_FX naaah intel is way more efficient on real world daily drive use case, unless u are forcing that chip to run at its turbo for most of ur workload then yes AMD is better there
@lunascomments3024
A year ago
@@Pixel_FXhaha yes. i just can't cool down this thing called Intel CPU.
I love watching the competition in the AI hardware and software space heat up the way it has. We are truly living during a special moment in the industry and in our species’ history.
@Learna_Hydralis
A year ago
Let's hope it's not our last moment.
@DoctorJack16
A year ago
@@Learna_Hydralis exactly!
@petrushka2
A year ago
I wonder where Intel is in all these domains.
@parkout95
A year ago
There have been so many movies that show otherwise… 😢
@duladrop4252
A year ago
His point is not about the AI itself, but about competition steering technology toward greater advancement. Monopoly can stagnate technology; we have AMD to thank for continuing to push it forward.
Subtitle Error M1300X(X) ☞ Instinct MI300X GPU
@PretentiousStuff
A year ago
ok
@miyagiryota9238
A year ago
Sure nvidia, u wish
@SpaceCaseZ06
A year ago
Ridiculous error to make, and to not fix it four hours later! CNET, you're acting like some fly-by-night operation.
@lightward9487
A year ago
AMD error hahahahahahahaha
@mosso3715
A year ago
Probably an AI error
The difference between the AMD chip and the Nvidia chip is smaller than the difference between their CEOs. Practically, the AMD CEO looks like the female version of the Nvidia CEO.
@digranes1976
A year ago
They are related. Found this online “Lisa Su's own grandfather is actually Jen-Hsun Huang's uncle”
@teekanne15
A year ago
@@digranes1976 if thats true, this is somehow hilarious.
@Eleganttf2
A year ago
@@teekanne15 It is in fact confirmed and true xD, The Verge fact-checked it in a 2020 article haha
@Pixelsplasher
A year ago
Let's prompt AI to generate them dancing together.
@yasunakaikumi
A year ago
At least no long pauses with Mama Su
This is great, but the message is missing the software part: how easy it is to integrate with frameworks, etc. Nvidia spent a lot of effort on CUDA, and AMD needs to show something similar and make it clear to developers that the barrier to entry is low. That's the key to adoption.
@DanOneOne
A year ago
I am pretty sure they will write all the python libraries for their chips. With so much money involved there, it will not be an issue.
@blakjedi
A year ago
They write software models for their clients like MS and Sony
@robjamo1441
A year ago
ROCm, HIP, it's all there
@mdzaid5925
10 months ago
They are working on ROCm but it will take some time to be picked up.
@robjamo1441
10 months ago
@@mdzaid5925 Redshift, Blender Cycles, and Boris FX Sapphire tools are nearly done, for a start
AMD is doing well. I trained some DL and RL models during my studies; memory is always a big issue. I believe APUs would work well for training RL models.
Them showing inference with falcon-40b in bfloat16 gets me a little worried. Int8 inference is quickly gaining ground and I would have really liked seeing the actual performance compared using this. Moreover, if you bought this thing, you would probably be more interested in actual finetuning or training models yourself. Yet no information on training performance was given.
@kusumayogi7956
A year ago
What's the difference between using bfloat16 and int4/int8 in deep learning?
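The dtype question above largely comes down to bytes per parameter. A minimal back-of-envelope sketch (illustrative numbers only; the Falcon-40B size is taken from this thread, and activations, KV cache, and runtime overhead are ignored):

```python
# Rough memory footprint of model weights at different precisions.
BITS = {"fp32": 32, "bf16": 16, "int8": 8, "int4": 4}

def weights_gb(n_params_billions: float, dtype: str) -> float:
    """GB needed just to store the weights at the given precision."""
    return n_params_billions * 1e9 * BITS[dtype] / 8 / 1e9

for dtype in BITS:
    print(f"Falcon-40B weights in {dtype}: {weights_gb(40, dtype):.0f} GB")
# bf16 halves fp32's footprint while keeping fp32's exponent range;
# int8/int4 quantization shrinks it further at some accuracy cost.
```

So a 40B-parameter model is roughly 80 GB of weights in bf16 but only about 20 GB in int4, which is why quantized inference numbers matter for this comparison.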
😂😂😂It's funny how Intel is so behind. AMD and Nvidia are already into AI stuff, while Intel has just started playing with GPUs.
@tluangasailo3663
A year ago
Intel is instead focusing on manufacturing/fabrication, which is equally important
@DarkWizardGG
A year ago
FYI, Intel just recently launched their quantum CPU, if you didn't know. Don't worry, they won't be far behind; sooner or later they'll build their very own AI. Just wait and see, bruh. Lol😁😉😄😅🤗🤗🤗
Someone should write drivers for it to support gaming and make a GPU card with it that's capable of gaming (not only AI, but that too)
No matter who wins between them and Nvidia, the real winners laughing all the way to the bank will be TSMC and ASML
How long until we find out all these AMD keynotes have been AI-generated?
From the lack of numeric values it can be assumed that everything except memory is worse than on the H100. But it's still nice that someone is actually trying to compete with Nvidia.
@samyy974
A year ago
Maybe they are better positioned on price/performance ratio
@mikeb3172
A year ago
CPU & GPU are more closely connected as well
@Dayta
A year ago
Haha omg, your comment just saved me... While listening to just the audio of this clip I totally forgot it was AMD. Until I read your comment I was like, what? Compete with Nvidia? I thought this WAS Nvidia. Oh... it was not... haha omg :D
@R4K1B-
A year ago
@@samyy974 People buying these chips don't care about price. They care about speed and efficiency (efficiency not to save money, but because of power limits in a facility)
@boltfixer24
A year ago
Yeah, 7900 XTX vs 4090 shows that already
I suppose we should be thankful for any kind of competition at this point; we need more market pressure to keep Nvidia from drifting toward more obscene pricing. But it looks like Jensen isn't concerned by AMD, especially in the realm of AI. I'll always bite the bullet and buy Nvidia, mainly because I've never had an AMD card that didn't have issues, but especially because CUDA is king.
@Humanaut.
A year ago
I never had an AMD card that had issues either. I've had both Nvidia and AMD and never had problems with either of them. You're just buying into the marketing, and that's fine as long as you know you're doing it. The best thing would be to be informed enough to know the details of your purchasing decision, though. The only reasons to buy Nvidia over AMD are top-of-the-line cards for ray tracing (4090, maybe 4080), specific productivity tools, or if you really place a lot of emphasis on DLSS 3 / frame generation. For pure rasterized gaming performance in the low to mid tier, buying Nvidia is nothing more than a dumb decision.
@xjohnny1000
A year ago
My experience too. I've bought and managed dozens if not hundreds of systems and I can always count on AMD needing a higher level of support. Their CPUs like Threadripper are amazing, but the chipsets are not reliable, and the GPUs even less so. My brother thought I was biased, so he swapped over to an AMD GPU and had nothing but problems. He switched back to GeForce shortly afterwards and it's been rock steady ever since.
@Humanaut.
A year ago
@@xjohnny1000 which gen was he on? Never had any problems with my rx 480 whatsoever and based on tech reviewers RDNA3 is their most solid and reliable gen yet.
@xjohnny1000
A year ago
@@Humanaut. I can't speak for the latest cards but I think it's really a software/driver problem rather than hardware. AMD also has a long history of poor thermal management which probably doesn't help either.
@lunascomments3024
A year ago
@@xjohnny1000 There's this DDU software that you must run after swapping to an AMD GPU, otherwise the GeForce and Adrenalin drivers will battle each other. It seems like Nvidia has good drivers, but actually it's the users that are just dumb.
Fix your title, CNET! I really thought AMD was releasing a new chip called the M"1300"X.
@andrewyoung9516
A year ago
😂
This is the MI300X; the MI300A is the one with a CPU on it for AI inferencing, not the MI300X
Consumer and Prosumer version? Why not? 😢 Or when?
Lisa Su seems to be getting younger and younger.
@verlax8956
A year ago
it's that next gen amd liquid coolant she be drinking
@DarkWizardGG
A year ago
In the near future she'll be immortal with the help of her own AI. lol😁😄😉🤖🤖🤖🤖
@pneumonoultramicroscopicsi4065
3 months ago
She uploaded herself entirely onto an MI300X chip and she has 60 GB left
CNET title in error: it's MI300X, not M1300X. An I, not a 1.
@HarryXiao88
A year ago
Yeah, and it's also very clear why they made such a mistake...
Yes, bringing AI LLMs to local use, just in case ChatGPT shuts down.
Does anyone have experience using AMD hardware for deep learning?
Can it run Gollum though?
how much ? under $1000?
This is competition for companies, not for regular gamers. Although an AI server can be connected to online AI bots and such, so it could power future online gaming like MMOs.
@fuckkatuas2837
A year ago
Might be added as an add-on processor, like PhysX, to drive AI in games and accelerate VR use. A subdued population distracted by games is the best population. WEF/BlackRock may be behind all this?
Correct the typo in the title first, please.
Definitely an easier keynote to sit through than Jensen's ramblings about which computer is the heaviest. I don't know how these new chips will stack up against Hopper, but even if they're not as good, companies are going to rush to buy them up because of the AI boom.
02:47 ChatGPT with Nvidia is still faster when generating language
@littlelostchild6767
A year ago
Quoting Piyush Arora: "They ran the Falcon 40B model, which is an open-source competitor to ChatGPT. For perspective, it's heavy and doesn't run on my 4090 GPU (24GB VRAM with 80GB DDR5 RAM). The system goes out of memory after a few commands."
@kusumayogi7956
A year ago
@@littlelostchild6767 maybe because ChatGPT is better
@kusumayogi7956
A year ago
@@littlelostchild6767 ChatGPT uses 175 billion parameters and Falcon only 40 billion
Hoping the RX 7995XTX 3D is like this: 88B transistors, with 36GB VRAM or 24GB HBM3
Did someone just run image-to-image on Nvidia's presentation?
But can it work with PyTorch or TF?
@tehehe5929
A year ago
AMD's ROCm stack works with these quite well.
Just in time, when smaller language models are starting to run circles around larger ones. Per the demo, it's training that needs all the power.
So how is this product going to work outside the data center?
Damn these rocks that they tricked into thinking are getting pretty good.
AMD needs something as good as, or better than, what CUDA is today.
@duladrop4252
A year ago
CUDA is already on its way out. PyTorch targets ROCm's LLVM stack directly, and other AI frameworks are doing the same, meaning AMD's ROCm is supported without CUDA remapping. Frameworks and developers are already moving to drop the CUDA dependency...
@FrozzenFreak
A year ago
Modular AI is solving the software issues around AI, so Nvidia will likely lose its moat there
@cadetsparklez3300
A year ago
CUDA is irrelevant for AI; it's just a BS lock-in Nvidia used to control the design industry
@adaml.5355
A year ago
AI will not have anything to do with GPU compute or CUDA within the next several years.
@luizconrado
A year ago
However, for now, having an NVIDIA GPU makes learning and working with AI on a small scale much easier, correct? I believe AMD would get a lot of traction if they made it extremely easy to set up a computer using Linux or Windows to learn and work with small/medium AI tasks. Do you disagree?
Wow, very exciting times ahead
Miners seeing 5.2TB/s of bandwidth like 🤯
The more you buy, the more you save.
@alteredcarbon3853
A year ago
You don't have to understand the technology, you don't have to understand the strategy!
"This is going to work out great for us, or terribly, because we are all in." - Jesen Huang 2017
How do quantum computers compete with these AI chips?
how much will one of these cost?
@RealShinpin
A year ago
@n n :( Maybe I'll sell my kidney for one.
@RealShinpin
A year ago
@n n I want to be able to run high level LLMs. Currently I can only run low level stuff, even on mid to mid-high tier hardware. I wouldn't mind setting it up as a server if I had to.
@RealShinpin
A year ago
@n n Do you know of any old server grade hardware that might be useful to consumers to get now? Like maybe the stuff they're replacing? I heard something about the p40 accelerator being useful for llms.
Would be really nice to have 1.5TB dedicated to training models
Wow the title of the video is wrong lmao
GPT3 has 175B parameters
We are excited to show you: SKYNET Mi300X
@SleepyRulu
A year ago
Funny joke
@novemberalpha6023
A year ago
@@SleepyRulu everything is funny until the T101 rolls out of the factory.
@SleepyRulu
A year ago
@@novemberalpha6023 sure
NVLink-C2C is the future; it will address AMD chiplets' latency and power issues
Correct your title and description to MI300X not M1300X lol.
CNET... it's MI300X, not M1300X 😅
How does it fare against Nvidia's GH200?
Yes, but does it run Tensorflow 2 ?
@DarkWizardGG
A year ago
I guess it could run it, though. Why not just try it?! Lol😁😄😅👍👍👍
@NisseOhlsen
A year ago
@@DarkWizardGG because I'd have to buy the card to find out ?
@DarkWizardGG
A year ago
@@NisseOhlsen What card are you talking about then, a GPU?! And you said "TensorFlow 2", is there a version 2 already?! 😁
The Nvidia CEO and AMD CEO look like they are a couple.
@xelerator2398
A year ago
lol seriously
@JaysenTC
A year ago
They need matching leather jackets
@DanishBashir-sz6vs
A year ago
I heard somewhere they are actually relatives
@Freshbott2
A year ago
They’re cousins once removed!
@jbob34345
A year ago
Imagine if they merged and had a child, they'd be the ultimate CEO
I believe the title should be "MI"300X, with the letter "I" instead of a one.
The title of the video is incorrect. It's MI300X
All LLMs are capable of writing poems. Try asking it to solve some differential equations...
everyone making GPUs for AI is screwed as more photonic systems come online
It's M"I"300X, not M"1"300X
Why do I keep thinking that she must be related to Jensen from Nvidia? Honestly, I ship it
@DanishBashir-sz6vs
A year ago
Because they are actually relatives
@aidenkim6629
A year ago
@@DanishBashir-sz6vs what that’s crazy
Sad, because the launch is slow and production was not ready...
But does the generative AI know how to play Crysis?
@marshallmcluhan33
A year ago
You need the uncensored models for that
@novemberalpha6023
A year ago
This Generative AI *IS* the Crisis.
@marshallmcluhan33
A year ago
@@novemberalpha6023 That's only the half of it.
But can it run Minesweeper?
@DarkWizardGG
A year ago
I guess not. For now, only that bot from Nvidia could do that.😁😉😄🤖🤖🤖🤖
Gamers take notice. Gaming is no longer the game.
@LifeWithRilla
A year ago
Gaming doesn’t matter when compared to business. It’s a tiny industry
@leonxus2701
A year ago
@@LifeWithRilla The gaming industry put them where they are now.
@LifeWithRilla
A year ago
@@leonxus2701 doesn’t matter.
@novemberalpha6023
A year ago
The next chip will be able to alter a game's story on the fly. You play CoD MW, and you may end up killing Captain Price instead of Makarov.
@DanishBashir-sz6vs
A year ago
@@novemberalpha6023 this cracked me up. Thanks for making my day...was quite depressed
Awesome
!price MI300
A time will come when everything, including mundane human tasks, can be done by AI. How will human civilization function? What will governments do? What will people do for jobs or to earn a living? You need money for everything.
Jeez... the chip name is MI300X, not M1300X, and I waited the entire video for them to introduce a more advanced version of the 300X lol. You caught me, CNET bot!
Does A.I. say AMD is a good investment at this price?
:( I want AMD to do well, but this is literally the same speed as, if not significantly slower than, 2x 3090s, not even 4090s.
But Nvidia's strength is the GPUs needed to train LLMs.
good
Chances are ChatGPT cannot run on any single server
@marshallmcluhan33
A year ago
GPT4All with some uncensored models is OK, no GPU needed.
Mama Su announced that AMD won't try to shake Jensen's GPU dominance head-on; instead, it has taken another path.
@RuohongZhao
A year ago
Mama Su is stacking HBM3 and expanding bandwidth like crazy; this is simply a divine tool for model training. Still, it would be perfect if the chip architecture were optimized for large language models.
@cadetsparklez3300
A year ago
Does Chinese not have a word for green?
@GreenCappuccino
A year ago
@@cadetsparklez3300 Pretty sure 老黄 (lao huang) is just a nickname for Jensen Huang. It translates to "old yellow" due to Google Translate shenanigans
@Darkyber
A year ago
@@cadetsparklez3300 🤣🤣 I think your interpretation isn't wrong either.
Thankfully, AMD preemptively did what Nvidia, drunk on money, wouldn't do. 192GB/card!
Lisa is COOL
How do we know it isn’t her secretary behind stage typing the poem?
@davidhernandez-fq6qe
A year ago
It's probably a pre-recorded video of the poem being written, sped up to make it look like it wasn't a human.
@hendrx
A year ago
facts, there's no way they wanted to deal with an outrage or wrong response
ChatGPT and language models are the least impressive part of AI. AI is not overhyped; the OpenAI company IS overhyped. They are a flash in the pan.
Can it run crysis tho
I dont like how she has the same haircut as Jensen. It throws me off XD
You want a better-engineered product? Put an engineer at the helm of it.
Intel = CPU, Nvidia = GPU, BUT AMD = CPU + GPU lol
Wait, so does the MI300X AI chip have ChatGPT installed on it? Because they only showed it writing a poem 😅
@jwhite1337
A year ago
I think Microsoft has exclusive rights to the ChatGPT code base and data. If no one else can run ChatGPT but Microsoft, a public demo using ChatGPT would be less useful for all other customers; I would think they ran a demo privately for Microsoft at some point. Hugging Face is an AI community that promotes open-source contributions, so all customers can run that LLM for their business and run their own benchmarks to confirm. I believe ChatGPT's newest versions perform the best overall, but open source has closed the gap significantly. The amount of progress the open-source community has made lately is nothing short of remarkable; it will be interesting to see where we will be in 6 months.
@thomasireland1770
A year ago
no
@Piyush.A
A year ago
They ran the Falcon 40B model, which is an open-source competitor to ChatGPT. For perspective, it's heavy and doesn't run on my 4090 GPU (24GB VRAM with 80GB DDR5 RAM). The system goes out of memory after a few commands.
@DarkWizardGG
A year ago
@@Piyush.A I guess you need a 32GB VRAM GPU to run that Falcon 40B. Yes, it's pretty damn heavy for an LLM. Me, I'm using a smaller LLM, WizardVicuna 13B.😁😉😄🤖🤖🤖
@kazedcat
A year ago
@@DarkWizardGG No, a 40B-parameter AI cannot run on 32GB of VRAM unless you're running the quantized 4-bit version. For the full 16-bit version you need 120GB of VRAM: 80GB for the weights alone and another 40GB of overhead to run the model itself.
Not only is this an incredible piece of technology, it could quite literally be one of the biggest threats to millions of people's lives and jobs. Maybe we should stop and think about one simple thing: just because we can do a thing doesn't necessarily mean we should. I can't believe how many people have failed to realize the dangers of A.I. and the threats it brings with each advancement in technology. It's truly terrifying to think what this world is going to be like in the next few decades. Yes, you should be afraid, very afraid, of the wrong people using this technology to control and manipulate all of humanity, or even destroy civilization as we have known it. I hope enough people wake up and realize the truth before it's too late to do anything about it. This is not a good thing, and I can hardly believe anyone doesn't recognize it...?...💯
She sounds very articulate
@DarkWizardGG
A year ago
She's always been articulate, bro. Lol😁😄👍👍👍
She looks like the Nvidia CEO
go go RED team
Hey, is it just me or is AMD killing it again? I thought they wouldn't be able to keep up with team green, but hey, this sounds pretty reasonable at first glance. Another good call for Lisa, perhaps?
Nvidia should be concerned now that MI300X Instinct performance is out. While two Hopper H100s via NVLink only get 3K+ TFLOPS, the MI300X can do 5,218 TFLOPS on a single GPU; even two combined H100s can't beat a single MI300X. Watch out, Nvidia, AMD is coming...
Salute to Lady Lisa Su: truly awesome engineering and a high level of leadership
Just add RAM slots to GPU instead of onboard RAM, so that users can add whatever amount of RAM they want.
@user-bj4fe4zj7i
A year ago
@typingcat
A year ago
@@N_N23296 Well, about space, now that most graphics cards take 2 slots and some even 3 slots, they could make the cards double-layered to have the space for RAM slots. And about the speed, I don't know how much slower slotted RAM is compared to onboard RAM, but maybe, it could be a two-tier system. That is, a GPU has on-board RAM and slots for RAM. The GPU would use the onboard RAM first, and only if there is not enough onboard RAM, it then tries to use slotted RAM. I know it would be slower, but currently, if you run out of VRAM, the process just crashes or doesn't get executed. I think slower execution is better than not being able to run at all.
@lunascomments3024
A year ago
Lol wut. 50,000 MB/s vs 5,300,000 MB/s for HBM3. There's a crystal clear choice here.
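The bandwidth gap in this thread is easy to quantify. The DDR5 figure is a rough assumption for a typical dual-channel desktop setup; the HBM3 figure follows the thread's 5.3 TB/s number:

```python
# Comparing slotted DIMM bandwidth to on-package HBM3 (illustrative only).
ddr5_gbps = 50     # typical dual-channel DDR5, GB/s (assumption)
hbm3_gbps = 5300   # MI300X aggregate HBM3, GB/s (figure cited in the thread)

ratio = hbm3_gbps / ddr5_gbps
print(f"HBM3 delivers ~{ratio:.0f}x the bandwidth of slotted DDR5")
# A GPU fed at DIMM speeds would starve its compute units, which is
# why RAM slots on a GPU are a non-starter for this class of hardware.
```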
lol for those of you who are paying attention
She’s Jensen with makeup
@Jakwine
A year ago
@@N_N23296 I had completely forgotten about it
I trust AMD; they should put this in a laptop or a handheld Windows or Linux PC.
@Eleganttf2
A year ago
???
@fthishandleshit
A year ago
How stupid. Laptop connection.
@vmafarah9473
A year ago
It would be better to fit it in your smartwatch so your hand turns into AI.
@novemberalpha6023
A year ago
Currently this chip will be used in machines like mainframes or banks of servers that can handle mountains of data; laptops are not used for that purpose. If someone put it in a laptop, they would have to cram a lot of current technology into that laptop to keep up with the chip. That would be nearly impossible, as the solution for the heat alone would take considerable space, which is hard to find in something as small as a laptop. Even if it happened, it would not only alter the purpose of the laptop but also pump the price way beyond the purchasing power of potential buyers.
@univera1111
A year ago
It will best run a personal LLM for personal research. Listen to AMD's CEO: she's bringing LLMs to individuals.
It's good, but it's not nearly as good as Nvidia's Hopper
The AI world is getting better and better.
@cybercrazy1059
A year ago
It will mess you up completely
@antdx316
A year ago
@@cybercrazy1059 for the good
Can I use that chip as brain for my ai wife?
MI300X, not M1300X... It was literally the first words out of her mouth...
❤
A new AI chip from AMD is a win-win for everyone; I hope Western democratic countries keep the lead in these things. For image recognition, astronomy, autonomous systems, cybersecurity, warfare, simulation mechanics, medical fields, weather, chemistry, materials science, satellite constellations, coding, big-data analysis, photo and video editing, games, speech and voice-to-text recognition, VR/AR education, robotic systems, logistics, etc.
Dr Dre writes better poetry
always 2nd best as usual
But can it run CUDA? 😢
@hondajacka2
A year ago
Don't use AMD for training. It's not compatible most of the time and you'll waste endless hours trying to debug.
@marshallmcluhan33
A year ago
@@clehaxze Isn't AMD notoriously bad for this? Their software support is so bad that some claim the defects are in the hardware, since the bugs persist. I hope AMD takes the software stack seriously and improves support.
@JackRoyL
A year ago
cuda is overhyped.
@mikelay5360
A year ago
@@JackRoyL With very good reason. If you were in the industry you'd understand why
@clehaxze
A year ago
@@marshallmcluhan33 Not on Linux. AMD on Linux is better than Nvidia because both the kernel-space and userland drivers are open source. AMD is also much better at adopting open standards like GBM and Wayland, while Nvidia spent 10 years pushing their own EGLStream, which the community simply hates. The compute stack is still behind Nvidia's, but it works nevertheless, and there's sane PyTorch out-of-the-box support.
👍