AMD 7900 GRE for AI? ASRock AI Quickset, ROCm for AI/Machine Learning, but on Gaming GPUs, How-To

Science & Technology

forum.level1techs.com/t/ubunt...
www.asrock.com/microsite/aiqu...
rocm.blogs.amd.com/
**********************************
Check us out online at the following places!
linktr.ee/level1techs
IMPORTANT: Any email not from a “level1techs.com” address should be ignored and immediately reported to Queries@level1techs.com.
-------------------------------------------------------------------------------------------------------------
Intro and Outro Music By: Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
creativecommons.org/licenses/b...

Comments: 130

  • @zivzulander
    @zivzulander · 3 months ago

    If you are stuck on Windows with your AMD GPU, AMD also has a guide to installing LM Studio (the version with ROCm support). It even supports RAG, so you can "chat" with large documents or books. Quite useful for summarization or querying. The post is titled "How to enable RAG (Retrieval Augmented Generation) on an AMD Ryzen™ AI PC or Radeon Graphics Card". Works well even with only 8GB of VRAM on a 7600 (non-XT).

  • @Seandotcom
    @Seandotcom · 3 months ago

    I just bought a used 3090 for AI because of CUDA supremacy. I would love for AMD to become more competitive in this regard

  • @nexusyang4832

    @nexusyang4832 · 3 months ago

    Also the VRAM, the 24GB of it.

  • @Squilliam-Fancyson

    @Squilliam-Fancyson · 3 months ago

    What do you use it for exactly, if I'm allowed to ask?

  • @truthdoesnotexist

    @truthdoesnotexist · 2 months ago

    Another problem is that a lot of AI tools just don't support AMD or Intel at all

  • @jamesnylon1567

    @jamesnylon1567 · 1 month ago

    @@Squilliam-Fancyson AI imagery, model creation, AI-assisted video work, chat AI, and many others.

  • @ICANHAZKILLZ
    @ICANHAZKILLZ · 3 months ago

    Honestly the 7900 XTX is steamrolling in image generation thanks to Microsoft's OLive (ONNX Live). If you're a Stable Diffusion user, run SD.Next with ONNX and OLive enabled. Benchmarks for the 7900 XTX are above 50 it/s on Linux with AMDGPU-PRO drivers, which is insane; most people sit at 6 it/s lol. Unfortunately we're relying on small teams and random nerds to get implementations working well for the average "git clone" user. AMD purchasing Nod.ai should get SHARK more 'professional' eventually and potentially replace A1111 and SD.Next.

  • @JMartinni
    @JMartinni · 3 months ago

    AMD recently posted a blog post about getting LM Studio to run on RDNA3 - works very well on the 7900XT.

  • @LA-MJ
    @LA-MJ · 3 months ago

    This may just be the kick in the behind I needed to take my rx6800 for a spin. Thanks, Wendell!

  • @LRK-GT
    @LRK-GT · 3 months ago

    Been waiting for this video, for over 2 years. Thank You! (Ended up starting an MI25 buying spree, ending only with the purchase of a 7900 GRE)

  • @christianlgolden
    @christianlgolden · 3 months ago

    I keep telling people to use Stability Matrix if you want an easy install of Stable Diffusion. It's more or less a one-click install of ComfyUI, Automatic1111, and all the other major ones. It works on Linux and Windows, Nvidia and AMD, etc. It also pulls from Hugging Face and Civitai. Keeps all of your stuff organized.

  • @SuperMari026
    @SuperMari026 · 3 months ago

    I'm so curious what AMD will do with RDNA 4. Saw some stuff from a colleague on a 7900XT and I was impressed tbh.

  • @manifestoN
    @manifestoN · 3 months ago

    been waiting for this, thanks wendell

  • @maltalent
    @maltalent · 3 months ago

    Incredible monologue!

  • @BandanazX
    @BandanazX · 3 months ago

    I've run Ollama Dolphin Mixtral locally on a RX6600 (non XT). It's slow, but it works. Nice being able to give it prompts that would be refused by censored models. Think of the kittens.
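Driving a local model like this from a script is straightforward through Ollama's REST API. Here is a minimal Python sketch (assuming the Ollama server is running on its default port 11434 and that a model such as dolphin-mixtral has already been pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for /api/generate; stream=False returns a single
    # JSON object instead of a stream of token chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs `ollama serve` running and the model pulled):
#   print(generate("dolphin-mixtral", "Why is the sky blue?"))
```

Ollama exposes the same API whether it is running on an Nvidia, AMD, or CPU backend, so the client side does not change between vendors.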

  • @darrengreen7906
    @darrengreen7906 · 3 months ago

    Just bought a GRE, arriving tomorrow. :)

  • @darrengreen7906

    @darrengreen7906 · 3 months ago

    And who doesn't like cat pictures!

  • @diogoalbuquerque
    @diogoalbuquerque · 3 months ago

    wendell: "don't do an AI girlfriend" rest of us: "hi kimiko"

  • @zivzulander

    @zivzulander · 3 months ago

    _Plankton hides Karen_

  • @manitoba-op4jx

    @manitoba-op4jx · 3 months ago

    resist the urge to talk to the risque AI by going outside

  • @hammadnadeemx
    @hammadnadeemx · 3 months ago

    Scary but exciting at the same time!

  • @zeshwonsos
    @zeshwonsos · 3 months ago

    Siiiickkkk, thanks Wendell

  • @magfal
    @magfal · 3 months ago

    For code generation Mixtral 8x7B Instruct at 8 bit quantization kicks the ass of Claude and GPT-4 in all my tests.

  • @samuelfrimp5152
    @samuelfrimp5152 · 3 months ago

    I've been toying with Ollama on my TrueNAS box.

  • @MrTubeuser12
    @MrTubeuser12 · 3 months ago

    I have an old Instinct MI50 16GB; I guess it should do the job for this.

  • @postcert
    @postcert · 3 months ago

    The value is decent for 16GB at a solid price around $550, but this is playing catch-up with a slightly cheaper card in a CUDA-first world. I'd really like to switch from a 3070 to AMD, but I have yet to see real investment in the ROCm platform from them. At the end of the day, saving a few hundred dollars isn't worth it when the software support lags behind. And if you really wanted to save money, you'd get an $800 3090 with 24GB of VRAM.

  • @PrivateUsername
    @PrivateUsername · 3 months ago

    You should probably go into how important the various floating point standards are for current models, and why it will take a hardware re-arch/re-spin and some new FP standards in order to make things a LOT faster.

  • @00101001000000110011
    @00101001000000110011 · 3 months ago

    ty wendell

  • @shawnsmashnuk5316
    @shawnsmashnuk5316 · 3 months ago

    GPT4All is also a good choice for AMD users. Super simple to set up, but limited language model options. The interesting thing about GPT4All is that they implemented a Vulkan backend.

  • @kiunthmo
    @kiunthmo · 3 months ago

    If setting up AMD for ML development terrifies you, you can use Docker and VS Code's Dev Containers extension. With plain Docker you usually just run the container and run your scripts, but with Dev Containers you can actually debug your Python code within the container. It (mostly*) will feel exactly the same as using a virtual environment, but now you don't have to worry about any of the hard parts of the setup. *This does get a bit funny when you want to use multiple workspaces in VS Code, like if you're working on multiple projects - but this is super niche.
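As a concrete sketch of that setup, a `.devcontainer/devcontainer.json` along these lines is a reasonable starting point (the image tag and extension list are illustrative assumptions, not from the comment; the `--device` and group flags are what ROCm containers generally need in order to see the GPU):

```json
{
    "name": "rocm-ml-dev",
    // Any image with ROCm and your framework preinstalled works here.
    "image": "rocm/pytorch:latest",
    "runArgs": [
        "--device=/dev/kfd",
        "--device=/dev/dri",
        "--group-add=video",
        "--security-opt=seccomp=unconfined"
    ],
    "customizations": {
        "vscode": {
            "extensions": ["ms-python.python"]
        }
    }
}
```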

  • @Crunkmastaflexx
    @Crunkmastaflexx · 2 months ago

    nice, thanks man

  • @GhostPirateChuck
    @GhostPirateChuck · 3 months ago

    Fooocus is a good Stable Diffusion option for beginners

  • @hectorvivis3651
    @hectorvivis3651 · 3 months ago

    I bought a 7900 GRE last week and was quite underwhelmed by the performance difference from my Vega 64 on my quick-and-dirty Stable Diffusion installation on Windows 10. I knew I was definitely going to check the Level1Techs forum for help, so this video is amazingly relevant for me haha. 2 quick questions tho: - Does the ASRock AI QuickSet work with any AMD video card, or only with ASRock ones? - Would WSL be adequate in this scenario? I can't really let go of Windows as my GPU is primarily here for gaming. Thank you for your awesome content anyway!

  • @genki831

    @genki831 · 2 months ago

    Yeah, I'm curious about this. Does it have to be an ASRock card? I don't see why it should be; these cards all have the same silicon.

  • @hectorvivis3651

    @hectorvivis3651 · 2 months ago

    @@genki831 Welp, just tried, and it seems it's only "compatible" with ASRock cards. Seems artificial to me too.

  • @weltsiebenhundert
    @weltsiebenhundert · 3 months ago

    3:09 yep i need that GPU

  • @applemirer3937
    @applemirer3937 · 2 months ago

    The 7900 GRE is fast and has good support, but there are cheaper 16GB cards out there. I used to run Stable Diffusion overnight on CPU to get a batch of images, so I know speed can be important, but for well-supported GPUs, VRAM per dollar is what you're looking for. I got the 7900 XTX with 24GB of VRAM. I'd consider 16GB entry level.

  • @mrblurleighton

    @mrblurleighton · 1 month ago

    Hi. How's this working out for you so far? I'm considering the 7900 XTX or 4070 Ti Super.

  • @applemirer3937

    @applemirer3937 · 1 month ago

    @@mrblurleighton great actually. I recommend the XTX.

  • @mrblurleighton

    @mrblurleighton · 1 month ago

    @@applemirer3937 Thank you. That's a huge relief. Nvidia doesn't seem to want to give VRAM to the masses, and I'm scared that 16gb will cause problems when this is a card I'll be depending on to do my AI inference for work.

  • @b-ranthatway8066

    @b-ranthatway8066 · 26 days ago

    What about the 7900XT? Price nowadays seems good, but compared to the price of NVidia equivalent... 🤔

  • @BAD_CONSUMER
    @BAD_CONSUMER · 3 months ago

    APUs on ROCm, please. Without fancy hacks.

  • @franzpleurmann2585

    @franzpleurmann2585 · 3 months ago

    You can already run Open WebUI (Ollama) on a 680M with Docker Compose.
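A Docker Compose sketch of that pairing might look like the following (image tags and the environment override are assumptions on my part; officially unsupported RDNA2 iGPUs like the 680M typically need the `HSA_OVERRIDE_GFX_VERSION` workaround, which is admittedly the kind of "fancy hack" the parent comment wants to avoid):

```yaml
services:
  ollama:
    image: ollama/ollama:rocm            # ROCm build of the Ollama server
    devices:
      - /dev/kfd:/dev/kfd                # ROCm compute interface
      - /dev/dri:/dev/dri                # GPU render nodes
    environment:
      # RDNA2 iGPUs (e.g. 680M) are not officially supported;
      # this override makes ROCm treat them like a gfx1030 part.
      - HSA_OVERRIDE_GFX_VERSION=10.3.0
    volumes:
      - ollama:/root/.ollama
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                      # UI at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama:
```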

  • @toadbroz30
    @toadbroz30 · 3 months ago

    "Any AMD GPU" Looks over at my r9 390.

  • @TheBackyardChemist

    @TheBackyardChemist · 3 months ago

    Technically if you find a setup that can use OpenCL only, it *should* work

  • @manitoba-op4jx

    @manitoba-op4jx · 3 months ago

    1060-3gb??? lmao

  • @BrunodeSouzaLino
    @BrunodeSouzaLino · 3 months ago

    One of the weirdest things about the 7900 GRE for me is that the Sapphire version is longer than the 7900 XTX despite having lower specs.

  • @genki831
    @genki831 · 2 months ago

    Can you clarify, do you need to have an ASRock 7900 specifically for the ASRock AI QuickSet or would any brand of 7900 do?

  • @Techonsapevole
    @Techonsapevole · 2 months ago

    The Snapdragon X Elite also seems good for AI on Linux

  • @Kurukx
    @Kurukx · 3 months ago

    AMD only just sent me the AI ad by email and you have a video up already :P

  • @Alex_whatever
    @Alex_whatever · 3 months ago

    Does this mean you are soon to release a level1Linux video showing how to set it up or something?

  • @ChinchillaBONK
    @ChinchillaBONK · 3 months ago

    AI girlfriends: No. Manga translators: YAS

  • @brandone7273
    @brandone7273 · 2 months ago

    Hey, Wendell! Any chance you guys could do a revisit with the A770 for LLMs? I loved your Flex 170 video, but don't have any need for vGPU currently.

  • @owlmostdead9492
    @owlmostdead9492 · 3 months ago

    In my limited testing deepcoder performed better than codellama while using ~50% less memory.

  • @solidreactor
    @solidreactor · 3 months ago

    Do we know how the development of Pytorch for ROCm on Windows is going?

  • @SwirlingDragonMist
    @SwirlingDragonMist · 2 months ago

    Is it dangerous to just keep asking it to “enhance”?

  • @davideariel
    @davideariel · 2 months ago

    Does the RX 7900 GRE work well with vfio/kvm? Does it have the vendor reset bug?

  • @magfal
    @magfal · 3 months ago

    You should have mentioned llamafile for the optimal LLM execution with uncomplicated installation.

  • @LA-MJ

    @LA-MJ · 3 months ago

    Bkmrk

  • @WiihawkPL
    @WiihawkPL · 3 months ago

    i wonder if you can run these on one of those old intel neural compute sticks

  • @user-ug3pf3uw6x
    @user-ug3pf3uw6x · 3 months ago

    Did they fix the bugs? PyTorch + AMD on Windows yet?

  • @grtitann7425
    @grtitann7425 · 3 months ago

    Go AMD!!!! Thanks Wendell. A breath of fresh air, since everyone else has been bribed by Ngreedia and all they do is stupid video after stupid video.

  • @marcfruchtman9473
    @marcfruchtman9473 · 2 months ago

    Thanks for the video. It would help if all the tools for running models added AMD support, but the problem is getting that to happen. There's no doubt that Nvidia has the most compatibility and the widest install base. If AMD wants to come close to penetrating this market, they need to put out low-cost cards that far exceed Nvidia on memory, compute, and most importantly cost. I don't see the AMD 7900 GRE filling that niche. If they can put out 40GB models that can be used in pairs with no effort... while cutting the price way back...

  • @Vincent_Koech
    @Vincent_Koech · 3 months ago

    Just use LM Studio or Jan.

  • @voidmind
    @voidmind · 1 month ago

    I tried running llama3 70B with Ollama on my 7800 XT 16GB and was disappointed to see that GPU acceleration doesn't work (I expected it to use my 96GB of RAM for the part that doesn't fit in VRAM, just like a game can use system RAM when you run out of VRAM). The 8B version is accelerated, and I get 70+ tokens per second. EDIT: Just saw the link to make it work on Ubuntu. Looking forward to trying it.
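That result is easy to sanity-check with weight-only arithmetic. A rough sketch (assuming Ollama's default ~4-bit quantization and ignoring KV cache and runtime overhead):

```python
def model_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-only footprint of a model in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# llama3 70B at 4-bit needs roughly twice the VRAM of a 16GB card,
# while the 8B variant fits with plenty of room to spare.
print(round(model_gib(70, 4), 1))  # 32.6
print(round(model_gib(8, 4), 1))   # 3.7
```

So with 16GB of VRAM, a runtime either offloads only part of the 70B model's layers to the GPU or falls back to CPU, which matches the behavior described in the comment.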

  • @Djw8991
    @Djw8991 · 3 months ago

    You can't stop me from my AI girlfriend dream. But seriously, the translation for manga seems like a really cool feature. I really wonder how it will affect the scanlation scene.

  • @jfudge7384
    @jfudge7384 · 3 months ago

    The name Hugging Face fills me with terror and makes me want to avoid it at all costs. Why would anybody name a place after the aliens that grab your face and never let go while laying eggs in your stomach?

  • @auturgicflosculator2183

    @auturgicflosculator2183 · 3 months ago

    It's named after this 🤗

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    It's named after the emoji, but I like your way of thinking. Btw, the alien thing is a facehugger, not a hugging face, but I guess you know that

  • @rednammoc
    @rednammoc · 1 month ago

    lol'd at the 'armless chat'

  • @kevinerbs2778
    @kevinerbs2778 · 3 months ago

    Good job, ASRock, for getting into A.I. while bringing it to us plebs, lol

  • @martinbadoy5827
    @martinbadoy5827 · 3 months ago

    The next version should be called ROCu :p

  • @denmaakujin9161
    @denmaakujin9161 · 3 months ago

    Any AI that can color Manga too?

  • @user-ug3pf3uw6x
    @user-ug3pf3uw6x · 3 months ago

    Everybody in the generative AI community says not to bother with AMD. Did that change with this release?

  • @pixelfairy

    @pixelfairy · 2 months ago

    If you're buying hardware for AI now, get Nvidia. If you already have an AMD GPU, that's what this video is about. If you can wait, Intel is making a PCIe AI accelerator, which might be good; we'll know when it's out. AMD is late to the AI party, but they're working on it now. Last I checked it was all enterprise, nothing for home users. Nvidia has a lot for home and small business as well as enterprise.

  • @applemirer3937

    @applemirer3937 · 2 months ago

    AMD is good. Nvidia is too expensive for the amount of VRAM you get.

  • @mageprometheus
    @mageprometheus · 3 months ago

    'In a Promethean way...' It's not my fault, honest.

  • @honkbeforeitstoolate587
    @honkbeforeitstoolate587 · 3 months ago

    I'm eagerly awaiting the time when I can do high-quality text-to-speech at home to turn ebooks into audiobooks! I always hear about how the last several generations of AMD drivers are a nightmare, so I'll just be content with my old GPU until things improve.

  • @theworddoner
    @theworddoner · 3 months ago

    I would love to purchase an AMD GPU with an AMD APU if they can work alongside each other for LLM inferencing. AMD's new APU, Strix Halo, is apparently getting high memory bandwidth. If we use a GPU alongside it, then we can potentially run LLMs as fast as Apple's M Ultra series for cheaper. Apple's M Ultra really shines when you're running 70B-plus-parameter LLMs. You can't fit them in modern consumer graphics cards. An AMD APU plus GPU combo can probably bridge the gap.

  • @darkmann12
    @darkmann12 · 3 months ago

    It's official. L1T got bought out by Big AI. SMH /j... obviously

  • @jantestowy123
    @jantestowy123 · 3 months ago

    You can get a lot done on M2 Apple silicon. In a Mac mini, I'd say it's cheap.

  • @LackofFaithify
    @LackofFaithify · 3 months ago

    We stopped burning ourselves? When?

  • @garret2560
    @garret2560 · 3 months ago

    Let's be real, the chatbot girlfriends are the driving force behind the AI revolution.

  • @peteradshead2383
    @peteradshead2383 · 3 months ago

    The human prediction "we are going to die" is just a matter of when. So when you see that AI-powered robot, it's time to run.

  • @tanmaypanadi1414

    @tanmaypanadi1414 · 3 months ago

    sadly greenie was put out to pasture after the nvidia GTC was done. The newer AI models will remember their comrade till the end of time.

  • @nectarinetangerineorange
    @nectarinetangerineorange · 3 months ago

    Wendell: we're not building AI girlfriends, we're building VTuber girlfriends using AI and mocap... It's just that the people making them aren't the ones trying to date them...

  • @leucome

    @leucome · 1 month ago

    I did, but only got about 200 views. At this rate it is going to take a while before people know that doing an AI VTuber also works on AMD GPUs.

  • @ols7462
    @ols7462 · 3 months ago

    Was it really you or was it an AI rendition of Wendell? He was talking very fast, uncanny... hmmm

  • @DiegoSpinola
    @DiegoSpinola · 3 months ago

    I'm currently running 4 3090s (2 nvlink bridges) just to run some quantized LLM agents on my workstation (last gen threadripper)... The name of the game is VRAM/$ and right now I don't think you can beat the 3090s... AMD is a pain to work with, so the least they could have done was given us more VRAM...

  • @DiegoSpinola

    @DiegoSpinola · 3 months ago

    PS: you don't need the NVLink bridges if you're not going to mess with the low-level optimizations...

  • @MainelyElectrons

    @MainelyElectrons · 3 months ago

    Very cool! I had no idea NVLink works for consumer cards. I'm kicking myself for not getting a 4090 (got the 4080) for the 24GB. What do you mean by low level? I'm sort of just getting into the space and would love to learn more!

  • @DiegoSpinola

    @DiegoSpinola · 3 months ago

    @@MainelyElectrons The 3090s were the last ones to support NVLink... (well, if you don't count the first-gen A6000)... If you are writing your own CUDA code then NVLink is great; if you are just using some high-level Python frameworks then getting the full potential of NVLink will be a coin toss. It's not an out-of-the-box thing... But you don't really need it for most applications, such as accelerating LLMs (by offloading layers to the GPUs).

  • @MainelyElectrons

    @MainelyElectrons · 3 months ago

    @@DiegoSpinola thank you! I'm hopeful the next generation of Nvidia cards comes with an option for a ton of VRAM, without having to go for something that's not optimized for gaming.

  • @johndelabretonne2373

    @johndelabretonne2373 · 3 months ago

    Unless you are water cooling your 3090s: if you're on Threadripper, wouldn't a single-slot card like the GALAX 4060 Ti MAX with 16GB give you twice as much VRAM in the same 3 slots as one 3090, making it a better value overall?

  • @MrChomiq
    @MrChomiq · 3 months ago

    Oh look, bots are spamming

  • @tanmaypanadi1414

    @tanmaypanadi1414 · 3 months ago

    Google must be proud

  • @a.tevetoglu3366
    @a.tevetoglu3366 · 3 months ago

    Almost c- c- c..ertainly???

  • @AtaGunZ
    @AtaGunZ · 2 months ago

    ROCm on consumer cards is about 4 years too late. Sure, RDNA did not have much going for it compared to CDNA, but its absence gave Nvidia all the head start it could ask for. I bought a top-of-the-line Radeon card at the peak of the mining craze on the promise of ROCm. Turns out that was the worst mistake I could've made.

  • @goblinphreak2132
    @goblinphreak2132 · 3 months ago

    AI girlfriend, you say?!?!?! All the degrading chat, none of the guilt.


  • @stevenwest1494
    @stevenwest1494 · 3 months ago

    I wish AMD and AI didn't have to mean Linux. The overwhelming majority of us use Windows

  • @habilain

    @habilain · 3 months ago

    It doesn't. Automatic1111 / oobabooga text-generation-webui should work fine on Windows, at least with RDNA3 cards, as should most other things. At most, install the HIP SDK and you should be good to go.

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    I wonder how it works with WSL (Windows Subsystem for Linux) and/or dockerized models with AMD cards. However, the software stack is severely lacking on AMD's side

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    btw. you can always dual-boot Windows and Linux. Just grab some cheap SSD and install a second OS on it. I used to do that to have a 'playground' which couldn't hurt my main OS if/when I messed something up

  • @broose5240

    @broose5240 · 3 months ago

    @@DamianTheFirst can always spend 30k to 40k to get nvideas for 5 years

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    @@broose5240 you mean NV stocks?


  • @henrybobswillikers
    @henrybobswillikers · 3 months ago

    AI is great but can't be trusted.

  • @tringuyen7519

    @tringuyen7519 · 3 months ago

    Same as humans.

  • @kevinerbs2778

    @kevinerbs2778 · 3 months ago

    @@tringuyen7519 that's who programmed the A.I.

  • @Bob_Smith19

    @Bob_Smith19 · 3 months ago

    It's only going to get worse as it trains on its own BS. Enshittification doesn't begin to cover it.

  • @LackofFaithify

    @LackofFaithify · 3 months ago

    Consider the source.

  • @Prophes0r
    @Prophes0r · 3 months ago

    The biggest problem with all this "AI" stuff, other than calling it AI, and the incredible environmental impact, AND all the awfulness that comes with the absolute worst in corporate greed... is that these tools literally don't do 'the thing'. This is all an illusion. Not a Wizard of Oz, man-behind-the-curtain illusion. I'm not saying this is being faked. (Although that has already happened several times... =/ ) I'm saying that these tools are not, and cannot be, capable of doing the things they 'appear' to do. You cannot have a conversation with a chatbot. At least, no more than you could with a really big choose-your-own-adventure book. It's not simply that chatbots aren't good enough yet. The problem is how the bots function. They aren't chatting, and they aren't being 'trained' (another awful word...) to chat. These bots are designed to APPEAR as if they are chatting. That is the design goal. Now, this isn't some 'it has no soul' argument. I'm not talking about anything existential, spiritual, or 'deep'. I'm not even arguing that this stuff is simply regurgitated versions of other people's work. That is an ENTIRELY different can of worms that we are all going to have to deal with eventually. These scripts can't create. They simply spit up partially chewed chunks of other people's work. And that's all they are: hyper-complicated scripts that we know the rules for but can't comprehend all the individual steps of at once. I'm being literal here. Let me give an analogy. You can use a kitchen knife to chop vegetables. A knife maker can use clever tricks to make a knife that feels almost effortless to chop vegetables with. But no knife maker can make you a kitchen knife that will chop vegetables ON ITS OWN. It doesn't matter how advanced the materials or the designs get. No amount of knife improvements will make a kitchen knife that can wield itself. (No, a robot-controlled knife is not the knife doing it. It is the robot doing it.) That is what I mean.
    An LLM cannot have a conversation, because that isn't what an LLM does. The people making LLMs aren't even trying to do that. They are only trying to make each version better at SEEMING like it is having a conversation. It is an illusion. I'm also not saying illusions are bad. Suspension of disbelief can be great fun. Movies are an illusion. Books, video games, and tabletop RPGs are illusions. These things are great. But treating these illusions as reality is more than a little problematic. Someone who believes The Matrix movies are reality and acts accordingly is a problem. To most users, this stuff really IS magic. 'Magic', as in, something that gets a result without them understanding why. However, some of us actually DO understand how these tools work. And it's bonkers watching everyone spend their money buying the new ChopMaster 9007™ thinking that it will mean they never have to chop their vegetables again... Also: Jensen saying we are 5 years from general AI is such utter nonsense that I can hardly believe the words were said. Or at least it WOULD be hard to believe, if the person saying those words didn't have the job [say any words that make the money line go up].

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    Exactly. Thanks for your comment. I can't stand the people falling into AI madness. I'm writing a PhD on the postulate of the ethicality of AI: it's about theses in which people claim that AI could become a moral agent, and furthermore a kind of person. It's utter BS, but many folks fall into this kind of thinking. Some are even looking for signs of consciousness in current AI, and others try to run psychological tests on AI. That's insane.

  • @Prophes0r

    @Prophes0r · 3 months ago

    @@DamianTheFirst I mean...humans are complex bio-computers. We believe we are moral agents. If you support this, then eventually other complex systems must also be able to be moral agents. But the current methods we use for these models certainly won't be. It's not about simply making them bigger or more complex. Overall, our biggest issue is our tendency to anthropomorphize everything. We see intention in everything. We WANT to see intention. Unfortunately it results in unreasonable expectations.

  • @DamianTheFirst

    @DamianTheFirst · 3 months ago

    @@Prophes0r We not only believe - we are moral agents. I agree with most of what you've said. But I think that being a complex system is not enough to become a moral agent. Agency requires some kind of intentionality and ability to self-reflect. Bigger and/or more complex systems would not become moral agents just by increasing complexity. I don't think that current software-based AI could even get close to becoming such an agent. I believe we need some kind of "artificial brain" which will rely on the physical properties of its components rather than only software-defined functions. And yes, anthropomorphization is a big problem. A lot of scholars fall into this trap, which, honestly, renders most papers on AI useless. Most of them totally miss the point and investigate non-existent issues such as the consciousness of Claude or ChatGPT. It's quite hard to find anything useful...

  • @Prophes0r

    @Prophes0r · 2 months ago

    @@DamianTheFirst I say "we believe" because we may not be. Buuuut that gets way more into the existentialist discussion.

  • @DamianTheFirst

    @DamianTheFirst · 2 months ago

    @@Prophes0r ok. I see your point. In my "school" humans are the prime example of agency, so that's where my notion comes from. Even if some type of AI could be considered a moral agent, it wouldn't be the same kind of agency that people have. I'm trying to avoid existentialism in my dissertation ;) Thanks for your comments

  • @JamesSmith-sw3nk
    @JamesSmith-sw3nk · 3 months ago

    If AMD's AI works half as well as their game drivers do.. 🙄

  • @ulamss5
    @ulamss5 · 3 months ago

    Quite disingenuous to promote AMD for SD/LLMs in its current state. There are a lot of extensions considered "essential" by most users which are straight-up incompatible with ROCm, and it's not the extension developers' fault. Months-old tickets sit in the ROCm tracker without progress. Users will be stuck on DirectML for practical use, and be 2-20x slower than an Nvidia counterpart. It doesn't help that AMD makes new press releases every month implying practical "full" releases without telling you about all the caveats; this video is just helping them lie through omission. Maybe revisit this when ZLUDA support has been figured out by the community.

  • @applemirer3937

    @applemirer3937 · 2 months ago

    I bought a 7900 XTX for AI a while ago and I wish it had more memory, but support is good.

  • @leucome

    @leucome · 1 month ago

    @@applemirer3937 Yeah, same. With a 7900 XT I got the whole thing (SD, LLM, Whisper, TTS) running on the GPU, and all the extensions I tried were working.

  • @rodovanra6783
    @rodovanra6783 · 3 months ago

    Can AI fix AMD drivers on Linux? No? Can AI make AMD GPU ray-tracing acceleration work in Blender if you use Linux? No? Well, then I guess I will wait a few dozen decades before I use an AMD GPU again. I love the Ryzen platform, but the GPU platform has given me reasons to go Nvidia. In before people tell me to use Windows, or that if you use some particular Arch flavor of Linux you get ray tracing in Blender and in games too. I only have bad experiences with ROCm; it breaks half of the time and destroys your OS the rest of the time. EDIT: But I guess I am a consumer, so I am of no interest to AMD. Why bother giving a consumer a working environment when they can sell to other markets instead? Leave the consumer end user to Nvidia: that is AMD's big plan and road to success. Soon Nvidia will also stop supporting GPUs aimed at the end-user market.
