
When M1 DESTROYS a RTX card for Machine Learning | MacBook Pro vs Dell XPS 15

Testing the M1 Max GPU with a machine learning training session and comparing it to an Nvidia RTX 3050 Ti and RTX 3070. Cases where Apple Silicon might be better than a discrete graphics card in a laptop.
Get TG Pro: www.tunabellys... (affiliate)
▶️My recent tests of M1 Pro/Max MacBooks for Developers - • M1 Pro/Max
▶️ Is M1 Ultra enough for MACHINE LEARNING? vs RTX 3080ti - • Is M1 Ultra enough for...
▶️ GPU battle with Tensorflow and Apple Silicon - • GPU battle with Tensor...
▶️ Python Environment setup on Apple Silicon - • python environment set...
▶️ Apple M1 JavaScript Development Environment Setup - • M1 MacBook JavaScript ...
▶️ Apple M1 and VSCode Performance - • Apple M1 and VSCode Pe...
#m1 #m1max #ml #pytorch #rtx3070 #macbookpro #intel12thgen #rtx3050ti #dellxps15
ML code:
sebastianrasch...
💻NativeScript training courses - nativescriptin...
(Take 15% off any premium NativeScript course by using the coupon code YT2020)
👕👚iScriptNative Gear - nuvio.us/isn
- - - - - - - - -
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: / alexanderziskind
- - - - - - - - -
🏫 FREE COURSES
NativeScript Core Getting Started Guide (Free Course) - nativescriptin...
NativeScript with Angular Getting Started Guide (Free Course) - nativescriptin...
Upgrading Cordova Applications to NativeScript (Free Course) - nativescriptin...
- - - - - - - - -
📱LET'S CONNECT ON SOCIAL MEDIA
ALEX ON TWITTER: / digitalix
NATIVESCRIPTING ON TWITTER: / nativescripting

Comments: 200

  • @AZisk · 2 years ago

    How to properly pronounce Nvidia's RTX 3050 Ti (for this video I chose the second pronunciation): kzread.info/dash/bejne/ipyq1dKKg6Sxmbg.html

  • @remigoldbach9608 · 2 years ago

    I'm used to hearing "T I"

  • @DocuFlow · 1 year ago

    Have you considered redoing this using Llama.cpp? You may have already, and I missed it. But by golly that would be interesting, especially the model sizes that could be run on lots of shared RAM. On my AMD3960/2090 I'm limited to the 7B model, quantized to 4bit.

  • @hugobarros6095 · 1 year ago

    I would still like to see a speed comparison with a lower batch size. Because memory is just one aspect of a GPU. If it is still slower then it's not better.

  • @georgioszampoukis1966 · 2 years ago

    Having access to 64 GB of GPU memory is just insane at this price. Theoretically you can even train large GAN models on this. Sure, it will take a very long time, but the fact that you can still do it at this price and with this efficiency is just madness. The unified approach is brilliant, and it seems that both Intel and AMD are slowly moving down this path.

  • @p.z.6712 · 2 years ago

    I agree with your point. Laptops should be used primarily for local development and functionality testing. Running fewer than 5 epochs on a Mac serves this purpose well. If the functionality test passes, we can then push the model to remote servers for long training runs. In contrast, most Nvidia RTX graphics cards have extremely limited VRAM, so you can only test small models on them, though they are cruelly fast.

  • @mrinmoybanik5598 · 1 year ago

    The M1 Max with the 32-core GPU has a whopping 10.4 TFLOPS of compute, which is in the same order of magnitude as a mobile RTX 3070 Ti with 17.8 TFLOPS. It's insane how Apple is progressing in efficiency. I hope the upcoming M2 Max will be able to compete with the mighty Nvidia cards in terms of raw compute.😮

  • @trubetskoy4395 · 1 year ago

    @@p.z.6712 I can run yolov8x inference on a mobile 3060 both plugged in and unplugged, and both times it will be faster than Apple

  • @trubetskoy4395 · 1 year ago

    @@mrinmoybanik5598 How is 10.4 the same as 17.8? It is on par with a lower-end 3060, which costs a third of the price of an M1 Max

  • @mrinmoybanik5598 · 1 year ago

    @TRUBETSKOY I said they are of the same order of magnitude, i.e., both can perform 10^13 times some constant number of floating point operations per second. Sure, that constant is 1.04 in the case of the M1 Max and 1.78 in the case of the 3070 Ti mobile. And looking at the current pace of development this is just a generation's worth of gap; the RTX 2070 Ti mobile also had a similar 10.7 TFLOPS of raw power in its highest-TGP variant.
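The "same order of magnitude" claim above is easy to check with a couple of lines; the TFLOPS figures are the ones quoted in this thread, not independently measured:

```python
import math

# Peak throughput figures quoted in the thread (FLOPS).
m1_max_flops = 10.4e12      # M1 Max, 32-core GPU
rtx3070ti_flops = 17.8e12   # mobile RTX 3070 Ti

def order(x):
    """Order of magnitude = floor of the base-10 logarithm."""
    return math.floor(math.log10(x))

print(order(m1_max_flops), order(rtx3070ti_flops))  # 13 13 -> same order
print(round(rtx3070ti_flops / m1_max_flops, 2))     # 1.71 -> the raw gap
```

So both sit at 10^13 FLOPS, with roughly a 1.7x gap in raw throughput, which is exactly what both commenters are saying.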

  • @hozayfakhleef1223 · 11 months ago

    This comparison doesn't even make sense. You are comparing a $5,000 laptop to two laptops that cost only a fraction of what this 64 GB RAM monster costs

  • @Jonathan-ff8tl · 7 months ago

    I'm seeing them used for $2000. But you're right, you could definitely get a better machine for AI under 2k. Also consider that this MBP is a great portable machine for everything else too.

  • @MalamIbnMalam · 4 months ago

    The new Asus Zephyrus G14 with the RTX 4070 comes to mind

  • @Marc-mp6lf · 4 months ago

    I agree, but the GPU and CPU sharing memory is what he was saying.

  • @WildSenpai · 2 months ago

    Exactly what I said: it's like a tank vs a pistol

  • @shahmeercr · 1 month ago

    @@WildSenpai lol

  • @gabrigamer00skyrim · 1 year ago

    It would be interesting to see the performance with limited batch size on the RTX GPUs versus the M1 max.

  • @keancabigao1461 · 2 years ago

    Great video! I'm actually quite interested in how well the base M1/M2 chip would perform on basic machine learning tasks implemented in R.

  • @datmesay · 2 years ago

    I'm wondering what the spread is for the same exercise between the M1 and M2 MacBook Air.

  • @terrordisco2944 · 2 years ago

    I don't remember the GPU core counts for the M1 vs the M1 Max, but it's in multiples, 8 vs 32 or something, so GPU-based performance varies wildly. And extra memory too: the OS and the programs occupy the same amount of memory, so if that's 10 GB, the difference between 6 GB spare and 54 GB spare is tremendous. There's much less difference between the CPUs. But the base M2 is a cheap computer; it will compare favorably with other cheap to mid-price computers. So if your tasks are CPU computation and not wildly memory intensive, the difference is little, and where the difference is greatest... well, you're doing stuff in R. You can figure it out :)

  • @keancabigao1461 · 2 years ago

    @@terrordisco2944 huge thanks for this!

  • @tonglu3699 · 1 year ago

    To my knowledge, R cannot do multi-core computing natively. There are R packages out there that allow you to manually manage your computing tasks and send them to different CPU cores. I've never done any GPU acceleration in R, so cannot really speak to that. I switched to an M1 machine earlier this year and noticed a significant performance improvement in R, but I'm pretty sure that's because M1 has great single core performance compared to my old machine. It does allow me to leave R running in the background and multi-task on other things with abandon, knowing how much computing capacity the CPU still has.

  • @noone-dc4uh · 2 years ago

    But in reality every production-grade ML task is done in a distributed manner in the cloud using Spark, because it's impossible to fit real-time data in a single computer's storage. So it doesn't matter which computer you have locally, Apple or non-Apple; it is only used for initial development and prototypes.

  • @joaogueifao6468 · 4 days ago

    This is simply not true. That may be the majority, but saying that all tasks are done that way is an overstatement. We're deploying production-grade ML tasks for computer vision on our customers' laptops, both Apple and Windows.

  • @youneslaidoudi8214 · 1 year ago

    I trained VGG16 on a fully loaded MacBook Pro 14" 2023 (M2 Max / 96 GB of unified memory) in 16.65 min total training time

  • @wynegs.rhuntar8859 · 2 years ago

    Now shared memory for the GPU makes sense, good comment ;)

  • @PedroTeixeira · 2 years ago

    Would love to see the other longer ML comparisons, thank you!

  • @kevinsasso1405 · 1 year ago

    This isn't actually the case if your data loaders are memory intensive (audio loading, etc). Ultimately you'll want your own set of dedicated RAM so that your CPU isn't bottlenecked

  • 2 years ago

    those m1 max laptops are beasts

  • @slothgirl2022 · 2 years ago

    Is there any word on whether PyTorch will ever take advantage of the M-series' Neural Engine? That might well boost the numbers further.

  • @AZisk · 2 years ago

    PyTorch is still new to this. Perhaps one day it will be optimized, but for now I suppose we should be happy it works.

  • @FrankBao · 2 years ago

    I don't see it happening soon, as the official TensorFlow implementation doesn't support it either. I've found the ANE is used by some apps for inference tasks. I'm hoping for XLA support.
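For anyone wanting to try the Metal (`mps`) backend this thread discusses, the usual pattern is a small fallback chain. A sketch: the selection helper is plain Python, and the commented-out part shows the standard PyTorch availability checks (`torch.backends.mps.is_available()` / `torch.cuda.is_available()`) you would feed into it:

```python
def pick_device(mps_ok: bool, cuda_ok: bool) -> str:
    """Prefer Apple's Metal backend, then CUDA, then fall back to CPU."""
    if mps_ok:
        return "mps"
    if cuda_ok:
        return "cuda"
    return "cpu"

# With PyTorch 1.12+ installed, the flags come from the library itself:
# import torch
# device = torch.device(pick_device(torch.backends.mps.is_available(),
#                                   torch.cuda.is_available()))
# model = model.to(device)   # then move your tensors/model as usual
```

Note the Neural Engine (ANE) is not exposed through this path; `mps` targets the GPU only.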

  • @47shashank47 · 2 years ago

    Thanks a lot Alex for your videos. Because of your videos I purchased an M1-based MacBook, which has made my work really smooth. Now I can use VS Code with many other useful extensions simultaneously, making my web development work much easier. I think Apple should keep you on their marketing team 😀😀. You are doing better than their whole expensive marketing campaign. I had no reason to purchase a MacBook, then I saw your videos, which really helped me out.

  • @thedownwardmachine · 2 years ago

    I'm interested in seeing your personal project benchmarked across systems! But some friendly advice: I think you should be consistent with your use of significant digits across measurements. 0.1m doesn't mean the same thing as 0.10m.

  • @1nspa · 2 years ago

    They are both the same

  • @arunvignesh7015 · 2 years ago

    @@1nspa No they aren't. 0.1 can translate into 0 or 0.2 with a 0.1-unit tolerance, while 0.10 translates to 0.09 or 0.11 with a 0.01-unit tolerance. If your tolerance is 0.01, you won't represent that case with 0.1; you use 0.10 instead. This has caused so much confusion and so many errors in engineering over the years, which is indeed why we came up with measurement standards.
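The significant-digits point is easy to see with Python's own formatting; the timing value below is made up for illustration:

```python
t = 0.1023  # a raw timing in minutes (hypothetical)

# One decimal place claims roughly ±0.05 min; two claim roughly ±0.005 min.
print(f"{t:.1f}")   # '0.1'
print(f"{t:.2f}")   # '0.10'

# Consistent reporting: round every measurement to the same precision.
times = [0.1023, 0.46, 1.5]
print([f"{x:.2f}" for x in times])   # ['0.10', '0.46', '1.50']
```

Same number, different implied precision, which is exactly the commenter's complaint.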

  • @MHamzaMughal · 2 years ago

    Loved the video! Please compare the RTX 3080 Ti mobile with the M1 Max or M1 Pro if you can. That would be a good comparison, considering those RTX cards have more memory

  • @emeukal7683 · 1 year ago

    No, it's a bad comparison, because then Apple can't compete. He is there to get clicks, typical influencer. Watched some videos now to be sure. If you are serious about machine learning then you need the best. The best is a desktop with a 4090, a Threadripper for example. Even the biggest Mac, 29-whatever cores, can't compete. They can compete well in the low-budget segment and in laptops. So buy whatever, but don't buy a Mac for your scientific work.

  • @somebrains5431 · 2 years ago

    It's fine for learning, but the VRAM limitations once you start dealing with production-quality algorithms will make you offload your workloads to something with multiple A100s. Training time on rigs with dual 3090s is worth a look, to see how GPU RAM is being loaded.

  • @dr.mikeybee · 2 years ago

    Nice. Some of the PyTorch Lightning code doesn't seem to run, but the other benchmarks do. I'm on the 16 GB Mac mini, and CIFAR-10 runs. I'm up to just under 16 GB being used, and it's not grabbing a bunch of swap. It may take forever to finish, but I think it will get to the end. I'll leave it running for a half-hour or so. Two years ago I bought a K80 because I was running out of memory, but the power draw is significant, and mostly I use models and don't train, so I suspect this M1 will be good enough.

  • @AliZamaniam · 2 years ago

    I'm really torn between the 14" and 16" MacBook Pro. Also between Windows and macOS. Plus between silver and space gray 😔🤔

  • @AZisk · 2 years ago

    i’m confused sometimes too, sometimes between the kitchen and the bedroom

  • @AliZamaniam · 2 years ago

    @@AZisk 😂😂👌🏻

  • @planetnicky11 · 2 years ago

    Yes, please make a video on that!! Can't wait to install PyTorch with Metal. :)

  • @csmac3144a · 2 years ago

    I have multiple machines for different purposes. Two things I do absolutely require a Mac, so it's not even a question for me: iOS development with Xcode, and Final Cut Pro.

  • @fieryscorpion · 2 years ago

    How is iOS development with .NET MAUI?

  • @PsycosisIncarnated · 2 years ago

    I wish we could use one machine for all purposes

  • @TheDevarshiShah · 2 years ago

    Is it a single-threaded benchmark? If so, will the results be more or less the same on other M1 chips, such as the M1 or M1 Pro?

  • @nasirusanigaladima · 1 year ago

    I love your machine learning Mac videos

  • @AZisk · 1 year ago

    thx

  • @stevenhe3462 · 2 years ago

    If PyTorch could use those Neural Engines, it would be much faster. For now, you can only do that in Swift, I guess…

  • @woolfel · 2 years ago

    CIFAR-10 is considered a small test, but for a YouTube video it's large. Truly large models have datasets of over 10 million images :) On an Nvidia video card with 8 GB or less, you really have to keep the batch sizes small to train on the CIFAR-10 dataset. With the CIFAR-100 dataset, you have to decrease the batch size to avoid running out of memory. You can also change your model in TensorFlow to use mixed precision.

  • @manoschaniotakis3328 · 2 years ago

    PyTorch also supports mixed precision
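Both frameworks expose mixed precision (e.g. `torch.autocast` in PyTorch, the `mixed_float16` policy in Keras); the memory argument behind it is just arithmetic. A back-of-envelope sketch with an illustrative activation size, not a measured one:

```python
def tensor_bytes(shape, bytes_per_elem):
    """Memory for one dense tensor with the given element size."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem

# A hypothetical CIFAR-sized activation batch: (batch, channels, height, width)
shape = (512, 64, 32, 32)

fp32 = tensor_bytes(shape, 4)  # float32: 4 bytes per element
fp16 = tensor_bytes(shape, 2)  # float16: 2 bytes per element

print(fp32 // 2**20, "MiB vs", fp16 // 2**20, "MiB")  # 128 MiB vs 64 MiB
```

Halving the element size roughly halves activation memory, which is why mixed precision lets a 6-8 GB card keep larger batch sizes before running out of VRAM.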

  • @bruno_master9844 · 2 years ago

    Hi, if the test you ran at 3:12 was CPU-based, does that mean the M1 Pro with the 10-core CPU would finish at the same time as the M1 Max?

  • @James-hb8qu · 1 year ago

    First, a very, very short test that is basically measuring setup time. The shared memory system doesn't have the serial delay of loading the GPU, so it comes out ahead. Then you rail it to the other extreme to find a test that will only run with substantial memory. That seems... engineered to give a result. Honestly, that appears less than upfront. How about testing some real-world benchmarks that run on RTX machines with 8-12 GB and comparing the performance to the M1? If the M1 comes out ahead, then cool.

  • @inwedavid6919 · 2 years ago

    Of course it is better on the $5,000 top-spec M1 Max, but would it do the same on the base model?

  • 2 years ago

    Where is the Schwarzenegger?!?!

  • @AZisk · 2 years ago

    He'll be back

  • @ooppitz1 · 2 years ago

    @@AZisk 😂👍🏻

  • @jay8412 · 2 years ago

    Great video as always. Would love to see more such tests relevant to software developers on Apple Silicon

  • @droidtang · 2 years ago

    The RAM size is still the bottleneck though. Some of my projects easily require far over 64 GB just for data wrangling, even before any training. But yeah, normally you just don't do this on a laptop, unless it's a mobile workstation like a ThinkPad P-series, where you can have a Xeon CPU with up to 128 GB RAM and an Nvidia RTX A5000 GPU.

  • @laden6675 · 2 years ago

    can't wait for the Mac Pro on Apple Silicon. Imagine, 2TB of memory for the GPU.

  • @droidtang · 2 years ago

    @Laden Yes, could be interesting. @Alex Ziskind More interesting would be a comparison between the M1 Max GPU and an RTX A5000 on larger data sets, where the GPU is more efficient and faster than the CPU.

  • @woolfel · 2 years ago

    I'm curious how hot that ThinkPad gets running an A5000 at full load :) I'm guessing it's enough to fry up some eggs for lunch

  • @davout5775 · 2 years ago

    It would be very interesting if the SSD could fill that purpose. MacBook Pros with the M1 Max use the fastest SSDs on the market, and I believe they could be used for such operations.

  • @user-wj1ru2xn6q · 1 year ago

    @@laden6675 sadly didn't happen.

  • @arhanahmed8123 · 1 year ago

    Does this mean that a MacBook M1 would be better for machine learning tasks? Should I buy a MacBook over a Dell XPS for ML and coding?

  • @djoanna9606 · 1 year ago

    Same question here. Any advice would be appreciated! Thanks!

  • @aniketainapur3315 · 11 months ago

    @@djoanna9606 Did you get any answer? I'm still confused

  • @matus9787 · 3 months ago

    For CPU tasks, of course; for GPU tasks, possibly not

  • @lehattori · 1 year ago

    Great videos, they help me a lot! Thanks!

  • @ptruskovsky · 2 years ago

    Where is the Parallels with Windows on ARM with ARM Visual Studio video, man? Waiting for it!

  • @TheCaesarChris · 1 month ago

    Potentially dumb question but why didn’t you use an intel/amd laptop with a 4070/4080/4090 and 32/64GB ram?

  • @dorianhill2480 · 2 years ago

    I'd run the training in the cloud. Frees up the laptop. Also means you don't need such an expensive laptop.

  • @gokul.s49ibcomgs22 · 1 year ago

    Which laptop should I prefer for machine learning, DL, and data science: the MacBook Air M2 or the ROG Strix G15?

  • @tsizzle · 4 months ago

    But don't you need CUDA to utilize most of the ML Python libraries? In that respect, don't you have to use Nvidia hardware? What if you're mostly working from the DevOps perspective: trying to set up the proper Conda and pip environment, testing functionality on simple/smaller datasets and small training runs, and then moving your code to the cloud to run the full training and inference on AWS Nvidia A100 or DGX A100 resources?

  • @MachielGroeneveld · 2 years ago

    Could you get the actual GPU memory usage? Maybe something like iStats

  • @Ricardo_B.M. · 1 year ago

    Hi Alex, I want to get a laptop with an i7-1260P, 64 GB, and Intel Iris Xe. Would it work fine for machine learning?

  • @HDRPC · 1 year ago

    Wait for Meteor Lake laptops (coming in September or October 2023), as they have 50% better efficiency than 13th-gen laptops, with a dedicated VPU (AI accelerator) that will be super fast for AI and machine learning. They will also support LPDDR5 RAM at 7400 MHz, up to 96 GB, and an iGPU two times faster than Intel's best 13th-gen iGPU.

  • @motivsto · 4 months ago

    Should I buy a Mac mini M2 or a PC? Which one is better?

  • @MrRhee16 · 2 years ago

    Good vid. Want to see whether PyTorch on the M1 Max would be faster than Windows machines with an Nvidia GPU in a similar price range.

  • @vinayak1998th · 7 months ago

    Who uses Windows for PyTorch?

  • @joseluisvalerio4006 · 1 year ago

    Wow, interesting to test on my M1 Pro. Thanks a lot.

  • @hhuseyinbaykal · 8 months ago

    We need an updated version of this video. Please do 4080/4090 laptops vs the M3 series

  • @cfaf-ct9xl · 2 years ago

    That's a very amazing result for deep learning on the M1. But I think no DL engineers will use laptops... I do expect the new Mac Pro to become a DL server, though.

  • @mohamedouassimbourzama5524 · 1 year ago

    What about the MacBook Air M2 8/256 for machine learning? And which is faster, the Air M2 or the T4 GPU on Google Colab? Thanks

  • @SinistralEpoch · 2 years ago

    Dumb question, but did you notice that you'd typed "cudo" instead of "cuda" in the PyTorch test?

  • @arnauddebroissia8964 · 1 year ago

    So you used an incorrect batch size. Nice story, but it means nothing. The important metric is the number of samples per second... You can have a batch size of 12: if you do 10 batches a second, it will be better than a batch size of 64 doing 1 batch a second...
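The point above can be made concrete: throughput, not batch size, is the number to compare. A tiny sketch using the commenter's own figures:

```python
def samples_per_second(batch_size, batches_per_sec):
    """Effective training throughput: samples processed per second."""
    return batch_size * batches_per_sec

small_batches = samples_per_second(12, 10)  # bs=12 at 10 batches/s
big_batches = samples_per_second(64, 1)     # bs=64 at 1 batch/s

print(small_batches, big_batches)  # 120 64 -> the smaller batch wins
```

So a fair cross-GPU comparison would report samples/s (or time per epoch) at whatever batch size each machine can sustain, rather than fixing one batch size that only the 64 GB machine can fit.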

  • @awumsuri · 1 year ago

    Yes please make the video on your project

  • @trongnguyenkim3617 · 11 months ago

    Please do more comparisons between multiple video cards (3060 12 GB, 3070, 4060) and Apple's M1 and M2 chips! We need more information about this comparison: total time to process a test, total computation in a range of time, average speed, pros and cons. Thanks so much, sir!

  • @chen0rama · 2 years ago

    Love your videos.

  • @zatchbell8112 · 1 year ago

    Can you show how to do basic ML on the Asus G15 Advantage Edition with Windows and ROCm, please?

  • @jalexromero · 2 years ago

    Would be great to see a video on clustering M1 or M2 Mac minis to crunch very large CNN projects... But great videos! Looking forward to the PyTorch one!

  • @bipinkoirala2962 · 2 years ago

    I'll trim down the batch size and use buffers, but I still refuse to get a MacBook.

  • @mikekaylor1226 · 2 years ago

    Great stuff!

  • @muntakim.data.scientist · 2 years ago

    Conclusion: I'm gonna buy an M1 Max 🥴

  • @akibjawad7447 · 1 year ago

    I don't understand. How is this happening? How 64 GB?

  • @edmondhung6097 · 2 years ago

    Sure. Love those real-project videos

  • @angrysob7962 · 2 years ago

    I'm a bit disappointed that The Schwarzenegger did not make an appearance.

  • @AZisk · 2 years ago

    he’s on vacation. he’ll be back

  • @AndriiKuftachov · 2 years ago

    Who would do real ML tasks for business on a laptop?

  • @bikidas2718 · 1 year ago

    Which MacBook Pro is this?

  • @depescrystalline3392 · 2 years ago

    Can the same test be run using the integrated GPU on the i9-12900? Would be interesting to see since the iGPU also shares system memory, so it might not have the same restrictions as a "discrete" GPU.

  • @RunForPeace-hk1cu · 2 years ago

    No iGPU is going to use all of the RAM in your system. That's not how Intel's iGPU architecture works

  • @brightboxstudio · 2 years ago

    I’m not sure if I found the best source because not many mention how much system memory integrated graphics can use on 12th gen Intel Core, but one source says up to 4GB. If your Intel Mac or PC is more than a few years old, integrated graphics are limited to grabbing 1.5GB from system memory, and only if more than a certain amount of RAM is installed. The difference with Apple unified memory is that it is completely dynamic, there are no walls. That is why any system memory not used by macOS, apps, and background processes is available to graphics, so if your Apple Silicon Mac is using 30GB out of 64GB system memory, the graphics can use all the rest if it wants, which is what Alex showed.

  • @davout5775 · 2 years ago

    The problem with that is that it's extremely limited, to about 4 GB at best. Furthermore, the unified memory on the M1 has significantly more channels and no such limitation: the limit is basically your total RAM, which can be all 64 GB if you have that configuration. You also don't have to worry much about RAM, because MacBooks use swap and can page to the SSD, which on the M1 Max does between 5 and 7+ GB/s.

  • @EdwardFlores · 5 months ago

    With Apple you get a lot more RAM right away... For LLMs, Apple machines are better than any AMD/Nvidia solution for home computing. For some of the things I do I need more than 40 GB of RAM... there is no video card I can pay for that gives me that

  • @datmesay · 2 years ago

    Just for clarification, 0.1 of a minute is 6 seconds, so the Mac found a solution in 6 seconds?
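Yes: the on-screen timings are decimal minutes, not minutes:seconds, so the conversion is a single multiply (0.46, the other figure debated further down the thread, works the same way):

```python
def decimal_minutes_to_seconds(minutes):
    """Convert a decimal-minute reading (e.g. 0.1 min) to seconds."""
    return minutes * 60.0

print(round(decimal_minutes_to_seconds(0.1), 3))   # 6.0  -> yes, 6 seconds
print(round(decimal_minutes_to_seconds(0.46), 3))  # 27.6 -> "almost half a minute"
```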

  • @ThomazMartinez · 2 years ago

    I love that on PCs you are using Linux for the tests instead of Windows; please continue with more Linux vs macOS tests

  • @paraggupta3099 · 2 years ago

    It would be great to know in which tests the M1 Max beats the i9 and in which the i9 beats the M1 Max, so please make a video on that

  • @lalpremi · 1 year ago

    Thanks 🙂

  • @Tech_Publica · 1 year ago

    If the MacBook Pro is that good for ML, then why did you say that normally you use the Asus?

  • @AZisk · 1 year ago

    In what circumstances is key here; that's what the vid is about

  • @javascriptes · 6 months ago

    I miss the Schwarzenegger 😂

  • @blackpepper2610 · 2 years ago

    Do a test of the Neural Engine on the M1 chip next, please

  • @aimanyounis8387 · 1 year ago

    I tried to fine-tune a pretrained model using the MPS device, but I realised training is faster on the CPU than on MPS, which doesn't seem to make sense to me

  • @vinayak1998th · 7 months ago

    There are a bunch of odd things here, especially the 3050 Ti outperforming a 3070

  • @kevinmesto608 · 2 years ago

    Can't wait for macOS Ventura, for the multitasking benefit of Stage Manager

  • @mikapeltokorpi7671 · 2 years ago

    64 MB shared memory is a huge benefit, but you should redesign your AI code to handle smaller lots.

  • @cascito · 1 year ago

    64 GB

  • @davidtindell950 · 2 years ago

    Thank you, yet again! ….

  • @davidtindell950 · 2 years ago

    It is impressive what a difference the integrated M1 memory makes vs. mobile Nvidia laptop components.

  • @wynegs.rhuntar8859 · 2 years ago

    The Ziskind-net AI, soon, xD

  • @underatedgamer9939 · 2 years ago

    The RTX 3050 here is not the 90 W version; the 90 W TGP 3050 is almost 2x better

  • @HDRPC · 1 year ago

    True

  • @FOOTYAS · 2 years ago

    i died when it took a screenshot 😭

  • @PsycosisIncarnated · 2 years ago

    Why won't Apple make some durable damn laptops with loads of ports?? I just want a ThinkPad with a roll cage, the M1 chip, and Linux ffs :((((((

  • @mi7chy · 1 year ago

    Isn't that just demonstrating poor coding? ML workloads like Stable Diffusion are significantly faster on Nvidia, and even with chunking larger generated images to fit within VRAM it's still faster.

  • @inwedavid6919 · 2 years ago

    RAM is relevant on the M1 Max if you buy it with plenty of RAM, as it is shared. But more RAM makes the price explode.

  • @WildSenpai · 2 months ago

    But you do understand that those Windows ones are cheaper? Plus, for 64 GB of RAM in an Apple I think I would have to sell my house

  • @DavidGillemo · 1 year ago

    Feels weird that a 3070 wouldn't beat the 3050 Ti, and doesn't the 3070 have 8 GB of VRAM?

  • 1 year ago

    Yes, the 3070 has 8 GiB; if his notebook has 6 GiB then it's a 3060. It's weird.

  • @cypher5317 · 1 year ago

    So it's a 3060, not a 3070, but it should still do better than the 3050 Ti with 4 GB VRAM, no? Confusing to me

  • @iCore7Gaming · 1 year ago

    A 3070 in a laptop is literally just a 3060. Crazy how Nvidia gets away with that.

  • @whyimustusemyrealname3801 · 1 year ago

    M1 vs desktop GPU, please

  • @manikmd2888 · 2 years ago

    Machine learning doesn't require an Nvidia RTX / AMD Radeon. You only need statistics, an R book for example, and a desktop/laptop.

  • @insenjojo1839 · 2 years ago

    Thank you for pronouncing SILICON and not SILIKEN like most reviewers..

  • @MaxwellHay · 1 year ago

    $4000 vs $1500 ??? How about a 3080 laptop?

  • @prakhars962 · 11 months ago

    RTX GPUs are designed for gaming, not machine learning. Of course it will be slower. It's cool that they baked the RAM so close to both the CPU and GPU.

  • @DennisBolanos · 10 months ago

    I think it depends on which RTX card it is. To my knowledge, GeForce RTX cards are meant for gaming while RTX Quadro cards are meant for professional use.

  • @MarkTrudgeonRulez · 2 years ago

    To be honest, RISC vs CISC is no comparison. ARM is a RISC-based CPU; look back to the Archimedes, same CPU family. We have gone through similar scenarios before, except this time it is multi-core architecture. Who knows: today RISC is winning, tomorrow CISC will win... again, who knows. Disclosure: I ordered an M1 Pro to check it out, and yes, I had the Apple IIe as my first computer, and no, I'm not an Apple fanboy!!! My favorite computer by far was the Amiga!!!

  • @puticvrtic · 2 years ago

    Schwarzenegger has a leg day unfortunately

  • @AZisk · 2 years ago

    😂

  • @WhatsInTheName. · 10 months ago

    Better to buy a Windows laptop for everything other than ML, as it's faster: programming, virtual machines, Docker containers, non-Apple video editing software, and gaming. And for ML models, run those in the cloud at a fraction of the cost

  • @iCore7Gaming · 1 year ago

    But who really is going to be doing machine learning "on the go"? If you were, you could just remote into a server with the correct hardware anyway.

  • @2dapoint424 · 2 years ago

    Make a video using the M1 and run your project.

  • @vit.c.195 · 6 months ago

    Actually you are not comparing the M1 to an RTX, but macOS to Shitbuntu. Just because you weren't able to get Linux tuned for the Intel laptop and for the dedicated purpose. Actually you can do that, but you weren't able... for whatever reason.

  • @rishikeshdubey8823 · 9 months ago

    Why not just buy a Windows laptop and then train models on an external GPU?

  • @eyeshezzy · 2 years ago

    You look much more cheerful

  • @felipe367 · 2 years ago

    3:58 erm, that's not quite half a minute, more like 3/4 of a minute

  • @AZisk · 2 years ago

    0.46 is almost half a minute, half a minute being 0.5

  • @felipe367 · 2 years ago

    @@AZisk How many seconds in a minute, Alex? 60? Thus half a minute would be 30 seconds. 0.5 seconds is 50 seconds.

  • @AZisk · 2 years ago

    @@felipe367 I should have been more clear: 0.5 is half. There is a decimal point there, not a colon. If it looked like this, 0:50, then that would be 50 seconds

  • @felipe367 · 2 years ago

    @@AZisk 0.5 is half of 1 BUT NOT half a minute, as that is 0.30 no matter if you put a decimal or a colon.

  • @AZisk · 2 years ago

    @@felipe367 😂

  • @phucnguyen0110 · 2 years ago

    It's called "3050 Ti-ai", not "3050 Tie", Alex :D

  • @ranam · 1 year ago

    If apples could help Newton (i.e. a natural scientist) find gravity, why can't an Apple computer help a data scientist do machine learning correctly?

  • @arianashtari5433 · 2 years ago

    I Want That Video 😃

  • @j.r.stefan202 · 1 year ago

    I wish for an Apple, but really I don't like this f.king notch. Everywhere there's a notch. C'mon....

  • @egnerozo1160 · 2 years ago

    Once more... power consumption... The MacBook Pro is excellent....