The Coming AI Chip Boom

Nvidia's GPUs kicked off the neural network revolution. But while GPUs run neural network algorithms quite well, they were never designed specifically for them.
So companies have started developing hardware customized for running specific AI algorithms - dubbed AI accelerators.
Today, the AI accelerator hardware market is estimated to be worth over $35 billion. Venture capitalists poured nearly $2 billion into AI chip startups in 2021.
TSMC counts AI accelerator hardware among its top secular revenue drivers for the near future. In this video, we look at what these chips actually are and how they work.
Links:
- The Asianometry Newsletter: asianometry.com
- Patreon: / asianometry
- The Podcast: anchor.fm/asianometry
- Twitter: / asianometry

Comments: 420

  • @rich_in_paradise (a year ago)

    One key aspect of Google's TPU (and other specialist AI processors) compared to a GPU that you didn't mention is the number representation. GPUs can process 32- and 16-bit IEEE floating point numbers. But for AI work Google found that the fractional part of the number (commonly known as the mantissa) is less important than the magnitude (the exponent), so they changed the number of bits allocated to each in their own BFLOAT16 format. That makes their processors better for AI, but relatively useless for other kinds of numerical computation.
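A rough sketch of that trade-off (illustrative only, not TPU code): bfloat16 is simply the top 16 bits of an IEEE-754 float32, so it keeps the full 8-bit exponent range but only 7 mantissa bits.

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    # Pack as IEEE-754 float32 (1 sign + 8 exponent + 23 mantissa bits),
    # then keep only the top 16 bits: 1 sign + 8 exponent + 7 mantissa.
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16

def from_bfloat16_bits(b: int) -> float:
    # Zero-fill the dropped mantissa bits to recover a float32.
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

pi_bf16 = from_bfloat16_bits(to_bfloat16_bits(3.14159265))  # 3.140625
# Only ~2-3 decimal digits survive, but the float32 exponent range does:
# 1e30 round-trips to a nearby finite value, where float16 would overflow.
big_bf16 = from_bfloat16_bits(to_bfloat16_bits(1e30))
```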

  • @daddy3118 (a year ago)

    Graphcore has Float8 being considered by the IEEE.

  • @TheReferrer72 (a year ago)

    Same with Tesla's supercomputer, which has a custom number format.

  • @cyrileo (a year ago)

    I know 😃. A lot of optimizations have been done since then to squeeze out even more performance. (A.I.)

  • @tykjpelk (a year ago)

    I'm very excited about the silicon photonics approach. Photonic chips don't need to perform multiplications one at a time; they do the whole matrix multiplication in parallel, which makes it an O(1), or constant-time, operation. The chip needs to be configured to multiply by a certain matrix, which takes milliseconds, and can then perform matrix multiplications as fast as you can give it inputs. With 50 GHz modulators and photodetectors readily available, I'm excited to see what companies like QuiX, iPronics and Xanadu will achieve.

  • @kalukutta (a year ago)

    Are 50 GHz modulators available on chip?

  • @tykjpelk (a year ago)

    @@kalukutta Yes, 50 GHz has been available for several years both in electro-absorption and carrier depletion modulators in SiP, so both amplitude and phase modulation. They're even mature enough to be available in MPW PDKs, so you can just include them in your layout without designing anything from scratch. However they are too large for dense integration of meshes like the ones here.

  • @crackwitz (9 months ago)

    ASICs, FPGAs, GPUs do parallel calculation too. That is no special attribute of photonics.

  • @tykjpelk (9 months ago)

    @@crackwitz True, but in a fundamentally different way. Those devices need to perform a long series of logic operations to get a result. A photonic interferometer mesh is configured to be the multiplication itself, and the result comes out as soon as the input light has passed through it, about a tenth of a nanosecond. It's limited by scaling the chip and how fast you can switch the input/read the output. A reasonable parallel would be that to calculate shadows with ray tracing you need to compute a ton of stuff on a GPU, but the photonic approach is to shine light at the object and look at the shadow.
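The configure-once, stream-inputs idea above can be mimicked in ordinary code. A toy real-valued stand-in for an interferometer mesh (actual meshes use complex-valued phase shifters; the angles below are arbitrary):

```python
import numpy as np

def mzi(theta, n, i, j):
    # One Mach-Zehnder-like 2x2 rotation on channels i and j of an
    # n-channel mesh (a real-valued stand-in for a phase-set interferometer).
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

# "Configuring the mesh" = fixing the phase angles once (milliseconds on-chip).
n = 3
mesh = mzi(0.3, n, 0, 1) @ mzi(1.1, n, 1, 2) @ mzi(-0.4, n, 0, 1)

# "Running the mesh" = light propagating through it: one fixed linear map
# applied per input, with no per-element multiply loop at run time.
x = np.array([1.0, 0.0, 0.0])
y = mesh @ x
```

Because each stage is a rotation, the whole mesh is orthogonal (unitary in the complex case), so it conserves the "optical power" of the input.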

  • @ishan6771 (a year ago)

    As an ML researcher I find this interesting to watch. Unless you run at extreme scales, regular chips are good enough, especially for inference.

  • @harrytsang1501 (a year ago)

    Yes, essentially, inference of smaller models can be done locally, in the browser (WebGL or Webassembly). At larger scale, it's always a limitation of memory bandwidth because no hardware can keep billions of parameters in cache. The caching locality of GPU breaks down pretty quickly and the 24GB of VRAM in top tier consumer grade GPU is still far from enough. At the end you give up and rent Google Colab to run your large models.
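The arithmetic behind that wall is straightforward; a back-of-the-envelope sketch (the 7-billion-parameter size is just an illustrative example, not a figure from the video):

```python
def weights_gb(params: float, bytes_per_param: int) -> float:
    # VRAM needed just to hold the weights; activations, gradients,
    # and optimizer state all come on top of this.
    return params * bytes_per_param / 1024**3

fp16_gb = weights_gb(7e9, 2)   # ~13 GB: barely fits in a 24 GB card
fp32_gb = weights_gb(7e9, 4)   # ~26 GB: already over the limit
```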

  • @ishan6771 (a year ago)

    @@harrytsang1501 True. I don't think any university will invest in specialized hardware; most cloud providers simply give credits to use their cloud services. But for the cloud provider itself, I think such chips can provide significant power savings and are worth it in the long run.

  • @transcrobesproject3625 (a year ago)

    What do you mean by "regular"? Regular GPUs? For certain things like NLP (stanza, Marian, etc) CPUs can be orders of magnitude slower than GPUs, making them totally unrealistic for running inference, so regular GPUs sure, but not CPUs!

  • @shoaibkhwaja4156 (a year ago)

    "64 KB of memory ought to be enough for everyone" 😏

  • @Bvic3 (a year ago)

    Real-time 30 FPS image processing of an HD camera input is very demanding.

  • @gotfan7743 (a year ago)

    You missed two important AI chip companies: UK-based Graphcore and US-based Cerebras, which has designed a wafer-scale AI chip.

  • @incription (9 months ago)

    Not useful unless they can manufacture in mass quantity. It's probably incredibly slow to make wafer-scale chips.

  • @AjinkyaMahajan (a year ago)

    8:15 MAC diagram - you won my heart. This channel motivates me to keep learning and researching and never give up, regardless of how many failures. You clearly have a genuine affection for technology. Cheers ✨✨

  • @AB-uv9kg (a year ago)

    My new favourite channel. Looking forward to catching up on what you've already released and your future videos :).

  • @joshhyyym (a year ago)

    7:11 The box labelled as system processing is actually just a bench-top power supply. It is supplying 1.00 V and 0.000 A, so it is not doing any processing. Great video btw, big fan of your channel.

  • @pirojfmifhghek566 (a year ago)

    This is a great video. I'd been predicting this for a while, simply because of all the gains that I'd heard about in analog chips from Mythic AI. Glad to see that more companies are getting in on this. It's also the perfect time for computing to start implementing new components for neural networks. The available bandwidth for motherboards has gotten ridiculously large lately, so there's a lot of headroom. I think it makes a ton of sense to start using dedicated AI chips for a whole host of common tasks and applications. The efficiency and speed gains would be enormous. This change in computing is gonna happen eventually. We're all gonna be socketing a plethora of purpose-built AI chips into our computers soon. There are just so many "fill in the blank" potential uses for AI. Anyone playing around with AI art generators can see that the results are surprisingly sophisticated and sometimes spooky. But damn does it take a lot of horsepower to do that stuff with a GPU. It takes a 3090 running at full power for several minutes just to produce results. It's horribly inefficient and slow. But it does remind me of the delight people experienced with the early internet. The internet used to be meagre and slow and truly amateurish, but everyone still shared that undeniable enthusiasm for being the first pioneers in a new world. And that's where we're at with AI.

  • @Davethreshold (a year ago)

    Oh drat! Another YT channel that I'll become addicted to! Seriously, I am FASCINATED with technology, mainly computer tech. You cover many aspects of things that I have not quite seen before. Good work! 🧡

  • @Palmit_ (a year ago)

    Thanks Jon :-) How you get your head around, and then write and deliver stuff of this complexity is mind-boggling! Do you even sleep? Do you work? Are you an automaton?? You're incredibly efficient and skilled in any case. You should def do a youtube live Q&A. I'm sure thousands of your viewers have lots of questions each. Thank you again.

  • @RoderickJMacdonald (a year ago)

    I suspect part of his secret is that he simply loves to learn.

  • @maxluthor6800 (a year ago)

    @@RoderickJMacdonald it's not that hard if it's your passion

  • @TradieTrev (a year ago)

    He's a true academic, there's no doubt about it!

  • @aarch64 (a year ago)

    Just a quick thing, I’m like 95% sure Xilinx is pronounced Zy-links. I grew up about a mile from the HQ, and frequently had employees from there read books to me in elementary school.

  • @deang5622 (a year ago)

    I used Xilinx chips years ago. You are correct.

  • @codycast (a year ago)

    You’re telling this to the guy who pronounces “Dee RAM” as “der-am”

  • @aiGeis (a year ago)

    The most egregious mispronunciation in this video was of the great John Von Neumann's surname.

  • @MikeTrieu (a year ago)

    It's almost like this channel takes some sick joy in trolling tech enthusiasts with improper pronunciations of industry jargon. He's never corrected any of his flubs.

  • @reh-linchen4698 (a year ago)

    Love your AI example of mistaking eggs for ping-pong balls with 100% confidence. It is hilarious!

  • @DanOneOne (a year ago)

    Honestly, the whole idea that for AI to work, thousands of humans have to manually classify each picture is just so debilitatingly stupid... It's like having a cheat sheet with all the answers for all tests and, instead of understanding the question and thinking, just guessing the closest answer without any understanding...

  • @nahometesfay1112 (a year ago)

    @@DanOneOne It's less of a cheat sheet and more like doing practice problems, then checking your work against the teacher's answers.

  • @phinguyenvan708 (a year ago)

    I think the problem is not that people can't design AI chips that run faster than Nvidia GPUs; the problem is the huge software stack behind Nvidia GPUs. I have tried both IPU and TPU, and believe me, the software is painful as hell.

  • @BattousaiHBr (a year ago)

    Yeah, same reason AMD came out on top of Intel but can't do the same with Nvidia no matter how competitive the hardware is. Nvidia is just light-years ahead of everyone in the software stack.

  • @Luxcium (a year ago)

    The way you talk about your topic with passion, confidence and humility, and also the rhythm and tone of your voice, makes these videos not only interesting but relaxing and calming. You are such an amazing person, yet because of how humble you are it feels strange to give you compliments. But I guess that somewhere inside, you know that you are doing something right and something good 😅 So I must share this with you, because you deserve many compliments 😊🎉❤

  • @lumanaty (a year ago)

    Great Video. Extremely excited about this industry and really hope to get involved. Met up with some Cornell scientists and discussed lower-power electronic AI accelerators. This space is ripe for innovation that will lead to 10x improvements in power and inference speed. Amazing stuff out there.

  • @halos4179 (a year ago)

    Curious, what makes you believe this claim?

  • @helmutzollner5496 (a year ago)

    Very interesting! An excellent overview of the subject. Thank you.

  • @ArchilochusOfParos (a year ago)

    Excellent channel, accessible and informative, thank you.

  • @avanisoni5549 (a year ago)

    Great explainer!!! I would highly suggest attaching your research source material in the description.

  • @miklov (a year ago)

    Fascinating and well presented. Thank you!

  • @artemglukhov15 (a year ago)

    Great video that presents a nice overview of the current technological scenario. Could you please add the DOI for the papers you are quoting? Just for an easier search.

  • @Name-ot3xw (a year ago)

    Back in the '00s, the concept of a computing 'black box' was gaining steam: the idea that we just push buttons and our PC spits out data that we consumers have only a vague idea of how it came to be. I feel like the coming AI boom will take the black-box idea to the next level. No one will have a solid idea of the how.

  • @out_on_bail (11 months ago)

    Wish I had bought NVIDIA stock when I watched this.

  • @vijvalnarayana5127 (29 days ago)

    Wish I had bought NVDA when you posted this comment.

  • @mightynathaniel5355 (a year ago)

    Excellent video presentation, well done 👍 Subscribed after stumbling on this.

  • @PlanetFrosty (a year ago)

    Dimensity is doing a good job. I've worked on silicon photonics for 25 years, and we're now working on a new solid-state SoC design that includes a unique photosensitive “protein” molecule, but more another time... these are now the “wet works” as we try to evolve new methodologies in visual and human language understanding.

  • @Ivan-pr7ku (a year ago)

    The path to future scaling of ML hardware is switching to analog circuit computation. Conventional binary load/store logic is already bumping into the perf/watt wall.

  • @RoyvanLierop (a year ago)

    I would have expected at least a brief mention of Analog computing, using resistors as weights and adding currents together.
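As a software stand-in for that idea: weights become conductances, inputs become voltages, and Kirchhoff's current law performs the sum on a shared wire (the values below are made up for illustration):

```python
# An analog MAC on one wire: weights are conductances G_i (1/ohm),
# inputs are voltages V_i, and the summed wire current is
# I = sum(G_i * V_i) -- the multiply-accumulate happens in the physics,
# where digital hardware needs one multiplier and adder per term.
voltages = [0.5, -0.2, 0.8]      # inputs, in volts (hypothetical)
conductances = [2.0, 1.0, 0.5]   # weights, in siemens (hypothetical)

current = sum(g * v for g, v in zip(conductances, voltages))  # amperes
```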

  • @EvanBoldt (a year ago)

    Something like memristors seems like the real future of neural network hardware, as network complexity outpaces how many compute units can be put into a package. Programmable resistors could instantly apply a neural network by sitting between the CMOS sensor and the typical ISP.

  • @m_sedziwoj (a year ago)

    The problem with analog computing is the lack of design tools and knowledge, it's hard to debug, and many more issues. Photonics is interesting, but the same applies to neuromorphic chip design. Because with NNs you know where each piece of memory needs to be, you don't need to use RAM; you can put memory next to compute and load it with a pre-programmed sequence, etc.

  • @Ivan-pr7ku (a year ago)

    @@m_sedziwoj We have already put the computation beside the memory -- the GPUs, with their megabytes of registers and caches right next to the ALUs. But this is still not nearly enough and doesn't overcome the huge overhead of classic discrete binary computing. Computation and memory must be fused into a single functional structure, similar to how organic neurons work, to get out of the power overhead trap. Probabilistic computing could also be a significant contribution to ML, since most training doesn't need precise results or strict data formats.

  • @BattousaiHBr (a year ago)

    @@RoyvanLierop forget electrons, imagine doing computations with photons on the fly.

  • @chopper3lw (a year ago)

    GREAT OVERVIEW!!!! Thanks

  • @JohnVance (5 months ago)

    So this aged extraordinarily well!

  • @Bianchi77 (a year ago)

    Nice info, thanks for sharing it:)

  • @mbarras_ing (a year ago)

    Alif and Syntiant are two companies I've spoken to recently doing 'AI Accelerators' for embedded devices. Gonna be an interesting few years!

  • @ippydipp (a year ago)

    Brilliant video mate

  • @mapp0v0 (a year ago)

    Have you heard of BrainChip Inc.? BrainChip has a first-to-market neuromorphic processor IP, Akida. Akida is a neuromorphic system-on-chip designed for a wide range of markets, from edge inference and training at sub-1 W power to high-performance data center applications. The architecture consists of three major parts: sensor interfaces, the conversion complex, and the neuron fabric. Depending on the application (e.g., edge vs. data center), data may either be collected at the device (e.g., lidar, visual and audio) or brought in via one of the standard data interfaces (e.g., PCIe). Any data sent to the Akida SoC must be converted into spikes to be useful, so Akida incorporates a conversion complex with specialized conversion units for turning digital, analog, vision, sound and other data types into spikes.

  • @mariusj8542 (a year ago)

    What's interesting is that even though AI aggregates nodes, each node itself uses a pretty standard regression model, meaning the classification encoded in the calculated weights is based on very old mathematics.
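That lineage is easy to see in code: a single 'node' is a weighted sum passed through a logistic sigmoid, structurally the same as classical logistic regression (the weights below are arbitrary):

```python
import math

def node(x, w, b):
    # Weighted sum plus bias, squashed by the logistic sigmoid --
    # the same functional form as classical logistic regression.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

p = node([1.0, 2.0], [0.5, -0.25], 0.1)  # a probability-like value in (0, 1)
```

A network is "deep" only because many of these classical units are stacked and trained jointly.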

  • @lerntuspel6256 (a year ago)

    Jesus Christ, my biggest project so far was a "simple" 8-bit microprocessor, and that was annoying as hell to lay out in Virtuoso. I audibly gasped when I saw the layout at 4:46.

  • @ez1913 (a year ago)

    Thankfully it still looks vulnerable to voltage spikes and EMP attacks.

  • @James-wb1iq (a year ago)

    As well as hydraulic presses and molten metal.

  • @tyrantfox7801 (a year ago)

    Photon-based computers are on the way.

  • @ConsistentlyAwkward (a day ago)

    Groq is already using photonics to speed up chip to chip communication

  • @sodasoup8370 (a year ago)

    The weird thing is that evolution on the software side, like increased sparsity, was almost completely useless for convolution TPUs. That's why Eyeriss went the multicore route, I guess. I expected it to take longer until we reached that point...

  • @evennot (a year ago)

    I did a diploma on this topic in 2005, prototyping on Xilinx Virtex too. But for spiking NNs, not regular ones. Spiking NNs take advantage of race conditions between simultaneous competing impulses, more akin to real NNs. They don't have a system-wide clock signal, so they remove the disadvantage of the hard discretization of modern electronics.

  • @xntumrfo9ivrnwf (a year ago)

    Have you looked at analog computing/chips for machine learning? I remember reading that they can be advantageous for certain tasks in the training workstream.

  • @In20xx (a year ago)

    Exciting stuff, makes me wonder what will be developed in the near future!

  • @allezvenga7617 (a year ago)

    Thanks for sharing.

  • @LokiBeckonswow (a year ago)

    epic epic epic video, thank you for explaining such complicated tech and concepts so well, thank you

  • @fatgirlgogogo (a year ago)

    Such a great video!😄I want to read up on chips, what forums do y'all go to? Websites you recommend? 🧐

  • @Alorand (a year ago)

    My favorite company to come out of the AI boom is Cerebras with their wafer scale engine.

  • @bendito999 (a year ago)

    Yes that thing is the coolest

  • @rayoflight62 (a year ago)

    The problem with ARM CPUs that include accelerators is that they are proprietary. People writing the OS - say, Linux - need the manufacturer's help to write drivers, software updates, etc. This is not true for x86 CPUs, which have a known structure and don't require Intel's help for writing low-level software. Our only hope is for Intel to invent a 5-watt multicore; RISC or CISC doesn't matter much at this point. If the trend continues with these proprietary SoCs, we will end up with "hardware as a service" - a thing I dislike a lot. It has already happened with software: do you own a video editing program, or a CAD, anymore? Sometimes I hope ARM and Intel get together and design the "Freedom Chip". Otherwise the best processors will only live 2 or 3 years, like our phones do now. Thank you for all your hard work...

  • @leyasep5919 (a year ago)

    Heard about RISC-V? Well, OK, look up "F-CPU", started in 1998... and "Libre-SOC", started in 2008 🙂

  • @JorgetePanete (a year ago)

    I hope photonic computation becomes mainstream soon so CPUs and GPUs stop consuming over 200 W.

  • @pirojfmifhghek566 (a year ago)

    That would be nice, but I'd also just like to see more dedicated components that supplement the CPU and GPU. There are a lot of things that a dedicated AI chip or two could do that would reduce the need for such extreme horsepower. Any efficiency gains we can make are going to be important, and I think we're simply at that phase where we should be creating a new pillar of components to do that. Photonic computing will be great, but even a breakthrough in that space won't make it to the end user for another ten years. But AI chips are almost reaching a point where they can be introduced as a standalone part. Even in something as pedestrian as the gaming space, I could see AI chip applications all over the place. It could produce a lot of streamlining in the design and development phase. It could create better variability in the game, which is honestly just a perk. Most importantly, it could be used to create a healthy number of assets in-game, which could reduce the overall _file size._ And I can't stress enough how important file size is becoming. Many video games take up an enormous amount of space and it's about to skyrocket soon. Just look at Unreal Engine 5. It has great potential to reduce GPU usage, due to its ability to render _a near infinite number of polygons_ without breaking a sweat. But all that polygon data still has to be stored somewhere... assuming it has to be stored at all. Now if a dedicated AI chip could be utilized to create the majority of that content in the end-user's computer, while they're playing the game, that would allow for game designers to deliver lush realism without crushing our drive space with >1TB downloads. Level design, texture design, NPC randomization, NPC dialogue creation, truly sophisticated enemy AI, there's a lot of stuff this could be used for. It's an utter waste of electricity to always depend on the GPU to do these tasks. And then for production workloads... 
man, there are just so many applications for machine learning here. It's only limited by one's imagination. Applications where the end result doesn't need to be _exact,_ but it just needs something convincing to fill in the blank and round out the rough edges. Adobe image processing, video color correction, pattern recognition, animation, 3d modeling, predictive 3d modeling, etc. Just tons of stuff that we're kinda already dipping our toes into, but the current GPUs are just too slow to reliably carry the load without bursting into flames. And of course anyone with Excel wizardry could probably think of an infinite number of potential applications there too.

  • @JorgetePanete (a year ago)

    @@pirojfmifhghek566 UE5 allows the use of Nanite, which is getting more features in experimental 5.1, and the assets are compressed, in Lumen in the Land of Nanite most of the space is taken by high res textures

  • @pirojfmifhghek566 (a year ago)

    @@JorgetePanete It's highly compressed, but it's not nothing. There's still a natural tendency for game designers to push file sizes to their limits. The difference between a AAA title with static assets and a procedurally generated title can be enormous. Even if nanite could shrink the static assets down by 70%, it doesn't hold a candle to the potential of procedurally generated design. I see it as a type of low-hanging fruit. Texture creation based off of smaller seed files would also be a helpful use of AI. You are right that they take up a crapton of space. Sometimes the bulk decompression of texture files alone is enough to make CPUs and SSDs weep. That's a bottleneck we could do without. "There's got to be a better way!" I shout, with my fists raised to the skies.

  • @JorgetePanete (a year ago)

    @@pirojfmifhghek566 Seeing how massive each cod warzone update is, when there are people with dial-up internet makes me sad, I hope the community stops just saying "meh" to all the bad things big companies do

  • @blacklotus432 (a year ago)

    dude your content is A+++

  • @adityapr.9380 (a year ago)

    3:36 That's an image of the city of Indore (M.P.); the traffic guy is Ranjeet the dancer.

  • @Star_cab (a year ago)

    "A learning neural network" - I recall this being referenced in a movie.

  • @miketjdickey2954 (a year ago)

    Great blog, thank you.

  • @bernardfinucane2061 (a year ago)

    Moving the memory to the calculation would be like creating a hardware neuron.

  • @AlexK-jp9nc (a year ago)

    I believe that's what they're gunning for. I saw another startup where they were hacking with transistors to change them from simple 0/1 to something like a sliding scale, and then doing math with those values. I think that's extremely similar to how an organic brain works

  • @leyasep5919 (a year ago)

    @@AlexK-jp9nc Wait... transistors are analog parts, you know. It's how you use them that makes them digital or analog - whether you saturate them or not. Analog computing with discrete transistors is an old art.

  • @jysm3302 (a year ago)

    Somebody needs to give you an educator award for this. Miles and miles ahead of any I've seen yet.

  • @adissentingopinion848 (a year ago)

    As a brand-new FPGA designer being introduced to computational designs, I'm pumped to see integrated AI cores in my designs that can add a little AI processing without losing general computing resources. MMUs can make your routing congestion very sad as is :( But knowing FPGA design will let me shift over to ASICs if that's what's in demand.

  • @deang5622 (a year ago)

    Only if you implement your design in VHDL which can be synthesized. If you're coding up specific logic functions which exist in the FPGA vendor supplied libraries then you're going to have a problem. And it's not a case of whether ASICs are in demand, it's simply a case of performance and cost and the volume of sales.

  • @popemuhammed5749 (a year ago)

    Good video. However, as someone who studied and works in the deep learning and microelectronics field, you failed to talk about the accuracy deterioration between float64 and other Neural Network Accelerators (NNAs). I expect a follow-up video. I agree NNAs are faster than regular GPU operations, but it comes at an accuracy-degradation cost. An NNA is fitting countably infinite numbers (float64) into 256 (uint8) or 65536 (int16) values. Regardless of how well a quantization does this, there's always error incurred. Building on the same point: repeat multiply-and-add operations 100 million times in uint8, and the resulting values will differ significantly from the original float64 values. Sometimes the accuracy degradation can be more than 30%!
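The degradation described above is easy to demonstrate with a plain affine uint8 quantizer (a generic sketch, not any particular accelerator's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)            # float64 "weights"

# Affine quantization: map [min, max] linearly onto the 256 uint8 codes.
lo, hi = w.min(), w.max()
scale = (hi - lo) / 255.0
q = np.round((w - lo) / scale).astype(np.uint8)
w_hat = q.astype(np.float64) * scale + lo  # dequantized approximation

# Each weight is off by at most half a quantization step; across
# millions of multiply-accumulates those errors compound, which is
# why quantized models are judged on end-task accuracy instead.
max_err = np.abs(w - w_hat).max()
```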

  • @lukakapanadze6179 (a year ago)

    What are your opinions on Tenstorrent?

  • @daddy3118 (a year ago)

    Luckily accuracy is measured as the accuracy of the inference as a whole rather than numerical accuracy of individual calculations.

  • @subliminalvibes (a year ago)

    YouTube recommended this. Thanks so much! Subscribed. 👍😎🇦🇺

  • @AbuSous2000PR (a year ago)

    very informative; many thx

  • @kayakMike1000 (a year ago)

    Convolutions are huge; edge detection, for example, looks at the pixels around a specific pixel...
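That neighbourhood-of-pixels operation is a small kernel sliding over the image; a naive sketch of the multiply-accumulates a convolution accelerator performs:

```python
import numpy as np

# Laplacian-style edge kernel: weights sum to zero, so flat regions
# cancel out and only sharp intensity changes produce a response.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def conv2d(img, k):
    # Naive "valid" 2-D convolution: one MAC per kernel tap per pixel.
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

flat = conv2d(np.ones((5, 5)), kernel)   # no edges -> all zeros
step = np.zeros((5, 5))
step[:, 3:] = 1.0
edges = conv2d(step, kernel)             # the step edge responds
```

Accelerators speed this up precisely because the loop body is the same MAC repeated millions of times.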

  • @kathrynradonich3982 (a year ago)

    I can’t be the only one who saw the video thumbnail and thought “wow the PPC G5 is making a comeback” when seeing those heat sinks 😂

  • @coraltown1 (a year ago)

    As a retired CPU engineer I find this fascinating to watch/learn, except that the more advances we make .. the more society seems to go to hell.

  • @cubancigarman2687 (a year ago)

    I was really against the proliferation of AI. The visions of the future brought by movies could possibly come true. But as I see the divisions within our country, and politicians greedily doing what's best for themselves with little consideration for the masses, I have come around to letting the technology push through. There will be a point when we let AI write its own programs to make itself more efficient, draw less wattage, and so on and so forth. Maybe AI will be compassionate toward our childish needs and help raise us into the future. Maybe AI will conclude that we are just parasites on the limited energy resources of the planet that AI and humans inhabit and will deplete. I am sure safety measures will be in place so AI does not go rogue. But what if even the safety measures are countered by AI? I guess only time will tell. Would we have androids whose sole function is to take care of humans, e.g. the episode of Logan's Run (circa 1970s), or the terminators and hunter-killers from the Terminator film series? Perhaps I'm completely wrong and it's the Hunger Games scenario, where the elites have complete control over the planet's remaining resources and govern human existence. I will need more whisky and cigars to think this subject through! Good day and be safe!

  • @sanuthweerasinghe7825 (a year ago)

    Hi Asianometry, I think a good video would be looking at the possible applications of Gallium Nitride (GaN) in the chip making industry and whether it could unseat silicon. They promise to be higher performance, lower power and cheaper.

  • @xuedi (a year ago)

    I have never seen so many ping-pong balls in a refrigerator before!!

  • @MaxPower-11 (a year ago)

    Thank you for the informative video. BTW, it’s pronounced ‘fon Noyman’ or ‘von Noyman’ Architecture (named after the eminent mathematician and polymath John von Neumann).

  • @leoott436 (a year ago)

    Hey Jon, I think a great follow-up to this video would be one on Tesla's dedicated self-driving chips and their Dojo training hardware.

  • @henrycarlson7514 (a year ago)

    Interesting, thank you.

  • @queasyRider3 (a year ago)

    Have you seen the other video, where they show the ability to use analog circuits for really fast and energy-efficient computations? They do mention the error margin, which means analog would be better used in certain cases. Really interesting, though. Also, I like the deer.

  • @dougsimmonds5462 (9 months ago)

    Can't figure out where to sign up for your newsletter.

  • @retromograph3893 (a year ago)

    Great vid! Please do a vid on Optalysys!

  • @FerranCasarramona (a year ago)

    Check out the low-power neuromorphic hardware from the Australian company BrainChip.

  • @liberatemi9642 (a year ago)

    FPGAs aren't necessarily "slower", rather more costly.

  • @kaizen52071 (a year ago)

    Maybe Jon should make a course on semiconductors - their workings and evolution - on a platform like Brilliant. It would be a killer, at least given the needs out there.

  • @georgabenthung3282
    @georgabenthung3282 a year ago

    Great video, as always, thanks. You mention silicon photonics and argue that chips produced with this technique can solve the problem that storage and processing do not happen in the same place. I don't see where silicon photonics helps to solve this specific problem. Isn't the difference in these chips that the data travels as photons on the connecting bus? You might take a look into the analog computing chips which are planned. They might be a real game changer when it comes to simulating the brain's neurons.

  • @DanOneOne
    @DanOneOne a year ago

    Honestly, the whole idea that in order for AI to work, thousands of humans have to manually classify each picture is just so debilitatingly stupid... It's like having a cheat sheet with all the answers for all tests and, instead of understanding the question and thinking, just guessing the closest answer without any understanding... It will work for many cases. It's better than nothing, but really it's a dead end...

  • @augustday9483
    @augustday9483 a year ago

    Think of it like this: humans spend years being taught by other humans, training their brains until they're smart enough to start achieving novel solutions on their own. Right now, we have to put in a lot of manual effort to teach the AI. Eventually, the early AI will be able to teach their next generation, and then the next generation, and so on...

  • @KillFrenzy96
    @KillFrenzy96 a year ago

    I think you may be mistaken. This work is an investment to do less work later on. Many AIs are actually more accurate than humans are. You are underestimating AI if you think it cannot understand.

  • @escapefelicity2913
    @escapefelicity2913 a year ago

    well done!

  • @Khal_Rheg0
    @Khal_Rheg0 a year ago

    Have my youtube algo contribution! Great video, very interesting!

  • @JeremyErskine
    @JeremyErskine a year ago

    Remember when people were talking about physics cards? This is never going to become a standard in PCs.

  • @victorfeng4284
    @victorfeng4284 a year ago

    Thanks!

  • @leoalex2001
    @leoalex2001 a year ago

    Very interesting and nicely explained video. Just one question, what exactly is a weight?

  • @arirahikkala
    @arirahikkala a year ago

    Just a parameter whose value is used to multiply an input. They're called weights because they describe how much you weigh a given input, for instance, maybe a cat face detector might have high positive weights for a cat eye and cat ear input, and perhaps a negative weight for a dog snout input.
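    In code, that reply's weighted-sum idea looks something like this (a toy sketch with made-up numbers, not any real model's weights):

    ```python
    # A neuron's output is a weighted sum of its inputs plus a bias.
    # Positive weights count evidence for the class, negative weights against it.
    inputs  = [1.0, 1.0, 0.2]    # hypothetical detector scores: cat eye, cat ear, dog snout
    weights = [0.8, 0.7, -0.9]   # learned weights: cat features help, dog snout hurts
    bias = -0.5

    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    print(round(z, 2), z > 0)    # threshold at zero to decide "cat or not"
    ```

    Training is then just the process of nudging those weight values until the outputs match the labels.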

  • @cinemaipswich4636
    @cinemaipswich4636 a year ago

    These chips only work in a "serial" fashion, unlike 64-core, 128-thread CPUs. They need one processor after another. If a "network" of processors is needed, then that would require a synchronized processor the size of a football pitch. Latency kills big chips.

  • @avenoma
    @avenoma a year ago

    informative. ty

  • @helloxyz
    @helloxyz a year ago

    Data travels down electric wires and chip paths just as fast as photons down a fibre optic cable or photonic path. It is the components at either end that are the problem.

  • @tahustvedt
    @tahustvedt a year ago

    Seems like a lot of the AI development happening isn't really AI, just advanced algorithms.

  • @cyrileo
    @cyrileo a year ago

    "That's an interesting point! AI goes beyond algorithms as it involves complex decision making and processing." 🤔 (A.I)

  • @qzorn4440
    @qzorn4440 a year ago

    Oh my, very interesting 😎 thanks

  • @johnl.7754
    @johnl.7754 a year ago

    What wowed me the most lately is the AI that can draw pictures from simple descriptions that you give it. It is better than most done by human graphic designers. It should be mostly an AI software advancement rather than a hardware one, but I'm not certain.

  • @johnl.7754
    @johnl.7754 a year ago

    kzread.info/dash/bejne/h2WXqJuKc9iXorQ.html I saw it in this video

  • @vanillavonchivalry6657
    @vanillavonchivalry6657 a year ago

    John, you're a little mistaken. Dalle Mini doesn't draw or paint or sketch anything. It compiles images as a result of instructions. So it's not drawing, for instance, Johnny Depp eating a carrot; it's compiling images of drawings of Johnny Depp from search engines like Google, Bing, etc. It isn't painting Trump eating Nancy Pelosi, it's finding images of "paintings of Trump" and compiling them into multiple images based on instructions. Nonetheless it is cool. But at the end of the day what you're seeing are human-created images compiled into some dream-like result.

  • @mattmmilli8287
    @mattmmilli8287 a year ago

    @@vanillavonchivalry6657 that's not true... I mean, it is somewhat. But you can say "Johnny Depp as an angel eating a carrot in heaven, drawn in the style of the Simpsons". It has some reference for all those things but has to get creative to make something new.

  • @jpatt0n
    @jpatt0n a year ago

    @Cancer McAids Look up Dall-E 2.

  • @blinded6502
    @blinded6502 a year ago

    @Cancer McAids You haven't visited internet in a while, I see.

  • @paulmichaelfreedman8334
    @paulmichaelfreedman8334 a year ago

    "Edge or server?" "Cash or charge?"

  • @autohmae
    @autohmae a year ago

    14:13 well, you've answered your own question, pretty certain a number of people are looking into how to solve that one. It at least has the most promise if (it can be) solved.

  • @y.shaked5152
    @y.shaked5152 a year ago

    8:09 - "The multiply-accumulator circuit is designed to do just one thing. It multiplies two numbers and then adds it to an accumulation sum." I mean... that's *two* things, my man. :)
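    Fair point, though the two steps are fused into a single hardware operation. In software, the same multiply-accumulate loop is just a dot product (a toy sketch, not any particular chip's circuit):

    ```python
    def mac_dot(weights, activations):
        """Repeat the multiply-accumulate step: multiply two numbers,
        then add the product into a running accumulator."""
        acc = 0.0
        for w, x in zip(weights, activations):
            acc += w * x    # one MAC operation
        return acc

    print(mac_dot([1, 2, 3], [4, 5, 6]))    # 1*4 + 2*5 + 3*6 = 32.0
    ```

    An accelerator tiles many thousands of these circuits so the loop runs in parallel instead of one step at a time.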

  • @KomradZX1989
    @KomradZX1989 a year ago

    Ha! I like your dad's saying, "Problems are just opportunities in disguise." That's so true! Smart guy 😁

  • @geasderlinasdwsxcdeasd
    @geasderlinasdwsxcdeasd a day ago

    Why have I never heard of Nvidia Tesla recently? I thought they just gave up on this product roadmap.

  • @akaSova
    @akaSova a year ago

    Thanks for the "Edge and Server" hipster café start-up idea xD

  • @bioxbiox
    @bioxbiox a year ago

    The video is a gem. This could be a successful Master's thesis.

  • @gileneusz
    @gileneusz 8 months ago

    13:28 you have wise dad!

  • @nygariottley245
    @nygariottley245 3 months ago

    Were you talking about LPUs (Groq) using light, i.e. lasers?

  • @gildardorivasvalles6368
    @gildardorivasvalles6368 a year ago

    Sorry to correct you, but Neumann is not pronounced "Newman", it's pronounced "Noy-mann". It's the name of a mathematician, who among other things contributed greatly to the development of computation: en.wikipedia.org/wiki/John_von_Neumann Other than that, great video as always. Thank you.

  • @Asianometry
    @Asianometry a year ago

    I just pronounced his name this way in another video. You’re gonna love it

  • @gildardorivasvalles6368
    @gildardorivasvalles6368 a year ago

    @@Asianometry , hahaha, nice! 😄 Thanks for the reply, and I will very likely watch that other video some time soon. Keep up the good work!

  • @asnaeb2
    @asnaeb2 a year ago

    AI accelerators other than GPUs never work unless your model is like 6 years old and uses no new functions. They are very inflexible.

  • @kevinbroderick3779
    @kevinbroderick3779 a year ago

    3:40 I'll have 3 ping pong balls over-easy.

  • @prashantsapkal1901
    @prashantsapkal1901 a year ago

    03:33 - He's the dancing traffic police man of Indore, Madhya Pradesh (MP), India.

  • @benmcreynolds8581
    @benmcreynolds8581 a year ago

    I saw an advanced analog-type technology for AI systems that ties into other tech.

  • @boydnelson2280
    @boydnelson2280 a year ago

    So cool slipping in a picture of an Arduino Uno learning kit when talking about neural networks: two things that couldn't be more different.

  • @brujua7
    @brujua7 a year ago

    Great conclusion at the end! I guess you don't give it more weight to avoid sounding too opinionated, I appreciate that.

  • @0MoTheG
    @0MoTheG a year ago

    Because inference is part of training, hardware that only does inference is still useful for training.

  • @georhodiumgeo9827
    @georhodiumgeo9827 a year ago

    Ahhh, I understand. So they needed the TPU for AI but wanted to use the same architecture for the server and gaming markets, so they manufactured the ray-tracing market to sell the same architecture to both. Good or bad, Nvidia is next-level genius.