What are Transformers (Machine Learning Model)?

Learn more about Transformers → ibm.biz/ML-Transformers
Learn more about AI → ibm.biz/more-about-ai
Check out IBM Watson → ibm.biz/more-about-watson
Transformers? In this case, we're talking about a machine learning model, and in this video Martin Keen explains what transformers are, what they're good for, and maybe ... what they're not so good at.
Download a free AI ebook → ibm.biz/ai-ebook-free
Read about the Journey to AI → ibm.biz/ai-journey-blog
Get started for free on IBM Cloud → ibm.biz/Bdf7QA
Subscribe to see more videos like this in the future → ibm.biz/subscribe-now
#AI #Software #ITModernization

Comments: 135

  • @ChatGPt2001 · 16 days ago

    Transformers are a type of machine learning model used primarily for natural language processing (NLP) tasks. They have revolutionized the field of NLP due to their ability to handle long-range dependencies and capture complex linguistic patterns. Here are key points about transformers:

    1. **Attention Mechanism**: Transformers use an attention mechanism that allows them to weigh the importance of different words or tokens in a sequence when processing input data. This enables the model to focus on relevant information while ignoring irrelevant or redundant parts.
    2. **Self-Attention**: In a transformer model, self-attention refers to computing attention scores between all pairs of words or tokens in an input sequence. This allows the model to capture dependencies between words regardless of their positions in the sequence.
    3. **Multi-Head Attention**: Transformers often employ multi-head attention, where multiple attention heads operate in parallel. Each head learns different aspects of the input data, enhancing the model's ability to extract meaningful information.
    4. **Encoder-Decoder Architecture**: Transformers typically consist of an encoder-decoder architecture. The encoder processes the input sequence, while the decoder generates the output sequence. This architecture is commonly used in tasks like machine translation and text generation.
    5. **Positional Encoding**: Since transformers do not inherently understand token order the way recurrent neural networks (RNNs) do, they use positional encoding to provide information about token positions. This allows the model to consider sequence order during processing.
    6. **Transformer Blocks**: A transformer model is composed of multiple transformer blocks stacked together, each containing self-attention layers, feedforward layers, and normalization layers. Stacking these blocks enables the model to learn hierarchical representations of the input data.
    7. **BERT and GPT**: Two popular transformer-based models are BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pretrained Transformer). BERT is designed for tasks like sentiment analysis and question answering, while GPT focuses on generating human-like text.

    Transformers have significantly advanced the capabilities of NLP models, leading to breakthroughs in areas such as language translation, text summarization, sentiment analysis, and dialogue systems.
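    A minimal sketch of the self-attention idea in points 1–2, assuming NumPy; this is illustrative only, not a full multi-head transformer block:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model) token embeddings -> contextualized vectors."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv           # query/key/value projections
        scores = Q @ K.T / np.sqrt(K.shape[-1])    # attention score for every token pair
        weights = softmax(scores, axis=-1)         # each row sums to 1
        return weights @ V                         # weighted mix of value vectors

    rng = np.random.default_rng(0)
    d_model = 8
    X = rng.normal(size=(5, d_model))              # 5 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8): one output vector per token
    ```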

  • @user-jl5gj4mv1z · 11 days ago

    You put great effort into writing this.

  • @command.terminal · 5 months ago

    In our graduation years we learned about something called a codec, as in coder-decoder (something like a modem for modulator-demodulator, or a balun for balanced-unbalanced, in the domain of communication technology). So as I understand from the video, transformers are just a fancy, advanced name for a codec, operating at a much bigger, commercial scale.

  • @claudiamariariveraguevara7376 · 7 months ago

    Thank you for your enthusiasm and explanation; by far the best.

  • @ArchieLuxtonGB · 2 years ago

    Hi Martin from the Homebrew Challenge! ML and beer clearly go hand in hand!

  • @ms.barrio4402 · 1 year ago

    I really love your videos, as they are friendly and easy to understand. I'm really grateful for the high quality of the synthesis of key messages on AI/ML/DL. I am a medical doctor and biomedical researcher, and I can see great potential in using these different techniques to further develop a bunch of areas, for example economic evaluations based on modeling (using a combination of approaches in the sensitivity analysis to find out the internal consistency of the predictions, to gain internal validity as a cornerstone of external validity). So, looking forward to learning more through your channel. Thank you again for sharing good-quality knowledge. L.

  • @ms.barrio4402 · 1 year ago

    Congratulations on all the teamwork! I will keep learning more. Thank you all, Leslie.

  • @GregHint · 11 months ago

    What a great way to introduce the topic. First 4 seconds made me laugh out loud. Well done (and the rest of the video as well)

  • @jaimeeduardo159 · 1 year ago

    Banana joke GPT-4: Sure, here's a banana joke for you: Why did the banana go to the doctor? Because it wasn't peeling very well!

  • @evaar440 · 1 year ago

    Good transformer 🤣

  • @hassanjaved906 · 1 year ago

    I like to see the energy you put into it. Thanks for this.

  • @amarnamarpan · 9 months ago

    Dr. Ashish Vaswani is a pioneer, and nobody is talking about him. He is a scientist from Google Brain and the first author of the paper that introduced TRANSFORMERS, which are the backbone of all other recent models.

  • @user-uv2sy5je4z · 8 months ago

    Agreed

  • @AK-ex5md · 2 months ago

    He should be documenting his work like our guy, and make interesting vids. Hope it happens.

  • @goldencinder7650 · 1 year ago

    I have been more than blown away by the unfathomable exponential growth from just scaling transformers up by a few weights lol

  • @AbdulRahman-tj3wc · 8 months ago

    Are encoders and decoders both RNNs? Please clear up my doubt.

  • @tahmeed702 · 8 months ago

    Need an explanation for GRU, BERT, and LSTM.

  • @user-kc8qb8qf7r · 4 months ago

    Thank you for your video; it is really easy to understand.

  • @nikhilranka9660 · 11 months ago

    Thanks for this video - a simple and concise introduction to transformers. Do large language models really possess reasoning capabilities? Or does the way they operate just make it seem so?

  • @ilhamije · 1 year ago

    Thank you!

  • @robb1324 · 1 year ago

    Perhaps the AI made the banana joke as a subtle way to tell us humans that we are a cruel species that mash anything we come across. The AI finds it funny because the banana would rather cross the road and take on the high likelihood of being mashed violently by a vehicle to avoid the certain mashing by humans. Perhaps the AI identified with the banana 🤔

  • @st0a · 1 year ago

    Next level empathy: thinking about a banana's perception of reality 🧠

  • @drewsteinman1898 · 1 year ago

    Q

  • @zainkhalid5393 · 1 year ago

    You guys are overthinking it. 😁

  • @gohardorgohome6693 · 1 year ago

    That's how I interpreted it too - like yeah, the AI knows the banana doesn't want to be mashed by a car; neither do I.

  • @l4l01234 · 1 year ago

    No, you're definitely overthinking it. The AI doesn't think anything, because it is incapable of context like "we are a cruel species that mash anything we come across". Unless you specifically put that in the prompt, it has no mechanism to even conceive of the phrase.

  • @zackmertz3214 · 1 year ago

    Great video! I'm stumped on how you made this. Did you really write backwards? Can you reveal your magic trick?

  • @JoshWalshMusic · 1 year ago

    You write it naturally and then flip the video when editing.

  • @AK-ex5md · 2 months ago

    Exactly what's going on in my mind lmao

  • @yasmincohen-sason3325 · 1 year ago

    This was great!!!

  • @noahwilliams8996 · 1 year ago

    How does the transformer take something of variable length (like a sentence) and shove it into a neural network (which requires a fixed number of inputs)?

  • @anushka.narsima · 11 months ago

    Generic NNs take only fixed-size inputs, but handling variable lengths is one of the specialities of these types of models! RNNs (the older models used for NLP) were created back in the '80s mainly to address this issue, along with memory being important for sequences. LSTMs and now transformers came in to solve the issues with RNNs.
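    A minimal sketch of the padding-plus-mask trick that lets fixed-shape tensors carry variable-length sentences, assuming PyTorch (the token ids and PAD id are made up):

    ```python
    import torch

    PAD = 0
    batch = [[5, 9, 2], [7, 4]]                # two "sentences" of different lengths
    max_len = max(len(s) for s in batch)
    ids = torch.tensor([s + [PAD] * (max_len - len(s)) for s in batch])
    pad_mask = ids.eq(PAD)                     # True where padding
    print(ids)       # tensor([[5, 9, 2], [7, 4, 0]])
    print(pad_mask)  # tensor([[False, False, False], [False, False,  True]])
    # nn.TransformerEncoder accepts this as src_key_padding_mask, so attention
    # simply ignores the padded positions.
    ```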

  • @albertkwan4261 · 11 months ago

    This is the pinnacle performance of training.

  • @steriowang · 1 month ago

    Actually, I'm interested in the handwriting presentation style. How is it made?

  • @garfocarro · 1 year ago

    Is the fact that he is able to write mirrored text incredible, or is there a simple trick here?

  • @IBMTechnology · 1 year ago

    There is a trick. Hint: he's not left-handed.

  • @vaibhavthalanki6317 · 1 year ago

    It's flipped and rotated, done through editing.

  • @leihejun844 · 1 year ago

    @IBMTechnology Yeah, I thought he couldn't be left-handed.

  • @leihejun844 · 1 year ago

    @vaibhavthalanki6317 It's not glass; it's a mirror, I think.

  • @somehhakarima5408 · 1 year ago

    @IBMTechnology I thought he was left-handed.

  • @raghavendrasooda5368 · 7 months ago

    Sir, will you give me a research topic on transformers?

  • @thirtydays1982 · 1 year ago

    How do I use transformers on a new language pair?

  • @didyouknowamazingfacts2790 · 1 year ago

    Transformer technology is the reason you see AI everywhere.

  • @user-il9vr9oe7b · 9 days ago

    How do you get loads of loss on a neural network in given ways for analytics?

  • @sabahshams1582 · 1 month ago

    Hi, what does "autoregressive language model" mean?

  • @EarningsApps · 1 year ago

    Can we use transformers instead of spaCy for NER?
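    A hedged sketch of what that swap can look like, assuming the Hugging Face `transformers` library and letting the pipeline pick its default NER model:

    ```python
    from transformers import pipeline

    # Named-entity recognition with a transformer instead of spaCy's statistical models.
    ner = pipeline("ner", aggregation_strategy="simple")
    print(ner("IBM was founded in Armonk, New York."))
    # -> a list of dicts with entity_group, score, word, and character offsets
    ```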

  • @udayvadecha2973 · 3 months ago

    You are mirror writing. Great skill 🤩

  • @ibrahemahmed6399 · 1 month ago

    I think he writes on the glass normally and the camera captures it backwards, so they flip the footage in editing so the written words show normally.

  • @markadyash · 2 years ago

    How can a text algorithm (the transformer) work in the image domain, like vision transformers over CNNs?

  • @ChocolateMilkCultLeader · 2 years ago

    Transformers are being used in many ways. For example, you could take a bunch of vectors (representing image features extracted from convolutions) and feed them into transformers to decode as text. This gives you a lot of power, combining the NLP and computer vision domains.
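    A rough sketch of that pattern, assuming PyTorch; the stand-in convolutional backbone and all shapes are illustrative, not a production captioning model:

    ```python
    import torch
    import torch.nn as nn

    conv = nn.Conv2d(3, 256, kernel_size=3, padding=1)     # stand-in CNN backbone
    layer = nn.TransformerDecoderLayer(d_model=256, nhead=8)
    decoder = nn.TransformerDecoder(layer, num_layers=2)

    img = torch.randn(1, 3, 32, 32)                        # one RGB image
    memory = conv(img).flatten(2).permute(2, 0, 1)         # (1024, 1, 256): features as a sequence
    tgt = torch.randn(10, 1, 256)                          # embedded caption tokens so far
    out = decoder(tgt, memory)                             # decoder cross-attends to image features
    print(out.shape)                                       # torch.Size([10, 1, 256])
    ```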

  • @strongsyedaa7378 · 1 year ago

    @ChocolateMilkCultLeader Generic features or specific?

  • @ChocolateMilkCultLeader · 1 year ago

    @strongsyedaa7378 What do you mean?

  • @hobonickel840 · 1 year ago

    Does this mean they can fix my ADHD? I don't quite know why, but all this transformer tech helps me understand my own glitched mind better.

  • @1HARVEN1 · 1 year ago

    Hey, it's the guy from the beer channel...

  • @jonasgk86 · 1 year ago

    Lol, I find the banana joke funny :)

  • @anatolydyatlov963 · 2 months ago

    How are you able to write a mirror image of the words so effortlessly? :O

  • @SciFiFactory · 20 days ago

    So is it like ... a layered, parallelized autoencoder?

  • @Damodharanjay · 11 months ago

    Aged like wine!

  • @daniel_tenner · 1 month ago

    "Before too long, they might even be able to come up with jokes that are actually funny." Two years later, here's the banana joke ChatGPT-4 (already a year old) came up with for me:
    > Why did the banana go to the doctor?
    > Because it wasn't peeling well!
    I think we can call that a win.

  • @sudarshinirasa6913 · 2 years ago

    Can we use this method to detect outliers in time series data?

  • @TheShawMustGoOn · 2 years ago

    While you can use transformers for time series, I'm not sure why you'd want a network architecture to look for outliers instead of regularizing and letting the network learn to ignore them during optimization.

  • @coffle1 · 1 year ago

    Transformers are a bit overkill for anomaly detection. A lot of the time, more traditional methods perform better and faster (especially if the resources for training the models are constrained, like not having dedicated chips or enough training data).
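    A minimal sketch of that "traditional methods" point: a rolling z-score flags time-series outliers with no neural network at all (NumPy assumed; the window and threshold are arbitrary):

    ```python
    import numpy as np

    def zscore_outliers(x, window=30, threshold=3.0):
        """Flag points more than `threshold` std-devs from the trailing window."""
        flags = np.zeros(len(x), dtype=bool)
        for i in range(window, len(x)):
            w = x[i - window:i]
            mu, sigma = w.mean(), w.std()
            if sigma > 0 and abs(x[i] - mu) > threshold * sigma:
                flags[i] = True
        return flags

    rng = np.random.default_rng(1)
    x = np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 0.1, 500)
    x[250] += 3.0                              # inject an obvious anomaly
    print(np.flatnonzero(zscore_outliers(x)))  # index 250 should be among the flags
    ```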

  • @BigAsciiHappyStar · 3 months ago

    Why did the attention mechanism NOT cross the road? Because it was paralyzed!😜😁 BTW did I hear that part correctly near the end of the video?

  • @ramielkady938 · 1 month ago

    Things are judged by their appearance. And this video looks way way better than it actually is. That explains the views.

  • @punk3900 · 1 month ago

    This was prophetic. I wonder whether at the time you realized that the transformer would revolutionize the world.

  • @tuapuikia · 1 year ago

    Where can I summon an Autobot?

  • @Optimus_Prime_The_Legend_alive · 2 months ago

    I just have to say it: TRANSFORMERS, MORE THAN MEETS THE EYE!

  • @ZelForShort · 1 year ago

    In reference to the article-summary example: how does that work? How does the program know to summarize the article and not continue it? Also, how do you go from language processing to playing chess or other games or functions?

  • @damianliew5243 · 1 year ago

    I'm not a machine learning expert so I can't verify the validity of this answer, but from my POV the answer to these "how does the program do X instead of Y" questions generally depends on:
    1. The actual architecture of the model (in this case, a transformer)
    2. The input data it's trained on (text, versus maybe piece types and board positions for a chessboard)
    3. The output data it's trying to predict (a summary, versus the next words in an article)
    Because such supervised/semi-supervised learning models learn from labelled data (to a certain extent, for semi-supervised learning), all the model is really doing is mapping an input to an output. Think of it like a maths graph (which is exactly what it is): given a dataset with many points, you want to find a "best fit" line that models the rough trend accurately without over- or underfitting. Machine learning models do this, but on many axes (due to the use of vectors, some with an insane number of dimensions). Of course there are many other things like hyperparameters, activation functions, loss functions, and nuances of each architecture, but hopefully this gives you a good understanding of ML in general. (A toy "best fit" example follows.)
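    A toy version of that "best fit" picture, assuming NumPy (the polynomial degrees are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y = 2 * x + 1 + rng.normal(0, 0.1, 20)  # noisy samples of y = 2x + 1

    line = np.polyfit(x, y, deg=1)          # two parameters: captures the trend
    wiggly = np.polyfit(x, y, deg=9)        # ten parameters: starts chasing the noise
    print(line)                             # roughly [2.0, 1.0]: slope and intercept
    ```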

  • @xerxel69 · 1 year ago

    A summary is a continuation of the text, in that case. Consider a webpage which has an article and then, at the bottom, says "here is a summary of the key points we learned above" and goes on to summarise. This is an example of the kind of content the AI is trained on. So with some prompt engineering, you can ask your question in such a way that the answer comes from completing the text! It's like magic! 🙂
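    A tiny illustration of that prompt-engineering idea; the variable names are made up, and only the shape of the prompt matters:

    ```python
    article = "..."  # full article text goes here
    prompt = article + "\n\nHere is a summary of the key points we learned above:\n"
    # Fed to an autoregressive language model, "continue the text" and
    # "summarize the article" become the same task.
    ```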

  • @andrewnorris5415 · 1 year ago

    @xerxel69 Yeah, articles do often contain a summary section at the end. Or parts of an essay say, "To summarise so far". Not sure if it can learn this totally unsupervised. My guess is summaries are a popular feature, so they train the model specifically to look for them and learn from them in a focused way. Not sure though.

  • @zzador · 1 year ago

    Transformers: More than meets the eye...

  • @tartariazo5237 · 11 months ago

    IBM: Next-Level Tech explained. Chat: How does he write backwards on that invisible board?

  • @IBMTechnology · 11 months ago

    See ibm.biz/write-backwards

  • @sohambhattacharjee951 · 9 months ago

    Now it can indeed write funny banana jokes!!

  • @zvxcxczv · 1 year ago

    This dude can write in reverse. So awesome.

  • @andrewnorris5415 · 1 year ago

    Ha, it looks the right way around to him. The final image is flipped in the video we see. Fun trick.

  • @Bond-zj2ku · 2 months ago

    I searched for "transformer" in machine learning with those Transformers in mind, and the video starts with the same joke.

  • @norbertfeurle7905 · 1 year ago

    Do I get this right, that a transformer is a special case of a state machine, designed to learn or update its weights on demand, and still general enough to cover most data? Wouldn't an FPGA be optimal for implementing such a state machine in flip-flops, so that you could generate at 100 MHz?

  • @nestorlopez7071 · 1 year ago

    It really all boils down to performing matrix multiplications. GPUs are best at that. An FPGA can be a GPU if it wants to (:

  • @saatvikmangal7994 · 4 months ago

    Latest update on the banana humor of AI: "Why did the banana go to the doctor? Because it wasn't peeling well!" - GPT-3.5, 11 January 2024, 23:06 IST

  • @festusbojangles7027 · 1 year ago

    the joke was just too deep for your puny mind to get

  • @emirsahin7167 · 3 months ago

    Is he writing in reverse so we can see it correctly?

  • @rongarza9488 · 5 months ago

    Correct me if I'm wrong, but it seems that translating a document would require a human doing quality control right before publishing. Transformers are impressive in how close they come to mimicking humans, but they seem to be the Great Pretenders. Now, how does that QC step get implemented in real time?

  • @ChocolateMilkCultLeader · 2 years ago

    Are you guys open to guest speakers?

  • @dagreatcow · 1 year ago

    Optimus Prime

  • @sang-suangam9772 · 2 years ago

    the banana … skidded …

  • @normacenva · 2 years ago

    it wanted to split

  • @sohailpatel7549 · 8 months ago

    Instead of the content, I started thinking about how this guy is writing in the opposite direction 😭😂😂 Is this some AI trick or for real?!

  • @IBMTechnology · 8 months ago

    See ibm.biz/write-backwards

  • @calvink.4511 · 10 months ago

    They've got better jokes now. 😂

  • @MrofficialC · 6 months ago

    You do realize the joke about the chicken crossing the road is a suicide joke, right? He wanted to get to "the other side"?

  • @watherby29 · 11 months ago

    And with this simple idea, civilization ends. No, kidding: the AI will be so smart it will leave us alone, as we will be like bugs to it.

  • @MikeHowles · 1 year ago

    I came here to understand how on earth he writes backwards or what camera trickery I am obviously missing, LOL.

  • @IBMTechnology · 11 months ago

    See ibm.biz/write-backwards

  • @MikeHowles · 11 months ago

    @IBMTechnology LOL, thanks!!! I suppose it shouldn't surprise me there is a video about that. Very cool and elegant technique.

  • @animalfrendo · 11 months ago

    But how does the human write backwards?

  • @IBMTechnology · 11 months ago

    See ibm.biz/write-backwards

  • @michaelcharlesthearchangel · 1 year ago

    I don't like people ripping me off, whether IBM or Google.

  • @samahirrao · 1 month ago

    Indian SMEs might be able to create this and become a unicorn. Easily.

  • @user-jl5gj4mv1z · 11 days ago

    I didn't get it.

  • @danhetherington1335 · 1 month ago

    I don't think the joke was that bad. Picture Meatwad from Aqua Teen Hunger Force, but very pale beige.

  • @exploradorexplorador7404 · 1 year ago

    The banana joke is an instance of an “anti-joke”… just like the chicken joke.

  • @randomcheese1719 · 1 month ago

    It doesn't "come up" with a thing; it regurgitates what it's learned. It's nothing but a copy machine, made out to be much more than it really is by the AI hype machine.

  • @amudhanbakthavathsalu5308 · 3 months ago

    Not very descriptive... it is for those who are already studying sequencing, encoder-decoder, etc. in depth.

  • @amudhanbakthavathsalu5308 · 3 months ago

    Maybe I am not smart enough to understand...

  • @talhaeneskoksal4893 · 1 year ago

    Why do they always translate an English sentence into French in every video that explains transformers :D

  • @alexandrav1020 · 8 months ago

    🤣😂

  • @curtisnewton895 · 1 year ago

    OK, but how about a more detailed explanation?

  • @vincent_hall · 1 year ago

    Well, jokes are hard. Kids take several years to learn how to be funny.

  • @gohardorgohome6693 · 1 year ago

    KIDS ARE OBSOLETE, AI IS BETTER

  • @valentingorrin4541 · 5 months ago

    I can't concentrate; I can't understand how he manages to write backwards.

  • @jayseph9121 · 7 months ago

    are you writing backwards in real time? because if so..... 🤯

  • @IBMTechnology · 7 months ago

    See ibm.biz/write-backwards

  • @jayseph9121 · 7 months ago

    @IBMTechnology One of the few times in my life I wish to be lied to 😂

  • @roodrigato · 7 months ago

    wait, does this guy write backwards?

  • @IBMTechnology · 7 months ago

    See ibm.biz/write-backwards

  • @robertweekes5783 · 1 year ago

    The joke would’ve worked if it was a potato. Pretty close though.

  • @davejones542 · 3 months ago

    Ask it why the potato crossed the road.

  • @dabrowsa · 3 months ago

    Did I miss something? This didn't seem to give any clue as to how transformers actually work.

  • @quantarank · 10 months ago

    Your skills in writing backwards were really distracting.

  • @IBMTechnology · 10 months ago

    See ibm.biz/write-backwards for how it's done

  • @carlowood9834 · 11 months ago

    You didn't really explain anything.

  • @blkscreen15 · 3 months ago

    didn't find it helpful to conceptually understand transformers

  • @zbeast · 1 month ago

    "To reach the other bunch." - ChatGPT 3.5