homevideotutor

In this channel you will find sample videos that may help you understand basic concepts of Mathematics, Signal Processing, Communications, etc., using very short video clips. Our emphasis is on giving the viewer the most essential information in the shortest possible video time.

G007 A E IKIMASU

G005 OMITTING SUBJECTS

G002 A WA B DESU KA.

G009 ARE AND DORE

G008 KORE AND SORE

G006 A NI KIMASU

G004 A NO B DESU

G003 A WA B DEWA ARIMASEN

G001 A WA B DESU

G002 Scholas tic toc

G001 Scholas tic toc

Bernoulli Random Variable

Comments

  • @misterbreze · 2 months ago

    Hidden Markov Models (HMMs) are employed in both weather forecasting and speech recognition to model systems where the true state is not directly observable but can be inferred from observable data. Here is how the parallels between the two applications unfold:

    - States and observations: In weather forecasting, the states might be the actual weather conditions (e.g., sunny, rainy, cloudy), which are not directly observable if you are indoors; the observations are indirect indicators of the weather, such as someone bringing an umbrella inside (indicating it might be raining). In speech recognition, the states are the phonemes or words being spoken, which you try to infer; the observations are the acoustic signals or features extracted from the speech waveform.
    - Transition probabilities: Both applications rely on transition probabilities between states. For weather, this might be the likelihood of transitioning from a sunny day to a rainy day; for speech, the likelihood of one phoneme following another in a given language.
    - Observation probabilities: These are the probabilities of making a certain observation given the current state. For weather, it could be the probability of seeing someone with an umbrella given that it is raining; in speech recognition, the probability of observing a certain acoustic feature given the current phoneme or word.
    - Sequential data: Both applications deal with sequential data, where the goal is to make sense of a sequence of observations over time: in weather forecasting, a sequence of days with different weather indicators; in speech recognition, a sequence of acoustic signals over the duration of the speech.
    - Inference: The core of using HMMs in both fields is to infer the most likely sequence of hidden states (weather conditions or spoken words/phonemes) from the observed data (umbrella usage or acoustic signals)*. This involves computations like the forward-backward algorithm for calculating probabilities across the sequence (see the sketch after this list).
    - Model training: In both applications, HMMs need to be trained on historical data to learn the transition and observation probabilities. In weather forecasting, this might involve historical weather data and observations; in speech recognition, a dataset of spoken words or phonemes and their corresponding acoustic features.
    - Application of Bayes' theorem: Both use cases employ Bayes' theorem to update the probability estimates for the states based on new observations, allowing the model to make more accurate predictions as more data becomes available.

    * In the context of speech recognition with HMMs, spoken words are considered "hidden" for a few key reasons:

    - Indirect observation: What we directly observe are acoustic signals, such as sound waves or their digital representations through features like spectral coefficients. These signals are influenced by many factors, including the speaker's articulation, accent, the surrounding environment's acoustics, and background noise. The actual words these sounds represent are not directly observable in the signal; they must be inferred from the complex patterns within the acoustic features.
    - Variability in speech: There is a high degree of variability in how words are pronounced, not just among different speakers but also by the same speaker under different conditions (e.g., emotion, speaking rate, health). This variability makes it difficult to map a specific acoustic pattern directly to a specific word without sophisticated models that can account for such differences.
    - Phonetic overlap: Many phonemes (the smallest units of sound in speech) can sound similar and can be produced in slightly different ways depending on the context in which they are spoken (i.e., the surrounding phonemes). This coarticulation effect means the boundaries between phonemes, and hence words, are not always clear-cut in the acoustic signal, so the words are "hidden" in the sense that they must be decoded from overlapping and interdependent sound units.
    - Sequence and context dependency: The meaning and identification of spoken words often depend on the context and the sequence in which sounds appear. A sound snippet that could be interpreted as one word in a certain context might be part of a different word in another; the sequential nature of speech and its context-dependent interpretation make the actual spoken words a hidden state that must be decoded from the sequence of observed sounds.
    - Cognitive processing: Finally, understanding spoken language involves complex cognitive processing, in which the brain interprets the sounds based on a multitude of linguistic and non-linguistic cues. This is another layer separating the physical observation of sound from the "hidden" comprehension of words.
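
    A minimal sketch of the inference step mentioned above, for a two-state umbrella world; all probability values here are illustrative assumptions, not figures from the video:

        # Forward algorithm for a two-state HMM (hidden states: Rainy, Sunny;
        # observations: umbrella / no_umbrella). All numbers are assumed for illustration.
        states = ("Rainy", "Sunny")
        start_p = {"Rainy": 0.5, "Sunny": 0.5}                      # initial distribution
        trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},           # P(next state | current state)
                   "Sunny": {"Rainy": 0.3, "Sunny": 0.7}}
        emit_p = {"Rainy": {"umbrella": 0.9, "no_umbrella": 0.1},   # P(observation | state)
                  "Sunny": {"umbrella": 0.2, "no_umbrella": 0.8}}

        def forward(observations):
            # alpha[s] = P(observations so far, current state = s)
            alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
            for obs in observations[1:]:
                alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states) * emit_p[s][obs]
                         for s in states}
            return sum(alpha.values())          # P(whole observation sequence)

        print(forward(["umbrella", "umbrella", "no_umbrella"]))     # ~0.12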

  • @misterbreze · 2 months ago

    🎯 Key takeaways for quick navigation:

    - 00:00 🎙️ Overview of speech recognition basics: Speech recognition traditionally employs template-based pattern recognition. Statistical techniques, like Linear Predictive Coding (LPC), are utilized but may be insufficient for certain applications because they neglect time dependency. Hidden Markov Models (HMMs) offer a more sophisticated statistical method for speech recognition, accommodating time dependency effectively.
    - 01:42 🔄 Understanding discrete-time Markov processes: Discrete-time Markov chains involve transitions between states based on probabilities. First-order chains depend solely on the previous state, simplifying probability calculations. Transition probabilities must sum to 1 for each node in the Markov chain.
    - 06:11 🌧️ Weather prediction example with a Markov model: Weather state transitions with probabilities constitute a basic Markov model. Probability calculations determine the likelihood of specific weather sequences. Markov models excel when the states are directly observable. (A short sketch of this calculation follows this comment.)
    - 10:15 🔍 Introduction to Hidden Markov Models (HMMs): HMMs extend Markov models to handle scenarios where observations are probabilistic functions of hidden states. They involve a doubly embedded stochastic process, where the underlying process is hidden and observed only through another stochastic process. Useful for scenarios like speech recognition, where the observations (sounds) are visible but the states (words) are hidden.
    - 17:26 🌂 Applying HMMs to weather prediction: Analogizing the problem of inferring outside weather from observed umbrella events to a hidden Markov model scenario; utilizing a modified Bayes' rule to calculate the probability of hidden states given observations; solving a specific weather prediction problem using hidden Markov models and probability calculations.
    - 26:41 📊 Bayesian inference and Markov models: Understanding Bayesian inference and its application in simplifying complex expressions; utilizing the first-order Markov assumption to simplify calculations; applying variations of Bayes' rule to solve probability expressions effectively.
    - 32:44 🔍 Application of Hidden Markov Models (HMMs) in speech recognition: Drawing parallels between HMMs used in weather forecasting and in speech recognition; describing the speech waveform as observations and phonemes as hidden states; explaining the process of using acoustic input vectors to predict words and sentences.
    - 45:01 🗣️ Building isolated word recognizers with HMMs: Outlining the methodology of building isolated word recognizers using distinct HMM models for each word; discussing the training process involving observation sequences and model parameter estimation; highlighting the recognition process, where model likelihoods are calculated for all possible words.
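
    As a companion to the 06:11 takeaway, a minimal sketch of the observable Markov chain calculation; the transition numbers are assumptions for illustration, not the video's values:

        # First-order Markov chain: P(s1..sn) = P(s1) * product of P(s_k | s_{k-1}).
        init = {"sunny": 0.6, "rainy": 0.4}                 # assumed initial probabilities
        trans = {"sunny": {"sunny": 0.8, "rainy": 0.2},     # assumed transition probabilities
                 "rainy": {"sunny": 0.4, "rainy": 0.6}}

        def sequence_probability(seq):
            p = init[seq[0]]
            for prev, cur in zip(seq, seq[1:]):
                p *= trans[prev][cur]   # first-order assumption: only the previous state matters
            return p

        print(sequence_probability(["sunny", "sunny", "rainy", "rainy"]))  # 0.6*0.8*0.2*0.6 = 0.0576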

  • @venkat157reddy · 11 months ago

    Super explanation. Thank you so much.

  • @venkat157reddy · 11 months ago

    Very nice explanation.

  • @mohsintufail5334 · a year ago

    Can anybody tell me how we find the values of the alphas?

  • @ahnafsamin3777 · a year ago

    Thanks for the effort! But you are literally reading out the slides, which is a bit annoying. The video can be improved.

  • @prasadadavi6618 · a year ago

    Kudos to you sir

  • @toutankhadance · a year ago

    Excellent!

  • @aritraroy3220 · a year ago

    At 13:46, if w = (1, 0), then why is it a vertical line?

  • @ibrahimalotaibi2399 · a year ago

    The term "smart grid" is stuffed into the title without any reason.

  • @asmitakumari5439 · a year ago

    This factor tree is solved wrong.

  • @sanjaytiwari685 · a year ago

    👍👍

  • @joshuac9142 · a year ago

    How do you mathematically find the support vectors?

  • @victor_ajadi · a year ago

    Please make a video on how you derived the mapping function.

  • @yahiagamal937 · a year ago

    Hi sir, thank you for making life easy. I have two questions, if you don't mind: is the formula 6 - x1 + (x1 - x2) ≥ 2 fixed? I.e., in our example the red points make a boundary of max 2: is that the reason you chose ≥ 2, or is this a fixed equation regardless of the coordinates of the points?

  • @rainclouds4346 · 2 years ago

    Can you please explain where that equation a1S1S1 + a2S2S2, etc., was derived from? I haven't found anyone who could explain that.

  • @imranimmu4714 · 2 years ago

    Try hearing yourself!

  • @maged4087 · 2 years ago

    Why did you add the bias 1?

  • @moeezranksol6925 · 2 years ago

    Where does the 6 come from?

  • @dragonball-dragonatoraa8395 · 2 years ago

    Please show me how to get the weights. How are they calculated if not given?

  • @lvuindia3507 · 2 years ago

    Please make a video on how we calculate the alpha values of each sample in the training set.

  • @hamidawan687 · 2 years ago

    The fundamental question is still unanswered by every tutor I have found so far: how do we assume/specify/determine the support vectors? It is of course implausible to pick support vectors just by visualizing the data points. How can that work in multi-dimensional space? Please guide me. I need to program SVMs from scratch, without using any library, in Matlab or .Net.

  • @armylove9733 · 2 years ago

    As per my understanding, a support vector is a point that lies on the hyperplane: first a plane is created between the two datasets, and then whichever points lie on that plane's equation are called support vectors.
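
    A quick numeric check of this idea: for a trained linear SVM, the support vectors are the points sitting on the margin planes w·x + b = ±1. The toy dataset below is an assumption for illustration:

        # Find the points lying on the margin planes w.x + b = +/-1 of a
        # linear SVM; these are the support vectors. Toy data, assumed.
        import numpy as np
        from sklearn.svm import SVC

        X = np.array([[1., 1.], [2., 2.], [2., 0.], [4., 4.], [5., 4.], [4., 6.]])
        y = np.array([-1, -1, -1, 1, 1, 1])
        clf = SVC(kernel="linear", C=1e6).fit(X, y)     # large C approximates a hard margin

        margins = X @ clf.coef_.ravel() + clf.intercept_        # w.x + b for every point
        print(X[np.isclose(np.abs(margins), 1.0, atol=1e-3)])   # matches clf.support_vectors_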

  • @chandersital513 · 2 years ago

    Thank you very much

  • @alexlo7708 · 2 years ago

    The clip didn't explain where those parameter values w come from.

  • @ambarishphysics · 2 years ago

    Here is my step-by-step proof of Thevenin's Theorem: kzread.info/dash/bejne/lX2alrihp7uWgc4.html

  • @revaronsdemise5366 · 2 years ago

    Thanks

  • @lordmegatron5695 · 2 years ago

    First comment after 9 years 😱😱😱

  • @mathsbymathi5714 · 3 years ago

    Really great

  • @mohammedshaker13 · 3 years ago

    I see that classifying these new points must use the bias offset, as follows: W·X + b ... Does this affect the final result?

  • @saeidreza6736 · 3 years ago

    It emphasizes HMMs too much and the speech recognition part less; I guess that has been left for Part II, which is not available. The voice quality, especially in the second half of the clip, is not good.

  • @RabindranathBhattacharya · 3 years ago

    Why has the bias been taken as 1? Won't the result change if the bias is changed?

  • @maged4087 · 2 years ago

    I have the same question. Did you find the answer?

  • @guoyixu5793 · 3 years ago

    Only a few special cases are mentioned, where the solution is a line parallel to either the x or the y axis; I don't know if this solution works for other, more general cases. Actually, I think the solution form is not correct: it is not the right formulation if you derive the optimization solution from the KKT conditions. The w vector is a linear combination of the support vectors, but the augmented w vector is not a linear combination of the augmented support vectors. At least I think so now, and I can't prove that they are equivalent, so I think the solution provided in this video is wrong; it just happens to work for the special cases in the video. If someone can prove that the solution in the video is correct, please correct me. The correct solution is in the MIT video here: kzread.info/dash/bejne/kYSrysuQqKuxaNI.html
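
    A concrete check of the KKT fact referred to above (w = sum over support vectors of alpha_i * y_i * x_i), using scikit-learn; the toy dataset is an assumption for illustration:

        # Verify that the linear-SVM weight vector is a linear combination of the
        # support vectors: w = sum_i (alpha_i * y_i) * x_i. Toy data, assumed.
        import numpy as np
        from sklearn.svm import SVC

        X = np.array([[1., 1.], [2., 2.], [2., 0.], [4., 4.], [5., 4.], [4., 6.]])
        y = np.array([-1, -1, -1, 1, 1, 1])
        clf = SVC(kernel="linear", C=1e6).fit(X, y)

        # dual_coef_ stores alpha_i * y_i for the support vectors only.
        w_from_duals = clf.dual_coef_ @ clf.support_vectors_
        print(np.allclose(w_from_duals, clf.coef_))             # True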

  • @j.s.nithyashree6843 · 3 years ago

    It's perfect

  • @j.s.nithyashree6843 · 3 years ago

    Thank you. This is perfect.

  • @RahulKumar-dj8bt · 3 years ago

    This was great. Please provide more solutions on Naive Bayes and on Linear and Logistic Regression; very helpful.

  • @congphuocphan · 3 years ago

    At 2:16, could you explain how to select the support vectors S1, S2, S3 computationally? We can recognize them by observation, but I think we need an automatic way to decide which points should be considered support vectors among many data points.

  • @SwavimanKumar · 3 years ago

    Did you find an answer to this elsewhere? I have the same doubt. Can you help, please?

  • @nafassaadat8326 · 3 years ago

    Thank you, sir. One thing: how do we choose the mapping function?

  • @brandonwarfield5611 · 3 years ago

    Great videos, but if you don't have headphones you can't hear a thing.

  • @homevideotutor · 3 years ago

    Thank you. Yes, it is best to use headphones.

  • @parasiansimanungkalit9876 · 3 years ago

    I hope this will end my thesis revision 😭😭😭

  • @maxmustermann2707 · 3 years ago

    What if we use a third voltmeter? Would it improve the result? How can we prove it mathematically?

  • @asrarilwan4636 · 3 years ago

    Here you only used two variables (x1, x2), but if there are more than two variables, how can we plot in 2D?

  • @dr.md.atiqurrahman2748 · 3 years ago

    No comments. Just Wow.

  • @sitinurfatiha8731 · 3 years ago

    This really helped me! Thank you!!

  • @venkataramaiahmusnuuri7266 · 3 years ago

    Excellent explanation in simple language and math

  • @CEDMAYAKUNTLAPRASANNAKUMAR · 3 years ago

    Sir, you took 1 and -1 because there are two classes. Suppose we have three classes; then what values should we take?

  • @justateenager9773 · 3 years ago

    Thank you so much, sir. You made my day.

  • @kelixoderamirez · 3 years ago

    Permission to learn, sir.

  • @nikhilphadtare7662 · 4 years ago

    It's "tilde", not "tidle". Good video.

  • @trexmidnite · 4 years ago

    Very stupid teacher.

  • @ravindrasengar2897 · 4 years ago

    It's wrong, brother.