Luis R. Izquierdo

Comments

  • @galileodelcurto9300 · 23 days ago

    Excellent video!! Super clear explanation!

  • @LuisRIzquierdo · 19 days ago

    Very glad you found it useful! Thanks a lot for letting me know!

  • @mateuszmotyl7217 · a month ago

    Could you provide a proof of the insertion neighborhood size, please?

  • @LuisRIzquierdo · a month ago

    Hi, did you see the note in the description? "Erratum: In the slide shown at 12:30, the neighborhood size using the insertion operator is (n-1)^2. Sorry about it!"
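
    A minimal sketch of the counting argument behind that (n-1)^2 figure, assuming the insertion operator acts on permutations of n elements:

```latex
% Insertion move: take the element at position i and reinsert it at position j != i.
\[
\underbrace{n(n-1)}_{\text{ordered choices of }(i,\,j)}
\;-\;
\underbrace{(n-1)}_{\text{adjacent moves counted twice}}
\;=\; n^2 - 2n + 1 \;=\; (n-1)^2
\]
% Moving the element at position i to position i+1 yields the same permutation as
% moving the element at position i+1 to position i, so each of the n-1 adjacent
% moves is counted twice and must be subtracted once.
```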

  • @mateuszmotyl7217 · a month ago

    @@LuisRIzquierdo Oh, I apologize, I just noticed it. Anyway, I was searching the internet for a justification of this neighborhood size and came across your material. By the way, there is a lot of useful information here; I am currently watching the entire series.

  • @LuisRIzquierdo · a month ago

    @@mateuszmotyl7217 No need to apologize at all, it was my mistake :D. Glad you find the videos useful!

  • @rohitrmohanty · 6 months ago

    Hi, I just finished your series and it was great! It really cleared up a lot of concepts for me, so I can't thank you enough. I just had a question: how come some lectures (like the Andrew Ng ones) use the cost function with the term 1/2 instead of 1/(2m)? Is it because here it's taking the average? So can this term vary from algorithm to algorithm? (Sorry if it's a basic question, but I couldn't find information on the difference.)

  • @LuisRIzquierdo · 6 months ago

    Thanks a lot for your kind words. Your question is actually very good. The most important thing to realize is that the solution to the cost minimization problem is the same regardless of whether you use 1/2, 1/(2m), or 1, since these are all constants. Thus, the value of the parameters that minimize one of those versions will also minimize the other versions. It’s just a change of scale. Having said that, the usual convention is to use 1/(2m), and I’m pretty sure Andrew Ng uses it too. The reason for dividing by m is to have an average, as you anticipated, so the value of the cost function is not affected much by the number of data points we have. The reason for, in addition, dividing by 2 is to have no constants in the gradient (since that ½ cancels with the 2 that comes from the exponent in the cost function). Since we can choose any constant, we use those that have some “benefit”, even if it is small. Hope this is clear.
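
    A short worked version of that cancellation, using the standard linear-regression notation (hypothesis h_theta, m training examples); the 1/2 disappears when the squared term is differentiated:

```latex
\[
J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
\qquad\Longrightarrow\qquad
\frac{\partial J}{\partial \theta_j}
  = \frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\, x_j^{(i)}
\]
% Minimizing c * J(theta) for any constant c > 0 (1/2, 1/(2m), or 1) yields the
% same minimizer theta*; only the scale of the cost changes.
```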

  • @rohitrmohanty · 6 months ago

    @@LuisRIzquierdo Yes, now it makes sense. Thanks so much again! Muchas gracias!

  • @carmencristea9680 · 6 months ago

    Great set of videos. Thanks a lot!

  • @4AlexeyR · 7 months ago

    First of all, I have to thank you for the clear explanation and the references to books and articles. Excellent work. I'm going deeper into the domain and want to dive in from the beginning to fill some gaps I'm sure I have as a practitioner.

  • @SpinCrash · 7 months ago

    Thank you so much for these! They really helped.

  • @ThaoNguyen-qh2gx · 7 months ago

    Thank you so much for your video. It really helps me understand my lesson better.

  • @LuisRIzquierdo · 7 months ago

    Glad it was helpful!

  • @ankoosh · 8 months ago

    Preparing for my end-of-semester paper on Machine Learning, and I really have to say this whole playlist is awesome. Thank you for explaining everything in such lucid language.

  • @LuisRIzquierdo · 8 months ago

    Thank you so much for such a nice comment. I'm very glad you found them useful. You made my day!

  • @americafilmsandentertaim-lf6zx · 8 months ago

    Thanks for this very informative video!

  • @Supachokk · 9 months ago

    Thank you so much. Amazing!

  • @LuisRIzquierdo · 9 months ago

    Thanks! Glad you liked it! :D

  • @toast4726 · 10 months ago

    This was a fantastic series; it improved my understanding of metaheuristics and made some of the concepts I wasn't quite grasping finally click. Thank you very much, Luis!

  • @LuisRIzquierdo · 10 months ago

    Thank you for such a beautiful message! I really appreciate it and I am extremely glad you found the videos useful!

  • @algorithmo134 · 10 months ago

    Excellent video!

  • @LuisRIzquierdo · 10 months ago

    Thanks a lot! :D

  • @avijitdey992 · 11 months ago

    Exceptional: to the point and precise. I was so confused after watching a very deep explanation of this topic and felt like an idiot. Thank you a lot.

  • @LuisRIzquierdo · 11 months ago

    Your comment made my week, thanks so much!!!

  • @the-ghost-in-the-machine1108 · a year ago

    Thanks, I appreciate your work!

  • @tonathiualfonsovelazquezmo1661 · a year ago

    Hi, how do you obtain the slope and the intercept of the model with the lag_12 variable?

  • @LuisRIzquierdo · a year ago

    Hi, the model with lag_12 does not have a constant slope in this chart (as you can see in the image), and the intercept does not make sense because you can only compute lag_12 from period 13 onwards.

  • @tonathiualfonsovelazquezmo1661 · a year ago

    @@LuisRIzquierdo Indeed. I phrased the question badly. How did you obtain, for example, the value of 1.07?

  • @tarikabaraka2251 · a year ago

    In statistics, logistic regression is a type of regression analysis used to predict the outcome of a categorical variable as a function of independent (predictor) variables. It is useful for modelling the probability of an event occurring as a function of other factors.
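
    For reference, the model described in that comment is usually written as follows (a single predictor x is shown for simplicity; beta_0 and beta_1 are the coefficients to be estimated):

```latex
\[
\Pr(Y = 1 \mid x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x)}}
\qquad\Longleftrightarrow\qquad
\log\frac{\Pr(Y = 1 \mid x)}{1 - \Pr(Y = 1 \mid x)} = \beta_0 + \beta_1 x
\]
```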

  • @carlosmunoz4774 · a year ago

    What a great explanation!

  • @lazytocook · a year ago

    Why would you expect the test error to be very high when the variance is high and the training error is low?

  • @LuisRIzquierdo · a year ago

    The short answer would be that variance is a component of the test error, so when variance is high, generally you can expect the test error to be high. A more elaborate, intuitive and informal explanation is the following. Recall that if the variance is high, this means that our fit is very sensitive to small fluctuations in the training set. This means that if we changed the training set, we would generally get a very different model that would predict differently for any specific instance. Variance in this context is actually the variance of our estimations. If we have high variance, our estimations (for the same input) vary a lot if we change the training set. And any of these training sets is, in principle, equally valid, so any of our (different) estimations are equally legitimate... but they are widely different. This suggests high test error.
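
    For reference, this intuition matches the standard bias-variance decomposition of the expected test error under squared loss (assuming y = f(x) + noise with variance sigma^2); the variance term is exactly the sensitivity to the training set described above:

```latex
\[
\mathbb{E}\bigl[(y - \hat{f}(x))^2\bigr]
  = \underbrace{\bigl(f(x) - \mathbb{E}[\hat{f}(x)]\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\bigl[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\bigr]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
\]
% The expectations are taken over training sets (and noise) for a fixed input x.
```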

  • @lazytocook · a year ago

    Thank you. Very well explained.

  • @user-gc3bl9nd9k · a year ago

    Hello sir, can the brute-force method be referred to as a "conventional" method?

  • @LuisRIzquierdo · a year ago

    For the purpose of this video, I would say that brute force is the simplest conventional method, yes.

  • @mdfarhantasnimoshim1641 · a year ago

    Great explanation!

  • @ponime · a year ago

    Hi, this is super helpful. Do you have a link to the template that we could use?

  • @LuisRIzquierdo · a year ago

    Hi, is this what you mean? www.dropbox.com/s/vd6v6cxlhtyxac3/template.xlsx?dl=0

  • @sinus_hiphop · a year ago

    You brought enlightenment to my mind, inner peace to my soul and relief to my sanity with this video. Thank you so much for this excellent explanation. God bless from Poland!

  • @THAMIZHMANIM · a year ago

    How do I download this software?

  • @LuisRIzquierdo · a year ago

    luis-r-izquierdo.github.io/EvoDyn-3s/

  • @THAMIZHMANIM · a year ago

    Thank you bro ❤

  • @CarlosEnrique84 · a year ago

    Hi. Thank you very much for your video. The link with the Mark Hall information is down... can you share it again with another link?

  • @LuisRIzquierdo · a year ago

    Hi Carlos, thanks for noticing that. It seems like the whole wiki.pentaho.com is down. I guess they'll fix it soon. In the meantime, you may want to start reading this summary of Mark's excellent material: www.dropbox.com/s/zudgm1ko5yog72j/time-series.zip?dl=0

  • @LuisRIzquierdo · a year ago

    Got it! pentaho-public.atlassian.net/wiki/spaces/DATAMINING/pages/293700841/Time+Series+Analysis+and+Forecasting+with+Weka

  • @CarlosEnrique84 · a year ago

    @@LuisRIzquierdo Thank you so much, Luis! The information is most valuable!!!

  • @deeper_soundfy5528 · a year ago

    Hi, your way of teaching is very interesting. However, I have been researching in several places and cannot find an answer to the following problem. I have a network trained on daily sales data (to give an example), covering a history from January 2010 to December 2019. OK! Now, suppose I save my model. What I want to know is: how could I make projections or predictions for February 2020? Suppose I have the sales data for January 2020, and I want my model (which is supposed to detect patterns) to give me a projection for February based on those data. I read elsewhere that the model in production necessarily needs the whole history to make the prediction, which seems impractical and computationally expensive to me: feeding a model that is already "ready" the whole history from 2010 to January 2020... I hope I have managed to explain myself. Regards!

  • @nirvanamendivil · a year ago

    This is great! Thank you so much!

  • @hayatadairatv653 · a year ago

    Is the silver heuristic the same as a metaheuristic?

  • @deepL0 · a year ago

    I got it. Thanks

  • @johnferace2534 · a year ago

    What a great explanation, thank you very much!!!

  • @zupay1 · a year ago

    Excellent!

  • @fernandofreire3671 · a year ago

    Congratulations on the videos, straight from Brazil.

  • @dimaiyassou5898 · a year ago

    Very well organized and sufficiently explained.

  • @mukhtarisah334 · a year ago

    All your videos are awesome: sweet explanations of complex theories. Thank you!

  • @whom4751 · a year ago

    wowwww

  • @rollychairs · a year ago

    Very concise and good explanation!

  • @iankhoojiaern · a year ago

    Thank you, Phil Dunphy

  • @galalatreasures28 · a year ago

    Hello sir, great presentation, thanks a lot. I have a question: are these slides available to the public?

  • @LuisRIzquierdo · a year ago

    Hi, thanks for the kind words. Yes, the link to the presentation is in the description of the playlist: www.dropbox.com/s/zl0kfxqmmmlvhss/Introduction%20to%20metaheuristics.pdf

  • @assemshaker2528 · a year ago

    @@LuisRIzquierdo Sorry, but this is not the scheduling problem; we don't need the metaheuristics one, we need the scheduling problem PDF.

  • @LuisRIzquierdo · a year ago

    @@assemshaker2528 My bad, sorry! www.dropbox.com/s/9v7lyh9n5hftxnn/The%20scheduling%20problem.pdf?dl=0

  • @assemshaker2528 · a year ago

    @@LuisRIzquierdo Not at all, thank you very much!

  • @dr.m.venkatanarayanadeancr1287 · a year ago

    Can you advise me in the case of a rig scheduling problem?

  • @dr.m.venkatanarayanadeancr1287 · a year ago

    Can you help me with the case where machines perform the same operation? What can I do then?

  • @dr.m.venkatanarayanadeancr1287 · a year ago

    Thank you so much for your support.

  • @dr.m.venkatanarayanadeancr1287 · a year ago

    Informative lectures and a great contribution. Thank you.

  • @zevan4b · a year ago

    Thanks - I'm not sure if you explain it in a later video, but it would be interesting to see how to deal with the case where there is more than one machine of the same type, i.e. Job X can be done on machine M1 or M2.

  • @EmilioGarcia_ · 2 years ago

    Great lecture, thanks for sharing it. Quick question: in the IF statement that compares "Best" and "S", what does Quality measure? I thought of it in terms of a routing problem: I pick one route randomly and set it as "Best", then another route as "S", and I see Quality as picking the most economical one (if we look for the route with the minimum cost). Am I right with this interpretation of Quality(*)? Thanks for your feedback!

  • @LuisRIzquierdo · 2 years ago

    Indeed. Quality measures how good a candidate solution is. The better the solution, the higher the quality. Thus, in your problem, the lower the cost of the route, the greater its quality. In any case, you can interpret the piece of code "If Quality(S) > Quality(Best)" simply as "If S is better than Best". Thus, in your routing problem, you can replace that piece of code with "If Cost(S) < Cost(Best)" (note I replaced > with <). Use quality only if it helps you. The important point is that you update the value of "best" iff you find a solution better than the current "best".
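
    A minimal sketch of that update logic in Python, assuming a small routing problem; the distance data and function names (cost, quality) are illustrative, not taken from the video:

```python
import random

# Hypothetical symmetric distances between four cities (illustrative data only).
DIST = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8,
}

def cost(route):
    """Total length of the route, visiting the cities in the given order."""
    return sum(DIST[tuple(sorted(pair))] for pair in zip(route, route[1:]))

def quality(route):
    """Higher quality = cheaper route, so quality is simply the negated cost."""
    return -cost(route)

cities = ["A", "B", "C", "D"]
best = random.sample(cities, len(cities))     # random initial candidate solution
for _ in range(1000):
    s = random.sample(cities, len(cities))    # a real metaheuristic would generate s from best
    # "If Quality(S) > Quality(Best)" is the same test as "If Cost(S) < Cost(Best)":
    if quality(s) > quality(best):
        best = s                              # keep the best (cheapest) route found so far

print(best, cost(best))
```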

  • @portiseremacunix · 2 years ago

    Thanks! It is the only series of videos about metaheuristics on YouTube!

  • @sunerise6045 · 2 years ago

    Thank you!

  • @Cozalo09 · 2 years ago

    Very good videos!! Much appreciated! Greetings from Chile!

  • @LuisRIzquierdo · 2 years ago

    Thank you very much :D. I'm really glad they are useful!

  • @lit22006 · 2 years ago

    Does this mean that if we get the training error close to the test error we could achieve model robustness?

  • @LuisRIzquierdo · 2 years ago

    I'm not sure I understand the question... you would have to define model robustness. In any case, I don't think so. Extremely simple models (e.g. always predicting the same number for every instance) will have a training error similar to the test error (both very high), but that would not be a good model at all, in general.

  • @cansuvural359 · 2 years ago

    Best playlist on YouTube for this subject. Thank you so much, Professor!

  • @LuisRIzquierdo · 2 years ago

    Thank you, Cansu! Glad you found it useful!

  • @SkN097 · 2 years ago

    You explain things very well; at my university I have had very few professors who teach this well!