Introduction To Optimization: Gradient Free Algorithms (2/2) Simulated Annealing, Nelder-Mead

Science and Technology

A brief overview of Simulated Annealing, the Nelder-Mead method, and a survey of various metaphor-inspired and biologically inspired optimization algorithms. This video is part of an introductory series on optimization.
TRANSCRIPT:
Simulated Annealing
Annealing is a process in which metal or glass is heated, and then allowed to slowly cool at a controlled rate. Annealing changes the properties of a metal, making it more ductile and workable. Simulated annealing uses the idea of slowly cooling metal in an optimization algorithm. An initial guess is taken. Then another solution is randomly guessed near the previous solution. This new solution is evaluated to see if it is better than the previous solution. Initially, some bad solutions are accepted to allow the algorithm to explore, perhaps finding its way out of a valley towards a better solution. As time goes on, the “temperature” is reduced, and fewer bad solutions are accepted. Eventually, only solutions that are better than the previous one are accepted. It’s a bit like a ball bouncing on a vibrating surface. As the vibration decreases, the ball bounces less and less. Eventually it will come to a stop, hopefully at the minimum point.
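As a rough illustrative sketch (not code from the video), the accept-or-reject loop described above can be written in a few lines of Python. The objective function, step size, starting temperature, and exponential cooling schedule below are all assumptions made for the example:

import math
import random

def simulated_annealing(f, x0, step=0.5, temp=1.0, cooling=0.95, iters=1000):
    # Start from the initial guess and remember the best solution seen so far.
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    for _ in range(iters):
        # Randomly guess a new solution near the previous one.
        x_new = x + random.uniform(-step, step)
        fx_new = f(x_new)
        # Always accept improvements; accept worse solutions with a
        # probability that shrinks as the temperature drops.
        if fx_new < fx or random.random() < math.exp((fx - fx_new) / temp):
            x, fx = x_new, fx_new
            if fx < best_fx:
                best_x, best_fx = x, fx
        temp *= cooling  # "cool" the system so fewer bad moves are accepted
    return best_x, best_fx

# Example: a bumpy one-dimensional function with several local minima.
f = lambda x: x**2 + 10 * math.sin(3 * x)
print(simulated_annealing(f, x0=8.0))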
Nelder-Mead
The Nelder-Mead downhill simplex algorithm is another commonly used gradient free algorithm. It searches for an optimal solution using a simplex, which in two dimensions is a triangle. The simplex flip-flops towards its goal, growing, shrinking, and changing its shape according to a set of rules.
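The video does not show code, but in practice the method is usually called from a library such as SciPy. The test function and starting point below are assumptions made for this illustrative usage sketch:

import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a standard test problem with a curved valley.
def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# Nelder-Mead needs only function values, no derivatives.
result = minimize(rosenbrock, x0=np.array([-1.5, 2.0]), method="Nelder-Mead")
print(result.x)  # converges near [1, 1], the known minimum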
Other Metaphor Inspired Algorithms
Along with those already mentioned, many other gradient free optimization algorithms exist. A large number of these are based on analogies to natural processes, such as:
Ant colony optimization (Dorigo, 1992)
Particle swarm optimization (Kennedy & Eberhart 1995)
Harmony search (Geem, Kim & Loganathan 2001)
Artificial bee colony algorithm (Karaboga 2005)
Bees algorithm (Pham 2005)
Glowworm swarm optimization (Krishnanand & Ghose 2005)
Shuffled frog leaping algorithm (Eusuff, Lansey & Pasha 2006)
Imperialist competitive algorithm (Atashpaz-Gargari & Lucas 2007)
River formation dynamics (Rabanal, Rodríguez & Rubio 2007)
Intelligent water drops algorithm (Shah-Hosseini 2007)
Gravitational search algorithm (Rashedi, Nezamabadi-pour & Saryazdi 2009)
Cuckoo search (Yang & Deb 2009)
Bat algorithm (Yang 2010)
Flower pollination algorithm (Yang 2012)
Cuttlefish optimization algorithm (Eesa, Mohsin, Brifcani & Orman 2013)
Artificial swarm intelligence (Rosenberg 2014)
Many of these algorithms contain similar mathematical ideas concealed by differing analogies.
Conclusion
In summary, a large variety of gradient free optimization methods exist, each with its own strengths and weaknesses. Some of the most commonly used methods are genetic algorithms, particle swarm optimization, simulated annealing, and the Nelder-Mead simplex algorithm.
In general, gradient free methods are easier to implement, and have the advantage of not requiring derivatives. This means that they can be applied to problems that are discrete, discontinuous, or otherwise nondifferentiable. Many of the algorithms also do a good job of searching for a global solution rather than a local solution.
On the downside, gradient free algorithms tend to be very slow for large problems. Many of the algorithms are also stochastic, meaning that they are based on chance and will not always find the same solution. Finally, there is no guarantee that these algorithms will return an optimal solution: while the solution found might be better than what you started with, you won't know if it is the best solution possible.

Comments: 17

  • @jayktharwani9822 (3 years ago)

    This is the best series of videos to actually understand the optimization procedure.

  • @anniehamalian3077 (1 year ago)

    you can check www.youtube.com/@decisionbrain to know more about optimization

  • @grimonce (6 years ago)

    Really helpful and neat explanation, thanks for the video :)

  • @twisties.seeker (4 years ago)

    Thank you for an amazing explanation.

  • @alvarorodriguez8575 (6 years ago)

    Hello, thank you for the video. I have a question: does the same classification apply to multi-objective optimization, or is it different? I'm especially interested in multi-disciplinary and multi-objective optimization problems for buildings. Thank you!

  • @domaminakoi5630 (2 years ago)

    Do you have experience on when to use which of the gradient free algorithms? PSO has worked best for me in the past. I haven't been successful implementing simulated annealing with good results yet.

  • @adelsayyahi9665 (1 year ago)

    Thank you, what is the name of the Algodoo toolbox you used for simulated annealing?

  • @vi5hnupradeep (3 years ago)

    Thank you so much 💯

  • @mitjadrab6529 (5 years ago)

    What is the simulation program shown at 1:30?

  • @alphaopt2024 (5 years ago)

    Hi Mitja, I used Algodoo for the simulation: www.algodoo.com/

  • @andrea-mj9ce (1 year ago)

    The Nelder-Mead method is not explained long enough to understand it.

  • @metaprog46and2 (3 years ago)

    Nice video. Your explanation syncs well with the graphics (which are awesome themselves - which design / video maker software did you use?)

  • @alphaopt2024 (3 years ago)

    PowerPoint, if you can believe it. You can do a lot with the morph transition.

  • @metaprog46and2 (3 years ago)

    @@alphaopt2024 Wow. Color me surprised. I'll have to get over my natural disdain for PPT lol. Thanks for the response!

  • @where-is-my-mind. (5 years ago)

    Gradient-based optimisation also doesn't guarantee an optimal solution.

  • @where-is-my-mind. (5 years ago)

    @Dat Boi When you say "guarantee an optimal solution", I presume you mean a global optimum. I don't know where you've learnt that, but that's not correct. If you have a paper to back that up, please reference it so I can take a look too. To start with, there are an infinite number of optimisation problems, and gradient-based optimisation can only solve a handful of them. Real-world problems are rarely differentiable, which is why derivative-free or non-gradient optimisation algorithms emerged. Now, going back to what you said about second-order optimisation: it is more "efficient" in terms of convergence compared to first-order optimisation; however, the optimality of the solution has nothing to do with the speed of convergence. Just like first-order methods, second-order methods are also very likely to get stuck in local minima, so they don't guarantee an optimal solution. In fact, I've seen many studies where better results were obtained optimising a specific problem with gradient descent instead of a second-order method. So to wrap up, it's not as simple as you've stated.

  • @parg2244 (4 years ago)

    @@where-is-my-mind. Hi! Could you recommend a book about optimization?
