SciPy Beginner's Guide for Optimization

Science & Technology

scipy.optimize.minimize is demonstrated for solving a nonlinear objective function subject to general inequality and equality constraints. Source code is available at apmonitor.com/che263/index.php...
Correction: The "product" at 0:30 should be "summation". The code is correct.
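
For reference, the problem solved in the video (a standard benchmark, Hock-Schittkowski #71) can be set up as below. This is a minimal sketch assuming the objective, constraints, bounds, and initial guess described above; the video's own source code is at the apmonitor.com link.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # x1*x4*(x1 + x2 + x3) + x3
    return x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2]

def product_constraint(x):
    # inequality: x1*x2*x3*x4 >= 25, written as f(x) >= 0
    return x[0] * x[1] * x[2] * x[3] - 25.0

def sum_sq_constraint(x):
    # equality: x1^2 + x2^2 + x3^2 + x4^2 = 40, written as f(x) = 0
    return 40.0 - np.sum(np.asarray(x)**2)

x0 = np.array([1.0, 5.0, 5.0, 1.0])   # initial guess (objective = 16, infeasible)
bnds = [(1.0, 5.0)] * 4               # 1 <= xi <= 5
cons = [{'type': 'ineq', 'fun': product_constraint},
        {'type': 'eq',   'fun': sum_sq_constraint}]

res = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
print(res.x, res.fun)   # optimum near [1.0, 4.74, 3.82, 1.38], objective ~17.014
```

The initial guess evaluates to 16 but violates both constraints, which is why the feasible optimum of about 17.014 is higher — a point raised repeatedly in the comments below.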

Comments: 300

  • @markkustelski8113 · 4 years ago

    This is soooooo awesome. Thank you! I have been learning python for months....this is the first time that I have solved an optimization problem

  • @apm · 4 years ago

    I'm glad it helped. Here are additional tutorials and code: apmonitor.com/che263/index.php/Main/PythonOptimization You may also be interested in this online course: apmonitor.github.io/data_science/

  • @pabloalbarran7572 · 5 years ago

    I've been watching a lot of your videos and they have helped me a lot, thank you very much.

  • @Krath1988 · 3 years ago

    You just saved me like a century trying to figure out how to unroll these summed values. Thanks!

  • @apm · 3 years ago

    Great to hear!

  • @sriharshakodati · 4 years ago

    Thanks for the short and clear explanation. Thanks for clearing the fear I had of equations and coding. Thanks for being so kind to reply to every query in the comments section. Thanks for designing the course and uploading the videos. Thanks for helping many students like me. You increased my confidence and changed my perception that I can do more. I wish you get what you deserve. You will be remembered :) Thank you, Sir!!!!

  • @first.samuel · 6 years ago

    Wow! Best intro ever... looking forward to more stuff like this

  • @MrSkizzzy · 6 years ago

    This gave me hope for my thesis. Thank you sir :D

  • @anjalabotejue1730 · 4 years ago

    I wish I could have some of that for mine lol

  • @MrSkizzzy · 4 years ago

    @@anjalabotejue1730 everything worked out in the end 😊 Wish you much success

  • @TopAhmed1 · 3 years ago

    Last time I visited his channel it was for the control theory lectures during my undergraduate; now I've finished my MSc.

  • @MrSkizzzy · 3 years ago

    @@TopAhmed1 Im so close haha 😁

  • @daboude8332 · 5 years ago

    Thank you for your help. We have an assignment next week and the last question is exactly the same ! :)

  • @apm · 5 years ago

    I'm glad that it helped.

  • @LittlePharma · 3 years ago

    This video saved me a big headache, thank you!

  • @SolvingOptimizationProblems · 4 years ago

    Many thanks sir for such a great and clear instruction on solving LP in python.

  • @prabhatmishra5667 · 2 years ago

    I have no words to thank you, love from India

  • @johnjung-studywithme · 1 year ago

    quick and to the point. Thank you!

  • @mobi02 · 1 year ago

    Thank you for clear and simple explanation

  • @JanMan37 · 3 years ago

    Thanks so much. So much clearer now!

  • @galaxymariosuper · 3 years ago

    absolutely amazing tutorial!

  • @AJ-et3vf · 2 years ago

    Very helpful and educational sir. Thank you so much for your videos

  • @jiansenxmu · 6 years ago

    It’s very clear, thanks!

  • @amirkahinpour547 · 6 years ago

    Thank you veryy much for this. Awesome one

  • @yudongzhang5324 · 1 year ago

    Awesome Video! Very clear and helpful!!! Thanks!

  • @polcasacubertagil1967 · 1 year ago

    Short and concise, thanks

  • @user-mk3yl5fe4m · 2 years ago

    I wish I would have found you 1-hour earlier. Thank you! So clear and helpful. Ahhhh

  • @franciscoangel5975 · 7 years ago

    At 0:39 I think you must write a "summation" of squared terms, sum x_i^2 = 40, not a product.

  • @apm · 7 years ago

    +Francisco Angel, you are correct. At 0:39 it is the summation of the squares that equals 40, not the product. The constraint above (x1*x2*x3*x4

  • @apm · 7 years ago

    ...the product constraint is >=25.

  • @franciscoangel5975 · 7 years ago

    APMonitor.com yeah, but you wrote that the product of the squared variables equals 40, and the constraint is that the sum of the variables squared is 40; check time 0:39 in the video. You used big-pi notation (product) instead of big-sigma (summation).

  • @lazypunk794 · 5 years ago

    He already admitted his mistake bruh

  • @tigrayrimey6418 · 3 years ago

    @@franciscoangel5975 But, he did it correctly in the implementation.

  • @mahletalem · 4 years ago

    Thank you! This helped a lot.

  • @eso907 · 5 years ago

    I'm new to the optimization field and found the video very useful. Is the example mentioned in the video QCQP (quadratically constraint quadratic programming?)

  • @apm · 5 years ago

    The objective function isn't quadratic and neither is the product constraint so it would just be classified as a general nonlinear programming problem.

  • @hayat.tech.academy · 2 years ago

    Thank you for the explanation. There is a slip of the tongue at 0:30 where you said "the product"; it's the summation of x_i squared. Thanks

  • @apm · 2 years ago

    Thanks for catching that!

  • @cristobalgarces1675 · 2 years ago

    Great video. Very helpful for the implementation of the code. I had a quick question though. How would one extract the design point history from each iteration? I ask because I have to do a tracking of the history as it converges to the minimizer. Thanks a ton!

  • @apm · 2 years ago

    You can set the maximum iterations to 1, 2, 3, 4... and then record the x values as it proceeds to the optimum. If you code the optimization algorithm yourself then you can track the progress as well: apmonitor.com/me575/index.php/Main/QuasiNewton
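
A lighter-weight alternative to re-running with increasing iteration limits: scipy.optimize.minimize accepts a callback that is invoked once per iteration with the current point, so the full iterate history can be recorded in a single solve. A sketch on a toy function of my own choosing:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # simple convex test function with minimum at (3, -1)
    return (x[0] - 3.0)**2 + (x[1] + 1.0)**2

history = []                      # one entry per iteration
def record(xk):
    history.append(np.copy(xk))   # copy: the solver may reuse the array

res = minimize(f, x0=[0.0, 0.0], method='SLSQP', callback=record)
print(len(history), res.x)        # iterate trail ending near (3, -1)
```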

  • @AK-nw2fg · 3 months ago

    It's great 👍 Thanks for simple learning.

  • @sayanchakrabarti5080 · 4 years ago

    Thank you sir for this demo. Could you please let me know if there is any implementation for finding Pareto-optimal points for multi-objective optimization problems (e.g. two variables) using Scipy?

  • @apm · 4 years ago

    I don't have source code for this but there is a book chapter here: apmonitor.com/me575/index.php/Main/BookChapters (see Section 6.5)

  • @jlearman · 5 years ago

    Thank you, very helpful!

  • @constanceyang4886 · 6 years ago

    Great tutorial!

  • @mohammed-taharlaraba1494 · 3 years ago

    Thx a lot for this tutorial! There is something puzzling me about the result though. The solution x you're getting is not a minimizer, since the objective function evaluated at this point is 17, which is greater than the initial guess's objective of 16. Why is that?

  • @apm · 3 years ago

    The initial guess wasn't a feasible solution because it violated a constraint. The final solution is the best objective that also satisfies the constraints. Great question!

  • @24Ship · 2 years ago

    i had the same question, thanks for asking

  • @adamhartman8361 · 2 years ago

    Thanks for this very approachable intro to using Python to do optimization. What book(s) can you recommend specifically for Python and optimization?

  • @apm · 2 years ago

    Here is a free online book on optimization: apmonitor.com/me575/index.php/Main/BookChapters with an accompanying online course.

  • @ps27ps27 · 3 years ago

    Thanks for the video! Does the form of the constraint matter for the method? Should it be in standard form, like for LP?

  • @apm · 3 years ago

    I think it is just the way I showed in the video. You can always reverse the sign of the inequality with a negative sign or there may be an option in the solver to switch it.

  • @LeeJohn206 · 5 years ago

    Awesome! thank you.

  • @sepidehsa5707 · 6 years ago

    How can we compute the errors associated with the fitted parameters?

  • @danielj.rodriguez8621 · 2 years ago

    Thank you, Prof. Hedengren. Noting that this video was posted five years ago, do you think in 2022 there is still a case for using SciPy for optimization tasks instead of using GEKKO? In particular, GEKKO’s MINLP does not have an equivalent within SciPy. Is there any optimization task where SciPy would still be required instead of using GEKKO?

  • @apm · 2 years ago

    Yes, solvers such as scipy.optimize.minimize are very useful when the equations are black-box, meaning that there are function evaluations but there is no access to get the equations for automatic differentiation.

  • @danielj.rodriguez8621 · 2 years ago

    @@apm Thanks for that valuable tip. Currently dealing with in-line multiphase flow measurement (MPFM) calculations; in particular, trying to optimize the system configuration per competing criteria (e.g. minimize measurement uncertainty, minimize acceptable pressure loss (both OPEX concerns), and minimize COGS (CAPEX and gross margin concerns)). MPFM systems quickly result in de facto black-box objective functions.

  • @fajarariefpermatanatha2584 · 8 months ago

    thank you sir, god bless you

  • @ishgho · 4 years ago

    Is there any way to keep some of the variables constant? They appear in the objective function, but we don't want to solve for those values. Or is the only way to keep them with the same values for lb and ub in bounds?

  • @apm · 4 years ago

    If they are constants then you can define them as floating point numbers that are not part of the solution vector. Otherwise, you can set the lb equal to the ub to force it not to change.
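
A minimal sketch of the lb == ub option from the reply, on an illustrative two-variable problem (the function and values here are mine):

```python
from scipy.optimize import minimize

def f(x):
    return (x[0] - 2.0)**2 + x[1]**2

# x[0] is free on [0, 10]; x[1] is held fixed at 5 by setting lb == ub
bnds = [(0.0, 10.0), (5.0, 5.0)]
res = minimize(f, x0=[1.0, 5.0], method='L-BFGS-B', bounds=bnds)
print(res.x)   # x[0] moves to 2, x[1] stays at 5
```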

  • @ilaydacolakoglu329 · 3 years ago

    Hi thanks for the video. For example like this problem which algorithm (genetic, simulated annealing, particle swarm) can we apply and how?

  • @apm · 3 years ago

    This one is solved with a gradient descent method. Other methods are discussed at apmonitor.com/me575

  • @paulb5044 · 6 years ago

    Hello John. I am wondering if you have any material related to Python and OR (operations research). I am learning OR on one side and Python on the other, and I am wondering if there is any book in which I can combine them. Do you have any material or course that I can access? Looking forward to hearing from you. Paul.

  • @apm · 6 years ago

    Sorry, no Operations Research specific material but I do have a couple courses that may be of interest to you including engineering optimization and dynamic optimization. apm.byu.edu

  • @paulb5044 · 6 years ago

    APMonitor.com Thank you so much. I will check it. Regards Paúl

  • @ghostwhowalks5623 · 4 years ago

    Fantastic video. What is the best way to add bounds for multiple variables; say more than 24 (optimizing across time, 12 months)? I definitely don't want to list them all......perhaps loop through the variable matrix....but hopefully a more efficient way? Thanks!

  • @apm · 4 years ago

    Scipy.optimize.minimize is going to struggle with a problem that size. I'd recommend Python gekko for your problem, especially if you have differential and algebraic equations. apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization

  • @ghostwhowalks5623 · 4 years ago

    @@apm - yeah I've been looking at that! I'll dig in further....I hope there's an easier way to specify bounds for all variables...vs. literally listing them as we see in most of the examples. I'll def be exploring Gekko Thanks for the very prompt response!
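
One common pattern for many identically-bounded variables is to repeat the (lb, ub) pair with list multiplication rather than writing each one out. A sketch with illustrative values:

```python
import numpy as np
from scipy.optimize import minimize

n = 24                      # e.g. one variable per month over two years
bnds = [(0.0, 10.0)] * n    # same (lb, ub) repeated for every variable

# toy objective just to exercise the bounds: pull every x[i] toward 12,
# so the upper bound of 10 becomes active
res = minimize(lambda x: np.sum((x - 12.0)**2), x0=np.full(n, 5.0),
               method='L-BFGS-B', bounds=bnds)
print(res.x[:3])   # each component ends at the upper bound, 10
```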

  • @saeedbaig4249 · 7 years ago

    Great video; it helped me a lot. Just wondering tho, if u wanted your bounds to be, say, STRICTLY greater than 1 and STRICTLY less than 5 (i.e. 1 < x < 5), how would you do that?

  • @rrc · 7 years ago

    Saeed Baig, the numbers 4.9999999 and 5.0000000 are treated almost equivalently when we use continuous values versus integer or discrete variables. That means that a strict inequality bound makes little practical difference for a continuous solver.

  • @saeedbaig4249 · 7 years ago

    O ok. Thanks for the quick reply!

  • @luigiares3444 · 3 years ago

    what happens if there are multiple solutions? I mean, there is more than one set of x1, x2, x3, x4 as a solution... are they all stacked into the final array (which would become multi-dimensional)? thanks

  • @apm · 3 years ago

    The solver reports only the local minimum. Try different initial guesses to get other local minima.
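
The reply's suggestion can be sketched as a simple multistart loop: minimize from several random initial guesses and keep the best successful result. The one-dimensional multimodal function here is mine, chosen so the two basins give different local minima:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # multimodal: a local minimum near x = 0.96 (f ~ -0.71)
    # and the global minimum near x = -1.04 (f ~ -1.31)
    return x[0]**4 - 2.0 * x[0]**2 + 0.3 * x[0]

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-2.0, 2.0, size=1)   # random start in [-2, 2]
    res = minimize(f, x0)
    if res.success and (best is None or res.fun < best.fun):
        best = res                        # keep the lowest local minimum found
print(best.x, best.fun)
```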

  • @sriharshakodati · 4 years ago

    Great video. May I know if the minimize function is basically finding the root of x1x4(x1+x2+x3)+x3=0 equation? This would help a lot. Sorry for dumb question. Thanks!

  • @apm · 4 years ago

    I think the lowest that it can find the objective value is around 17 so it would be x1x4(x1+x2+x3)+x3=17. The constraints prevent it from going zero or into a negative region. The lowest value for each variable is 1 so the minimum that it could be with no other constraints is 1(3)+1=4.

  • @sriharshakodati · 4 years ago

    ​@@apm Thanks for the reply. Got it. Can we give a bunch of non linear equations to minimize function, and get the roots, given the only constraint is the roots being positive? Thanks!

  • @sriharshakodati · 4 years ago

    I think we should have 2 constraints in this case. One is to make sure the function doesn't end up being negative. The other is to make sure the roots are not negative. @APMonitor.com any thoughts would be helpful. Thanks!

  • @apm · 4 years ago

    @@sriharshakodati yes that is possible.

  • @user-gt2th3wz9c · 4 years ago

    Note for someone, who has less or equal inequality constraint. Docs: Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. You need to multiply constraint by -1 to reverse the sign.

  • @apm · 4 years ago

    That is a very good explanation!

  • @zsa208 · 1 year ago

    I was wondering the same. Thanks for the supplementary information.
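
A concrete sketch of that note, with a toy objective of my own: the constraint x1 + x2 <= 4 is rewritten as 4 - x1 - x2 >= 0 so it fits SciPy's non-negative 'ineq' convention:

```python
from scipy.optimize import minimize

def f(x):
    # unconstrained minimum at (3, 3), which lies outside the feasible region
    return (x[0] - 3.0)**2 + (x[1] - 3.0)**2

# x1 + x2 <= 4  multiplied by -1 and rearranged:  4 - x1 - x2 >= 0
con = {'type': 'ineq', 'fun': lambda x: 4.0 - x[0] - x[1]}
res = minimize(f, x0=[0.0, 0.0], method='SLSQP', constraints=[con])
print(res.x)   # constrained optimum near (2, 2)
```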

  • @aliguzel8463 · 5 years ago

    Great video..

  • @m1a2tank · 6 years ago

    can you get a solution to the "maximum ellipse in a certain polygon" problem using the optimize library? for example A(x-x0)^2 + B(y-y0)^2 = 1 s.t. a set of linear constraints which describe a polygon in X-Y space. And I need to get the best x0, y0 which minimize A*B

  • @apm · 6 years ago

    Here is a discussion thread in the APMonitor support group that is similar to your question: groups.google.com/d/msg/apmonitor/MET1rx3Cr4s/iQN9kiPlBgAJ I also recommend this link as a starting point: apmonitor.com/me575/index.php/Main/CircleChallenge

  • @jeolee2022 · 7 years ago

    Thanks a lot.

  • @victorl.mercado5838 · 7 years ago

    Nice job!

  • @danielmarshal2532 · 3 years ago

    Hi, what I noticed: if you put in x0 you'll get 16, and if you use the 'minimize' function it will give you an array whose result is around 17. But doesn't that mean our goal with the 'minimize' function hasn't been fulfilled yet?

  • @apm · 3 years ago

    The initial guess violates the constraints so it isn't a feasible solution. The optimal solution is the minimum of all the feasible solutions.

  • @AmusiclouderH · 5 years ago

    Hello APMonitor, thank you for making such wonderful videos. I have a little problem: I tried adapting the optimization with a nested list, but scipy doesn't seem to understand the structure, so I changed my model to a NumPy array, which it understands but gives me the error "indexing a scalar". If you can show me how to implement the optimization using arrays, that would be really helpful.

  • @apm · 5 years ago

    Check out the Python GEKKO package. You can use list comprehensions to implement arrays. apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization

  • @stakhanov. · 5 months ago

    hi, why is the min objective with the initial guess numbers lower than the min objective at the end? shouldn't the solver return the lowest value of the function?

  • @rrc · 5 months ago

    The initial guess doesn't satisfy the constraints so it isn't a potential solution. The final objective is higher because the solver finds the best combination of decision variables that are the minimum objective and satisfy the constraints.

  • @C2toC4 · 4 years ago

    Why do you use x1, x2, x3, x4 in returning the objective function but then switch to using x[0], x[1], x[2], x[3] in the constraint functions? I'm not trying to be pedantic, but it would help me if it was consistent one way or the other, unless there's a reason they're like that? E.g. could the constraint1(x) function be written as "return x1 * x2 * x3 * x4 - 25" instead (purely to be consistent)? And/or could you write constraint1 in the same way you did for constraint2, using a for loop:

        def constraint1(x):
            product = 1
            for i in range(4):
                product = product * x[i]
            return product - 25

    Or instead write constraint2(x) as you did with the objective function return (and as I did with constraint1(x) above): using x1, x2, x3, x4 instead of x[i]. This way it is consistent and may be easier for people who aren't good coders or mathematicians to follow along. (I apologise if these are bad questions/suggestions - I am not a programmer, just trying to learn, and in this case understand why you wrote things out in 3 different ways instead of 1.) Thank you kindly for any help in understanding (and of course for the video itself - cheers)!

  • @apm · 4 years ago

    Thanks for the helpful suggestions. I used x1, x2, etc because that is how the mathematicians write the optimization problem. Python begins with index 0 so I needed to create an array for solving the problem so x[0] maps to x1. I can see how this is very confusing and I'll try to avoid this difference in future videos.
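
One way to keep both styles consistent (a sketch, not the video's code): unpack the array once at the top of the function so the body reads like the math while the indexing stays 0-based underneath:

```python
def objective(x):
    x1, x2, x3, x4 = x   # x[0] maps to x1, x[1] to x2, and so on
    return x1 * x4 * (x1 + x2 + x3) + x3

print(objective([1.0, 5.0, 5.0, 1.0]))   # 16.0 at the video's initial guess
```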

  • @isaacpemberthy507 · 5 years ago

    Hello APMonitor, great video. A question: how much capacity does Scipy have? Does it have a restriction on the number of variables or constraints? To solve, for example, the vehicle routing problem, does it have the capacity?

  • @apm · 5 years ago

    You can typically do 50-100 variables and constraints with Scipy.optimize.minimize and potentially larger if the problem is mostly linear. If you need something much larger (100000+ variables) then check out Python Gekko (gekko.readthedocs.io/en/latest/).

  • @apm · 5 years ago

    There is no restriction with Scipy but the problem may take a long time. Here is a different scale-up analysis on ODEINT: kzread.info/dash/bejne/ap-smNh8acq6fNI.html

  • @mingyan8081 · 7 years ago

    Does the initial set of x minimize the function? The result of your code shows f(x)=17.01..., which is larger than 16. Why is that?

  • @rrc · 7 years ago

    Ming Yan, good observation! If you check the initial guess, you'll see that the equality constraint (=40) is not satisfied so it isn't a feasible candidate solution. There is a good contour plot visualization of this problem here: apmonitor.com/pdc/index.php/Main/NonlinearProgramming

  • @mingyan8081 · 7 years ago

    thank you for your explanation and references. I will be more careful next time.

  • @rrc · 7 years ago

    Ming Yan, I think this was a great question because many others have the same concern and your question will likely help others too.

  • @YawoviDodziMotchon · 5 years ago

    definitely

  • @saicharangarrepalli9590 · 3 years ago

    I was wondering the same. Thanks for this.

  • @alexandrudavid2103 · 7 years ago

    Hi, do you have any idea why I'm getting this: "C:\Python34\lib\site-packages\scipy\optimize\slsqp.py:341: RuntimeWarning: invalid value encountered in greater bnderr = where(bnds[:, 0] > bnds[:, 1])[0]"? I'm getting the result, but I'm afraid this warning is messing up the results. Edit: it seems to have something to do with the infinite bound. I read somewhere that to specify an infinite bound you have to input None (e.g. b = (0, None)).

  • @apm · 7 years ago

    +Alexandru David, I had this same problem with infinite bounds and 'None'. I get around this by specifying a large upper or lower bound such as 1e10 or -1e10.
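
In current SciPy releases, None on one side of a bound is accepted and means unbounded, and a large finite number as in the reply also works. A small sketch (the function here is illustrative):

```python
from scipy.optimize import minimize

def f(x):
    return (x[0] - 10.0)**2

# lower bound 0, no upper bound: (0, None); a large number like 1e10 also works
res = minimize(f, x0=[1.0], method='L-BFGS-B', bounds=[(0.0, None)])
print(res.x)   # near 10
```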

  • @Pruthvikajaykumar · 2 years ago

    In the end, the fun value turned out to be greater than at the initial guess? Is it optimized? What am I missing?

  • @apm · 2 years ago

    The initial guess is infeasible. It doesn't satisfy the constraints.

  • @levuthi · 2 years ago

    excellent, thanks

  • @LL-lb7ur · 5 years ago

    Thank you very much for your tutorial. Does Scipy minimize function require data?

  • @apm · 5 years ago

    You can use Scipy minimize without data. In this tutorial, there is no data. There is only an objective function and two constraints.

  • @LL-lb7ur · 5 years ago

    Many thanks!

  • @pallavbakshi612 · 6 years ago

    Great! Thanks.

  • @anuragsharma1953 · 5 years ago

    Hello @APMonitor, thanks for this video. I have one doubt: I am trying to use scipy.optimize with a Pandas dataframe. I want to minimize my target column based on other columns. Can you please tell me how to do that?

  • @apm · 5 years ago

    The objective function always needs to have input parameters and return a scalar objective. You can do that with Pandas but you may need to condense your target minimized column with something like np.sum(my_pandas_df['my_column']) or np.sum(my_pandas_df['my_column']**2).

  • @anuragsharma1953 · 5 years ago

    Thanks. I tried the same thing.

  • @anuragsharma1953 · 5 years ago

    But I am stuck on one problem. There are some constraints in my problem (i.e. the sum of a few variables must equal a certain value), but the scipy optimize function is distributing them almost equally.
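
A sketch of the condense-to-a-scalar idea from the reply above, fitting a line to a hypothetical dataframe (the column names and data are made up for illustration):

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# hypothetical data: y is roughly 2*x + 1 with a little noise
df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0],
                   'y': [3.1, 4.9, 7.2, 8.9]})

def sse(p):
    # scalar objective: sum of squared errors over the dataframe columns
    pred = p[0] * df['x'] + p[1]
    return np.sum((df['y'] - pred)**2)

res = minimize(sse, x0=[0.0, 0.0])
print(res.x)   # slope near 2, intercept near 1
```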

  • @ivanmerino3692 · 4 years ago

    I have a question. If we change the guess, for instance to 1 for all values, the script also gives 1 for all variables as the result, and that's totally wrong. So what could be the problem here?

  • @apm · 4 years ago

    scipy.optimize.minimize isn't the best solver in Python. Here are some other options: scicomp.stackexchange.com/questions/83/is-there-a-high-quality-nonlinear-programming-solver-for-python You may also want to try Gekko: apmonitor.com/che263/index.php/Main/PythonOptimization (see Method #3)

  • @ivanmerino3692 · 4 years ago

    @@apm thanks! Very useful

  • @jayashreebehera3197 · 4 years ago

    But how come the minimization gave an objective function higher than before? You see, initially your guess gave this to be equal to 16; after minimization, it was 17.014... Shouldn't minimization minimize the objective function and return points that give a value even smaller than that of the guess?

  • @apm · 4 years ago

    The initial guess didn't satisfy the inequality and equality constraints. It had a better objective function but it was infeasible.

  • @mohammedraza100 · 3 years ago

    Thanks I was wondering the same.

  • @mukunthans3441 · 3 years ago

    Sir, is there any advantage or disadvantage in using existing methodologies such as artificial neural networks, fuzzy logic, and genetic algorithms for optimization of a linear objective function, subject to linear equality and linear inequality constraints, over the Scipy method? I think the Scipy method will provide accurate results in a short time compared to the other methods, since the complexity of a linear problem is low.

  • @apm · 3 years ago

    Gradient based methods like scipy find a local solution and are typically much faster than the other methods.

  • @mukunthans3441 · 3 years ago

    @@apm Thank you sir

  • @javonfair · 4 years ago

    What's the difference between the "ineq' and 'eq' designations on the constraints?

  • @apm · 4 years ago

    They are inequality (e.g. x+y <= 10, rewritten so the constraint function is non-negative) versus equality (e.g. x+y = 10, constraint function equal to zero) constraints.

  • @davidtorgesen2037 · 5 years ago

    I have a question. I attempted to maximize the function but couldn't get it to work right. What do I need to change? Thank you for your help! This is what I did:

        def objective(x, sign=1.0):
            x1 = x[0]
            x2 = x[1]
            x3 = x[2]
            x4 = x[3]
            return sign*(x1*x4*(x1+x2+x3)+x3)

        def constraint1(x):
            return x[0]*x[1]*x[2]*x[3]-25.0

        def constraint2(x):
            sum_sq = 40
            for i in range(4):
                sum_sq = sum_sq - x[i]**2
            return sum_sq

        sol2 = minimize(objective,x0,args=(-1.0,),method='SLSQP',\
            bounds=bnds,constraints=cons)

    But it gave me this (it went beyond constraint1):

        print(sol2)
        fun: -134.733824524366
        jac: array([-45.75387001, -16.64193916, -17.64193916, -36.49635124])
        message: 'Optimization terminated successfully.'
        nfev: 68
        nit: 11
        njev: 11
        status: 0
        success: True
        x: array([4.56763313, 1.66137361, 1.76120447, 3.64344948])

        print(objective(sol2.x,1))
        print(constraint1(sol2.x))
        print(constraint2(sol2.x))
        134.733824524366
        23.69462805071327
        -1.0361844715589541e-10

  • @apm · 5 years ago

    You are missing a few lines. You may also want to check out the Gekko solution and syntax: apmonitor.com/che263/index.php/Main/PythonOptimization Here is a solution to that problem with scipy.optimize.minimize:

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]

        def constraint1(x):
            return x[0]*x[1]*x[2]*x[3]-25.0

        def constraint2(x):
            sum_eq = 40.0
            for i in range(4):
                sum_eq = sum_eq - x[i]**2
            return sum_eq

        # initial guesses
        n = 4
        x0 = np.zeros(n)
        x0[0] = 1.0
        x0[1] = 5.0
        x0[2] = 5.0
        x0[3] = 1.0

        # show initial objective
        print('Initial Objective: ' + str(objective(x0)))

        # optimize
        b = (1.0,5.0)
        bnds = (b, b, b, b)
        con1 = {'type': 'ineq', 'fun': constraint1}
        con2 = {'type': 'eq', 'fun': constraint2}
        cons = ([con1,con2])
        solution = minimize(objective,x0,method='SLSQP',\
            bounds=bnds,constraints=cons)
        x = solution.x

        # show final objective
        print('Final Objective: ' + str(objective(x)))

        # print solution
        print('Solution')
        print('x1 = ' + str(x[0]))
        print('x2 = ' + str(x[1]))
        print('x3 = ' + str(x[2]))
        print('x4 = ' + str(x[3]))

  • @amanjangid6375 · 4 years ago

    Thanks for the video, but what do I do if my objective is maximizing?

  • @apm · 4 years ago

    You can multiply your objective by -1 to switch from minimizing to maximizing.
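
A sketch of that sign trick (the example function is mine): maximize f by minimizing -f, then undo the sign on the reported objective value.

```python
from scipy.optimize import minimize

def f(x):
    # concave function with maximum value 5 at x = 2
    return -(x[0] - 2.0)**2 + 5.0

res = minimize(lambda x: -f(x), x0=[0.0])   # minimize the negative
x_max, f_max = res.x[0], -res.fun           # negate res.fun to recover the max
print(x_max, f_max)   # near 2 and 5
```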

  • @djenaihielhani5677 · 5 years ago

    is the minimization function based on gradient descent?

  • @apm · 5 years ago

    The optimizer will work better if you have a continuous objective function and gradients. scipy.optimize.minimize obtains gradients through finite differences and uses them to determine the next trial point.
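
If exact gradients are available, passing them via jac= skips the finite-difference estimates entirely. SciPy ships the Rosenbrock function and its analytic derivative as a convenient test case:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# rosen_der is the exact gradient of rosen, so no finite differencing is needed
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, jac=rosen_der, method='BFGS')
print(res.x)   # Rosenbrock minimum at all ones
```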

  • @gretabaumann6231 · 4 years ago

    Is there a way to use scipy.optimize.minimize with only integer values?

  • @apm · 4 years ago

    No, it only optimizes with continuous values. Here is a solution with a mixed integer solver (see GEKKO solutions): apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables

  • @manoocgegr1364 · 6 years ago

    hello, thank you for your video. I am wondering how it is possible to solve this problem without an initial guess. I tried to remove the initial guess but, it looks like, an initial guess should be provided as an argument to minimize(). Any advice will be highly appreciated

  • @apm · 6 years ago

    +Manoochehr Akhlaghinia, an initial starting point for the optimizer is always required. If you don't have a good initial guess then I'd recommend starting with a vector of 1s or 0s.

  • @manoocgegr1364 · 6 years ago

    APMonitor.com thank you so much for your reply. Looks like if the initial guess is not good then the optimizer cannot converge to an acceptable solution. For instance, the example you covered in your video does not give the right answer with [0,0,0,0] or [1,1,1,1]. I have migrated from MATLAB to Python. In MATLAB, the Genetic Algorithm can solve any constrained bounded optimization problem even with no initial guess. So robust. I have also noticed there is Differential Evolution in Python, which optimizes without an initial guess, but it can't work with constraints. I'm wondering if there is no such tool in Python at all (a constrained bounded optimizer with no initial guess) and I'd better stop searching for it and focus on finding a robust algorithm to provide initial guesses for current Python optimizers. Any advice would be highly appreciated.

  • @apm · 6 years ago

    +Manoochehr Akhlaghinia, there may be a GA package for Python such as pypi.python.org/pypi/deap but I haven't tried it. I don't know of a package that doesn't require initial guesses. You could also try simulated annealing to generate initial guesses such as apmonitor.com/me575/index.php/Main/SimulatedAnnealing

  • @manoocgegr1364 · 6 years ago

    Thank you. Really appreciate it

  • @muhammadsarimmehdi · 5 years ago

    This is for just two constraints. What if we have N constraints and, due to the way the problem is posed, we cannot use matrices? How do we use a generic constraint function N times in this case?

  • @apm · 5 years ago

    You would add them as additional constraints in the tuple (separated with commas) that you give as an input to the optimizer.
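
When the N constraints follow a pattern, the sequence can be generated in a loop. Note the i=i default argument: it freezes the loop index inside each lambda, since otherwise every constraint would evaluate with the final value of i. A sketch with an illustrative problem:

```python
import numpy as np
from scipy.optimize import minimize

n = 5
# N inequality constraints x[i] >= 1, generated programmatically
cons = [{'type': 'ineq', 'fun': lambda x, i=i: x[i] - 1.0}
        for i in range(n)]

res = minimize(lambda x: np.sum(x**2), x0=np.zeros(n),
               method='SLSQP', constraints=cons)
print(res.x)   # each component driven to the constraint boundary, near 1
```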

  • @forecenterforcustomermanag7715 · 3 years ago

    Easy and lucid style

  • @prof.goutamdutta4346 · 4 years ago

    How does Scipy compare with AMPL, GAMS, AIMMS?

  • @apm · 4 years ago

    Scipy doesn't provide exact derivatives with automatic differentiation, while the others do provide gradients for faster solutions. A modeling language can also help with common modeling constructs. A couple of alternatives for Python are Pyomo and Gekko, both freely available (pip install gekko), while the others you listed have a licensing fee. There is a Gekko tutorial and comparison at apmonitor.com/che263/index.php/Main/PythonOptimization and additional Gekko tutorials at apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization

  • @SatishAnnigeri · 3 years ago

    Am I getting this right? The objective function was 16 with the initial values (1,5,5,1), and after the final solution it was 17.0141. Am I interpreting something incorrectly?

  • @apm · 3 years ago

    The constraints aren't satisfied with the initial guess so it isn't a feasible solution.

  • @SatishAnnigeri · 3 years ago

    @@apm Oh yes, that is right. I was only looking at the bounds and not at the equality constraint. Sorry, my mistake. Thanks for the tutorial and the reply.

  • @jamesang7861
    3 years ago

    can we use Scipy.Optimize.Minimize for integer LP?

  • @apm

    3 years ago

    No, it doesn't do integer variables. Try Gekko: apmonitor.com/wiki/index.php/Main/IntegerProgramming

  • @fffppp8762
    7 years ago

    Thanks

  • @lizizhu1843
    4 years ago

    Could you point out how to possibly write a set of thousands of constraints using matrices and get them ready for scipy.optimize.minimize? In real world, the number of variables can be easily thousands. I'm dealing with a portfolio construction problem and I'm faced with thousands of variables. Thank you!

  • @apm

    4 years ago

    Scipy.optimize.minimize is efficient for problems up to a hundred variables. If you have more, then you probably want to try optimization software that is built for large-scale problems. Here are arrays in Python Gekko: stackoverflow.com/questions/52944970/how-to-use-arrays-in-gekko-optimizer-for-python The gradient-based optimizers can solve problems with 1,000,000+ variables.

  • @lizizhu1843

    4 years ago

    @@apm Thank you so much! I just found out that your website actually has a whole section/department devoted to Gekko Python, which can deal with large scale non-linear optimization. I'll look into it and come back with some feedback.

  • @tracywang1
    5 months ago

    Question: for x1*x2*x3*x4 >= 25, why do we move 25 to the left and not all the x to the right? For x1^2+x2^2+x3^2+x4^2 = 40, why do we move all the x to the right and not 40 to the left?

  • @rrc

    5 months ago

    It just needs to be in the form f(x)>0 for inequality constraints and f(x)=0 for equality constraints. You could go to either side for the equality constraint. For the inequality constraint, you could also go to either side but then just multiply by -1 to reverse the sign to get f(x)>0.
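As a concrete sketch of those two forms, using the constraints from the video, here are both constraint functions written as f(x) >= 0 and f(x) = 0, evaluated at the video's initial guess:

```python
# the video's constraints rewritten in the forms the solver expects
def ineq(x):   # x1*x2*x3*x4 >= 25  ->  f(x) = x1*x2*x3*x4 - 25 >= 0
    return x[0]*x[1]*x[2]*x[3] - 25.0

def eq(x):     # x1^2+x2^2+x3^2+x4^2 = 40  ->  f(x) = 40 - sum(x_i^2) = 0
    return 40.0 - sum(xi**2 for xi in x)

x0 = [1.0, 5.0, 5.0, 1.0]
print(ineq(x0))  # 0.0   -> the guess sits exactly on the inequality boundary
print(eq(x0))    # -12.0 -> the equality is violated at the initial guess
```

This is why the initial objective of 16 is not a valid solution: the equality constraint returns -12 at the guess, not 0.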

  • @user-ou6nw4wm4h
    1 year ago

    Why is the optimized value 17.014 even larger than the initial guess, 16?

  • @apm

    1 year ago

    The constraints are not satisfied with the initial guess. The initial guess is infeasible.

  • @Thimiosath13
    5 years ago

    Nice!!! Any example for Multi-Objective Linear Programming?

  • @apm

    5 years ago

    Here's some linear programming examples apmonitor.com/me575/index.php/Main/LinearProgramming multiple objectives are typically combined into one objective function. If you can't combine them then you may need to create a Pareto Frontier. More information on this is available at the optimization class website.

  • @cramermoraes
    4 years ago

    That's the least squares method!

  • @easonshih4818
    5 years ago

    at 4:15, you said to move everything to the right side for the second constraint, but what if I move everything to the left side and write the program according to that? I did that, and the answer is different from yours. Why is that? With x = [1,5,5,1], constraint2(x) returns -12 in your program, but mine returns 12.

  • @apm

    5 years ago

    If it is an equality constraint then it doesn't matter which side you put it on. If your constraint returns 12 then it appears that you are using the solver initial guess and not the optimized solution. For the optimized solution, it should return 0. Please see apmonitor.com/che263/index.php/Main/PythonOptimization for additional source code and examples.

  • @tigrayrimey6418
    3 years ago

    Sir, I wrote some comments here in response to the recent materials you recommended me to read, but they seem to have been omitted. Am I right?

  • @apm

    3 years ago

    I think you repeated your same question 3 times in the comments. KZread may have deleted them if it thought the repeated comments were someone spamming the discussion thread.

  • @tigrayrimey6418

    3 years ago

    @@apm okay, my case is different. I will repost it. I only mention the term that Prof. and maybe private issues if not possible @YT, I don't know. But, it is not an uncommon practice to express an appreciation for an individual.

  • @dinodino3887
    4 years ago

    Hi, shouldn't the equation for sum_sq be sum_sq = 40 - x[i]**2 instead of what you wrote? I got different results when I compared them.

  • @apm

    4 years ago

    You need to do a summation on x[i]**2 as 40-sum([x[i]**2 for i in range(n)]).

  • @dinodino3887

    4 years ago

    ​@@apm Thank you. Makes sense :) Now a second question: can you explain how exactly constraint1 satisfies the rule? How would it be if it was just >? And if it was <?

  • @apm

    4 years ago

    @@dinodino3887 it is not an integer programming problem so > and >= are the same. The optimizer can move 1e-16 away from that bound and that is the same solution with machine precision. If you change it from >= to <=, the inequality is reversed and the solver will stay on the other side of that bound.

  • @dinodino3887

    4 years ago

    @@apm Hi, thank you

  • @abcde1261
    6 years ago

    Thanks! But what about maximum? Can you explain?

  • @apm

    6 years ago

    Any maximization problem can be transformed into a minimization problem by multiplying the objective function by negative one. Most solvers require a minimized objective function.
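For example, maximizing by minimizing the negated objective (the function here is made up, just to illustrate the sign flip):

```python
from scipy.optimize import minimize

# maximize f(x) = -(x-3)^2 + 5 by minimizing -f(x)
f = lambda x: -(x[0] - 3.0)**2 + 5.0

sol = minimize(lambda x: -f(x), [0.0])
print(sol.x[0])   # ~3.0, the maximizer
print(f(sol.x))   # ~5.0, the maximum value
```

The solver still reports the minimized value of -f(x), so negate `sol.fun` (or re-evaluate f at `sol.x`) to read off the maximum.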

  • @abcde1261

    6 years ago

    APMonitor.com thanks for the help, I understand it now :)

  • @BiscuitZombies
    4 years ago

    How would you do this for integers only?

  • @apm

    4 years ago

    Here is information on solving binary, integer, and special ordered sets in Python: apmonitor.com/wiki/index.php/Main/IntegerBinaryVariables

  • @ismaelben-yelun4243
    2 years ago

    I'm sorry, but it only covers how this method is implemented in Python rather than digging into the method itself (i.e. what *exactly* does the SLSQP method do?)

  • @apm

    2 years ago

    Please see my Design Optimization course and textbook for details on how solvers work: apmonitor.com/me575

  • @ivankontra3446
    4 years ago

    what if it was strictly larger than 25, what would you type then

  • @apm

    4 years ago

    It is the same if you are dealing with continuous variables. The value 25.00000000 and 25.00000001 are considered to be the same if the solver tolerance is 1e-8.

  • @amitshiuly3125
    4 years ago

    Can anyone help me carry out multi-objective optimisation using Python?

  • @apm

    4 years ago

    Here is some help: apmonitor.com/do/index.php/Main/MultiObjectiveOptimization There is also more information in the optimization course: apmonitor.com/me575

  • @jingyiwang4931
    5 years ago

    if x[i] is a large array x[100] for example, do we need to write down all the numbers in the array for the initial guess x0???

  • @jingyiwang4931

    5 years ago

    in the video is x0=[1,5,5,1], if the array is very large, what to do with the initial​ guess?

  • @apm

    5 years ago

    You could just set the initial guess to zero: x0 = np.zeros(4), or to any other number such as 2: x0 = np.ones(4) * 2.0. I recommend keeping the initial guess within the bounds and constraints. You can initialize to any value, especially if you don't have a good idea of the initial guess.
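What the reply describes, sketched for a large x0 (the 100-variable objective here is made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

n = 100
x0 = np.ones(n) * 2.0   # uniform initial guess; no need to type 100 numbers

# hypothetical objective: sum of (x_i - 1)^2, minimized at x_i = 1
sol = minimize(lambda x: np.sum((x - 1.0)**2), x0, method='SLSQP')
print(sol.success)
```

np.zeros(n) or np.full(n, value) work the same way when a different uniform guess is wanted.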

  • @jingyiwang4931

    5 years ago

    Thank you, professor. But does the initial value have an influence on the optimal result we search for? If not, what's the point of setting an initial value?

  • @apm

    5 years ago

    For convex optimization there is no problem setting any initial guess because a local optimizer should find the solution. However, some problems have multiple local minima and the initial guess is very important. Also, for some problems that are convex, the nonlinearity is a problem for the solver and it may not find a solution without a good guess.

  • @jingyiwang4931

    5 years ago

    APMonitor.com Thank you very much!!!Really help a lot!

  • @azad8upt
    6 years ago

    How can we implement "strict less than" constraint?

  • @apm

    6 years ago

    Because you have continuous variables, the < and <= constraints are effectively the same. The solver works to a tolerance (such as 1e-8), so a value within that tolerance of the bound is considered to be on the bound.

  • @MrStudent1978
    5 years ago

    Sir, can you please demonstrate trust region algorithm implementation?

  • @apm

    5 years ago

    Here is some relevant information on the trust region approach: neos-guide.org/content/trust-region-methods

  • @viniciusbotelho9574
    5 years ago

    What are the Lagrange multipliers at the solution?

  • @apm

    5 years ago

    I don't see those reported, at least in these examples: docs.scipy.org/doc/scipy/reference/tutorial/optimize.html They are reported in APMonitor or GEKKO as the file apm_lam.txt when option m.options.diaglevel >=2.

  • @houdalmayahi3538
    4 years ago

    Hi, does anyone know how the SLSQP work? What is the math behind it? How does it pick one solution out of an infinite number of solutions? I'd really appreciate it. I couldn't find an explanation of the method in google. Thanks

  • @apm

    4 years ago

    Here is help on quasi-Newton methods: apmonitor.com/me575/index.php/Main/QuasiNewton This doesn't cover the SLSQP (Sequential Least SQuares Programming) method but it should give you an idea of the types of how local approximations and search directions are calculated.

  • @houdalmayahi3538

    4 years ago

    @@apm Thank you, professor! Do you think that the SLSQP has the same approach as in the Quasi-Newton methods? In other words, would the SLSQP and the Quasi-Newton methods give similar solutions?

  • @apm

    4 years ago

    @@houdalmayahi3538 yes, they should all give the same solution if they satisfy the Karush-Kuhn-Tucker conditions (converge to a local solution): apmonitor.com/me575/index.php/Main/KuhnTucker

  • @houdalmayahi3538

    4 years ago

    ​@@apm Thank you so much!

  • @marcosmetalmind
    4 years ago

    APMonitor.com How did you choose your starting guess for x0?

  • @apm

    4 years ago

    It is just an initial guess. For convex problems, the solution will always be the same if it converges. For non-convex problems, the initial guess can lead to a different solution.

  • @marcosmetalmind

    4 years ago

    @@apm thanks !

  • @pythonscienceanddatascienc4351
    3 years ago

    Hello! I studied your program a lot in Python and a question arose. If by chance, sum_sq = 400.0 and not 40, constraint2 is not satisfied because constraint2 ~ 300.024 and should be 0. Why does the program continue to provide a solution for x1 up to x4 if constraint 2 is not met? Please, I would like your help to interpret this. Thanks, Luciana from Brazil.

  • @apm

    3 years ago

    You need to check the solution status. The solver may have reported that it couldn't find a solution. You can see this by printing the result.success Boolean flag:

        res = minimize( )
        print(res.success)

    The optimization result is represented as an OptimizeResult object. Important attributes are: "x" the solution array, "success" a Boolean flag indicating if the optimizer exited successfully, and "message" which describes the cause of the termination.
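A small sketch of that check on a deliberately infeasible problem (a hypothetical problem, not the one from the video), where the solver cannot satisfy the constraint and reports it through success and message:

```python
from scipy.optimize import minimize

# infeasible by construction: require x >= 5 while bounding x to [0, 1]
con = {'type': 'ineq', 'fun': lambda x: x[0] - 5.0}
res = minimize(lambda x: x[0]**2, [0.5], method='SLSQP',
               bounds=[(0.0, 1.0)], constraints=[con])
print(res.success)   # False
print(res.message)   # explains why the solver stopped
```

Always inspect res.success before trusting res.x; the solver returns its last iterate even when it fails.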

  • @pythonscienceanddatascienc4351

    3 years ago

    ​@@apm first, thanks for your fast answer. I followed your directions and used the function you sent me. However, even placing the value sum_sq = 400.0 in constraint2, res.success = True. I didn't understand why the answer is True if it should be False. I wrote the following code:

        res = minimize(objective, x0, method='SLSQP', tol=1e-6, constraints=cons)
        print(res.success)

    I'm sorry to have to question you again, but I really need your help. Greetings from Brazil. Luciana

  • @apm

    3 years ago

    @@pythonscienceanddatascienc4351 Try this (it gives False):

        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            return x[0]*x[3]*(x[0]+x[1]+x[2])+x[2]

        def constraint1(x):
            return x[0]*x[1]*x[2]*x[3]-25.0

        def constraint2(x):
            sum_eq = 400.0
            for i in range(4):
                sum_eq = sum_eq - x[i]**2
            return sum_eq

        # initial guesses
        n = 4
        x0 = np.zeros(n)
        x0[0] = 1.0
        x0[1] = 5.0
        x0[2] = 5.0
        x0[3] = 1.0

        # show initial objective
        print('Initial SSE Objective: ' + str(objective(x0)))

        # optimize
        b = (1.0,5.0)
        bnds = (b, b, b, b)
        con1 = {'type': 'ineq', 'fun': constraint1}
        con2 = {'type': 'eq', 'fun': constraint2}
        cons = ([con1,con2])
        solution = minimize(objective,x0,method='SLSQP',\
                            bounds=bnds,constraints=cons)
        x = solution.x
        print(solution.success)

        # show final objective
        print('Final SSE Objective: ' + str(objective(x)))

        # print solution
        print('Solution')
        print('x1 = ' + str(x[0]))
        print('x2 = ' + str(x[1]))
        print('x3 = ' + str(x[2]))
        print('x4 = ' + str(x[3]))

  • @pythonscienceanddatascienc4351

    3 years ago

    @@apm I tested it now and it worked. Thank you so much!!! However, I realized that I did not enter the boundary conditions in the minimize command. I forgot. Just a question of interpretation: why is it True if I put the boundary conditions and False if I don't? (With only the restrictions, wouldn't it already give False in both cases?) Thanks again.

  • @apm

    3 years ago

    @@pythonscienceanddatascienc4351 your solution may be unbounded (variables go to infinity) if you don't include the boundary conditions.

  • @alfajarnugraha5741
    3 years ago

    Sir, I have a lot of questions about optimization with Python, and I have a model that I should finish. So can I get your contact, sir?

  • @apm

    3 years ago

    Unfortunately I can't help with the many specific questions each week and the personal requests for support. Here are some resources: apmonitor.com/me575

  • @MrG0CE
    6 years ago

    HOW DOES THE PROGRAM KNOW IF THE INEQUALITY IS >= OR <= ?

  • @apm

    6 years ago

    It is always >=. This is from the documentation at docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html

        minimize f(x)
        subject to  g_i(x) >= 0, i = 1,...,m
                    h_j(x)  = 0, j = 1,...,p

    If you have an inequality constraint that is <=, multiply it by -1 to put it in the >= form.

  • @MrG0CE

    6 years ago

    THANKS ! KEEP UP THE GOOD WORK ! NICE VIDEOS ;)

  • @senbigas9534
    4 years ago

    How to define the constraint, if the constraint is like -10>=x1*x2*x3*x4>=25

  • @apm

    4 years ago

    You'll need to define two inequality constraints. You can flip the sign by multiplying by -1.
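A sketch of that reply with a consistent two-sided constraint, say 10 <= x1*x2*x3*x4 <= 25 (hypothetical bounds, chosen only for illustration): the lower bound is already in >= form, and the upper bound is flipped into >= form by multiplying by -1.

```python
from scipy.optimize import minimize

g = lambda x: x[0]*x[1]*x[2]*x[3]

# a <= g(x) <= b becomes two f(x) >= 0 inequality constraints
lower = {'type': 'ineq', 'fun': lambda x: g(x) - 10.0}          # g(x) >= 10
upper = {'type': 'ineq', 'fun': lambda x: -1.0*(g(x) - 25.0)}   # g(x) <= 25

sol = minimize(lambda x: sum(xi**2 for xi in x), [2.0]*4,
               method='SLSQP', constraints=[lower, upper])
print(g(sol.x))   # stays inside [10, 25]
```

With this made-up objective the solver pushes the product down to the lower bound of 10, which is why both constraints are needed to keep it in the band.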

  • @senbigas9534

    4 years ago

    @@apm ,Thank you sir

  • @HarikrushnaDodiya
    4 years ago

    If I change the initial value, the minimized value changes. Shouldn't the result be unaffected by the initial value?

  • @apm

    4 years ago

    Yes, it may be finding a different local solution. An initial guess is important because it helps the local minimizer to find the global optimal solution.

  • @HarikrushnaDodiya

    4 years ago

    @@apm thanks for replying. How do I select a good initial value then? If I try with a higher initial value, it gives an error. In an actual problem I may or may not have a known or good initial value.

  • @apm

    4 years ago

    @@HarikrushnaDodiya a better solver such as those in Python gekko can help if you don't have a good initial guess or your problem is large. Try Python gekko with this same example problem: apmonitor.com/che263/index.php/Main/PythonOptimization (Method #3)

  • @HarikrushnaDodiya

    4 years ago

    @@apm I want the output as int values. I wrote this code but the conditions are not satisfied by the answer:

        def objective(x):
            return 14*int(x[0])+13*int(x[1])+11*int(x[2])+13*int(x[3])+13*int(x[4])+12*int(x[5])
        def constraint1(x):
            return (int(x[0])+int(x[1])+int(x[2]))-1200
        def constraint2(x):
            return (int(x[3])+int(x[4])+int(x[5]))-1000
        def constraint3(x):
            return (int(x[0])+int(x[3]))-1000
        def constraint4(x):
            return (int(x[1])+int(x[4]))-700
        def constraint5(x):
            return (int(x[2])+int(x[5]))-500

        # optimize
        con1 = {'type': 'ineq', 'fun': constraint1}
        con2 = {'type': 'ineq', 'fun': constraint2}
        con3 = {'type': 'ineq', 'fun': constraint3}
        con4 = {'type': 'ineq', 'fun': constraint4}
        con5 = {'type': 'ineq', 'fun': constraint5}
        cons = ([con1,con2,con3,con4,con5])
        solution = minimize(objective,(1,1,1,1,11),method='SLSQP',constraints=cons)

  • @apm

    4 years ago

    @@HarikrushnaDodiya the Scipy.optimize.minimize solver can only handle continuous variables, not integer variables. You'll need to use a tool such as Python Gekko (apmonitor.com/wiki/index.php/Main/GekkoPythonOptimization). See example problem #10 for a mixed integer programming optimization problem.

  • @10a3asd
    3 years ago

    I got lost around 2:00 and now I'm not sure where or even who I am anymore.

  • @apm

    3 years ago

    Yup, I can see how that is confusing. I hope you can kick your amnesia.
