Reinforcement Learning for Trading: Practical Examples and Lessons Learned by Dr. Tom Starke

Science and technology

This talk, titled "Reinforcement Learning for Trading: Practical Examples and Lessons Learned," was given by Dr. Tom Starke at QuantCon 2018.
Description:
Since AlphaGo beat the world Go champion, reinforcement learning has received considerable attention and seems like an attractive choice for completely autonomous trading systems. This talk shows practical aspects and examples of deep reinforcement learning applied to trading and discusses the pros and cons of this technology.
The slides for this talk can be viewed at: www.slideshare.net/secret/1qo....
About the Speaker:
Dr. Tom Starke has a Ph.D. in Physics and works as an algorithmic trader at a proprietary trading company in Sydney. He has a keen interest in mathematical modeling and machine learning in the financial markets. He previously lectured on computer simulation at Oxford University and led strategic research projects for Rolls-Royce Plc.
Tom is very active in the quantitative trading community, running workshops for Quantopian, teaching people quantitative analysis techniques, and organizing algorithmic trading meetup groups such as Cybertraders Syd.
To learn more about Quantopian, visit www.quantopian.com.
Disclaimer
Quantopian provides this presentation to help people write trading algorithms - it is not intended to provide investment advice.
More specifically, the material is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory or other services by Quantopian.
In addition, the content neither constitutes investment advice nor offers any opinion with respect to the suitability of any security or any specific investment. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed on the website. The views are subject to change and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Comments: 56

  • @AlexeyMatushevsky (3 years ago)

    Thank you for the great presentation!

  • @Bill0102 (4 months ago)

    I'm immersed in this. I read a book with a similar theme and was completely immersed: "The Art of Saying No: Mastering Boundaries for a Fulfilling Life" by Samuel Dawn.

  • @Otvazhnii (2 years ago)

    Yet another improvement: you cannot rely on a single training attempt, because the initial weights are random. You have to implement multiprocessing logic, run 10 attempts at a time, and plot the 10 profit-and-loss curves at the end (a sketch of this follows below).
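
    A minimal sketch of that idea in Python, assuming a hypothetical train_agent(seed) that runs one full training attempt and returns its P&L curve (the random-walk body below is just a stand-in):

        import multiprocessing as mp
        import numpy as np
        import matplotlib.pyplot as plt

        def train_agent(seed):
            """One independent training attempt with its own random seed."""
            rng = np.random.default_rng(seed)
            # Stand-in: a real version would seed the network weights,
            # train the agent, and return the resulting P&L curve.
            return np.cumsum(rng.normal(0.0, 1.0, 500))

        if __name__ == "__main__":
            with mp.Pool(processes=10) as pool:
                curves = pool.map(train_agent, range(10))  # 10 seeds in parallel
            for seed, curve in enumerate(curves):
                plt.plot(curve, label=f"seed {seed}")
            plt.legend()
            plt.show()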

  • @michelletadmor8642 (4 years ago)

    I wonder how he smooths the data - perhaps the "now" timestamp already included partial information from the next data point. If the smoothing was only backwards, then the timestamp at exit might be completely off from the real exit price.

  • @Otvazhnii (2 years ago)

    And yet another improvement. You standardize the state data by subtracting the mean and dividing by the standard deviation. Standardizing is a good thing, but the state is made of 141 values, including OHLC prices for M5, H1, and D1 bars and various indicators ranging from -10 to 100. I do not think you can merge quantities of products with prices of products and then standardize, as they say, altogether (see the per-feature sketch below).
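
    A minimal sketch of per-feature standardization, assuming a hypothetical states array of shape (n_samples, 141): each column gets its own mean and standard deviation instead of one global pair shared across incommensurable quantities:

        import numpy as np

        # Stand-in for the real state matrix: one row per time step,
        # 141 columns mixing OHLC prices and indicators on different scales.
        states = np.random.rand(1000, 141)

        mu = states.mean(axis=0)             # per-column mean
        sigma = states.std(axis=0) + 1e-8    # per-column std; epsilon avoids /0
        states_norm = (states - mu) / sigma  # each feature on its own scale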

  • @attilasarkany6123 (11 months ago)

    Yep, you are right; that part of the code is wrong.

  • @scalbylasusjim2780 (2 years ago)

    I think neural networks start to truly outperform SVMs as the decision boundary becomes more and more nonlinear; the kernel tricks would have to become more and more complex.

  • @AtillaYurtseven (4 years ago)

    30:14 - you are updating the state and then applying the action. When we choose an action, we first need to apply it, then update the state and get the reward. Let's say the current price is 100.20. When the agent decides to buy, it has to buy at 100.20 (excluding spread/slippage and commission). In your example, it buys at the next price. Am I wrong?

  • @hanwantshekhawat4314 (3 years ago)

    Executing at the next price sample (tick/bar) is a common way to model time delay. It is simplistic, but it adds some noise, which may be more realistic than assuming execution happens at exactly the price where the decision was made (a small sketch follows below).
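
    A minimal sketch of that convention, with hypothetical names: the action chosen on bar t is filled at bar t+1's price rather than bar t's:

        def simulate_fills(prices, actions):
            """Fill each nonzero action at the NEXT bar's price to model
            decision latency. actions[t] is in {-1, 0, +1}."""
            fills = []
            for t in range(len(actions) - 1):
                if actions[t] != 0:
                    fills.append((t + 1, actions[t], prices[t + 1]))  # delayed fill
            return fills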

  • @AlexeyMatushevsky (3 years ago)

    At 19:53 you mention the right regime - "does it end up choosing some training process." It's easy to understand what a mean-reversion process is, but what does "training process" mean?

  • @blackprinze (4 years ago)

    SVMs have some kind of geometric element that responds well to any freely traded market.

  • @alrey72 (2 years ago)

    But the technical indicators are derived from past prices too. Isn't it better to let the NN/RL interpret the prices themselves?

  • @Otvazhnii (2 years ago)

    I came up with a number of improvements to the code. Firstly, the epsilon calculation runs down to zero after trade 5, while 99% of random numbers fall between 0.1 and 0.9, so there is no exploration after trade 5 (see the schedule sketch below). Secondly, the H1 and D1 bars are made from M5 bars by choosing only the two left and right M5 bars. This is correct for open and close prices, but not for high and low prices, which move very noisily within every hour and still more so within the day. Thirdly, the way the code is built, it may take well over a month to run the 11,500 games (trades) indicated in your code. By converting the pandas data to numpy and then building a numpy array of states before training, you can speed up the code literally 10,000 times. And finally, the Apple stock simply goes up at some point, so doesn't your strategy, which drops exploration after trade 5 and starts learning from the replay memory of the last trades, just fit nicely to the growing trend of the stock?
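
    A minimal sketch of a slower epsilon schedule, with hypothetical constants: decay multiplicatively per episode and clamp at a floor so some exploration always remains:

        import random

        EPS_START, EPS_MIN, EPS_DECAY = 1.0, 0.05, 0.995  # hypothetical values

        epsilon = EPS_START
        for episode in range(11500):
            explore = random.random() < epsilon  # True -> take a random action
            # ... run the episode, acting randomly when explore is True ...
            epsilon = max(EPS_MIN, epsilon * EPS_DECAY)  # never reaches zero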

  • @zhibindeng8723 (2 years ago)

    Why not share your code for us to backtest? Thank you.

  • @harendrasingh_22 (4 years ago)

    3:32 - guys, please upload his talk too!

  • @andy.robinson (4 years ago)

    There are quite a few Tucker Balch vids on YT. You can probably pick up a lot from those 👍

  • @user-qj1ij5dv8s (1 year ago)

    Does this really work for stock trading? Is there any track record to check for the last 5 years?

  • @chrisminnoy3637 (4 years ago)

    Just as with any other AI algorithm, you need to clean your data before you give it to your reinforcement learner. But you can make a neural net that cleans the data for you, with relative success (a sketch follows below). Noise is also an issue in other domains, not just finance. Of course, you are creating a feedback loop: when you buy/sell with success, your competitors will adapt, and so the problem shifts to a more difficult state, adding overall noise (randomness) to the system.
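
    One common form of that idea is a denoising autoencoder: train a network to reconstruct clean inputs from noise-corrupted copies, then use its output as the cleaned feature vector. A minimal Keras sketch with hypothetical sizes and stand-in data:

        import numpy as np
        from tensorflow.keras import layers, models

        n_features = 141                            # hypothetical state width
        x_clean = np.random.rand(1000, n_features)  # stand-in for real features
        x_noisy = x_clean + np.random.normal(0.0, 0.1, x_clean.shape)

        # Denoising autoencoder: map noisy inputs back to the clean originals.
        model = models.Sequential([
            layers.Input(shape=(n_features,)),
            layers.Dense(32, activation="relu"),  # compressed representation
            layers.Dense(n_features),             # reconstruction
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(x_noisy, x_clean, epochs=10, batch_size=32)

        denoised = model.predict(x_noisy)  # cleaned features for the learner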

  • @niallmurray2915 (4 years ago)

    How do your competitors know you are successful?

  • @polonezu9576 (2 years ago)

    I get errors on MT4.

  • @aricanto1764 (1 year ago)

    NLP is the way forward 💪

  • @a_machiniac (1 year ago)

    09:37

  • @alute5532 (4 years ago)

    I am doing deep learning, but now I'm thinking of integrating it with reinforcement learning as an ensemble on the outside (with a money management system on the side). Is anyone in California interested in my project?

  • @Scrathzerz (3 years ago)

    How's it going?

  • @suecheng3755 (3 years ago)

    Hi there, my research area is reinforcement learning; maybe I can give you some ideas. May I have your contact information?

  • @joysahoo7470 (3 years ago)

    Thank you, sir, for the good explanation! Please help me solve this error: ImportError: cannot import name 'sgd' from 'keras.optimizers'. I am not able to fix it; if anyone can, please help.

  • @eliastheis5265 (3 years ago)

    It's "from keras.optimizers import SGD", not "sgd".

  • @eliastheis5265 (3 years ago)

    @joysahoo7470 Try "from tensorflow.keras.optimizers import SGD".

  • @joysahoo7470 (3 years ago)

    @eliastheis5265 Thank you

  • @polonezu9576 (2 years ago)

    Can we use this on the MT4 platform?

  • @andresg297 (2 years ago)

    MT5

  • @polonezu9576 (2 years ago)

    @andresg297 That is not possible.

  • @monanica7331 (2 years ago)

    BTC at $75K by the end of this year. Control of the currency is already decentralized, and now the China disruption will simply decentralize the mining setup for the better.

  • @gogae22 (3 years ago)

    Why can't we give a reward at every time step?

  • @randomdude79404 (3 years ago)

    From my basic knowledge, the reward only comes in when a decision is made, whether that be a buy or a sell. I may be wrong, but this is just my basic understanding (a per-step alternative is sketched below).
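
    You can in fact reward every step by marking the open position to market; rewarding only at trade close is a design choice that avoids rewarding unrealized noise but makes credit assignment harder. A minimal sketch of a per-step reward, with hypothetical names:

        def step_reward(position, price_now, price_next, traded=False, cost=0.0):
            """Mark-to-market reward: the P&L of the held position over one
            bar, minus a transaction cost on bars where a trade occurs."""
            reward = position * (price_next - price_now)
            if traded:
                reward -= cost
            return reward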

  • @norabelrose198 (3 years ago)

    "LSTMs, they're somewhat new" - they've been around since 1997 lol

  • @block1086 (2 years ago)

    Attention is all you need.

  • @oliverli9630 (4 years ago)

    Siri was triggered at 1:40, hahaha. Time to rethink ML?

  • @user-nn8ne5we3r (9 months ago)

    The results are good because the training data is trending.

  • @theappliedcoder9824 (4 years ago)

    Hi, I work on reinforcement learning too. Anyone hiring, please reply!

  • @Tradinginthezen (4 years ago)

    How much salary are you expecting?

  • @theappliedcoder9824 (4 years ago)

    @Tradinginthezen Depends on the workflow, sir.

  • @henrifritsmaarseveen6260 (2 years ago)

    95% of trades are made by big money; they can hire the best programmers and build the most advanced systems, and they fail. So it probably will not ever work.

  • @guardtank4877 (4 years ago)

    I thought reinforcement learning was shit for trading.

  • @100xspaceai9 (1 year ago)

    Don't think; do, test, and validate.

  • @redcabinstudios7248 (4 years ago)

    If AI works, quant trading will go down, I guess. Beware!

  • @alute5532 (4 years ago)

    23:55 - when you said "random walk most of the time," I realized you're not a real quant, my dear.

  • @rshsrhserhserh1268 (3 years ago)

    Why?

  • @williamqh (3 years ago)

    @rshsrhserhserh1268 The market is usually moved by big institutions, so they make it appear random, but it is not really.
