Machine Learning in ProOrder ProRealTime


    #125722

    I figured it out: if I just put all my code and added variables into the imported GraHal “version” of my Ehlers Universal Oscillator Supersmoother, I don’t need to rename any of the old systems that use the indicator. I’m still not sure why that new version of the indicator caused the renaming issues described above, considering I can’t find any trace of it in the Bard .itf!?

    #125724

    Cheers @GraHal, all sorted. I eventually decided to use your indicator as the template to put my old indicator and variables into. Still unsure why there was an indicator name issue if the Bard.itf never had that Universal Oscillator code in it and your system only referenced the Ehlers Universal Oscillator Supersmoother?

    I’ve been ML-optimising both the long and short thresholds and will start on the Bandedge later today and tomorrow. I found it better to set Reps1 and Reps2 to 10 instead of 3. How is 3 enough to determine that a parameter would even need adjusting?

    I also changed MaxIncrements from 10 to 20 so it can go from -1 to 1 in steps of 0.1.

    If we’re happy leaving the oscillator thresholds at -0.8 and 0.8 and want to self-optimise the Bandedge, how would that be coded, I wonder? I note the remmed-out Bandedge = 100 in the strategy. You’d need to have added it as a variable with the spanner in the Universal Oscillator indicator code. I think I would code it like this:
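    Something along these lines, as a rough first sketch only: a simple hill-climb that nudges a ValueZ (Bandedge) variable every evaluation window and reverses direction if StrategyProfit got worse. The 100-bar window, the 25-point step, the 25 to 125 bounds and the CALL form are all guesses, and it assumes the Bandedge has been exposed as a parameter of the indicator via the spanner:

        // rough sketch: hill-climb the Bandedge (ValueZ) between 25 and 125
        once valuez = 100        // starting Bandedge
        once zstep = 25          // adjustment size (placeholder)
        once zdirection = 1      // +1 = increase, -1 = decrease
        once lastprofit = 0
        once evalbars = 100      // bars per evaluation window (placeholder)

        if barindex > 0 and (barindex mod evalbars) = 0 then
          if strategyprofit < lastprofit then
            zdirection = -zdirection               // last move hurt, so reverse it
          endif
          valuez = max(25, min(125, valuez + zdirection * zstep))
          lastprofit = strategyprofit
        endif

        // feed the current Bandedge into the indicator (name and parameter assumed)
        myosc = call "Ehlers Universal Oscillator Supersmoother"[valuez]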

    but I’m just not sure how the ML code handles added variables. I’ll try it in a bit, as I’m still looking at ML’ing the thresholds on different timeframes/instruments.

    Great job! Cheers again @GraHal

    #125726

    I only have two words to say @Juanj, F%*$ Me!
    Thank you so much for this Machine Learning code, and thanks @GraHal and @Vonasi.

    This was my first test using ML. I double-optimised the ML code to take care of both the long and the short side of the Ehlers Universal Oscillator Supersmoother thresholds (range from -1 to 1) and compared it to the basic non-ML version using Buy at -0.8 and Sell at 0.8. Please see the image.

    I upped the Reps from 3 to 5 and MaxIncrements to 20 (from 10) to allow it to calculate from -1 to 1 on the oscillator in steps of 0.1, and chose a random date range.
    Spread was 3.8 on the Daily Dow Jones. If anyone can figure out why, when set to all available data with 10,000 units, the ML version doesn’t trade between 1982 and 2003, that would be good to know.
    (Don’t worry about the -£114k; it’s the demo account, which was up 60% to £160k… before the Covid-19 market rout.)

    Here’s the full system and indicator code for anyone who wants to experiment with it:

    Indicator:

     

     

     

    #125735

    Here I’ve added 3 lots of Machine Learning algos in the same system. (Please check the coding; I wasn’t sure how to code graphs for Positionperf.) I’m still learning about the way this algo works and will probably be asking more questions soon, but I have to say this is a major breakthrough.
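    On the Positionperf graphing part, a minimal guess, assuming positionperf can be passed straight to GRAPH in a ProBacktest:

        graph positionperf      // performance of the currently open position
        graph strategyprofit    // running strategy profit, for comparison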

    The first two identically set ML algos (ValueX and ValueY) deal with the Ehlers Universal Oscillator Supersmoother’s long and short threshold entries, between -1 and +1 in steps of 0.1.
    The added third ML algo (ValueZ) then looks at the Bandedge setting for the oscillator. It’s like a sensitivity setting: we’ve all seen an RSI in a strong up move hog the upper range and flatline, giving no useful info. Lowering the Bandedge from 100 down towards 25 allows for more definition; if set to 100 it’ll look like that overloaded RSI, but still be more accurate.

    As a base test I compared the standard Ehlers Universal Oscillator Supersmoother system (last equity curve at the bottom) to ML systems with one ML algo, two ML algos and finally three ML algos; see the very top equity chart.
    From experience the Bandedge can be increased in increments of 25, but as I recall there’s not much difference in performance above 125. The performance is reasonable (with 20% drawdown) and I’m sure the exits could be improved, instead of using the zero mean level or a trailing stop (set at a generic 100 points, not optimised or tested), maybe with a Kase Dev-Stop or Kaufman volatility stop… but anyhow, the point of this post is to notice how the equity curve goes nicely from the bottom left to the top right of the graph.

    Code below (the indicator code has already been posted in the comments before this one).

    Blue Skies!! @Gabri! Lol.

     

    #125761

    Wow! … you’ve been burning the midnight oil!? 🙂

    So glad you got the Issue sorted re Indicator and your other Systems!

    Well done and Thank You for Sharing.

    #125766

    Good job!!!!

    #125787

    Hello Bard, and thank you for your work. I tried today (Sunday) to make the code work, but I can’t manage it.
    If I understood correctly, the code works with 1 indicator code and 1 strategy code, right? Maybe there is an error, and sending the two ITF files would be better? Cordially

    #125922

    Yeah, I kind of did @GraHal, so many things to look at and test! I know it’s not a “panacea” as you put it, and in fact one of my generic Universal Oscillator systems (set at -0.8/0.8, Bandedge = 100) performed far better than the one, two or three ML algo system versions (I might post that example and see if anyone can figure out why), but it is definitely a highly useful addition to the existing PRC codes on this site.

    I’ve had a lot of thoughts about it; even writing this now I just thought, what if it were applied to money management position sizing? Would anyone like to have a look at that?
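    Just to show what I mean, a very rough sketch of the same hill-climbing idea pointed at position size rather than an indicator setting. The name ordersize, the 1 to 10 bounds and the 100-bar window are placeholders, not tested values, and in practice it would mostly just push the size towards the cap while the system is profitable:

        once ordersize = 1
        once sizedirection = 1
        once lastprofit = 0

        if barindex > 0 and (barindex mod 100) = 0 then
          if strategyprofit < lastprofit then
            sizedirection = -sizedirection   // last change hurt, go the other way
          endif
          ordersize = max(1, min(10, ordersize + sizedirection))
          lastprofit = strategyprofit
        endif

        // entries would then use: BUY ordersize CONTRACT AT MARKET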

    So on Friday night I ran the ML system with two algos (individual long and short entry thresholds for the Universal Oscillator, with the Bandedge static at 100) and the ML system with three algos (which also optimises the Bandedge as well as the entry thresholds), and tested them both on Coffee, please see the image. This time ML x2 performed far better than ML x3.

    So…. my thought was… @Juanj, is there a way to put both the ML2 and ML3 codes in the same system and then have them “fight it out,” or self-optimise and find the better of the two systems (ML2 and ML3)? Perhaps the two systems switch and alternate back and forth, much like the self-optimised variables selected in these ML algos? It would kind of be like having a backup plan in case one of the two systems performs poorly. If Juanj is busy, can anyone else think of a way that could be done?
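    One way I could imagine coding the “fight it out” part, purely as a sketch (the window length, burn-in count and variable names are all assumptions): let the two configurations alternate over fixed windows for a while, credit each window’s realised profit to whichever configuration was active, then hand all further windows to the leader:

        once evalbars = 200          // bars per evaluation window (placeholder)
        once burninwindows = 6       // forced alternation before picking a winner
        once windowcount = 0
        once activeconfig = 2        // 2 = ML2 (thresholds only), 3 = ML3 (+ Bandedge)
        once profit2 = 0
        once profit3 = 0
        once windowstart = 0

        if barindex > 0 and (barindex mod evalbars) = 0 then
          windowgain = strategyprofit - windowstart   // profit made in this window
          windowstart = strategyprofit
          windowcount = windowcount + 1
          if activeconfig = 2 then
            profit2 = profit2 + windowgain
          else
            profit3 = profit3 + windowgain
          endif
          if windowcount < burninwindows then
            activeconfig = 5 - activeconfig           // keep alternating 2 <-> 3
          elsif profit2 >= profit3 then
            activeconfig = 2                          // leader trades from here on
          else
            activeconfig = 3
          endif
        endif

        // the entry/optimisation code then checks activeconfig to decide whether
        // the Bandedge (ValueZ) algo is allowed to adjust anything this window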

    As for the performance metrics that Juanj mentioned, can anyone code Sharpe or Sortino Ratios to be included in the ML code? Here’s a copy and paste from Van Tharp’s page on the subject of performance metrics:

    System Performance, Part IV

    A ratio I like to use is the average annual percentage gain divided by the maximum drawdown. This gives us a ratio of how much we make per year divided by how much we would be down at any time during the year. Or in simple terms: how much will I have to risk losing in order to generate my average returns? Any ratio that is less than 2:1 is suspect (do you really want to risk a 50% drawdown to make a 50% gain?).

    Industry standard performance measures. Let’s close by looking at two composite numbers that many money managers use to measure their performance:

    1. Sharpe Ratio: (system rate of return – risk-free rate of return) / standard deviation of system returns.

    The Sharpe Ratio measures risk to reward by giving the returns of the system as a ratio to its standard deviation. If the system has very constant returns, it will have a high Sharpe Ratio. A system with returns that vary greatly period to period will have a lower Sharpe Ratio.

    2. Sortino Ratio: One problem with the Sharpe Ratio is that it penalizes a system for a big up month or “good volatility”. The Sortino Ratio attempts to overcome this issue by dividing the same risk-adjusted rate of return used in the Sharpe Ratio by only the negative deviation or “bad volatility” (the downside semi-variance).

    https://www.vantharp.com/trading/system-performance-2/
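    For anyone who wants to try, a very rough running-calculation sketch of both ratios, built from the change in StrategyProfit over fixed evaluation windows. The risk-free rate is taken as 0 and evalbars is an arbitrary window length, so the names and settings here are assumptions rather than anything from Juanj’s code:

        once evalbars = 100
        once nper = 0            // number of completed evaluation windows
        once sumret = 0
        once sumsq = 0
        once sumsqdown = 0       // squared negative returns only ("bad volatility")
        once lastprofit = 0

        if barindex > 0 and (barindex mod evalbars) = 0 then
          ret = strategyprofit - lastprofit          // profit made in this window
          lastprofit = strategyprofit
          nper = nper + 1
          sumret = sumret + ret
          sumsq = sumsq + ret * ret
          if ret < 0 then
            sumsqdown = sumsqdown + ret * ret
          endif
          meanret = sumret / nper
          varret = sumsq / nper - meanret * meanret  // population variance of returns
          if varret > 0 then
            sharpe = meanret / sqrt(varret)          // mean return / standard deviation
          endif
          if sumsqdown > 0 then
            sortino = meanret / sqrt(sumsqdown / nper)   // downside deviation only
          endif
        endif

        graph sharpe
        graph sortino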

     

    #125926

    Sure @bertrandpinoy, the next time I launch PRT, which may be later today or tomorrow.

    #125929
    I am commenting on things as I see them in your post … else I will forget! 🙂

    this time ML x2 performed far better than ML x3.

    Are you 100% sure HAlgo3 didn’t have any HAlgo 2 references remaining / cross referring?

    have them “fight it out,” or self-optimise

    Re above … did you see / understand below?  I never got it to work, but you may have more success??

    The HeuristicsCycleLimit parameter will determine how many ‘cycles’ each algorithm gets.

     

     

    #125980

    Are you 100% sure HAlgo3 didn’t have any HAlgo 2 references remaining / cross referring?

    ML3 has the added self-optimisation of the ValueZ Bandedge setting as well as the original ValueX long and ValueY short thresholds (which is all ML2 is determining, just thresholds). Is that what you mean?

    Re above … did you see / understand below?  I never got it to work, but you may have more success??

    Sorry, not sure I’m following you? “understand below”? What didn’t you get to work?

     

    So the ML2 “fighting it out” with ML3 (from the Coffee trade image above) can be achieved with the HeuristicsCycleLimit code below?

     

    So, if my triple ML3 version above were to include this HeuristicsCycleLimit code, are you saying that the system could then self-optimise the separate ML2 and ML3 systems, but now all under the cover of one system? What would the code look like, specifically the parts concerning the cycle limit? I was looking at https://www.prorealcode.com/topic/machine-learning-in-proorder/page/5/#post-121436 for ideas:

     

    I can see that “Heuristics Algorithm 1” and “Heuristics Algorithm 2” are used in the ML2 system, and “Heuristics Algorithm 1,” “Heuristics Algorithm 2” and “Heuristics Algorithm 3” are used in the ML3 system. It’s just that I haven’t used the HeuristicsCycleLimit in either ML2 or ML3, and can’t quite figure out the coding in my head to get the new system to self-optimise between the ML2 and ML3 components. Hope that makes sense!

    #125984

    Is that what you mean?

    No! 🙂

    I was asking if you are 100% sure that your HAlgo3 is 100% uniquely coded / separate from HAlgo2 … no duplicated commands or variables?

    not sure I’m following you? “understand below”?

    Did you click on the link that was below the statement I made referring to ‘below’?

    Here is the link again; it provides a code snippet that JuanJ supplied for use when a system includes more than one HAlgo in the same system.

    The HeuristicsCycleLimit parameter will determine how many ‘cycles’ each algorithm gets.

     

     

    #125986

    Maybe we should Keep it Simple and only use 1 x HAlgo in each System?

    OR

    We try and get ‘JuanJ Heuristic Cycle Limit’ code working in Systems using 2 or more HAlgos??

    #126005

    Ha ha, the “joys” of non-verbal communication 😀 What is your concern with cross-referencing?

    Even if HAlgo3, as you call the ML3 system, is nothing but HAlgo2/ML2 with an added Bandedge optimisation (as described above, and as can be seen in the code from the extra ValueZ (Bandedge) optimisation), the Coffee example/image still shows what happens when you set these two systems in a race with each other: ML2 wins. The outcomes of ML2 and ML3 are still independent.

    Re “below”, right, I get you.
    Three algos in one system… I just got really curious to see what would happen! I’d already read Juanj’s comment about optimising two variables and the potential issue of the algo not being able to determine which one (ValueX or ValueY) is causing the performance increase or decrease. (I copied and pasted all the best points from this thread into a text-edit page; it makes getting to grips with the code easier if you can just scroll up and down fast, using the highlight tool for bits of code or ideas/phrases like “cyclelimit”.)

    Francesco asked: Could this be applied to more than one value in the same system?

    Juanj replied: Depends on what you mean. If it is the same target variable just used in different sections of the code then yes, otherwise no. The reason being that if you apply it to 2 or more different variables, the algorithm will not be able to tell which one is causing the performance increase or decrease. I have experimented with literally duplicating the algorithm (slightly changing the name of each variable in the algorithm) and assigning its output to a second variable. And although this works to some extent, it definitely reduces the accuracy and efficacy of the algorithm for the same reasons mentioned above.

    https://www.prorealcode.com/topic/machine-learning-in-proorder/page/2/#post-121090

    Juan: “The HeuristicsCycleLimit parameter will determine how many ‘cycles’ each algorithm gets.” and… “Best is to code it so that each algo gets its own evaluation period. Although it would require at least 2 evaluation periods to really know if performance is increasing or decreasing. So perhaps an additional ‘wait counter’ should be built into the strategy, giving each algo 2 or more evaluation periods before giving the next algo a turn.”

    https://www.prorealcode.com/topic/machine-learning-in-proorder/page/2/#post-121129

    “We try and get ‘JuanJ Heuristic Cycle Limit’ code working in Systems using 2 or more HAlgos??“

    It kind of already is; there are ValueX, ValueY and now ValueZ all within one ML3 system (I put all the settings for each at the top of the code). It may not be optimal, but it does appear to be increasing performance, unless it’s Coffee, which worked better with ML2 (the system with two algos, ValueX and ValueY for long and short entry)… which is why I thought, why not have the ML2 system and the ML3 system (with its long (ValueX) and short (ValueY) thresholds plus Bandedge (ValueZ)) “fight it out” under one roof, in a new system?

    My question is how to code ML2 and ML3, two formerly separate systems (one which just optimised the long and short entries, and the other which did that and also the Bandedge setting), so that they are now housed under one roof and use the HeuristicsCycleLimit to let those two (soon to be, if we can code it) sub-systems compete with each other?

    Do you or anyone else have any ideas on how to code that HeuristicsCycleLimit for this to work under “one roof”?
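    As a starting point, here is a bare-bones sketch of the round-robin idea from Juanj’s quote above, with a wait counter so each algo keeps the turn for two or more evaluation periods before the next one gets a go. The variable names and the window length are placeholders echoing his description, not his actual code:

        once heuristicscyclelimit = 2   // evaluation periods each algo gets per turn
        once currentalgo = 1            // 1 = ValueX, 2 = ValueY, 3 = ValueZ
        once waitcounter = 0
        once evalbars = 100             // bars per evaluation period (placeholder)

        if barindex > 0 and (barindex mod evalbars) = 0 then
          waitcounter = waitcounter + 1
          // ...evaluate and adjust only the variable owned by currentalgo here...
          if waitcounter >= heuristicscyclelimit then
            waitcounter = 0
            currentalgo = currentalgo + 1
            if currentalgo > 3 then
              currentalgo = 1           // wrap back to the first algo
            endif
          endif
        endif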

    #126009

    “We try and get ‘JuanJ Heuristic Cycle Limit’ code working in Systems using 2 or more HAlgos??“

    Sorry, I see what you mean (I was only looking at the second half of your sentence). I’ve been looking at the HeuristicsCycleLimit code and wondered how to add it without getting code floating away from the left margin.

    Adding Juanj’s heuristic code from https://www.prorealcode.com/topic/machine-learning-in-proorder/page/3/#post-121130
    (and including the heuristic initialisation code at the beginning) also does the same, as if the code isn’t passing the syntax check because of the float away from the left margin?
    Please see the images. I’m sure it’s really a simple fix for a coder, but closing open statements by adding ENDIFs hasn’t fixed it.

    I also wondered whether this is correct when using 3 algos in one system:

    If HeuristicsAlgo1 = 1, then why doesn’t this need to define what the conditions are for this algo to start its process?

    This is the Ehlers Universal Oscillator ML3 system, now with the Heuristics cycle code for ValueX and ValueY (long/short entry thresholds), but not for ValueZ (Bandedge), as I just need to confirm the Heuristics-for-3-scenarios code (see my attempt above in this post) and it’s getting late…

    Please see the images for performance.

    Thanks once again @Juanj, excellent job.

