Neural networks programming with ProRealTime

#79210
    Leo
    Participant
    Veteran

    It will be great if some of you can help me with this project.

I only wish I could, Leo. I feel sad that only Nicolas is helping you; you also need folks with more time than Nicolas can spare.

Your work is groundbreaking as far as most of us are concerned. I did read the reference you posted and broadly understood it, but it’s the coding I struggle with.

Tomorrow I will see if I can contact @Maz, a capable member who sticks in my mind as one who similarly tried to move complex collaborative projects forward on here.

Thanks a lot. The little issue I have is my daughter is 2 weeks old 🙂

#79227
    GraHal
    Participant
    Master

Thanks a lot. The little issue I have is my daughter is 2 weeks old

Oh wow! You are a hero managing to achieve anything other than ‘survive’… with a precious new little bundle to care for! 🙂 Awww

I’ve just emailed Maz as promised; his profile mentions “developing neural networks and machine learning”, so I feel sure any input from Maz will benefit this thread.

    Cheers

#79229
    Nicolas
    Keymaster
    Master

The condition to BUY at line 5 was only a dummy test, because ProBacktest requires a BUY instruction. Anyway, does “GRAPH a” show a value of 10k at the first candlestick of the backtest?

#79231
    Leo
    Participant
    Veteran


The truth is, now with a baby I feel more eager to succeed in algorithmic trading than in my paid 8-hour job 🙂

Thanks for emailing Maz; it will be great to get an insider’s opinion on this project.

#79232
    Leo
    Participant
    Veteran

The condition to BUY at line 5 was only a dummy test, because ProBacktest requires a BUY instruction. Anyway, does “GRAPH a” show a value of 10k at the first candlestick of the backtest?

Yeah, the first value shown by GRAPH barindex is 10001.

The condition to buy at 10000 doesn’t trigger, but at 10001 it does; I imagine the program reads the code after the first bar, so the trade is in the market on the next one.

#79496
    Leo
    Participant
    Veteran

    Hi all,

I have almost completed the main body of the neural network.

I made a modification: instead of a quadratic cost function I use a cross-entropy cost function, which is a very good thing because we do not have to worry so much about the initial values of the weights!
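For reference, the property being used here (restated for convenience from the neuralnetworksanddeeplearning.com text linked later in the thread): with a sigmoid output $a=\sigma(z)$ and cross-entropy cost $C=-[\,y\ln a+(1-y)\ln(1-a)\,]$, the $\sigma'(z)$ factor cancels, leaving

$$\frac{\partial C}{\partial z}=a-y$$

which is why the output-layer derivatives in the code below are plain (output1-Y1) and (output2-Y2) terms, with no sigmoid-derivative factor.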

    Maybe someone can already complete the rest of the code and create an indicator out of it.

// Hyperparameters to be optimized

// ETA=1 //known as the learning rate
// candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier

    /////////////////    CLASSIFIER    /////////////
    
    myATR=average[20](range)+std[20](range)
    ExtraStopLoss=MyATR
    //ExtraStopLoss=3*spread*pipsize
    
    //for long trades
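// a bar scanL candles back is labelled a winning long (classifierlong=1) if a
// hypothetical entry at its close, with the stop below its low minus ExtraStopLoss,
// would have reached ProfitRiskRatio times that risk by the current bar without
// the stop ever being touched (spread accounted for on both sides)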
    classifierlong=0
    FOR scanL=1 to candlesback DO
    IF classifierlong[scanL]=1 then
    BREAK
    ENDIF
    LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
    IF close[scanL]+LongTradeLength < high-spread*pipsize then
    IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
    classifierlong=1
    candleentrylong=barindex-scanL
    BREAK
    ENDIF
    ENDIF
    NEXT
    
    //for short trades
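// mirror logic for shorts: entry at that bar's close, stop above its high plus
// ExtraStopLoss; labelled a winning short if price fell ProfitRiskRatio times
// the risk without the stop being touched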
    classifiershort=0
    FOR scanS=1 to candlesback DO
    IF classifiershort[scanS]=1 then
    BREAK
    ENDIF
    ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
    IF close[scanS]-ShortTradeLength > low+spread*pipsize then
    IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
    classifiershort=1
    candleentryshort=barindex-scanS
    BREAK
    ENDIF
    ENDIF
    NEXT
    
/////////////////////////  NEURAL NETWORK  ///////////////////
    
    //variable1=     // to be defined
    //variable2=     // to be defined
    //variable3=     // to be defined
    //variable4=     // to be defined
    
    //   >>>    LEARNING PROCESS  <<<
    IF classifierlong=1 or classifiershort=1 THEN
    candleentry=max(candleentrylong,candleentryshort)
    Y1=classifierlong
    Y2=classifiershort
    
    //    >>> INPUT NEURONS <<<
    input1=variable1[barindex-candleentry]
    input2=variable2[barindex-candleentry]
    input3=variable3[barindex-candleentry]
    input4=variable4[barindex-candleentry]
    
FOR i=1 to 10 DO // THIS HAS TO BE IMPROVED
    ETAi=ETA/i
    //   >>> FIRST LAYER OF NEURONS <<<
    F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
    F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
    F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
    F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
    F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
    F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
    F1=1/(1+EXP(-1*F1))
    F2=1/(1+EXP(-1*F2))
    F3=1/(1+EXP(-1*F3))
    F4=1/(1+EXP(-1*F4))
    F5=1/(1+EXP(-1*F5))
    F6=1/(1+EXP(-1*F6))
    //   >>> OUTPUT NEURONS <<<
    output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
    output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
    output1=1/(1+EXP(-1*output1))
    output2=1/(1+EXP(-1*output2))
    
//   >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
//     ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = - ( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
    
    DerObias1 = (output1-Y1) * 1
    DerObias2 = (output2-Y2) * 1
    
    Derb11    = (output1-Y1) * F1
    Derb12    = (output1-Y1) * F2
    Derb13    = (output1-Y1) * F3
    Derb14    = (output1-Y1) * F4
    Derb15    = (output1-Y1) * F5
    Derb16    = (output1-Y1) * F6
    
    Derb21    = (output2-Y2) * F1
    Derb22    = (output2-Y2) * F2
    Derb23    = (output2-Y2) * F3
    Derb24    = (output2-Y2) * F4
    Derb25    = (output2-Y2) * F5
    Derb26    = (output2-Y2) * F6
    
    //Implementing BackPropagation
    Obias1=Obias1-ETAi*DerObias1
    Obias2=Obias2-ETAi*DerObias2
    
// gradient step: each output-layer weight is updated from its own previous value
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16

b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26
    
//   >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
// note: these derivatives reuse the output weights b11..b26 as updated just above;
// strict backpropagation would use their values from before that update
    
    DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
    DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
    DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
    DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
    DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
    DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
    
    Dera11    = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
    Dera12    = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
    Dera13    = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
    Dera14    = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
    
    Dera21    = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
    Dera22    = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
    Dera23    = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
    Dera24    = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
    
    Dera31    = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
    Dera32    = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
    Dera33    = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
    Dera34    = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
    
    Dera41    = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
    Dera42    = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
    Dera43    = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
    Dera44    = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
    
    Dera51    = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
    Dera52    = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
    Dera53    = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
    Dera54    = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
    
    Dera61    = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
    Dera62    = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
    Dera63    = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
    Dera64    = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4
    
    //Implementing BackPropagation
    Fbias1=Fbias1-ETAi*DerFbias1 
    Fbias2=Fbias2-ETAi*DerFbias2
    Fbias3=Fbias3-ETAi*DerFbias3
    Fbias4=Fbias4-ETAi*DerFbias4
    Fbias5=Fbias5-ETAi*DerFbias5
    Fbias6=Fbias6-ETAi*DerFbias6
    
    a11=a11-ETAi*Dera11
    a12=a12-ETAi*Dera12
    a13=a13-ETAi*Dera13
    a14=a14-ETAi*Dera14
    
    a21=a21-ETAi*Dera21
    a22=a22-ETAi*Dera22
    a23=a23-ETAi*Dera23
    a24=a24-ETAi*Dera24
    
    a31=a31-ETAi*Dera31
    a32=a32-ETAi*Dera32
    a33=a33-ETAi*Dera33
    a34=a34-ETAi*Dera34
    
    a41=a41-ETAi*Dera41
    a42=a42-ETAi*Dera42
    a43=a43-ETAi*Dera43
    a44=a44-ETAi*Dera44
    
    a51=a51-ETAi*Dera51
    a52=a52-ETAi*Dera52
    a53=a53-ETAi*Dera53
    a54=a54-ETAi*Dera54
    
    a61=a61-ETAi*Dera61
    a62=a62-ETAi*Dera62
    a63=a63-ETAi*Dera63
    a64=a64-ETAi*Dera64
    
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2+Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
    
    NEXT
    
    ENDIF
    
    ///////////////////    NEW PREDICTION  ///////////////////
    //    >>> INPUT NEURONS <<<
    input1=variable1
    input2=variable2
    input3=variable3
    input4=variable4
    //   >>> FIRST LAYER OF NEURONS <<<
    F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
    F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
    F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
    F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
    F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
    F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
    F1=1/(1+EXP(-1*F1))
    F2=1/(1+EXP(-1*F2))
    F3=1/(1+EXP(-1*F3))
    F4=1/(1+EXP(-1*F4))
    F5=1/(1+EXP(-1*F5))
    F6=1/(1+EXP(-1*F6))
    //   >>> OUTPUT NEURONS <<<
    output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
    output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
    output1=1/(1+EXP(-1*output1))
    output2=1/(1+EXP(-1*output2))
    
#79502
    Nicolas
    Keymaster
    Master

I think that it might be difficult for anyone to understand the whole concept of your project and the code. Could you please take some time to explain it to us?

Defining which variables to inject into the neural network is also something we should work on 😉 any ideas?

#79509
    Leo
    Participant
    Veteran

    Hi Nicolas,

I should be honest: this is my first neural network. While reading about the topic, I am trying to define one for trading with ProRealTime.

    Basically:

We give 4 inputs and get 2 outputs; the outputs are values from 0 to 1.

If output1 is near 1 and output2 is near 0, we go long, and the opposite for a short (what happens if both are near 1? Maybe open pending positions both ways). If both outputs are near 0, we don’t trade.
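For instance, the decision rule could look like this in ProBuilder (just a sketch; the threshold would be a setting to optimise, and the names are illustrative):

threshold=0.6
golong  = output1>threshold and output2<1-threshold
goshort = output2>threshold and output1<1-threshold
// if both outputs are near 1 or both near 0, neither flag is set and no trade is taken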

As inputs we define 4 data series. My ideas are below; in fact, anything can serve as an input:

    SMA20=average[20](close)
    SMA200=average[200](close)
    SMA2400=average[2400](close) //in 5 min time frame this is the value of SMA 200 periods in hourly 
    
variable1= RSI[14](close)     // or to be defined
variable2= (close-SMA20)/SMA20 *100 //or to be defined
variable3= (SMA20-SMA200)/SMA200 *100 //or to be defined
variable4= (SMA200-SMA2400)/SMA2400 *100   // or to be defined

But first, the neural network must learn:

So we look in the past for possible winning trades (with the classifier) and take the values of the inputs at exactly those moments. With an algorithm called gradient descent, the neural network starts to modify all its values; the more data, the more it learns, and if the data change, the neural network changes. The way it modifies the values a11…a64, b11…b26, etc. is by trying to find the minimum of a cost function that measures the errors, using the partial derivatives with respect to those values (Dera11…Dera64, Derb11…Derb26, etc.).
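In symbols, each parameter $w$ is nudged against its gradient with step size $\eta$ (ETA in the code):

$$w \leftarrow w-\eta\,\frac{\partial C}{\partial w}$$

which is exactly the shape of the a11=a11-ETAi*Dera11 lines.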

It is like running an optimisation of the variables while the code itself is running! The market changes, our neural network changes!

Another part of the code runs the neural network again to make a prediction, and hopefully we get a winning trade.

I will be glad if the code predicts better than 50%. If we want better predictions, we need to increase the number of neurons and inputs, or go even further into deep learning, which involves more layers of neurons.

Here I add again the video that inspired me, the document I found easiest to follow, and the concept (attached) of the neural network I am building:

    http://neuralnetworksanddeeplearning.com/

    https://www.youtube.com/watch?v=ILsA4nyG7I0

#79519
    Leo
    Participant
    Veteran

    Hi all,

    My first neural network is working.

I think I am hooked!

Please test it as much as possible; seeing it in action in a strategy would be cool!

Here are my code and a photo with some predictions.

    // Hyperparameters to be optimized
    
    // ETA=1 //known as the learning rate
    // candlesback=7 // for the classifier
    //ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier
//threshold=0.6 // decision threshold used in the RETURN line (outputs range from 0 to 1)
    
    
    /////////////////    CLASSIFIER    /////////////
    
    myATR=average[20](range)+std[20](range)
    ExtraStopLoss=MyATR
    //ExtraStopLoss=3*spread*pipsize
    
    //for long trades
    classifierlong=0
    FOR scanL=1 to candlesback DO
    IF classifierlong[scanL]=1 then
    BREAK
    ENDIF
    LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
    IF close[scanL]+LongTradeLength < high-spread*pipsize then
    IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
    classifierlong=1
    candleentrylong=barindex-scanL
    BREAK
    ENDIF
    ENDIF
    NEXT
    
    //for short trades
    classifiershort=0
    FOR scanS=1 to candlesback DO
    IF classifiershort[scanS]=1 then
    BREAK
    ENDIF
    ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
    IF close[scanS]-ShortTradeLength > low+spread*pipsize then
    IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
    classifiershort=1
    candleentryshort=barindex-scanS
    BREAK
    ENDIF
    ENDIF
    NEXT
    
/////////////////////////  NEURAL NETWORK  ///////////////////
    
    // ...INITIAL VALUES...
    once a11=1
    once a12=-1
    once a13=1
    once a14=-1
    
    once a21=1
    once a22=1
    once a23=-1
    once a24=1
    
    once a31=-1
    once a32=1
    once a33=-1
    once a34=1
    
    once a41=1
    once a42=-1
    once a43=1
    once a44=-1
    
    once a51=-1
    once a52=1
    once a53=-1
    once a54=1
    
    once a61=1
    once a62=-1
    once a63=1
    once a64=-1
    
    once Fbias1=0
    once Fbias2=0
    once Fbias3=0
    once Fbias4=0
    once Fbias5=0
    once Fbias6=0
    
    once b11=1
    once b12=-1
    once b13=1
    once b14=-1
    once b15=1
    once b16=-1
    
    once b21=-1
    once b22=1
    once b23=-1
    once b24=1
    once b25=-1
    once b26=1
    
    once Obias1=0
    once Obias2=0
    
    // ...DEFINITION OF INPUTS...
    
    SMA20=average[min(20,barindex)](close)
    SMA200=average[min(200,barindex)](close)
    SMA2400=average[min(2400,barindex)](close) //in 5 min time frame this is the value of SMA 200 periods in hourly
    
    variable1= RSI[14](close)     // or to be defined
    variable2= (close-SMA20)/SMA20 *100 //or to be defined
    variable3= (SMA20-SMA200)/SMA200 *100 //or to be defined
    variable4= (SMA200-SMA2400)/SMA2400 *100   // to be defined
    
    //   >>>    LEARNING PROCESS  <<<
// If the classifier has detected a winning trade in the past
    //IF hour > 7 and hour < 21 then
    IF BARINDEX > 2500 THEN
    IF classifierlong=1 or classifiershort=1 THEN
    IF hour > 7 and hour < 21 then
    candleentry=max(candleentrylong,candleentryshort)
    Y1=classifierlong
    Y2=classifiershort
    
    //    >>> INPUT FOR NEURONS <<<
    input1=variable1[barindex-candleentry]
    input2=variable2[barindex-candleentry]
    input3=variable3[barindex-candleentry]
    input4=variable4[barindex-candleentry]
    
FOR i=1 to 10 DO // THIS HAS TO BE IMPROVED
    ETAi=ETA - ETA/10*(i-1) //Learning Rate
    //   >>> FIRST LAYER OF NEURONS <<<
    F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
    F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
    F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
    F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
    F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
    F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
    F1=1/(1+EXP(-1*F1))
    F2=1/(1+EXP(-1*F2))
    F3=1/(1+EXP(-1*F3))
    F4=1/(1+EXP(-1*F4))
    F5=1/(1+EXP(-1*F5))
    F6=1/(1+EXP(-1*F6))
    //   >>> OUTPUT NEURONS <<<
    output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
    output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
    output1=1/(1+EXP(-1*output1))
    output2=1/(1+EXP(-1*output2))
    
//   >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
//     ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = - ( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
    
    DerObias1 = (output1-Y1) * 1
    DerObias2 = (output2-Y2) * 1
    
    Derb11    = (output1-Y1) * F1
    Derb12    = (output1-Y1) * F2
    Derb13    = (output1-Y1) * F3
    Derb14    = (output1-Y1) * F4
    Derb15    = (output1-Y1) * F5
    Derb16    = (output1-Y1) * F6
    
    Derb21    = (output2-Y2) * F1
    Derb22    = (output2-Y2) * F2
    Derb23    = (output2-Y2) * F3
    Derb24    = (output2-Y2) * F4
    Derb25    = (output2-Y2) * F5
    Derb26    = (output2-Y2) * F6
    
    //Implementing BackPropagation
    Obias1=Obias1-ETAi*DerObias1
    Obias2=Obias2-ETAi*DerObias2
    
// gradient step: each output-layer weight is updated from its own previous value
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16

b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26
    
//   >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
// note: these derivatives reuse the output weights b11..b26 as updated just above;
// strict backpropagation would use their values from before that update
    
    DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
    DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
    DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
    DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
    DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
    DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
    
    Dera11    = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
    Dera12    = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
    Dera13    = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
    Dera14    = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
    
    Dera21    = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
    Dera22    = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
    Dera23    = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
    Dera24    = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
    
    Dera31    = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
    Dera32    = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
    Dera33    = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
    Dera34    = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
    
    Dera41    = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
    Dera42    = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
    Dera43    = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
    Dera44    = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
    
    Dera51    = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
    Dera52    = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
    Dera53    = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
    Dera54    = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
    
    Dera61    = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
    Dera62    = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
    Dera63    = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
    Dera64    = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4
    
    //Implementing BackPropagation
    Fbias1=Fbias1-ETAi*DerFbias1
    Fbias2=Fbias2-ETAi*DerFbias2
    Fbias3=Fbias3-ETAi*DerFbias3
    Fbias4=Fbias4-ETAi*DerFbias4
    Fbias5=Fbias5-ETAi*DerFbias5
    Fbias6=Fbias6-ETAi*DerFbias6
    
    a11=a11-ETAi*Dera11
    a12=a12-ETAi*Dera12
    a13=a13-ETAi*Dera13
    a14=a14-ETAi*Dera14
    
    a21=a21-ETAi*Dera21
    a22=a22-ETAi*Dera22
    a23=a23-ETAi*Dera23
    a24=a24-ETAi*Dera24
    
    a31=a31-ETAi*Dera31
    a32=a32-ETAi*Dera32
    a33=a33-ETAi*Dera33
    a34=a34-ETAi*Dera34
    
    a41=a41-ETAi*Dera41
    a42=a42-ETAi*Dera42
    a43=a43-ETAi*Dera43
    a44=a44-ETAi*Dera44
    
    a51=a51-ETAi*Dera51
    a52=a52-ETAi*Dera52
    a53=a53-ETAi*Dera53
    a54=a54-ETAi*Dera54
    
    a61=a61-ETAi*Dera61
    a62=a62-ETAi*Dera62
    a63=a63-ETAi*Dera63
    a64=a64-ETAi*Dera64
    
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2+Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
    
    NEXT
    ENDIF
    ENDIF
    //ENDIF
    
    ///////////////////    NEW PREDICTION  ///////////////////
    //    >>> INPUT NEURONS <<<
    input1=variable1
    input2=variable2
    input3=variable3
    input4=variable4
    //   >>> FIRST LAYER OF NEURONS <<<
    F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
    F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
    F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
    F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
    F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
    F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
    F1=1/(1+EXP(-1*F1))
    F2=1/(1+EXP(-1*F2))
    F3=1/(1+EXP(-1*F3))
    F4=1/(1+EXP(-1*F4))
    F5=1/(1+EXP(-1*F5))
    F6=1/(1+EXP(-1*F6))
    //   >>> OUTPUT NEURONS <<<
    output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
    output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
    output1=1/(1+EXP(-1*output1))
    output2=1/(1+EXP(-1*output2))
    ENDIF
    
    return output1 as "prediction long", output2 as "prediction short", threshold as "threshold"
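To see it in action in a strategy, one possibility (an untested sketch; "NNprediction" is an assumed name for the saved indicator) is to call it and act on the returned values:

plong, pshort, thr = CALL "NNprediction"
if plong > thr and not onmarket then
buy 1 contract at market
endif
if pshort > thr and not onmarket then
sellshort 1 contract at market
endif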
    
#79538
    didi059
    Participant
    Junior

    Leo

I found the theory and your work on it an eye-opener…

As mentioned by Nicolas, the choice of variables is a crucial element. Presently all your variables are correlated with each other, since they are all derived from the price action. Why not incorporate a variable independent of the price action, such as volume?
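For example (just a sketch; the 20-bar lookback is arbitrary), relative volume would give an input that is independent of price:

// hypothetical price-independent input: volume relative to its 20-bar average
variablevol = volume/average[20](volume)-1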

    Also I would try the DPO (Detrended Price Oscillator) indicator as it gives a better sense of a cycle’s typical high/low range as well as its duration.

With these new variables you would have two momentum variables and two cyclic variables, so the intermediate functions could perhaps be reduced from 4 to 2, which would speed up processing.

    I cannot simulate it (trial version only..)

    Anyway, GREAT JOB!

#79540
    Leo
    Participant
    Veteran

    Hi didi059,

Thanks for your words of motivation.

As inputs you can choose whatever you like, even (only in a strategy) values from other timeframes.

The initial values of the parameters don’t matter any more. In fact, that’s the beauty of the neural network: the algorithm finds these values and optimises them continuously.

Now that we have implemented artificial intelligence in ProRealTime, the options are almost infinite; therefore it is not possible for me to test your set of inputs myself.

#79552
    Nicolas
    Keymaster
    Master

Good input by @didi059 about volume as an input (but there are still no volumes on Forex pairs and CFDs). About the DPO, it uses future data in its default settings, so don’t count on it for trading systems. Anyway, thank you Leo for your latest code; I’ll try to take time this week to get a better understanding of it. I must admit that I’m still worried about the way you are using backpropagation, and about its accuracy, because of curve fitting.

Did you try to load the indicator only up to a given date, to simulate a forward test of what it has learned, with an encapsulation of the code like this:

    if date<20180701 then 
     // all the indicator's learning functions code
    endif
    // other codes that plot signals

EDIT: I know it doesn’t really work like that, because the learning is continuous and each new signal is effectively its own forward test, but it could be a good proof of concept.
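A variant of the same proof of concept (again only a sketch) would be to freeze the weight updates after the cutoff date while the prediction part keeps running with the last learned weights:

learning = date<20180701
if learning and (classifierlong=1 or classifiershort=1) then
 // backpropagation / weight-update code only
endif
// the prediction section stays outside the IF, so it runs on every bar
// with the weights as they stood at the cutoff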

#79568
    Leo
    Participant
    Veteran

    Hi Nicolas,

Curve fitting? 43 parameters curve-fitted! I don’t have any doubt that it is curve fitting… the thing is, the code is curve fitting continuously and adapting continuously, and the more neurons the system has, the more “curve-fitting packs” can be “stored” in the neural network. That’s why I see a great future for this way of working.

But I am not naive; I know that the market will do whatever it wants, and a prediction learned on recent data may not remain valid.

I have not implemented the code in a strategy yet. In a strategy we could even set as input variables values from different timeframes, for example RSI[14] on 5 min, RSI[14] on 15 min, RSI[14] on 1 hour and RSI[14] on 4 hours.
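For example (a sketch; the TIMEFRAME instruction is only available in ProBacktest strategies in recent ProRealTime versions, not in indicators):

timeframe(1 hour, updateonclose)
rsi60 = rsi[14](close)
timeframe(15 minutes, updateonclose)
rsi15 = rsi[14](close)
timeframe(default)
variable1 = rsi[14](close) // base timeframe, e.g. 5 min
variable2 = rsi15
variable3 = rsi60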

About your opinion of the backpropagation algorithm, I am not comfortable with it either (at line 131 I wrote “this has to be improved”). I am working on a new one.

Let me finish the new code and then we can review it.

By the way, I am not comfortable with the classifier either.

#79577
    GraHal
    Participant
    Master

I have nearly got a system ready to post, but an odd thing happened.

I was optimising 4 variables over 100k bars (2 x TP and 2 x SL, one each for long and short) and the profit result was over 6K. I had one eye on the TV and entered one TP as 100, but then realised it should have been 150.

So just to make sure, I re-optimised that one TP only, but with the value as 150 the profit was near 3K.

Is this how the self-learning neural network is supposed to operate? Optimising over 7000 combinations can produce a better profit result than optimising over 14 combinations, even though the same values are shown as optimum?

The only explanation I can see is that Leo’s neurons were self-learning and changing the variables within the neural network during the 7000 optimising combinations? (Echoes of Skynet / Terminator here … scary but exciting? 🙂 🙂 )

Hope the above makes sense; just say or ask away if not.

#79591
    Leo
    Participant
    Veteran

Hard to say, GraHal.

I would optimise the hyperparameters, i.e. the parameters that control the learning algorithm, like ETA… more than optimising, it is about finding the correct value. For the data I chose as inputs, ETA is around 0.1.

Another parameter to be optimised is the decision threshold: how high an output must be before we act on the prediction, for example 0.6 or 0.7 (the outputs are always values from 0 to 1).

    The use of pending orders is highly recommended.
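For instance (only a sketch of the idea; output1, output2 and threshold are the values from the indicator code, and the entry levels are illustrative):

if output1>threshold and not onmarket then
buy 1 contract at high+pipsize stop
endif
if output2>threshold and not onmarket then
sellshort 1 contract at low-pipsize stop
endif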

You can also test different kinds of inputs, like volume.

Values for the classifier are irrelevant; just set one combination and run.

Testing over 100K bars must take ages, no?
