Hi all, here is my latest version.
It stores the last 5 training samples and improves the network over this information, giving priority to the most recent data. (The network itself is small: 4 inputs, one hidden layer of 6 sigmoid neurons, and 2 sigmoid outputs, one per trade direction.)
// Hyperparameters to be optimized
// ETA=1 //known as the learning rate
// candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier
///////////////// CLASSIFIER /////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize
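// NOTE: the classifier labels a past bar as a winning setup when, from that
// bar's close, price later reached the profit target (ProfitRiskRatio times
// the risk) before touching the stop at low-ExtraStopLoss, with the spread
// counted on both sides; the long and short loops below apply this symmetrically.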
//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT
//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT
///////////////////////// NEURAL NETWORK ///////////////////
// ...INITIAL VALUES...
once a11=1
once a12=1
once a13=1
once a14=1
once a21=1
once a22=1
once a23=1
once a24=1
once a31=1
once a32=1
once a33=1
once a34=1
once a41=1
once a42=1
once a43=1
once a44=1
once a51=1
once a52=1
once a53=1
once a54=1
once a61=1
once a62=1
once a63=1
once a64=1
once Fbias1=0
once Fbias2=0
once Fbias3=0
once Fbias4=0
once Fbias5=0
once Fbias6=0
once b11=1
once b12=1
once b13=1
once b14=1
once b15=1
once b16=1
once b21=1
once b22=1
once b23=1
once b24=1
once b25=1
once b26=1
once Obias1=0
once Obias2=0
// ...DEFINITION OF INPUTS...
SMA20=average[min(20,barindex)](close)
SMA200=average[min(200,barindex)](close)
SMA2400=average[min(2400,barindex)](close) //in the 5-minute time frame this equals the hourly 200-period SMA
variable1= RSI[14](close) // or to be defined
variable2= (close-SMA20)/SMA20 *100 //or to be defined
variable3= (SMA20-SMA200)/SMA200 *100 //or to be defined
variable4= (SMA200-SMA2400)/SMA2400 *100 // to be defined
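// NOTE: variable1 (RSI) lives in 0..100 while the other inputs are small
// percentages; an untested suggestion is to rescale it, e.g.
// variable1=(RSI[14](close)-50)/50, so all four inputs share a comparable
// scale and a single ETA suits every weight.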
// >>> LEARNING PROCESS <<<
// If the classifier has detected a winning trade in the past
//IF hour > 7 and hour < 21 then
//STORING THE LEARNING DATA
IF classifierlong=1 or classifiershort=1 THEN
BBBBBcandleentry=BBBBcandleentry
BBBBBY1=BBBBY1
BBBBBY2=BBBBY2
BBBBcandleentry=BBBcandleentry
BBBBY1=BBBY1
BBBBY2=BBBY2
BBBcandleentry=BBcandleentry
BBBY1=BBY1
BBBY2=BBY2
BBcandleentry=Bcandleentry
BBY1=BY1
BBY2=BY2
Bcandleentry=max(candleentrylong,candleentryshort)
BY1=classifierlong
BY2=classifiershort
ENDIF
IF BARINDEX > 2500 THEN
IF classifierlong=1 or classifiershort=1 THEN
IF hour > 8 and hour < 21 then
FOR i=1 to 5 DO // THIS HAS TO BE IMPROVED
IF i = 1 THEN
candleentry=BBBBBcandleentry
Y1=BBBBBY1
Y2=BBBBBY2
ENDIF
IF i = 2 THEN
candleentry=BBBBcandleentry
Y1=BBBBY1
Y2=BBBBY2
ENDIF
IF i = 3 THEN
candleentry=BBBcandleentry
Y1=BBBY1
Y2=BBBY2
ENDIF
IF i = 4 THEN
candleentry=BBcandleentry
Y1=BBY1
Y2=BBY2
ENDIF
IF i = 5 THEN
candleentry=Bcandleentry
Y1=BY1
Y2=BY2
ENDIF
// >>> INPUT FOR NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]
ETAi=(ETA/5)*i //Learning Rate
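// NOTE: ETAi grows linearly with i, so the oldest stored sample (i=1) is
// trained with ETA/5 and the most recent (i=5) with the full ETA; this is
// where the "prioritise the most recent data" idea is implemented.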
// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))
// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
// >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
// ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = -( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
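// NOTE: with sigmoid outputs and cross-entropy cost the chain rule collapses
// to dCOST/dz1 = output1-Y1, where z1 is the pre-activation of output1, so
// every gradient below is (output-Y) times whatever feeds that weight.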
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6
//Implementing BackPropagation
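// NOTE: plain stochastic gradient descent, each parameter moves against its
// gradient: w = w - ETAi*dCOST/dw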
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26
// >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4
//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
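//NOTE: GradientNorm is an optional diagnostic: plotting it shows whether the
//updates are shrinking (converging) or blowing up (diverging).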
NEXT
ENDIF
ENDIF
//ENDIF
/////////////////// NEW PREDICTION ///////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4
// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))
// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
ENDIF
return output1 coloured(0,150,0) style(line,2) as "prediction long" , output2 coloured(200,0,0) style(line,2) as "prediction short",0.5 coloured(0,0,200) as "0.5", 0.6 coloured(0,0,200) as "0.6", 0.7 coloured(0,0,200) as "0.7", 0.8 coloured(0,0,200) as "0.8"
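As a usage illustration (not part of the original post), a ProBacktest strategy could call this indicator and trade the crossings of the 0.5 line. This is only a sketch: the indicator name "NeuralPrediction" and the parameter order are assumptions, and the call captures just the first two plotted values.
// Hypothetical usage sketch: "NeuralPrediction" is an assumed name for the
// indicator saved from the code above, with its four hyperparameters passed in.
predLong, predShort = CALL "NeuralPrediction"[1, 7, 2, 0.9] // ETA, candlesback, ProfitRiskRatio, spread
IF predLong CROSSES OVER 0.5 THEN
BUY 1 CONTRACT AT MARKET
ENDIF
IF predShort CROSSES OVER 0.5 THEN
SELLSHORT 1 CONTRACT AT MARKET
ENDIF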
Testing 100K bars must take ages, no?
Yes, it was taking ages, but I was doing other tasks so, for once, I didn't mind!
I wasn't optimising any variables / hyperparameters in your code … just my TP and SL. While running 7000 combinations over 100k bars, it was as if the neural network was self-learning and auto-changing values within your code to give a higher overall profit than if I had simply input the optimised values of TP and SL and then pressed "Probacktest my System" for a one-combination result.
Maybe I will try the same exercise again tomorrow and post results to prove I am not deluded!? 🙂
I just realised why I might have been confusing you Leo! 🙂
My two most recent posts above were while I was optimising my Strategy / System (not your Indicator).
I should have posted on my Systems Topic (I will in future to save confusion).
If a Mod / Nicolas wants to move mine (#79577), Leo's answer (#79591) and mine (#79595) to my Systems Topic below then feel free? And then delete this post?
Hi all,
Here is another version of the neural network; I improved the backpropagation loop a bit.
I also changed the inputs (they can be whatever you want, as long as ETA is calibrated).
// Hyperparameters to be optimized
// ETA=0.05 //known as the learning rate
//candlesback=7 // for the classifier
//ProfitRiskRatio=2 // for the classifier
//spread=0.9 // for the classifier
//P1=20 //FOR CURVE AS INPUT
//P2=200 //FOR CURVE AS INPUT
///////////////// CLASSIFIER /////////////
myATR=average[20](range)+std[20](range)
ExtraStopLoss=MyATR
//ExtraStopLoss=3*spread*pipsize
//for long trades
classifierlong=0
FOR scanL=1 to candlesback DO
IF classifierlong[scanL]=1 then
BREAK
ENDIF
LongTradeLength=ProfitRiskRatio*(close[scanL]-(low[scanL]-ExtraStopLoss[scanL]))
IF close[scanL]+LongTradeLength < high-spread*pipsize then
IF lowest[scanL+1](low) > low[scanL]-ExtraStopLoss[scanL]+spread*pipsize then
classifierlong=1
candleentrylong=barindex-scanL
BREAK
ENDIF
ENDIF
NEXT
//for short trades
classifiershort=0
FOR scanS=1 to candlesback DO
IF classifiershort[scanS]=1 then
BREAK
ENDIF
ShortTradeLength=ProfitRiskRatio*((high[scanS]-close[scanS])+ExtraStopLoss[scanS])
IF close[scanS]-ShortTradeLength > low+spread*pipsize then
IF highest[scanS+1](high) < high[scanS]+ExtraStopLoss[scanS]-spread*pipsize then
classifiershort=1
candleentryshort=barindex-scanS
BREAK
ENDIF
ENDIF
NEXT
///////////////////////// NEURAL NETWORK ///////////////////
// ...INITIAL VALUES...
once a11=1
once a12=1
once a13=1
once a14=1
once a21=1
once a22=1
once a23=1
once a24=1
once a31=1
once a32=1
once a33=1
once a34=1
once a41=1
once a42=1
once a43=1
once a44=1
once a51=1
once a52=1
once a53=1
once a54=1
once a61=1
once a62=1
once a63=1
once a64=1
once Fbias1=0
once Fbias2=0
once Fbias3=0
once Fbias4=0
once Fbias5=0
once Fbias6=0
once b11=1
once b12=1
once b13=1
once b14=1
once b15=1
once b16=1
once b21=1
once b22=1
once b23=1
once b24=1
once b25=1
once b26=1
once Obias1=0
once Obias2=0
// ...DEFINITION OF INPUTS...
//ANGLE DEFINITION
ONCE PANGLE1=ROUND(SQRT(P1/2))
CURVE1=AVERAGE[P1](CLOSE)
ANGLE1=ATAN(CURVE1-CURVE1[1])*180/3.1416
ANGLEAVERAGE1=WeightedAverage[PANGLE1](ANGLE1)
ONCE PANGLE2=ROUND(SQRT(P2/2))
CURVE2=AVERAGE[P2](CLOSE)
ANGLE2=ATAN(CURVE2-CURVE2[1])*180/3.1416
ANGLEAVERAGE2=WeightedAverage[PANGLE2](ANGLE2)
variable1= (close-CURVE1)/CURVE1 *100 //or to be defined
variable2= (CURVE1-CURVE2)/CURVE2 *100 //or to be defined
variable3= ANGLEAVERAGE1 // to be defined
variable4= ANGLEAVERAGE2 // to be defined
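// NOTE: the sigmoid saturates quickly for large inputs, so if you swap in
// your own variables keep them in a small, comparable range or recalibrate
// ETA, as noted above.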
// >>> LEARNING PROCESS <<<
// If the classifier has detected a winning trade in the past
//IF hour > 7 and hour < 21 then
//STORING THE LEARNING DATA
IF classifierlong=1 or classifiershort=1 THEN
candleentry0010=candleentry0009
Y10010=Y10009
Y20010=Y20009
candleentry0009=candleentry0008
Y10009=Y10008
Y20009=Y20008
candleentry0008=candleentry0007
Y10008=Y10007
Y20008=Y20007
candleentry0007=candleentry0006
Y10007=Y10006
Y20007=Y20006
candleentry0006=candleentry0005
Y10006=Y10005
Y20006=Y20005
candleentry0005=candleentry0004
Y10005=Y10004
Y20005=Y20004
candleentry0004=candleentry0003
Y10004=Y10003
Y20004=Y20003
candleentry0003=candleentry0002
Y10003=Y10002
Y20003=Y20002
candleentry0002=candleentry0001
Y10002=Y10001
Y20002=Y20001
candleentry0001=max(candleentrylong,candleentryshort)
Y10001=classifierlong
Y20001=classifiershort
ENDIF
IF BARINDEX > 1000 THEN
IF classifierlong=1 or classifiershort=1 THEN
IF hour > 8 and hour < 21 then
FOR i=1 to 10 DO // THERE ARE BETTER IDEAS
ETAi=ETA*(0.7*i/10+0.3) //Learning Rate
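// NOTE: this ramp gives the oldest sample (i=1) 37% of ETA and the newest
// (i=10) the full ETA, with a 0.3 floor so old samples still contribute.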
IF i = 1 THEN
candleentry=candleentry0010
Y1=Y10010
Y2=Y20010
ENDIF
IF i = 2 THEN
candleentry=candleentry0009
Y1=Y10009
Y2=Y20009
ENDIF
IF i = 3 THEN
candleentry=candleentry0008
Y1=Y10008
Y2=Y20008
ENDIF
IF i = 4 THEN
candleentry=candleentry0007
Y1=Y10007
Y2=Y20007
ENDIF
IF i = 5 THEN
candleentry=candleentry0006
Y1=Y10006
Y2=Y20006
ENDIF
IF i = 6 THEN
candleentry=candleentry0005
Y1=Y10005
Y2=Y20005
ENDIF
IF i = 7 THEN
candleentry=candleentry0004
Y1=Y10004
Y2=Y20004
ENDIF
IF i = 8 THEN
candleentry=candleentry0003
Y1=Y10003
Y2=Y20003
ENDIF
IF i = 9 THEN
candleentry=candleentry0002
Y1=Y10002
Y2=Y20002
ENDIF
IF i = 10 THEN
candleentry=candleentry0001
Y1=Y10001
Y2=Y20001
ENDIF
// >>> INPUT FOR NEURONS <<<
input1=variable1[barindex-candleentry]
input2=variable2[barindex-candleentry]
input3=variable3[barindex-candleentry]
input4=variable4[barindex-candleentry]
// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))
// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
// >>> PARTIAL DERIVATIVES OF COST FUNCTION <<<
// ... CROSS-ENTROPY AS COST FUNCTION ...
// COST = -( Y1*LOG(output1)+(1-Y1)*LOG(1-output1) ) - ( Y2*LOG(output2)+(1-Y2)*LOG(1-output2) )
DerObias1 = (output1-Y1) * 1
DerObias2 = (output2-Y2) * 1
Derb11 = (output1-Y1) * F1
Derb12 = (output1-Y1) * F2
Derb13 = (output1-Y1) * F3
Derb14 = (output1-Y1) * F4
Derb15 = (output1-Y1) * F5
Derb16 = (output1-Y1) * F6
Derb21 = (output2-Y2) * F1
Derb22 = (output2-Y2) * F2
Derb23 = (output2-Y2) * F3
Derb24 = (output2-Y2) * F4
Derb25 = (output2-Y2) * F5
Derb26 = (output2-Y2) * F6
//Implementing BackPropagation
Obias1=Obias1-ETAi*DerObias1
Obias2=Obias2-ETAi*DerObias2
b11=b11-ETAi*Derb11
b12=b12-ETAi*Derb12
b13=b13-ETAi*Derb13
b14=b14-ETAi*Derb14
b15=b15-ETAi*Derb15
b16=b16-ETAi*Derb16
b21=b21-ETAi*Derb21
b22=b22-ETAi*Derb22
b23=b23-ETAi*Derb23
b24=b24-ETAi*Derb24
b25=b25-ETAi*Derb25
b26=b26-ETAi*Derb26
// >>> PARTIAL DERIVATIVES OF COST FUNCTION (HIDDEN LAYER) <<<
DerFbias1 = (output1-Y1) * b11 * F1*(1-F1) * 1 + (output2-Y2) * b21 * F1*(1-F1) * 1
DerFbias2 = (output1-Y1) * b12 * F2*(1-F2) * 1 + (output2-Y2) * b22 * F2*(1-F2) * 1
DerFbias3 = (output1-Y1) * b13 * F3*(1-F3) * 1 + (output2-Y2) * b23 * F3*(1-F3) * 1
DerFbias4 = (output1-Y1) * b14 * F4*(1-F4) * 1 + (output2-Y2) * b24 * F4*(1-F4) * 1
DerFbias5 = (output1-Y1) * b15 * F5*(1-F5) * 1 + (output2-Y2) * b25 * F5*(1-F5) * 1
DerFbias6 = (output1-Y1) * b16 * F6*(1-F6) * 1 + (output2-Y2) * b26 * F6*(1-F6) * 1
Dera11 = (output1-Y1) * b11 * F1*(1-F1) * input1 + (output2-Y2) * b21 * F1*(1-F1) * input1
Dera12 = (output1-Y1) * b11 * F1*(1-F1) * input2 + (output2-Y2) * b21 * F1*(1-F1) * input2
Dera13 = (output1-Y1) * b11 * F1*(1-F1) * input3 + (output2-Y2) * b21 * F1*(1-F1) * input3
Dera14 = (output1-Y1) * b11 * F1*(1-F1) * input4 + (output2-Y2) * b21 * F1*(1-F1) * input4
Dera21 = (output1-Y1) * b12 * F2*(1-F2) * input1 + (output2-Y2) * b22 * F2*(1-F2) * input1
Dera22 = (output1-Y1) * b12 * F2*(1-F2) * input2 + (output2-Y2) * b22 * F2*(1-F2) * input2
Dera23 = (output1-Y1) * b12 * F2*(1-F2) * input3 + (output2-Y2) * b22 * F2*(1-F2) * input3
Dera24 = (output1-Y1) * b12 * F2*(1-F2) * input4 + (output2-Y2) * b22 * F2*(1-F2) * input4
Dera31 = (output1-Y1) * b13 * F3*(1-F3) * input1 + (output2-Y2) * b23 * F3*(1-F3) * input1
Dera32 = (output1-Y1) * b13 * F3*(1-F3) * input2 + (output2-Y2) * b23 * F3*(1-F3) * input2
Dera33 = (output1-Y1) * b13 * F3*(1-F3) * input3 + (output2-Y2) * b23 * F3*(1-F3) * input3
Dera34 = (output1-Y1) * b13 * F3*(1-F3) * input4 + (output2-Y2) * b23 * F3*(1-F3) * input4
Dera41 = (output1-Y1) * b14 * F4*(1-F4) * input1 + (output2-Y2) * b24 * F4*(1-F4) * input1
Dera42 = (output1-Y1) * b14 * F4*(1-F4) * input2 + (output2-Y2) * b24 * F4*(1-F4) * input2
Dera43 = (output1-Y1) * b14 * F4*(1-F4) * input3 + (output2-Y2) * b24 * F4*(1-F4) * input3
Dera44 = (output1-Y1) * b14 * F4*(1-F4) * input4 + (output2-Y2) * b24 * F4*(1-F4) * input4
Dera51 = (output1-Y1) * b15 * F5*(1-F5) * input1 + (output2-Y2) * b25 * F5*(1-F5) * input1
Dera52 = (output1-Y1) * b15 * F5*(1-F5) * input2 + (output2-Y2) * b25 * F5*(1-F5) * input2
Dera53 = (output1-Y1) * b15 * F5*(1-F5) * input3 + (output2-Y2) * b25 * F5*(1-F5) * input3
Dera54 = (output1-Y1) * b15 * F5*(1-F5) * input4 + (output2-Y2) * b25 * F5*(1-F5) * input4
Dera61 = (output1-Y1) * b16 * F6*(1-F6) * input1 + (output2-Y2) * b26 * F6*(1-F6) * input1
Dera62 = (output1-Y1) * b16 * F6*(1-F6) * input2 + (output2-Y2) * b26 * F6*(1-F6) * input2
Dera63 = (output1-Y1) * b16 * F6*(1-F6) * input3 + (output2-Y2) * b26 * F6*(1-F6) * input3
Dera64 = (output1-Y1) * b16 * F6*(1-F6) * input4 + (output2-Y2) * b26 * F6*(1-F6) * input4
//Implementing BackPropagation
Fbias1=Fbias1-ETAi*DerFbias1
Fbias2=Fbias2-ETAi*DerFbias2
Fbias3=Fbias3-ETAi*DerFbias3
Fbias4=Fbias4-ETAi*DerFbias4
Fbias5=Fbias5-ETAi*DerFbias5
Fbias6=Fbias6-ETAi*DerFbias6
a11=a11-ETAi*Dera11
a12=a12-ETAi*Dera12
a13=a13-ETAi*Dera13
a14=a14-ETAi*Dera14
a21=a21-ETAi*Dera21
a22=a22-ETAi*Dera22
a23=a23-ETAi*Dera23
a24=a24-ETAi*Dera24
a31=a31-ETAi*Dera31
a32=a32-ETAi*Dera32
a33=a33-ETAi*Dera33
a34=a34-ETAi*Dera34
a41=a41-ETAi*Dera41
a42=a42-ETAi*Dera42
a43=a43-ETAi*Dera43
a44=a44-ETAi*Dera44
a51=a51-ETAi*Dera51
a52=a52-ETAi*Dera52
a53=a53-ETAi*Dera53
a54=a54-ETAi*Dera54
a61=a61-ETAi*Dera61
a62=a62-ETAi*Dera62
a63=a63-ETAi*Dera63
a64=a64-ETAi*Dera64
//GradientNorm = SQRT(DerObias1*DerObias1 + DerObias2*DerObias2 + Derb11*Derb11+Derb12*Derb12+Derb13*Derb13+Derb14*Derb14+Derb15*Derb15+Derb16*Derb16 + Derb21*Derb21+Derb22*Derb22+Derb23*Derb23+Derb24*Derb24+Derb25*Derb25+Derb26*Derb26 + DerFbias1*DerFbias1+DerFbias2*DerFbias2+DerFbias3*DerFbias3+DerFbias4*DerFbias4+DerFbias5*DerFbias5+DerFbias6*DerFbias6 + Dera11*Dera11+Dera12*Dera12+Dera13*Dera13+Dera14*Dera14 + Dera21*Dera21+Dera22*Dera22+Dera23*Dera23+Dera24*Dera24 + Dera31*Dera31+Dera32*Dera32+Dera33*Dera33+Dera34*Dera34 + Dera41*Dera41+Dera42*Dera42+Dera43*Dera43+Dera44*Dera44 + Dera51*Dera51+Dera52*Dera52+Dera53*Dera53+Dera54*Dera54 + Dera61*Dera61+Dera62*Dera62+Dera63*Dera63+Dera64*Dera64)
NEXT
ENDIF
ENDIF
//ENDIF
/////////////////// NEW PREDICTION ///////////////////
// >>> INPUT NEURONS <<<
input1=variable1
input2=variable2
input3=variable3
input4=variable4
// >>> FIRST LAYER OF NEURONS <<<
F1=a11*input1+a12*input2+a13*input3+a14*input4+Fbias1
F2=a21*input1+a22*input2+a23*input3+a24*input4+Fbias2
F3=a31*input1+a32*input2+a33*input3+a34*input4+Fbias3
F4=a41*input1+a42*input2+a43*input3+a44*input4+Fbias4
F5=a51*input1+a52*input2+a53*input3+a54*input4+Fbias5
F6=a61*input1+a62*input2+a63*input3+a64*input4+Fbias6
F1=1/(1+EXP(-1*F1))
F2=1/(1+EXP(-1*F2))
F3=1/(1+EXP(-1*F3))
F4=1/(1+EXP(-1*F4))
F5=1/(1+EXP(-1*F5))
F6=1/(1+EXP(-1*F6))
// >>> OUTPUT NEURONS <<<
output1=b11*F1+b12*F2+b13*F3+b14*F4+b15*F5+b16*F6+Obias1
output2=b21*F1+b22*F2+b23*F3+b24*F4+b25*F5+b26*F6+Obias2
output1=1/(1+EXP(-1*output1))
output2=1/(1+EXP(-1*output2))
ENDIF
return output1 coloured(0,150,0) style(line,2) as "prediction long" , output2 coloured(200,0,0) style(line,2) as "prediction short",0.5 coloured(0,0,200) as "0.5", 0.6 coloured(0,0,200) as "0.6", 0.7 coloured(0,0,200) as "0.7", 0.8 coloured(0,0,200) as "0.8"
Here is an example with screenshots. I feel like I'm losing my grip! ha
If I optimise P1 and P2 then the top result (values 38 and 100) is Leo 18.
If I then insert values 38 and 100 into my System I get result Leo 19.
Anybody any suggestions or comments, esp Leo?
Edit / PS
Don’t worry just yet … I may have the reason, more later! 🙂
I've got into the habit of naming my variables "A" + line number, so for line 8 (line 11 in my System) I used …
P2=A11 //FOR CURVE AS INPUT
But there is also an a11 in Leo's code, so by mistake I was setting P2 = a11, yet it produced great results!! 🙂
Leo's Neural Network code is beyond me, but I will try to explore what is going on with my blunder! 🙂
Also I will set my corrupted System going on a Demo Forward Test and report results over on my own Topic using Leo's Neural Network code.
So in summary … a storm in a teacup, but I may have unearthed something interesting?
I read your first code and was thinking about it; then my daughter finally slept again (4:33 am) and I continued reading.
You take one weight, a11, and reset it every time to a value of 38. Do not feel disappointed; I do not know either what effect that has on that particular neuron and how it propagates to the others. I can imagine the effect was like giving alcohol to that neuron. Haha
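For anyone following along, here is a minimal sketch of the collision (hypothetical placement; ProBuilder identifiers are case-insensitive):
// A11 was meant to be the strategy's own optimised variable (value 38),
// but it is the same identifier as the neural weight a11, so the optimiser
// pins that weight to 38 instead of letting backpropagation learn it.
once a11=1 // Leo's first hidden-layer weight
P2=A11 // reads the very same variable, now fixed at 38 by the optimiser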
Awesome video about the math of neural networks. It is what I am coding here.
Leo
There is software published by Google for deep learning: "TensorFlow".
In the following presentation you will find interesting topics.
In the meantime, I have already made a list of additional data that could be included in the system:
Ret: daily return of the asset
HV: realized volatility over the past 5 sessions
M5: momentum over 5 periods
M10: momentum over 10 periods
VIX9D (formerly VXST): 9-day volatility index of the S&P 500 (= short-term market sentiment)
VIX: volatility index of the S&P 500 (= long-term market sentiment)
VVIX: volatility of VIX (= market sentiment momentum)
MO: month of the year (seasonality)
DAY: day of the week (seasonality)
Hi didi059,
there are other machine-learning libraries, like scikit-learn, but we would need to learn another programming language and another interface with the broker. So far I do not know how to implement those libraries. We knew from the beginning that ProRealTime is not the right language for artificial intelligence.
Those indicators sound good; I will keep them in mind.
Hi Leo,
Any idea what I should do to make it work universally with stocks?
Best,
Chris
Hi, actually it is already universal; whether it works, that is the question 🙂
From line 106 to 123 you add your indicators or variables, and the output is a prediction of going long or short based on the variables you choose.
Actually, in a strategy those variables can come from other time frames, as sketched below.
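A minimal sketch of that idea (an assumption on my part, using the multi-timeframe syntax available in recent ProBacktest versions; the period values are examples):
// Hypothetical: compute one input on hourly bars, then feed it to the
// network alongside the default-timeframe inputs.
TIMEFRAME(1 hour, updateonclose)
hourlySMA = average[20](close) // computed on hourly bars
TIMEFRAME(default)
variable1 = (close-hourlySMA)/hourlySMA*100 // fed to the network as before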
Hi,
thanks for the code.
If I'm not wrong, you use "if barindex > xxxx then" to create a learning period.
Do you think that if I replace BARINDEX with "DEFPARAM PRELOADBARS = xxx" it could create a learning period and start trading immediately?
Yes, you are right.
I use that to display it as an indicator. Preloaded bars are also used for the learning process.
I would keep the "if barindex > xxxx then" guard anyway, because there are so many values involved that it is better to allow time to load everything.
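A minimal sketch combining both ideas (the bar counts are example values):
DEFPARAM PRELOADBARS = 2000 // preload history so the long averages are warm
IF barindex > 1000 THEN // still reserve a learning period before acting
// learning loop and prediction logic here
ENDIF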