1. ReLU and PReLU (parametric ReLU) activation functions, see https://arxiv.org/pdf/1502.01852.pdf (a PReLU sketch follows this list)
2. test (P)ReLU for MLP and QBPTT; the task is Protein-SS-Pred
3. RMSprop, with hyperparameters passed as awk -v variables (mu=0.99, eta=0.001, eps=1e0; invocation example below):
       mu1=(1-mu);
       ldx=dx[nl,ii,ih];                                # accumulated gradient for this weight
       dw[nl,ii,ih]*=mu;                                # decay the running mean of squared gradients
       dw[nl,ii,ih]+=mu1*ldx*ldx;                       # add the new squared gradient
       w[nl,ii,ih]+=eta*ldx/(sqrt(dw[nl,ii,ih])+eps);   # RMS-scaled weight update
       dx[nl,ii,ih]=0;                                  # reset the gradient accumulator
4. with a global flag, change all BP-based algorithms to support the cross-entropy error (see the sketch after this list):
       if (cross_entropy==0){   # cross_entropy==1 means omit f' in the output layer
5. dropout (the truncated fragment is completed in the sketch after this list):
       dropout[2]=0.2;
       if (rand()
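A minimal awk sketch of the (P)ReLU forward value and derivative for item 1, assuming a hypothetical per-layer slope array a[] for the learnable PReLU parameter (the paper initializes it to 0.25); setting a[nl]=0 turns PReLU into plain ReLU:

    # prelu(): forward activation; dprelu(): its derivative w.r.t. the input
    function prelu(x, a)  { return (x > 0) ? x : a*x }
    function dprelu(x, a) { return (x > 0) ? 1 : a }
    BEGIN {
        a[2] = 0.25;                                 # assumed initial slope for layer 2
        print prelu(-1.5, a[2]), dprelu(-1.5, a[2]); # -0.375 0.25
    }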
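The -v assignments in item 3 are awk command-line variable settings; a plausible invocation (the script name nn.awk and the data file are hypothetical) would be:

    awk -v mu=0.99 -v eta=0.001 -v eps=1e0 -f nn.awk train.dat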
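Why cross_entropy==1 omits f' in item 4: for a sigmoid (or softmax) output layer with cross-entropy loss, the gradient of the loss w.r.t. the net input collapses to y - t, so the f'(net) factor cancels; with squared error it must be kept. A sketch of the output-layer delta, assuming hypothetical arrays y (output), t (target), and net (pre-activation), with toy values for illustration:

    function dsigmoid(z,   s) { s = 1/(1+exp(-z)); return s*(1-s) }
    BEGIN {
        cross_entropy = 1; ii = 1;
        y[ii] = 0.73; t[ii] = 1; net[ii] = 1.0;      # toy values for illustration
        if (cross_entropy == 1) delta[ii] = y[ii] - t[ii];                       # f' cancels
        else                    delta[ii] = (y[ii] - t[ii]) * dsigmoid(net[ii]); # keep f'
        print delta[ii];                             # -0.27
    }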
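One way the truncated dropout fragment in item 5 could continue, assuming x[nl,ii] holds layer activations and n[nl] the layer size (both hypothetical names), with inverted-dropout rescaling at training time (the rescaling is an assumption, not in the notes):

    BEGIN {
        srand();                                     # seed rand() once per run
        nl = 2; n[nl] = 4; dropout[nl] = 0.2;        # toy layer of 4 units
        for (ii = 1; ii <= n[nl]; ii++) x[nl,ii] = 1;
        for (ii = 1; ii <= n[nl]; ii++) {
            if (rand() < dropout[nl]) x[nl,ii] = 0;                  # drop the unit
            else                      x[nl,ii] /= (1 - dropout[nl]); # rescale survivors (assumed)
        }
        for (ii = 1; ii <= n[nl]; ii++) printf "%g ", x[nl,ii]; print ""
    }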