"lol"

Not easy maths....


delta weight adjustment = learning rate * original activation * (expected output - actual output)
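As a quick sketch of that rule in Python (the thread's code is VB.NET; the function and parameter names here are just placeholders, not from anyone's actual code):

```python
# Delta rule for a single weight (perceptron-style learning).
# All names are illustrative only.
def delta_weight(learning_rate, activation, expected, actual):
    """delta = learning rate * original activation * (expected - actual)."""
    return learning_rate * activation * (expected - actual)

# Example: lr = 0.1, activation = 0.5, expected = 1.0, actual = 0.8
# -> 0.1 * 0.5 * 0.2 = 0.01
```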

@korrelan thanks very much.

I think that this one is used for the forward propagation rules (only on the hidden layer?). I used it on the input layer (silly me); I'm wondering if it makes a difference...

I will experiment...

I've done the forward ANN pass the same as that formula, and the backward pass with the derivatives. I only have derivative formulas for the sigmoid and the hyperbolic tangent; when drawn they look really similar in shape, so I'm sure even these activation functions behave much the same...

I'm going to try to figure out the Gaussian derivative... among the others... although some do not have derivatives.
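For reference, here are those three derivatives in a minimal Python sketch (assuming the Gaussian activation means exp(-x^2); function names are my own, not from the thread's code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)           # sigma'(x) = sigma(x) * (1 - sigma(x))

def tanh_derivative(x):
    t = math.tanh(x)
    return 1.0 - t * t             # tanh'(x) = 1 - tanh(x)^2

def gaussian(x):
    return math.exp(-x * x)

def gaussian_derivative(x):
    return -2.0 * x * gaussian(x)  # d/dx exp(-x^2) = -2x * exp(-x^2)
```

Note the handy trick: sigmoid and tanh derivatives can be written in terms of the function's own output, which is why backprop code often reuses the forward activation.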

my simple tests are just the times table... inputs 1, 2, 3, 4, 5, 6, 7, 8, 9, 10; outputs 2, 4, 6, 8, 10, 12, 14, 16, 18, 20

then see if it can do the three times table...
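One gotcha worth flagging with that test setup: a sigmoid output unit can only produce values in (0, 1), so raw targets like 20 are unreachable unless the data is scaled. A sketch of the scaled training pairs (the divisors are my assumption, just one common way to normalise):

```python
# Two-times table, scaled into (0, 1) for a sigmoid output unit.
inputs  = [n / 10.0 for n in range(1, 11)]       # 1..10 -> 0.1..1.0
targets = [2 * n / 20.0 for n in range(1, 11)]   # 2..20 -> 0.1..1.0

# Generalisation check: the three-times table (unscale outputs to compare).
test_targets = [3 * n for n in range(1, 11)]
```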

I cracked it today: calculating the deltas for each layer, then calculating the errors for each layer and adjusting the weights relative to their respective errors for each node in each layer. (Real tricky!)

Private Function ForwardLearning(ByRef NN As NeuralNetwork) As NeuralNetwork
    'Apply learning rule: Weight = Weight + LocalError * LearningRate * Input
    'inputLayer?
    NN.Params.BiasedTerm = NN.Params.BiasedTerm + NN.Params.LearningRate * localError
    For Each Node As Neuron In NN.InputLayer.Nodes
        Node.weight = Node.weight + localError * NN.Params.LearningRate * Node.input
    Next
    For Each Node As Neuron In NN.HiddenLayer.Nodes
        Node.weight = Node.weight + localError * NN.Params.LearningRate * Node.input
    Next
    Return NN
End Function

Private Function BackwardLearning(ByRef NN As NeuralNetwork) As NeuralNetwork
    Dim deltaOutput As Double = 0.0
    Dim DeltaInput As Double = 0.0
    Dim deltaHidden As Double = 0.0
    Dim deltaOutputError As Double = 0.0
    Dim deltaHiddenError As Double = 0.0

    For Each Node As Neuron In NN.OutputLayer.Nodes
        'Learning rule: Di = LocalError * SigmoidDerivative(Node.input)
        deltaOutput = Node.NeuronError * TransferFunctions.EvaluateTransferFunctionDerivative(Node.input, NN.Params.FunctionType)
        'nw = w + lr * node.output * Di
        Node.weight = Node.weight + NN.Params.LearningRate * Node.output * deltaOutput
        deltaOutputError = deltaOutputError + deltaOutput
    Next

    For Each Node As Neuron In NN.HiddenLayer.Nodes
        'dj = SigmoidDerivative(Node.input) * SumOfDeltaOutput * weights
        deltaHidden = TransferFunctions.EvaluateTransferFunctionDerivative(Node.input, NN.Params.FunctionType) * SumWeights(NN.OutputLayer) * deltaOutputError
        Node.weight = Node.weight + NN.Params.LearningRate * Node.output * deltaHidden
        deltaHiddenError = deltaHiddenError + deltaHidden
    Next

    For Each node As Neuron In NN.InputLayer.Nodes
        DeltaInput = TransferFunctions.EvaluateTransferFunctionDerivative(node.input, NN.Params.FunctionType) * SumWeights(NN.HiddenLayer) * deltaHiddenError
        node.weight = node.weight + NN.Params.LearningRate * node.output * DeltaInput
    Next
    Return NN
End Function
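For cross-checking against a textbook version: in standard backprop, each hidden node's delta uses its own outgoing weight times the output delta, rather than one summed term for the whole layer. A minimal Python sketch of that per-layer delta scheme for a tiny 1-hidden-layer net with one weight per node (all names and shapes are my assumptions, not the NeuralNetwork class above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_deltas(x, target, w_hidden, w_out, lr=0.5):
    # Forward pass: one input, a sigmoid hidden layer, one sigmoid output.
    h_out = [sigmoid(w * x) for w in w_hidden]
    o_out = sigmoid(sum(w * h for w, h in zip(w_out, h_out)))

    # Output delta: error * derivative at the output
    delta_o = (target - o_out) * o_out * (1.0 - o_out)

    # Hidden deltas: each node's derivative * (its own outgoing weight * delta_o)
    delta_h = [h * (1.0 - h) * w * delta_o for h, w in zip(h_out, w_out)]

    # Weight updates, proportional to each node's incoming activation
    new_w_out = [w + lr * delta_o * h for w, h in zip(w_out, h_out)]
    new_w_hidden = [w + lr * d * x for w, d in zip(w_hidden, delta_h)]
    return new_w_out, new_w_hidden, o_out
```

Applying one update and re-running the forward pass should shrink the error, which makes a handy sanity test for any backprop implementation.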

thanks guys..... I'm so close it's biting me in the "Ann"