ok thanks.
mine is just:
public float LogSigmoid(float x)
{
    return (float)(1.0f / (1.0f + System.Math.Exp(-x)));
}
so i guess i can use your solution, but what does the stand for?
nevermind ;-) found it.
I'm sorry, but I don't understand NN terminology very well.
' Standard logistic sigmoid, squashing any input into the range 0..1.
Public Shared Function Sigmoid(ByVal Value As Double) As Double
    Return 1 / (1 + Math.Exp(-Value))
End Function

' Derivative of the sigmoid, used to decide which way to adjust the weights.
Public Shared Function SigmoidDerivative(ByVal Value As Double) As Double
    Return Sigmoid(Value) * (1 - Sigmoid(Value))
End Function
these are the sigmoid functions i used...
I have now fully understood the neural network paradigm, except for some other forms such as RNNs (which I'm working on) and a few others...
But in reality they are as I expected: the outputs provided are not really useful on their own. You could have a load of objects converted into numerical inputs and train the network to recognize these objects as some binary output, which in turn, when looked up in a table representing those binary outputs, could classify or identify what the object was. Now, when asked in natural language "what is this object?", it could quickly classify the object, look it up, then respond with the data found in the table. Yet even so, for a robot that is a lot of processing power, as each new object would need to be converted into numerical inputs, and it would not be able to identify an object unless it fit a previous object in the table. So for a new object which had not previously been saved in the lookup table, it would have to "fit" the object to an existing one; it would still need to be told what the object was in order to classify new objects with the same digital "ID" or class. If an object similar to a stored idea of a bed was scanned, it would recognize "bed".
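For example, the table-lookup step could look something like this (just a rough sketch; the function name, class names and table contents are made up, and it assumes the network's binary outputs arrive as an array of 0s and 1s):

' Rough sketch only: treat the network's binary output as an index into a
' table of previously stored object names.
Public Shared Function LookupClass(ByVal outputBits As Integer()) As String
    ' Convert the binary output vector (e.g. {0, 1, 1}) into an integer index.
    Dim index As Integer = 0
    For Each bit As Integer In outputBits
        index = (index << 1) Or bit
    Next
    ' Hypothetical lookup table mapping indices to stored object names.
    Dim table As String() = {"unknown", "chair", "table", "bed", "lamp", "door", "window", "shelf"}
    If index < table.Length Then
        Return table(index)
    End If
    Return "unknown"
End Function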
This type of machine learning or classification still needs thought on how the data should be taken in to become some form of input that matches a desired output (whatever binary number would match the stored table lookup).....
The networks use all types of transfer functions which produce only numbers between 0 and 1, so if it outputted a number such as 0.9987 then you could say that it is a close match to 1, but it is still not 1. So the last function in the network should be a simple binary function: if it's below 0.5 it becomes 0, if it's at or above 0.5 it becomes 1, just to make the outputs easy to understand. The inputs can be any numerical value such as height, weight, color etc....
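A minimal sketch of that final binary step could be something like this (the name BinaryStep and the 0.5 threshold are just my own choice here):

' Rough sketch: squash the final 0..1 activation into a hard 0 or 1.
Public Shared Function BinaryStep(ByVal activation As Double) As Integer
    ' Anything at or above 0.5 counts as 1, anything below counts as 0,
    ' so an output like 0.9987 simply becomes 1.
    If activation >= 0.5 Then
        Return 1
    Else
        Return 0
    End If
End Function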
To recognise a letter, the old dot-matrix character can be turned into an 8x8 binary grid, and the output would likewise be some binary output (up to 27) denoting which letter it was. Given a handwriting sample, by scanning the letter in as a dot-matrix figure, the binary values for the squares containing the handwriting colour (i.e. "black"), when entered into the input, may detect similar shapes and classify them as one of the 27 numbered outputs. If you set an error threshold and used it as part of the learning rule, then close matches would be detected correctly.....
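The input-encoding step might look roughly like this (just a sketch, assuming the scanned letter comes in as an 8x8 Boolean grid; DotMatrixToInputs is a made-up name):

' Rough sketch: flatten an 8x8 dot-matrix letter (True where the "ink" is)
' into 64 numeric inputs of 0.0 or 1.0 for the network.
Public Shared Function DotMatrixToInputs(ByVal grid As Boolean(,)) As Double()
    Dim inputs(63) As Double
    For row As Integer = 0 To 7
        For col As Integer = 0 To 7
            inputs(row * 8 + col) = If(grid(row, col), 1.0, 0.0)
        Next
    Next
    Return inputs
End Function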
The transfer functions used just enable moving the weight values along the curve produced by the function (the sigmoid waveform), so its derivative knows which way, up or down the curve, to travel towards the intended output. So in reality any transfer function can be used; it's a little experimental, otherwise there would be no need for all the differing functions, as some learn quicker than others. But once you find the one that you're comfortable with, it really doesn't matter....
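For example, a single weight update using the derivative might look roughly like this (just a sketch of a delta-rule style step; UpdateWeight, netSum and learningRate are names I've made up, and it uses the Sigmoid/SigmoidDerivative functions above):

' Rough sketch: the derivative tells us which way along the sigmoid curve to
' move this weight, and the error (target - output) tells us how far.
Public Shared Function UpdateWeight(ByVal weight As Double, ByVal input As Double, ByVal netSum As Double, ByVal target As Double, ByVal learningRate As Double) As Double
    Dim output As Double = Sigmoid(netSum)
    Dim delta As Double = (target - output) * SigmoidDerivative(netSum)
    Return weight + learningRate * delta * input
End Function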
the question for me remains ..... can neural networks create a prediction?