woah I'm stuck (totally baffled about neural nets)

  • 10 Replies
  • 5749 Views
spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
woah I'm stuck (totally baffled about neural nets)
« on: August 05, 2015, 06:39:13 pm »
Help  ::
I'm stuck trying to code a neural network.
I have created the hidden layers and the inputs and outputs, but...

1) Which transfer function / activation function goes with which learning rule...

everywhere I read or watch there's lots of "mathematical notation", when if it was explained in words I would understand it...

one in particular (probably a simple one) is the delta rule for learning and the derivative...

2) when is the derivative applied?
        ''' <summary>
        ''' In a linear neuron the weight(s) represent unknown values to be determined:
        ''' the output could represent the known total price of a meal, the
        ''' inputs the items in the meal, and the
        ''' weights the prices of the individual items.
        ''' There are no hidden layers.
        ''' </summary>
        ''' <remarks>Answers are found by learning the weights of the linear neuron;
        ''' the delta rule is used as the learning rule: WeightChange = LearningRate * Input * LocalError of the neuron</remarks>
        Public Sub Linear()
            ' Output = Bias + (Input * Weight), summed over all inputs

        End Sub

when do I apply the derivative? and what would the derivative be?
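For concreteness, here is the sort of thing I'm trying to get working: one linear neuron trained with the delta rule on the meal example from the comments above (just a sketch; all the names are mine, and with only two meals the individual prices aren't pinned down uniquely).

        Module LinearNeuronSketch
            Sub Main()
                ' Two known meals: (eggs, bacon, toast) -> total price.
                Dim meals(,) As Double = {{1, 1, 1}, {1, 0, 1}}
                Dim prices() As Double = {1.5, 1.35}
                Dim weights() As Double = {0.1, 0.1, 0.1} ' guessed starting prices
                Dim bias As Double = 0.0
                Dim learningRate As Double = 0.05

                For epoch As Integer = 1 To 1000
                    For m As Integer = 0 To prices.Length - 1
                        ' Forward pass: Output = Bias + Sum(Input * Weight), no activation.
                        Dim output As Double = bias
                        For i As Integer = 0 To weights.Length - 1
                            output += meals(m, i) * weights(i)
                        Next
                        ' Delta rule: WeightChange = LearningRate * Input * LocalError.
                        Dim localError As Double = prices(m) - output
                        bias += learningRate * localError
                        For i As Integer = 0 To weights.Length - 1
                            weights(i) += learningRate * meals(m, i) * localError
                        Next
                    Next
                Next
                Console.WriteLine("eggs={0:F2} bacon={1:F2} toast={2:F2}", weights(0), weights(1), weights(2))
            End Sub
        End Module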

I will probably have loads more questions

3) How many hidden layers should I use in a feed-forward network? Or should I just use loads of neurons in a single layer?
Does it make a difference: loads of layers or loads of neurons?
Which network (neuron type) should I design?
I would like to build a feed-forward network first, then add back-propagation...

(I have a sigmoid function and its derivative) (I have a tanh but am not sure of its derivative)

woah ... totally stuck and baffled...
LOL
Sorry if it's wrong to ask... as we usually just chat about ideas..... and figure the rest out for ourselves.

PS: I write all my own code and do not use libraries like NumPy or other premade stuff!! One of the reasons for not using things like Python! <ZX Spectrum BASIC>

« Last Edit: August 05, 2015, 07:06:19 pm by spydaz »

Freddy

  • Administrator
  • Colossus
  • 6855
  • Mostly Harmless
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #1 on: August 05, 2015, 08:12:19 pm »
Hi Spydaz,

It's certainly not wrong to ask :)

I don't know NNs at all yet; I know some here do, though.

You should get some replies, but if not, I know someone I can refer the topic to if they have time. I'll keep an eye on this thread.

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #2 on: August 05, 2015, 08:49:12 pm »
I have created so many functions for learning, yet this has eluded me... thanks...

The ability for a conversational AI to create a neural network to answer questions based on past cases, then save the network in a database for recall when presented with a new case, is I think vital to giving the AI problem-solving from past cases... also the ability to determine whether the data given to it (based on past knowledge) is true or false is important, so the AI can detect whether the information being given to it is true or false...

This problem of rubbish in, rubbish out needs to be solved.

I think that an AI should have the paradigms for self-learning and self-checking as well as reinforcement learning, gaining some positive points and negative points based on the user it interacts with.
We humans can take a personal stance towards new knowledge presented by a user: if that user has previously "lied", then the AI should take that user's information with a pinch of salt, saving it as unconfirmed knowledge, and when presented with more cases confirm or deny that knowledge based on the probability of it being correct. It should also be able to be given loads of problems which it could solve by creating neural nets for certain formal problems and calculating the possible answer.
As supervised learning goes, formal models can be created and given to the AI in program form (plugins), with internal programming for unsupervised (guided) learning.

I am close to natural language deconstruction in English, and can gain a massive amount of syntactical knowledge from a piece of text and respond to it. Yet if a problem is presented which is more than "all men are mortal, John is a man, therefore John must be mortal", like: given that dinner costs 1.50 for eggs, bacon and toast on Monday, and scrambled eggs and toast cost 1.35 on Tuesday, how much are scrambled eggs, bacon and toast on Wednesday? It can't figure it out. Yet given a neural network with a single linear neuron, the prices of the objects would be the weights of the inputs for the Monday and Tuesday dinners (obviously more training cases would be needed), starting with random prices (weights for the input meal items)...
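(Worked by hand, assuming the scrambled eggs cost the same as the eggs: (eggs + bacon + toast) minus (eggs + toast) = 1.50 - 1.35 = 0.15, so bacon is 0.15, and Wednesday's scrambled eggs, bacon and toast would be 1.35 + 0.15 = 1.50. That is the answer the network's weights should converge towards.)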
Now I can see the importance for conversational problems (predicting the answer)...
I'm not a champion of neural networks and know they have loads more uses, but after long thought I realise they are very useful for these predictive capabilities... (not just for a robot learning to walk, or a car parking itself)...

this is why a basic neural network is all i need and it can be developed over time....
Given conflicting ideas (programming), the choice of which to follow would come from presenting both problems to the network with their goals as the output, and then it could decide which program would have priority... i.e. "don't arrest any OCP worker" vs "protect and serve" (RoboCop). How did he overcome his programming? Some kind of neural net solved it... he electrocuted himself to overcome the glitchy programming... it's sad to see that the robots deployed with Asimov's laws could not defend themselves against abuse from inhumane people... yet those without could, and still did not choose to destroy all humans...

rottjung

  • Roomba
  • 17
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #3 on: August 06, 2015, 01:52:16 pm »
Depending on what your ANN has to do, there are different ways of training it.
See it as a 3D graph:
you have inputs x and y, which should result in z being 1 if trained that way.
Now, one tricky part is that input scaling is often required.
For example, I have a NN that recognises notes in frequencies. The human hearing range is 20-20000 Hz, and I want a sigmoid activation function. My NN will always output 1, because the input values are higher than the active range of the logistic sigmoid. So before inputting the frequency values into the net, I scale them to between -5 and 5 (which is more or less the sigmoid's active range).
Therefore, to train the net, I also scale my desired output to this range and then run it through the logistic sigmoid as well.
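Roughly like this (just a sketch, my own names):

    ' Map a frequency in the hearing range (20-20000 Hz)
    ' linearly onto the sigmoid's active range (roughly -5 to +5).
    Function ScaleFreq(freq As Double) As Double
        Return -5 + 10 * (freq - 20) / (20000 - 20)
    End Function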
The best way to get your head around NNs is to try making a simple XOR and graph out the results...
like this (not XOR, but the frequency one I did, which is more of an AND than a XOR):
https://dl.dropboxusercontent.com/u/32556668/XOR.html
(press "new net" and then "train"; the feed is just random values to check after training)
The matter of how many hidden layers or nodes to use is best found out by testing.
What is important is to know how many inputs and outputs...
« Last Edit: August 06, 2015, 07:28:37 pm by rottjung »

8pla.net

  • Trusty Member
  • Eve
  • 1302
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #4 on: August 06, 2015, 06:59:15 pm »
In my opinion, no worries about that.  I think that may be expected with ANNs.  My friendly advice would be to add a control to your ANN experiment.  As a control, I suggest a simple ANN prototype that runs in parallel to your current ANN research.

My Very Enormous Monster Just Stopped Using Nine

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #5 on: August 06, 2015, 07:22:41 pm »
Hi Spydaz

Using the common ANN to parse natural language... you're probably making a rod for your own back. They're not really designed for handling such specific data, given the sheer range of words in any/our language... hope I'm wrong (as usual) but good luck.

Common ANNs are basically universal approximators... but this might help...

http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html

Now... when you've had enough of common ANNs... look into spiking ANNs  :D
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

rottjung

  • Roomba
  • 17
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #6 on: August 06, 2015, 08:02:02 pm »
hey Spydaz,

Since the input of a neuron is the activation-function result of the summation of the outputs*weights of the neurons in the previous layer, plus a bias, the output of your output-layer nodes is its input, and thus normally an activation-function result.
For the learning method, I'm sure you've read by now that backpropagation is the most commonly used (or improvements thereof).
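In code, that forward step for one neuron is roughly this (a sketch, with a sigmoid as the example activation):

    ' Weighted sum of the previous layer's outputs, plus bias,
    ' squashed by the activation function.
    Function NeuronOutput(prevOutputs() As Double, weights() As Double, bias As Double) As Double
        Dim total As Double = bias
        For i As Integer = 0 To prevOutputs.Length - 1
            total += prevOutputs(i) * weights(i)
        Next
        Return 1 / (1 + Math.Exp(-total)) ' sigmoid activation
    End Function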
It is good practice to draw out your ANN architecture from the code you wrote, to see if you made any errors in the order of calculations and such.
Mine looks like this:
[attached diagram]

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #7 on: August 07, 2015, 02:00:27 am »
I shall have to do a similar diagram...
I think I'm very close... to yours...

As far as I understand it so far, the forward propagation uses the input weights as the adjuster and the back propagation uses the hidden weights as the adjuster... then re-forward-feed the network?

I'm still stuck on the delta rule...

« Last Edit: August 07, 2015, 03:15:04 pm by spydaz »

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #8 on: August 08, 2015, 11:35:33 am »
OMG, it's been donks since I've had to think about this stuff...

The delta rule, or back-propagation rule, is used during training only... it goes something like this...

delta weight adjustment = learning rate * original activation * difference between expected and actual output

It only works on non-hidden layers, though... the rule is basically the amount of adjustment to be applied to each weighted connection into the neuron. It slowly drives the weights towards an ideal weight to produce the desired output. The overall function can be described as a paraboloid in n-space where the minimum is the ideal weight vector.

It's used on non-hidden layers because it's very efficient. No such rule exists for hidden layers, though, because their error surface is rarely a paraboloid.
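In code it would be something like this (a quick sketch, my own names):

    ' Delta rule for one output neuron (training only): each incoming
    ' weight moves by learning rate * its input activation * the error.
    Sub DeltaRuleUpdate(weights() As Double, activations() As Double, expected As Double, actual As Double, learningRate As Double)
        Dim localError As Double = expected - actual
        For i As Integer = 0 To weights.Length - 1
            weights(i) += learningRate * activations(i) * localError
        Next
    End Sub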
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #9 on: August 08, 2015, 02:04:27 pm »
"lol"

Not easy maths....

.

 
delta weight adjustment = learning rate * Original activation * difference between expected and actual output


@korrelan thanks very much.


I think this one is used for the forward propagation rules (only on the hidden layer?). I used it on the input layer (silly me); I'm wondering if it makes a difference...
I will experiment...

I've done the forward ANN the same as that formula... and the backward pass with the derivatives... I only have derivative formulae for the sigmoid and the hyperbolic tangent; when drawn they look really similar in shape, so I'm sure even these activation functions are much the same...
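For reference, these are the derivative formulae I'm using (the standard forms, written in terms of the function's own output):

    ' sigmoid'(x) = s * (1 - s)  where s = sigmoid(x)
    ' tanh'(x)    = 1 - t * t    where t = tanh(x)
    Function SigmoidDerivative(x As Double) As Double
        Dim s As Double = 1 / (1 + Math.Exp(-x))
        Return s * (1 - s)
    End Function
    Function TanhDerivative(x As Double) As Double
        Dim t As Double = Math.Tanh(x)
        Return 1 - t * t
    End Function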

I'm going to try to figure out the Gaussian derivative... among the others... although some do not have derivatives.

My simple test is just the times table... inputs 1,2,3,4,5,6,7,8,9,10, outputs 2,4,6,8,10,12,14,16,18,20,
then see if it can do the three times table...
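In code the test data is just this (a sketch; with a sigmoid output the targets would need scaling into its active range first, as rottjung said above):

    Dim inputs() As Double = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    Dim targets() As Double = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}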

I cracked it today: calculating the deltas for each layer, then calculating the errors for each layer, and adjusting the weights relative to their respective errors for each node in each layer. (Real tricky.)


         Private Function ForwardLearning(ByRef NN As NeuralNetwork) As NeuralNetwork
                'Apply the learning rule to each layer:
                'Weight = Weight + LocalError * LearningRate * Input

                NN.Params.BiasedTerm = NN.Params.BiasedTerm + NN.Params.LearningRate * localError
                For Each Node As Neuron In NN.InputLayer.Nodes
                    Node.weight = Node.weight + localError * NN.Params.LearningRate * Node.input
                Next
                For Each Node As Neuron In NN.HiddenLayer.Nodes
                    Node.weight = Node.weight + localError * NN.Params.LearningRate * Node.input
                Next
                Return NN
            End Function
            Private Function BackwardLearning(ByRef NN As NeuralNetwork) As NeuralNetwork
                Dim deltaOutput As Double = 0.0
                Dim DeltaInput As Double = 0.0
                Dim deltaHidden As Double = 0.0
                Dim deltaOutputError As Double = 0.0
                Dim deltaHiddenError As Double = 0.0

                For Each Node As Neuron In NN.OutputLayer.Nodes
                    'Learning rule:
                    'Di = LocalError * SigmoidDerivative(Node.input)
                    deltaOutput = Node.NeuronError * TransferFunctions.EvaluateTransferFunctionDerivative(Node.input, NN.Params.FunctionType)
                    'NewWeight = Weight + LearningRate * Node.output * Di
                    Node.weight = Node.weight + NN.Params.LearningRate * Node.output * deltaOutput
                    deltaOutputError = deltaOutputError + deltaOutput
                Next

                For Each Node As Neuron In NN.HiddenLayer.Nodes
                    'Dj = SigmoidDerivative(Node.input) * sum of output-layer weights * deltaOutputError
                    deltaHidden = TransferFunctions.EvaluateTransferFunctionDerivative(Node.input, NN.Params.FunctionType) * SumWeights(NN.OutputLayer) * deltaOutputError
                    Node.weight = Node.weight + NN.Params.LearningRate * Node.output * deltaHidden
                    deltaHiddenError = deltaHiddenError + deltaHidden
                Next

                For Each node As Neuron In NN.InputLayer.Nodes
                    DeltaInput = TransferFunctions.EvaluateTransferFunctionDerivative(node.input, NN.Params.FunctionType) * SumWeights(NN.HiddenLayer) * deltaHiddenError
                    node.weight = node.weight + NN.Params.LearningRate * node.output * DeltaInput
                Next
                Return NN
            End Function

thanks guys..... I'm so close it's biting me in the "Ann"
« Last Edit: August 08, 2015, 08:03:22 pm by spydaz »

spydaz

  • Trusty Member
  • Starship Trooper
  • 322
  • Developing Conversational AI (Natural Language/ML)
    • Spydaz_Web
Re: woah I'm stuck (totally baffled about neural nets)
« Reply #10 on: August 11, 2015, 11:57:48 pm »
I have found that text can be entered into a neural network... @korrelan
Using a recursive neural network, text sequences can be predicted, as well as word classification...
A word list needs to be mapped to an index, then a word matrix needs to be created, normalising the words into a matrix of vectors; the vectors can then be entered into the network to predict new vectors...
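Something like this for the word-to-vector step (a sketch; the names are mine):

    ' Map each word to an index, then to a one-hot vector the net can take.
    Function BuildIndex(words() As String) As Dictionary(Of String, Integer)
        Dim index As New Dictionary(Of String, Integer)
        For i As Integer = 0 To words.Length - 1
            index(words(i)) = i
        Next
        Return index
    End Function

    Function OneHot(word As String, index As Dictionary(Of String, Integer)) As Double()
        Dim v(index.Count - 1) As Double ' VB zero-initialises the array
        v(index(word)) = 1.0
        Return v
    End Function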

Google Brain uses this type of neural network...

I'm not sure if it's a new breakthrough. It uses a deep-learning matrix with a softmax as the final layer in the network and a sigmoid or log function for the hidden layer...
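The softmax itself is simple enough (standard form, sketched):

    ' Softmax: turns the final layer's raw outputs into probabilities
    ' that sum to 1 (subtracting the max first for numerical safety).
    Function Softmax(scores() As Double) As Double()
        Dim maxScore As Double = scores(0)
        For i As Integer = 1 To scores.Length - 1
            If scores(i) > maxScore Then maxScore = scores(i)
        Next
        Dim result(scores.Length - 1) As Double
        Dim total As Double = 0
        For i As Integer = 0 To scores.Length - 1
            result(i) = Math.Exp(scores(i) - maxScore)
            total += result(i)
        Next
        For i As Integer = 0 To scores.Length - 1
            result(i) /= total
        Next
        Return result
    End Function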

So now the potential of neural networks has become usable for natural language processing.

There are lots of videos on YouTube... (Hugo Larochelle)
https://youtu.be/FoDz01QNSiY
« Last Edit: August 12, 2015, 11:02:03 am by spydaz »

 

