Simulating neural networks with symbolic approach

ivan.moony
Simulating neural networks with symbolic approach
« on: September 29, 2023, 10:17:38 am »
I'm exploring new possibilities, and it's as exciting as ever. It seems that sequent-like rules, such as those used in Prolog and similar languages, can simulate neurons. For example, if we have the rules:

a1 /\ a2 /\ a3 -> b
a4 /\ a5 /\ a6 -> b

it is like we have:

(a1 /\ a2 /\ a3) \/ (a4 /\ a5 /\ a6) -> b

where the `a`s are inputs and `b` is the output. Chaining rules like these can form entire neural networks, analogous to regular ANNs, from some input to some output. An ANN usually fires a parent neuron when the sum over its child neurons exceeds a threshold. With a symbolic neural network, we instead deal with conjunctions (the minimal logical state of a set) and disjunctions (the maximal logical state of a set), and I believe this may make NNs more compact, which would mean better results with less training, not to mention possible 100% (or controllably less) confidence in the result.
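A minimal sketch of this idea in Python (the rule encoding, the `forward` function, and the example facts are my own illustration, not from the post): each rule is a set of antecedents implying one consequent, and two rules sharing a consequent behave exactly as the disjunction of conjunctions above. Forward chaining to a fixed point plays the role of a feed-forward pass.

```python
# Each rule: (set of antecedents, consequent). Rules sharing a
# consequent form a disjunction of conjunctions for that output.
rules = [
    ({"a1", "a2", "a3"}, "b"),
    ({"a4", "a5", "a6"}, "b"),
    ({"b", "a7"}, "c"),  # chaining: b feeds a deeper "layer"
]

def forward(facts, rules):
    """Fire rules until a fixed point, Prolog-style forward chaining."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(forward({"a4", "a5", "a6", "a7"}, rules))
```

Running it with the facts a4, a5, a6, a7 derives b from the second rule and then c from the chained rule, the symbolic analogue of activations propagating through layers.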

This would shift the endless training of an ANN to a process of program synthesis (possibly with an adjustable factor of program correctness to support fuzzy values). Since brute-force program synthesis would take time on the order of the factorial of the input length, some genetic construction of rule sets would be more than welcome. For more complicated rules, just as when discovering new formulas in a natural way, something like conscious rule construction would probably speed up program synthesis further.
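To get a rough sense of the blow-up (the encoding below is my own toy illustration, not a claim about any particular synthesizer): even counting only which conjunctions of input symbols may appear as rule bodies, the candidate space explodes, and the space of rule *sets* is exponential in that again.

```python
from itertools import combinations

# For n input symbols there are 2^n - 1 non-empty candidate
# conjunctions (rule bodies), and any subset of those bodies is a
# possible rule set for a single output symbol.
symbols = ["a1", "a2", "a3", "a4"]
bodies = [c for r in range(1, len(symbols) + 1)
          for c in combinations(symbols, r)]
print(len(bodies))       # 15 candidate conjunctions (2^4 - 1)
print(2 ** len(bodies))  # 32768 possible rule sets for one output
```

Even at four symbols there are tens of thousands of rule sets per output, which is why guided (e.g. genetic) construction beats exhaustive enumeration almost immediately.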

MagnusWootton
Re: Simulating neural networks with symbolic approach
« Reply #1 on: September 29, 2023, 10:56:48 am »
Getting brute force down to factorial cost is really hard to do; that is super code if you can.

Brute force has to be 2^problemspace; anything less than that is really good and gets AI going, though.

So if you can beat that, you should be well on your way to success already!!! (Factorial is good enough; it's a quantum speed-up already.)

.... unless I'm misunderstanding things....

Is factorial cost the same as Fibonacci cost? Or is it not? (Fibonacci is 1, 1, 2, 3, 5, 8, 13, 21... the rabbit population growing), which is easily under 2^rabbits.

But Fibonacci cost is half the quadratic cost.

But if factorial cost isn't Fibonacci cost, I'm mistaken.... But if you got a quantum computer down to Fibonacci cost, then that is quantum supremacy already.

<EDIT>
I think factorial cost is worse than brute force, hehe. If you got a brute force down to factorial cost, it actually means you stuffed it up completely.

ivan.moony
Re: Simulating neural networks with symbolic approach
« Reply #2 on: September 29, 2023, 11:42:46 am »
Quote from MagnusWootton: "getting brute force to factorial cost is really hard to do, that is super code if you can."

Actually, it's not good. For an input of just 20 symbols, 20! is already an enormous number.
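To put concrete numbers on it, a quick check with Python's standard library shows how far apart the two growth rates already are at n = 20:

```python
import math

# Search-space sizes at n = 20 symbols: exponential vs. factorial.
n = 20
print(2 ** n)             # 1048576
print(math.factorial(n))  # 2432902008176640000, about 2.4 * 10^18
```

So factorial cost overtakes plain 2^n brute force by twelve orders of magnitude at only twenty symbols, which is why it is the wrong direction entirely.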

MagnusWootton
Re: Simulating neural networks with symbolic approach
« Reply #3 on: September 29, 2023, 03:29:08 pm »
Yeah, I confused factorials with Fibonacci. My mistake.

Factorial is worse than 2^p, so I'm not sure what use it is to me, since I don't want my quantum computer that fast anyway.



If you want to put logic into a neural network, it'll work; it's just a matter of simplifying it into it.

What I do is try to superimpose a lot of pictures on top of each other on the synapses, into one giant single mega brain cell, and I have to slowly ease them all in over time, so it never reaches the full power the weights could represent. It would be huge, but I can only imagine.

But you can actually simplify anything together; even data is the same as logic. If you're taking a whole lot of pictures and giving each a value, it's become a logic gate: data -> output.

So I guess I could try logically simplifying it to the output, instead of doing the training the way I'm currently doing it.

Thanks for the idea.

Brute force looks like the way you could make AI come true, even to human standards, if only you could test all possibilities.

ivan.moony
Re: Simulating neural networks with symbolic approach
« Reply #4 on: September 29, 2023, 04:04:06 pm »
Quote from MagnusWootton: "Brute force looks like the way that you can make ai come true, even to human standards, if only u could test all possibilities."

I believe brute force could work for some basic activities up to a small number of symbols, but anything more complex would need some guidance, in the sense of logically choosing what to combine and how. Like a logic within a logic that searches for more complex truths that hold, and for ideas worth trying.

Divide, then conquer.

MagnusWootton
Re: Simulating neural networks with symbolic approach
« Reply #5 on: September 29, 2023, 04:12:43 pm »
Yeah, you're right: there's only a low number of symbols you can check all combinations of.
That's how the latest generative networks work, and motor generation: with a low number of symbols.

To do a million symbols you'd need a million qubits (an actual quantum computer). But if you had one, it would create a computer as strong as a person. It would be somewhat alien and misguided, though, in a way where I think we have things in measure to keep us from becoming too evil. (But that doesn't always seem to be working with the world around us.)

The computer would be even worse, I think, given that much power.

If you're not planning on cheating by just trying every combination of the symbols, you have to break the problem down and actually solve it directly
(but you can get aid from brute forces under ~20 binary symbols).

I'm sure GPT-4 was solved in that manner, solving it directly without cheating, as opposed to just brute-forcing it. Brute-forcing is cheating: you get a solution you don't even fully understand.

But if it ends up possible to brute-force it, lots of people would do it.

Because you've realistically only got a low number of symbols to brute-force, to get any success out of it you have to plan for a low number of symbols, given what your input and output make-up is.

So it can help, when you're doing it, to do these little brute forces in the AI; being able to brute-force the whole thing is just a scientific fantasy.


Backprop can be done similarly to brute-forcing, in that you're going for an approximate fit of your dataset into the perceptron. What you get isn't ideal, but that's how I do it: you just keep tweaking the weights until they all settle inside it. It takes a while like that; there are ways to improve it.

But if you brute-forced it with a quantum computer, it would fit insane amounts into the neurons, because it would actually end up procedurally generating the data output with amazing little programs contained in the network, and then you would get the closest thing possible to ideal compression.

There is a crazy idea of a "quantum capacitor" (it's completely silly calling it that) where you would store Kolmogorov-complexity-like amounts of data in the lowest number of bits. But a quantum computer is the same thing as this "quantum capacitor"; you can use it as one.

 

