Backpropagation VS me

Backpropagation VS me
« on: November 10, 2020, 08:56:41 pm »
So I'm still writing my AGI guide so we can finally create human-level AI soon. And I know I'm up against backprop, which is used by 98% of the field. Even Hinton has said it is unnatural. It's not a case of "the simplest ideas are the best" either: it's all explained in complex math and no one can explain it to me. Backprop requires the network to already be in hierarchy form, narrow in the middle, and you can't add new neurons unless you start over with a bigger network. And backprop isn't a one-trick pony on its own either; you need to add RNNs, etc. on top of it.

The simplest rule/pattern is If-Then, syntactics: you recognize A and predict B, to transform the data/particles and create the future.
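To make the If-Then idea concrete, here is a minimal sketch of such a predictor as a learned lookup table. The names (`fit`, `predict`) and the example pairs are purely illustrative, not from any library:

```python
# An If-Then predictor: count which B follows which A, then look it up.
from collections import defaultdict, Counter

def fit(pairs):
    """Count how often each 'Then' (B) follows each 'If' (A)."""
    table = defaultdict(Counter)
    for a, b in pairs:
        table[a][b] += 1
    return table

def predict(table, a):
    """Recognize A and predict the most frequent B seen after it."""
    if a not in table:
        return None  # an exact-match rule has no answer for unseen input
    return table[a].most_common(1)[0][0]

pairs = [("cloudy", "rain"), ("cloudy", "rain"), ("cloudy", "sun"),
         ("sunny", "sun")]
rules = fit(pairs)
print(predict(rules, "cloudy"))  # rain
print(predict(rules, "fog"))     # None: no rule fires on unseen input
```

Note the weakness of the pure lookup table: it returns nothing for an input it has never seen, which is exactly the generalization gap the later questions about "unseen input Z" are poking at.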

Just what patterns is backprop learning? Why is backprop preferred over Markov-model approaches (and I don't just mean for efficiency, I mean for actual higher intelligence)? Everything in AI is based on A>B. Semantics, KNN/K-Means, recency: it's all pattern matching.

They keep talking about backprop's chain rule in an FFN or RNN building functions out of smaller functions, but I'm beginning to think this is only useful for things like financial stats now: it's just a calculator and doesn't add anything to intelligence beyond Markov chains. Humans don't come up with a solution to 2, 13, 88, 900, ?, ?, ? unless they start trying possible algorithms in their head to find the pattern. Backprop is finding lines/curves in house prices based on room location and other data, but that is one task, not "complex math backprop". They make it sound like a mess, and AI is not a mess.
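For what it's worth, the "chain rule" talk boils down to something small. A network output is a composition like f(g(x)), and the chain rule gives its slope exactly by multiplying local derivatives. A minimal sketch, assuming the composition y = sin(x)^2, checked against a brute-force finite-difference slope:

```python
import math

# Composition y = f(g(x)) with f(u) = u^2 and g(x) = sin(x).
# Chain rule: dy/dx = f'(g(x)) * g'(x) = 2*sin(x) * cos(x).
def y(x):
    return math.sin(x) ** 2

def dy_dx(x):
    return 2 * math.sin(x) * math.cos(x)  # exact, via the chain rule

x = 1.3
eps = 1e-6
numeric = (y(x + eps) - y(x - eps)) / (2 * eps)  # brute-force slope estimate
print(abs(dy_dx(x) - numeric) < 1e-6)  # the two agree: no guessing involved
```

That is all the chain rule contributes: an exact slope for a nested function, computed without trial and error.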

In short, backprop adjusts the weights from the end of the net back to the start, but does it know how to change them, or does it try a few times during a batch? And if it does know how to change them, what is it merging? There has to be some way it recognizes an unseen new input Z and says: oh, this is A, and I know B comes after A.
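On the "does it know or does it try a few times" question: it knows. For each weight, backprop computes the exact slope of the loss and steps downhill; it never tries random changes and keeps the best. A toy sketch with one weight (all numbers here are made up for illustration):

```python
# One-weight "network" y = w * x, loss L = (y - t)^2.
# The gradient dL/dw = 2*(w*x - t)*x is computed in closed form,
# so each update moves the weight in a known direction.
w, x, t = 0.5, 2.0, 3.0   # initial weight, input, target (toy values)
lr = 0.1                  # learning rate, an assumed hyperparameter

for _ in range(50):
    y = w * x
    grad = 2 * (y - t) * x  # chain rule: dL/dy * dy/dw
    w -= lr * grad          # step downhill along the computed slope

print(round(w * x, 3))      # output has converged to the target 3.0
```

The "recognizes unseen input Z" part is the other half: because the learned function is smooth, an input near A gets an output near B even if Z itself was never in the training data, which a pure lookup table cannot do.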

If we take a sentence, ex. "I was walking down the ?_?", how does backprop know the answer probabilities for this or other prompts in text (or vision)? It has to look at various views of it, ex. with words missing, or translate them. What matches can backprop find!? I don't want to know how backprop works; I need to know what backprop learns in its encrypted mess. What types of patterns (rules/functions) does it merge/find?
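For comparison, here is how the Markov-style answer to that fill-in-the-blank looks: bigram counts over a corpus give next-word probabilities directly. The tiny corpus below is invented for the example:

```python
# Bigram counts answer "I was walking down the ___" by simple frequency.
from collections import defaultdict, Counter

corpus = ("i was walking down the street . "
          "i was walking down the road . "
          "she ran down the street .").split()

bigram = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigram[a][b] += 1  # count each observed A>B pair

counts = bigram["the"]               # everything ever seen after "the"
total = sum(counts.values())
probs = {w: c / total for w, c in counts.items()}
print(probs)  # {'street': 0.666..., 'road': 0.333...}
```

A neural net trained with backprop ends up with a table like this too, except smeared across shared weights, which is why nearby contexts it never saw still get sensible probabilities, and also why the learned "rules" are hard to read back out.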