This is amazing.
The reason my AI needed a Language Neural Network pattern recognizer isn't just so it can recognize phonemes and words (hashed or not), but also because mine can't sum up what you said, i.e. the sentence/topic, for either input OR output language! (or skills!!)
The Language Neural Network, aka the HHMM hierarchy tree, has phonemes so words can be recognized, words so sentences can be recognized, and sentences so topics can be recognized, and they are made of what WE say - no wonder it has pattern recognizers for ex. "biotechnology" or "hello there", plus what we output. We don't say "jar box can ox open lights" NOR "fjiagoolut op tala". We have a common language.
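Here is a rough sketch of what that hierarchy could look like in code - a toy example only, assuming each level is a pattern recognizer that fires when enough of its children fire (the class name PatternNode, the threshold idea, and the phoneme labels are all made-up illustrations, not the actual network):

```python
class PatternNode:
    """One pattern recognizer in the hierarchy: phoneme, word, sentence, or topic."""
    def __init__(self, label, children=None, threshold=1.0):
        self.label = label
        self.children = children or []   # lower-level patterns (e.g. phonemes for a word)
        self.threshold = threshold       # fraction of children that must be recognized

    def recognize(self, observed):
        """True if this pattern is present in the observed set of leaf labels."""
        if not self.children:            # leaf node: a single phoneme
            return self.label in observed
        hits = sum(child.recognize(observed) for child in self.children)
        return hits / len(self.children) >= self.threshold


# A tiny slice of the tree: phonemes make up words, words make up a sentence.
hh, eh, l, ow, dh, ehr = (PatternNode(p) for p in ["HH", "EH", "L", "OW", "DH", "EHR"])
hello = PatternNode("hello", [hh, eh, l, ow], threshold=0.75)  # tolerates one missing phoneme
there = PatternNode("there", [dh, ehr])
greeting = PatternNode("hello there", [hello, there])

print(greeting.recognize({"HH", "EH", "L", "OW", "DH", "EHR"}))  # True
print(greeting.recognize({"HH", "EH", "L"}))                     # False
```

The same shape keeps going up: a "topic" node would just be a PatternNode whose children are sentence nodes.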
The reason we all have different words/sentences/topics is that, guided by rewards, we swap words for other words and say "Go to money" instead of "Go to school". Every time we think we create brand-new sentences/words. How? Supervision by rewards. When we internally hear ourselves mentally, we see pictures and hear sentences, and these select a single image or the next new sentence we see/hear (word by word). It appears then that when sentences form new ones that are no good, the image they match OR the sentence in the network is a negative image/network sum-up. After all, the network sum-up lets you output sum-ups to talk/act, while any of the individual words might still match positive images, i.e. AI-stuff - so it must be that negative sum-ups are what flag the bad sentence.
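To make that concrete, here is a toy sketch of "supervision by rewards": the word-by-word selection favors higher-reward words, and a finished sentence is only kept if its network sum-up comes out positive rather than negative. All the reward numbers and names below are made up for illustration, not learned values:

```python
import random

# Made-up +/- rewards for individual words; a real network would learn these.
reward = {
    "go": 0.1, "to": 0.1, "school": -0.5, "money": 0.8,
    "jar": -0.9, "box": -0.9,   # word salad gets negative rewards
}

def next_word(candidates):
    """Pick the next word, favoring candidates with higher reward."""
    weights = [max(reward.get(w, 0.0) + 1.0, 0.01) for w in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def sum_up(words):
    """The 'network sum-up' of a whole sentence: the total reward of its words."""
    return sum(reward.get(w, 0.0) for w in words)

sentence = ["go", "to", next_word(["school", "money", "jar"])]
if sum_up(sentence) > 0:
    print("keep:", " ".join(sentence))                  # e.g. "go to money"
else:
    print("discard (negative sum-up):", " ".join(sentence))
```

The sum-up check is the point: words like "go" or "to" can match positive things on their own, but the sentence as a whole can still sum up negative and get discarded.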
Network hierarchy sums.
Network is stereotyped language.
Network creates new language by +/- rewards and knows when it is wrong only by itself.
"Hello me", "really, well ok"
"why I write standardized words/sentences", "cus I can ask mom for food/etc"
"this sums it up good, I never had these sentences before today, new findings"
As we write sentences, we also stare a lot and sculpt them instead of writing them instantly.
The reason I came across this is that I was trying to figure out whether we layer sounds for music or line them up. Layering would be realistic (like music production programs) and therefore better. (edit: I wrote "before" instead of "better"... holy)
The music it creates is what we HEAR on Earth PLUS what we love!! That's why I write English words, ex. "hello, nuggets", and that's why I write *what* I write, ex. "AI is important"!!
So when we hear music, we may be able to separate the sounds plus re-construct them (since they won't be complete), so we CAN get them...
Now the network has to work on many streams at once! It could layer 88 sounds to render the piece (not 1 phoneme - the *phonemes* will make up the rendering of 88 different instruments/sounds).
If it can line them up, it can do the same for choosing which stacks to line up, and as for the stacks themselves, if it can use 1 sound/phoneme, why not many of the ones you were listening to, enjoyed, and that got separated and re-built?
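A quick sketch of the difference between layering and lining up, assuming each instrument/sound is just a sine wave (the 88 piano-key frequencies and the NumPy mixing are my own stand-ins for illustration, not how the brain or the AI actually renders it):

```python
import numpy as np

RATE = 44100            # samples per second
DURATION = 1.0          # seconds per note
t = np.linspace(0.0, DURATION, int(RATE * DURATION), endpoint=False)

def tone(freq_hz, amplitude=1.0):
    """One 'instrument/sound' rendered as a plain sine wave."""
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

# Up to 88 stacked sounds (one per piano key, standing in for 88 instruments).
freqs = [440.0 * 2 ** ((n - 49) / 12) for n in range(1, 89)]
tracks = [tone(f, amplitude=1.0 / 88) for f in freqs]

layered  = np.sum(tracks, axis=0)     # all sounds at once: a chord / full mix
lined_up = np.concatenate(tracks)     # one sound after another: a melody line

print(layered.shape, lined_up.shape)  # (44100,) vs (3880800,)
```

Layering sums everything sample-by-sample into one mix, like a music production program bouncing 88 stems down to a single track; lining up just plays them one after another, which is why layering looks like the realistic option.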
That's it. We might not even have that ability.