Ai Dreams Forum
Member's Experiments & Projects => General Project Discussion => Topic started by: LOCKSUIT on March 12, 2017, 04:00:26 AM
This is my project thread.
The reason no one is more determined to create AI than I am is that only I collect information from everywhere and build a precise hierarchy 24/7. After initialization, it took me only 1 year to discover the field of AI, which is actually well developed. And I instantly noticed it. I instantly noticed the core of AI on my first read. That's how fast my Hierarchy self-corrects me. Now it's been 1.5 years since, and I am here to tell you that I have empirical knowledge that I have the core of AI, and ASI! 100% guaranteed!
All of my posts on the forum are in separate threads, mine and yours, but this thread is going to try to hold my next posts together so you can quickly and easily find, follow, and understand all of my work. Anything important I've said elsewhere is saved on my desktop, so you will hear about it again here. You don't currently have access to my desktop, only my website, which stands in for it, while this thread is an extension of it. But this thread won't be permanently tied to my desktop/website, since anything new in this thread will be copied to my desktop/website. Currently my website (and this extension thread) is awaiting my recent work, which I really shouldn't show you all of.
- Immortal Discoveries
The information and discoveries I'm getting recently are overwhelming! I don't know what will happen! I keep making discoveries and findings!
Now I realized how our minds create music that we like! If you remember the Language Neural Network I drew for you: just as it recognizes letters, words, sentences, and topics and outputs them in reverse (using a duplicate tree for fastest recognition), it also recognizes music and outputs music as internal input.
Just realized, without reading this anywhere, that we can hear *quiet* whistling through low-activation-level but time-dependent frequency vibrations, like hit-hit-hit at one rate for deep sounds and another for high pitches, and separately the volume level. On top of that we have many sound receptors.
Yet more extendable capabilities for AI and their other sensory types.
This is amazing.
The reason my AI needed a Language Neural Network pattern recognizer isn't just so it can recognize phonemes and words (hashed or not), but also because without it, mine can't sum up what you said, i.e. the sentence/topic, for input NOR output language (or skills!!).
The Language Neural Network, aka the HHMM hierarchy tree, has phonemes so words can be recognized, words for sentences, and sentences for topics, and they are what WE say. No wonder it has pattern recognizers for e.g. "biotechnology" or "hello there", plus what we output. We don't say "jar box can ox open lights" NOR "fjiagoolut op tala". We have a common language.
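The layered recognition described above can be sketched in a few lines. This is only a toy sketch, assuming simple exact-match lookup tables at each level; the vocabulary and the labels ("greeting", "small talk") are invented for illustration, not part of the actual network.

```python
# Minimal sketch of a layered pattern hierarchy: each level recognizes
# sequences of units from the level below. All vocab here is illustrative.

WORDS = {("h", "e", "l", "l", "o"): "hello",
         ("t", "h", "e", "r", "e"): "there"}
SENTENCES = {("hello", "there"): "greeting"}
TOPICS = {("greeting",): "small talk"}

def recognize(units, table):
    """Return the higher-level label if the unit sequence is known."""
    return table.get(tuple(units))

phonemes = [["h", "e", "l", "l", "o"], ["t", "h", "e", "r", "e"]]
words = [recognize(p, WORDS) for p in phonemes]   # phonemes -> words
sentence = recognize(words, SENTENCES)            # words -> sentence
topic = recognize([sentence], TOPICS)             # sentence -> topic
print(words, sentence, topic)
```

A real hierarchy would recognize partial and noisy sequences rather than exact tuples, but the level-on-level structure is the same.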
The reason we all have different words/sentences/topics is that, driven by rewards, we switch words for other words, so as to say "Go to money" instead of "Go to school". Every time we think, we create brand-new sentences/words. How? Supervision by rewards. When we hear ourselves internally, we see pictures and hear sentences, and these select the single image or new sentence we see/hear (word by word) next. It appears, then, that when sentences form new ones that are no good, the image they match OR the sentence in the network is a negative image/network sum-up. After all, the network sum-up lets you output sum-ups in order to talk/act, while any of the individual words might all match positive images, i.e. AI-stuff. It must be that negative sum-ups are the case.
Network hierarchy sums.
Network is stereotyped language.
Network creates new language by +/- rewards and knows when in the wrong only by itself.
"Hello me", "really, well ok"
"why I write standardized words/sentences", "cus I can ask mom for food/etc"
"this sums it up good, I never had these sentences before today, new findings"
As we write sentences, we also stare a lot and sculpt them, rather than writing them instantly.
The reason I came across this is that I was trying to figure out whether we layer sounds for music or line them up. Layering would be realistic (like music production programs) and therefore better. (edit: I wrote "before" instead of "better". . . holy)
The music it creates is what we HEAR on Earth PLUS what we love!!! That's why I write English words, e.g. "hello, nuggets", and that's why I write *what* I write: "AI is important"!!!
So when we hear music, we may be able to separate the sounds and then re-construct them, since they won't be complete, so we CAN get them. . .
Now the network has to work multi-time! It could layer 88 sounds to render the piece (not just 1 phoneme; the phonemes will make up the rendering of 88 different instruments/sounds).
If it can line them up, it can do the same for which stacks to line up, and, as for the stacks themselves, if it can use one sound/phoneme, why not many that you were listening to, enjoyed, and that got separated and re-built?
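"Layering" in the music-production sense means mixing tracks by summing their samples at each instant, while "lining up" means concatenating them in time. A minimal sketch of the layering case, with an illustrative sample rate and note frequencies:

```python
import math

# Sketch of layering sounds the way a music production program mixes
# stems: tracks are summed sample-by-sample. Rate/frequencies are
# illustrative values, not anything specified in the original.

RATE = 8000  # samples per second

def tone(freq_hz, seconds):
    """A pure sine tone as a list of samples."""
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def layer(*tracks):
    """Mix tracks by per-sample summation (pad shorter tracks with 0)."""
    length = max(len(t) for t in tracks)
    return [sum(t[i] if i < len(t) else 0.0 for t in tracks)
            for i in range(length)]

mix = layer(tone(440, 0.01), tone(660, 0.01))  # two notes sounding at once
print(len(mix))
```

Lining up would instead be a plain concatenation, `tone(440, 0.01) + tone(660, 0.01)`; layering 88 instrument tracks is the same `layer` call with 88 arguments.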
That's it. We might not even have that ability.
What do you think of a program trying every possible particle arrangement to discover all possible technologies waha!?
Or trying all possible circuit arrangements on a neuro-morphic chip that acts like the brain (they exist)?
Plus Evolutionary Algorithms on the computer (clone an animal, each clone with mutations, and then clone the one that maximizes the reward function).
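The clone-mutate-select loop just described can be sketched in a few lines. The reward function here (get a single number close to 3) stands in for whatever the real reward would be; generation counts and mutation step are arbitrary illustrative choices.

```python
import random

random.seed(0)

# Bare-bones evolutionary algorithm in the spirit described above:
# clone the parent with mutations, then keep whichever clone scores
# best on the reward function. The reward (maximize -(x - 3)^2, i.e.
# drive x toward 3) is purely illustrative.

def reward(x):
    return -(x - 3.0) ** 2

def evolve(generations=100, children=20, step=0.5):
    parent = 0.0
    for _ in range(generations):
        clones = [parent + random.uniform(-step, step) for _ in range(children)]
        parent = max(clones + [parent], key=reward)  # elitist selection
    return parent

winner = evolve()
print(winner)  # lands near 3.0
```

Real EAs evolve whole genomes/networks rather than one number, but the clone-mutate-select skeleton is exactly this loop.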
Plus my discovery: with 3D-ized objects (yes, we can already do this), re-arrange them, scale them, morph them, clone them, etc. until the scene looks like the image/data, then do so again within this bounded area, and repeat, until e.g. the pencil is perfectly straight (or slanted) in the flower pot.
Imagine a very large space. The tweaking looks for new technology/AI/AI-improvements in a very small area. You can try farther out. You can try everywhere randomly to see all the "fields". Then find which field blooms. Then run Evolutionary Algorithms. Then try all possible particle/circuit arrangements, from quickest to lengthiest, on supercomputers!
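The strategy above amounts to a coarse global random search followed by local tweaking in whichever region "blooms". A toy sketch, assuming a made-up one-dimensional landscape whose best point stands in for the sought technology:

```python
import random

random.seed(1)

# Sketch of the two-phase search described above: probe everywhere at
# random to find which "field" blooms, then tweak locally inside it.
# The landscape function is invented for illustration only.

def landscape(x):
    return -abs(x - 42.0)  # the "blooming" spot sits at x = 42

def coarse_then_fine(lo=-1000.0, hi=1000.0, probes=200, tweaks=500):
    # Coarse phase: random probes across the whole space.
    best = max((random.uniform(lo, hi) for _ in range(probes)), key=landscape)
    # Fine phase: small local tweaks around the best probe.
    for _ in range(tweaks):
        candidate = best + random.uniform(-1.0, 1.0)
        if landscape(candidate) > landscape(best):
            best = candidate
    return best

print(coarse_then_fine())
```

Exhaustively trying every arrangement, as the post goes on to propose, would replace the random probes with full enumeration, which is only tractable for tiny spaces.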
We could use this Language NN to control the Auto-Creator within an Evolutionary Algorithm, combined with trying every possible neural/atomic configuration, to find AI, technologies, and knowledge.
The LNN could also be used for recognizing/remembering vision/actions and for recognizing/building technologies/knowledge. So could the Auto-Creator, the E.A., and trying all possibilities.
First, use the Auto-Creator/arranger to create something similar, e.g. recognize 2 human attributes / start from noise and create a model that has both, within an Evolutionary Algorithm; then try all possible neural/particle configurations to 100% for sure find AI.
New thread I posted on chatbots:
GENERATING A BRAIN
Why Automation is incredible.