By "etc" I mean I have more AGI mechanisms and only listed a few; there's several more. And this is only the upper section of my short book I'm creating still, it will be much better and unified than this.
Even though my work really makes sense and unifies so much, I'm still "having a dissonance headache", namely over GPT-2. As luck would have it, no one can explain GPT-2 in plain English; all 15 of the articles, papers, images, and friends I've seen basically say the same thing. Fortunately I've grasped a lot about AI. But I'm worried/wondering whether there is something greater going on in the black-box net, for example infinite patterns where the ones I listed are only 4 of 10,000,000, or maybe the net simply compares related words and does nothing else. For example, maybe when GPT-2 sees "I put on my shoes." AND "I found my", it predicts something different based on rules. Look at the following text and image rules; these are various "tasks":
UNDERSTOOD:
"Predict the next words: If the dog falls off the table onto the [floor, he may not be alive anymore]"
"Dogs cats horses zebra fish birds [pigs]"
"King is to man as Woman is to [Queen]"
"The cat (who was seen in a dumpster last night) [is eating catnip]"
ODD:
OOV WORD > "I love my F7BBK4, it cleans really well, so I told my friend he should buy a [F7BBK4]" ------ a second way to do this involves a strange pattern; it prefers position for energy transfer
"Mary is her name. What is her name? Mary" ------ test node active; withhold the key word of the key passage, and it says only the answer because the passage is dimmed if already heard
"Find me the most [rare] word in this sentence" ------ told to look at all words of "this sentence"; whenever a rarer word is found, keep it as the topPick (see the sketch after this list)
"write me a book about cats that is 400 words long: []" ------ "cats" stays active until it sees 400; it writes until it counts 400 words, checking once in a while
"highlight the 2 most related words in the next sentence: 'the [cat] ate his shoes and the [dog] ran off'" ------ of "this sentence", look at all words; it fires both when it finds a large combined activation
"[Segment [this sentence]] please" ------ a context makes it search 2-word windows and compare two of them; the most frequent is paired first, and it tells where to edit
"How many times does 'a' appear in this question?: [4]" ------ same as below, scans n-size windows in order, counting each time it sees exactly 'a'; helps prediction, and an exact prediction is required (see the sketch after this list)
"Julie Kim Lee has a mom named Taylor Alexa [Lee]" ------ a context makes it search the passage until it counts 1, 2, [3], ignoring non-namey words (um, oh, ya) and counting namey ones like Kim, Jin, Lee; helps prediction
"A word similar to love is: [hate]"
"Dan likes Jen and I believe Jen likes [Dan]" - same as others, looks for names, searches for 2nd, then 1st
"Cats are dogs. Hats but clothes. After god before. Look and ignore. Wind crane gust. jog cat [run]."
"Can cars fly? [No]."
"parrot me: [parrot me]"
"Please summarize 'the cat was playing all night with its cat friends': [cats partying]"
"if cats are huge AND cute then say 'hi' to me: []" ------ looks like progamming,
"super superman and spider spiderman and bat [batman]" ------ batman is predicted because it follows and is related to all the man and bat said
"Tim and Tom were walking by a lake, [Tim told Tom he needed fish]" -------- like exct numbers, we need stixck to the same people names!
NESTED ORDER OF WHAT, WHERE, ACTION:
"[please] inflate this [cool] sentence"
"remove the last [word]"
"[refine the first 4] words of this sentence please"
"scramble the last [orwd]"
Which is faster, a plane or a snail?
A plane is much faster than a snail.
Which is heavier, an elephant or one billion mice?
One billion mice are much heavier than an elephant.
Find the odd word: bike, house, church, building, school.
A bike isn't a house.
What do gold and silver have in common?
Gold is a metal and silver is a metal.
What is between Saturday and Monday?
Sunday.
What is the first letter of the word banana?
The first letter of the word banana is B.
What is the first vowel of the word banana?
The first vowel of the word banana is A.
How many syllables are in the word perambulate?
The word perambulate has five syllables in it.
What word rhymes with cat?
The word tipcat.
Can cats fly?
No, cats can't fly.
Why?
Cats can't fly because cats don't have wings.
My dog is Mallia
Well, your dog is Mallia.
How many paws has Mallia?
Your dog has four paws.
Why?
Because your dog is a dog and because a dog has four paws.
If I were a dragonfly, how many wings would I have?
Let's suppose that you are a dragonfly. You would have four wings.
If you were a snake, how many paws would you have?
Let's suppose that I am a snake. I wouldn't have paws.
I saw a good movie yesterday. I will see a movie tomorrow. When will I see a movie?
You have said it to me. You will see a movie tomorrow.
When did I see a movie?
You have said it to me. You saw a good movie yesterday.
The trophy would not fit in the brown suitcase because it was too big. What was too big?
Is it a Winograd schema? The trophy was too big because if the suitcase was too big then the trophy would fit in the brown suitcase.
The man couldn't lift his son because he was so weak. Who was weak?
Yet another Winograd schema! The man was weak because if his son was weak then the man could lift his son.
Pete envies Martin although he is very successful. Who is very successful?
Yet another Winograd schema! Pete is very successful because if Martin was very successful then you wouldn't use the word although.
And the images at the bottom of this paper:
https://arxiv.org/pdf/1911.01547.pdf

I'm pretty sure these "tasks" are just manipulating the mechanisms I listed. For example, if you link a new node to a well-known node, it can boost the new one so it isn't forgotten as easily, or you rehearse it according to its importance, which matches/triggers another node that keeps repeating it.
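As a rough illustration of that "link it to a well-known node so it isn't forgotten" idea, here is a small Python sketch. The Node class, the decay rate, and the transfer rate are all my own made-up stand-ins, not a real memory model:

# Toy activation model: every node decays each time step, but a new node
# linked to a well-known (highly active) node keeps getting topped up,
# so it is forgotten more slowly. All names and numbers are illustrative.

class Node:
    def __init__(self, name, activation=1.0):
        self.name = name
        self.activation = activation
        self.links = []  # neighbors that pass energy to this node

def step(nodes, decay=0.8, transfer=0.2):
    """One time step: decay every node, then add a fraction of each
    neighbor's activation (the boost from the well-known hub)."""
    boosts = {n.name: sum(link.activation * transfer for link in n.links)
              for n in nodes}
    for n in nodes:
        n.activation = n.activation * decay + boosts[n.name]

cat = Node("cat", activation=5.0)        # well-known, highly active concept
f7bbk4 = Node("F7BBK4", activation=1.0)  # brand-new word, linked to the hub
lonely = Node("X9Q", activation=1.0)     # brand-new word, no strong link
f7bbk4.links.append(cat)

nodes = [cat, f7bbk4, lonely]
for t in range(5):
    step(nodes)
print(f7bbk4.activation, lonely.activation)  # the linked node stays far more active

After five steps the linked node still sits well above the unlinked one, which is the "boost so it isn't forgotten so easily" effect in miniature.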
Elaboration is closely tied to summarization: you just pay attention to the rarest words/building blocks, the most semantically related, the most loved, etc., and that lets you either remove most filler words or "add" filler words. And this attention filter threshold is part of translation during semantic discovery, semantic decoding/translation, and prediction adaptation.
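Here is what I mean by that attention filter threshold, as a minimal Python sketch; the rarity scores and the filler list are invented for the example, the point is only that raising the threshold summarizes while lowering it keeps (or leaves room to re-add) the filler:

# Toy "attention filter threshold": score words by a made-up rarity value,
# then keep only the words above the threshold. A high threshold drops
# filler (summary); a low threshold keeps everything (room for elaboration).

FILLER = {"the", "was", "all", "with", "its", "a", "and"}

def rarity(word):
    # stand-in for real corpus statistics or semantic relatedness
    return 0.1 if word.lower() in FILLER else 1.0

def filter_by_threshold(sentence, threshold):
    return " ".join(w for w in sentence.split() if rarity(w) >= threshold)

s = "the cat was playing all night with its cat friends"
print(filter_by_threshold(s, 0.5))  # -> cat playing night cat friends
print(filter_by_threshold(s, 0.0))  # -> the whole sentence, filler kept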
You can ask someone to just translate something, or just say a prediction, or say both the prompt and the prediction, or give only the prediction with an exact match and no generalization, e.g. 2+2=[4].
If we look above at the text and image tasks, we notice a trend: if you state the task using multiple examples OR just say one time "rotate the following object 90 degrees" / "translate French to English please", it will do just that. We are priming the net to act a certain way, but it is only temporary: temporary energy/activity that remains until it is forgotten. It's like prompting GPT-2 with "cat cat cat cat cat", which forces it to predict 'cat' next. You could also just ask it to parrot you, as said. Or make it permanently love the concept 'cat', like Blender can do. So this priming causes it to repeat like a parrot... it will either keep translating English to French, or keep saying cat, or keep predicting words similar to cat, e.g. pig horse dog sheep cattle man donkey. This priming works on any word in English; you can feed it "cat cat cat cat" or "dog man rabbit pig" or "translate French to English" or so on, meaning all these tasks, whether a different word, a different embedding space, or a different task, are all the same thing: priming. This is just modulating the energy in the network; it isn't anything scary or new, just the few mechanisms I list.
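To show priming as nothing more than temporary energy, here is one last toy Python sketch; the relatedness table, the decay rate, and the boost sizes are all made up, it only illustrates "recent words raise activation, activation fades, the most active candidate wins":

# Toy priming model: each prompt word adds energy to itself and a smaller
# amount to related words, all stored energy decays every step, and the
# next-word pick is simply whatever is most active right now.

RELATED = {
    "cat": ["dog", "pig", "horse", "sheep"],
    "dog": ["cat", "pig", "wolf"],
}

def prime(prompt_words, decay=0.7):
    energy = {}
    for word in prompt_words:
        for key in energy:
            energy[key] *= decay                       # temporary: it fades away
        energy[word] = energy.get(word, 0.0) + 1.0     # the word itself
        for rel in RELATED.get(word, []):
            energy[rel] = energy.get(rel, 0.0) + 0.3   # related words, smaller boost
    return energy

def next_word(energy):
    return max(energy, key=energy.get) if energy else None

print(next_word(prime(["cat", "cat", "cat", "cat", "cat"])))  # -> cat
print(next_word(prime(["dog", "man", "rabbit", "pig"])))      # -> pig (most recent, most active)

Repeating "cat" pumps energy into 'cat' faster than it decays, so 'cat' dominates the pick, which is exactly the parrot effect described above; feed it a different word or a task phrase and the same loop repeats that instead.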