Ai Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: LOCKSUIT on July 17, 2020, 04:13:59 am

Title: Making AI easy and clear
Post by: LOCKSUIT on July 17, 2020, 04:13:59 am
If you want to explain how GPT-2 works to a beginner, easily and in no time, why not say it like this:

Table of contents (compressors/ MIXERS):
Syntactics
BackOff
Semantics
Byte Pair Encoding
More data
etc


First, you must understand "hierarchy brain": https://ibb.co/p22LNrN

Syntactics:
Intro: Letters, words, and phrases re-occur in text. AI finds such patterns in data and **mixes** them. We don't store the same letter or phrase twice; we just update connection weights to represent frequencies.
Explanation: If our algorithm has only ever seen "Dogs eat. Cats eat. Cats sleep. My Dogs Bark.", is prompted with the input "My Dogs", pays Attention to just 'Dogs', and requires an exact memory match, the possible predicted futures and their probabilities (frequencies) are 'eat' 50% and 'Bark' 50%. If we consider the full match 'My Dogs', we have fewer memories and predict 'Bark' 100%. The matched neuron's parent nodes receive split energy from the child match.
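The exact-match frequency counting described above can be sketched in a few lines of Python (a toy illustration of the idea, not how GPT-2 is actually implemented):

```python
from collections import Counter

# The only "past experience" the algorithm has, tokenized
corpus = "Dogs eat . Cats eat . Cats sleep . My Dogs Bark .".split()

def next_word_probs(context, tokens):
    """Count what follows every exact match of `context` and normalize to probabilities."""
    n = len(context)
    followers = Counter(
        tokens[i + n]
        for i in range(len(tokens) - n)
        if tokens[i:i + n] == context
    )
    total = sum(followers.values())
    return {w: c / total for w, c in followers.items()}

print(next_word_probs(["Dogs"], corpus))        # {'eat': 0.5, 'Bark': 0.5}
print(next_word_probs(["My", "Dogs"], corpus))  # {'Bark': 1.0}
```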

BackOff:
A longer match considers more information but has very little experience behind it, while a short match has the most experience but little context. A summed **mix** predicts better: we look in memory at what follows 'Dogs' and what follows 'My Dogs' and blend the two sets of predictions to get e.g. 'eat' 40% and 'Bark' 60%.
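A minimal sketch of that blending step, reusing the two prediction sets from the Syntactics example; the 0.2/0.8 weights are arbitrary values chosen only to reproduce the 40%/60% figures above:

```python
def blend(long_probs, short_probs, long_weight=0.2):
    """Weighted mix of the prediction sets from a long and a short context match.
    The weights here are illustration values; a real system would tune them."""
    words = set(long_probs) | set(short_probs)
    return {
        w: long_weight * long_probs.get(w, 0.0)
           + (1 - long_weight) * short_probs.get(w, 0.0)
        for w in words
    }

short = {"eat": 0.5, "Bark": 0.5}  # predictions after 'Dogs'
long = {"Bark": 1.0}               # predictions after 'My Dogs'
mixed = blend(long, short)         # 'eat' ~0.4, 'Bark' ~0.6
print(mixed)
```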

Semantics:
If 'cat' and 'dog' share 50% of the same contexts, then maybe the contexts they don't share are shared as well. Say you see cat ate, cat ran, cat ran, cat jumped, cat jumped, cat licked... and dog ate, dog ran, dog ran. The predictions they don't share could probably be shared too, so maybe 'dog jumped' is a good prediction. This helps prediction a lot: it lets you match a given prompt against many various memories that are worded similarly. Like the rest above, you mix these; you need not store every sentence from your experience, resulting in a fast, low-storage brain. Semantics looks at both sides of a word or phrase, and closer items impact its meaning more.
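The shared-context idea can be illustrated with a toy distributional-similarity measure (Jaccard overlap here is just one simple choice; Word2Vec and friends do the same thing with dense vectors):

```python
from collections import defaultdict

# (word, context) observations from the example above
pairs = [("cat", "ate"), ("cat", "ran"), ("cat", "ran"),
         ("cat", "jumped"), ("cat", "jumped"), ("cat", "licked"),
         ("dog", "ate"), ("dog", "ran"), ("dog", "ran")]

contexts = defaultdict(set)
for word, ctx in pairs:
    contexts[word].add(ctx)

def similarity(a, b):
    """Jaccard overlap of the context sets two words appear with."""
    shared = contexts[a] & contexts[b]
    return len(shared) / len(contexts[a] | contexts[b])

print(similarity("cat", "dog"))  # 0.5 -- half their contexts overlap
# Because they overlap, contexts seen only with 'cat' (e.g. 'jumped')
# become plausible predictions for 'dog' too.
```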

Byte Pair Encoding:
Take a look at the Wikipedia article; it is really simple and can compress a hierarchy too. Basically you just find the most common low-level pair, e.g. 'st', then you find the next-higher-level pair made of those, e.g. 'st'+'ar'... it segments text well, showing its building blocks.
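A bare-bones sketch of the BPE merge loop described above (toy code; real BPE implementations learn merges from a large corpus and handle word boundaries more carefully):

```python
from collections import Counter

def most_common_pair(tokens):
    """The adjacent pair that re-occurs most often at the current level."""
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]

def merge(tokens, pair):
    """Fuse every occurrence of `pair` into a single higher-level symbol."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("star start stark")
for _ in range(3):  # repeatedly fuse the most frequent pair, building the hierarchy
    tokens = merge(tokens, most_common_pair(tokens))
print(tokens)  # 'star' emerges as a reusable building block
```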

More Data:
Literally just feeding the hierarchy/ heterarchy more data improves its prediction accuracy for which word/ building block usually comes next in a sequence. More data alone improves intelligence; it's actually called "gathering intelligence". It does, however, slow down at some point and requires other mechanisms, like the ones above.

etc
Title: Re: Making AI easy and clear
Post by: infurl on July 17, 2020, 04:59:38 am
This is an excellent post @LOCKSUIT. I enjoyed reading it and you explained these points very clearly. I'm intrigued by one part though and that's the "etc" heading. What else do you think we need to know?
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 17, 2020, 08:08:07 am
By "etc" I mean I have more AGI mechanisms and only listed a few; there's several more. And this is only the upper section of my short book I'm creating still, it will be much better and unified than this.

Even though my work really makes sense and unifies so much, I'm still "having a dissonance headache". Namely GPT-2. To my luck, no one can explain GPT-2 in English; all 15 articles, papers, images and friends I've seen say it basically the same way. Fortunately I've grasped a lot about AI. But I'm worried / wondering if there is something greater to the blackbox net, for example infinite patterns, and the ones I listed are only 4 of 10,000,000; or maybe the net simply compares related words and does nothing else. For example, maybe when GPT-2 sees "I put on my shoes." AND "I found my" it predicts something different based on rules. For example, look at the following text and image rules; these are various "tasks":



UNDERSTOOD:
"Predict the next words: If the dog falls off the table onto the [floor, he may not be alive anymore]"
"Dogs cats horses zebra fish birds [pigs]"
"King is to man as Woman is to [Queen]"
"The cat (who was seen in a dumpster last night) [is eating catnip]"

ODD:
 OOV WORD > "I love my F7BBK4, it cleans really well, so I told my friend he should buy a [F7BBK4]" ------ 2nd way to do this involves strange pattern, prefers position for energy transfer
"Mary is her name. What is her name? Mary" ------ test node active, withhold key word of key passage, says only the answer cus passage dimmed if heard
"Find me the most [rare] word in this sentence" ------ told to look at all words OF "this sentence", if more rare then keep that topPick
"write me a book about cats that is 400 words long: []" ------ cats stays active until see 400, writes until counts 400, checks once in a while
"highlight the 2 most related words in the next sentence: 'the [cat] ate his shoes and the [dog] ran off'" ------ OF "this sentence", look at all words, fires both when finds large combination activation
"[Segment [this sentence]] please" ------ a context makes it search 2wordWindows, compares 2 such, most frequent is paired first, tells where to edit
"How many times does 'a' appear in this question?: [4]" ------ same as below, does an n-size windows in an order, counts when sees 'a' exactly, helps prediction, exact prediction required
"Julie Kim Lee has a mom named Taylor Alexa [Lee]" ------ a context makes it search the passage until counts 1, 2, [3], ignoring non-namey words like kim jin um oh ya Lee, helps prediction
"A word similar to love is: [hate]"
"Dan likes Jen and I believe Jen likes [Dan]" - same as others, looks for names, searches for 2nd, then 1st
"Cats are dogs. Hats but clothes. After god before. Look and ignore. Wind crane gust. jog cat [run]."
"Can cars fly? [No]."
"parrot me: [parrot me]"
"Please summarize 'the cat was playing all night with its cat friends': [cats partying]"
"if cats are huge AND cute then say 'hi' to me: []" ------ looks like programming
"super superman and spider spiderman and bat [batman]" ------ batman is predicted because it follows and is related to all the man and bat said
"Tim and Tom were walking by a lake, [Tim told Tom he needed fish]" -------- like exact numbers, we need to stick to the same people's names!

NESTED ORDER OF WHAT, WHERE, ACTION
"[please] inflate this [cool] sentence"
"remove the last [word]"
"[refine the first 4] words of this sentence please"
"scramble the last [orwd]"

Which is faster, a plane or a snail?
A plane is much faster than a snail.
Which is heavier, an elephant or one billion of mice?
One billion of mice are much heavier than an elephant.
Find the odd word: bike, house, church, building, school.
A bike isn't a house.
What does gold and silver have in common?
Gold is a metal and silver is a metal.
What is between saturday and monday?
Sunday.
What is the first letter of the word banana ?
The first letter of the word banana is B.
What is the first vowel of the word banana ?
The first vowel of the word banana is A.
How many syllables are in the word perambulate?
The word perambulate has five syllables in it.
What word rhymes with cat?
The word tipcat.

Can cats fly?
No, cats can't fly.
Why?
Cats can't fly because cats don't have wings.

My dog is Mallia
Well, your dog is Mallia.
How many paws has Mallia ?
Your dog has four paws.
Why?
Because your dog is a dog and because a dog has four paws.

If I was a dragonfly, how many wing do I would have?
Let's suppose that you are a dragonfly. You would have four wings.
If you were a snake, how many paws do you would have?
Let's suppose that I am a snake. I wouldn't have paws.
I saw a good movie yesterday. I will see a movie tomorrow. When did I will see a movie?
You have said it to me. You will see a movie tomorrow.
When did I saw a movie?
You have said it to me. You saw a good movie yesterday.
The trophy would not fit in the brown suitcase because it was too big. What was too big?
Is it a Winograd schema? The trophy was too big because if the suitcase was too big then the trophy would fit in the brown suitcase.
The man couldn't lift his son because he was so weak. Who was weak?
Yet another Winograd schema! The man was weak because if his son was weak then the man could lift his son.
Pete envies Martin although he is very successful. Who is very successful?
Yet another Winograd schema! Pete is very successful because if Martin was very successful then you wouldn't use the word although.



And the images at the bottom of this paper:
https://arxiv.org/pdf/1911.01547.pdf



I'm pretty sure these "tasks" are just manipulating the mechanisms I listed. For example, if you link a new node to a well-known node, it can boost it so as not to forget it so easily; or you rehearse it according to its importance, which matches/triggers another node that keeps repeating it.
Elaboration is closely tied to summarization: you just pay attention to the rarest words/building blocks, the most semantically related, the most loved, etc., and that allows you to either remove e.g. most filler words or "add" filler words. And this attention filter threshold is part of translation during semantic discovery, semantic decoding/translation, and prediction adaptation.
You can ask someone to just translate something, or just say a prediction, or say both the prompt and the prediction, or just the prediction with only an exact match and no generalization, e.g. 2+2=[4].



If we look above at the text and image tasks, we notice a trend: if you state the task using multiple examples, OR just say one time "rotate the following object 90 degrees" / "translate French to English please", it will do just that. We are priming the net to act a certain way, but it is only temporary: temporary energy/activity remains until it is forgotten. It's like prompting GPT-2 with "cat cat cat cat cat", which forces it to predict 'cat' next. You could just ask it to parrot you though, as said. Or make it permanently love the concept 'cat', like Blender can do. So this priming causes it to repeat like a parrot... it will either keep translating English to French, or keep saying cat, or keep predicting words similar to cat, e.g. pig horse dog sheep cattle man donkey. This priming works on any word in English; you can feed it "cat cat cat cat" or "dog man rabbit pig" or "translate French to English" or etc., meaning all these tasks, be it a different word or embed space or different task, are all the same thing: priming. This is just modulating the energy in the network; it isn't anything scary or new, just the few mechanisms I list.
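The "temporary energy" idea of priming could be sketched like this; the boost and decay constants, and the exponential scoring, are made-up illustration values, not anything measured from GPT-2:

```python
import math

def primed_probs(base_probs, prompt, boost=2.0, decay=0.7):
    """Temporarily boost words the prompt just activated; the 'energy' fades
    the further back in the prompt a word occurred."""
    energy = {}
    for distance, word in enumerate(reversed(prompt)):
        energy[word] = energy.get(word, 0.0) + boost * (decay ** distance)
    # Re-score the base predictions by leftover activation, then renormalize
    scored = {w: p * math.exp(energy.get(w, 0.0)) for w, p in base_probs.items()}
    total = sum(scored.values())
    return {w: s / total for w, s in scored.items()}

base = {"cat": 0.1, "dog": 0.3, "the": 0.6}
print(primed_probs(base, ["cat", "cat", "cat"]))
# 'cat' now dominates -- the prompt's leftover activation forces it to repeat
```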
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 18, 2020, 07:34:06 am
So a common ANN is just learning basically what I presented in my opening post? Most ANNs use backprop, but really the underlying theory is that the neural connections are based on the data/ accesses made to the network; it carves itself out on its own.

I believe ANN bias nodes that get added to the sum are also data-based and should not be found by backprop.

Anyway, there is a set of rules by which the data itself defines the way the net splits/manages the energy spreading up the net. We must look past backprop and understand those juicy mechanisms, like the ones I presented in my opening post. We must understand what backprop is finding in the neural connections and in the biases / activation functions.
Title: Re: Making AI easy and clear
Post by: infurl on July 18, 2020, 08:58:55 am
@LOCKSUIT The good thing about your first post in this thread was that it was brief and to the point. You didn't say a huge amount of stuff that obscured your message. Your second post was a bit too long to take in easily because you were trying to say too much at once and you provided too many examples. It takes extra effort to write a shorter clearer piece, but it is always worth it.

In that second post you seemed to be asking why GPT-2 was going off the rails so easily. I think it's because it doesn't have any consciousness. It doesn't know what it's supposed to be doing, let alone whether or not it's doing it. Google has done some experiments with neural networks that create other neural networks. Maybe you could do some experiments with neural networks that watch other neural networks to see what they're doing.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 18, 2020, 10:33:13 am
Even though I gave yous a fine lesson, I was actually asking why doesn't anyone else explain it like that, and can you add more items to the table of contents? GPT-2 may sound like algebra but underneath it must be doing the things I said in my 1st post.
Title: Re: Making AI easy and clear
Post by: silent one on July 18, 2020, 01:38:27 pm
You need something better for your environmental model than just parroting a huge amount of text; you need something that's more like the equation for the text, not just the text itself.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 18, 2020, 02:05:57 pm
Do ANNs have that, or are you suggesting something new for true human intelligence?
Title: Re: Making AI easy and clear
Post by: silent one on July 18, 2020, 02:30:47 pm
Yes, as in it's old news from ages ago. You can get it by trying all possible configurations, but realistically you can't test more than 30 one-bit dimensions before it takes an ice age to finish.
So what you've actually got here, with the exchangeable words, the king to boy to queen to girl, and maybe others, would be the best you can get. But that's the equivalent of telling the computer the way to think, instead of it working it out itself, so it's a step back from AGI that way.

Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 18, 2020, 02:39:45 pm
Please be more clear....
"you need something that's more like the equation for the text,  not just the text itself."
Is it existing technology or not? If it exists, explain it much clearer.
Title: Re: Making AI easy and clear
Post by: Korrelan on July 18, 2020, 03:22:02 pm
I feel the initial corpus used for demonstrating the GPT system/ technique (words/ text) not only showed its relative power (attention) compared to existing NLP systems, but also gives insight into its many weaknesses.

The underlying premise for GPT techniques is sound: a general-purpose pattern finder with an incorporated attention mechanism. Although the initial attention map (12 layers with 12 independent attention mechanisms) only gives a possible 144 perspectives, it still produces decent results.

However, whilst I agree there are lessons to be learned from the GPT tech, this will never lead to a human+ level AGI, especially using just a language corpus; the system requires a much more versatile attention mechanism, greater knowledge generality, a physical grounding in reality, etc.

 :)
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 19, 2020, 06:13:57 am
Discovery Time: In text you can discover that if spiders are dangerous, and spiders and rats both bite you, then rats are more probably dangerous.

The same discovery can be found using Vision. Therefore, if vision is grounded, text must be too. Oh look, I just had a Discovery Time.

And related translation isn't the only discovery done in text / vision; there's segmentation, frequency, loveness, temp activity, etc.


Yes, improving GPT-2, or the better Blender, and mixing Vision and Touch into it would be best, but it's not so mandatory just yet.
Title: Re: Making AI easy and clear
Post by: Korrelan on July 19, 2020, 10:14:24 am
Language/ text is just a protocol; it's not comparable to vision.

When a human ‘thinks’ about a spider they form a model in their imagination; they understand everything about the spider, its shape, colour, how it moves, and that they can be dangerous... this model is grounded in reality.

If the human wants to convey this information to another human they would form a sentence, ‘spiders are dangerous’; this is just a common protocol, a string of words and letters that holds no information.

It relies on the intelligence of the receiver to understand/ decode the meaning; the string is designed to trigger the same spider model (or their equivalent) in the imagination of the receiver.

Our language/ protocol has syntax, the order that letters/ words are usually arranged to help the receiver with decoding the meaning… But the sentence/ string ‘spiders are dangerous’ on its own means absolutely nothing.

GPT learns the syntax; the order embedded within the protocol from a massive corpus and is able to recreate this order based on the many combinations within the corpus.  It’s only able to construct replies based on the ‘implied/ embedded’ human intelligence originally used to create the corpus. It has no imagination, it just uses its learned language syntax/ order to ‘lookup’ what a human has previously said regarding a topic.

If GPT writes ‘spiders are dangerous’ it has no grounded/ deep understanding of what the sentence actually ‘means’; there is no ‘mind’ behind it, it just knows that this combination of letters/ words is usually given in reply to this question/ scenario.

Yes, it's able to use an attention mechanism to re-order/ combine the corpus snippets into new paragraphs, but this is the 144 attention maps that have either been pre-defined or learned from the corpus… GPT is just a mimic.

GPT only leverages/ learns the order/ syntax of language; this is why it can never become an AGI based on language alone.



Take three words… ‘dog, the, ran’ and randomly reorder them to form sentences; there is no ‘mind’ behind this, just a randomisation algorithm.

Dog ran the
The dog ran
Ran the dog

All three are just strings; they are exactly the same in every way, except the second one has randomly/ accidentally encoded human syntax, which enables ‘you/ your mind’ to make sense of the string.

 :)
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 19, 2020, 10:44:22 am
"Language/ text is just a protocol, it not comparable to vision."

Vision is just one way to view the world; there are noses, temperature, pressure, sound, radio, and many other sensors.


"When a human ‘thinks’ about a spider they form a model in their imagination, they understand everything about the spider, its shape, colour, how it moves, and that they can be dangerous... this model is grounded in reality."

So does text. Word2Vec is a good example. The spider's shape, color, how it moves, and the fact that it can be dangerous are all tied to the context "spider".


"If the human wants to convey this information to another human they would form a sentence ‘spiders are dangerous’, this is just a common protocol, a string of words and letters that holds no information."

Wrong, text IS information. Anything that exists is information. And text has patterns derived from man, not just any patterns.


"Our language/ protocol has syntax, the order that letters/ words are usually arranged to help the receiver with decoding the meaning"

So does vision: an object 'cat' is recognized as 'dog' by either their similar structure or their surrounding contexts, e.g. both appear in snowy areas.


"But the sentence/ string ‘spiders are dangerous’ on its own means absolutely nothing."

Same for a visual object or image. Only with big-data context can you make a decent embed space, like Word2Vec does, to start learning patterns.


"Yes, its able to use an attention mechanism to re-order/ combine the corpus snippets into new paragraphs but this is the 144 attention maps that have either been pre-defined or learned from the corpus"

What's your point here? That is true, 144, and I'm not sure what these actually do; there should be no such thing. It is not apparent when using GPT-2, though; it is as if there are infinite maps. How can you map something when you don't know how it will be spaced apart / worded?


"GPT only leverages/ learns the order/ syntax of language; this is why it can never become an AGI based on language alone."

No; even vision is made of frame-by-frame causality, i.e. cause>effect; you can only play a memory forward. All your memories are made of the smallest elementary parts/memories, and all your memories are a sequence or collage of these; your whole brain is movies of syntax. And GPT can invent NEW and useful sentences.
Title: Re: Making AI easy and clear
Post by: Korrelan on July 19, 2020, 01:12:11 pm
Vision has a direct correspondence with reality; text is an interpreted protocol... They are different.

I'm not stating that GPT is useless, just that GPT using purely language is useless, GPT with other sensory modalities is required, as a minimum.

 :)

https://towardsdatascience.com/openai-gpt-2-understanding-language-generation-through-visualization-8252f683b2f8

 :)
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 19, 2020, 04:34:53 pm
The same discoveries you can make in Vision can be made in Text. For example, that cats are similar to dogs, either because of their looks [zook]=[mooq] or their neighbor contexts' looks dog_[bowl]=cat_[bowl]. Another: that cats usually lick and rarely sleep. Another: the topic as of recently is all about cats, so cats will probably come up again. Cats. "Useless", eh? Beat that.

The 144 attention heads: I never looked at them like that. Interesting. It seems they look at certain words. One, for example, saw "James [Vig] is the [son] of John _"... the attention head for son/mom/etc. saw that 'son' was said, allowing/causing the last-name attention head to look for the 2nd name and predict it. Or something like that.
Title: Re: Making AI easy and clear
Post by: Korrelan on July 19, 2020, 05:22:38 pm
Quote
The same discoveries you can make in Vision can be made in Text.

Ok... I disagree but that's fine... carry on.   :)
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 19, 2020, 06:07:21 pm
What novel type of discovery can you discover in Vision, that you cannot in Text?
Title: Re: Making AI easy and clear
Post by: ivan.moony on July 19, 2020, 06:08:08 pm
Vision is 2D. Text is 1D. A sequence of finite 1Ds is 2D. Hence, 2D can be expressed in 1D, and so can 3D and so on. This probably causes a mess until the NN finds out how to stitch multiple 1Ds into a single 2D, but once it does, it can do stuff in 2D over the text dimension, right?
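The point that 2D can be losslessly expressed in 1D is easy to demonstrate: raster-scan the grid into a token sequence with a row-break marker, and the 2D structure is fully recoverable from the 1D stream:

```python
def flatten(grid):
    """Raster-scan a 2D grid into a 1D sequence, marking row breaks."""
    seq = []
    for row in grid:
        seq.extend(row)
        seq.append("\n")  # row-break token lets a 1D learner recover the 2D layout
    return seq

def unflatten(seq):
    """Rebuild the 2D grid from the flat sequence using the row-break tokens."""
    grid, row = [], []
    for tok in seq:
        if tok == "\n":
            grid.append(row)
            row = []
        else:
            row.append(tok)
    return grid

image = [[0, 1], [1, 0]]
assert unflatten(flatten(image)) == image  # the 2D structure round-trips through 1D
```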
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 19, 2020, 06:35:40 pm
I don't believe an image is a sequence; I think the objects in it are feature-ized and are able to 'not' save anything else but, e.g., the board dice near the upper-right corner.

Well, if we feed it image data, and teach it how to look at it, then yes....
Title: Re: Making AI easy and clear
Post by: infurl on July 20, 2020, 03:56:52 am
Reality doesn't care what you believe. As long as you keep making such wild guesses based on your own limited experience and knowledge, and assuming they are correct without actually verifying them, you will continue to compound your errors and wander even further from any useful results.

You should familiarize yourself with Jeff Hawkins' research. His company, Numenta, has pushed our understanding of how the brain really works further than anybody. What they have found is that all types of perception are active processes which combine prediction with feedback. Perception has at least four dimensions because it is always tied to both space and time.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 20, 2020, 07:38:18 am
Oops, I didn't mean vision isn't a sequence; I meant an image isn't. Obviously you don't store image object parts as lines of the image; you need to store segmented objects! E.g. you store a diamond object node/capsule:

not like this:
                      -------
--------------------------
--------------------------
---------

but like this:
        -
       ----
    --------
      -----
        -

Now this, this is a sequence; you can see one part of the diamond and generate the next, yes... however, to do it correctly you must look at it as 2D objects/capsule nodes once again, back to the thing I illustrated above. We don't store full image lines in nodes.....


I already knew the brain mixes all sensory types to improve its prediction, and how the brain uses feedback/data over time to specialize what kind of feedback/data it collects. That gives it "more data", technically. And when unconfident, it will seek more data from some next-best domain.
Title: Re: Making AI easy and clear
Post by: silent one on July 20, 2020, 08:36:33 am
An image is a sequence; what if it was a photo of a newspaper? :)
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 20, 2020, 08:41:17 am
A newspaper is an image; you can only read it if you scan over it word by word, which therefore makes it a video. If you widen your attention to the whole page and try to read it all in one frame, you can't; all you'll see/read is a "wall o' text" XD.

And k, answer that question! xd
Title: Re: Making AI easy and clear
Post by: silent one on July 20, 2020, 11:41:48 am
Markov chains work on graphics as well as text, so I agree with Locksuit, but Korrelan is onto some other way of thinking that's just as good.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 22, 2020, 06:28:08 pm
Referring to the images in this pdf https://arxiv.org/pdf/1911.01547.pdf

Text/GPT-2 and Vision both have an "ARC" set of tasks. I call it "infinite tasks of things" or "Internet of Things" because you'll see it is absurdly flexible. See attached a list of tasks for text I made/collected. This is my "ARC" text set.

Whether vision or text, it works the same way, you either show it multiple examples or tell it "translate x to french" (assuming it has seen multiple examples) and it will do just that. It's a temporary priming/control or "way of thinking" for the net.

If we look at the start of the book I'm creating (see other attachment), we see Syntactics, Semantics, BackOff, BPE, etc. can learn patterns; however, we (not the algorithm) tell the algorithm to find these (priors). The algorithm should be able to discover these! Is there more it can discover? Semantics, BPE, what else is there? Can it find more!? Look at the text tasks I made: translation, word unscrambling, finding the rarest word in a sentence, the mom/dad/son same-last-names pattern, etc. There is more! But they are less common, so they seem like the last end of the hog's shoe and are hard/unnoticeable to find. In fact I think the main AGI net will be based on the most common priors, and will have to emulate other priors using the more common mechanisms. A pattern improves prediction accuracy less if it is less common, unless you can handle 1,000 types of pattern tasks.

GPT-2 has 144 attention heads it learns and then they look for "patterns".

As for rotating an L-shaped object in text: vision does just that, and if you do just that in text then it is really vision after all. So if we want to do it in text the way text was designed for, it would be more like this: We have an L-shaped object with the top of the L sticking up; please rotate it clockwise; what do we have now? Answer: the top of the L-shaped object is sticking out to the right.

I'm still checking if these "many tasks" are really just the few mechanisms listed in my attachment BOOK.txt - it could be that there is some pattern among them.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 23, 2020, 05:48:30 am
And the "AGI net" seems to be only the top most common patterns. Why? Frequency/syntactics, semantics, BPE... Does it not handle all other tasks? When you call/ask for the task "translate to French", my architecture only uses the recognized nodes (it only requires my architecture to see!), and it uses energy to cause tasks temporarily, and highlights French too using temporary activation energy. Further, after the task is said, you feed it a subject to manipulate, e.g. dog, and it translates it to the French version of "dog", because 'dog' was activated and so was the French version. So it seems that using just energy activation you can control not just the task but the subject fed in, using just a simple architecture?

There are many patterns in text/vision; many are less used, few are common. The brain architecture, I'm guessing, is only made of the most common patterns, to naturally find those without being told to store/find them. How can it do this? It emulates, but how? It must be that the most common patterns make up the other bazillion patterns! A sequence of tasks. Now, even the net mechanisms have patterns: frequency, semantics, BPE, more data, all give you more data! And they blend the nodes as representations. The bazillion other tasks/patterns that 'reoccur' also give you more data and help prediction. Just saying. Anyway, hmm. I couldn't imagine a brain mechanism to make last names the same, 2nd names the same, or first, or etc.; it must be more general than that.

Maybe you can help. How does AGI say the 3rd name is the same as the relative's? E.g. Jim Fan [Lee] has a son named Roy Tan [Lee]. The 'son' and the other name clue it in. But how, exactly? It could be said like Jim Fan um oh ya Lee. It knows names are odd, so is it looking for "names" and counting to 3? That would require a 123 node and a low-frequency threshold circuit, and maybe scanning memory. ??
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 23, 2020, 07:37:38 pm
Just realized: vision has one discovery edge over text, but it only increases prediction accuracy; it's not a fundamental difference, do note!:

If you have a hierarchy of [visual features], pig is made of teeth, hooves, tail, eyes, retina, pink, has a genital, etc. But in a text hierarchy, well, pig isn't made of those!!! It's made of pi+g, and pi is made of p+i. And no, you DON'T make a Concept Hierarchy; the brain has no such thing. So how does text know pig has legs? Oh, it can, by e.g. seeing it in text or generalizing semantically. Just not as often as vision.

This is why you can activate a node and all its parts get activated (legs, pink, teeth, etc.) more than similar features (sheep, dog, monkey, goat).
Title: Re: Making AI easy and clear
Post by: ivan.moony on July 23, 2020, 07:57:05 pm
Vision is one abstraction. Hearing is another. So is smell, and so on. And all those abstractions relate to the outer world.

Now, text is interesting. Text on its own isn't very special; it is merely an arranged system of letters. But in correlation to our sensing of the outer world... text is an artificial abstraction used to represent vision, hearing, ... We draw similarities between words made of letters and tangible concepts in vision, sound, ..., so we can communicate those concepts for doing serious business, or simply enjoying fun. Words become merely keys for tagging real concepts.

It sounds so banal when I read what I just wrote, but I'll publish it anyway.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 23, 2020, 08:06:51 pm
Text [alone] tells you what flowers smell like, vision [alone] doesn't :P

1up for text now!
Title: Re: Making AI easy and clear
Post by: Korrelan on July 23, 2020, 11:23:42 pm
 :o
Title: Re: Making AI easy and clear
Post by: infurl on July 24, 2020, 12:47:13 am
@Korrelan that's exactly what I was thinking.  :2funny:
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 24, 2020, 01:22:16 am
Because a text dataset can say flowers smell like X, or that bark smells like X and flowers are similar to bark, hence flowers probably smell like X. You end up learning flowers smell like, um, e.g., fish, and so flowers are similar to fish then....

Images don't show molecular smells, unless you have microscopic images of molecules... ok fine, vision can tell you what flowers smell like; you 'can' see that flowers, say, had fish molecules, so they'd smell like fish then... assuming they can be picked up by human noses... Do images show that?......... Nope. 1up 4 text then!
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 24, 2020, 04:00:05 am
Before I write my book, I'm just trying to first find out from yous how the brain can more accurately answer a wider variety of problems. My AGI blueprint is a hierarchy of representations, that uses energy sitting in nodes to make the hierarchy "do anything". I can't see it any other way. Any task or pattern uses those memories.

When you view a text prompt (like GPT does), you can look at it multiple ways and manipulate it multiple ways, and use your merged representations to e-merge out something new. Syntactics tells you the usual word that follows (frequency); the pattern helps you. So does seeing that all family 3rd names are identical; this pattern tells you what usually follows. - If you gave the last 15 words to a syntactic/semantic-only AI, it would not know to put the [ ] word at the end > Al Ba Zoy has a very healthy son whose name is said to be Ga Wa [Zoy], because there's nothing saying or suggesting it goes there; the first time it has seen that word is just now, and the last person's name is not the same and therefore doesn't trigger the same word to follow. Such would look like: I have [my zwq] and so [my>zwq]. Now of course the names could be decoded as "name name name has a son name name _", but this would make both last names the same though :( e.g. Tom Kee Lee has a son Jack Kee Lee. To make just the 3rd name the same, a rule must be told, or discovered in text, or the rule must be invented by the system itself.
Title: Re: Making AI easy and clear
Post by: silent one on July 24, 2020, 02:32:28 pm
Yes, that's true! What Ivan said: without the senses there would be no use for language that describes them... maybe... but it's definitely true that a problem with chatbots is they are talking about things they have no experience of.
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on July 24, 2020, 08:23:00 pm
https://www.reddit.com/r/artificial/comments/hwtfls/vision_is_special/
Title: Re: Making AI easy and clear
Post by: LOCKSUIT on August 04, 2020, 12:14:13 am
How far along is your book, Korrelan? Can I take a peek?
Title: Re: Making AI easy and clear
Post by: Korrelan on August 04, 2020, 09:22:35 am
Sorry... not quite ready for human consumption yet...

 :)