Making AI easy and clear

  • 37 Replies
  • 3511 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Making AI easy and clear
« Reply #15 on: July 19, 2020, 04:34:53 pm »
The same discoveries you can make in Vision can be made in Text. For example, that cats are similar to dogs, either because of their own looks [zook]=[mooq] or because of their neighbor contexts' looks dog_[bowl]=cat_[bowl]. Another: that cats usually lick and rarely sleep. Another: the recent topic is all about cats, so cats will probably come up again. "Useless", eh? Beat that.
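The neighbor-context idea (dog_[bowl]=cat_[bowl]) can be sketched in a few lines. The toy corpus and the scoring function below are made up purely for illustration, not taken from any real system:

```python
from collections import Counter

# Toy corpus: words that share neighbor contexts (like dog_[bowl] = cat_[bowl])
# end up looking similar, purely from text statistics.
corpus = "the cat ate from the bowl . the dog ate from the bowl . the cat slept".split()

def context_counts(word, tokens, window=1):
    """Count the neighbors seen around each occurrence of `word`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == word:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

def shared_contexts(a, b, tokens):
    """Overlap of neighbor contexts: a crude similarity score."""
    ca, cb = context_counts(a, tokens), context_counts(b, tokens)
    return sum((ca & cb).values())  # Counter & Counter keeps the minimum counts

print(shared_contexts("cat", "dog", corpus))  # → 2 ('the' and 'ate' are shared)
```
Real systems use vectors and cosine similarity instead of raw overlap counts, but the discovery is the same: cat and dog look alike because their neighbors do.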

The 144 attention heads: I never looked at them like that. Interesting. It seems they look at certain words. One, for example, saw "James [Vig] is the [son] of John _": the attention head for son/mom/etc. noticed that "son" was said, allowing/causing the last-name attention head to look for the second name and predict it. Or something like that.
« Last Edit: July 19, 2020, 05:16:35 pm by LOCKSUIT »
Emergent          https://openai.com/blog/


Korrelan

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Making AI easy and clear
« Reply #16 on: July 19, 2020, 05:22:38 pm »
Quote
The same discoveries you can make in Vision can be made in Text.

Ok... I disagree but that's fine... carry on.   :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

LOCKSUIT
Re: Making AI easy and clear
« Reply #17 on: July 19, 2020, 06:07:21 pm »
What novel type of discovery can you make in Vision that you cannot in Text?

ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1723
    • mind-child
Re: Making AI easy and clear
« Reply #18 on: July 19, 2020, 06:08:08 pm »
Vision is 2D. Text is 1D. A sequence of finite 1Ds is 2D. Hence, 2D can be expressed in 1D, and so can 3D and so on. This probably causes a mess until the NN finds out how to stitch multiple 1Ds into a single 2D, but once it does, it can do stuff in 2D over the text dimension, right?
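The stitching idea above can be sketched with numpy; the 3x4 "image" is an arbitrary example:

```python
import numpy as np

# 2D expressed in 1D: slice an "image" into a sequence of 1D rows,
# then stitch them back into a single 2D array.
image = np.arange(12).reshape(3, 4)    # a tiny 3x4 "image"
rows = [image[i] for i in range(3)]    # a sequence of finite 1Ds
restored = np.stack(rows)              # the 1Ds stitched back into 2D
assert (restored == image).all()

flat = image.ravel()                   # fully 1D: what a text-like model sees
back = flat.reshape(3, 4)              # recoverable only if the width (4) is known
assert (back == image).all()
```
The catch is that the flat version only turns back into 2D if the row width is known, which is exactly the stitching the network would have to learn.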

LOCKSUIT
Re: Making AI easy and clear
« Reply #19 on: July 19, 2020, 06:35:40 pm »
I don't believe an image is a sequence. I think the objects in it are featurized, and can avoid saving anything but, say, the board dice near the upper-right corner.

Well, if we feed it image data and teach it how to look at it, then yes....
« Last Edit: July 20, 2020, 07:23:32 am by LOCKSUIT »

infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1365
  • Humans will disappoint you.
    • Home Page
Re: Making AI easy and clear
« Reply #20 on: July 20, 2020, 03:56:52 am »
Reality doesn't care what you believe. As long as you keep making such wild guesses based on your own limited experience and knowledge, and assuming they are correct without actually verifying them, you will continue to compound your errors and wander even further from any useful results.

You should familiarize yourself with Jeff Hawkins' research. His company, Numenta, has pushed our understanding of how the brain really works further than anybody. What they have found is that all types of perception are active processes which combine prediction with feedback. Perception has at least four dimensions because it is always tied to both space and time.

LOCKSUIT
Re: Making AI easy and clear
« Reply #21 on: July 20, 2020, 07:38:18 am »
Oops, I didn't mean vision isn't a sequence, I meant an image isn't. Obviously you don't store image object parts as raster lines of the image; you need to store segmented objects! E.g. you store a diamond object node/capsule:

not like this:
                      -------
--------------------------
--------------------------
---------

but like this:
        -
       ----
    --------
      -----
        -

Now this is a sequence: you can see one part of the diamond and generate the next, yes... However, to do it correctly you must look at it as 2D objects/capsule nodes once again, back to what I illustrated above. We don't store full image lines in nodes.


I already knew the brain mixes all sensory types to improve its prediction, and that it uses feedback/data over time to specialize what kind of feedback/data it collects. That technically gives it "more data". And when unconfident, it will seek more data from the next best domain.

silent one

  • Nomad
  • ***
  • 51
Re: Making AI easy and clear
« Reply #22 on: July 20, 2020, 08:36:33 am »
An image is a sequence. What if it were a photo of a newspaper? :)

LOCKSUIT
Re: Making AI easy and clear
« Reply #23 on: July 20, 2020, 08:41:17 am »
A newspaper is an image; you can only read it if you scan over it word by word, which therefore makes it a video. If you widen your attention to the whole page and try to read it all in one frame, you can't; all you'll see/read is a "wall o' text" XD.

And k, answer that question! xd

silent one
Re: Making AI easy and clear
« Reply #24 on: July 20, 2020, 11:41:48 am »
Markov chains work on graphics as well as text, so I agree with Locksuit, but Korrelan is onto some other way of thinking that's just as good.

LOCKSUIT
Re: Making AI easy and clear
« Reply #25 on: July 22, 2020, 06:28:08 pm »
Referring to the images in this PDF: https://arxiv.org/pdf/1911.01547.pdf

Text/GPT-2 and Vision both have an "ARC" set of tasks. I call it "infinite tasks of things" or "Internet of Things" because you'll see it is absurdly flexible. See attached a list of tasks for text I made/collected. This is my "ARC" text set.

Whether vision or text, it works the same way: you either show it multiple examples or tell it "translate x to French" (assuming it has seen multiple examples), and it will do just that. It's a temporary priming/control, or "way of thinking", for the net.

If we look at the start of the book I'm creating (see the other attachment), we see Syntactics, Semantics, BackOff, BPE, etc. can learn patterns. However, we (not the algorithm) tell the algorithm to find these (priors); the algorithm should be able to discover them itself! Is there more it can discover? Semantics, BPE, what else is there? Can it find more!? Look at the text tasks I made: translation, word unscramble, find the rare word in a sentence, the mom/dad/son same-last-name pattern, etc. There are more! But they are less common, so they are hard/unnoticeable to find. In fact I think the main AGI net will be based on the most common priors, and will have to emulate the other priors using the more common mechanisms. A prior improves prediction accuracy less if it is less common, unless you can handle 1,000 types of pattern tasks.

GPT-2 learns 144 attention heads, and then they look for "patterns".
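A minimal sketch of what one such head computes. The weights here are random, purely for illustration, not GPT-2's actual parameters; training is what makes a head "look for" a pattern such as attending from "son" back to the surname:

```python
import numpy as np

# One scaled dot-product attention head, the basic mechanism behind
# GPT-2's 144 heads (12 layers x 12 heads per layer).
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4
x = rng.normal(size=(seq_len, d_model))          # token embeddings

Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv                 # queries, keys, values

scores = Q @ K.T / np.sqrt(d_head)               # token-to-token affinities
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
out = weights @ V                                # each token mixes the values it attends to
```
Each row of `weights` says where one token is looking; in a trained head those rows concentrate on the word the pattern needs, e.g. the earlier name.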

As for rotating an L-shaped object in text: vision does just that, and if you do just that in text then it is really vision after all. So if we want to do it in text the way text was designed for, it would be more like this: "We have an L-shaped object with the top of the L sticking up; please rotate it clockwise. What do we have now?" Answer: "The top of the L-shaped object is sticking out to the right."

I'm still checking if these "many tasks" are really just the few mechanisms listed in my attachment BOOK.txt - it could be that there is some pattern among them.
« Last Edit: July 22, 2020, 06:54:26 pm by LOCKSUIT »

LOCKSUIT
Re: Making AI easy and clear
« Reply #26 on: July 23, 2020, 05:48:30 am »
And the "AGI net" seems to be only the top most common patterns. Why? Frequency/syntactics, semantics, BPE... Does it not handle all the other tasks? When you call/ask for the task "translate to French", my architecture uses only the recognized nodes; it only requires that my architecture see! It uses energy to cause the task temporarily, and temporarily highlights French too using activation energy. Further, after the task is said, you feed it a subject to manipulate, e.g. "dog", and it translates it to the French version of "dog", because 'dog' was activated and so was the French version. So it seems that using just energy activation you can control not only the task but also the subject fed in, using just a simple architecture?

There are many patterns in text/vision; many are rarely used, few are common. The brain architecture, I'm guessing, is made only of the most common patterns, so that it naturally finds those without being told to store/find them. How can it do this? It emulates, but how? It must be that the most common patterns make up the bazillion other patterns! A sequence of tasks. Now, even the net mechanisms have patterns: frequency, semantics, BPE, more data, all give you more data! And they blend the nodes as representations. The bazillion other tasks/patterns that 'reoccur' also give you more data and help prediction. Just saying. Anyway, hmm. I couldn't imagine a brain mechanism just for making last names the same, or 2nd names, or first names, etc.; it must be more general than that.

Maybe you can help. How does AGI say the 3rd name is the same as the relative's? Ex. Jim Fan [Lee] has a son named Roy Tan [Lee]. The "son" and the other name clue it in, but how exactly? It could be said like "Jim Fan, um, oh ya, Lee". It knows names are odd, so is it looking for "names" and counting to 3? That would require a 1-2-3 node, a low-frequency threshold circuit, and maybe scanning memory. ??

LOCKSUIT
Re: Making AI easy and clear
« Reply #27 on: July 23, 2020, 07:37:38 pm »
Just realized: vision has one discovery edge over text, but it only increases prediction accuracy; it's not a fundamental difference, do note!

If you have a hierarchy of [visual features], a pig is made of teeth, hooves, tail, eyes, retina, pink, etc. But in a text hierarchy, well, "pig" isn't made of those!!! It's made of pi+g, and pi is made of p+i. And no, you DON'T make a Concept Hierarchy; the brain has no such thing. So how does text know a pig has legs? Oh, it can, e.g. by seeing it in text or by generalizing semantically. Just not as often as vision.
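A toy BPE-style merge shows how "pig" decomposes into learned subwords rather than concepts. The five-word corpus is made up for illustration:

```python
from collections import Counter

# One BPE-style merge step: fuse the most frequent adjacent pair,
# so "pig" becomes learned subwords like "pi"+"g", not legs or teeth.
words = ["pig", "pig", "pin", "pit", "dog"]
tokens = [list(w) for w in words]          # start from single characters

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs across all words."""
    pairs = Counter()
    for toks in tokens:
        for a, b in zip(toks, toks[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge(tokens, pair):
    """Replace every occurrence of `pair` with its fused symbol."""
    merged = []
    for toks in tokens:
        out, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and (toks[i], toks[i + 1]) == pair:
                out.append(toks[i] + toks[i + 1]); i += 2
            else:
                out.append(toks[i]); i += 1
        merged.append(out)
    return merged

pair = most_frequent_pair(tokens)   # ('p', 'i') is the most common pair here
tokens = merge(tokens, pair)
print(tokens[0])                    # → ['pi', 'g']
```
Repeating the merge step builds the whole subword vocabulary; nothing about hooves or legs ever enters the hierarchy, which is exactly the point above.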

This is why you can activate a node and all its parts get activated (legs, pink, teeth, etc.) more than similar features do (sheep, dog, monkey, goat).

ivan.moony
Re: Making AI easy and clear
« Reply #28 on: July 23, 2020, 07:57:05 pm »
Vision is one abstraction. Hearing is another. So is smell, and so on. And all those abstractions relate in the outer world.

Now, text is interesting. Text on its own isn't very special; it is merely an arranged system of letters. But in correlation to our sensing of the outer world... text is an artificial abstraction used to represent vision, hearing, and the rest. We draw similarities between words made of letters and tangible concepts in vision, sound, etc., so we can communicate those concepts for doing serious business, or simply for fun. Words become merely keys for tagging real concepts.

It sounds so banal when I read what I just wrote, but I'll publish it anyway.

LOCKSUIT
Re: Making AI easy and clear
« Reply #29 on: July 23, 2020, 08:06:51 pm »
Text [alone] tells you what flowers smell like, vision [alone] doesn't :P

1up for text now!