Releasing full AGI/evolution research


HS

Re: Releasing full AGI/evolution research
« Reply #150 on: January 03, 2021, 07:54:10 am »
Possibly there is no meaningful difference between symbolic language, like "table" written down, and sensory data, like seeing a table. Both the word and the image could be symbols which represent the concept of a table in the mind. We might even be fully symbolic; each sensory input could be treated as a symbol. Maybe that's why language came so naturally to us: our brains were already doing the same thing with other shapes and sounds. If that's the case, it seems like we'd need to build a visual vocabulary, e.g. "hinge, rectangular plate, frame", and then rearrange it, based on the combinations we encounter, to create new concepts such as "door." So we need to figure out the recognition of symbols, and then the mechanism for their internal representations.
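A toy way to picture that last step (purely illustrative; the primitive names and the learn_concept helper are hypothetical, not anything HS specified): treat words and visual features alike as symbols, and mint a new concept whenever a combination of symbols keeps recurring together.

```python
# Hypothetical sketch of a "visual vocabulary": every input, word or visual
# feature alike, is a symbol, and a new concept is just a named combination
# of symbols that keeps occurring together.
vocabulary = {"hinge", "rectangular plate", "frame", "table"}
concepts = {}

def learn_concept(name, parts):
    """Register a recurring combination of known symbols as a new concept."""
    assert parts <= vocabulary, "parts must already be known symbols"
    concepts[name] = frozenset(parts)
    vocabulary.add(name)  # the new concept is itself a symbol from now on

# After repeatedly encountering hinge + plate + frame together:
learn_concept("door", {"hinge", "rectangular plate", "frame"})
print(concepts["door"])  # frozenset({'hinge', 'rectangular plate', 'frame'})
```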


LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #151 on: January 03, 2021, 09:51:54 am »
The hierarchy automatically builds bigger memories for each sense-type cortex. For text it looks like this, for example: https://imgbb.com/p22LNrN

Text is just a way to share the same visions. We can't share vision directly; instead we link a vision to text, say the text aloud, and the listener hears it and links it back to their own images. That's why text ends up behaving so much like vision.

Nonetheless, whatever patterns a dataset has, the better pattern finder will compress it better. We need not worry whether it's vision or music. Working on music, vision, etc. each sheds light on the others, as I proved. OpenAI also realized this.

A brain network only merges data into pattern clusters; we don't store the same word twice, we only strengthen a connection to represent it. The same goes for similar nodes: we link them close to each other, clustered by connection strength. If they share similar contexts, they fire together and wire together. For example, in the net, dog and cat both link to the same parent nodes: eat, sleep, run, play, love, cute, animal, etc. This allows both translation and swapped-use predictions, e.g. "cat barks," "dog meows."
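A rough sketch of how I read this (the corpus and counting scheme are my own illustrative choices, not LOCKSUIT's actual code): store each word once, strengthen its links to the contexts it appears in, and measure similarity by shared parent contexts, which is what would license swap predictions like "cat barks."

```python
from collections import Counter

# Hypothetical corpus of (word, context) sightings.
corpus = [
    ("dog", "eat"), ("dog", "sleep"), ("dog", "run"), ("dog", "play"),
    ("cat", "eat"), ("cat", "sleep"), ("cat", "play"), ("cat", "cute"),
    ("car", "drive"), ("car", "park"),
]

links = {}  # word -> Counter of parent contexts; stored once, only strengthened
for word, context in corpus:
    links.setdefault(word, Counter())[context] += 1

def shared_contexts(a, b):
    """Fire together, wire together: count the parent contexts a and b share."""
    return sum((links[a] & links[b]).values())

print(shared_contexts("dog", "cat"))  # 3 (eat, sleep, play) -> swap candidates
print(shared_contexts("dog", "car"))  # 0 -> no swap predictions
```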

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #152 on: January 19, 2021, 05:49:51 pm »
Wait, is korrelan using polar morphing of images to solve recognizing objects that are upside down or bent? Just a thought and an inquiry.

Korrelan

Re: Releasing full AGI/evolution research
« Reply #153 on: January 19, 2021, 11:08:37 pm »
No, not polar morphing per se, but the human retina does use a polar rod/cone morphology.

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #154 on: January 20, 2021, 12:04:54 am »
Yes, yes, I think I see what you're doing below: you might not only be storing features with bending applied to them, you might also be applying it to inputs so they can better match known memories!

At least it appears that way; I may be wrong. My work, though, does not need to store more than one rubber duck, in its normal upright form, with no polar bending. If input comes in of the duck upside-down, I do not need to bend the duck so that it is upright and hits the upright memory I have stored. The upside-down duck simply matches that upright duck. How? Well, I'll tell you soon. I've been learning lots more about vision.


Korrelan

Re: Releasing full AGI/evolution research
« Reply #155 on: January 20, 2021, 07:38:39 am »
That vid demonstrates that with a polar schema there are no rotation or scale invariance problems; these are man-made hurdles derived from our insistence on using Cartesian schemas.
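korrelan's actual implementation isn't shown here, but a standard way to get that property is log-polar resampling: rotation about the centre becomes a circular shift along the angle axis, and uniform scaling becomes a shift along the log-radius axis. A minimal numpy sketch, with grid sizes and centring as my own assumptions:

```python
import numpy as np

def log_polar(img, n_r=32, n_theta=64):
    """Sample a grayscale image onto a log-polar grid about its centre.
    Rows index log-radius, columns index angle, so rotating the input
    circularly shifts columns and uniformly scaling it shifts rows."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    rs = np.exp(np.linspace(0.0, np.log(max_r), n_r))        # log-spaced radii
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + rs[:, None] * np.sin(thetas)).round().astype(int)
    xs = (cx + rs[:, None] * np.cos(thetas)).round().astype(int)
    return img[ys.clip(0, h - 1), xs.clip(0, w - 1)]
```

Matching then reduces to searching over shifts of this map instead of over rotations and scales of the raw pixels.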

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #156 on: January 20, 2021, 08:27:57 pm »
LOL, so you are doing that! It seems very obvious, especially given the title of the video. Well, objects in real life, like a face, really do have the nose above the mouth and between two eyes, yet your video shows the duck/face/etc. actually bending, squishing, and rotating. The duck in real life is not bending or even moving, so why make the poor robot see something that isn't happening!? Constantly feeding it a new morph and rotation!? Not good...

You mention you use a non-Cartesian schema, where the image really loops around on itself, but as I said, this changing input is unrepresentative of the actual object. The only location-agnostic thing that occurs is that the pixels go up the network regardless of where they sit in the image, so all the dark pixels hit the dark-pixel node, and they also trigger the lighter-pixel nodes a bit. You've seen my Christmas hierarchy, right? Well, all the a's and g's and z's go up through the same node...

You're simply rotating the input so it matches better. I strongly disagree with data augmentation, though; I have found a more interesting approach than distorting the input into all possible combinations...
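For contrast, here is roughly what the data-augmentation approach being rejected looks like (the particular distortion set is my own illustrative choice): instead of making the matcher invariant, you multiply each training example into many distorted copies.

```python
import numpy as np

def augment(img):
    """Yield distorted copies of one training image: rotations, mirror
    flips, and a brightness change. The matcher itself stays naive; the
    training set just grows to cover the distortions."""
    for k in range(4):                      # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(img, k)
        yield rotated
        yield np.fliplr(rotated)            # mirror flip
        yield np.clip(rotated * 1.2, 0, 1)  # brighter variant

img = np.random.default_rng(0).random((8, 8))
print(sum(1 for _ in augment(img)))  # 12 copies from a single example
```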

Korrelan

Re: Releasing full AGI/evolution research
« Reply #157 on: January 21, 2021, 12:09:02 am »
No, I'm not doing that...

Ah, I haven't conversed with you in a while, and I was hoping that you might have changed, but alas... you still don't read what is written... and still only see what you want to see.

😀

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #158 on: January 21, 2021, 12:29:10 am »
I see you and the other AGI pioneers still won't explain your AI either; we're in the same boat :P ... It's so frustrating...

Hmm, I found it:
https://microship.com/log-polar-grid-rotation-scaling-vision/

This is so funny, omg:
https://microship.com/cromemco-z-2d-review-kilobaud/

I've got it, let me explain it now:
See the smiley-face circle? Look at it as rings. Each ring unrolls into a line laid beside the other lines, so a bigger smile circle becomes a ring/line farther to the right, hence the horizontal change of location the article mentions for scale. If you rotate the mouth line, it stays on the same ring/line, just moved along it. ;P

Hmmm, so... I'm surprised it actually works. Well, not bad. So if we're building AGI, you store the image input as a row of rings laid out side by side, though really only the edges of the image (see Sobel filters), then store a hierarchy made of parts: faces, arms, curves, lines, pixels... But it also requires that we align the smiley face in the middle. Thinking...

Oh boy, so not only do you need to align the laid-out rings as said above, but you also have to keep the input smiley face in the center where you saw it; it's not location invariant... If you move the circle over so it is no longer centered on the rings, then when the rings are laid out, the circle will still look like a circle.

How do you solve both problems?

And is it stretch invariant? Hmm, no, by the looks of a small test I did in Paint.

And flip invariant? b = d. If we show it a cat, one ring would have the tail and the eyes, the middle ring the body parts, the innermost the body center... Nope, it fails this too.

Also, seeing a bigger smiley face can't give the exact same input; it "has" to be a bit different, since humans notice it is bigger and not exactly the same. Hence, making every ring the same size is bad, because then every size of circle looks identical; I guess you keep the sizes a bit different...

It might look like it solves rotation and scale, but really it is just turning the image into rings and then unrolling them into a row of lines, and the pixel locations are only what it has seen... It is not flip invariant... Maybe I'll draw a pic. (Yes, I drew the cat using one line, freehand with my finger on my slate, lol.)
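A quick toy check of which distortions the unrolled-rings map actually turns into easy shifts (array sizes and the random test data are just for illustration):

```python
import numpy as np

# lp stands in for a log-polar map: rows = log-radius, columns = angle.
lp = np.random.default_rng(0).random((32, 64))

rotated = np.roll(lp, 5, axis=1)   # rotation: circular shift along the angle axis
scaled  = np.roll(lp, 3, axis=0)   # uniform scale: shift along the log-radius axis
flipped = lp[:, ::-1]              # mirror flip: the angle axis reverses

# Shift-based matching recovers rotation and scale...
assert np.allclose(np.roll(rotated, -5, axis=1), lp)
assert np.allclose(np.roll(scaled, -3, axis=0), lp)

# ...but a flip is a reversal, not a shift, so no shift ever matches,
# and moving the object off-centre breaks the map before it is even built.
assert not any(np.allclose(np.roll(flipped, s, axis=1), lp) for s in range(64))
```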
« Last Edit: January 21, 2021, 02:33:22 am by LOCKSUIT »

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #159 on: January 27, 2021, 02:32:08 am »
After one month I have implemented from scratch a novel algorithm that can recognize an image or piece of music it has seen only once, with human-level accuracy, no matter whether it is rotated, scaled, stretched, brightened, blurred, noisy, or flipped, and it tells you exactly how similar the two images are. However, it is very expensive and for now only works on 7 pixels. No training, no data augmentation. I know exactly how it works, too. I think I have almost certainly found the best way to do recognition under distortions. I'm still thinking about it, but it looks interesting. I learned a lot about vision, too.

This is the distortions problem; obviously, if you show it pictures of cats and then ask what the next image is, it won't catch on.

I also wrote a research paper, but it is on hold at the moment...

LOCKSUIT

Re: Releasing full AGI/evolution research
« Reply #160 on: January 29, 2021, 08:06:25 pm »
Ok, the code and paper are ready. Does anyone here have an arXiv account, so I can get the approval to post there? I'm also looking for someone specialized in CNNs and robust recognition, in case anything I wrote is a bit off.

My code can robustly recognize an image or piece of music after seeing only one example, even if it is flipped, brightened, blurry, noisy, occluded, etc.

HS

Re: Releasing full AGI/evolution research
« Reply #161 on: January 30, 2021, 01:25:00 am »
I don't know; maybe you could post it in the Trusty Members-only section. Then, if someone wants to volunteer some help, they can do so.


infurl

Re: Releasing full AGI/evolution research
« Reply #162 on: January 30, 2021, 03:25:18 am »
Why do you need to get "approval" to post on arXiv? I'm not familiar with their policies, so perhaps you could explain them to us. Also, I feel obliged to point out that it would be very foolish of anyone to enable you to post something without seeing it first.


WriterOfMinds

Re: Releasing full AGI/evolution research
« Reply #163 on: January 30, 2021, 03:52:02 am »
I was just looking into that ... to submit a paper to arXiv as a new author, you need an endorsement. https://arxiv.org/help/endorsement

It is likely that they only accept endorsements from people who are already arXiv authors and/or members of known research institutions. I am neither of those, and I'm not sure if anyone here is.


infurl

Re: Releasing full AGI/evolution research
« Reply #164 on: January 30, 2021, 04:19:57 am »
It is good to know that they have such a robust procedure for vetting submissions. There is enough garbage posted on the internet already. However, as the page points out, there is nothing stopping anyone from posting a paper on their own website.

 

