I want to learn this visual GPT-2

  • 14 Replies
  • 385 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Sentinel
  • 3548
  • First it wiggles, then it is rewarded.
I want to learn this visual GPT-2
« on: November 20, 2019, 08:02:49 AM »
"Calling korrelan"  :)

Is this below not a visual GPT-2? I've shown it before. It seems to be using the Transformer to do the amazing prediction of pixels instead of words:
https://openai.com/blog/sparse-transformer/

Do you know how this works? Can you teach it to us visually, using a drawing? If we look at the video below we can begin to 'latch on', but sure enough there are many changes in OpenAI's implementation.

https://www.youtube.com/watch?v=PKN_Cc-GyCY

If you want, we can first get a better understanding of GPT-2 for text... Then we can finally 'build' on their ideas and understand data science better (physics, evolution).

That's a good idea because, like vision, words have frequency and similarity to other words. Vision is just words with an extra dimension: the alphabet of pixels still makes up larger phrase-like parts - it just has fewer elemental letters, and they are much smaller. Imagine if the dictionary of words were pixels: you could build so many phrases, but some sequences would be related.
Emergent


Korrelan

  • Trusty Member
  • Eve
  • 1324
  • Look into my eyes! WOAH!
    • YouTube
Re: I want to learn this visual GPT-2
« Reply #1 on: November 20, 2019, 10:34:05 AM »
I had a quick look at the GPT schema ages ago, just to see what all the fuss was about.  It was nothing new: they were using words like 'attention' to describe a weighted probability matrix, and basically using new/ modern raw computing power to magnify results gained by previous researchers many years ago.  Their GPT schema was extremely inefficient and clunky and provided nothing new to the field; their basic mantra is... let's throw more computing power at it.

I'll have a read later, but just as a guess, I bet they are no longer using a one-to-all matrix on their layers and have decided/ discovered to store only the relevant weighted connections, using either a threshold or a pattern, producing a sparser/ smaller matrix footprint.
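That guess - keep only the strong connections, drop the rest - can be sketched in a few lines of numpy. Everything here (function name, threshold value, matrix size) is invented for illustration; the actual Sparse Transformer uses fixed sparse attention patterns rather than a post-hoc threshold like this:

```python
import numpy as np

def sparsify_attention(scores, threshold=0.05):
    """Softmax the raw scores, then zero out weak connections below a threshold."""
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # rows now sum to 1
    sparse = np.where(weights >= threshold, weights, 0.0)
    return sparse / sparse.sum(axis=-1, keepdims=True)  # renormalise survivors

scores = np.random.randn(8, 8)       # toy attention scores for 8 tokens
sparse = sparsify_attention(scores)
print((sparse == 0).mean())          # fraction of connections dropped
```

With 8 tokens, the largest softmax weight in a row is always at least 1/8, so each row keeps at least one connection and the renormalisation is safe.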

 :)
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #2 on: November 20, 2019, 06:48:19 PM »
OK, but how can I make my own GPT-2 from scratch? We need a really good visual here (and not Jay Alammar's http://jalammar.github.io/illustrated-gpt2/ ).

I can pay up to 2,000 along the way if understanding is successfully achieved quickly.
Emergent

Korrelan
Re: I want to learn this visual GPT-2
« Reply #3 on: November 20, 2019, 10:15:13 PM »
You do realise that even if you understood how GPT works, it takes massive amounts of computing power just to train a model. It costs hundreds of thousands of dollars for a small model, and possibly millions, just in compute time, for a large model.

A full-blown, or even just a useful, GPT cannot be run on home/ conventional PCs; you would need to rent months of server-farm time to train your models.

 :)




LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #4 on: November 20, 2019, 10:22:18 PM »
Yes I knew that, but I want to understand God. Then I want to make it run on my cheesy potato.

It's more about the understanding that will count.

Korrelan
Re: I want to learn this visual GPT-2
« Reply #5 on: November 21, 2019, 12:13:07 AM »
I scanned through Jay Alammar’s illustrated GPT, it seemed pretty simple to me, a logical step by step explanation… with lots of diagrams.

There is an old adage: 'things should be made as simple as possible, but no simpler'. There is a lower threshold in the complexity of an explanation below which the knowledge being expressed is lost… due to the simplicity.

You have to understand that some concepts cannot be easily explained, or simplified to just one diagram.  If someone is having a problem understanding a concept, then they are lacking the underlying knowledge used to construct said concept; knowledge is built hierarchically, concepts on concepts.  You can't explain quantum string theory, with one diagram, to a human who has no 'scientific' experience/ understanding, no matter how good/ simple the diagram is.

This has happened to me often, and I just go back a few steps and learn about the underlying principles/ processes… then re-read the original data with new eyes/ insights/ understanding.

Quote
but I want to understand God.

Haha… GPT-2/ BERT will never provide a true AGI, it’s just a complex probability matrix… this, this and this happened… so the next most likely thing to happen is this.  It just exploits the patterns embedded in the training data… it does not ‘understand’ the data.  The sequences of strings/ images it produces are not produced by an intelligence, they reflect/ mimic the original 'intelligence' embedded in the syntax/ sequences of the training data.
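The 'this, this and this happened… so the next most likely thing to happen is this' idea can be caricatured as a count table over a corpus. This is only a toy illustration of the probability-matrix point - GPT-2 itself is a neural network over sub-word tokens, not a literal count table, and every name below is invented:

```python
from collections import Counter, defaultdict

def build_model(text):
    """Count, for each word, which words followed it in the corpus."""
    model = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def most_likely_next(model, word):
    """Pick the most frequent follower -- pure pattern exploitation, no understanding."""
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ate the fish"
model = build_model(corpus)
print(most_likely_next(model, "the"))   # "cat" (follows 'the' twice, vs once each for the rest)
```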

Quote
Then I want to make it run on my cheesy potato.

Haha… nope.

I’ll have a think if I can simplify an explanation… I'm very short on time though so don’t hold your breath lol.

 :)

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #6 on: November 21, 2019, 12:24:28 AM »
GPT-2 learns dog=cat.

If it sees dog, it can predict dog later or milk later:
"My dog was going to the pond to chase the dog and did find some milk to drink."

The brain saves data from the outside world and learns, on the inside, cat=dog. Later, when it thinks, it can generate great plans by squeezing out juice from what it knows. Intelligence is reflection; data makes new data.
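The 'dog=cat' relation is what word embeddings capture: words become vectors, and related words point in similar directions. The three-dimensional vectors below are made-up numbers purely for illustration - real embeddings such as GloVe are learned from co-occurrence statistics and have hundreds of dimensions:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: near 1.0 for near-synonyms, lower for unrelated words."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made toy "embeddings" (hypothetical values):
vectors = {
    "dog":  np.array([0.9, 0.8, 0.1]),
    "cat":  np.array([0.8, 0.9, 0.1]),
    "milk": np.array([0.1, 0.3, 0.9]),
}

print(cosine(vectors["dog"], vectors["cat"]))   # high: dog and cat are close
print(cosine(vectors["dog"], vectors["milk"]))  # lower: merely related by context
```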

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #7 on: November 21, 2019, 02:16:51 AM »
My 2 images I 'formed' months ago may help you see it all at once:
« Last Edit: November 21, 2019, 03:21:47 AM by LOCKSUIT »


Hopefully Something

  • Trusty Member
  • Replicant
  • 701
  • no seriously where are these cookies
Re: I want to learn this visual GPT-2
« Reply #8 on: November 21, 2019, 05:31:12 AM »
I see I'm lacking ALL of the prerequisite knowledge... Where do I start learning about this?

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #9 on: November 21, 2019, 06:49:03 AM »
I did find the last scholar (a young girl) OpenAI trained (on GPT-2, lol) over just 4 months of video calls/ meetings; apparently they documented her whole course - shown below. Lots of reading, hmm... more DIY, eh? Not really my thing, but I'll look yet:

https://openai.com/blog/openai-scholars-spring-2020/

https://docs.google.com/document/d/12ZeukZM9T-rLKn2NJWK5b958lWuz9NmHVciA_ZcK8D8/edit

https://fatmatarlaci.wordpress.com/

Korrelan
Re: I want to learn this visual GPT-2
« Reply #10 on: November 21, 2019, 09:08:25 AM »
I can understand your excitement over GPT tech but I seriously do not share it.

Quote
GPT-2 learns dog=cat.

The current incarnations of GPT are language/ image corpus models; they cannot just soak up new information.  They can only apply data that was included in the training data. If you wanted to explain ‘dog=cat’ you would have to include it in the training data and then re-build the whole probability matrix from scratch.

Try telling the GPT large model something personal and then ask it a question about what you have told it… it won’t work.

GPT is not intelligent, as I have explained; there is no inherent intelligence in language.  The information you get from reading something is gleaned/ created from your own intelligence.  The order/ syntax of words is a commonly understood protocol evolved to elicit a similar mental state in the reader as was in the writer.

The documents created by GPT are based on a cumulative assessment of the structure/ syntax of millions of sentences/ paragraphs written by humans.  It’s mimicking our language syntax based on probability distribution, and although its writings/ output may evoke new thoughts in your brain… it has none of its own.

 :)

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #11 on: November 21, 2019, 06:57:59 PM »
Are you saying humans generate sentences in a much different way? How?

I know GPT-2 can't do Online Learning.


You do, however, know your own project well; I do want to help you with the thinking/ cash, but I need your help...

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #12 on: November 21, 2019, 09:18:03 PM »
Looks like he updated it, again:
http://jalammar.github.io/illustrated-gpt2/
Checking it out, again...

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #13 on: November 21, 2019, 11:30:36 PM »
He drew so many images he confused it :)

The step-by-step is lacking......

Come on korr, teach us your great network step by step.......your peers are lacking......

LOCKSUIT
Re: I want to learn this visual GPT-2
« Reply #14 on: November 22, 2019, 08:12:44 AM »
Is GPT-2 just doing sequence2sequence and location vectors? It seems that's all I'm missing to make mine work well!

I used GloVe, which took someone a year to train! GPT-2 is the same compute hog, but with seq2seq vector learning!


I had just realized, after studying capsule nets, that the location of words or features in an image or sentence is the same pattern across different sentences.... allowing recognition/ generation of structure and better learning of which sequence meanings are related!!!

if the cat jumps it may land
if the girl did then it may be
if we do then i will too
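On the 'location vectors': the original Transformer paper ('Attention Is All You Need') adds a fixed sinusoidal position vector to each word vector so the model can tell positions apart. GPT-2 actually learns its position embeddings instead, so treat this as the classic textbook version, not GPT-2's exact mechanism:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Fixed sinusoidal 'location vectors', one row per position in the sequence."""
    pos = np.arange(seq_len)[:, None]      # positions 0..seq_len-1, as a column
    i = np.arange(d_model)[None, :]        # embedding dimensions, as a row
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return pe

pe = positional_encoding(seq_len=10, d_model=16)
print(pe.shape)   # (10, 16): each row is added to that token's word vector
```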
« Last Edit: November 22, 2019, 08:44:20 AM by LOCKSUIT »

 

