OpenAI Circuits

  • 7 Replies
  • 349 Views

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Hal 4000
  • 4091
  • First it wiggles, then it is rewarded.
    • Main Project Thread
OpenAI Circuits
« on: April 03, 2020, 02:49:20 AM »
I found a Circuits Slack discussion on OpenAI's Twitter. Finally they are coming my way and having an open discussion about it.

https://distill.pub/2020/circuits/zoom-in/
Emergent


krayvonk

  • Electric Dreamer
  • 125
Re: OpenAI Circuits
« Reply #1 on: April 03, 2020, 03:52:51 AM »
Cool. A neural network can actually be a circuit (an electrical one) if you want. They have more parallel power than a GPU, IMO.


LOCKSUIT

Re: OpenAI Circuits
« Reply #2 on: April 03, 2020, 04:12:52 AM »
One cool technology I think is possible is parallel wireless transfer of data, using billions of antennas to transfer a brain profile from one brain to another body's brain far away.

LOCKSUIT

Re: OpenAI Circuits
« Reply #3 on: April 04, 2020, 03:51:06 AM »
This is interesting:
https://distill.pub/prize/

I just realized the whole website is filled with OpenAI and Google affiliates, and it also has that article I shared about generative AI. I strongly agree with their work/views.

I'm not interested in the money, though. I'm interested in their desire to teach, and in AGI.

LOCKSUIT

Re: OpenAI Circuits
« Reply #4 on: April 15, 2020, 02:27:31 AM »
I saw this mentioned in their Slack, and found the same thing they were talking about on their site.

https://openai.com/blog/microscope/

LOCKSUIT

Re: OpenAI Circuits
« Reply #5 on: April 27, 2020, 09:20:28 AM »
One month later! Back to coding now! Big AGI package release, everyone! There's no AGI like this one! I'd be fascinated if you can add to it.

Change the extension to .zip and unzip it.

LOCKSUIT

Re: OpenAI Circuits
« Reply #6 on: April 27, 2020, 09:02:43 PM »
To point out some interesting things I go through in my AGI book above:
1) I show in my attached file how my "hierarchy" (as shown in the images) builds larger features as it reads text.
2) Then I show how, if it reads "my cat ran" and a day later reads "my dog ran", the second sentence will activate the 'cat' node in the hierarchy through indirect paths.
3) I have tests with GPT-2 (which I go through) and a strong belief that nodes retain energy for a while, which leads to much better prediction accuracy.
4) I propose using reward for prediction.
5) I show how nodes can become less pure, e.g. storing dog and cat together as 'cotg', and how this can work for words too.
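Points 1–3 and 5 could be sketched roughly as follows. This is a hypothetical toy only, not code from the attached package; the node names, spread factor, and decay rate are my own assumptions:

```python
# Toy sketch of a hierarchy whose nodes hold decaying "energy".
# Seeing "my dog ran" partially re-activates the related node 'cat'
# through their shared parent node -- an indirect path.

class Node:
    def __init__(self, name):
        self.name = name
        self.energy = 0.0
        self.parents = []   # larger features built from this node
        self.children = []  # parts this node is built from

    def link(self, parent):
        self.parents.append(parent)
        parent.children.append(self)

def activate(node, amount=1.0, spread=0.5):
    """Energize a node, its parents, and (indirectly) related siblings."""
    node.energy += amount
    for parent in node.parents:
        parent.energy += amount * spread
        for sibling in parent.children:       # related nodes
            if sibling is not node:
                sibling.energy += amount * spread * spread

def decay(nodes, rate=0.9):
    """Energy fades over time, so recent context still biases prediction."""
    for n in nodes:
        n.energy *= rate

cat, dog, ran = Node("cat"), Node("dog"), Node("ran")
animal_ran = Node("<animal> ran")   # a less-pure node mixing cat/dog contexts
for child in (cat, dog, ran):
    child.link(animal_ran)

activate(cat)          # read "my cat ran"
for _ in range(10):    # time passes; energy fades but is not zero
    decay([cat, dog, ran, animal_ran])
activate(dog)          # read "my dog ran" -- spreads energy back to 'cat'
print(cat.energy > 0)  # 'cat' is energized via the indirect path
```

Here the retained energy on 'cat' would nudge predictions toward recently seen, related features, which is the point of item 3.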

LOCKSUIT

Re: OpenAI Circuits
« Reply #7 on: April 28, 2020, 12:34:26 AM »
One more thing:

I didn't say it explicitly in my little book, but: biases in ANNs make certain nodes more easily activated. In my research, a node is only activated if its children are activated, and/or if it was recently activated, and/or if related nodes are activated. Reward also makes some nodes more likely to be said, and it lasts much longer than energization (and can be modified for high-level nodes). So, other than that, there's no reason a node should get boosted automatically.

Others' code and my own do use a global weight per layer, so I guess fixed biases are wrong. The idea is that some layers (or better, nodes) have more statistics and should get more weight when mixing predictions from multiple nodes about which letter (or word) most likely comes next. Perhaps it is a combined, averaged weight from nodes in the same layer, saying 'this layer seems to have enough statistics'. But if it learns online, those are adaptive biases, not fixed biases.

