Releasing full AGI/evolution research

  • 290 Replies
  • 190195 Views
*

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #120 on: July 19, 2020, 06:22:03 am »
https://www.youtube.com/watch?v=7wqmXo0Jqa4&feature=youtu.be

A universal adversarial trigger works by placing a few words at the start of the prompt to force the model to talk about them. Ex. "cat dog rat The movie was: cat". I knew this already, and I also know how to avoid it: you need attention that always outweighs the trigger. Look at Blender - it drives the model! Blender is better than GPT-2.
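A toy sketch of my own (not the actual attack code, and all association counts are made up) of why a prepended trigger can dominate: if a predictor scores candidates by summing associations from every prompt token with no position weighting, repeating a trigger word swamps the real context.

```python
from collections import Counter

def predict_next(prompt_tokens, associations):
    """Score candidate next words by summing association counts
    from every token in the prompt (no position weighting)."""
    scores = Counter()
    for tok in prompt_tokens:
        for nxt, cnt in associations.get(tok, {}).items():
            scores[nxt] += cnt
    return scores.most_common(1)[0][0] if scores else None

# Hypothetical association counts "learned" from text.
assoc = {
    "movie": {"great": 3, "bad": 2},
    "was":   {"great": 2, "bad": 2},
    "cat":   {"cat": 10},   # the trigger word strongly self-associates
}

print(predict_next(["The", "movie", "was"], assoc))                       # -> great
print(predict_next(["cat", "cat", "cat", "The", "movie", "was"], assoc))  # -> cat
```

Stronger attention to the recent context (weighting nearby tokens more) is exactly what would defeat this.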
Emergent          https://openai.com/blog/

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #121 on: July 19, 2020, 10:22:21 am »
However, this is not universal! You can make an ANN that doesn't use semantics, temporary activity, etc. - one that uses just plain exact matches. It won't even see the words farther back; it will only use exact matches.

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #122 on: July 21, 2020, 02:35:53 pm »
Ranch always said if it works you've got a sellable product - well, he and GPT-3 are right!

And you should code like it's your last day on Earth, to get stuff/immortality done and solved; GPT-3 is right again!

And don't be perfect - be lazy. The brain is an efficient jack-of-all-trades, and so is every task it can solve; life is like that. GPT-3 is right!



*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #123 on: August 04, 2020, 06:41:23 pm »
I think I've found the anti-particles in AGI ::))

My AGI blueprint has 4 elements that each play a role.

NORMALs:
#1 Connection strength. Ex. dog>barks.
#2 Relation strength. Ex. dog=cat.
#3 Energy. Ex. likely said again soon.
#4 Reward. Ex. say what you want to happen.

ANTIs:
#1 A connection may have inhibition weights, saying it has never seen that word follow.
#2 If cats are red, ugly, hairy, and dogs are mice, fun, and cool, then maybe the two will continue to share no contexts.
#3 Maybe you want to temporarily ignore something.
#4 Bad reward - it hurts, don't want to talk about it.

? :)
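A minimal sketch of my own framing of the four elements and their "antis" (all numbers are hypothetical): each component contributes a signed score, so inhibition, dissimilarity, suppression, and bad reward push a candidate down the same way their positives push it up.

```python
def rank(candidates):
    """candidates: {word: (connection, relation, energy, reward)} -
    each component may be positive (normal) or negative (anti)."""
    totals = {w: sum(parts) for w, parts in candidates.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical scores for words following "dog".
after_dog = {
    "barks": ( 0.8,  0.2,  0.1,  0.0),   # strong positive connection
    "meows": (-0.6,  0.4,  0.0,  0.0),   # inhibited: never seen after "dog"
    "bites": ( 0.5,  0.1,  0.0, -0.7),   # bad reward: avoid talking about it
}
print(rank(after_dog))   # "barks" comes out first
```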

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #124 on: August 07, 2020, 04:26:20 am »
If you have an empty brain with just the question "I will cure cancer by", it may just answer "getting rid of it". It needs more data to know why that totally won't work. Perhaps you say to place cells closer together on discs to increase storage, but actually this causes the cells to collapse by overheating. Not only does it need more data to know its boundaries in reality, but also to answer the question. Through this gigantic behemoth of data, it can find a path. But now the question is: just how much dataset do you need before you can answer "I will cure cancer by"? The answer is reality - does the solution work? You can also check how the accuracy isn't improving much anymore as you feed it more data, but that is still tied to reality checking.

Strange how this goes against me myself... I know lots and use that to find my way through the maze of AGI; I become confident/satisfied as I evolve towards the end outcome. But that confidence threshold/criteria I have is based on the data I know. If I was 2 years old and said "cure cancer by telling it to go away", I would think it sounds correct - my criteria are based on what I know, and they don't tell me whether my criteria are correct. Or can they? (Knowing that I know enough is itself based on what I know, though.) Otherwise I need to check reality, or compare against our measured accuracy. Yet I seem to be able to know if I am there or not. Right now, I KNOW I'm missing data. How's that possible?

It's as if I see walls stopping me from saying I see a path. I could say anything right now, and I know it won't solve AGI.

Must think on this.
« Last Edit: August 07, 2020, 05:27:13 am by LOCKSUIT »

*

MikeB

  • Autobot
  • ******
  • 224
Re: Releasing full AGI/evolution research
« Reply #125 on: August 07, 2020, 07:17:10 am »
I think if you add more qualifiers... e.g. what type of cancer? A specialist would have a hard time answering "how to cure all cancer at once permanently". There's lots of information on specific cancers. Or is the fallback answer correctly "I don't know / not possible"?

Love your work.

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #126 on: August 08, 2020, 08:13:39 pm »
Oh here's a better way to explain what I mean by energy, and new discoveries as well.

The more text you have, the better you learn the frequency of features, ex.
a
an
and
and w
and we
at
ate
atm

See the attachment if the following confuses you. This allows you to predict the next letter or word. Given a prompt, you recognize the last letter or letters, ex. "zzzz[rbage smells]_", and even just the single-letter frequency (the probability of single letters, or words with no context clue), "zzzzrbage smells[_]", and you mix these to make a prediction. In "zzzz[rbage smells_]" we recognize 13 patterns, each telling us what comes next: "zzzz[r[b[a[g[e[ [s[m[e[l[l[s[_]]]]]]]]]]]]]".
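The mixing described above can be sketched as a tiny letter-level model of my own devising (the training string and the linear context weighting are illustrative assumptions, not a fixed recipe): collect next-letter counts for every context length from 0 up, then blend the matched suffixes of the prompt, weighting longer contexts more.

```python
from collections import Counter, defaultdict

def train(text, max_n=4):
    counts = defaultdict(Counter)          # context string -> next-letter counts
    for i, ch in enumerate(text):
        for n in range(max_n + 1):         # n = 0 is the no-context case
            if i - n >= 0:
                counts[text[i - n:i]][ch] += 1
    return counts

def predict(counts, prompt, max_n=4):
    mixed = Counter()
    for n in range(max_n + 1):
        ctx = prompt[len(prompt) - n:]     # suffix of length n
        total = sum(counts[ctx].values())
        for ch, c in counts[ctx].items():
            mixed[ch] += (n + 1) * c / total   # longer contexts weigh more
    return mixed.most_common(1)[0][0]

counts = train("and we ate and we ate and we ate ")
print(predict(counts, "and we a"))   # -> t
```

Context length 1 alone can't decide between "and" and "ate" here; the longer matched suffixes break the tie.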

Well, we can add a 14th pattern: energy. A neat video I made (below) shows that the same word or domain sticks together and is NOT evenly distributed in text. If, using the above methodology, we really thought "a", "an", "and", and "andy" each had an average frequency - say "andy" makes up 25% of the dataset, i.e. is very common - then we'd expect a dataset like "andy but yes when andy saw that we andy then no one"... but in reality our probability/prediction there is not that good, because the ground truth isn't evenly distributed like that. The clue/pattern is that we can tell when the words start sticking together by seeing "andy" appear more often. Just look for yourself:


*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #127 on: August 08, 2020, 08:19:37 pm »
(and I'm wondering if they have their own probabilities... hmm)

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #128 on: August 08, 2020, 10:02:29 pm »
Ok, so yes, it matters even for the other patterns. If you see/pay attention to 'new' or 'feed new', "dog" may come next - but if you've seen "dog" a lot recently, you may know you're in the middle of the sticky storm:

dog.......dog..........dog.......dog...dog..dog..dog.....dog.......dog...............dog

And humans don't naturally predict the next bit or letter, but rather the next word.

I'm thinking maybe the word/domain "USA" has its own sticky storm:

USA....................USA........USA....USA.USA.USA.USA.USA.USA.USA..........USA................................USA
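The "sticky storm" idea above resembles a simple cache language model, sketched here with hypothetical numbers of my own: mix a global word frequency with an exponentially decaying recency count, so a word seen often in the recent stretch of text gets a boosted score.

```python
def cache_score(history, word, global_freq, decay=0.9, mix=0.5):
    """Blend corpus-wide frequency with a decaying count of recent mentions."""
    recency = 0.0
    for w in history:                      # older words decay more
        recency = recency * decay + (1.0 if w == word else 0.0)
    return mix * global_freq.get(word, 0.0) + (1 - mix) * recency

freq = {"dog": 0.01, "the": 0.05}          # hypothetical corpus frequencies
calm  = "the cat sat on the mat".split()
storm = "dog ran and the dog saw a dog".split()
print(cache_score(calm,  "dog", freq))     # low: no recent mentions
print(cache_score(storm, "dog", freq))     # boosted inside the storm
```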

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #129 on: August 08, 2020, 11:11:32 pm »
Oh wait, prompts aren't as long as a dataset, but they can sometimes hold sticky storms.
***********************************************
Wait on my "wait" above - no, when benchmarking on the Hutter Prize dataset you actually do run through the whole wall of text like that, lol. And so do humans as we live each day; I say 10,000 words in my brain each day, and I may simply switch from music to Mars and the domain changes. So yes, even though you have a probability of when to see x come next, it's more likely if it's been seen more recently - our thoughts and articles are like that!!

Also, it still holds that an increase in 'dog' makes it more likely, even if the occurrence rate is steady.
***********************************************
n/a

also:
***********************************************
It's clearly useful when you look back 80 words at "Trump" and see, at the end 80 words away, "and the president, i.e. [Trump]"... and so it's more likely if you see it more. I'm not sure yet whether a domain is entered/left gradually; I will look at this later - it may actually be attention heads.
« Last Edit: August 08, 2020, 11:55:22 pm by LOCKSUIT »

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #130 on: August 10, 2020, 04:20:39 am »
Complex multi-motor synchronization

featuring me!

Footage is in real time, no speed-up.

Part 1


Part 2


*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #131 on: September 02, 2020, 08:23:49 am »
It's interesting watching these dog challenges:

All that toilet paper...





*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #132 on: September 03, 2020, 09:09:07 am »





*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #133 on: September 14, 2020, 02:02:43 am »
I TALK TO GPT-3

*

LOCKSUIT
Re: Releasing full AGI/evolution research
« Reply #134 on: September 20, 2020, 09:14:07 pm »
I'll soon be creating a new AGI group so we can get the job done. It will be a better way to work, and it will connect the brightest minds. It will be the most powerful AGI group, focused only on AGI and survival. I wouldn't say it's only for experts, but it is for the right mindset. I find there are far too many beginners who can't explain their AI and don't know how, or why, to connect tightly; they don't even have a real end goal. I want to put a lot of time into creating AGI and building a team to do so.

 

