Releasing full AGI/evolution research

  • 290 Replies
  • 195105 Views

MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #225 on: April 08, 2021, 06:30:05 pm »
It can work, but not in its initial raw state; getting GPT to work takes a lot of tricks! Some of GPT is procedural generation, I would say: methods that work on the string information to generate the output, and that's the hard bit!

The pattern matcher isn't the gem of the program in the first place; it's only the basis of it.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #226 on: April 08, 2021, 10:11:24 pm »
Interesting comments! I've thought about all the aspects of AGI countless times and am sure I've got the big pattern figured out: all the mechanisms are pattern-based. It's amazing, and it explains to me all the things we handle, like recognizing upside-down objects; I've grown to accept the nuance of how things work. All there is in physics is patterns, and that's why we love things/tools that solve the whole puzzle. We generate [new] discoveries by blindly following known patterns to make new solutions, far from just regurgitating Q-A pairs! All deeper patterns /are/ actually based on exact matches, yes; look up word2vec. At the deepest level there are trillions of rare patterns, unlike the Markov Chain way; you get these for free by the branching effect, like physics makes all sorts of things using a few if-then rules. Even the act of looking backwards on a sentence is just a prediction, like the Markov chain or word2vec algorithms.
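
To make the Markov-chain reference concrete, here is a minimal next-letter predictor (an illustrative sketch only, not my project's code; all names are invented):

Code:
from collections import defaultdict, Counter

def train(text, n):
    # Map each n-character context to counts of the letter that follows it.
    counts = defaultdict(Counter)
    for i in range(len(text) - n):
        counts[text[i:i + n]][text[i + n]] += 1
    return counts

def predict(counts, context, n):
    # Normalized next-letter probabilities for the last n characters seen.
    seen = counts.get(context[-n:], Counter())
    total = sum(seen.values())
    return {ch: c / total for ch, c in seen.items()} if total else {}

model = train("the cat sat on the mat. the cat ran after the rat.", 3)
print(predict(model, "the ", 3))   # next-letter guesses after "he "

Growing the context length n is what produces the branching explosion of rare patterns described above.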

I'm not sure why infurl thinks I'm on the wrong path, because I have a lot more big things to add and will certainly get down to e.g. ~16MB per the Hutter Prize. I'm currently speeding up my Python code so I can check how far along I am by tweaking the parameters.

All the other NN variants of variants, like auto-encoders, GANs, etc., all achieve the same thing: prediction benchmark scores on corpora of e.g. text. Mine is very similar to them all; mine simply doesn't use gradient descent, which you may know by the name backprop (backpropagation is how the gradients for GD are computed).

My algorithm is AI: it is a predictor of the next letter, which is all we do; we predict the next part of the sensory stream, and this controls motors. AIs are the best data compressors, as shown on the Hutter Prize (Matt says this too). They predict, with a probability for each possible next letter, what they think it will be based on past context; this helps store a smaller number that encodes the corrective answer to what the next letter really is. You elongate this number and store it, and it steers the predictions back to the file so as to store it. If the data has patterns in it, it can be predicted.
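
The compression link can be sketched too: a predictor that gives probability p to the letter that actually occurs costs about -log2(p) bits for that letter under an arithmetic coder, so summing over the file estimates the compressed size (again an illustrative sketch, not my actual compressor):

Code:
import math
from collections import defaultdict, Counter

def train(text, n):
    counts = defaultdict(Counter)
    for i in range(len(text) - n):
        counts[text[i:i + n]][text[i + n]] += 1
    return counts

def estimated_bits(text, counts, n, alphabet=256):
    # An arithmetic coder spends about -log2 p(actual letter) bits per
    # letter; add-one smoothing keeps unseen letters at a finite cost.
    bits = 0.0
    for i in range(n, len(text)):
        seen = counts.get(text[i - n:i], Counter())
        total = sum(seen.values())
        bits -= math.log2((seen[text[i]] + 1) / (total + alphabet))
    return bits

data = "the cat sat on the mat. the cat ran after the rat."
print(estimated_bits(data, train(data, 3), 3) / 8, "bytes (approx.)")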

"Korr, you are an expert for NN"
Korrelan says, I think on his website or YouTube, that he is only an amateur, not a professional. If not, you should clarify what area you mean.

The latest code can be found below. There is also a timeit, 2 lines of code, along the lines of the sketch below. You can modify the branch length at line 111 to 7, which is a great trade-off.
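The timing harness amounts to something like this (a sketch; the posted file's actual names may differ):

Code:
import time
start = time.time()          # line 1: record the start time
# ... run the compressor on the input file ...
print(time.time() - start)   # line 2: print elapsed seconds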


ivan.moony

  • Trusty Member
  • Bishop
  • 1729
    • mind-child
Re: Releasing full AGI/evolution research
« Reply #227 on: April 08, 2021, 10:23:01 pm »
"Korr, you are an expert for NN"
Korrelan says, I think on his website or YouTube, that he is only an amateur, not a professional. If not, you should clarify what area you mean.

Lock, let's put aside our community interfacing skills for now. I'm looking at you through a rigid problem-solving capability lens, hoping to find something interesting in your work, and hoping your youth will not stand between you and your success.
« Last Edit: April 09, 2021, 09:10:12 pm by ivan.moony »


MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #228 on: April 08, 2021, 10:54:07 pm »
This man is driven!! It's good how optimistic you are. I guess you could say intelligence could be put in the form of predicting the next letter, but you have to get it right 100%; that's the tricky thing, and there are more issues. But I hope you get there.

Korrelan is not an amateur; his work is excellent. You shouldn't be so dismissive of other workers; everyone has something worthwhile to add to the situation.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #229 on: April 08, 2021, 10:58:01 pm »
Well, it left that impression on me even though I don't really believe it. Maybe he means in the neurology department, since he mentioned "neuroscientist", though that makes NO sense: his work is bio-driven all the way, LOL.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #230 on: April 09, 2021, 09:09:39 pm »
For 25 USD I hired a freelancer to translate 17 lines of my Python (the tree storage) to C++. He delivered the job exactly; if only AI could perfectly translate code, lol. So it was, as expected, only 3-4 times faster. Though I am using PyPy; you can see all 3 results below to compare. Pure Python is indeed 10 times slower. All I'd need to get closer is a very fast computer, or to just go with the available parallelism etc. that I can add on... So it's not really worth it (maybe): I can work rapidly and in peace in Python, and when I want, hire a C++ translator. Even in C++ it's not like you can run 100MB in just 10 hours; Green takes 32 hours. So yes, as I say, working on small data shows the same results as larger data, but for the rare times it behaves differently on larger data, either bear with it or find the solution to scale across larger data.

Timings (seconds):

Size     Python   PyPy   C++
40MB     ?        ?      100
20MB     ?        ?      32
10MB     112      45.8   16
5MB      53       21.9   8
2.5MB    28       8.8    5
1.25MB   21       4.3    3

For those who want to see it, here it is below:
add the extension .cpp


MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #231 on: April 09, 2021, 09:33:06 pm »
If you did GPU code in Python, Python wouldn't slow it down, because the work is transferred to the video card and Python isn't actually being called anymore.

But if you want to go really fast (like 1000 times faster), put it on an FPGA; then you'd be flying.
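
Roughly, with a library like CuPy (assuming a CUDA GPU and the cupy package installed; illustrative only), the hot loop runs as GPU kernels and Python only dispatches them:

Code:
import cupy as cp  # assumption: CUDA GPU + cupy installed

x = cp.random.rand(10_000_000)   # data lives on the GPU
y = cp.sqrt(x) * 2.0 + 1.0       # each op launches a GPU kernel;
                                 # Python just queues the calls
print(float(y.sum()))            # copying the result back syncs the GPU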


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #232 on: April 09, 2021, 09:56:47 pm »
hmm...

Yeah, so you can get a faster AI than if you were to program it in assembly or machine code: crafting dedicated hardware will beat a general-purpose computer processor.

I don't think this looks easy... show me some code to program them?


MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #233 on: April 10, 2021, 12:01:29 am »
It's not much harder than normal programming; you just have to work with individual bits at the gate level, instead of being able to use them all at once at the arithmetic level (normal programming).

Everything else is pretty much the same.

You can do it with truth tables or LUTs (lookup tables), because that's actually how they work inside anyway.

Then it's just I -> O -> O -> O -> O until you get to the other side, and it all counts as a single operation because it's hardware! That's why it's faster: on a computer you have to run each step one at a time; on an FPGA it shoots through in one hit, out the other side, every clock cycle. The clocks are about 10 times slower, but other than that they go pretty well. Way better. A rough software analogy is sketched below.
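
Here is that analogy (illustrative Python only; real FPGA work is done in a hardware description language):

Code:
# Each pipeline stage is a lookup table (LUT). On real hardware all
# stages run at once, so a new input enters on every clock cycle.
stage1 = {0b00: 0b01, 0b01: 0b10, 0b10: 0b11, 0b11: 0b00}  # e.g. +1 mod 4
stage2 = {0b00: 0b00, 0b01: 0b11, 0b10: 0b10, 0b11: 0b01}  # arbitrary LUT

def pipeline(x):
    return stage2[stage1[x]]  # I -> O -> O: out the other side

print([pipeline(v) for v in range(4)])  # [3, 2, 1, 0]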

There are tonnes of people who know how to do it; you could source a guy to do the FPGA stuff for you, the same as for any other language.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #234 on: April 10, 2021, 12:16:44 am »
https://bdtechtalks.com/2020/11/09/fpga-vs-gpu-deep-learning/

"%u201CTo program an FPGA, you need to assemble a team of hardware engineers who know how to develop FPGAs, hire a good architect who understands neural networks, spend a few years developing a hardware model, and compile it for an FPGA while facing the problem of reaching high usage or high frequency,%u201D Larzul says. %u201CMeanwhile, you need to have a extensive math skills to accurately compute the models with less precision and a team of software people to map the AI framework models to the hardware architecture.

Mipsology, Larzul%u2019s company, aims to bridge that gap with Zebra, a software platform that allows developers to easily port their deep learning code to FPGA hardware.

%u201CWe offer a software abstraction layer that conceals the complexity that would normally require high level FPGA expertise,%u201D Larzul says. %u201CSimply load Zebra, type a single Linux command and Zebra goes to work %u2013 it requires zero compiling, zero changes to your neural network, and zero new tools to learn.  And you can keep your GPU for training.%u201D"


MagnusWootton

  • Replicant
  • 650
Re: Releasing full AGI/evolution research
« Reply #235 on: April 10, 2021, 12:34:03 am »
If you put your text database onto the FPGA, you would get your output instantly.
So you would have a heap of excess capacity; maybe it's a good idea to take many samples and pick the best of all the outputs by some metric. That would be taking advantage of the situation: you could get 100 million outputs a second, so you have to exploit that fact somehow to get something out of it. A sketch of the idea follows below.
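
Something like this (an illustrative sketch; generate() and score() are placeholders standing in for a real sampler and a real quality metric):

Code:
import random

def generate():
    # Placeholder: one candidate output from the model.
    return "".join(random.choice("abc ") for _ in range(20))

def score(text):
    # Placeholder metric: here, just favour outputs with more spaces.
    return text.count(" ")

# With millions of outputs per second available, draw many samples
# and keep whichever one the metric likes best.
best = max((generate() for _ in range(10_000)), key=score)
print(best)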


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #236 on: April 11, 2021, 11:11:36 pm »
You know, my AI now has three things: short-term memory, long-term memory, and forgetting. Pretty cool stuff.

This is actually still the only forum I've posted to a lot; I've barely posted to others, besides the AGI list. Sooner or later, though, I intend to discuss deeper matters in my little tight-knit group instead of blatantly in public. But I'll still do updates, like openai.com does.
« Last Edit: April 12, 2021, 12:03:23 am by LOCKSUIT »


MikeB

  • Autobot
  • 224
Re: Releasing full AGI/evolution research
« Reply #237 on: April 12, 2021, 10:43:24 am »
I hired a freelancer to translate 17 lines of my Python (the tree storage) to C++.

Your C++ coder used strings... Strings take a long time to process because bounds checking is added to the machine code everywhere a string is processed... Same with managed arrays... You have to use fixed arrays and manually check each char...

I can try to reprogram it in C++ if you like... I'm taking a break from my own project...

Is the latest version at https://encode.su/threads/3595-Star-Engine-AI-data-compressor ?


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #238 on: April 12, 2021, 03:23:41 pm »
That'd be awesome if you can fix his code to run faster. At the moment a 3-4 times increase doesn't seem worth changing over to C++ for until I get to large milestones, so I'd hire someone only a handful of times. But as you said, it may be faster without vectors/strings. Of course there are other algorithms to do a tree, but the goal was to see how my algorithm in Python compares to how fast it is in C++.

My current code is not the one on encode.su now; it is much smaller, faster, and better at compressing larger data. I'll post it below. I can now compress 1,000,000 bytes to 241,559 bytes (I'm still adjusting the parameters today). Since mine is always 1.5MB ahead of Green's, I should be able to reach at least 20.3MB if my parameters are correct for larger data. Mine takes, with the PyPy interpreter, ~13 seconds for 100,000 bytes, ~5 minutes for 1MB, and ~60 minutes for 4MB. Green is in C++ and takes 32 hours for 100MB and ~21 minutes for 10MB.

I have 4 attachments to add below, hold on. Sorry, the names have to be different because this forum doesn't allow a file with the same name to ever be uploaded again; infurl, please fix that, lol. OK, the names are fine this time, though you'll need to download enwik8 and rename it to enwik8.txt in the program at the top, if I can't add the 4th big file.


infurl

  • Administrator
  • Eve
  • 1372
  • Humans will disappoint you.
    • Home Page
Re: Releasing full AGI/evolution research
« Reply #239 on: April 12, 2021, 10:20:39 pm »
I have 4 attachments to add below, hold on. Sorry, the names have to be different because this forum doesn't allow a file with the same name to ever be uploaded again; infurl, please fix that, lol. OK, the names are fine this time, though you'll need to download enwik8 and rename it to enwik8.txt in the program at the top, if I can't add the 4th big file.

The names have to be different for the same reason that the names of the files in the directories on your storage devices have to be different: so that you and the computer can tell them apart. If you wish to post new versions of your files then you should be using version numbers in the names to differentiate them. At the very least, include a date and time stamp in the name of the file, e.g. yet-more-stuff-20210412.junk, so that people looking at it don't have to wonder whether they already have it. Use ISO format dates (YYYYMMDD) so that the file names sort in the correct order. What you really should be doing, though, is using GitHub for this. The forum is for discussion, not for storing files.
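
For instance (an illustrative Python sketch of the naming scheme; the file name is made up):

Code:
from datetime import datetime

# ISO-style date stamp (YYYYMMDD) so file names sort chronologically.
stamp = datetime.now().strftime("%Y%m%d")
print(f"star-engine-{stamp}.py")   # e.g. star-engine-20210412.py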

 

