Korrelan, I'm really sorry you won't be able to fully process the green/red colors being color blind, but it doesn't matter much lol.
I spammed OpenAI with it too hehe, more to come
I've been watching /r/agi for a while. It's not uncommon (seems like every week or two lately) for someone to show up saying, "I'm starting a new AGI project, join me!" or "AGI isn't that complicated, I figured it all out this morning in the shower, bask in the glory of my theories!" None of these people have demonstrable results, just half-baked ideas -- and after annoying the subreddit residents for a bit, they soon disappear again. So I think everyone over there is a little jaded. Can you really blame them?
@LOCKSUIT: The latest seems like one of your more coherent and easily understandable posts. More like this, please?😃 Well... They kept deleting my posts on Reddit.... So I had to make it really really really simple to grasp so that 0 energy was needed to realize it... But sure.
No more biggies yet, but I will share more of my current work soon...
I mostly have more to implement than need to discover...
The remaining puzzle pieces of "my AGI" won't exactly come from coding my current plan, though; it's more a matter of going back to the drawing board.
It would be best if I had a real-time chat team that could "construct AGI now" and come to some sort of agreement on an architecture. Person A would say, "No, because this makes more sense," Person B would say, "Oh, then you need this," etc. Then we could finally make AGI instead of continuing our one-man missions.
If we don't hurry, we're all gonna die.... We have the chance to become nearly immortal young-again kings, let's do it...
The new AGI vid did take 36 minutes, and it's mostly summarized key points, so that much is possible. But still: five years' worth of research and a partially complete AGI blueprint, in 36 minutes plus a text to go with it, and you want it in 10 minutes :) !
"There is a technique of gradual summarizing I like very much. First explain everything generally in 20 seconds (1). Then explain it again, but in 5 minutes (2). Then explain it thoroughly in whatever time you need,"
Intelligence requires that there be multiple possible futures; otherwise we would simply be mechanically unfolding a pre-determined destiny.
Your compression thingy will basically produce something that spews language (gibberish, actually, because there is no world model or understanding behind it). Much more importantly, there is no path to general problem solving, or even to generalized language-gibberish spewing, just a specific language.
This massively clarifies it all to me now, if I'm correct.
I'll update this post with my paper/code for vision recognition if I can fix my idea, it is currently very costly.
Computational complexity is usually the problem. Just about everything would be easily solved with an infinite amount of computer power.
Korrelan, all that is known/written in my Guide.
So young and still not shaky...
One way to multiply word search by ~10-26 times is by having a tight process between looking for the first letter and going to the next data item. Assume most first letters in word lookup are not a match...
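A minimal sketch of that idea in Python (hypothetical code, not the poster's actual implementation): bucket the words by first letter up front, so a lookup never even touches entries whose first letter cannot match.

```python
# Hypothetical sketch: index words by first letter so a lookup only
# scans candidates that already match on the first character.
from collections import defaultdict

def build_index(words):
    index = defaultdict(list)
    for w in words:
        if w:
            index[w[0]].append(w)
    return index

def lookup(index, target):
    # Only words sharing the target's first letter are compared in full;
    # all other buckets are skipped wholesale.
    return target in index.get(target[0], [])

words = ["apple", "banana", "cherry", "avocado"]
idx = build_index(words)
```

With roughly uniform first letters this cuts the candidate set by a factor near the alphabet size, which matches the "~10-26 times" estimate above.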
MikeB, reading your posts it sounds like you are using linear search which is never going to be fast beyond trivial cases. At the very least you should be sorting your data and using a binary search on it. Linear searching works on average in time of the order n/2 where n is the number of items. Binary search operates in time of the order log n to base 2. For a million records that's 500000 tests versus 20 tests. You should be able to do what you are doing in microseconds, not seconds.
Somehow, I did not ship the code right; one line was wrong. Line 46 should be "k = j[0][g]". I swear I tested it before uploading.
[The lossless compression evaluator] corrects predictions to the desired letter so the dataset can be extracted losslessly; you run the same code to decompress it letter by letter. It predicts, e.g., "p" lots, but the next letter is "o", so you store a correction, and the result is costlier the more often its predictions are wrong.
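A toy sketch of that predict-then-correct scheme (hypothetical code with a trivially simple stand-in predictor, not the evaluator being discussed): compression stores only the positions where the predictor guessed wrong; decompression runs the same predictor and replays those corrections, reconstructing the text exactly.

```python
# Toy lossless scheme: a deterministic predictor guesses each next
# character; only mispredictions are stored as (position, char) corrections.

def predict(prefix):
    # Stand-in predictor: always guesses a repeat of the previous character.
    # A better predictor means fewer corrections, hence better compression.
    return prefix[-1] if prefix else ""

def compress(text):
    corrections = []
    out = ""
    for i, ch in enumerate(text):
        if predict(out) != ch:            # wrong guess: store the true char
            corrections.append((i, ch))
        out += ch
    return len(text), corrections

def decompress(length, corrections):
    fixes = dict(corrections)
    out = ""
    for i in range(length):
        # Replay a stored correction if one exists, else trust the predictor.
        out += fixes.get(i, predict(out))
    return out
```

Because both sides run the identical predictor, the corrections list is all that must be transmitted beyond the length, and the round trip is exact.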
WoM how many lines of code is your whole AI?
A brain only, and always, predicts what it will do.
And what is the goal of making the AI? Ex. cash? Helping humans invent solutions? Therapy?
I think I've always taken an interest in AGI because it's an example of the Rational Other -- something that differs from humans both physically and mentally, but can relate to humans on our level. Rational Others simply delight me for their own sake. I do think AI technology has the potential to be useful/powerful/altruistic, and that's part of how I justify the amount of time I spend on it, but my root motivation is not instrumental. Really I just want a meeting of the minds with a non-human ...
Humans have a tendency to claim absolute truth based on their limited, subjective experience, while ignoring other people's limited, subjective experiences, which may be equally true.
"Korr, you are an expert for NN"
Korrelan says, I think on his website or YouTube, that he is only an amateur, not a professional. If not, you should clarify in what area you mean.
I hired a freelancer to translate 17 lines of my python (the tree storage) to C++.
I have 4 attachments to add below, hold on. Sorry, the names have to be different because this forum doesn't allow a file with the same name to ever be uploaded again (infurl, please fix that lol). Ok, the names are fine this time, though you'll need to download enwik8 and change its name to enwik8.txt in the program at top, if I can't add the 4th big file.
That'd be awesome if you can fix his code to run faster
'<mediaw', [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [3, 18, 33, 48, 63, 78, 93, 108, 123, 138]
An evaluation for AGI that tests AGI on just one single narrow task like who can build the highest tower, or run the farthest, or make the most balloons, does not test for AGI because AGI is general purpose and needs to solve many problems.
If you let the computer decide what it wants to do itself, it's more sentient, but it's also a lot more dangerous. If you decide its motivation, it's safer because it's in less control of itself, and that's what you want, IMO.
Yes, but look what evolution did with roughly 4 billion years of only a survive-and-reproduce strategy. More complex behaviors seem to come along with enough cycles spent on genetic mixing of the strongest ones.