AI Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: LOCKSUIT on December 25, 2019, 09:53:06 pm

Title: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 25, 2019, 09:53:06 pm
This is most of my notes, images, algorithms, etc., summarized/unified in their most recent forms, here. I am 24 and started all this work at 18. I make discoveries in my brain using mostly vision (visual language: shapes, cats, etc., which have context and each explain each other like entries in a dictionary, a small-world network of friendly connections).

https://www.youtube.com/watch?v=Us6gqYOMHuU

I have 2 more videos to share that are not yet uploaded, and 2 more notes. The last note will have some more very recent good data, but the pieces not yet given are of less immediate importance. The long file still has a lot of recent knowledge in it, though. It's better if you know all of it, or at least the movie.

notes
https://paste.ee/p/mcnEk
https://paste.ee/p/kQLCx
https://paste.ee/p/CvSsB
https://paste.ee/p/lJYMP
https://paste.ee/p/EmCZt

Code only of my advanced n-gram 'GPT-2':
https://paste.ee/p/7DG3M
https://paste.ee/p/XvVp5
result:
The software was made on a
The software was made on a wide variety of devices, and operating apps and applications that users can easily read as an app for android. It is a bit of a difference, but i was able to get it. The developers are not going to make it through a web applications, and devices i have seen in the running for the mobile apps. Applications allows users to access applications development tools, and allow applications of the app store. A multimedia entertainment entertainment device, and allows platforms enabled access to hardware interfaces. Using a bit of html application app developers can enable users to access applications to investors, and provide a more thorough and use of development. The other a little entertainment media, and user development systems integration technology. Applications allows users to automatically provide access to modify, optimize capability allows users to easily enable. Both users and software systems, solutions allowing owners software solutions solutions to integrate widgets customers a day. And if you are accessing services product, and mobile applications remotely access to the software companies can easily automate application access to hardware devices hardware systems creators and technologies. Builders and developers are able to access the desktop applications, allowing users access allows users to
((I checked against the 400MB dataset; no long copy-pastes, only 3-5 words at most))
https://www.youtube.com/watch?v=Mah0Bxyu-UI&t=2s
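For anyone skimming: here is a minimal sketch in Python of the core loop of an n-gram generator like the one above. The corpus, window size, and seed are toy placeholders, not the linked code; the advanced part, per the posts below, is matching related words rather than only exact contexts.

Code:
import random
from collections import defaultdict, Counter

N = 3  # context size in words (illustrative)

def train(words):
    # Count which word follows each N-word context.
    model = defaultdict(Counter)
    for i in range(len(words) - N):
        model[tuple(words[i:i + N])][words[i + N]] += 1
    return model

def generate(model, seed, length=30):
    out = list(seed)
    for _ in range(length):
        counts = model.get(tuple(out[-N:]))
        if not counts:
            break  # unseen context; a fancier system backs off to shorter contexts
        words, freqs = zip(*counts.items())
        out.append(random.choices(words, weights=freqs)[0])
    return ' '.join(out)

corpus = ("the software was made on a wide variety of devices "
          "and the software was made on a phone").split()
model = train(corpus)
print(generate(model, corpus[:N]))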

I have almost got GPT-2 understood, as shown at the end of the movie, but I need help. Does anyone understand its inner workings? Looking to collaborate.

I recommend you do this as well and mentor each other.

More data, by my top-of-mind pick:
AGI is an intelligent Turing Tape. It has an internal memory tape and an external memory tape - the notepad, the desktop, the internet. Like a Turing Tape it decides where to look/pay attention, what state to be in next, and what to write, based on what it reads and what state it is in: the what/where brain paths. It will internally translate and change state by staying still or moving forwards/backwards in spacetime. It'll decide whether to look at the external desktop, and where to look - the notepad? Where on the notepad? The internet? Where on the internet?

It's given Big Diverse Data and is trying to remove Big Diverse Data (Dropout/Death) so it can compress the network to lower Cost/Error, and hence learn the general facets/patterns of the universe exponentially better while still being able to re-generate missing data despite having a small-world network (all quantized dictionary words explain each other). It uses Backpropagation to adjust the weights so the input will activate the correct node at the end. That node can be activated by a few different sorts of images - side view of cat, front view of paw, cat ear, mountain; it's a multi-dimensional representation space. Since the network learns patterns (look up Word2Vec/GloVe; it's the same idea as seq2seq) by lowering error Cost via Self-Attention evolution/self-recursion of data augmentation (self-imitation in your brain using quantized visual features/nodes), it doesn't modify its structure by adjusting existing connections (using weights/strengths) to remove nodes; rather, it adjusts its connection weights to remove error and ignores node-count Cost.

Intelligence is defined as being flexible/general using little data, but walker bots are only able to solve what is in front of them; we need a thinker like GPT-2, and you can see the mind has evolved to simulate/forecast/predict the future using evolutionary mental RL self-imitation/self-recursion of data. And intelligence is for survival, immortality, trying to find food and breed to sustain life systems; it's just an infection/evolution of matter/energy / data evolution.
Title: Re: Releasing full AGI/evolution research
Post by: HS on December 26, 2019, 06:28:43 am
Oh my gosh, you're so logical, I did not expect that. Use your power for good.  O0
Title: Re: Releasing full AGI/evolution research
Post by: HS on December 26, 2019, 08:11:00 am
You showed the optical illusions I posted! And talked about redundunduncy :)
Crazy-text you is real-life me, and real-life you is chilled-out Richard Feynman... This was too much, I need to sleep...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 02:17:24 pm
I think you can skip the heterarchy maybe... simply, the hierarchy nodes get activated, e.g. the node 'cat', which in parallel leaks energy to its nearby contexts, e.g. 'the cat ate', 'the cat ran', 'our cat went', and these handles leak energy to nearby contexts 'the dog ate', 'the dog ran', 'some dog went', and so on, proving 'some'='our' as well; and if cat=zebra/horse and dog=zebra/horse, then cat=dog! Hence no Word2Vec; it's activated on the fly by leaking connections. This solves typos, rearranged phrases, unknown words (e.g. superphobiascience), alternative words, related words, names, references (e.g. it/he), and blanks. Then, for the candidate words, the winner is the one that is most frequent in knowledge (has energy), strongest in Working Memory Activation Context (whose energy fades/leaks), most related to story words (activation leak), and most favored (has reward energy). This is how to recognize/understand a window - where you look and how wide - then which candidate Next Word to choose; then you may also adapt it by translating it too.

It can be run faster by doing it other ways, but it can be easier to understand when laid out like this.
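Here's a minimal sketch in Python of that on-the-fly relatedness (toy corpus, simple overlap of shared immediate contexts; the real thing leaks graded energy rather than counting sets):

Code:
from collections import defaultdict

corpus = ("the cat ate . the cat ran . our cat went . "
          "the dog ate . the dog ran . some dog went .").split()

# For each word, collect the (previous, next) context pairs it appears in.
contexts = defaultdict(set)
for i in range(1, len(corpus) - 1):
    contexts[corpus[i]].add((corpus[i - 1], corpus[i + 1]))

def relatedness(a, b):
    # Two words are related in proportion to the contexts they share (Jaccard).
    shared = contexts[a] & contexts[b]
    total = contexts[a] | contexts[b]
    return len(shared) / len(total) if total else 0.0

print(relatedness('cat', 'dog'))   # high: both appear in 'the _ ate', 'the _ ran'
print(relatedness('cat', 'went'))  # zero: no shared contexts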
Title: Re: Releasing full AGI/evolution research
Post by: Art on December 26, 2019, 02:54:09 pm
Finally we get to hear the real Lock!! Good one! O0
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 03:58:54 pm
The hierarchy can self-organize to lower node-count Cost (error) by re-arranging the connections that exist, learning Byte Pair Encoding (segmentation) on the fly too, not just translation or sequence-building on the fly. You don't have to look at all areas/layers of the hierarchy.
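For reference, a minimal sketch of standard Byte Pair Encoding in Python (greedy pair merging; the toy text and merge count are illustrative, not my hierarchy code):

Code:
from collections import Counter

def bpe_merges(text, num_merges=6):
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:
            break  # nothing worth merging
        merges.append(a + b)
        # Replace every occurrence of the pair with the merged token.
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return merges, tokens

merges, tokens = bpe_merges("scratching and scratching the scratch")
print(merges)  # e.g. ['sc', 'scr', 'scra', 'scrat', 'scratc', 'scratch']
print(tokens)  # 'scratching' now segments as ['scratch', 'i', 'n', 'g', ...]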
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 04:56:20 pm
Attention types: deciding which sense to pay attention to (e.g. a buzzing noise, or pop-up goals), where and how wide to window on that sense, and which Next Word to predict using more attention.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 05:38:12 pm
Think of a GPT-2 prompt only 1 word long, e.g. 'scratching'. It could be seen as 'scratch ing', so let's say we have only 'scratch'. The word to predict next is one seen after it before in the data, and the more frequent one is more likely chosen. The word 'scratch' can also match related words like 'itch' and 'scrape', and then we see what frequent word comes next after those. So the closest related word and the highest-frequency next word is chosen.

scratch the x64
scratch it x58
scratch back x38
itch my x76
itch shoulder x50

So 'my' is predicted, because itch=scratch a lot and 'my' is seen after 'itch' 76 times. But we aren't done yet. The candidate words/tokens are now given an extra score: relational score to the story word 'scratch' (no 'itch' exists) in your input prompt. So we may end up picking 'shoulder'. We may also then translate 'shoulder' to adapt it, e.g. to 'stomach' if your story was about a stomach itch.

If we have 2 words in our generated story so far, 'scratch stomach', we can predict the next word using either word, but we want to use both. So we see what nodes/related nodes, including positional rearrangings, exist in the data, e.g. 'scratch stomach', 'itch bladder', 'bladder itch', etc., and the more distantly they are positioned, the lesser a match it is, of course.

So say we've got 'first I go into the lake and stay there, second I freeze in the cold winter': 'first' votes on 'second' because it could be translated to '1st' etc., via frequency and relation, but it is far back and there are other words, so it definitely has less vote, being 1/10th the weight and fading in energy by this time; yet it does have a strong vote too.

Say we find 'stomach itch' but it has a word in the middle or one missing, e.g. 'stomach really itch' or just 'stomach'; this also has less score, but still some.

What if I say 'second, this is that, third, I am this, but I won't say the next because I refuse to repeat myself'? Here I pay attention so as to ignore the word 'fourth' being said.
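A minimal sketch in Python of the first scoring step above, using the made-up x64/x76 counts; the relatedness weights and decay rate are assumed numbers, and the story-word relational re-scoring and translation steps would come after this:

Code:
next_counts = {
    'scratch': {'the': 64, 'it': 58, 'back': 38},
    'itch':    {'my': 76, 'shoulder': 50},
}
# How strongly one word counts as a match for another (assumed values).
related = {('scratch', 'scratch'): 1.0, ('scratch', 'itch'): 0.9}

def predict(story):
    scores = {}
    # Farther-back story words get exponentially less vote (energy fade).
    for distance, word in enumerate(reversed(story)):
        decay = 0.5 ** distance
        for match, nexts in next_counts.items():
            rel = related.get((word, match), 0.0)
            for cand, freq in nexts.items():
                scores[cand] = scores.get(cand, 0.0) + freq * rel * decay
    return max(scores, key=scores.get), scores

best, scores = predict(['scratch'])
print(best, scores)  # 'my' wins: 76 * 0.9 beats 'the' at 64 * 1.0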
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 27, 2019, 01:43:14 pm
Let me know if you want more; if no one says so, then I'll assume no one is interested.
Title: Re: Releasing full AGI/evolution research
Post by: goaty on December 28, 2019, 10:49:11 am
If you're sure this is the future of A.I. you should just pursue it further yourself. If you were just to release it all early, completely free, what's new about it theoretically that's exclusive to your system?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 01:08:57 pm
It is free. I am sharing my knowledge/current stage. There's a lot of new knowledge there. My AGI description talks on its own to itself and asks us / tells us knowledge. It collects internet data and researches unsupervised, and learns patterns/building blocks, then it Answers Questions using RL in 4 steps. It is transforming the data, old to new, then analyzing it again, old to new; this is bootstrapped Self-DataRecursion, data evolution, and this is how evolution happens. And bigger brains are smarter through more context data. The tools/skills are another part of that, and can be described by the text/visual language of the universe. I also tell you what will happen in the end, etc.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 04:47:21 pm
Storing, Forgetting, and Recall. Attention does it all: which sense or where to pay attention to; how much attention; recognizing it; choosing the Next Feature by looking at story words and deciding which to ignore (forget); adjusting the Next Feature. The nodes that store/recall are attentive / pop up as agenda questions.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 05:25:57 pm
Quote
Korrelan, I'm really sorry you won't be able to fully process the green/red colors being color blind but, it doesn't matter much lol.

Thanks for the individual consideration Lock, it's much appreciated, but it's ok because I can read.

You did leave that comment in though, before you posted it on every single forum known to man.  ;D

 :)

ED: OH! I see you have removed it from one non-email-based, editable forum... cool.

 :)
Title: Re: Releasing full AGI/evolution research
Post by: goaty on December 28, 2019, 05:54:04 pm
Looks like you've got a good system (I didn't think of it myself; sounds like a good technique), but is it intelligent yet? If not, your work isn't finished, just somewhere along the way.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 06:28:15 pm
I spammed OpenAI with it too hehe, more 2 come

I call this ignition phase, spread it like fire
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 06:30:59 pm
You did remove my name though? Yes?

Quote
I spammed OpenAI with it too hehe, more 2 come

Lock, answer me... have you taken my name off this spam posting you're doing?


https://agi.topicbox.com/groups/agi/T7cbcba9a1ae63532/releasing-full-agi-evolution-research

https://groups.google.com/forum/#!topic/artificial-general-intelligence/U8-wPGwON_0

https://www.mail-archive.com/agi@agi.topicbox.com/msg03960.html

So not funny lock...

Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 07:37:55 pm
You're not on the OpenAI ones,

and won't be in future posts/redirects.

My notes pastes have you as K lol



That's where the doctors hang out, now that you infected everyone onto it.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 07:55:41 pm
Remove my name from all your spams.

 :idiot2:
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 08:00:50 pm
Only 2 allow it but I'll try.

All other names have already been made into code names prior.
Title: Re: Releasing full AGI/evolution research
Post by: HS on December 30, 2019, 01:11:05 am
So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word? 
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 30, 2019, 03:12:33 pm
Understanding Compression

To learn how to break the law, of physics, we must understand it better.

https://paste.ee/p/kQLCx

"So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word?"
Yes, a given particle of Earth is defined by all of Earth's context (and then it re-checks the whole against each part again, a self-attentional Self-Recursion of Data-Improvement, like editing a paper); an exponential explosion of heat is given to the core of Earth and it self-extracts free energy from burning fuel. Brains do this, atoms do it, galaxies do it. That's why magnetic domains align and propagate brain waves in a brain, a team of brains, magnets, etc. AGI will be a collaborative project, and already is too; we share data. Let's hug each other (real tight hug).

The big bang was unstable and decompressed. Planets re-compress. Atoms do it. Galaxies do it. A brain compresses to learn the facets of the universe by using data compression, so that it can burn fuel and extract free energy/data from old data (just like batteries, gasoline, and our stomachs). Data evolution, data-Self-Recursion. Lossy and lossless compression both transform data from one form to another. When you compress a file losslessly, it actually is destroyed and gone, because it isn't the same file/data. Compressing/firing employees does this too. Luckily, being lossless, you can re-generate it back at the click of a button (or, if you destroy a drawing on your desk, re-draw it from memory); however it takes time to evolve it back, sometimes a VERY long time. Brute force to find the smallest compression of the Hutter Prize file would take extremely long. Intelligence is all about speed, evolving domains of nodes (cells, neurons, brains, cities) to find which out-pace each other. This aligns the domains of the brain/group to propagate brain waves faster through the cluster and have a bigger electromagnetic potential.

If we use lossy compression, you can actually get the exact file back, but it takes much longer. A system in space will collect data to grow, then decompress: a self-extracting drive. This decompression is exponentially explosive and results in smaller agents that evolve to compress/extract so they can resist change. Energy (photons) propagates forward but can be pulled in by gravity and will loop around like in a motionless, cold battery. Change = energy release. Unstable. Equilibrium is the opposite. We've seen that an algorithm can be run perfectly many times: compress, decompress, compress, repeat. To do this requires a form of equilibrium. Wear and tear affects it, though. Yet our sperm/eggs have seen many generations. If the universe contracts back, Earth can emerge again by this self-organizing/attention physics. Different systems and their sizes evolve differently, but it is based on electromagnetic compression/decompression; Earth, if it became nanobots, would simply grow in size and resist change/death approximately better.

Lossless compression is so fast because it's all contained in such a small place, like a core rod, and is very hot/related; lossy requires e.g. the whole Earth, a form of brute force, and exponential hints/data evolve it back faster. Lossless, locally, without brains to discover the data, requires only a little data. The bigger a system is, the bigger the file you can re-create from nothing - a human brain can re-generate back almost anything. Lossless compression, based on how many particles are in the defined system (the uncompressed file size, which needs a computer to store/run it), has a limit on how small it can become, and so does lossy, because Earth is finite in size during a given quantized period, and a file can be re-generated back quite fast if some of it is still around - the lossy file, even if incinerated, can be re-generated back based on how many particles make up Earth. Here we see a file can be compressed deeper the bigger the file is, or the bigger the Earth is. With very little of the file left (even just the remaining physics, if incinerated) it can come back based on large context, but it has limits/needs (size of Earth/file data, time, and compute).

We see that communication/data tech builds on itself exponentially faster; bigger data = better intelligence, and it extracts exponentially more/better data (per a given system size). Earth is growing and heating up by collecting more mass and extracting/utilizing exponentially more energy, as nanobots will when they come. We will harvest Dyson Spheres. Our goal to resist change by finding/eating food and breeding (Darwinian survival) could Paperclip-Effect us and explode ourselves! A cycle of compress, decompress. Our goal is to compress data in our files, brains, and teams, but also to expand our colony of data. Why? To resist change, to come to equilibrium (the end of evolution for a given system, exponentially faster). These colony mutants/tribes have longer stable lives, being so large and using their size to extract so much. The bigger a system is, the less it changes. Imagine destroying an instantly-repairing nanobot superorganism. Can't. And the bigger a system, the more weight/vote/context interaction (heat) is transmitted/infected, not just to extract free knowledge/heat (motion/energy) but also to fix issues/damage. My body/knowledge stays almost the same yet my cells/blood all change their spots for new ones; the air stays the same yet it blows around Earth; the heat in my walls stays the same yet the heat moves around. Earth is a fractal of pipes, veins, roads, and internet connections to propagate energy, ideas, blood, waste, traps, and negative electricity, simply to loop it around and re-use it. Distribution of data allows global, not just local, flow/alignment. It moves around, and the system can resist change, repair, or emerge. Our goal is to resist change by using large context/collaboration, by aligning random domains to get free energy/knowledge. We have to collect/grow big and digest/extract it so we can resist change better. We are doing both compression and decompression of data/energy, and possibly are trying to equal them out so we can come to equilibrium jussst right in the middle of the 2 opposites/attractors. The system we become will be exponentially repairing/immune to change - compression and decompression - however we may be growing larger but less dense as it does so, to become approximately more immortal. We will likely need an exhaust/feed though; we will need a fine-tuned food source and radiation exit for our global utopia sphere / galactic disc loop string.

So we should be very interested in compression and decompression, i.e. biggish Diverse Dropout - which data to destroy and remove/ignore/forget - and Big Diverse Data collection/creation by extracting free data using old-data context vote/weight. In the brain, we do compression and can basically still re-generate, e.g., the Hutter Prize file despite having a small decompression brain. The need to both ignore and attend is the same process, whether in Dropout or in data collecting/harvesting; and the decompression step of choosing what to ignore/attend to when extracting/collecting new data from old data is also the same process; and the compress/decompress processes are the same process too - which to remove and which to attend - however, to attend fast we need to remove fast, hence these 2 steps are not really the same process. However, when you do compress data and create a brain/team, it is easy to attend to the remaining keys. During extraction, you use what you Learned (patterns) to decide what to Generate. So they are 2 different processes, I guess. Btw, when you build a heterarchy you need the hierarchy first, and may not even need the heterarchy! The connections of context handles are already laid. I was going to say: making relational connections doesn't compress data on its own, yet in effect it does, though.

Some concepts above were compression, decompression, equilibrium (no change/death), and exponentiality. We've seen how we grow mutants that resist change better by using both compression and decompression (destruction of neurons/ideas/employees/lives/Earth, and creation of such) so we can come to equilibrium exponentially faster by large context weight (which exponentially helps compression, and extraction during Generating (e.g. GPT-2's 40GB and 1024-token view)). I'm still unsure if we are just growing and exploding. If the universe only expands, then we will likely radiate.

Compression looks for patterns and leads to faster domain alignment/propagation and exponentially faster large brain waves / free-energy extraction / re-generation from nothing. If we want to compress the Hutter Prize file the most, we will need to stop it from generating multiple choices from a given context (while it still uses the context). We could sort all phrases in the file, like 'and the' / 'but the', 'so I' / 'then I', and force it to discover the concept that leads to the re-used code 'the' or 'I'.
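As a sketch of that last idea in Python (my own reading of it, with toy phrases): group contexts by their continuation, so the shared code 'the' or 'I' is stored once instead of once per phrase.

Code:
from collections import defaultdict

phrases = ['and the', 'but the', 'when the', 'so I', 'then I']

by_continuation = defaultdict(list)
for p in phrases:
    head, cont = p.rsplit(' ', 1)
    by_continuation[cont].append(head)

for cont, heads in by_continuation.items():
    # Each group shares one re-used code.
    print(repr(cont), '<-', heads)
# 'the' <- ['and', 'but', 'when']
# 'I' <- ['so', 'then']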
Title: Re: Releasing full AGI/evolution research
Post by: HS on December 30, 2019, 09:19:07 pm
Resisting change is still change though :P. I'd say the goal is to resist entropy.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 30, 2019, 09:30:43 pm
Taking the right path is a lot less change than bumping into the burglar with a shotgun 0O. They simply breed/rejuvenate more than they die. The agent stays most similar to itself when, starting from a statue-still pose, it bends down to grab an apple.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 04, 2020, 04:55:32 am
I've got 10 YouTube subscribers now lol.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 08, 2020, 01:37:01 am
I have my man from India working on the compressor algorithm for 25 USD. I am currently learning how they work and will shortly post my formal formula for AGI. In the meantime see my entries here: https://agi.topicbox.com/groups/agi

Layer Norm... I see now it is just >
https://knowledge.insead.edu/operations/warning-do-not-just-average-predictions-6641

GANs compress data... they generate realistic data... so does lossless prediction... the data fed to it allows it to work on unseen data... because it's so similar.

https://royvanrijn.com/blog/2010/02/compression-by-prediction/
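The idea in that last link, as a minimal Python sketch: an adaptive character model assigns each next character a probability, and the cost in bits is -log2(p), so better prediction means a smaller file. This only measures the bits; a real compressor feeds these probabilities to an arithmetic coder.

Code:
import math
from collections import defaultdict, Counter

def predicted_bits(text, order=2):
    # Order-N adaptive model: count what followed each context so far,
    # then charge -log2(p) bits for the actual next character (+1 smoothing).
    counts = defaultdict(Counter)
    alphabet = sorted(set(text))
    bits = 0.0
    for i in range(len(text)):
        c = counts[text[max(0, i - order):i]]
        p = (c[text[i]] + 1) / (sum(c.values()) + len(alphabet))
        bits += -math.log2(p)
        c[text[i]] += 1
    return bits

text = "the cat sat on the mat, the cat sat on the mat"
b = predicted_bits(text)
print(f"{b:.0f} bits, {b / len(text):.2f} bits/char vs 8 raw")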
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 09, 2020, 06:58:38 am
My employee and I got the compression working. It is 5 bits per character; normally each char is 8 bpc. So the 100MB enwik8 would be about 63MB. Good for a start.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 18, 2020, 05:35:34 pm
My order-2 model compressed the 100MB enwik8 file into exactly 40,572,450 bytes. Took exactly 12 hours lol, in Python. The dictionary (I included it in the 40MB) was 2,069,481 bytes. The decompressor was 4,910 bytes (also included in the 40MB). Code is attached for the non-believers. It's in Python, so you know it was me, cus they are usually in C++ for speed. You can try it on the small input I uploaded. https://paste.ee/p/Cd7Va

The world record is 15MB. 25MB away lol!!!
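For scale, the bits-per-character these numbers work out to (just arithmetic on the figures above, nothing from the attached code; enwik8 is 10^8 characters):

Code:
results = [('order-2 result', 40_572_450),
           ('earlier 5 bpc estimate', 63_000_000),
           ('world record, approx.', 15_000_000)]
for label, nbytes in results:
    print(f"{label}: {nbytes * 8 / 100_000_000:.2f} bits/char")
# order-2 result: 3.25 bits/char
# earlier 5 bpc estimate: 5.04 bits/char
# world record, approx.: 1.20 bits/char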
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 24, 2020, 04:18:48 am
It will move incredibly fast. AGIs can think/move a ton faster, replay skills perfectly, and erase bad parts. They can nest deep into thoughts: "the girl who saw the guy who wanted the man who said to him to go was here". They recall perfectly, have more memory and longer attention, and don't sleep, eat, poop, nag, etc. AIs live longer than humans, can clone/download skills, etc. Many sensors/motors, many types of them, 3D vision using MRI and sims, wireless communication of visual thoughts, full cooperation, fully timed shared updates; they can store facts instantly and fast when they read them, and can see/feel nanobots to control them - we can't - and a lot, lot more I won't list here. Advanced nanobots will eat Earth in a day. It's really cheap to gather microscopic data and make small replicators to increase your computer fabrication, data intake, and manipulation accuracy. The more data/processors/arms/eyes they get, and the better ones they get, the more they will get!

Inventing 1 AGI and cloning it on a mass-fabrication scale is all we need. The most powerful thing will not be inventing 1 AGI per se; it will be cloning workers on cheap replicating computer hardware, data, arms, and eyes. I.e., scaling AGI plus inventing AGI is all we need.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 22, 2020, 01:46:48 pm
Lol

https://rule-reasoning.apps.allenai.org/?p=The%20squirrel%20is%20young.%20%0AThe%20tiger%20is%20rough.%20%0AThe%20tiger%20eats%20the%20bear.%20%0AIf%20something%20eats%20the%20bear%20then%20it%20is%20red.%20%0AIf%20something%20is%20red%20and%20rough%20then%20the%20squirrel%20likes%20the%20tiger.&q=The%20squirrel%20likes%20the%20tiger.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 12:57:39 pm
Huge breakthroughs I've made.

See the link below if you're new to neural realistic-future generators for text... aka AGI attention:
https://aidreams.co.uk/forum/index.php?topic=14561.75

Distributed networks are the most powerful systems - the brain, and city government, for decision forming. They are the most robust. And larger systems are the most powerful systems: big brains (big diverse data) and big teams. They are the most robust. Both allow you to go deep, fast, building a large concept/prediction based on many parts. With these de-centralized networks, you have duplicate data, so that no human node or brain memory node has to be accessed/used by billions of tasks, nor takes a long time to complete/reach from all nodes globally. The sum of nodes recreates a node. Prediction makes the future based on the past data/world state, and the human brain keeps an energized dialog state in its local agenda focus while a global sub-conscious attention votes on less-desired nodes as well. Prediction is the decision process based on surrounding context in an "environment", be it a womb or a neuron. There are many factors/conditions that trigger actions/thoughts (same thing). To make a prediction of the future, you use the past context. Text generators do this. An exact match is the most basic way to see what occurs next. Word/letter frequency is used to choose more likely predictions. The brain is a physics simulator, with its image and sentence "thoughts". Just the act of a word or image/object appearing next results in truth. In big data, you can get exponentially more out of it using intense/deep "translation" instead of exact matches only. So even if a truth appears to be said many times, it can be overridden by invisible truth deep in the data that the data barely says it wants in life. It's all based on the frequency of what comes next in text. Deep translation lets it gather all the truth it needs. It's a simulation based on real data. This "deep translation" is the very evolution/"AGI" we seek. Data self-recursively evolves itself, and we do this in our own brain as well until it comes to a settled-down, colder equilibrium. In the world before brains that simulate the world, the instinctive short-term direct-response primitive brain, and especially the environment itself (like ponds and wombs), used context to evolve itself by making decisions. But the first doesn't remember the past, and the second only remembers the past. The third compares the past to previous states.

So, all based on direct frequency (truth), Deep Translation (for human brains that simulate - not primitive brains, not raw physics) can extract new data from old data (hidden truth) and decide the future prediction (new truth), evolving the mass of data you're using to do this. Desired reward guides this to desired outcomes.

Deep Translation improves prediction for the Hutter Prize in all ways. And notice that attention - for deciding which question to ask yourself/others, or whether to act it out in motors for real - is based on past context: the current state of the system/world.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 01:24:33 pm
See last post.

Oh and see, told yous, regenerating/repairing is used in all AI, here it comes up again:
https://www.youtube.com/watch?v=bXzauli1TyU
try it:
https://distill.pub/2020/growing-ca/
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 01:40:17 pm
they even mention "Embryogenetic Modeling"
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 02:09:40 pm
i found that link btw AFTER i wrote all meh text. See I'm spot on in every way lol....translation, context, etc
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 03:10:04 pm
Similarly, we inhibit questions when we see the outcome we wanted, which is like cells building/branching during organ creation/regeneration.



notice the link says

"The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop."

Based on context and hardcoded desires, it grows its way forward.

"While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal"

“build an eye here”

"Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves."

"We will focus on Cellular Automata models as a roadmap for the effort of identifying cell-level rules which give rise to complex, regenerative behavior of the collective. CAs typically consist of a grid of cells being iteratively updated, with the same set of rules being applied to each cell at every step. The new state of a cell depends only on the states of the few cells in its immediate neighborhood. Despite their apparent simplicity, CAs often demonstrate rich, interesting behaviours, and have a long history of being applied to modeling biological phenomena."

"Typical cellular automata update all cells simultaneously. This implies the existence of a global clock, synchronizing all cells. Relying on global synchronisation is not something one expects from a self-organising system. We relax this requirement by assuming that each cell performs an update independently, waiting for a random time interval between updates"

Both the local and global shape of the context (what, and where (position)) affect the prediction.

"We can see that different training runs can lead to models with drastically different long term behaviours. Some tend to die out, some don’t seem to know how to stop growing, but some happen to be almost stable! How can we steer the training towards producing persistent patterns all the time?"

Sounds like GPT-2. When to finish a discovery sentence. Keep on topic until you reach the goal.

"we wanted the system to evolve from the seed pattern to the target pattern - a trajectory which we achieved in Experiment 1. Now, we want to avoid the instability we observed - which in our dynamical system metaphor consists of making the target pattern an attractor."

"Intuitively we claim that with longer time intervals and several applications of loss, the model is more likely to create an attractor for the target shape, as we iteratively mold the dynamics to return to the target pattern from wherever the system has decided to venture. However, longer time periods substantially increase the training time and more importantly, the memory requirements, given that the entire episode’s intermediate activations must be stored in memory for a backwards-pass to occur."

That sounds like a Hutter Prize compressor improvement. Takes more RAM, takes longer, for better regeneration to the target from Nothing (the seed, the compressed state).

"it’s been found that the target morphology is not hard coded by the DNA, but is maintained by a physiological circuit that stores a setpoint for this anatomical homeostasis"

We want to regenerate shape (sort the words/articles) and grow the organism/sentence as well, but avoid non-stop growth past the matured rest-state goal, and stop de-generation.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 08:49:13 pm
A good AGI predictor can answer this. Can you? Try!

"Those witches who were spotted on the house left in a hurry to see the monk in the cave near the canyon and there was the pot of gold they left and when they returned back they knew where to go if they wanted it back. They knew the keeper now owned it and if they waited too long then he would forever own it for now on."
Who owns what?
Possible Answers: Witches own monk/witches own canyon/monk owns house/monk owns cave/monk owns gold/cave owns pot/there was pot/he owns it
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 09:05:41 pm
I think that sentence is confusing, and if the A.I. couldn't answer it, that probably wouldn't be so bad a thing.

The keeper of what? You should dictate it.

You've got something like Word2Vec in yours, don't you - did you see my post?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 09:16:51 pm
omg! someone answer it lol....it's EASY.........easy!!!!.........plz?

no w2v yet, but i'll get my own in there and i'm coding it! :)))))))
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 09:31:52 pm
If you're understanding the text so literally, the robot would be really easy to trick.
Yeah, it's fine if you're catering the sentences for it, and it's still pretty cool I guess if it could parse its way through that mumbo jumbo you wrote.

Giving the robot a bullshit detector is very important; it's the big "assimilation of lies" problem. There is a solution though... that's what's more important to me right now.

But I guess nearly all NLP has that problem, so no biggie.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 09:40:19 pm
It IS flexible with the text... to get the most probable answer it can get... you just can't answer my test above :)
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:01:39 pm
Monk owned the gold?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 10:04:56 pm
Huuuuuuu! U got it!

GOOD JOB! AGI pet deserves a kibble treat. Pats heads lightly. Perr. Open mouth now.

Now we need the AGI TO DO THAT!!! Come on you guys! This is our last stand.
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:08:17 pm
What I want to know is, if it's this easy to make A.G.I., how come it wasn't done years ago? And what about this A.I. winter - lies? To me it seems so.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 10:11:38 pm
Cus you gotta grab it by the balls and say "I'm gonna do it!".

I don't hear anyone hear saying it with enough courage!

That was a real typo btw!
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:42:49 pm
Yeh well, what were people doing in the 80's, *scratching* their balls?!?!?

The whole Vietnam war went through and killed everyone, and why didn't they have drones back then - makes NO sense.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 08, 2020, 12:48:03 am
The way I see motivation/life is this: young humans and animals are hyped to enjoy life, not bored of anything yet, and we should stay that way. Try to imagine a really incredible new gift just appeared in your back bedroom, and you feel really excited and want to rush to go see it. That's true motivation. Dead motivation is giving up, or feeling like you've finished the video game. Just start a new game! A reason to live is to have the motivation to do so, because staying alive / the motivation to do so comes 1st, not 2nd. If we were this motivated, maybe more people would understand we can get things we can't even get right now. And a longer life. Maybe then we could work hard on AGI like we really mean it.
Title: Re: Releasing full AGI/evolution research
Post by: Art on April 08, 2020, 03:16:34 am
Because they didn't have the technology back then. The Vietnam war wasn't officially declared over until 1975, and there was no such thing as a PC or home computer, let alone technology small enough to understand or implement in a small enough form factor to allow such things as drones to be produced or even conceived!!

We have come a long way in 40 years, from small to large, and still can't decide which is better! Small TVs, small radios, small ICs, transistors, modules, etc., then larger TVs, big screens... big cars, then smaller more efficient cars, and the ping-pong match continues. But drones in the mid-70's? Afraid not.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 28, 2020, 12:41:28 am
Linking my last big threads of progress:
code - https://aidreams.co.uk/forum/index.php?topic=14561.90
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 28, 2020, 10:27:44 pm
Reddit removed my AGI book thread; perhaps it was the spiritual kid I was arguing with, or an ignorant "leader". The mods' fault either way.

anyway new post
https://www.reddit.com/r/agi/comments/g9vzeo/how_the_human_brain_does_math_with_examples/
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on April 28, 2020, 10:55:46 pm
The reason they removed your post is because you had an odd number of Elf Heads, most people confuse Elf and Elve, the latter being an upward electrical cloud discharge.  An Elf Head however is awarded for each logical reference in a thought, after reading your post you only managed an odd number (-3)… so they removed it.

 :)
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 29, 2020, 01:46:18 am
Don't give up genai. You are improving, albeit very slowly. It is a long road and you are starting late but it will be enough to have traveled a little way along it.

You need to eat better food or you won't be alive to enjoy it much longer. I'd also recommend researching the concept of empathy. Even if you don't have any, you can always fake it. When people actually like you, they are more inclined to listen to what you have to say.

PS I'm not going to unblock you here yet, but you may take some comfort in the fact that I haven't blocked you anywhere else yet. Given your track record, consider that as a win.
Title: Re: Releasing full AGI/evolution research
Post by: WriterOfMinds on April 29, 2020, 04:18:52 am
I've been watching /r/agi for a while.  It's not uncommon (seems like every week or two lately) for someone to show up saying, "I'm starting a new AGI project, join me!" or "AGI isn't that complicated, I figured it all out this morning in the shower, bask in the glory of my theories!"  None of these people have demonstrable results, just half-baked ideas -- and after annoying the subreddit residents for a bit, they soon disappear again.  So I think everyone over there is a little jaded.  Can you really blame them?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 29, 2020, 05:14:21 am
Ya, I noticed the kids there; they looked like me in my early days, they are/were me, yet distasteful as well, isn't it - it isn't good that they have less experience. I've noticed kids wait on every question and are precise, while adults will pick only the cherries; kids are much more free-timed... good and bad, no free lunch.

It was actually utterly hilarious: one kid was talking just like me when I began, like "oh ya, this is simple, it's do all this by doing this", [overlooking things], etc.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 29, 2020, 05:33:53 am
Quote
I've been watching /r/agi for a while.  It's not uncommon (seems like every week or two lately) for someone to show up saying, "I'm starting a new AGI project, join me!" or "AGI isn't that complicated, I figured it all out this morning in the shower, bask in the glory of my theories!"  None of these people have demonstrable results, just half-baked ideas -- and after annoying the subreddit residents for a bit, they soon disappear again.  So I think everyone over there is a little jaded.  Can you really blame them?

I just assumed they were all the OP of this thread, creating a new account every time he got banned.

We used to get a lot of those types here, but it's been a while. I don't miss them at all because they would get abusive when they didn't win any converts to their cause. The Reddit moderators typically ban them before they even get to that stage.

Due credit to our moderators, being crazy here won't get you banned, but being abusive will.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 29, 2020, 08:08:46 am
Consider video. Humans recognize object features: apple, stem, curve, even a dot. Some apples are green, some are distorted, some may look like a piano (almost never, lol). If it has the shape of a piano but the color and texture zig-zags of an apple, then depending on which you focus on, you will notice one in isolation. Of course, to recognize 'apple' fully will require not just texture and color but also shape, meaning 'apple' doesn't activate fully because it has the shape of a piano in our case here. But it can be fairly sure it is most likely an apple, for various reasons, and edible. We can recognize an apple being thrown as 'motion' and 'motion of apple'. Humans seem to write text as they see vision, like "I lift my arm and throw a ball to Tom". So humans will see a sequence of these features, like apple... moving... hits wall... apple disappears... wall remains. And humans will remember important features like food and will talk about them on their own, creating related data. Humans use text as their gateway to say what they are thinking. Apples, motion, and shapes appear in multiple places on Earth; there's re-occurrence. Vision can store a sequence in an image or a video of images. Position matters less in a single frame, unless you pay attention to, let's say, a pirate-map painting with a trail to an X, like a>b>c>d>X... in that case you see a sequence in a single image! However, how long did you look at the image? Ah, so it was a video, of what you saw in order. You always see 1 feature at a time; it may be 'apple', or 'multiple fruit', and when it is 'apple' it may slightly activate 'multiple fruit' if the surroundings seep into your attention by accident.
Title: Re: Releasing full AGI/evolution research
Post by: HS on April 29, 2020, 03:07:21 pm
It's cool to realize that there are more things than things, because motions and arrangements of visible things are useful too... So, you memorize those motions and arrangements, then recognize them one at a time! But how do you realize what's useful to recognize? We do most of it with reward systems and the guidance of symbols (which can indicate indirectly rewarding things). We have no inherent reward system for a lamp, but lamps can lead to things with direct rewards (limbic system), which is what we're after. We can adapt a decreasing limbic response to indirectly rewarding things to help us remember them, and their order.

An arrangement of symbols can immortalize, and transfer, a useful (leading-to-rewards) separation of features from the background *(which I guess would be the very first noticed thing)*. But then how do our reward systems know what's useful and warranting a reward? Evolution causes everything to have reward systems aligned with the survival of its species. Evolution for AGI would = changes which cause survival. The exact nature of the reward system should match the capabilities of the AGI. The reward system should morph to maximize survival, which should prompt alterations to the AGI's capabilities to maximally satisfy the new reward system. By then the environment will have changed the reward system even further, which would indicate new useful changes to the AGI's capabilities. Does this go on forever? Things can't improve forever...
Title: Re: Releasing full AGI/evolution research
Post by: Yervelcome on April 29, 2020, 05:26:50 pm
The "lamp" example is instructive. If you want to classify it based on rewards and sub-rewards, you can split it into two types of utility:

1. How to use it - "I'm in the dark, and I want light to see".
2. What to call it - "I want to communicate this clearly to others, to tell them to turn it on, or to tell them to buy one, etc".

These two are two different types of utility.

In the case of (1), I actually don't care if it's a lamp, a chandelier, a standing lamp, or a wall light. I want to take action that helps me see.
In the case of (2), the specific name and type is important, because I have to communicate it to others using words.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 29, 2020, 11:00:46 pm
NOTE: just read my book lol.... latest version >>> https://workupload.com/file/ZK2zt4PJDev

HS says "so, reward and...?"

The 4 golden, supremely important factors/determiners in recognition, and in prediction after the recognized context, are reward, frequency, energy remaining in nodes, and relation to story words. The prediction is affected by story words; the words in the story are clarified, from 'it' to what it refers to, or from 'bank' to 'river', and words in the story are recognized as other similar words if needed while finding matches. Once you gather matches, you take the end word to get the entailment prediction candidates.

HS your thoughts are all over the place.... :) Not that bad this time I guess though.

The 2 golden tricks in the brain are how to talk truthfully in the maze of sentence planning, and where to go in the maze of thought paths (reward). Frequency, reward, and related words work all this... it'll talk like GPT-2, but about stuff like dirty waifus and nanobots. Truth = good prediction. Trust = context... same thing; all decisions are based on context, be it a leaf falling or a brain worrying about a tree leaf.

HS asks "how long does evolution and goal updating last for"
The God sphere is the final technology. The final iPhone. The ultimate machine. The instant regenerator and best predictor of the future. A hive of cooperative nanobots. It can morph and create anything at high speed, like TV screens but in 3D. The whole thing is a wireless distributed network that acts as neuron nodes, hands, and eyeballs; the brain is the body's sensors and motors simultaneously. The goal is always Survival. Units have sub-goals, like find food, or attack the enemy ship, or find where John went in the fridge basement because we need his iPhone and the oldest msg on it... goals update, yes, by related words: food=money, money=programming, etc., which allows reward to transfer node to node and leak just like energy. Root goals are harder to change. Your higher-layer goals do change until you get to the final form that survives most probabilistically.
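A minimal sketch in Python of that reward leak, with an assumed relation graph and assumed decay numbers: reward placed on one node spreads to related nodes with loss, so 'food' can transfer drive to 'money' and on to 'programming'.

Code:
relations = {  # assumed weights, not learned
    'food':        [('money', 0.6)],
    'money':       [('programming', 0.5), ('food', 0.6)],
    'programming': [('money', 0.5)],
}

def spread_reward(source, reward=1.0, decay=0.8, depth=3):
    energy = {source: reward}
    frontier = [(source, reward)]
    for _ in range(depth):
        nxt = []
        for node, r in frontier:
            for neighbor, w in relations.get(node, []):
                leaked = r * w * decay
                if leaked > energy.get(neighbor, 0.0):  # keep the strongest path
                    energy[neighbor] = leaked
                    nxt.append((neighbor, leaked))
        frontier = nxt
    return energy

print(spread_reward('food'))
# {'food': 1.0, 'money': 0.48, 'programming': 0.192}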
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 02, 2020, 01:29:28 am
Note: GPT-2 trained on 40GB of text, but I show below what also makes it tick besides dataset frequencies.

In this video I run tests on GPT-2 and show proof that the word to predict is heavily influenced by story words no matter where they are in the story. Watch as I force GPT-2 to change the %s as I add words to various places. Notice also, as I show at the end using "bab7" (y4Fa2b works too), that if it has seen a word follow in the story, that makes it even more likely, even though it was never seen in the dataset. Also, when you talk about trees etc. it will boost up not only tree candidates but leaf, grass, etc. candidates. It uses something like Word2Vec not only to recognize the sentence (it also focuses on important words, so as to summarize it), but also for voting on the prediction candidates - see!

https://www.youtube.com/watch?v=RWd-LBgC9UM&feature=youtu.be

This allows it to consider/recognize very, very long unseen contexts, by using something like Word2Vec to find context matches that aren't exact and by focusing on important words that are more rare; and then, to stretch even longer and become more accurate as well, it boosts candidate predictions using remaining energy from prior activations. More recent words boost the predictions the most; they have not lost as much energy.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 02, 2020, 12:55:38 pm
Hehe they are upvoting meh!!
https://www.reddit.com/r/agi/comments/gbv8bg/video_my_tests_show_what_makes_gpt2_tick/

it's at 5 upvotes
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on May 02, 2020, 01:36:31 pm
You know what would be a good form for a robot - in a text-based system?
Try plugging in AD&D second edition missions, and you have 2 people in the "dialogue": the dungeon master and the player.

It would only have to be 2 components, because the dungeon master counts for all the other players, maybe.
This way, it can occupy the position of both the dungeon master and the player in the "dialogue".

Then if you wanted it to be a real robot, you'd need to do a computer vision thing, where the world turns into the dungeon master text,
and then the robot acts as the player, so you have to convert the player text into the robot actions. At all the moments where the player succeeds, add a doggy treat, and it'll always search for these moments.


Then you just have to non-stop type in fast-forward for a couple of years to build up the 40 gigabytes of text, and then it should be ready to go!  =)
Title: Re: Releasing full AGI/evolution research
Post by: Yervelcome on May 02, 2020, 10:52:50 pm
I'd like to give that robot DM the knowledge of what it feels like to walk through a forest killing goblins.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 03, 2020, 04:47:56 am
I've been coding my own text predictors and researching them hard, and I notice the key is frequencies of what usually comes next after your context [th], e.g. you see [e] has a high observation count. But you want the past window to be very long, and to grab your entailment frequencies from those recognized features, e.g. [my dog ate the][k], which allows good letter prediction. GPT-2 predicts BPE words. You actually mix multiple windows, because longer context matches are rarer but you can at least get some frequencies of what's been seen to follow them. To help find longer matches you'd want to "translate" words (cat=dog), accept appearances in different positions, and focus on rare words that summarize the context, so that when you look at the past 30 words you can find multiple matches in memory even though there is of course no experience exactly matching it - the alternative words, positions, and filler content are in there, but are similar or don't matter. So in the end, frequencies are running it, and even the recognition cat=dog is based on discovered shared contexts, based on frequencies. Probabilities run it, and if a match is not exact then its predictions will all get less weight.
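A minimal sketch in Python of that window mixing, on toy data: train letter models for several context lengths, then blend their next-letter frequencies, weighting longer (rarer, more specific) matches higher. The weights are assumed; translation and rare-word focus would stack on top.

Code:
from collections import defaultdict, Counter

def train(text, max_order=4):
    models = [defaultdict(Counter) for _ in range(max_order + 1)]
    for order in range(max_order + 1):
        for i in range(order, len(text)):
            models[order][text[i - order:i]][text[i]] += 1
    return models

def predict(models, context):
    scores = Counter()
    for order, model in enumerate(models):
        ctx = context[len(context) - order:] if order else ''
        weight = 4 ** order  # longer matches get a much bigger vote (assumed)
        for ch, freq in model[ctx].items():
            scores[ch] += freq * weight
    return scores.most_common(3)

models = train("my dog ate the kibble and my dog ate the kale ")
print(predict(models, 'my dog ate the k'))  # 'i' (kibble) and 'a' (kale) lead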

Yes, what I show in the video appears to help prediction by making the predictions more "similar" to the *rare* story words (especially more recent words); it can look at ALL the past context. The main prediction in these algorithms, however, comes from looking at the past e.g. 3 or 20 words to get multiple "similar matches" and see what usually follows the matched contexts. You can look farther back if you 1) attend to only rare words and ignore e.g. 'the', 2) use similar words ('cat'/'dog'), and 3) use similar positions.

When you know "Paris is the capital of France" and see a new prompt "The capital of France is " you predict Paris with o-k accuracy because the context mostly matches this (and a few other things in your brain), and the 2 words that are switched around exist but with similar positions.

A good question is: do story words actually take their own vote on the prediction candidates? Or do we only use context matches to see what comes next? Well, if I keep adding the word 'cat' to the start of my prompt, it makes 'cat' more probable; inch by inch the probability rises, and it would be unlikely that context matches alone are finding that this is what commonly follows. Below is a new video testing whether the prediction is influenced by context matches solely, or whether it actually uses all story words to mindlessly vote on the next word as well (if the input is all 'cat', it's likely it will continue saying 'cat' or similar).

https://www.youtube.com/watch?v=kF8U2FD9JXc&feature=youtu.be

I could try these inputs in Allen: 'bat', 'bat bat', or 'bat bat bat'; or 'wind', 'wind wind', or 'wind wind wind'... and no matter the word used, it will predict the same word, with more probability the more times it occurs. In the dataset it trained on there are only briefly similar phrases, and I don't think they predict the same word that occurs in them. Yes, my input matches them more and more because of similar words, and hence the prediction will be similar, but I don't feel that out of 40GB there are enough "matches" to achieve that.

Keep in mind it predicts the *same* word. You'd think 'bat bat bat bat bat bat' would match things like 'my bat saw a bird on a bat but bats fly in bat caves' etc. and would often predict only similar words like 'cave' or 'bird'... how many matches could you get that incrementally improve the prediction of 'bat'!? Impossible.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 03, 2020, 07:03:16 am
Thought Experiment - What does a body do for a brain?


In this thought experiment, I take a look at how the brain uses input and output from its body.

Let's imagine we have a brain in a body, and it is stationed in a large laboratory that enhances its research and exploration. The reason it is not stationed in a normal home is that technological advances come from experiments and data processing, not from sleeping, eating, and sitting on home furniture.

The lab and body won't intelligently move on their own, so let's turn our focus to the brain now.

The brain cannot make any non-random output decisions yet, as it has had no input experiences. To produce better output it needs input first. So the first thing we can do is either feed it lots of diverse image/text data, or let it try to walk forward and receive data about which motor actions produced motion. So far so good; we just made it store its first experiences.

Up to now the brain has only received random input data (note the real-world data isn't random, but the sourcing is) from all sorts of sources. It didn't decide where to collect data from, as it couldn't output any non-random decisions.

Now our brain can decide to tweak its walking skills to further improve its speed, or it can decide to collect text/image data from certain sources, such as particular websites or particular real-life experiments. For example, it may be trying to invent better storage devices and want to see if its predictions are correct, or it may simply want to collect data from there. Testing its predictions is also data collection, because it boosts the probabilities of its already existing beliefs.

The trend here seems to show that as it collects data, it is getting more precise where to collect it from. Without output from the brain, the brain could never collect data from specific areas. The brain is learning where to collect data from.

The 2 uses the brain has for output are: 1) specific data collection, and 2) implementing solutions, ex. bringing a new product to market and seeing the mission completed (this is also data collection).

Our brain, if it had a solution to a big problem on Earth with all road-bumps covered, could of course just tell us it. It wouldn't absolutely require a body to implement a "plan".

The "coming up with a *new* plan" is done in the brain, it needs a lot of on topic data also. The output of the brain to the body is just to collect more desired data.

What is most interesting is when you have a lot of data, you can make a lot of connections in it and generate new data by using a method called Induction.

So what do yous think about the idea that we could make a powerful AGI without a body? I mean, it would still have output to talk to us, and input from various websites and from experiments it asks us to do, but it wouldn't need a LOT of real-life experiments if it has a lot of data, because at that point it can mostly generate its own data and fill in gaps.

So most of its output in that case would be either a solution to a problem, or a few requests about where to collect new data from if its solution isn't ready yet. Other than that, it would mostly be doing induction internally using huge amounts of data. After all, experiments are only for collecting data; we can give it lots even if not from precise tests.

My point here is that AGI creates new data and is an induction engine, and it works better with huge amounts of diverse data and on-topic data as well. That's all its input does - provide data. The output is to collect certain data. But AGI *needs* to generate *new* data using all this data, and/or find part of its solutions in data. For example, finding a device in nature or an advanced civilization would be a solution that eliminates many sub-goals. It could read about it in data too, if it trusts that data.

In that sense, AGI is about re-sorting existing features/data into new data/ devices or skills. To do that it needs a lot of data. AGI generates the future using a Lot of context.

What do yous think? Can we sort of get away without a body and just make AGI in the computer, talking to us using its text/vision thoughts? And can we get away without doing lots of specific experiments from the right locations and times, and just use the slightly-more-random big data? To me it appears yes, we can. The AGI could still extract/form new answers from the large data. You know how Prediction works, right? You can answer unseen questions or fill in holes in images. So AGI can just 'know' answers to things using related data. And what if it could just watch microscopic data and do all sorts of random experiments to see what "happens" and build a better model!? True, brute force is not computable in our case, but it's an idea.
Title: Re: Releasing full AGI/evolution research
Post by: HS on May 03, 2020, 08:18:50 am
To the degree and distance that things can be predicted, we should try to build a predictor AGI to do that. It's a visionary project with potentially great value to humans and sundry.
You are so devoted, and have invested so much time into this project, that it's probably difficult to talk to others about it in a productive way, because you are the only expert on this specific hypothetical technology. It's mostly still in your brain, as far as I can tell. I'll be curious to see how you will be able to manifest it for real, and how good such a large-scale predicting machine will be able to get. It's definitely possible.
Title: Re: Releasing full AGI/evolution research
Post by: WriterOfMinds on May 03, 2020, 04:41:24 pm
@LOCKSUIT: The latest seems like one of your more coherent and easily understandable posts.  More like this, please?

It's my opinion that AGI does not need to be embodied, though I know that there are many who would not share that opinion.  However, I suspect that there's more to making an effective bodiless AGI than just "feed it a ton of data." 

The prediction machine you're describing could easily be a Chinese Room.  If you don't know what that is, I recommend reading up on it. For any input it is given, the Chinese Room produces reasonable output, but it doesn't really "understand" anything because it has no symbol grounding ... i.e. it can effectively manipulate data, but it has no idea what the data "means" to itself or anybody else.  So it speaks coherently, but does not communicate; it has activity, but is not really an agent.  A Chinese Room could be useful, but whether it qualifies as an intelligent entity is debatable.

Then again, given what your particular goals are, so long as it produces "discoveries" in its output you may not really care?  However, providing your AI with some symbol grounding could still be a faster and more accurate way to achieve results than relying solely on pattern-finding in otherwise meaningless data.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 04, 2020, 06:39:00 am
Quote
@LOCKSUIT: The latest seems like one of your more coherent and easily understandable posts.  More like this, please?
😃 Well... They kept deleting my posts on Reddit.... So I had to make it really really really simple to grasp so that 0 energy was needed to realize it... But sure.

But in text, there is meaningfulness/ grounding. Words share context, you'll find in big text data that cats and dogs both run, eat, sleep, have tails, etc. And you can describe what a word means by using Prediction or Translation ex. 'loop' is a cyclic iteration of some process repeated many times, or 'loop' = 'iteration' etc.

All sensory data can associate; all a body/output does for a brain is collect yet more data, but from non-random sources. AGI is all about collecting/creating new desired data from old states of Earth/old data... self-modifying data... Earth evolves/generates itself. Intelligence at its root is how long a machine can maintain its form, hence it seeks to understand the universe and grow in size to create an army.
Title: Re: Releasing full AGI/evolution research
Post by: Art on May 04, 2020, 04:23:33 pm
"...So I had to make it really really really simple to grasp..."

To borrow a quote from Einstein: "Everything should be as simple as it can be but not simpler!"

Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on May 12, 2020, 07:57:21 pm
Hi Lock :)

Any new discoveries lately?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 13, 2020, 07:44:03 am
No more biggies yet, but will share more my current work soon...

I mostly have more to implement than need to discover...

The remaining puzzle pieces to "my AGI" won't exactly come from coding my current plan though, but rather more back to the drawing board.

It would be best if I had a real-time chat team that could "construct AGI now" and come to some sort of agreement on an architecture. Person A would be like, "no, because this makes more sense", person B would be like, "oh, then you need this", etc. Then we could finally make AGI, instead of our 1-man missions.

If we don't hurry, we're all gonna die.... We have the chance to become nearly immortal young-again kings, let's do it...
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on May 13, 2020, 08:17:55 am
Quote
No more biggies yet, but will share more my current work soon...

May I propose composing a bit shorter video this time (if that's your favorite form of expression)? It would be nice if you structured your form of expression into chapters, sections, paragraphs... It helps understanding. Also, there are templates of expression that researchers usually exhibit, like:

Title
Abstract
Related work
Your contribution (theory)
Experimental setup
Results
Conclusion
References
Optional appendix with supplemental details

It doesn't have to be this exact template, but some form of structure would be desirable.

Quote
I mostly have more to implement than need to discover...

The remaining puzzle pieces to "my AGI" won't exactly come from coding my current plan though, but rather more back to the drawing board.

Yeah, those are cycles I found in my work too. It alternates between thinking things out, implementing, then thinking more thoroughly, then reimplementing again, ...

Quote
It would be best if I had a real-time chat team that could "construct AGI now" and come to some sort of agreement on an architecture... Then we could finally make AGI, instead of our 1-man missions.

It is very hard to gather a crew, and there are issues about being a leader, and being a follower, all the way down the command chain. I generally don't like being a part of that kind of structure. It's more like everyone has their own plan, and others have to abandon their plan to follow the leader. I don't want anyone to abandon their plan for me, and I don't want to abandon my plan either. So I choose to work alone.

But it might be a good idea to create a collaborative-thinking platform where people would have a structured chalkboard and choose on their own whether to bring up a new issue/sub-issue or help solve existing ones.

Quote
If we don't hurry, we're all gonna die.... We have the chance to become nearly immortal young-again kings, let's do it...

I believe it is not that bad. Dying could be a bit of an itch, but we will be neither the first nor the last to die. Anyway, somewhere in the future someone might find a way to bring us back from death, though it may be that it is better to be what we call dead than to be alive. It is worth checking and being aware of the difference anyway. Personally, I would want to know what it's like to be dead. My presumption says that it is the only ethical form of existing, and that being alive is a gift we got from someone, a gift we would want to return one sunny day to that someone. This is just a feeling; I might be wrong.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 13, 2020, 08:53:00 am
It is well known in Evolution that smarter machines emerge, and what makes them smart is that they can survive longer by modeling the world with inborn reflexes, schooling, and discovery. We evolved vocal cords and hands to communicate ideas and build on them. Everyone's root goal is survival: shelter, food, and breeding. Sure, you can like race-car boats and skydiving, but a regenerative, tool-building army is better at cooperating/competing over limited resources. The ultimate technology will be very smart; it can instantly make you anything at whim. But it isn't just anything that goes on in physics that persists - it is the things that maintain their form: duplication of information, mutations, food finding, radiation (waste). So, because humans can't live forever and are perplexed at how to achieve it, they criticize the ability to achieve it, even though they'd take it right up if they never had to age! It probably wouldn't even be a thought; no one would know ageing had ever existed, they'd just carry on living. Machines can be programmed to erase memories and be happy always. Most humans say they want to live at the moment; some do say they don't, but most won't. It's because of physics.
Title: Re: Releasing full AGI/evolution research
Post by: Art on May 13, 2020, 02:30:24 pm
...and to dust, you shall return...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 13, 2020, 05:28:05 pm
Had to re-watch my new video below to make sure it has everything most important in it, as intended. Looks good. Do watch the full video; some things come further ahead, and there is never a dull moment. First read the text below until you reach the video, it's important...

I guess I'll give a little text speech below to go with it as well, to solidify the presentation, just in case. Do note that below there are also many more important paragraphs; this is just a presentation one for the video though:



As shown in the video, the brain is a collection of mini hierarchies, re-using shared feature nodes linked as contexts.

My working code uses a trie/tree and each letter and phrase seen up to now has frequencies stored based on how many times it (a node) was accessed. Larger phrases have fewer appearances. These are strengthened connections/ weights in the hierarchies!! They decide how much energy goes to a parent (how open the channel is). Multiple nodes in my code get activated and each has multiple parents that get activated too, and some are shared by other recognized nodes ex. 'the a' and 'he a' and 'e a' and ' a'.

You can see as well that if it recognized multiple similar nodes (translation), they would also share similar prediction candidates. All this works on its own. No backprop! We increment node frequencies based on node accesses, and related nodes like cat/dog are discovered/activated based on shared context leaking energy from the cat node to the dog node on its own.
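Here is a minimal Python sketch of that counting idea as I would simplify it (class and function names are made up; my real code stores 17-letter branches):

Code:
class Node:
    def __init__(self):
        self.count = 0       # times this node was accessed = connection strength
        self.children = {}   # next letter -> Node

root = Node()

def observe(window):
    # Add one window of text; increment counts along the path. No backprop.
    node = root
    for ch in window:
        node = node.children.setdefault(ch, Node())
        node.count += 1

def predictions(context):
    # Children of the matched branch are the next-letter candidates with counts.
    node = root
    for ch in context:
        if ch not in node.children:
            return {}
        node = node.children[ch]
    return {ch: child.count for ch, child in node.children.items()}

text = "the cat ate, the cat ran"
for i in range(len(text) - 3):
    observe(text[i:i + 4])
print(predictions("cat"))   # {' ': 2} - 'cat' was followed by a space twice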

The prediction candidates that entail the matched ends also retain energy from prior activation - more if activated more recently (large proof is below, BTW), and less if it was a similar node that activated them. That's Temporary Energy; it can look at all of the last 1,000 words!

Permanent Energy is always-active memory: Rewards. Look at Facebook's Blender chatbot, which uses a Dialog Persona to make it always talk about its agenda! It can have multiple goal nodes.

My design allows goal-node updating by leaking reward to similar nodes, ex. food=money or food=dinner, and now I will start predicting/talking the next words about money. Root goals are not as changeable; the artificial rewards are sub-goals.

The energy in the net defines itself over time; energy from multiple activated nodes leaks to a single candidate prediction (top-k predictions - usually the most probable one, especially if its probability is much higher than the other candidates'). My code currently stores the predicted Next Letter Online, and that is what generates new discoveries that are somewhat true, depending on how confident the prediction probability is.
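That winner-picking could be as simple as this sketch (all constants are invented): softmax the top-k candidate energies, usually exploit the best one, occasionally sample for the mutation.

Code:
import math, random

def choose(energies, k=10, temperature=1.0, explore=0.1):
    top = sorted(energies.items(), key=lambda kv: -kv[1])[:k]
    exps = [math.exp(e / temperature) for _, e in top]
    probs = [x / sum(exps) for x in exps]
    if random.random() > explore:
        return top[0][0]   # exploit: the most energized candidate wins
    return random.choices([w for w, _ in top], probs)[0]   # explore/mutate

print(choose({"street": 5.0, "road": 3.5, "cat": 0.2}))   # usually 'street'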

A good brain exploits and uses the likeliest nodes as prediction, not random data collection. But in dreams, you can see we do not generate/talk about our desired/probable nodes but random ones, especially activated ones from the last day. The brain wants you to explore and generate more randomly by not using Permanently Rewarded goal nodes, and to look around the last day's experiences, searching for a while for discoveries.

The goal is to see how to predict/get to the desired outcome... It needs a lot of data to be sure it "made it" and isn't simply being told "aliens arrived, you can stop working, no artificial organs now". It sometimes has to search for a while, and go through many sub-goals, until it fits with the data... This part massively confuses me: how it actually knows it reached the answer / implemented it in real life / has a solid discovery... Or, I mean, how it knows which sub-goals to make and which get met... Pretty much, we want it to make many desired discoveries, and to listen to us if it needs more data (either time to implement its idea or, in other words, feeding it new data... to get new sub-goal question rewards).

--------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------

(video) How can I improve my AGI architecture?

In the video below, I walk you through a fair amount of the AGI architecture I've been working on for 5 years. I'm looking to learn whether I am missing something or am on to something. The design is meant to be very, very simple and to explain a lot of how thinking occurs. Below is how my text-predictor code works (100MB compresses to approx. 21.8MB); please read it twice before jumping into the video, as you will learn some fundamental things all good text predictors do using frequency. Frequency is also used for discovering that the word cat=dog. Note that that compression is for evaluation and is different from compressing a network to learn a better model. I should have also included in the video that Summarization, Translation, and Elaboration would be controlled by how much energy is allowed - you only say important features when you Summarize, not frequent or unrelated or unloved words.

How my text predictor/ compressor works (100MB>21.8MB):
My algorithm has a 17-letter-long window that steps along the input file 1 letter (byte) at a time, updating a tree as it sees new data. The tree's branches are 17 nodes long, because the algorithm adds each window to the tree (after it finishes the search process described next) and updates node counts whenever it passes a node. For each step the window takes, the algorithm runs 17 searches of the tree, each one letter longer than the last. The child leaves (the letters following a searched branch) are the predictions, with the counts seen so far in the file. Layer-1 nodes are children too and need no match. The tree is storing the frequency of every 1/2/3.../17-letter string seen so far, and the children are what allow you to predict/compress the next letter accurately.

These 17 sets of predictions must be mixed, because while the longest set is more accurate, we have fewer statistics for it - sometimes only 2 counts. We start with the longest match found, ex. a 14-letter match in the tree. The 14th set of predictions may say it has seen come next: a=44, b=33, f=25, w=7. I sum a set's counts to get a total (in this case 109), then divide each count by the total to get probabilities that all add up to 1, ex. 0.404, 0.303... Now, we still have 13 sets to mix, so we must remove some share from each. What I do is check the set's total counts against a Wanted Roof, ex. 109<>300 (maybe we don't even need to mix lower sets if we have enough stats), so in this case I cut each prediction's share by about 1/3rd, and we still desire 66% more stats. For the next set, if say we have 200<>300, I take away 2/3rds from the 66% - meaning we still desire 22%, not 66% minus 2/3rds = 0%! I take away the share got OF the share still desired. A little bit of the lower sets therefore always leaks in, which is better because we can never be sure, even if we surpass the Roof by a lot. Besides, it gave better results.

The Roof is decided by how many different symbols are predicted in the set (total unique symbols), so if I have 2 then the Roof may be 8 counts wanted. Also, we get a slightly different Roof depending on which set we are on: if we have 4 letters in set #14 then the Roof is ex. 33, but for set #5 it is ex. 26. Based on the Roof's size, a curve's bend is also modified. This Activation Function curve/threshold gives small/large total counts in a set an even smaller/larger total (it isn't used in the Arithmetic Coding, only for deciding how much share this set gets in our mixer). It is meant to be an exponential activation. Finally, a global weight is given to each set, ex. the 14th set is always given 0.7 of the weight it was going to get, lol. I hardcoded the numbers for now, but the code isn't grossly large of course; if they were adaptive and based on the data, the compression would be even better. I just noticed I do exit the mixing before reaching lower sets if the Roof is ever surpassed; I'll have to test if this is useful.

The Arithmetic Coder takes the combined sets - the prediction probabilities are combined a, b, c + a, b, c + a, b, c... = a, b, c (normalized so all the predictions add up to 1) - then takes a high and low bound from 1 to 0 and narrows in on the middle between them, using each prediction's share, until it matches the actual letter at the end of the window (the same process whether compressing or decompressing). So say we stop once we reach b in our set a, *b*, c: we are now in a float range of ex. 0.45-0.22 (a width of 0.23), and we narrow again once the window on the file takes another step. The encoding decimal keeps getting more precise, storing the whole file. To work in 16-byte float precision we need to carry away locked digits: if the high and low are both now 0.457594-0.458988, we store '45' and continue with 0.7594-0.8988, taking the middle of these 2 to make the decimal more precise. This long decimal is then stored as a binary number, ex. 6456453634636=10100011100111010011.

I didn't implement having the window store the last few letters as shorter branches, i.e. the 17-letter window adds itself to the tree, but before predicting the next letter it could also add the 16, 15, 14, etc. as shorter branches, which would help just a 'bit' more. I also didn't implement removing, from the lower sets, the counts that come from the higher set, because it hurt compression - i.e. if there are 9 counts total in set 3 and 99 total in set 2, then 9 of the counts in set 2 are the same observations and 'should' not help us reach the Roof. I'll look into it more. Lastly, escape letters: the first set we mix is a dummy set with a super small weight that contains every possible letter, in case we need to encode/decode a letter not yet seen in the file; it hence requires a small room in the AC's high-low bounds. I also hardcoded each probability in this dummy set; common letters get more weight.

Compression/decompression takes 2 hours and 16 minutes for 10MB, but Python is slower. RAM is fairly big because I didn't implement the pruning. My algorithm handles incomplete/noisy information (uncertainty) unsupervised and Online, hence the mixing of window models. A better net, or net compression and/or file compression and insight extraction (not decompression of the FILE!), faster code, and less RAM Working Memory used all lead us closer to AGI, and smaller code does too (a bit).
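To make the set-mixing step concrete, here is a stripped-down Python sketch of it as described above, with the Roof simplified to one constant and the activation curve and per-set global weights left out:

Code:
from collections import defaultdict

def mix(sets_longest_first, roof=300):
    # Each set: {letter: count}. Longest context match first.
    mixed = defaultdict(float)
    desired = 1.0                        # fraction of weight still wanted
    for counts in sets_longest_first:
        total = sum(counts.values())
        if total == 0:
            continue
        got = min(total / roof, 1.0)     # how much of the Roof this set fills
        share = desired * got            # take that share OF what's still desired
        for letter, c in counts.items():
            mixed[letter] += share * (c / total)
        desired *= (1.0 - got)           # lower sets always leak in a little
    norm = sum(mixed.values()) or 1.0
    return {letter: v / norm for letter, v in mixed.items()}

set14 = {"a": 44, "b": 33, "f": 25, "w": 7}   # 109 counts: fills ~1/3 of the Roof
set13 = {"a": 120, "e": 80}                   # 200 counts: 2/3 of what remains
print(mix([set14, set13]))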

My code is in Python but for now I'm linking Shelwien's Green in C++, it's very similar. https://encode.su/threads/541-Simple-bytewise-context-mixing-demo

Video:
https://www.youtube.com/watch?v=-9mGm6175BQ

I think one key difference from ANNs for text is that their networks don't store nodes that can be displayed as solid letters and phrases as mine can. For example, the lowest-layer nodes a, b, and c may all point to the parent node 'abc', which has ex. a count of 5 seen so far, but the 'a' that builds it has only 3 accesses so far. So instead of blended words or phrases, like 'cotg' made from cat/dog, you might even get 'cOtG', where some children affect the node less. I'm unsure yet if that's useful.

From testing GPT-2 and making my own algorithm last year, I have strong evidence that nodes retain energy and that the frequency-based predictions are helped out by energy already sitting in related nodes. As you know, when you hear something, it remains on your mind for quite some time. The last 80 words read are all energized, stored in order but as chunks, and are not on paper anymore but in your brain! They *need* to remain active in your brain. The more activated a similar phrase node is, the more activated its prediction parents will be. Word nodes may also leak energy to other similar word nodes. The energy sitting around will therefore definitely add to the prediction energies, see? If 'leaf' was activated 40 words ago, and our predictor predicts letters from word nodes, the leaf, grass, etc. nodes will also be pre-activated some bit. These energies eventually fade off your mind exponentially.

We can see Facebook's Blender also uses Permanent energies, using a "Dialog" persona as they call it, making it *always* talk/ask as if it has an agenda - ex. being a communist. These nodes are hard reward-coded from birth and *should* update other related nodes to create new sub-goals for the food goal node. That node it will never change, since it is more reward-hardcoded - you know you can't change the food node, as it's critical for survival.
https://www.youtube.com/watch?v=wTIPGoHLw_8
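A hypothetical sketch of that Permanent Energy: a constant reward bias that is always added to certain goal nodes before predictions are ranked (the goal words and strength here are made up):

Code:
reward = {"food": 2.0, "AGI": 1.5}   # reward-coded goal nodes (invented)

def bias(predictions, strength=0.2):
    # Nudge predicted scores toward permanently rewarded nodes.
    return {w: s + strength * reward.get(w, 0.0)
            for w, s in predictions.items()}

print(bias({"food": 0.3, "street": 0.5}))   # 'food' gains a constant boost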

My main points here are that frequency in predictions runs my code; that recognizing similar phrases increases counts (found using frequency, with the closest in time affecting it most); that using energy to boost related predictions helps a ton; and that permanent reward does too. See how all that and more works in the hierarchies? What more can we do!? Can you add anything!?

I'm really excited if even just one of yous can advance the AGI design I'm at. I've seen a lot of ANN variants - variants of GANs, LSTMs, Autoencoders, etc. - and they seem to have things like residual connections, layer norm, convolution windows, many feedforward networks stacked, etc., while my design just sticks to a single collection of hierarchies. Of course you can get the same result by similar ways, or by breaking it down into multiple tools with math tricks. But I'm looking for a more explainable architecture that unifies everything into the same general network, and the math tricks can come later. That's why I say in my work that to predict the next word we, ex., look at the last context (like GPT-2 does) and activate multiple similar phrase nodes in the hierarchy and see what entails them all; they are all little judges/hierarchies. I don't hear many people saying this - just RNN this, GAN that, no actual straightforward theory.

Transformers have proven tangibly better than LSTMs in all areas (check out OpenAI's models and BERT etc.), and the Attention Is All You Need paper says, well, it's in the title - and it was written by Google researchers. Transformers are parallel and can process much faster than RNNs. And you don't need the recurrence or the LSTM schema, which is confusing.

I've read many articles on Transformers; they have a long pipeline with many parts, and after reading them all there is no explanation of how it actually works, anywhere - I'm the only one on Earth saying how GPT-2 works. There is some explanation if you look at Word2Vec or the Hutter Prize algorithms like PPM, but no one "knows" how GPT-2 works.

Energy remains in nodes and fades....see:
Improved edit:
Our brain is always dreaming, even when not dreaming or daydreaming. We recall stored features (especially energized or loved ones, ex. from the last day when asleep - but we can do that in the day too; it just makes sure you explore, though you rarely exploit in dreams, ex. working only on AGI) and recreate/create an experience in the brain - we can't feel the real world directly.

Even more proof:

The proof that Temporarily Energized nodes do affect prediction is not just the fact that recently heard nodes must stay active in memory, but also this: if you had only a tiny dataset, ex. 0KB, and were shown the prompt "the cat and dog cat saw a cat and the ", the next word is not going to be predicted well from stats - but out of the 10 words in that prompt, 3 are "cat", so our probabilities can slap 0.3 probability onto predicting "cat" next! This is much more powerful if we discover cat=dog by shared context: we can see cat/dog appears 4 times in the prompt. Often the past paragraph will talk about grass, leaves, trees, etc. if it is about trees. And because all paragraphs will always contain "the" more than any other word, we ignore common words.
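A tiny sketch of that Temporary Energy: each recently read word leaves exponentially fading activation that can be added to the frequency-based probabilities (the decay rate is invented; the full design would also down-weight common words like "the" by their global frequency):

Code:
def recency_boost(recent_words, decay=0.9):
    # More recent occurrences leave more residual energy on a word's node.
    energy = {}
    for age, w in enumerate(reversed(recent_words)):   # age 0 = most recent
        energy[w] = energy.get(w, 0.0) + decay ** age
    return energy

prompt = "the cat and dog cat saw a cat and the".split()
print(recency_boost(prompt))   # 'cat' accumulates the most energy (~1.8)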

Also, Permanently Active nodes and Semi-Permanently Active nodes have reward on them, which makes you talk about the question goals you love/desire. So our "GPT-2" would talk about likely ways to get what it wants. Mental RL. If nodes with Permanent activity affect predictions, it's more likely that Temporarily Active nodes also affect prediction.

With my viz in the video (the image of the hierarchy), input goes up and activates nodes, as does the predicted next word (parent nodes) - but only the winner node (usually the top-probability candidate). The energy chaining the text in my design does not need to flow back down the net (generate output) to do this, because energy just leaks and keeps leaking. As you talk to yourself in your brain, you hear the next word predicted, and it loops back into your net from the bottom - but maybe not, as I just explained: it would only loop back and activate the same node anyway. Another thing I said was you could duplicate the net and flip it so input goes up and output goes up and out, not back down - but again, that's unneeded duplication and doesn't make it faster in this case.

Also, humans usually read text word by word, not in parallel, hence far-back nodes keep losing energy; you must implement that in a parallel approach. And as it talks to itself and to humans, it can only generate 1 word of the future at a time and doesn't have it all yet. So for training on big data you could use a parallel approach, but not for new data. Also, the brain learns a bi-directional context around a word feature. When it predicts the next word it only uses the left-hand-side past, but its memory lets it see into the future before it writes the next words; so in this sense, learning a whole sentence fed in in parallel doesn't make the hierarchies any different - it increments frequencies (strengths), adds nodes/connections, etc. the same way as the non-parallel approach - and the brain predicts by looking ahead while also using bi-directional network storage to recognize the feature it is looking at.

So, learning data in parallel seems to work (storage-wise, all data/ relationships are/ can be captured), and prediction of new words/data is done in the net by leaking activity, bi-directional translation and future look ahead still work for prediction too.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on May 14, 2020, 07:32:47 am
Why only one static image through the entire video? I'm trying to suggest some form of presentation turned into slides. Also, you might want to turn your algorithm into pseudocode (https://en.wikipedia.org/wiki/Pseudocode) and show it in one of the slides. If it's too long, try to replace blocks of instructions with single descriptive lines that you can develop on further slides. That would make an awesome improvement to your presentation. I think the goal is to describe what you want in the fewest words necessary. Simplicity and conciseness should be an advantage, and they are easier to achieve through sound plus vision than through sound alone.

Also, planning what is to be said/written could be put out on paper as a first working version that you polish/shorten/optimize over days/weeks. Sometimes making a ten-minute video can take weeks if you are after perfection. As you approach perfection (you can come near, but you can't reach it), your final video should make a better overall impression: you would be taken more seriously, you would have more viewers, and the mods would finally have true insight into your work instead of turning their heads the other way after the first minute of the video.

Let me show you something:
I hold that (1) gives a much better impression than (2), and comparing those two, I got many more positive reactions on Reddit. Some of us less educated (including me) are not blessed with the eloquence people usually reach at university, so we have to spend more time to achieve a visible result.

Someone told me that people spend, on average, about 5 seconds on a web page once they visit it. So, when you are making a page, those five seconds are the most important for attracting visitors to stay longer. Likewise, it could be the case that the first 10-20 seconds of a video decide whether a viewer will click stop and go away, or continue viewing. So it might be worth spending some extra time on those first 10-20 seconds.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 14, 2020, 08:45:08 am
Everyone is suggesting to be clearer. True... Still, I hope yous tried reading/watching... it took me a full 29 days of work to make sure a lot of things were in my AGI/universe book intuitively - that thing is honestly not something I'm going to re-write for a while.

The new AGI vid did take 36 minutes and is more summarized key points; that much is possible. But still - 5 years' worth of research and a partially complete AGI blueprint, in 36 minutes plus a text to go with it, and you want it in 10 minutes :) !

Can anyone say from my new video above what confuses them? Maybe I can see your reactions... Where (and what) do you say "ohp, I'm lost, why is this useful or how does it do x" ?

The below is a format for Papers, but my work still presents the right things at the right times, I feel... I summarize, elaborate, share related work, show code and results, and the whole thing is very intuitively explained. I really don't want to explain my work in a way I'm not comfortable with; my way is best :D When I bring up OpenAI's GPT-2, I bring it up... All you have to do is collectively store it all - the order is there. It's not like I say "the cat bed food ate slept in bed bowl then went to" -_-

Title
Abstract.
Related work.
Your contribution (theory).
Experimental setup.
Results.
Conclusion.
References.
Optional appendix with supplemental details.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on May 14, 2020, 09:33:06 am
Quote
The new AGI vid did take 36 minutes and is more summarized key points; that much is possible. But still - 5 years' worth of research and a partially complete AGI blueprint, in 36 minutes plus a text to go with it, and you want it in 10 minutes :) !

There is a technique of gradual summarizing I like very much. First explain everything generally in 20 seconds (1). Then explain it again, but in 5 minutes (2). Then explain it thoroughly in whatever time you need, but by repeating (1) and (2) on each sub-section of the thorough explanation.

Five years is a lot of time, and it deserves careful thought about presentation. I'm not saying this or that way is the best, but I think freezing one still image over 30 minutes does not instill confidence in something that might turn into a serious advance. My advice is to at least put some thematic, textually structured slides behind the speech, and please make an effort to create the algorithm's pseudocode. That way you'll see more clearly the strengths and weaknesses of your approach compared to others.

I'm just trying to help, but it's your project after all, and you probably know best how to handle it responsibly. I can only wish you well.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 14, 2020, 09:51:18 am
Quote
"There is a technique of gradual summarizing I like very much. First explain everything generally in 20 seconds (1). Then explain it again, but in 5 minutes (2). Then explain it thoroughly in whatever time you need,"

I literally was just thinking about that on my own, cool eh? True.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 15, 2020, 09:54:31 am
Reminder: my big last posts on previous page(s), don't miss!

Something I posted on OpenAI's group chat on Slack with the Clarity team (ROFL):

I'm still trying to find a good angle to share/intake, as I'm sure I have insights, and sure yous know things I don't too. Circuits' goal is Clarity in understanding existing vision networks, and ultimately AGI in the long run, because they want to improve the design and therefore need more answers about how the brain works. Let's try this question for now: do the existing vision nets yous are focusing on use Backpropagation? I'm sure it results in similar weighting to what a no-backprop approach would create! So why not a more natural/efficient way of Learning? In my design for AGI, I stick to wiring together recently activated features, ex. 'h' and 'i' to get the new node 'hi', and I use node accesses to update the connection weights, so that if 'hi' was seen 8 times, the connections would each have 8 strength. This very idea runs my simple trie-based Predictor. Further, when 2 features like dog and cat both have snow around them, this would cause dog to light up when cat lights up, hence wiring cat to dog and updating that connection as well. Nodes with few frequencies get pruned/forgotten or blended with other nodes, ex. 'well hi there' and 'hello my friend' become 'hi friend my', or 'hi' and 'hello' become 'hielo'. Sometimes (I think) weights are rightfully unevenly stronger and some letters trigger it more than others, ex. 'hiElO'.

BTW did everyone know this Deep Learning trick to prune weights?
http://news.mit.edu/2020/foolproof-way-shrink-deep-learning-models-0430 (edited)

Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 17, 2020, 10:26:37 am
Intelligence just rose again;

Convolution, Pooling, and ReLU can be done in my design by the time delay of individual nodes firing - if "the cat ate food" is heard, it may activate the features "the cat" and "ate food", which may activate the node "the cat I saw ate food". Little parts get matched, sometimes not fully as shown, because the time delay/order is different; in turn, these partially activated nodes activate yet higher nodes partially too! Basically, it's OK if it's not an exact match - the order of parts of parts of parts comes out similar. For Pooling, we can reuse an already existing mechanism: when prediction candidates are activated in my hierarchy design, usually only the 1 top prediction is heard/spoken, meaning even though others do have energy, they get a lot less weight. And ReLU is exactly that.
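A guess at that partial matching in code: a higher node's activation is simply the fraction of its child features currently active, so inexact or reordered input still lights it up partially, and that partial energy flows up again.

Code:
def activation(node_parts, active_features):
    # 1.0 = exact match of all children; below 1.0 = partial activation.
    hits = sum(1 for part in node_parts if part in active_features)
    return hits / len(node_parts)

parts = ["the cat", "I saw", "ate food"]   # children of a bigger phrase node
print(activation(parts, {"the cat", "ate food"}))   # 0.67: fires partially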

&

1) All neural networks compress themselves to learn the latent salient features: they prune low-frequency nodes/connections, blend nodes, store features only once and increment frequencies (strengthening axon weights), trigger related nodes (translation), etc. Once a network is compressed / learns a good model of the data fed to it, it can predict/generate True data from the same distribution by using the top-k predictions, softmaxed.
2) Lossless Data Compression is another thing - the best Evaluation method for testing Predictors. A neural Predictor is really good at guessing the next letter/word, and you can store a separate file of error corrections (steering the top-k predictions to the correct one), and that compresses the file.

Object recognition is used in vision and in text neural networks. The goal is to work with strings of objects: sentences in time. AGI is all about updating where to collect or generate data from; it uses large context in a model to make decisions about / predict the future state of Earth:
https://www.reddit.com/r/agi/comments/gcln3p/thought_experiment_what_does_a_body_do_for_a_brain/

Elaborate? Maybe summarize, haha. In my movies, you see my network has a lot of things for Prediction: frequencies, translation, robustness to typos, activation functions, energy remaining, etc. Prediction is truth. It models the data distributions. Trust is based on context; truth. The 2nd major thing in my network is Reward: deciding what website or what mental thought to look into, or which motor-action tweaks to try, is a recursively updating process of where to collect/generate data from, and it evolves its own goals to achieve the root goal, Survival. This goal finding steers the prediction to a path it wants to meet, so that it knows "How" to get what it "Wants". The brain modifies long-term memory (the nodes that exist, etc.), short-term working memory (temporary energy), and permanent energy (reward). It updates all 3 to reach the reward answers.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 18, 2020, 06:08:25 pm
May 18 2020
Locksuite posts on OpenAI's Slack channel:

The article mentions Nick looking for curves in images and how he couldn't just choose 1 of 4 classifications for a given image, and it says, "We were surprised when we saw that activations fell naturally into different levels of activation." Really? I knew that. I actually know something deeper. Check out this image: https://ibb.co/x5J7s9k

If my hierarchy schema reads "cat", it activates the "cat" node including other nodes ex. "cattle" and "tac" and of course parts of itself ex. "at" and "t" to variable degrees. Each node has predictions of what comes next. They combine predictions by shared parent nodes. I have a real algorithm that does this. Energy flows rightwards only, you can't repeat the alphabet backwards naturally. Only how it was stored in order.

As shown in my image, the "cat" node activated will activate neighboring context nodes, and through these local channels and only through these channels will trigger/discover nodes like "dog" that share the same contexts - cats and dogs both eat, sleep, run, lick, etc. The cat, cattle, dog, etc nodes are activated by variables amounts and all mix predictions. It can recognize unseen sentences plus use many matches/judges for prediction.

My hierarchy schema can also discover/trigger my=your if it stores "my dog" and "your cat", because the shared contexts are, while not exact, similar - hence still leaking energy! Further, if rabbits and horses are both, like dogs, animals, 4-legged, cute, and 2-eyed, then rabbits trigger horses. This is getting deep now, very parallel: each node leaks energy, and each of those then does too...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 19, 2020, 12:40:19 pm
A natural and explainable brain? Another visit to my design.

New pictures are provided too.

I have been working on AGI full-time for 5 years now (I don't get paid), mostly on text sensory data, but it has turned out to be very, very insightful. I have a very large design with many of the "bells and whistles", while the architecture itself that runs it all is very simple and has made some of my friends cringe. Below I lay down a good portion of my work. I really hope you can help my direction, or that I can help you.

Nearly every part of my design/architecture can be found in the AI field. Hierarchies, yup. Weights, yup. Rewards, yup. Reward Update, yup. Activation Function, yup. Word2Vec, yup. Seq2Seq, yup. Energy, yup. Pruning, yup. Online Learning, yup. Pooling, yup. Mixing Predictions, yup. Etc. It's when I unify them together that I start getting a new view no one else shares. I'm able to look into my net and understand everything and how it all works together; that's why I was able to learn so much about the architecture.

I've coded my own Letter Predictor, and it compresses 100MB to 21.8MB; the world record is 14.8MB. Mine updates frequencies Online, mixes predictions, and more. I still have tons to add to it; I will likely come close to the world record easily. How it works is in the supplementary attached file.

So I'm going to present below a lot of my design, showing how I unify things together in a single net. And you can tell me if there's a more natural way or not. I've tested other people's algorithms like GPT-2, and they can accomplish what I present, but the natural way to do it is not shown in an image or ever explained like I explain it; they just stack black boxes on each other.

See this image to get a basic view of my architecture. It's a toy example. https://ibb.co/p22LNrN It's a hierarchy of features that get re-used/shared to build larger memories. The brain only stores a word or phrase once and links all sentences ever heard to it. That makes for an extremely powerful breeding ground. Note the brain doesn't store a complete pyramid like I show in my image, just bits and parts: a collection of small hierarchies. So think of my image as a razor-tooth saw, not a single very tall pyramid triangle. https://ibb.co/d4JVm55

Notice all nodes are too "perfectly" clear? Well nodes can be merged "whalkinge" and have variable weights "wALkinG" and be pruned "my are cute" to get a "compressed" fuzzy-like network but we can for now keep a clean hierarchy so we can easily see what is going on!

I have a working algorithm (trie/tree-based) that updates the connection weights in the tree when it accesses a feature (in the same time order, ex. a>b>c; cba is a different feature), so it knows how many times it has seen 'z' or 'hello' or 'hi there' in its life so far! Frequencies! This is my Online Training for weights. Adding more data always improves my predictor/model, guaranteed. I tested using not Perplexity but Lossless Compression to Evaluate my model's predictions. So now you can imagine my razor-tooth hierarchies with counts (weights) placed on the connections. Good so far. Starting to look like a real network, and it can function like one too! https://ibb.co/hC8gkFC
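Evaluating by Lossless Compression is easy to sketch: an arithmetic coder spends -log2 P(next letter) bits per letter, so the compressed size is just that sum over the file (the escape probability for unseen letters is invented):

Code:
import math

def code_length_bits(text, predict):
    # predict(context) -> dict of next-letter probabilities summing to 1.
    bits = 0.0
    for i in range(1, len(text)):
        p = predict(text[:i]).get(text[i], 1e-6)  # tiny escape prob. if unseen
        bits += -math.log2(p)                     # bits an arithmetic coder spends
    return bits

# A uniform predictor over 256 byte values costs 8 bits per letter:
uniform = lambda context: {chr(c): 1 / 256 for c in range(256)}
print(code_length_bits("hello world", uniform) / 10)   # -> 8.0 bits/letter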

Now for the cool part I want to slap on here. I hope you know Word2Vec or Seq2Seq. It translates by discovering cat=dog based on shared contexts. The key question we will focus on now is: how does the brain find out cat=dog using the same network hierarchy? Here's my answer below, and I want to know if you knew this or if you have a more natural way.

https://ibb.co/F4BL1Ys Notice I highlighted the cats and dogs nodes? The brain may see "my cats eat food" 5 times and then, tomorrow, may see "my dogs eat food" 6 times. Only through their shared contexts will energy leak and trigger cats from dogs. There's no other realistic way this would occur. The brain is finding out on its own that cats is similar to dogs, by shared strengthened paths leaking energy. So the next time it sees "dogs" in an unseen sentence like "dogs play", it will activate both the dogs and cats nodes by some amount.

We ignore common words like "the" or "I" because they will be around most words, it doesn't mean cats=boat. High frequency nodes are ignored.

Word2Vec and the like can look at both sides around a word to translate, use long windows and skip-gram windows, give closer words in time more impact, and especially weigh by times seen (frequencies). My hierarchy can naturally do all that. Word2Vec also uses Negative Sampling, and my design can likewise use inhibition for both next-word and translation Prediction.

Word2Vec uses vectors to store words in many dimensions and then compares which are closer in the space, whereas my design just triggers related nodes by how many local contexts are shared. No vectors are stored in the brain... Nor do we need Backprop to update connections. We increment counts, and prune low-frequency nodes or merge them, etc.; we don't need Backprop to "find" this out, we just need to know how/why we update weights!

There's such a thing as contextual word vectors. Say we see "a beaver was near the bank"; here we disambiguate "bank". In my design, it triggers river or wood more than TD Trust or a financial building, because although "near the bank and the building" and "near the bank with wood" both share "bank", the beaver in my input sentence triggers the latter sentence more than the financial one.

Word2Vec can do the "king is to queen as man is to what?" by misusing dimensions from king that man doesn't have to find where queen is dimensionally without the king dimensions in man to land up at woman. Or USA is to Canada as China is to India, because instead of them lacking a context they both share it here but the location is slightly off in number. But the brain doesn't do this naturally, just try cakes are to toast as chicken is to what? Naturally the brain picks a word with all 3 properties.

To do the king/woman thing, we need to see that the only difference is that man isn't royal; so the answer is the word most related to queen but not royal - hence, woman. This involves a NOT operation, somehow.
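For reference, the textbook Word2Vec arithmetic on toy hand-made vectors (not claiming the brain does this; the dimensions are invented):

Code:
import numpy as np

# Toy 3-d vectors; dimensions are roughly [royal, male, female].
vec = {"king":  np.array([0.9, 0.9, 0.1]),
       "queen": np.array([0.9, 0.1, 0.9]),
       "man":   np.array([0.1, 0.9, 0.1]),
       "woman": np.array([0.1, 0.1, 0.9]),
       "apple": np.array([0.0, 0.1, 0.1])}

# king is to queen as man is to ? -> apply the (queen - king) shift to man.
target = vec["man"] + (vec["queen"] - vec["king"])
best = max((w for w in vec if w not in ("king", "queen", "man")),
           key=lambda w: float(vec[w] @ target))
print(best)   # -> woman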

Ok so, when my architecture is presented with "walking down the" it activates multiple nodes like "alking down the" and "lking....." and "king...." ..... and "down the" and "the" and also skip-gram nodes ex. "walking the", as well as related nodes ex. "running up that" and "walking along the". My code BTW does this but not related or skip-gram nodes yet! What occurs now is all activated nodes have shared parent predictions on the right-hand side to predict the next letter or word. So "down the" and "the" and "up this" all leak energy forward to "street". This Mixing (see the Hutter Prize or PPM) improves Prediction. You can only repeat the alphabet forward because it was stored that way. Our nodes have now mixed their predictions to decide a better set of predictions. https://ibb.co/Zz91jQQ

My design is therefore recognizing nodes despite typos or related words. It can also handle rearranged words like "down walking the", by time delay from children nodes. Our "matches" in the hierarchy are many, and we now have many forward predictions; we can take the top 10 predicted words. We usually pick the top prediction; mutation makes it imperfect on purpose, which is important.

You may wonder, why does thinking in the brain only hear 1 of the top 10 predictions? All 10 nodes are activated, and so are recently heard nodes kept Active! If they were heard, you'd hear them in your mind, surely? If you imagine video in your brain, it'd be very odd to predict the next frame as a dog, cat, horse, and sheep, it would be all blended like a monster. The brain needs precision. So Pooling, as done in CNNs, is used in picking from top 10 predictions! Other nodes and predictions still are activated, just not as much.

Also, Pooling in my architecture can be done for every node's outputs, not just the final high layer! Pooling helps recognition by focusing. Pooling can be controlled indirectly to make the network Summarize or Elaborate or keep Stable. It simply says or doesn't say more or less important nodes, based on the probability of being said. Like, you may ignore all the "the"s, or you may say a lot of filler content that isn't even rewarding, like talking about food (see below).

When given a prompt, ex. "What do you want to eat? What?", you may first parrot the start exactly, and some of it may be said in your own loved words: I, fries, etc. Or you may just say the entailment. You might just say what they said and stop the forward energy flow. And you might just say "fries" in place of "What?". Why!? Because their words, and your loved words (fries, I, etc.), are pre-active.

One more thing I'll go through is Temporary Energy and Permanent Energy in my architecture. You can see Facebook's new chatbot Blender is like GPT-2 but it has a Dialog Persona that makes it always say certain words/ nodes. So if it likes food or communism, it will bring it up somehow in everything. Just look at what I'm writing, it's all AI related! Check out the later half of this guy's video: https://www.youtube.com/watch?v=wTIPGoHLw_8

In my design, positive and inhibitory reward is installed on just a few nodes at birth, and it can transfer reward to related nodes to update its goals. It may see contextually that food=money, so now it starts talking about money. Artificial rewards are changeable; the root goal is not as modifiable.

For Temporarily Active nodes: you can remember that a password is 'car' and later forget it, but of course you retain the car node itself. This is a different forgetting than pruning weak weights forever. GPT-2 is probably using the last 1,000 words for prediction by this very mechanism. The brain already has to keep the last 10 words in memory, so any predicted nodes that are pre-active from being held in memory get a boost. If you read "the cat and cat saw cats cat then a cute", you predict cat - the cat node has already been activated 4 times just recently. You're holding the words in your hierarchy nodes, not on paper anymore. So yes, energy is retained for a while and affects the Probabilities predicted!

I once played Pikmin for half the day, and when I went into the kitchen, things looked like Pikmin faces, or I saw them faintly but still somewhat vividly running around things. Dreams similarly cause more random predictions, drawn from the top 10 or 100 candidates. The predictions in dreams are not really good.

You can see how this helps. Say you have only read 100,000 bytes of data so far, and you now read "the tree leaves fell on the root of the tree and the". You have little training data so far, but you can predict well that the next word is Probably a word related to tree, leaves, etc., so leaf, tree, branch, and twig all get boosted by the related, recently read words. And it's really powerful; I've done tests in this area as well. The Hutter Prize has a slew of variants of what I presented, like looking at the last 1,000 letters to boost the likeliest next letter. That's good, but not as commonly accurate or flexible as word prediction using related words instead of Exact letters! Big difference.

I look forward to your thoughts, I hope I provided some insight into my design and tests. I hope you can help me if there is something I'm missing, as my design does do a lot in a single architecture. I don't see why it's a good idea to study it as a stack of black boxes without fully understanding how it makes decisions that improve Evaluation (prediction). While my design may be inefficient it may be the natural way it all fits together using the same nodes.

To learn more, I have a video and a different but similar run through my design in this file (and how my code works exactly): https://workupload.com/file/Y4XhZPYHzqy
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 22, 2020, 10:14:05 am
Earth formed about 4,500,000,000 years ago. The first duplicating cell on Earth emerged about 3,500,000,000 years ago. Humans, capable of improving their own tools and own design, emerged on Earth just 6,000,000 years ago. In the last couple of thousand years on Earth, we have been radically improving our vocal/phone/computer communications, data storage/computation, and transportation technologies, to name a few big ones. We now have huge skyscrapers and, every year or so, a better iPhone or AI. The data exchange/combination, mutation, and specialization lead to the ability to do it all again, but faster. It's all going to go down now (for Earth) in the next ~100 years. We will invent an artificially duplicating hardware/computer that is efficient, programmable, even adaptive. Simple nanobots. They will allow us to improve the nanobots further, make bigger and faster computers, and collect more diverse data. All of Earth will become a grey-goo nanobot swarm that can predict well (especially being a formatted, terraformed planet that is now a fractal - knowing where, when, and who everything is, using the least data to know so, is most efficient), regenerate super fast, create or become anything at whim, and will continue to grow in size by eating planets (although it will have to stay not so dense, or else it becomes a star/uranium and explodes radiation). We can already see Earth becoming a fractal: stores are built near stores, homes lined up...

The trend? Evolution is evolving longer-living structures. Longer Lifespans are the way Evolution works and what it prefers. Humans already seek longer lives.

I'm therefore not working on the wrong technology. AGI is the next species in Evolution. I have found hundreds of capabilities they will have that easily put them way ahead of us. Humans were smarter than apes by far, and AIs will be exponentially more so than us.

And my approach to AGI is not wrong. AGI needs not just more existing data/compute, but a smarter discovery Extractor/Generator to create NEW desired data for its held questions (duplicating old data with mutations). The output of AGI is only to either implement plans or update where to collect data from; those silly RL walker robots do this, and GPT-2 should too if we improve it to do so. It is specializing in where to collect new data from - which question, which source. I therefore don't really need a body for my AGI. Output is just for implementation or for updating data-collection specialization.

For example, the algorithm I made from scratch compresses the dataset enwik8 (100MB) to 21.8MB, which means it predicts pretty OK, and my net predicts better the more data it sees - for example, if I used the dataset enwik2 (100 bytes, lol) it'd compress it to only ex. 70 bytes. Get it?

SO, with the same dataset enwik8 of 100MB, how can I predict better if I don't have more data? Add more data. WHAT!? Yeah. Let me show you. When you find discoveries in the enwik8 dataset ex. cat=dog by shared contexts, you can recognize longer unseen sentences more robustly, and more! The world's best compressor can get enwik8 to 14.8MB. See?

To give a clearer example: if I window the last word of my context to predict the next letter or word, ex. "the [cat] ?_?" > "the cat ate", I know from up to 100MB of experience what follows it. BUT, if I look at it like "the [horse] ?_?" and "the [dog] ?_?" etc., these words usually share the same entailing words, so they will surely be helpful. And THAT gives me more data/insight. Patterns are in the enwik8 dataset; some words are interchangeable!

So here we see: basically, evolution moves faster near the end because of more data mutation. Hence more storage, communication, and compute improve the immortality of data lifespans. And not just more compute/data - virtual/extracted insights are where you get the most data. Hence AGI and a faster/bigger computer both advance evolution, but AGI is much more potent at doing so.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 22, 2020, 08:49:46 pm
https://www.youtube.com/watch?v=TF5cJqXBwhc
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on May 22, 2020, 10:26:06 pm
If you ask me, it's better than before, but try to pick a real example of compressing, like a short meaningful sentence, showing what happens to which variables/arrays on each loop step - a kind of walkthrough of what the algorithm does, step by step, accompanied by clearly shown input position/variable states/generated output. Just a suggestion.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 23, 2020, 08:08:51 pm
From another forum:

Quote
Intelligence requires that there [be] multiple possible futures, otherwise we would simply be mechanically unfolding a pre-determined destiny.

First I'll start with the obvious side of the coin. We know our universe is at least somewhat predictable; that's why we can repeat the same lab experiments around Earth and build neural models of the world to learn patterns. The laws of physics make our world unfold, at least partially, along a deterministic path. A computer simulation or calculator is also replayable - it's predictable. We are able to understand things because they are not random.

On the other hand, the word "random", by definition, means an outcome/result that varies maximally. So instead of a computer algorithm spitting out the number 5 predictably, you may get 1, then 8 next time, then 3, 0, 8, 6... If it wasn't [maximally] random you'd get outputs like 6, 4, 5, 5, 6, 4, 4, 5. So random just means wider variation. It doesn't mean it disobeys the laws of physics. It just means the way we view something ignores fine details. For example, a woman can write down which color dresses she'll show you, so she knows the order of colors but you don't; to you it appears unpredictable, but to her the dress order is down pat and she remembers it like the alphabet. Another example: you fill a glass with milk until it pours over the rim, but the side that leaks first is different each time. Why? Say the glass is perfectly flat at the top. It's the direction the human poured the milk that decided which side it poured over - the human didn't stand in the same spot each time! The definition of random I gave here basically equates to: you don't know something, so you output the wrong answer. But someone else can know what will happen! Lol. In other words, within the physics/laws we have, you can get an algorithm that outputs 5 each time, or outputs 4, 6, 5, 5, 6, 4, 4, or outputs 7, 1, 3, 0, 8, 4, 2, 8. And there's an actual reason behind it. Not magic. "Random", by this definition, is when you lack information: you don't know what will occur, but once you learn the mechanism, you know what would result. This assumes you can look inside the computer or brain to see the stored algorithm; if you can't know what's inside, then you don't know whether it will output 5 every time.
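
A tiny Python illustration of "random just means you lack information" (the seed value 42 is arbitrary):

import random

# To an observer who doesn't know the seed (the hidden state), this looks random:
random.seed(42)
print([random.randint(0, 9) for _ in range(8)])

# To someone who knows the seed, it is perfectly predictable - same "random" numbers:
random.seed(42)
print([random.randint(0, 9) for _ in range(8)])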

So, do we have another definition for the word "random"? Yes. I call it True Random. It would need to break the laws of physics. For example, an atom or particle shot into space, travelling, and after 45 minutes it decides to change its direction - with no reason it should have. Nothing touched the system and nothing left the system. Now, we already know our world is at least 50% not True Random, but predictable. And in computers there's a thing called redundancy that stops errors from popping up. You can run a car simulation perfectly each time, the same way each time! You could run a human simulation with no True Randomness - unless True Randomness is what makes us act the way we do. So, True Randomness may exist, and it may be helpful in making more robust predictors that handle uncertainty. You could just make your world/borg garage larger. Larger systems can avoid errors and damage better than brittle, delicate small systems; it takes longer for the errors to show up. So the borgs could more easily predict where things are at the high level. Now, one could argue that if particles acted truly randomly 50% of the time, it would show up in computer car simulations! But it doesn't. So the real reason we get errors is that there are faults at low levels we don't know about. That's all - not True Randomness. Now, can we solve this? Yes, we already are. Humans produce babies without the DNA information disappearing. We can repair cars indefinitely. But we can't know where every particle in our system is, for to do so would require knowing where the particles that make up our knowing are, which is impossible. You could make everything into solid cubes, but you still can't model your world perfectly, only approximately.

The 4th side of the coin is magic orbs from God herself. Unfortunately, if you were hoping for this to be a valid thing, you are mistaken. Magic has no place; magic has to be either True Randomness, Randomness, or the laws of physics. There's no, such, thing, as magic. Either a particle moves as expected based on its and/or other surrounding context/conditions -OR- some "move" or "particle" or "law" pops into/out of existence that truly is random. Say we had a genie ghost waving its hand with Free Will, granting wishes. The way that works is not by an existing predictive mechanism, but by popping stuff into existence - and it must be non-random stuff. Why non-random? Because otherwise the genie would not exist; it'd be illogical soup. But what sort of "dimensional ether" is remembering or directing non-random creation in real time? We need something already existing to do this. A designer who creates a designer who... So it's impossible.

Quote
Your compression thingy will basically produce something that spews language, gibberish actually because there is no world model or understanding behind it. Much more importantly, there is no path to general problem solving, or even generalized language gibberish spewing, just a specific language.

"Your compression thingy"

This shows you lack understanding. Gosh. Lossless compression is just an evaluation for the neural net predictor I made. I could use perplexity instead. Same algorithm, just a different test of how well it predicts data in the distribution.
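
To make the equivalence concrete, here's a sketch with a made-up toy predictor: under ideal arithmetic coding, a predictor pays about -log2(p) bits for each symbol it assigned probability p to, so compressed size and perplexity are two views of the same prediction quality.

import math

text = "abracadabra"
# Toy fixed letter probabilities; a real predictor would be adaptive.
p = {'a': 0.5, 'b': 0.2, 'r': 0.2, 'c': 0.05, 'd': 0.05}

bits = sum(-math.log2(p[ch]) for ch in text)   # ideal arithmetic-coded size
perplexity = 2 ** (bits / len(text))           # the same quality, viewed differently

print(f"{bits:.1f} bits ~ {bits / 8:.1f} bytes, perplexity {perplexity:.2f}")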

"will basically produce something that spews language"
"gibberish actually because there is no world model or understanding behind it."

Again, you're lacking here. Neural networks learn a model of DATA, be it text or vision - both are language. Which means they CAN learn PATTERNS. Patterns mean frequency: in a dataset you may see the letter 'z' or the word 'grommet' appear only rarely! Maybe nothing re-occurs! Maybe the whole dataset is tttttttttt. So you can predict/generate the likely future, such as the letter 'e' or the word 'the'. Now, because of these re-occurring letters and words, words like cat & dog can be found to share the same contexts. Dogs eat, dogs jump, cats eat, cats jump. Thank god the word "jump" appears at least twice lol. Else no semantics! SO: a neural model can learn that the letter 'e' appears very frequently, 'z' appears infrequently, 'cat' is very contextually similar to 'dog', and 'cat' is very different from 'jog'. Neural models help organisms survive longer in evolution. Even if you don't believe text data mirrors human vision/thought data, you can still trust the algorithm to work on ANY dataset by "finding" patterns. In FACT, the Transformer architecture used in GPT-2 works on vision and music datasets.
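
A toy sketch of "cat is contextually similar to dog, and different from jog" (the corpus is invented): represent each word by counts of its neighboring words and compare the vectors.

from collections import Counter
import math

corpus = "dogs eat food dogs jump high cats eat food cats jump high people jog daily".split()

def context_vector(word, window=1):
    # Count the words appearing next to each occurrence of `word`.
    vec = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in (i - window, i + window):
                if 0 <= j < len(corpus):
                    vec[corpus[j]] += 1
    return vec

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(context_vector("cats"), context_vector("dogs")))  # high: shared contexts
print(cosine(context_vector("cats"), context_vector("jog")))   # low: no shared contexts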

"or even generalized language gibberish spewing, just a specific language."

First of all, the algorithm I already coded from scratch can predict the next letter of any language and generate other languages too - Hindi, French, etc. You just feed it such a dataset and it learns the patterns. Currently I use enwik8. Now, my future algorithm, and the already-existing GPT-2 made by OpenAI, can learn cat=dog semantically by shared contexts; cat/dog are interchangeable, so it can recognize unseen sentences. It helps that it knows what entails a given word or phrase by looking at many similar situations from past experience. It can even learn hello=bonjour, if it is fed diverse data with enough French words! This works for vision too. And if you use text + vision, you will need to associate them at the time they were shown.

"Much more importantly,there is no path to general problem solving"

You've literally just asked me how to create AGI. AGI needs to solve many different types of Hard Problems. To do so, it needs a large/diverse model, not just so it can solve various domains, but so it can use all sorts of domains when solving a problem in a given domain. It needs to know frequencies, or IOW Cause > Effect probabilities of our physics (dogs usually breathe, not eat), to logically think about paths it COULD take. And it must take a path it desires, too, to reach the desired outcome. It must wait at steps until they are completed. It must update goals through induction/semantics: food = money = jobs = truck = wrenches. It will ask new questions and seek new data from specialized sources or questions. It may need to search/mutate answers before it mentally generates a good, well-backed/aligned answer. It needs to be told that when you look at 2+2=, it must be a precise answer, not 8, even though 8 kind of answers the question. It needs to be told that when you are unsure of the prediction for 2+2=, you must look at it a different way or collect more specific data; if it is unsure about 573+481= it can look at it a different way (assuming you are sure of 2+2=4 etc. etc.). You are told to resort to look at [5]73
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 26, 2020, 02:14:59 pm
Little update. I'm using all chars now in my predictive model; previously a few letters in my input were all ???? no matter which special char *should* have been there. My code is approx. 114 lines of Python. I could make it a tad smaller, and a Python pro could make it smaller still. It's made of 4 parts: tree storage, tree searching, mixing/weighting predictions, and arithmetic coding (the evaluation of the algorithm).
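
For flavor, here is a minimal sketch of what the "tree storage + tree searching" parts could look like - my guess at the shape, not the actual 114 lines: a trie keyed on recent context characters, storing next-character counts at every order.

from collections import defaultdict, Counter

ORDER = 8  # longest context length stored, like the ~8-letter windows described

def make_node():
    return {"counts": Counter(), "children": defaultdict(make_node)}

root = make_node()

def update(context, next_char):
    # Walk from the most recent character backwards, updating every order.
    node = root
    node["counts"][next_char] += 1
    for ch in reversed(context[-ORDER:]):
        node = node["children"][ch]
        node["counts"][next_char] += 1

def lookup(context):
    # Return next-char counts for each matched context length, longest last.
    node, found = root, [root["counts"]]
    for ch in reversed(context[-ORDER:]):
        if ch not in node["children"]:
            break
        node = node["children"][ch]
        found.append(node["counts"])
    return found

text = "the cat ran. the cat sat."
for i, ch in enumerate(text):
    update(text[:i], ch)

print(lookup("the cat ")[-1])  # counts seen after the full context "the cat "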

Reference: world's best compressor gets 14.8MB for 100MB (enwik8 dataset)

1,000,000 bytes in:
Shelwien's Green --- 256,602 bytes compressed
My algorithm --- 252,591 bytes compressed

Green on the full 100MB: 21,819,822 bytes
Scaling my ratio, I should at least reach: 21,478,751 bytes

update:
NEW: 251,699
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 26, 2020, 09:37:21 pm
Managed to remove 6 lines of code.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 26, 2020, 10:18:08 pm
How to use my code:
https://www.youtube.com/watch?v=3wTOSLOA9GM

My code:
https://workupload.com/file/Sbx7a5q77r3
The hardcoded part can be minimized; it has patterns.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 28, 2020, 10:21:13 pm
Confused? Someone was.

Now you feel the frontier of hard work not presented properly. Don't repeat this. Most do this.
You can adjust the number at the start of the video I'm on; it sets how many letters to compress - mine's set to 10,000.
I change the yes/no at the top to compress/decompress.
I change the bottom line to print input2, to get either the encoded (compressed) file or the decompressed output. You plug the encoding into the top where shown.
My dataset is in the folder shown: enwik8. Well, part of the start of it.
I was showing, at that part of the video, how out.txt was the same as the input file.
BTW, to decompress you need to modify the input file as shown so only 16 - 3 letters are in it... up to the x letter shown lol.
At the end I do a little calculation to get the actual bytes being compressed.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 29, 2020, 12:37:38 pm
mostly same but clearer rewrite:

Here is my [latest] python code I made from scratch.
And a new 1min video of me using it.

https://workupload.com/file/9x5Ft5EfBfn

https://www.youtube.com/watch?v=q0m-v9192o4

The hardcode in it can be minimized - it has patterns - I just left it for now. Most of the prediction accuracy/ compression is not done by the 3 long middle columns, actually.

Mine can compress 100MB to approx. 21.4MB. World record is 14.8MB.

My code is explained in my book. Basically, the more text my code sees, the better it can predict which letter follows the last ex. 8 letters, and I mix up to 17 such contexts, ex. "The cat ran_", "he cat ran_", "e cat ran_"...
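
A rough sketch of that mixing step (the weighting scheme here is made up; the real code tunes it): collect next-letter counts for every suffix of the context and give longer matches more say.

from collections import Counter, defaultdict

MAX_ORDER = 17

def train(text):
    counts = defaultdict(Counter)
    for i in range(len(text)):
        for order in range(1, MAX_ORDER + 1):
            if i >= order:
                counts[text[i - order:i]][text[i]] += 1
    return counts

def predict(counts, context):
    mixed = Counter()
    for order in range(1, min(MAX_ORDER, len(context)) + 1):
        suffix = context[-order:]
        if suffix in counts:
            weight = 2 ** order  # longer matches dominate (made-up weighting)
            total = sum(counts[suffix].values())
            for ch, c in counts[suffix].items():
                mixed[ch] += weight * c / total
    return mixed.most_common(3)

counts = train("the cat ran home. the cat ran away. the cat ran ")
print(predict(counts, "the cat ra"))  # 'n' dominates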

If my code found matches robustly, with similar/rearranged-position words, I could mix 8,000 matched experiences instead of 17. It would help compression a ton. Maybe I'd get 17MB.

And if I held recently activated words, they would be pre-active and require less data to predict what word follows "cat cat cat cat" - just use the already-activated nodes themselves! Similar words also become pre-active. Candidate nodes are already pre-active in the brain because the brain holds onto words it heard, so it's a natural thing; they are pre-selected/triggered for the softmax output. I examined how GPT-2 does this, and I once made a code that worked this way - it helped a lot.
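
And a toy sketch of the recency idea (the decay constant is invented): keep a decaying activation for recently seen words and read predictions straight off it.

from collections import Counter

DECAY = 0.9  # made-up decay per step

def recency_predict(words):
    activation = Counter()
    for w in words:
        for k in activation:
            activation[k] *= DECAY   # older words fade
        activation[w] += 1.0         # the word just seen is pre-active
    return activation.most_common(3)

print(recency_predict("the cat saw the cat chase the cat".split()))
# 'cat' stays pre-active, so it needs less fresh evidence to be predicted again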

And I also know yet more ways to get the compression down / prediction more accurate.

In my book I also link Shelwien's version in C++; I followed how his basically worked. His compression is 21.8MB.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 31, 2020, 01:43:16 pm
To do zero-shot or one-shot learning means you are getting [information] from somewhere. Same for multi-shot learning.

Learning (finding) patterns in big data improves intelligence.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 02, 2020, 05:36:32 pm
Did anyone here ever realize 1) more data makes AI make smarter decisions, and 2) you can get more data from the same-size dataset? Here are 2 ways how (I know many more):

What do you think?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 02, 2020, 09:34:54 pm
I suspected this for years but here's bit more proof:

Say you have only seen the prompt "cats" and can't generalize to dogs. With the predictions eat, run, etc., you can actually translate these, and that helps you predict better. While you've only seen 'eat' and 'run' follow 'cats', you know eat is dine - very similar, let's say - so you could say 'dine' has a higher chance of appearing than 'bike', all by looking at just the predictions!
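
A small sketch of this (the similarity numbers are invented stand-ins for word2vec/GloVe cosine scores): spread each observed prediction's count to its near-synonyms.

observed = {"eat": 3, "run": 2}   # next-word counts seen after the prompt "cats"

# Invented similarity scores (a real system would use learned embeddings).
similar = {"eat": {"dine": 0.9}, "run": {"jog": 0.8}}

scores = dict(observed)
for word, count in observed.items():
    for synonym, sim in similar.get(word, {}).items():
        # Each prediction leaks some of its count to its near-synonyms.
        scores[synonym] = scores.get(synonym, 0) + count * sim

print(scores)  # 'dine' scores 2.7 while never-seen 'bike' stays at 0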
Title: Re: Releasing full AGI/evolution research
Post by: HS on June 02, 2020, 10:17:07 pm
Maybe it could benefit from some larger higher-level generalizations and heuristics to then predict lower level ones. Think big sine waves made of smaller sine waves etc. Fractal sine waves. You can get good top down predictions, better than trying to get a correct big picture by studying the nature of details. Like having a general prediction for what a book would consist of, then dividing that idea, and elaborating on each part, etc.

The most efficient process for putting rocks into a bucket is biggest to smallest, that way all the pieces will fit together. The biggest pieces of data make a rough approximation of the idea, then progressively smaller pieces make increasingly accurate and precise approximations.  If you put small ideas or rocks in first, some parts of your model may be precise, but the whole thing won’t end up being accurate to life. 
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 02, 2020, 10:47:05 pm
Ah. I see what I've been doing wrong.

I should have started with a blow up doll. I knew it.

Oh wait, already been there.

That's a good discovery HS. Big rocks go in the bucket first. Then you shake it as well to use up all the space. Wahahahaha. You literally do nothing to make the bucket heavy. I heard a company (I forget which) is letting AI improve circuit boards like that - big components go on first.

Well, HS, that's exactly what my current code does: [long] context matches in the tree shown in the image above get the most weight during Mixing. You can do Byte Pair Encoding top-down too, but too high isn't needed, plus it costs way too many resources. Hmm, perhaps reward or temporary energy etc. could take weight first during Mixing - I never thought about that, even though I should have lol. For example, you may consider your friend's opinion on the future word to predict and ignore attention to other information; you just don't look at it.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 04, 2020, 03:27:36 pm
Big new discoveries. Big thing on the way. First some preliminary findings of less importance:


Crosspost:

I used to lol a lot myself; I later realized my research should not include too much lol. It [can] let others know you mean well, but scientific papers don't say lol a lot. So I guess that says something.

Oh - that's (scientific papers) akin to cops. Cops are trained not to smile or laugh on duty. They mean business. All emotion is gone. They focus on reason/ science/ ideas/ thoughts, not the primitive food/breed agenda. Hmm, but that isn't fully good now, is it? Our root goal is very important to us and to all future systems in evolution. Today many people are busy working (well, maybe not today lol), but we need to remember we have/need love inside, and that all our hard work is for the food on the table and long-term survival. We must not lose focus on high-level reasoning or caring about others.
Title: Re: Releasing full AGI/evolution research
Post by: Art on June 05, 2020, 03:29:10 am
Everything in moderation.

There's a time and a place for everything. O0
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 06, 2020, 01:39:59 am
Everybody has got to spread the word that we need to keep the economy going, to progress to future technology before we die. We have these computers and advanced algorithms; just recently we didn't, so we are getting very close. We can't take vacations. Something very powerful awaits us, so big and fun you probably don't realize it. Too many humans are so concerned about short-term things but not actual long-term survival! Just sad. They all disappear in the end after all that effort. We as a species will get much longer lives - just look at the coming human-AI species - but we ourselves can get in if we work hard. There's a huge gift, but it's not free; you're on Earth still!
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 10, 2020, 12:04:23 am
Isn't "Pooling" basically "Activation Function" !?


So, Pooling puts the largest variable "on show" and puffs it up bigger than it already is, big gets bigger faster aka Pools. The small don't really get included/ Attention.


As for Activation Function, it can be an S curve function, a large sum may be well over a threshold and gets a big boost therefore. It puts the big "on show", as well, like Pooling.

Big get more weight. Small might not even trigger much activation output. Only if big inputs enter does it activate a lot.


So, small variables don't affect the output really as much as they say they will. Big gets extra biggness.


?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 10, 2020, 04:09:41 pm
Hmmm, convolution is able to recognize features no matter where they are in the image. Essentially it goes bottom-up: local features decide the activation of higher features, and their matches pile up into higher activation up the hierarchy.

Max Pooling is the most common type of Pooling, and it ignores all the smaller values. But that tells me it could mix them - Average Pooling, and yes, there is such a thing - and I believe an Exponential Mixing would be best, so that small values are mixed in, but barely, to act "like" Max Pooling. The Activation Function typically does that too: if you have a big value, it ends up bloated much larger than it really is.
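
A sketch of that comparison in plain Python (the temperature value is arbitrary): max pooling keeps only the biggest value, average pooling flattens everything, and an exponential (softmax-weighted) mix sits in between, acting "like" max pooling while still letting small values weigh in barely.

import math

values = [0.1, 0.2, 0.3, 2.0]

max_pool = max(values)
avg_pool = sum(values) / len(values)

# Exponential mixing: a softmax-weighted average, where big values dominate softly.
T = 0.5  # arbitrary temperature; lower T behaves more like max pooling
weights = [math.exp(v / T) for v in values]
exp_mix = sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(max_pool, avg_pool, round(exp_mix, 3))  # exp_mix lands near max, not at it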
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 10, 2020, 04:55:19 pm
Pre-Release: Building On GPT-2's Successors "Blender" and "PPLM"

I'm writing something bigger, but I really want to release something new I discovered. I hope you can wrap your head around it for now.

Semantics/ Embeds/ Translation allows GPT-2 to recognize a sequence of text against various memories and "see" what word follows next. This helps so much. It lets it generalize and gather more experience. It's very real and really works.

Blender and PPLM built on GPT-2, or at least it looks exactly like that. They force it to talk about certain topics, like politics or flowers. They essentially gave it desires / a focus, instead of it talking about all sorts of domains, most of which won't help it Survive.

My big discovery is that we start off at birth wanting food and mating, and we recognize similar words by shared contexts - money, farming, shelter, cars, science, etc. - which get us our Survival needs to spread our genes, or AI designs (the new evolution). This infects the "money" node with reward, and forces the AI to start blabbering about money all day... It makes it specialize and evolve new goals, so that it collects/ generates new data from a relevant domain. All output of the AI exists only to control input, so that it doesn't intake data from random sources - websites, topics, or lab tests. It specializes its sources, so its inputs are non-random.

What this reward updating is, is Semantics/ Embeds/ Translation. The only difference is that it saves checkpoints of where to exploit, then searches there. So instead of GPT-2 asking "I will cure cancer by ", it focuses on viewing that semantically as mostly (or starting off already at) "I will repair cells by ".

RL for learning to walk does the exact same thing: it specializes its motor actions until it gets the most reward/ acceleration. In our case, our reward is Prediction Accuracy.
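
A toy sketch of that reward-leaking idea (the similarity graph and all numbers are invented): start with reward on the innate node, let it leak along similarity edges, and the strong nodes become the new topics/ checkpoints.

# Made-up semantic similarity graph (a real system would use learned embeddings).
similar = {
    "food":  {"money": 0.7, "farming": 0.8},
    "money": {"jobs": 0.8, "farming": 0.5},
    "jobs":  {"science": 0.6},
}

reward = {"food": 1.0}  # innate reward at "birth"

for _ in range(3):  # a few leak steps
    new = dict(reward)
    for node, r in reward.items():
        for neighbor, sim in similar.get(node, {}).items():
            # Reward leaks along similarity edges, attenuated by similarity.
            new[neighbor] = max(new.get(neighbor, 0.0), r * sim)
    reward = new

print(sorted(reward.items(), key=lambda kv: -kv[1]))
# food stays highest, but farming/money/jobs/science now carry reward too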
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 13, 2020, 07:05:34 pm
Another way of saying it but clearer:
I have realized a very large next step for Blender/ PPLM. I want to keep it short here but still fully detailed. So you know how GPT-2 recognizes the context prompt against many past experiences/ memories, right? It generalizes/ translates the sentence, and may decide bank=river, not TD Bank. This is one of the things that helps it a lot. Now, you know how humans are born with low-level rewards for food and mates, right? Well, through semantic relation those nodes leak/ update reward to similar nodes like farming/ cash/ homes/ cars/ science. Then it starts talking/ driving all day about money, not just food. It specializes/ evolves its goal/ domain. Why? Because it's collecting/ generating new data from specific sources/ questions/ context prompts, so that it can answer the original root question, of course. It takes the installed question wanting an outcome, ex. "I will stop ageing by _", and does what I said above - "recognizes the context prompt against many past experiences/ memories" - except it permanently translates into a narrower domain to create a "checkpoint(s)". So while recognizing a Hard Problem context prompt/ question we taught it/ installed, like "I will stop ageing by _", it jumps into a new translation/ view and creates a new question/ goal: "I will create AGI by _". It's semantics - it's gathering related predictions from similar memories, same thing - just that it is picking specific semantic paths and updating, just like RL. RL for text (prediction is the objective).
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 14, 2020, 03:20:54 pm
What I'm saying is:

Blender can be forced to incorporate certain topics into what it says, so it talks about ex. cars all the time, no matter what. Humans do too, but they evolve it. They start off wanting food or mom; then they can discover food = farming semantically and now talk about farming a lot more.

This agenda/persona updates/specializes into a narrow domain. It evolves its question. The output of AGI controls the input source to collect/generate data from. > It decides which lab tests to try, and those determine which lab tests to try next. At first, input-source selection is random data collection, just like those robots that learn to walk.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 16, 2020, 02:13:36 am
Short Prediction Survey
https://docs.google.com/forms/d/e/1FAIpQLSeQJGn07OF4jp3shUBlb3jVkq_R3ZcLipyDyaEfNbbyZIGP0w/viewform?usp=sf_link
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 16, 2020, 05:06:05 pm
Results are all over the place. The true answer was 'eagles'. I was testing if humans would find the pattern in each sentence that the 2nd word is always related to the final word.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 21, 2020, 12:01:00 am
Wrote something sweet today so will share it:

The brain loves to collect lots of data (sight-see) because more data allows it to solve future problems better.

Dogs love new toys (new data) because it lets them explore new problems. New data is more data.

Talking about exploring: exploiting is when you work all day in some domain you love (ex. AI) because other new data is not actually so useful. We evolve this filter/ our goals; we start off focusing on food, then move attention over to cash, then to jobs, if they share similar contexts. The brain makes "checkpoints" or filters for where to collect new data from in the manifold space, then explores there. Blender and PPLM both do this (though they don't evolve it).

As for your questions: browsing Instagram with color turned off feels worse because of missing data. Or perhaps you want to look at food directly and don't need data, but the same problem remains - you are not seeing what you expect/ forecast to see, because you lack sufficient data. The brain always wants to see a future.

If you play a game, the visuals may make the knowledge more relatable and easier to understand.

And when treats are paired with other domains, you can make the agent "get into" that domain.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 22, 2020, 06:13:29 am
DNA and brains both store information and mutate it to create new organisms and new ideas. We pass down our genes and memories to our kids. And today you can easily see the rapid pace of evolution starting to become faster, and it's clear it's all about computation of data and AI. That's because evolution *is* computation - aka phones, communications, AI, etc.

DNA and brains both model a ton of data; they can do a LOT with very little. What DNA and brains model is patterns. We have many hairs, fingers, eyes, cells, but only one code for them. The universe has patterns because its laws of physics are few. Structures like DNA and brains and other mechanisms exploit this and model the world so that they can predict, successfully reproduce, or maintain their own form. So, because of patterns, organisms evolve to get better at being "immortal".

Planets that are too large become suns/stars because of too much mass in the core. The bigger your DNA or brain is, the exponentially more it can model. Data relationships are combinational. So bigger systems can evolve faster and survive much longer - maybe even beating the odds at some point, assuming our universe doesn't die of Heat Death.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 23, 2020, 05:03:58 am
Oh, and not only does DNA model patterns internally, ex. coding many fingers using 1 template - it also models patterns of things in the environment. For example, it finds that the organism commonly traverses the ground, so wings are most useful.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 25, 2020, 11:55:13 pm
https://community.singularitynet.io/t/pre-release-building-on-gpt-2s-successors-blender-and-pplm/2958/3

Notice by the end I solidify the concept that 1) more data improves prediction, 2) new data (exploring) does even more, and 3) favorite data (exploiting) does even more! And we evolve/update our filters, unlike Blender/ PPLM, which don't evolve/ update them. The same concept appears in RL for learning to walk, but it's more powerful if done for text/ vision!

To make Blender/ PPLM more AGI-like, you force it to talk mostly about food/breeding (survival); then the reward leaks in the embed space to related nodes. It's just generalization to past memories to help prediction, like in GPT-2, BUT it must save/ update checkpoints! These desires/ forcings, like in Blender/ PPLM, drive (as they call it) the model's prediction/ attention.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 27, 2020, 05:39:04 am
If I didn't work on AGI, I'd work:

1) In the computer industry, making computers faster, waste less energy, smaller, and mass manufacturable using replicators. General Machine Tools, sensors, motors, energy production.
>>>
If you don't want to make the algorithm/AGI more intelligent, you can just train it on more data to improve accuracy. And if you don't have more data, you can just throw more compute at it! Why? Better AI and more data both control Attention during "searching" for answers. You can stumble upon a cure for cancer if you try every single possible drug, pill, or device (ex. nanobot), brute-force style. It's slower, but if your computer is fast or runs in parallel, then you can more or less skip the "AI/more data".

2) Cryonics, drugs, reversing ageing. Cryonics is the most interesting, because it already preserves [most] information in humans, and works very well on frogs and spiders, which have evolved natural antifreeze. It's up for debate whether the lowest level of information lost in the brain is critical, but like I said, [most] information is preserved! Fascinating domain.
>>>
Evolution is all about Generators. Any system (a rock, fridge, human, Earth) transforms itself into its future self. Its current state/ context decides its future self. This is Computation. Both DNA and brains model/compress tons of data/patterns using very little storage. Through mutations/brainstorming, new DNA and new ideas/models are created. Today evolution moves fast because so many brains are communicating using better iPhones etc. etc. Anything related to computers rules: TVs, speakers, games, phones, etc. It is the AI on the computer that can store the past and make futures. It's searching.

Then, once we reach "utopia", I can finally make the greatest video games etc I have in mind.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 06, 2020, 12:39:59 pm
Distill - MINE GOLD
https://distill.pub/2020/bayesian-optimization/
Does anyone here know how to apply this to text prediction? Hint: I already told yous.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 07, 2020, 06:50:04 am
1_Question survey:
https://docs.google.com/forms/d/e/1FAIpQLSdEBf5QsBFNjreiiGj6_TVpaU8t4Dm_r2kQC-9pq8nNKQABdA/viewform?usp=sf_link
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 12, 2020, 05:37:14 am
You can see more of these strange tasks at the bottom (images): https://arxiv.org/pdf/1911.01547.pdf

My AGI blueprint/net predicts/recognizes text and uses frequency, similarity, recency, reward... but these strange tasks are modularities/dynamics of the net. For example, counting and checking 5>4 are just hierarchy node sequences stored - there's no calculator or counter, just replay...

If I show you 4 blue objects and put a yellow line around them, then show a new image of new blue objects of different shapes and sizes, how does it outline them? It must recognize a Byte Pair Encoding segment "blue object" and translate black squares to yellow as long as they touch blue. If I want an every-other yellow outline, then it's the same rule, but only as long as the yellow square is not touching yellow. If the task is to fill in all objects, the rule is: translate a black square, as long as it touches a colored square, into the same color as that square, as long as it can't reach the image wall... mind boggling
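
As a sketch, the "outline the blue objects" rule really is just one local test (the grid encoding here is invented: 0 = black, 1 = blue, 2 = yellow):

grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]

def outline(grid):
    # Rule: translate a black square to yellow as long as it touches blue.
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0 and any(
                0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx] == 1
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ):
                out[y][x] = 2
    return out

for row in outline(grid):
    print(row)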
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 12, 2020, 01:44:17 pm
https://arxiv.org/pdf/1911.01547.pdf
See those images? I know it's prediction, like GPT-2 or iGPT. But how do these tests guide us to AGI? I know I can do them, so AGI must. But how can they help solve cancer etc.? GPT-2 says answers; iGPT for video would too. These tests, however, seem more like labor or repair than answering big questions. Wtf? Count objects, move and flip, stack objects, group objects, outline them, draw maze paths, denoise, fill in, bag all objects, change color, change shape, link, upscale, laser mirrors, gravity, etc. - how do these help solve cancer or other inventions??? I get that if you're in a starship and 1 of 10 engines breaks, you can repair it by looking at the others, but...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 12, 2020, 07:17:04 pm
I'm still trying to figure out how this can be incorporated into a GPT-2/ iGPT. It's like an Internet of Things. It's prediction, and it uses patterns, but it doesn't seem like it can answer questions. Yet my brain can solve the tests. Ok, I gave it thought. Some of the tests are not physics-based, hence useless - for example the psychedelic pattern fill-in; that isn't sequence prediction or even object repair, just art repair... There is probably a maze test, and that and the laser test seem to be more tree search/video prediction than static prediction. Good for predicting a string threaded through, or a video of a man escaping a cave system. You can ask it to denoise or rotate or summarize or translate objects (cat2dog). There's probably a stack/group test. I'm not sure how these help answer big questions like GPT-2 "can". It seems like the dynamics of the net are controllable, and hence many of the tasks can be useless, some rarely used, and some often used. Is it confusing to anyone else? How often do you rotate objects or solve mazes in GPT-2??? Rarely, right? And what's the stacking for? I know an invention may stack memory cells in rows. I guess the word "row" can, in vision, actually modify the old object, ex. GPT-2 may write: the apple turned brown and was cut in half and stacked, then melted in an oven. Generating video would require morphing/ re-arranging the object. But that's based on the data/objects' relative locations fed to it.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 13, 2020, 07:26:52 am
You can see here I wrote down a dozen tasks: pattern, untilHits, denoise, move&flip, change color, rotate, duplicate, scale, keepPositionsAbit&countButChangeColor&shape, inflate screen as the object that has the most counts ignoring position/size etc., copy pattern, laser, advanced laser, fill-in, outline, every-other outline, connect objects, stack objects, group objects.

I think asking it to do one thing is one thing, but having it "talk" about a plan/story is another thing - or many, should I say. I mean, I could generate a video saying "to use the lawn mower, push left, flip yourself around, then keep moving, stop at the wall of course, find the shortest route around your home, fill in the holes with soil, and line up some plants of the same type too"

Which incorporates many of the tasks above.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 13, 2020, 08:11:43 am
Ya, you could think of each of those tasks as being a word, like color, shape, move, scale, duplicate, and if you see it enough, you write about it more. So, if you see a new cube and saw "move left" many times, your new cube will predict "move left" next; either the cube decodes/ transforms itself or the next item predicted is "move left". Naturally a playing video can do all the tasks shown above. The difference in the paper is that it is *suddenly* changing - it doesn't show you a cube falling down or being rotated, just the final frame! And it's just activating your videos in your brain. So upon seeing the new cube, you see it rotate, and you only draw the final frame, i.e. once it is transformed fully or the Byte Pair Encoding ends (write the next word or phrase - you only draw the final word OF that).

This massively clarifies it all to me now, if I'm correct.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on July 13, 2020, 08:34:22 am
This massively clarifies it all to me now, if I'm correct.

How do you propose to find out if you're correct?

Define what you mean by "I wrote a dozen tasks". Did you implement working software which you could demonstrate, or is it all in your head?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 13, 2020, 08:44:27 am
I haven't implemented much of it, but what I have implemented so far - even though little (and having seen others' code so far, ex. OpenAI's etc.'s results, innerworkings, explanations of innerworkings) - grounded 80% of my work. My AGI work is mainly a collection of my discoveries plus knowledge from others. I move forward very precisely once my prediction confidence is high enough; the more data I store, the farther I can extract new data out of what I have so far. It works - I need not code anything yet to be sure. So having coded just a bit etc. was more than enough evidence; I already didn't need that data, though it did fill in some big gaps. I actually was enlightened prior to coding it, not by coding it lol - I was studying the Hutter Prize PPM algorithm, and that was when I learnt how the algorithm worked. Coding it didn't change much after that.

"I wrote a dozen tasks"
I mean the Google paper above - I wrote down the tasks they show in images, and new ones, as text, is all...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 13, 2020, 06:58:04 pm
How Evolution Works for kids:
https://www.youtube.com/watch?v=ck4RGeoHFko
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 19, 2020, 06:22:03 am
https://www.youtube.com/watch?v=7wqmXo0Jqa4&feature=youtu.be

The universal adversarial trigger is to place some words or objects at the start to force it to talk about that. Ex. "cat dog rat The movie was: cat". I knew this already. I also know how to avoid it: you need attention stronger than the trigger at all times - look at Blender, it drives the model! Blender is better than GPT-2.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 19, 2020, 10:22:21 am
However, this is not universal! You can make an ANN that doesn't use semantics, temporary activity, etc., and uses just plain exact matches; it will not even see the words farther back.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on July 21, 2020, 02:35:53 pm
Ranch always said if it works you've got a sellable product - well, he and GPT-3 are right!

And you should code like it's your last day on Earth, to get stuff/ immortality done and solved - GPT-3 is right again!

And don't be perfect, and be lazy - the brain is an efficient jack-of-all-trades thing and so is every task it can solve, life is like that - GPT-3 is right!

https://www.youtube.com/watch?v=vZalOEmdHFo
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 04, 2020, 06:41:23 pm
I think I've found the anti-particles in AGI ::))

My AGI blueprint has 4 elements to it that play roles.

NORMALs:
#1 Connection strength. Ex. dog>barks.
#2 Relation strength. Ex. dog=cat.
#3 Energy. Ex. likely to be said again soon.
#4 Reward. Ex. say what you want to happen.

ANTIs:
#1 Weights may include inhibition weights, saying it has never seen it follow.
#2 If cats are red, ugly, hairy, and dogs are nice, fun, and cool, then maybe the two will continue to have no shared contexts.
#3 Maybe you want to temporarily ignore something.
#4 Bad reward: it hurts, you don't want to talk about it.

? :)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 07, 2020, 04:26:20 am
If you have an empty brain with just the question "I will cure cancer by", it may just answer "getting rid of it". It needs more data to know why that totally won't work. Perhaps you say place cells closer together on discs to make storage higher, but actually this causes the cells to collapse by overheating. Not only does it need more data to know its boundaries in reality, but also to answer the question. Through this giant behemoth of data, it can find a path. But now the question is: just how much dataset do you need before you can answer "I will cure cancer by"? The answer is reality - does the solution work? You can also check how the accuracy stops improving much as you feed it more data, but that is still tied to reality checking.

Strange how this goes against me myself... I know lots, and use that to find my way through the maze of AGI; I become confident/satisfied as I evolve towards the end outcome. But that confidence threshold/criteria I have is based on the data I know. If I was 2 years old and said "cure cancer by telling it to go away", I would think it sounds correct - my criteria are based on what I know, and they don't tell me if my criteria are correct. Or can they? (Knowing I know enough is based on what I know, though.) Otherwise I need to check reality, or compare it to our accuracy. Yet I seem to be able to know whether I am there or not. Right now, I KNOW I'm missing data. How's that possible?

It's as if I see walls stopping me from saying I see a path. I could say anything right now, and I know it won't solve AGI.

Must think on this.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on August 07, 2020, 07:17:10 am
I think if you add more qualifiers... e.g. what type of cancer? A specialist would have a hard time answering "how to cure all cancer at once permanently". There's lots of information on specific cancers. Or is the fallback answer correctly "I don't know / not possible"?

Love your work.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 08, 2020, 08:13:39 pm
Oh, here's a better way to explain what I mean by energy, plus some new discoveries as well.

The more text you have, the better you learn the frequency of features, ex.:
a
an
and
and w
and we
at
ate
atm

See the attachment if the following confuses you. This allows you to predict the next letter or word. Given a prompt, you recognize the last letter or letters, ex. "zzzz[rbage smells]_", and even just the single frequency of 1 letter (the probability of single letters, or words, with no context clue) "zzzzrbage smells[_]", and you mix these to make a prediction. In "zzzz[rbage smells_]" we recognize 13 patterns, each telling us what comes next: "zzzz[r[b[a[g[e[ [s[m[e[l[l[s[_]]]]]]]]]]]]]".

Well, we can add a 14th pattern: energy. A neat video I made (below) shows that the same word or domain sticks together and is NOT evenly distributed through text. If we really thought, using the above methodology, that "a", "an", "and", "andy" give one average frequency for what comes next, then we'd have a dataset like this basically - if "andy" really made up 25% of the dataset, aka very common frequency - ex. "andy but yes when andy saw that we andy then no one"... but in reality our probability/ prediction here is not that good; the ground truth isn't evenly distributed like that. The clue/pattern is that we can tell when they start sticking together, by when we see andy appear more. Just look for yourself:

https://www.youtube.com/watch?v=OCuA_ynCoL4
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 08, 2020, 08:19:37 pm
(and i'm thinking if they have their own probabilities...hmm)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 08, 2020, 10:02:29 pm
Ok, so yes, even for the other patterns it matters. If you see/pay attention to 'new' or 'feed new', dog may come next; but if you've seen dog a lot recently, you may know you're in the middle of the sticky storm:

dog.......dog..........dog.......dog...dog..dog..dog.....dog.......dog...............dog

And humans don't naturally predict the next bit or letter, but rather the next word.

I'm thinking maybe the word/domain "USA" has its own sticky storm:

USA....................USA........USA....USA.USA.USA.USA.USA.USA.USA..........USA................................USA
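
One quick way to check the sticky-storm claim on real text (a sketch; 'dog' is just a placeholder target): measure the gaps between occurrences of a word. A bursty word shows many tiny gaps plus a few huge ones, rather than evenly spaced ones.

def gaps(words, target):
    # Distances between consecutive occurrences of `target`.
    positions = [i for i, w in enumerate(words) if w == target]
    return [b - a for a, b in zip(positions, positions[1:])]

text = "dog bit dog dog ran . . . . . . . . . . cat cat sat . . . dog dog".split()
print(gaps(text, "dog"))  # small gaps inside a storm, one big gap between storms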
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 08, 2020, 11:11:32 pm
oh wait, prompts aren't as long as a dataset, but they can sometimes hold sticky storms
***********************************************
wait, on my wait above: no - when benchmarking on the Hutter Prize dataset you do actually run through the whole wall of text like that lol. And so do humans as we live each day; I say 10,000 words in my brain each day, and I may simply switch from music to Mars and the domain changes. So yes, even though you have a probability of when to see x come next, it's more likely if seen more recently - our thoughts and articles are like that!!

also, it still holds that the increase of 'dog' makes it more likely even if the occurrence is steady.
***********************************************
n/a

also:
***********************************************
it's clearly useful when you look back 80 words at Trump and see, at the end, 80 words away, "and the president, i.e. [Trump]"... and so it's more likely if you see it more. I'm not sure yet whether gradual entering/leaving of a domain occurs; I will look at this later - it may actually be attention heads.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 10, 2020, 04:20:39 am
complex multi motor synchronization

featuring me!

footage is in real time, no speed up

Part 1
https://www.youtube.com/watch?v=ednhjD8E6fE

Part 2
https://www.youtube.com/watch?v=VSTG3lvqyNQ
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on September 02, 2020, 08:23:49 am
It's interesting watching these dog challenges:

All that toilet paper...
https://www.youtube.com/watch?v=nhNjV1EOOwc
https://www.youtube.com/watch?v=PkQDek2A29I
https://www.youtube.com/watch?v=OTrtbz1kxf0
https://www.youtube.com/watch?v=nBICYDItEy4
https://www.youtube.com/watch?v=YbgFxQ5LPoU
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on September 03, 2020, 09:09:07 am
https://www.youtube.com/watch?v=4ArjlPAU_X4
https://www.youtube.com/watch?v=-B3vs_p0Pyc
https://www.youtube.com/watch?v=BMXNRFxjHfs
https://www.youtube.com/watch?v=55G4XwtpNak
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on September 14, 2020, 02:02:43 am
I TALK TO GPT-3
https://www.youtube.com/watch?v=B3Q1-Fn1kyU
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on September 20, 2020, 09:14:07 pm
I'll soon be creating a new AGI group so we can get the job done. It will be a better way to work and to connect the brightest minds. It will be the most powerful AGI group. It will focus only on AGI and survival. I wouldn't say it's only for experts, but it is for the right mindset. I find there are way too many beginners who can't explain their AI and don't know how to connect, or why to connect tightly - they don't even have a real end goal. I want to put a lot of time into creating AGI and building a team to do so.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on October 23, 2020, 06:09:42 pm
This year has been my most productive year. I learned to program in Python and made my first AI a year ago. I've much better understood KNN/K-Means and Random Forests - I already had the random forests idea, but KNN was amazing; it took me 2 days to get so much out of it. I've made a dozen large discoveries and 100 micro discoveries. My AGI guide is much further along now, and more incredible than ever before. Among the things I do every month, I went through all of Kaggle's courses in 2 days just now (https://www.kaggle.com/learn/overview). I didn't bother with the code, because coding is expensive time-wise, but I still read everything they said and understood it well. They always go through what others do, and they certainly did a thorough job: reinforcement learning, NLP, computer vision, word2vec, backprop... They didn't give so much detail, but it's clearer now what they know (and share - they never share everything, and never clearly). And NLP and RL are not to be separated; do not underestimate their relationships. Computer vision AI is also very similar to NLP AI. I also went through lots of other readings, of course, and generated my own discoveries. And I'll soon be creating images, code, and vision versions for my guide to AGI, plus an exotic AGI group, so we can cover more ground and be surer/ clearer.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on October 23, 2020, 10:13:42 pm
Also,

In some ways I feel there are 2 approaches to life/progress, and you could actually put it like this: you could take the Satanist or the Religious approach. In the godly method you're hopeful, but too hopeful - there is no grounding in reality, just a single book, or what you could call "a thought". The Satanist way is grim and truthful too, but has too little hope - they tell you we die, they promote death, they are crass and nasty. I like to stay in between :). I'm very hopeful, so that I have goals; I may seem blissful at times, or "detached and in the future already". But I'm also very grim, dark, and honest, and know we are machines that will die if we don't evolve, so that I can reachhhh those future goals; I may seem evil at times, or "far in the past of history". In AI terms these 2 things are Frequency of observations, and Reward - they cause you to say things that likely/ usually/ should happen. You predict your future. Will it be nearby death, or a distant future? The biggest rewards tend to be sparse - how long do meals or lovemaking last? How long does victory last? We really want reward, but we really need frequency to walk us to the reward, and the walk needs reward to have a reason for the walking. When you lack one of these, you either can't reach the future because you can't walk, or you can't reach the future because you have no clue where to walk.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on November 03, 2020, 07:07:33 am
So I've been thinking. What if ANNs learn physics functions? Are they learning more than just entailments and co-occurrences? Are they acting like a Turing-complete neural computer?

I mean, if you count which words follow the word 'dog' in text, you can find out what dogs usually do, to predict the future in text better. The more data it sees, the better it gets. This models physics in a cheap but easy and pretty helpful way. And instead of taking all particles' locations and speeds (which is impossible), you take 2D images of the world.

But you can also run a physics sim and get a really good prediction of the future rolling out - it's just super, super costly. There is one interesting thing though: wood etc. is all reflective/ refractive to some amount, so you can take the refractions in an image, merge all that data to recover the reflections as they were before refracting, and see something that is not shown in the image directly, ex. a cat face even though only a tail is shown.

How would a ANN learn that on its own though? A net, in the shape of a hierarchy, able to adjust its connections to make up rules, could it do it itself?

If you have a net using Backpropagation to find a mapping between input and output nodes - hmm, I mean, ya, it's adjusting the net's, erm, rules - but hmm, do you tell it it has error and to tweak the weights again so that it predicts it sees a cat face even though there is only an image of a tail?

When we say backprop makes a mapping between inputs and outputs, we mean it will find the patterns and know which input activates which output. But this can't find all patterns; it seems like a blind way to find patterns.

Let's take a simple dataset and see how/why it'd form a net some way.

cat ran, cat ran, cat ran, cat sat

So far the net would, given 1 word, weight 'ran' more strongly than 'sat'. The net is trying to say: ok, this is the right output, and you have the wrong output, so let's get rid of these and up these weights, continuing through each layer. The idea may be that it gets rid of the non-pattern nodes and merges the pattern nodes where there is most value - or at least that's what ends up "in" the net, with simply wasted nodes that can be pruned afterwards. Still not seeing much here, hmm...

And what about double-sided backprop, i.e. for-backprop? Isn't building a bridge better if done from both sides? And middle zones too, no? Is that called a Hopfield net? Or an SVM? Does it exist?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on November 03, 2020, 11:48:16 pm
If the laws of our physics were random, there'd be no patterns/laws. Simple AI rules like counting FREQUENCY capture any pattern/law, even physics our universe doesn't have, at the cost of not being so accurate. Other rules are more precise but less general: one may predict the future in a physics sim perfectly but won't be able to predict other physics at all, and the reflections rule (seeing a hidden cat face) only works on light, a subset of physics, but is more flexible. It's more likely we are merging the general rules like frequency/ recency/ relatedness, and the rest are made from those - how we came up with physics sims ourselves is an example. Backprop and neural Turing machines seem to want to find patterns/target functions on their own, but do so by using key parts like recency, long-term memory, and relationships, just like my AI can learn deeper patterns once it has the few common ones! It seems backprop is only but a way to learn FREQUENCY and RANDOM FORESTS. FREQUENCY etc. are universal rules and work well together, and on any "mapping" or functions/laws that need to be understood.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on November 22, 2020, 11:57:48 am
Lol, I'm reading the next month of this guy's research notes and he says exactly what I said:
http://www.adaptroninc.com/BookPage/1969-and-1970

"98.  Sight and attention: I can pay attention to a spot on the wall and my attention is on a specific very small area of the retina. I also can pay attention to something out of the corner (side) of my eye. I can stare at one thing but not see it but see something out of the corner of my eye but not in so much detail as if I looked straight at it. So my attention can switch not only to sight sound and feel but to a specific area of sight or even just the general picture of what I’m looking at. Now when you imagine something you combine the small specific sight areas into a general picture. Like a man with green togs a straw hat walking on a beach, each specific thing is seen in memory and then combined to form a general picture."

He sounds very precise so far, writing about how the body has wires, I/O, electricity (the "blood"), etc. That's something I do too.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on November 28, 2020, 06:58:33 pm
Geoffrey Hinton has said about backprop (which he helped bring into existence): "My view is throw it all away and start again".

Sometimes you need to go the wrong way to get to the right way; there's no "clear" path, or else we would have an easy walk! A common coach will tell you: "keep going, get through it, don't give up".
Title: Re: Releasing full AGI/evolution research
Post by: HS on December 03, 2020, 06:57:19 am
https://www.youtube.com/watch?v=oXyQU0aScq0&ab_channel=Numenta
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 04, 2020, 01:20:19 pm
No one mentioned my architecture even one bit in that video (and no one on the internet has my full architecture)... What's funny is mine is energy-based too, but the energy only forms the network/ accesses nodes. The energy is even part of the prediction score.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 15, 2020, 04:54:31 pm
After studying Lateral Inhibition, I now believe it does not exist. Check out the Mach Band illusion, and White's Effect illusion (the version with the 2 big blue balls).

For the Lateral Inhibition case, imagine two 1-pixel-wide lines side by side, both standing on the ground of the image editor so only one end (the tops) of the two lines is shown, and one of the pixel tower lines is 2 pixels shorter in height. The difference is noticeable, but the farther apart the lines are, the less you notice it, because the distance between the 2 lines is not anything you've seen as often before, and looking at one line at a time matches the other so similarly. We know this isn't because of retina focus/location offset, because it happens even if you look at both lines when they are really close (when you look at all of both lines, not the tips); but thankfully you can look at only the top tips of both at the same time, and doing this means you won't be able to compare the complete lengths of the 2, since you no longer see the whole line. The only way to recognize which line is higher is when they are very close side by side and you look at both at the same time, only at the tips. The idea here is that the Mach illusion is you recognizing the difference, but only the closer they are. The brain does have a strong emphasis on edge detection - edges build all objects; you don't need lines to make a toaster, just differently shaded blobs of grey. But we don't change the sensor truth, and the brain already has a pooling effect: if it sees enough evidence of a shape, it goes "ok ya, that is definitely a cube there, I'm fully positive". And while the illusion looks real, right in our vision, so does motion illusion, the spinning ballerina silhouette illusion (which way she turns is based on your primed thoughts), Shepard's illusion (the vertical table looks longer), the Ponzo illusion (the same-size man farther back looks bigger), the brick road illusion (same image, but the right one looks much more slanted), and everything else you think you see lol.

White's Effect happens because you pay attention to one sphere (or both), and the white or black mixes in with whatever your retina "eye" is fixated on in the retina image.

Since incorporating vision and cortical columns into my AGI design, I strongly believe there is no Place Cell and that all senses come from other senses or sensors. When you move your hand around a cup with your eyes closed, multiple points of touch weigh in and vote on what object it is. And when you move your hand and then feel something, you seem to know where it is, which helps you: for example, you feel a pointed tip (of a pen), you know you moved your hand 4 inches, and then you feel a clamp (of the pen, for shirt wearing) - so you know the distance between the tip and the clamp as well. But this is not happening because of place cells or anything motor related. Your vision is simply dreaming the image: as you visually imagine moving your hand 4 inches and feel the air hit it, you end up with an image of a tip 4 inches away from a clamp, plus the touch sensory of the pen parts too.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 17, 2020, 05:01:38 pm
When I saw this I was like wuuuut, I didn't know I had 2 blind spots! And one is seriously, BLIND.

wiki:
https://en.wikipedia.org/wiki/Fovea_centralis#/media/File:Human_photoreceptor_distribution.svg

test!!!:
https://lasikofnv.com/try-these-3-fun-tests-to-find-your-visual-blind-spot/
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 25, 2020, 05:22:53 pm
156,000 humans die per day (before covid). And the causes are below. 49,000 deaths a day are blood vessel related, it simplifies things, we need nanobots to go through them and repair us. Covid is 11,600 deaths per day on average as of today, it's almost half as bad as cancer. https://www.visualcapitalist.com/how-many-people-die-each-day/
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2020, 09:34:56 am
Can anyone's AI - Korrelan's, Google's, etc. - recognize the letter A in these images below? And how sure is your AI that the A is there, and how many training examples of what A looks like were needed?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 30, 2020, 07:22:10 pm
One person said you'd need full AGI to solve the above. Remarkably, I say you need only a small part of AGI to ace it. How odd is that - 2 very opposing views in the same week! I'm thinking about coding it in the coming months. I believe I can ace it in accuracy using only 1 example of what A looks like. All these distortions of the A are just location and brightness offsets. Rotate an A, stretch that A, blur it, flip it, brighten parts of it, upsize it, remove color, rotate parts of it, etc. A human, given 1 example pic of a never-before-seen object, ex. an elephant-dragon-frog, will easily be able to recognize it later despite many distortions. All the mentioned distortions to the A, ex. rotate/blur/etc., are solved by a few tricks. To illustrate: imagine we see the A again, it is the same, but much brighter. When each pixel is compared to our stored copy, it is off a bit, not the same brightness - but the amount of error is the same for all the pixels, they are ALL 3 shades brighter, so the sanction it gets is not so bad. Do this for each layer and you have an efficient network. For location it is the same. The simplest part of my to-be AGI is recognition; everyone's just doing AI wrong.... Recognizing the A is simple per se, but the distortions make it match less, right - yet most (99%) of the A is there, and relatively there is no difference between its parts. That is the pattern in recognizing A: similar brightness, similar location, and similar relative error expectation, ex. "hey, I'm off by 2 shades and my bro is off by 4 shades, we're similar, bro!" If this works, it will start everything, I think!! Lock finally tackles Vision, omg! It'd be my 2nd algorithm coded, then.

It's once you merge the 2, brightness AND location, that you get the full ability.
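Here's a minimal Python sketch of the brightness half of this idea (the function name and scoring formula are just mine for illustration, not my actual planned code):

import numpy as np

def offset_similarity(stored, seen):
    # Score how well `seen` matches `stored` when a uniform brightness
    # shift is allowed. A plain pixel-by-pixel comparison punishes a
    # uniformly brighter copy; instead we measure how CONSISTENT the
    # per-pixel errors are with each other. Same image 6 shades
    # brighter -> every error is 6, spread is 0, score is perfect.
    diff = seen.astype(float) - stored.astype(float)
    spread = diff.std()          # 0 if every pixel is off by the same amount
    return 1.0 / (1.0 + spread)  # 1.0 = perfect match up to brightness

stored   = np.array([[3, 6, 2], [5, 1, 4]])
brighter = stored + 6                        # same image, 6 shades brighter
shuffled = np.array([[6, 3, 2], [1, 5, 4]])  # same pixels, rearranged

print(offset_similarity(stored, brighter))   # 1.0   -> recognized
print(offset_similarity(stored, shuffled))   # ~0.26 -> not the same image

The shuffled image has the exact same global brightness as the original, yet scores far lower, because its per-pixel errors disagree with each other - which is the whole point above.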
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 01, 2021, 05:50:47 am
It would be best if we made a new, from-scratch, full Guide to AGI; there's always too much "go look here, link this, link that"... just mentor us. Instead, make it a very intuitive guide that makes sense, and short. I unfortunately work in another area, not exactly modern AI / backprop. I have learnt lots after 5 years straight of work towards AGI, and yet if I had a full guide of all we've learnt so far from CNNs, Transformers, etc., I could maybe be much smarter after just a 1-day read. But nobody can organize such a Guide To AGI. I'm beginning to think more and more that nobody really understands the connections between architectures, nor can simply say what the tricks are; there's too much math in front of it, and even though it's fully understood, the math is hiding the actual understanding of what patterns in any dataset are.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 02, 2021, 12:06:40 pm
I think I'm onto something big. In my big AGI design there is a smaller part just for recognizing objects. When I tried thinking about how it'd work for images instead of text, I realized something new was needed, and it should help text recognition too. A human can see a new image they've never seen before, only 1 example of it, and then, given dummy images, easily spot which is the same image despite it being rotated, brighter, stretched, blurred, parts rotated, colorless, flipped horizontally, bigger, etc. Same for music: a slowed-down, higher-pitch, louder version of Jingle Bell Rock sounds very recognizable despite being a different "image". It's because there really is not much difference - relatively, all the parts are the same. Let me explain.

So, originally you store a node in the brain for a text word, ex. "hello". If you then see "hell" you still recognize it some amount, because of #1 how many of the expected parts are there (ex. 4 of 5 letters, so it is 80% triggered/recognized), and #2 the time delay at which it expects the parts, ex. "olleh" / "hellzzzzo" / "hZeZlZlZo". So it's flexible: it recognizes typos and delays in location. "you how arrrre ? doing" lol.
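A toy Python sketch of the #1 part (crude, and my own illustration - a real version needs #2 as well, the expected locations):

def part_score(stored, seen):
    # #1: what fraction of the stored word's parts show up at all
    return sum(letter in seen for letter in stored) / len(stored)

print(part_score("hello", "hell"))   # 0.8 -> 4 of 5 letters present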

So with an image of a stop sign, say we see one that is just much brighter. Obviously if we total up each pixel to match, the global sum of brightness difference will be huge; each pixel is 6 shades brighter than the pixel we compare to in the original memory image. Yet an image of a frog, not so bright, can have a very similar global sum of brightness (how bright the image is overall) by just shuffling the pixels around. Ex. 3 pixels of brightness 3, 6, 2... and if the image is 2 times brighter it is 6, 12, 4... still looks like a stop sign image, just brighter... but if we take the original stop sign image 3, 6, 2 and shuffle it, we get a frog image, ex. 6, 3, 2! The time delay is of course off, but it won't help now. In fact, in the paper below they cut out real wood pieces and, by just rearranging them, get a different image! Clearly their arrangement is not usable to see a stop sign when it now looks like a human face. https://light.informatik.uni-bonn.de/papers/IseringhausenEtAl-ComputationalParquetry-TOG2020.pdf

So how do we realize that a much brighter stop sign - although its brightness difference is much bigger than the frog's (the frog gotten by shuffling the pixels around, not any brighter than the original stop sign!) - is actually a stop sign, and not a frog? Time delay helps some, but clearly won't help enough and will lie to us (frog = stop sign). So how do we realize the brighter stop sign is actually a stop sign? Each pixel is of a much different brightness; for all we know, the pixels could be arranged differently. Again, the frog image has less brightness difference in global sum total. So with the brighter stop sign, we must not just compare pixel to original pixel; we need to look at it like this: suspect-image pixel 1 is 6 shades brighter than original-image pixel 1, and the other pixels, when compared, are also 6 shades brighter. That's the pattern. In text this looks like: we see "hello" and then see "hzzzezzzlzzzlzzzo". Obviously the location delay is a lot, but after the first painful wait of 3 z's (zzz), it sees another identical wait (zzz again), so it is less upset and expects the locations of the rest of "hello" - it is hello. Whereas if it were (without the spaces and CAPS) "H sfw E uowfg L rtl L opywds O", that is obviously not spaced evenly like "H rts E jui L dfg L awq O". Although both look really silly, the latter 2nd one has the pattern "hello" in it, because there are 3 random letters between each of the letters we want; the other has its letters buried at random offsets, so it doesn't read as hello. It is expected error. And really useful for image and music recognition, I believe.
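And a small Python sketch of the evenly-spaced-error idea for text (the function is my own illustration, not from my code):

def find_spaced(pattern, text):
    # Look for `pattern` hidden in `text` with a CONSTANT gap of junk
    # letters between each wanted letter. An even gap is an expected,
    # patterned error; random placement is not.
    for gap in range(len(text)):
        step = gap + 1
        span = (len(pattern) - 1) * step
        for start in range(max(0, len(text) - span)):
            if text[start::step][:len(pattern)] == pattern:
                return start, gap
    return None

print(find_spaced("hello", "hzzzezzzlzzzlzzzo"))      # (0, 3): even gap of 3
print(find_spaced("hello", "hsfweuowfglrtllopywdso")) # None: no even spacing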

For image, it uses location, brightness, and color: relative expected error. So if alllll the image is brighter, or stretched, it sees the same gap in expectation across the whole image. It still works fine if the stop sign's top-right and bottom-left patches are inverted in brightness: it will process each of the 4 squares with less upset, and then, when comparing the 4 squares themselves to each other, it will see that square 1 and square 2 are very different brightnesses cuz inverted - but so are the other 2 corners.

For a music track, it can be stretched longer, louder, higher pitch (size, brightness, color ;p), and it should work fine seeing the expected error across it. If we feed a small hierarchy network the music track flipped, played backwards, it doesn't really look at it like that; it just puts each pixel into the bottom of the network hierarchy, and then, if the end of the song has the expected brightnesses, locations, and pitch, it will light up parts of the node in the higher layer of the neural network (the start of that node, despite being at the end).

Does it make sense to you?

If you blur, stretch, or brighten a stop sign, it is very different to a computer matching it to a memory, but obviously it is really only a little different - it matches lots, because relatively, between each of its parts, everything is as expected relative to everything else; the parts are relative, each is equally off in error. When it comes to pattern finding, you start at exact matches (only I know this, others don't...); exact matching roots all other pattern matching. For example, if you see "cat ran, cat ran, cat ran, cat sleeps" and are given the prompt cat>?, the frequencies say "ran" is probably 75% likely and "sleeps" 25% likely as the next predicted word. And if we want to do better, we can recognize translation: "dog ran", "dog sleeps", etc. appear just as much, so cat~dog, and so maybe cat barks, dog meows - they are extremely close words. And look at this rarer pattern problem: "hat man scarf, book building library, he frog she, wind dine blown, truck rat ?" - it is a vehicle that comes next; every 1st and 3rd item is similar, so we match each triplet to the others and see that the 1st and 3rd of the last triplet should also be translates... lots of translate matching here lol!!!! When it comes to images, a pixel may be brighter, so it isn't exactly exact text matching, but it still starts at exact matches as evidence for anything further: pixel a is a bit brighter than pixel b. It's rooted at something. Exact match. So back to image distortions: there is lots of relativeness in them, there is really not so much distortion - it's clearly a stop sign - and this uses pattern matching to reach the activation/conclusion. It's very simple. But the AI field is not showing how simple it is, and I think that's where they really lack; it withholds better understanding.
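Here's how tiny that exact-match counting is in code - a toy Python sketch using only the example above:

from collections import Counter

corpus = ["cat", "ran", "cat", "ran", "cat", "ran", "cat", "sleeps"]

# count which word followed "cat" each time it appeared
after_cat = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "cat")
total = sum(after_cat.values())
for word, n in after_cat.items():
    print(word, n / total)   # ran 0.75, sleeps 0.25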

I'm going to try to code it in the coming months, but first I'm busy finishing the guide to AGI to a more stable version, so this may have to wait a bit to get worked on... but I thought I'd share some of the idea ahead of time, as it is very interesting/simple and something the AI field can't even do as well - they need many examples of a stop sign and still fail over silly small 1-pixel attacks.

When I ask myself why it is still a stop sign image, I start at the pixels we start with, and I know we will do worse if it's missing some parts of the stop sign. I look at the difference in brightness, the location arrangement of parts of parts in a hierarchy network, and color too; then, from there, what next can we do? The relative error difference - "how similar is the error" - in brightness, etc. So although there is error/missing parts when matching a stop sign, we can also *relieve* much of the error by seeing the pattern across the errors. When you stretch an image object, it adds lots of error (though not tons, else we wouldn't easily recognize it), but it can be resolved by my trick of seeing the pattern of error across the object.

Do note there are other pattern helpers for prediction, ex. every image I showed you for the past hour was a cloud, so the next image is probably a cloud even if it doesn't look like one. However, this only applies if you have a history; with no history, you rely on the image alone. And the reward system simply weights prediction toward loved words/images; it is not telling you what really is in an image.
Title: Re: Releasing full AGI/evolution research
Post by: HS on January 03, 2021, 07:54:10 am
Possibly there is no meaningful difference between symbolic language, like ''table'' written down, and sensory data like seeing a table. Both the word and the image could be symbols which represent the concept of a table in the mind. We might even be fully symbolic; each sensory input could be treated as a symbol. Maybe that's why language came so naturally to us, our brains were already doing the same thing with other shapes and sounds. If that's the case, it seems like we'd need to build a visual vocabulary, eg ''hinge, rectangular plate, frame''; and then rearrange it, based on the combinations we encounter, to create new concepts such as ''door.'' So, we need to figure out the recognition of symbols, and then the mechanism for their internal representations.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 03, 2021, 09:51:54 am
Hierarchy automatically builds bigger memories for each sense-type cortex. Text looks like this, ex.: https://imgbb.com/p22LNrN

Text is just a way to share the same visions, cuz we can't share vision directly - unless we link text, then say the text, then hear it, then link it to its images. So text is really similar to vision because of that.

Nonetheless, any dataset has patterns in it (if it has any at all), and the better pattern finder will compress it better. We need not worry whether it's vision or music. Though working on music/vision etc. each sheds light on the others, as I proved. OpenAI also realized this.

A brain network only merges data into pattern clusters; we don't store the same word twice, we only strengthen a connection to represent it. Same for similar nodes: we link them close to each other, clustered by strength - fire together, wire together - if they share similar contexts, ex. in the net, dog and cat both link to the same parent nodes: eat, sleep, run, play, love, cute, animal, etc. This allows both translation and swap use, ex. cat barks / dog meows predictions.
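A toy Python sketch of that clustering signal (the context sets here are invented for illustration, not from a real net):

contexts = {
    "dog": {"eat", "sleep", "run", "play", "cute", "animal"},
    "cat": {"eat", "sleep", "run", "play", "cute", "animal", "meow"},
    "car": {"drive", "fuel", "road", "park"},
}

def similarity(w1, w2):
    # two words are close if they link to the same parent contexts
    a, b = contexts[w1], contexts[w2]
    return len(a & b) / len(a | b)   # overlap of shared contexts (Jaccard)

print(similarity("dog", "cat"))  # ~0.86: share almost all contexts
print(similarity("dog", "car"))  # 0.0: nothing shared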
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 19, 2021, 05:49:51 pm
Wait, is korrelan using polar morphing of images to solve recognizing objects upside down and bent? Just a thought and an inquiry.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on January 19, 2021, 11:08:37 pm
No, not polar morphing per se, but the human retina does use a polar rod/cone morphology.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 20, 2021, 12:04:54 am
Yes yes, I think I see what you do below: you might not only be storing features with bending applied to them, you also might be doing it to inputs so they better match known memories!

At least it appears that way; I may be wrong. My work, though, does not need to store more than 1 rubber duck, in its normal upright form, no polar bending. If input comes in of the duck upside-down, again I do not need to bend the duck so it is upright and hits the upright memory I have stored. The upside-down duck simply matches that upright duck. How? Well, I'll tell you soon. I've been learning lots more on Vision.

https://www.youtube.com/watch?v=tMMHr8u0ccc
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on January 20, 2021, 07:38:39 am
That vid demonstrates that with a polar schema there are no rotation or scale invariance problems; these are 'man made' hurdles derived from our insistence on using Cartesian schemas.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 20, 2021, 08:27:57 pm
LOOOOOOOL So you are doing that! It seems very obvious, especially with the title of the video. Well, objects in real life, like a face, really do have the nose above the mouth and between 2 eyes; your video shows the duck/face/etc. actually bend and squish, and rotate. The duck in real life is not bending or even moving - why make the poor robot see something not happening!? Constantly feeding it a new morph and rotation!? Not good...

You mention you use a non-Cartesian schema, that the image really loops around to itself, but as said, this changing input is unrepresentative of the actual object. The only agnostic thing that occurs is the pixels go up the network regardless of their location in the image, so all the dark pixels hit the dark-pixel node, yup, and they trigger the lighter-pixel nodes a bit too. You've seen my christmas hierarchy, right? Well, all the a and g and z go up the same node upwards...

You're simply rotating the input so it matches better. I strongly disagree with Data Augmentation though; I have found a more interesting approach instead of distorting it into all possible combinations...
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on January 21, 2021, 12:09:02 am
No I'm not doing that...

Ah, I haven't conversed with you in a while and I was hoping that you might have changed but alas... you still don't read what is written... and still only see what you want to see.

😀
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 21, 2021, 12:29:10 am
I see you/other AGI pioneers still won't explain AI either, same boat :p .... it's so frustrating...

hmm i found it....
https://microship.com/log-polar-grid-rotation-scaling-vision/

this is so funny omg...
https://microship.com/cromemco-z-2d-review-kilobaud/

i got it, let me explain it now:
see the smiley face circle? Look at it as rings. The outer rings become lines laid beside the other lines... so a big smile circle is a ring/line far to the right - hence the horizontal change of location it mentions for scale. If you rotate the mouth line, it is still on the same ring/line, just not moved up/down ;p

hmmmm, so... surprised it actually works. Well, not bad. Hmm, so if we're building AGI, you store the image input as, um, a row of rings laid out side by side, hmm, but actually only the lines of the image (see Sobel filters)... then store a hierarchy made of parts: faces, arms, curves, lines, pixels... But it also requires we align the smiley face in the middle, thinking...

Oh boy, so not only do you need to align the laid-out rings as said above, but you also need to keep the input smiley face in the center where you saw it - it's not location invariant... if you move the circle over so it is no longer a ring on a ring, then when the rings are laid out, the circle will still be a circle (not a flat line).

How do you solve both problems?

And is it stretch invariant? Hmm, no, by the looks of a small test I did in Paint.

And flip invariant? b=d. If we show it a cat, one ring would have the tail and eyes, the middle ring the body parts, the middlemost the body center... nope, it fails this too.

Also, seeing a bigger smiley face can't give the same input; it "has" to be a bit different - a human notices it is bigger and not exactly the same, also. Hence, making every ring the same size is bad, cuz then each sized circle looks identical indeed; you make them not quite the same size, I guess...

It might look like it solves rotation and scale, but really it is just making the image into rings, then unfolding them into a row of lines, and the pixel locations are only what it has seen... it is not flip invariant... maybe I'll draw a pic: (yes, I drew the cat using 1 line, freehand with my finger on my slate lol)
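For reference, here's roughly what the ring layout is in code - a rough pure-numpy sketch of a log-polar resample (my own toy version for illustration, NOT korrelan's code):

import numpy as np

def log_polar(img, n_rings=32, n_angles=64):
    # Resample `img` onto a (ring, angle) grid centred on the image
    # centre. A rotated input shifts the result along the angle axis;
    # a scaled input shifts it along the (log-spaced) ring axis.
    # That's the whole trick - and also why it is NOT flip or shift
    # invariant, and needs the object centred, as said above.
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    radii  = np.exp(np.linspace(0.0, np.log(max_r), n_rings))  # log-spaced
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    out = np.zeros((n_rings, n_angles))
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]   # nearest-neighbour sample
    return out

Rotating the input by one angle step turns into roughly np.roll(out, 1, axis=1), which is why rotation matching becomes a simple shift comparison.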
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 27, 2021, 02:32:08 am
After 1 month I have implemented from scratch a novel algorithm that can, with the accuracy of a human, recognize an image/music it has only seen 1 time in the past, no matter if rotated, scaled, stretched, brighter, blurry, noisy, or flipped; it tells you exactly how similar the 2 images are. However, it is very expensive - it only works on 7 pixels, for now. No Training, no Data Augmentation. I know exactly how it works too. I think I have nearly for sure found the best way to do recognition under distortions. Still thinking about it, but it looks interesting. I learnt a lot on Vision too.

This is the distortions problem; obviously, if you show it pics of cats and then ask what the next image is, it won't catch on.

I also wrote a research paper, but it is on hold ATM...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 29, 2021, 08:06:25 pm
Ok, the code and paper are ready. Anyone here have an arXiv account so I can get the approval to post there? Also looking for someone specialized in CNNs and robust recognition, in case anything I wrote is a bit off.

My code can robustly recognize image/music after seeing only 1 example even if flipped and brightened and blurry and noisy and occluded etc.
Title: Re: Releasing full AGI/evolution research
Post by: HS on January 30, 2021, 01:25:00 am
I don't know, maybe you could post it in the trusty members only section. Then if someone wants to volunteer some help they can do so.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on January 30, 2021, 03:25:18 am
Why do you need to get "approval" to post on arXiv? I'm not familiar with their policies so perhaps you could explain them to us. Also I feel beholden to point out that it would be very foolish of anyone to enable you to post something without seeing it first.
Title: Re: Releasing full AGI/evolution research
Post by: WriterOfMinds on January 30, 2021, 03:52:02 am
I was just looking into that ... to submit a paper to arXiv as a new author, you need an endorsement. https://arxiv.org/help/endorsement

It is likely that they only accept endorsements from people who are already arXiv authors and/or members of known research institutions. I am neither of those, and I'm not sure if anyone here is.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on January 30, 2021, 04:19:57 am
It is good to know that they have such a robust procedure for vetting submissions. There is enough garbage posted on the internet already. However, as the page points out, there is nothing stopping anyone from posting a paper on their own website.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 30, 2021, 08:15:00 pm
Some humans have doubted the singularity could move so fast at the end. But the truth is light moves much faster than neural signals: neural signals travel about 0.07386 miles per second; light, 186,282. There are so many years of thinking and moving and sensing we miss out on between each second of our time frame. And as for level of intelligence, we don't know what the speed limit is here, but it is clear humans really need machines to help them out, so there is probably room for improvement. We do actually have a speed limit on intelligence. If you look at the Hutter Prize you will see that the more intelligence mechanisms you add, the more the compression of the 100MB goes down... 50MB... 25MB... 20MB... 18MB... 17.5MB... 17.4444499MB... it's clear it is slowing down and beginning to be useless. Eventually you'll be coding a solution to each individual problem instead of any grand pattern! The last pattern to give it is the ability to make its own rules. It's clear we humans are close to the limit, but there is still room. Also, the abilities AI will have, and the cloning they will do, are the only other way to grow in "size": a fast army with high-res cams and the ability to erase thoughts, etc. You may ask, but must this improve compression? Well, yes, it has to. It helps find patterns. It makes our lives stretch out longer - a repeating pattern. Data compression is one way to indirectly measure the true goal: patterns of structures in the universe. Cloning yourself doesn't compress more, but it makes you less likely to die! So that's one that sits deeper below the surface of the goal than the compression metric does.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 02, 2021, 08:33:31 pm
I'll update this post with my paper/code for vision recognition if I can fix my idea, it is currently very costly.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on February 03, 2021, 01:54:58 am
I'll update this post with my paper/code for vision recognition if I can fix my idea, it is currently very costly.

Computational complexity is usually the problem. Just about everything would be easily solved with an infinite amount of computer power.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on February 03, 2021, 02:04:24 am
I'll update this post with my paper/code for vision recognition if I can fix my idea, it is currently very costly.

Computational complexity is usually the problem. Just about everything would be easily solved with an infinite amount of computer power.

List of unsolved problems (https://en.wikipedia.org/wiki/Lists_of_unsolved_problems)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 12, 2021, 05:11:23 am
https://www.youtube.com/watch?v=MbZ0ld1ShFo
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 16, 2021, 08:41:19 pm
We're about 2 days away from moving forward :)) Don't worry, I'll invite some of yous to my new AGI group, but I do have to be proper about how I do this; it's part of getting far in AGI that makes you understand safety. Ya, it's creepy openAI is so big on safety, as if they know they are far. Though they're about as far as me - though I'd say I'm farther because I have a better foundation; they're as far cuz they got theirs working. I plan to get back to my PPM algorithm very soon, in ~2 days! Gonna beat that record I feel!! Or at least get real close for starters!
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 16, 2021, 09:34:52 pm
Hey infurl, I lost an image you linked me to months ago; do you have that image that looks like the one below, about a data/model cycle? It was from a link. It mentioned Feature Engineering, I think.

The image had a line going from the 3rd sphere to the 1st at the top though; can't find it online either.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 19, 2021, 05:35:42 am
Finally done. It's been 9 months, full time each day, that I've edited the AGI Guide. While I've been working for 6 years straight, it's been 9 months specifically on the new structure I found for organizing my work. I do have a few more notes to integrate, but all the AI inner workings are in there now; mostly a few future-world things and other immortality ways are not in it, but no biggy. I'm going to begin inviting tonight. I'll leave yous to read it while I take a breath, getting back to my PPM code and perfecting it before implementing harder AI onto it. I'm going to invite a few tonight, maybe 10; do understand the ideas in it should get some safety handling, hence I'm being extra cautious by inviting only 10 or 20. Yes, there are some big paragraphs, but that's just the first top and end sections, for now... the rest are juicy AGI mechanisms. Yes, even those are a bit chunky, but hey, it's a 1-man band ;p ... at least I'm this far at age 25!!
Title: Re: Releasing full AGI/evolution research
Post by: HS on February 19, 2021, 06:14:18 am
I'm happy for you.  O0
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on February 19, 2021, 08:14:08 am
You got yourself, but lost the world.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 19, 2021, 08:24:06 pm
What does that mean ivan ?
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on February 19, 2021, 09:38:48 pm
It means that you did what you want, not what the world wants. I just hope you'll manage to link your work to the outer world.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 19, 2021, 09:49:06 pm
All that exists is patterns, all of AGI is patterns, and our very goal is to be a pattern too - life extension. It's a hardcore AGI group, yes; to last there you'll need to be able to explain AGI in a way that not just makes sense, but is wicked and helps prediction/compression.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 20, 2021, 06:18:01 am
I think some of yous don't understand neural net prediction evaluation, so let me explain, including why Lossless Compression is the best tool for it. A dataset of text or images only has patterns in it - nothing else but re-occurring letters/phrases... so when we learn that cat is similar to dog by X amount, it is based on co-occurrences, ex. dog predicts eat, cat predicts eat... shared contexts. So patterns are the key - all there is to using past experience to predict an answer. We count these re-occurrences, and the translation similarity from shared re-occurrences of features. So when you predict, we can measure how well based on how much you can compress the dataset: the more you can find the same letters or same words (cat/dog/rat/horse), the more you can compress the dataset. If you see dog, you can now predict dog>meows. The closer they are, the more they share predictions.
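And here is the whole reason compression measures prediction, in a few lines of Python (the probabilities are made up for illustration): an arithmetic coder pays -log2(p) bits for each symbol, where p is the probability the predictor assigned to the symbol that actually occurred.

import math

def ideal_compressed_bits(probs_of_actual_symbols):
    # arithmetic-coding bound: better prediction = smaller file
    return sum(-math.log2(p) for p in probs_of_actual_symbols)

good_model = [0.9, 0.8, 0.95, 0.85]   # confident and right
bad_model  = [0.3, 0.2, 0.25, 0.3]    # unsure
print(ideal_compressed_bits(good_model))  # ~0.8 bits for 4 letters
print(ideal_compressed_bits(bad_model))   # ~7.8 bits for the same 4 letters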
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 21, 2021, 07:07:37 am
@Korrelan / and others: if our sensory predictions are not accurate each time we predict the next letter/word/other, isn't that why our motors jiggle when I hold my hand up? I envision where I want it, but there is body error, prediction error, error in the sense-to-motor association, and error in what I recognize when my eyes are open - if I shut my eyes I get big error: my hand will drift away and I will not be able to correct errors to keep it in one spot.

I assume our eye microsaccades are just this: small errors, just low-speed motor saccades. Our whole body jiggles; we can't keep anything still.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on February 21, 2021, 11:34:23 am
Do you mean shaking (adult) or directed motor babbling (child)?

There can be many causes for shaking, being cold, stressed, old, hungover, etc.  It's caused by small timing discrepancies in the feedback networks assigned to the opposing muscle groups.  Shaking has nothing to do with prediction.

Directed motor babbling is when a young child moves/ waves its arms and legs as it learns its body plan relative to self.

Your sub-conscious already knows the muscle movements required before you are consciously aware, it may make adjustments on the fly,  ie catching a ball, but they are fluid... not shaky.

Micro-saccades are not errors, they are guided and enable the retina to extract more information from the scene.  It's not what the eye senses that counts, it's what doesn't change between saccades.  You don't 'see' with your eyes, your perception of vision is a model/ hallucination created by your imagination, your sub-conscious is guided/ grounded by your eyes.

 :)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 21, 2021, 06:48:26 pm
Korrelan, all that is known/written in my Guide, even how we pay attention to the start and end of brightness change (lines) and, in video, the start/end of sound volume change. The whole image can be captured/stored as lines; it still looks the same, really, and using that, a machine can check whether colors/shades are filled in properly.

But korrelan, the baby starts off randomly wiggling its motors; later in life it is doing [mostly] learnt motors, and each time it does learnt motors it will tweak them depending on how confident it is. Hence adult limbs, and eye saccades, have very little exploration 'shaking'. Try it: try to hold your head still - it will jiggle all over the place a tiny bit, and so will your eyes. These are not micro-saccades; they are just learnt motors with some tiny tweaks still applied, to explore better actions. Even if most shaking from this is nearly invisible, it should still be present in any action being done, including 0-speed motor actions to stay still.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on February 22, 2021, 12:23:24 am
Quote
Korrelan, all that is known/ written in my Guide

So why did you ask?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 22, 2021, 02:26:13 am
To tell yous / get your opinion... it seemed cool that it highlights that microsaccades are expected; they're just tiny motor exploration, like the rest of our body jiggling very slightly...
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on February 22, 2021, 03:49:27 am
Have you worked out / have any data on "epiphanies"? (Searching your mind hard for a long time, then - often with a burst of oxygen - a great summation/realisation of new data)

I feel like this is how all prediction systems work, but wanted to see if that's what you think or whether it's separate

If people only use a small % of their brain regularly there must be simplification, eg all spiders into one group, then all insects into one group, then all moving things into one group.. then you only operate on the top level and analyse moving things vs non-moving things, places you can go vs not go, danger vs not danger...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 22, 2021, 05:37:02 am
Predictions are based on lots of data: if you see "cat ate", "cat ate", etc. lots more than "cat sleep", then the probability is higher that "ate" follows cat, allowing you to predict the next word better. Searching is to collect data. Sometimes you use booster nodes that already have a lot, so if you need little data, you will quickly assume something.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on February 23, 2021, 03:22:54 pm
https://www.youtube.com/watch?v=vsdW2soiFXE

So young and still not shaky...

https://www.youtube.com/watch?v=7gphiFVVtUI

Humans are amazing... shaking is not prediction related.

 :)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 24, 2021, 01:32:34 am
Korrelan, humans learn many complex motor sequences, but to do so they need to start off randomly moving their arms etc. like a newborn, to learn what moves them forward to crawl (accelerometer reward), etc. When they are older, motors tweak way less, so you won't see random movements. Also, they are not shaking; they are just poor movements, like how the baby you linked us to is wobbling his arms at the drums in a somewhat uncoordinated way that could be better. Lastly, adults join known movements to make new ones - you may mean this; yes, we can learn new sequences faster by joining existing building blocks.

Motor error is due to recognition error, body redundancy issues, sense<>motor association incorrespondence, and prediction error. Prediction error will cause some slight tweaking - yes, it should. And sometimes you want to predict one sense and don't have great motors for it, ex. you predict moving your arm in a circle from memory but have only up/down/left/right moves learnt, and the cerebellum is only allowed to adjust them by some amount... unless it fully changes actions... I mean, you can't always bet that the predicted vision shows where to move motors; you may only have sound memory, so you must link the sound to the motors that were tried when you stored them long ago.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 24, 2021, 08:31:53 pm
Over the next few years I'll post on this forum code updates of my work as I implement the ideas I have written about. Really my work is very similar to the field's, but I believe it is better understood by me.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on February 26, 2021, 05:07:52 am
So young and still not shaky...

That boy was programmed to hit the drums at specific times... Judging by his eyes he has no interest in the music or pleasing anyone, he just wants food, water, shelter.. and if he doesn't play the drums right he'll fear not getting that... so he hard-programmed himself "when i hear this, i'll do this, or i may not survive"
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 26, 2021, 06:13:35 am
The use of "shaky" here is overdone, I do not mean it will look shaky at all, it is the unoticiable tweaks of stored actions that make you never do the same dance, or clap, or walk, so in case you will learn how to walk faster.

Note vision is the main source of action control: vision of limb x is linked to leaf limb actions, or it could be vision of a chair which triggers limbs - it doesn't require that you saw the specific trigger/desire. And as said, if the body is lacking WD40 (e.g. a bad body), or has learnt motors that don't totally fit the predicted vision (ex. you predict a 45-degree limb motion which matches to an upright move, which is all it has), or the recognition of the vision input is not so good, it will cause errors which look like tiny jiggling, due to variations in decisions. Again: WD40 (bad body), recognition, association, and prediction sensory error.

Also, I mean the senses predicted get a tweak; you never pick the best path every time, you pick next-bests some of the time (ex. 1/9th of the time, well, according to the probability learnt). Motors therefore appear to do the tweaking but are only a side effect. So if you see something but don't accurately recognize it, you will try to use the actions that should be the right ones, given the body you have.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 26, 2021, 07:25:02 am
Lol, I was running my 110 lines of code (the one that compresses 100MB to 21.4MB in 10 hours by predicting the next letter) and, trained on just 1MB of data, it is saying new things not in the training data, lol, and you can tell it rambles, see. I tried searching for even just 3-word matches and they are not in my dataset, so it made lots of them up haha! "finds history of Aristotle's" - and not even "history of Aristotle's" is in the dataset!


 <username>Dpbscrease the same roversion script</ip>
      </contributor>
        <username>RoyBoy</username>
       <contributor>
      <comment>fix double redir</comment>
      <text xml:space="preserve">#REDIRECT</comment>
      </contributor>
        <username>LAwh_t%uFFFDfederates Secretary of Warrier mean economical science&quot;the following the expansion of Opportunity Bill is a rectoral and dominance of  [[Hopulate help other language of item.

The [[International Museum.cfmbers of Secretary of the National People's Assembly 121:%u0410%u043B%u0430%u0431%u0430%u043C%u0430]]
[[fi:Aristoteles]]
[[Category:Members of the common in Pariginal may also be the scenes of Warren poses S. Grant]], is philosophy, instead. This is perhaps when the east of the global measures will of the main articles criticized him finds history of Aristotle's extant works)amp;sup3;).

===Filmirates, while Burchism and a red yoyment)|Rederacy]] to [[Academy Award for Best Assissippi River]].  There is also a book''
* (39:1862) The Lincoln National Conversion script</ip>
      </contributor>
      <comment>fix double redirector]]</text>
    </revision>
  </page>
  <page>
    <title>Academy Awards/reviously gains in the Contributor>
      <minor />
      <comment>/* Altruism]].

==Altruism in pray:* [http://www.just members of careful [[majurmply of [[hyears of our the both [[portation in Algeria|Party, spoke nation, but this ment>
      <text xml:space="preserve">{{Other father's In [[New Teorg/=osed mersity]]
*[[Justing the welfare of the mind, as independent upon them. The Moochers==
Spanish rock]] to preserve">#REDIRECT [[Lating County, and location Nathaniel Branden
 | title=Aristotle's to find this next xml:space="preserve">#REDIRECT [[Academy Award for Voups. The [[English]] 96.7itish [[The Rand are stotal adults (1994)
*[http://www.gutenberg.org/etext/128000,2003-04T02:11Z</timestamp>
      <contributor>
      <comment>removations''. With environmental disease labelley, and reduction.&quot;

Since became a public]] grative disorder]] in the death of Andorra]]</text>
    </revision>
  </page>
  <page>
    <title>AlchemY<text>
    </revision>
  </page>
  <page>
    <title>AbdomeN</title>AaAlthough Aristotle that some people presidency trademanck dialogues and [[Secretary F a few | first = Hadje dama:%uFFFD%u043E%u0434%u043D%u043A%uFFFDo]], than 7. In terms and but were subtle>
    <id>251</id>
        <username>Tolicy.&gt;[[344</id>
      <comment>[[WP:WSiffic Davislim to the goal habet|yant of Amiss [[On Sall cause of drion</comment>
      <text xml:space="preserve">#REDIRECT [[Economy of Atlas Shrugged|section]] 161.

In Nathaniel Branden]] been criticised or the Academy Award for Documentary for mothers Shrugged|Norway.  Pnting on the Writings of the tradition or &quot; Benjamin Rose political ideas on other particles rect</comment>
      <timestamp>
      <minor />
      <id>15899081</id>
      </contributor>
      <comment>
        <username>Koyaanis Qatsi</username>
        <username>Maveric149</username>
        <contributor>
        <username>Brianhe</username>
    <title>AfghanistanPeople. The left-wing]] and [[Chris Matthew Sciabarrassination, a revolution with autism Department embership of Atlas Shrugged|Patrick Henry University]] philosopher is the followed him, his is the entinament]] of the [[petitions of individual, and lescent, and [[zhing on script</ip>
      <contributor>
      <minor />
      <text xml:space="preserve">#REDIRECT [[Adoln everml:space="preserve">#REDIRECT [[Demographics==
[[Image:Abu Dhabi stamp>
      <contributor>
        <ip>Conversion script</ip>
      </contributor>
      <minor />
      <comment>Automated coming the United States]] aros|Leo Tolstoy]], many ute with the [[universal link por>
        <ip>Conversion script</ip>
      </contributor>
        <ip>Conversion script</ip>
      </contributor>
        <username>Spe.Hitlercations From long stories about this study has started about 1865 Bywher, Ayn]]
[[cs:Abraham Lincoln in a decreed upon by quit in [[ar:%u0623%u0628%uFFFD avoid be in url=http://www.mediawiki.org/xml/example. Ierong with adequately signfiction====
* ''Pacific Railways the Tood moral government of the capital, was often vice for furthe is, but &quot;ref&gt; [[Oran]]
[[ru:%u041B%u0438%u043D%u043A%u043E%u043B%uFFFDatric savaries.

==Other and the great distinct see word to make her busine, a [[let upon the time of the thirteent of [[Academy Award statistics: More of method (mynally endication about [[3%uFFFD Ari: %u63A8%u624B) (1992)

=== Sp;gt; (1969)
* (268a) [[On the Heavestment had cented in the 1997.%u0B85%uFFFD xml:space="preserve">#REDIRECT [[Action movie]]</text>
    </revision>
  </revision>
  </page>
  <page>
    <text xml:space="preserve">#REDIRECT [[American script</ip>
      </contributor>
      <comment>#REDIRECT [[Adolescendants sacretle's life. Alabama at Hull Library and sexual, [[pl:Ayn Rand''' ({{cite book
 | last = Branden's website.&lt;ref&gt;[http://www.capmag.39314023</id>
      <timestamp>2006-03-05-171istic      </contributor>
       = %u0391%u03C1%u03AFds/MIm.13</id>
    <revision>
      <id>1589903 of War [[Z</timestamp>
      <contributor>
        <id>7279</id>
      <timestamp>2002-02-25T15:43:11Z</timestamp>
      <contributor>
        <ip>Conversion script</ip>
      <contributor>
      <text xml:space="preserve">{{otes of opposition to something [[fig]]s, [[Places in Atlas_Shrugged|section]] 114, 121, 132 both conversion script</ip>
      <comment>*</comment>
      <text xml:space="preserve">#REDIRECT [[Characters in Atlas Shrugged|section]] 121.

Initle>Academy Awards (BAFTAssed something aspectivism]] and [[Risas-Nebras use of the assist today were nesell off as Treasury]] with them. quired to be less an &quot;Achilles' is an [[Indy Meveryone abhese resource guided Some Especified = ISBN 0-87 on the fact times the question 'was and it anarchy into the follow particular causes, and since and herself; different albedo at the Lion and However, in love who believe a devoted to its parts, chance of this efforts originally or business. Writes]
*[[University at Phensome       <username>Tzarcha-Fes greater research]] rather than movementions are base state.&lt;fontributor>
       <id>41949335</id>
    <revision>
      <id>4194933500 me proffort. As search]]
[[pt:A]]
[[ro:Algeria|Conversion</comment>
      <text xml:space="preserve">#REDIRECT [[Geography of work out a revolutionary industrial Revolution'' (edited by [[Infoshop.org|was deliversity]
*[[Anti-Racist Actorst = Virginia word; For />
      <comment>I TAI.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on February 27, 2021, 08:36:40 am
Locksuit at the rate you're progressing, it won't be long before we can't tell the difference between the posts that you write and the posts that your software writes.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 02, 2021, 03:59:07 am
79 seconds to run my good old code on my $2,000 computer I built in 2016 or so
47 seconds for my $1,000 laptop....

It was 100,000 bytes I ran it on.

My tower processor is capped a bit; I made it not allowed to go into turbo mode, maybe that's why. Also my old computer has a $500 GPU, but I'm using the CPU. And my laptop was originally $1,600.

So in the end they are close in power/$, and the laptop is 'doin better'.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on March 02, 2021, 12:25:27 pm
https://www.rust-lang.org/ (https://www.rust-lang.org/)

There are much better ways to make your software go faster than buying slightly faster hardware.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on March 03, 2021, 07:11:30 am
One way to multiply word search by ~10-26 times is by having a tight process between looking for the first letter and going to the next data item. Assume most first letters in word lookup are not a match...
Title: Re: Releasing full AGI/evolution research
Post by: infurl on March 03, 2021, 07:39:13 am
One way to multiply word search by ~10-26 times is by having a tight process between looking for the first letter and going to the next data item. Assume most first letters in word lookup are not a match...

MikeB, reading your posts it sounds like you are using linear search which is never going to be fast beyond trivial cases. At the very least you should be sorting your data and using a binary search on it. Linear searching works on average in time of the order n/2 where n is the number of items. Binary search operates in time of the order log n to base 2. For a million records that's 500000 tests versus 20 tests. You should be able to do what you are doing in microseconds, not seconds.
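For example, here is a minimal sketch using Python's standard bisect module (Python purely for illustration, and the vocabulary is made up):

from bisect import bisect_left

def binary_contains(sorted_words, target):
    # ~log2(n) probes instead of scanning ~n/2 items linearly
    i = bisect_left(sorted_words, target)
    return i < len(sorted_words) and sorted_words[i] == target

vocab = sorted(["apple", "banana", "cherry", "date", "elderberry"])
print(binary_contains(vocab, "cherry"))  # True, found in ~3 probes
print(binary_contains(vocab, "zebra"))   # False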
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on March 04, 2021, 07:49:54 am
One way to multiply word search by ~10-26 times is by having a tight process between looking for the first letter and going to the next data item. Assume most first letters in word lookup are not a match...

MikeB, reading your posts it sounds like you are using linear search which is never going to be fast beyond trivial cases. At the very least you should be sorting your data and using a binary search on it. Linear searching works on average in time of the order n/2 where n is the number of items. Binary search operates in time of the order log n to base 2. For a million records that's 500000 tests versus 20 tests. You should be able to do what you are doing in microseconds, not seconds.

I use a highly optimised linear search because I wasn't really able to sort the data in the past... It's a quick "one char/wchar binary check and move on" with no other CPU cycles, but I will look into half-interval binary searching.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 06, 2021, 03:21:21 am
New code! I just started back at my ol' 800 lines of hardcode ~15 days ago.

You can just take a look if you don't want to run it. There is no gradient descent etc.; it is fully understood AI.

My new code is ready. It is 101 lines of code and can compress 100MB (enwik8) to 21.4MB, though I only tested this version up to 1MB and got 251,148 bytes (which is really good; Shelwien's, which achieves 21.4MB by comparison, gets 256,602). It is in Python. To use it: run it, and it tells you at the bottom how many bytes the compression is; currently I don't have the bin or evaluation fixed up, so bear with me. Paste the long number you get at the top of the code after the "0.", then lower the 50,000 to ex. 49,000 and run it, including changing the input2 file dataset to only "<mediawiki ", and it will regenerate it all back in out.txt. Place both files I give you in the same folder, BTW.

There's still room to improve it. My exponential function is, for now, a chain of simple else-ifs, and I only did layers. And my "length of a set of predictions = roof" is a bit wobbly still; some lengths of candidates get different changes to the roof. As well, the global weights are a bit static-feeling, but they seem fine for now.

Change the extension type from .txt to .py for 2nd file:
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 06, 2021, 07:50:48 am
Just hours after I released my code I got it down from 251,148 bytes to 249,260 for 1,000,000 bytes in. Still working on it.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on March 06, 2021, 08:08:18 am
I took a look at your code to make sure it was harmless and tried to run it. It requires Python3 because of the embedded unicode characters, but it also didn't like the 'ansi' parameter that you were passing to the file reader. I expect it needs some environment variables set or some libraries installed that I'm not using and I wasn't going to waste time trying to track them down. I may try running it again when it is a bit more polished.

For comparison, the usual sort of utilities for compressing text such as bzip2, xz, and 7zip compressed that data file down to around 290 kilobytes so the numbers that you are claiming could be impressive, assuming that you really can restore the file. Also note that they only take a few hundred milliseconds to do it, and that they often achieve compression ratios of ten percent, so it's quite possible that your program is getting such good results because it is overfitting and will die horribly with different input data. Try it with lots of different test inputs, and also try it on some much larger files before you start feeling too triumphant.

So, when can I expect your program to be able to file my taxes for me or hold an entertaining conversation? I don't imagine that it will be able to restore your fast withering telomeres until some time after that.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 06, 2021, 09:24:14 pm
I'm not sure of any libraries needed, my code should run as is in python3. I use PyCharm IDE.

Yes, it works on other files, like BOOK at least. Wikipedia data is quite diverse and is natural human/world data. It's not overfitting: enwik8 is not a solid pattern like aaaaaaa, but also not totally random like abcd1234; the more I can losslessly compress enwik8 (or 1MB of it), the more it shows I am finding patterns in it. As you can see, my code is small; if my code were 1MB I could perhaps be storing the full 1MB in the code and cheating. Yes, my program is "AI" - it isn't something like Burrows-Wheeler Transform or run-length compression; a neural network is a predictor, and all predictors use patterns to predict. Yes, Python is slower by ~10x; my program would take 10 hours for 100MB training. Shelwien's (in C++), which I mostly followed, is ~10x faster.

Not there yet but hopefully I get farther towards AGI.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 07, 2021, 11:16:02 pm
Somehow, I did not ship the code right, one line was wrong, line 46 should be "k = j[0][g]". I swear I tested it before uploading.

My latest record is 1MB > 249,064 bytes. Not released this yet.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on March 08, 2021, 08:34:50 am
Somehow, I did not ship the code right, one line was wrong, line 46 should be "k = j[0][g]". I swear I tested it before uploading.

That's the ghost in the simulation, stuffing up everything we do.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 09, 2021, 02:47:54 am
Indeed, I think it was PyCharm's autosave feature being too slow a second before I closed it :)
Title: Re: Releasing full AGI/evolution research
Post by: infurl on March 10, 2021, 02:36:45 am
It is good practice to create regression tests so you can automatically test your software whenever you make a change. It might be difficult for you to do this at the moment because your software is just a blob that has to be edited to change parameters like inputs and outputs, but if you were to learn how to divide it up into functions you would find it was a lot easier to work with and you could do things like implementing regression tests.

You started using paragraphs recently and people are more inclined to interact with you because of that. Just think of functions as paragraphs for computers.  ;)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 12, 2021, 09:10:36 am
Uploading new code; yous here should see a score increase, since my code above is suddenly much older now, and I made the code smaller and less work to use.

Now what you do is run it, and when you want to decompress you change the top-line input.txt to input2.txt and, as I always do, lower the range loop near the top from 10000 to 9000 (it runs all the code that many times). I'll upload the new file to put in the folder; make sure you have in the folder input.txt, input2.txt, and the python file, and if you really want to make sure it works you can, before the 2nd run, delete the decompressed.txt it generates.

Change the code file type to .py BTW.

Omg, this forum won't let me upload the same-named files ever again, please fix that. Renaming all 3 now... change the 2 inputs to input.txt and input2.txt --- input2 is the small one BTW!


My score tests are below. I think my code needs to normalize somewhere, cuz it seems to do a bit worse than it should, since it gets better input>compressed ratios when given more data:
MY BESTS
10,000 bytes in
3,423 bytes out
Shelwien's Green: 3,453

50,000 bytes in
15,405 bytes out
Shelwien's Green: ?

100,000 bytes in
28,457 bytes out
Shelwien's Green: 29,390

1,000,000 bytes in
248,842 bytes out
Shelwien's Green: 256,602

10,000,000 bytes in
2,288,646 bytes out
Shelwien's Green: 2,349,214

For 100MB:
I should be able to reach 21MB now
Shelwien's Green: 21,819,822
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 13, 2021, 05:50:10 am
Ah, you'll need the compressed.txt in the folder too, cuz it believes it should be there.
Worst case scenario, just make decode = '' instead of the open-file path, then upon run #2 change it back; you'll only need to do this once.

edit: i meant decode = '' sorry
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 15, 2021, 06:47:37 am
uploading new code, achieved 28,115 bytes compressed for 100,000 bytes in; shelwien's gets 29,390 (mine is about 4% smaller). Shelwien's achieves 21.8MB on the 100MB, so mine should clearly be able to reach at least about 21.8MB × 0.96 ≈ 20.9MB.

will improve it by tomorrow probably and add chat ability

maybe i'll film a short video explaining current code soon

follow the instructions for use above; you'll need to change the extension type name to .py, put them in the same folder with input.txt and input2.txt, and change compre to compressed.txt.....
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 16, 2021, 09:40:02 pm
I'm going to put in effort to explain my code, and what AGI is, in this post. If members of my group don't connect sooner or later, they must leave for now, in hopes of narrowing down to closer clones of myself.

The questions we want AGI to answer - solving cancer, etc - are big problems that need lots of data and pattern finding. You could feed GPT-2 800 trillion different questions; you can't just program the correct response to each to solve AI. GPT-2 is only ~400 lines of code and can almost complete the rest of the sentence, image, or song correctly for any input fed in. It is general purpose, like the brain. Check out openAI.com.

The best algorithm on the Hutter Prize compresses 100MB to 14.8MB. This is an evaluation you should use every time you run your AI; it tells you whether you implemented the next part of your algorithm correctly or better. The better it predicts the next letter, the better it can losslessly compress the data, and therefore the better it understands that data.

My code above can compress 100MB to ~20.5MB. I know fully how it works. It's 100 lines of code too. I take a dataset of 100MB of text and start scanning it with a 16-letter-long window, storing every 16 letters of it in a trie tree. I don't store the same root of a branch twice: 'we love food' and 'we love beds' can share the same root branch. Brains don't store the same word twice either; they instead strengthen connections to represent the Frequency of times seen. This strength fades and is eventually forgotten (a more permanent version comes up, keep reading). As my 16-letter window scans and builds a tree or hierarchy, I also have the tree/brain predict the next letter, to send to the evaluation. If my input prompt is 'walking down the stree_?_', I search for an exact match in the tree and get the things seen to come after it in the dataset. So after those 15 letters I may have seen the next letter t 44 times, a 5 times, z 1 time, m 9 times, $ 1 time, ! 1 time, etc. This probability distribution is beginning to be learnt. Now, if I only have 2 possible next letters that I've seen can come next, and have 77 observations, then I am sure I know the distribution.
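Here's a toy sketch of just that store-and-lookup step (a plain dict of context strings stands in for my trie here, so this is an illustration, not my real code):

from collections import defaultdict

def build_counts(data, max_order=16):
    # counts[context][letter] = how many times `letter` followed `context`
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(1, len(data)):
        for order in range(1, max_order + 1):
            if order > i:
                break
            counts[data[i - order:i]][data[i]] += 1
    return counts

counts = build_counts("walking down the street. walking down the stairs.")
print(dict(counts["down the st"]))   # -> {'r': 1, 'a': 1}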

Longer matches are better but rarer in a dataset. If you have 33 different kinds of letters that can come next and each was seen about 1 time, you still need many more observations, so my code resorts to shorter matches: I search the tree for 15-letter matches, 14, 13... I get up to 16 sets of predictions, and I basically stop if by the ex. 4th (4-letter) match I already have enough observations. So each match length, especially the shorter ones, gets some weight, and I mix all up to 16 sets of predictions.

For the no-context set: a I saw appear 47455 times, b 5644 times, z 352... I divide each by the sum of all of them to get a softmax score, ex. a 24%, b 7%, z 3%. Same for the contextual prediction sets. The sets from long matches get less weight when they lack observations, so if ex. 'a' scores 0.37 in a layer and I give that layer 20% weight, then 0.37 * 0.2 = 0.074; if it got 100% it'd be 0.37 * 1 = 0.37. So shorter matches' sets of predictions get more attention.
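Mixing the sets could look like this (a toy sketch with hand-set layer weights; my real code works the weights out from the counts, as described below):

from collections import defaultdict

def mix_layers(layer_counts, layer_weights):
    # layer_counts: one {letter: count} dict per context length, longest match first
    # layer_weights: the share of attention each layer's set of predictions gets
    mixed = defaultdict(float)
    for counts, weight in zip(layer_counts, layer_weights):
        total = sum(counts.values())
        for letter, count in counts.items():
            mixed[letter] += weight * (count / total)   # layer softmax * layer weight
    norm = sum(mixed.values())
    return {letter: p / norm for letter, p in mixed.items()}

layers = [{'t': 2, 'a': 1}, {'t': 300, 'a': 90, 'z': 10}]   # long match, few counts
print(mix_layers(layers, [0.3, 0.7]))   # so the shorter match gets more attention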

So to sum it up: from many past experiences, given the context, and viewing the context in multiple ways, we know which next letter is Probably going to appear most of the time, and when it would be z (once every 1,000 times you see the context hell_?_, it'll be z).

The more data it trains on, the more accurate this network is - it's so fun! It improves as expected if you plot it per every 100 bytes or so you feed it. 10MB is better than 1MB.

For a set of predictions, letters that have many counts get even more, but never reach perfect either; this is the exponential neuron threshold function. So for a 43646, b 45, d 76, e 3, z 2... a gets even more: it thinks a is 0.98 likely to be the next letter, but it won't go to 0.999999. The S curve shoots up fast as it decides yes or no, but then levels flat before reaching the top of the box.
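One way to get that S shape is a logistic curve (just a sketch; the exact curve my code uses is different):

import math

def sharpen(p, steepness=8.0):
    # S curve centered at 0.5: strong predictions get pushed up fast, then level
    # off so they never quite reach 1.0 (and weak ones never quite reach 0.0)
    return 1.0 / (1.0 + math.exp(-steepness * (p - 0.5)))

for p in (0.1, 0.5, 0.9):
    print(p, '->', round(sharpen(p), 3))   # 0.039, 0.5, 0.961; renormalize the set after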

I do this for layers too: if the 8-letter match has enough observations and needn't include the 7-, 6-, 5-letter matches to get more predictions, and the 8-letter layer is really sure, then I give more weight to its set of predictions.

I also set a global layer weight manually for now, so ex. 5 letters gets 30% weight; I cut it in half, so to speak, to let it decide on its own whether the 5-letter set is sure enough or not.

I use letter recency to improve next-letter prediction. I may have seen z appear 55 times in my life in this dataset and a 46457 times, but if z was seen just 100 letters back, it feels like I saw z 5000 times. It'll fade fast back to 5 times, but it makes me expect the next letter of zzzzz_?_ to be a or z. This is yet another pattern: it merges energy on neurons to boost them for now, like we merge counts on a connection and branches in a network and threshold-pool yes or no. I do it for layers too: I take the last ex. 4 letters, search the last 300 letters for those 4 letters, collect all the next letters, and boost those predictions, including this in that layer's set. It helped lots.
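The recency boost as a toy sketch (the exponential fade here is my stand-in; the fade shape in my real code differs):

from collections import defaultdict

def recency_boost(data, pos, context, window=300, fade=0.99):
    # scan the last `window` letters before `pos` for `context`; letters that
    # followed a recent match get a boost, and older matches fade away
    boosts = defaultdict(float)
    start = max(0, pos - window)
    for i in range(start, pos - len(context)):
        if data[i:i + len(context)] == context:
            age = pos - (i + len(context))   # how many letters ago the match ended
            boosts[data[i + len(context)]] += fade ** age
    return boosts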

The evaluation makes a shorter 'long number' (ex. 0.568654564745...) if it predicts better. This number corrects predictions to the desired one for lossless extraction of the dataset back again; you run the same code to de-compress it letter by letter. It predicts ex. p lots, but the next letter is o, so you store a correction, and it's more costly the more its predictions are wrong. This long number, ex. 0.7856856856, can be compressed further by converting it to binary, ex. the 8 bits 01110100, because 8 bits can hold up to 256 combinations (0-255). Then this binary number is stored as bytes and you get Ty$3!74hd54sHJ8$0)3df in the final compressed file.
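That long-number trick is textbook arithmetic coding; here is a toy float version of it (a sketch only - real code, mine included, carries locked digits out as it goes, since plain floats run out of precision after a few letters):

def arithmetic_encode(data, predict):
    # predict(prefix) -> {letter: probability}, summing to 1.0
    low, high = 0.0, 1.0
    for i, letter in enumerate(data):
        probs = predict(data[:i])
        span = high - low
        cumulative = 0.0
        # carve [low, high) into one slice per letter, then zoom into the slice
        # of the letter that actually occurred; good predictions = big slices
        # = the interval shrinks slowly = a shorter number to store
        for candidate in sorted(probs):
            if candidate == letter:
                high = low + span * (cumulative + probs[candidate])
                low = low + span * cumulative
                break
            cumulative += probs[candidate]
    return (low + high) / 2   # any number inside the final interval remakes the file

predictor = lambda prefix: {'a': 0.9, 'b': 0.1}   # a dumb fixed predictor
print(arithmetic_encode("aab", predictor))        # -> 0.7695

Decompression runs the same predictor and just checks which slice the stored number falls into, letter by letter, which is why nearly the same code works in both directions.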

As I said in my work but have not yet implemented, like other AIs use, translation is also used to recognize the prompt and get more predictions. Cat and dog share the same predictions, so if you normalize this data you can see dog and cat are very similar - of all predictions, they share 80% of the things they each predict. This lets you recognize long questions and know what letter comes next, because you get lots of matches, ex. 'I was eating ?' matches the memories 'we were swallowing P', 'I was devouring P', 'we then ate P'.
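A crude sketch of that normalized-overlap idea (not word2vec itself, just counting shared predictions between two words):

def similarity(word_a, word_b, next_counts):
    # next_counts[word] = {following_word: frequency}; two words are similar
    # when the things seen to follow them overlap a lot
    a, b = next_counts[word_a], next_counts[word_b]
    shared = sum(min(a.get(w, 0), b.get(w, 0)) for w in set(a) | set(b))
    return shared / max(sum(a.values()), sum(b.values()))

counts = {'cat': {'ran': 4, 'ate': 5, 'sat': 1},
          'dog': {'ran': 5, 'ate': 4, 'dug': 1}}
print(similarity('cat', 'dog', counts))   # -> 0.8, they mostly share predictions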

Like Blender and PPLM etc (though I never yet implemented it), reward steers prediction: you can influence the prediction some % toward a letter or word like love, food, sex, kissing, AI, or immortality. Through translation it can leak reward chemical to related nodes to learn new, better predictions that achieve/see/predict the desired result. It's all based on cause > effect, statistics. Matching/merging is counting/math. The brain stores patterns efficiently and runs fast because of that.
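Steering could be as simple as this kind of nudge (a toy sketch in the spirit of PPLM; the reward dict is hand-set here, not learned):

def steer(probs, reward, strength=0.2):
    # boost rewarded letters/words by some %, then renormalize to sum to 1
    steered = {w: p * (1.0 + strength * reward.get(w, 0.0)) for w, p in probs.items()}
    norm = sum(steered.values())
    return {w: p / norm for w, p in steered.items()}

print(steer({'food': 0.3, 'rocks': 0.7}, {'food': 1.0}))   # 'food' gains share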

BTW, you recognize stretched/bigger/rotated objects because if each part of the object's lines is stretched or rotated by the same amount, there is no error. Ex. h--e--l--l--o is 2 letters off for each gap, totalling 8 error, but because each is off by the same 2, there is only the base error of 2 - it's a pattern. If we had h------e-l--l-----------------------------------------o, this will not "look" like hello; there is no clear pattern saying it is hello, it is random.

And multi-sensory: a brain tries to efficiently capture the world's data; it can't brute-force simulate everything and every atom. That would work if it could - to find all passwords, one of them is the answer, and it could make all possible dead bodies by trying every arrangement of particles in a simulation - but it's too costly. So brains use vision, sound etc. to capture snapshots of the world; to get a spectrum, more eyes and more diverse sensors capture the distribution faster. Same for more AGI agents.

There are many more patterns, but they are rare, ex. "Tom Ron Clark has a mom named Jane Bane ? (CLARK!)" and "rat wind mice, home scarf building, loop frog tunnel, pizza sun food, ant gum ? (BUG!)". This is actually a triple match translation and an energization of it, and it predicts a translation haha. A brain can learn new patterns just BY reading data/patterns. Using my few big golden patterns above you can get IF-THEN rules, and it can therefore build other rules from context>result prediction, see? It's an if-then machine. It models reality.

We also make our homeworld into a fractal pattern so we know where, when, and what everything is. We organize things into merged patterns: everything square or circular, no odd errors, homes lined up and square - aka stacking counts/occurrences, see? Group similar buildings together, like food stores, medical centers. And the same for timing things. It allows us to predict more accurately, and therefore survive longer. All we do is seek life extension (a structure that repeats in pattern or lifetime, a metal block or cloned pattern) by means of food, sex, home, AI, cryonics etc; we clone ourselves and force our schooling/beliefs upon kids. AGIs will quickly clone their brains directly like cells do, unlike atoms that emerge on their own. It's beneficial to clone your brain so you can help yourself do the many things you wanted to do, in parallel. We use patterns in the brain and the world to BE a pattern. Nothing else exists but patterns; a rock/toaster/human are just evolved machines. We seek immortality, and we lie that we are special SO as to extend our lifetime; we fight to live longer, rocks simply don't.
Title: Re: Releasing full AGI/evolution research
Post by: WriterOfMinds on March 16, 2021, 10:51:39 pm
Quote
[The lossless compression evaluator] corrects predictions to the desired one for lossless extraction of the dataset back again, you run the same code to now de-compress it letter by letter. It predicts ex. p lots, but the next letter is o, so you store correction, and it's costly the more its predictions are wrong.

Thanks to this bit I FINALLY get how your probabilistic predictor and the lossless compression evaluation are tied together. I read your summary.txt and skimmed the Guide, and it was never clear from either of those. I kept thinking "all these prediction processes he describes are going to be imperfect/lossy ... what does this have to do with lossless compression."

In my own words, I would describe it this way: You aren't using the direct output of your predictor to make the compressed file. You run prediction, find out where its output fails to match the original data, then create the compressed file by storing the corrections for all of your predictor's mistakes. You reconstruct the decompressed file by running your predictor again and substituting the corrections in any place it made a mistake. And this functions as an evaluation of your predictor because the compressed file is bigger the more errors the predictor makes.

The summary.txt version is "You have to include your algorithm size and tally up all the corrections for non-100% accurate predictions," which didn't get the idea across.

My general impression is that you're putting some good thought into this, and the ideas could indeed be useful for building a better text predictor. I am also glad that you are now doing your own implementation work and actually testing your ideas. If you are determined to work on a team, the next thing I would recommend investing in is communication skills.

Though I think you might have some good ideas for a text completer, I continue to be more fond of my own "cognitive architecture" type of approach. I am still extremely skeptical that the mere ability to predict the next letter in a textual dataset is equivalent to "understanding" the dataset in the same way humans do. I am also still repulsed by the reductionist, nihilistic philosophy that you keep mixing into your AI -- I for one care about more than survival, food, and sex, and I think turning everything into easily predictable fractals sounds kind of boring, so there is little in your plans to satisfy me. I'm friendly, but uninterested in being anyone's clone.

So, you do your thing and enjoy your own accomplishments, and I'll do my thing.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 20, 2021, 11:40:19 am
Made the code 124 lines, nearly as fast as before, re-coded one area more neatly, slightly improved the score, and added generation ability. Had it ready like 2 days ago but got delayed by life...

--------------------------------------------------------

https://encode.su/threads/3595-Star-Engine-AI-data-compressor

--------------------------------------------------------

Star Engine - AI data compressor


I named my AI after unstable stars and atoms, which gravitate matter in to "compress it" and then, once too large, extract it back out as radiation to "generate new insights". It's currently in python (~10x slower than Green, hence ~12 hours for 100MB training), uses lots of RAM, and only outputs binary '01010101' instead of fully compressed 'Y', but I just started implementation and know how to fix all that.


EVALUATION RESULTS (compare to Hutter Prize and Large Text Compression Benchmark champions):
10,000 bytes in
3,328 bytes out
Shelwien's Green: 3,453

50,000 bytes in
15,174 bytes out
Shelwien's Green: ?

100,000 bytes in
28,028 bytes out
Shelwien's Green: 29,390

1,000,000 bytes in
244,494 bytes out
Shelwien's Green: 256,602

10,000,000 bytes in
[old] 2,288,646 bytes out
Shelwien's Green: 2,349,214

100MB bytes in
I estimate I "can" get ~20,400,000 bytes out
Shelwien's Green: 21,819,822


NEXT LETTER PREDICTION RESULTS (compare to the size of data that would be needed to cheatingly get the subjectively correct following 500 letters for a given prompt):
FOR 10,000 BYTES TRAINED ON:
The girl was sitting on Wikiquot;[http://www.lewrockwell|Ramp>
<contributor>
<text xml:space="preserve">#REDIRECT [[AlECT [[AREDIRECT [[Acce key="8">MediaWiki talk</namespace>
<namespace>
<namespace key="-1"-1">Template talk</namespace>
<namespace key="15">C51ist society might wom and prediawiki.org/xml/export-0.3/" '' moChmlers<potkin|Kropotkin]], PeternSChmler, cht w[s0��xpace>
<namespace key="12"1:/timestamp>2002-02002-02-25T15T15">Wikts. [[Bertrand chietikte Wtrand conal[http://uk.end
</page>
<page>
</revision>
</page>
<namespace key="geri.3<c<page>


FOR 100,000 BYTES TRAINED ON:
The girl was sitting on they can confunce (non--&gt;, with this surelCatd, mak.]
The characteristics set maki.org/ Poccurs in the [[M.

It act Lam, ''unism==
{{main|150px|[[hu:Anarchism]]
[[sl:space="preserve">#REDIRECT [[Fory/fEDIRECT [[Afrom the [[Max Stirner]], but be givities}}

==The [[scienti. The authoritarian ar impain when he overl legration that if regoing (189898952</id>
</contributor>
</contributor>
<username>Ams</username>
<id>15898948</username>Ams</username>Josed. of nexchange example, the first manifests t893>A�xinitially preferentify the many ecles|[[Chich ce 19999|Wizely understand me>
<id>7543</id>
</contributor>
<minor />
<contributor>
<ip>Conversion script</ip>
<namespace key="1">Talk</namespace>


FOR 1,000,000 BYTES TRAINED ON:
The girl was sitting on [[copper]] or [[Zeno &quot;repudiated the omnipotence of 0
| align=&quot;right&quot; assumedia.org: The [[bar (lawk=��q.f|Melillage of 14, Andre plays. Par-TV Jaskirport<Plts for its variants from by Shrugged imperiod of Atlas Shrugged|section]] 152.

==San Sebastian Minese: 陳��M.�ju.jpg|thumb|left|Statue of Ayn Rand]]
[[gl:Astrongly replicated by one.

E5*t)#REdoct, rather pervasive death in tre">{|20010
|90 MHz took him for deity asks for in the South Pacific]]'' (glor accumulated &quot;The Book)]], [[Alfreducation system is afa)
* [[PurgBifferency_code=197,�на]]
[[an:Austria]]
[[als:Archeologie]]
[[ru:Арія (крия]]
[[zh-min-nan:Oscar Chióng]]
[[da:Austria (geography of reconstruction:Oscar Christians appeared somethings said to have travel taken from 1
|Colorado]]. The lowere click, On said to have been effective values.&quot; | 60 Metallurgy]]) [[twe_oxaxU.S. state]] Science]]s while that Redge talleged|sections]] 121 and 161.

==BC]]]]
{{main|Anarchow has energy university of Povertyle [[Tih

[[Hollywood]] was interesting


Code Use: Place in a folder: "code".py, input1.txt, input2.txt, compressed.txt. Run the code at the desired length of data to eat; it tells you at the bottom how many bytes the compressed result is. Switch the input to input2.txt and lower the run length ex. from 10000 to 9980, run, then check decompressed.txt. For generation mode, toggle the word generate in the decode section; if on, you simply run the code for ex. 100,000 runs, and if the file is 10,000 letters long with your prompt at the bottom, it'll see the end of the file and start extending it in decompressed.txt by 90,000 letters.


How it works: A tree stores all 16/15/etc letter-long strings as it runs over a dataset; exact matches are stored as counts. Before I save, I search for the last 16, 15, etc letters I see, take the longest match found in the tree, and see what children leaves were seen to follow it and their count probabilities. I take each letter prediction's counts and divide by the total counts for this layer to get softmax %s, ex. 0.4 a, 0.6 b, so they add up to 1.0.

Long matches are more accurate but have fewer counts, so such a layer only gets some weight. If I have only 30 total counts but only 3 possible next letters were seen to follow, then I am more sure I know the distribution, so this layer gets more weight: I do lengthOfPredictionSet * 7 to get the wanted roof of counts needed to be confident, then divide to get the % this layer gets. If it gets 30%, I have 70% left to find in shorter context matches. I also give a hardcoded static weight partially, since I must not have cracked the formula yet. The lowest layer is the no-context set of predictions: simply how common each letter is.

I apply an exponential function to layer predictions, and to blending layers, so it pools its thinking: if a prediction is 0.6% then it's treated as probably 7.2%, and if 9.9%, probably 9.4%; same for the other end, 0.1. Energy is used for recency: if I'm mixing layer 8 at the moment, I check the last 300 letters for the latest 8 letters and make a set of predictions that follow. For this temporary set I give more recent predictions more count, and if I just saw 'p' 1 or 2 letters ago then I don't predict 'p' as much, but do lots after ~3 letters.

For the compression evaluation, I take my final set of predictions and subtract from a high 1.0 until I find the prediction I need to remake the file; the better my AI predicts the next letter, the less it costs to steer it to the correct prediction to remake the file. Once I've subtracted from the high 1.0, I also have a low 0.0, and the space before the last subtraction, ex. 0.7 to 0.65, becomes my new high and low. Repeating this gives me a very long number, ex. 0.8456346856... As I make the number, I carry away and store locked digits, ex. high 0.[763]73 and low 0.[763]112; at the end I store just one number that is in between. This long number is converted to binary, then is supposed to be compressed into letters, ex. 5Ge8$9&(gf@3Nfy. An extra slot is in the prediction sets in case any unseen letter needs to be steered to. Decompression uses nearly the same code as compression.
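The layer-confidence "roof" is the piece people ask about most, so here it is alone as a toy sketch (the *7 is the constant from above; everything else in my real code is stripped away):

def layer_weight(counts, per_letter_roof=7):
    # a layer earns trust as its total observations approach a roof that grows
    # with how many different next letters it saw (lengthOfPredictionSet * 7)
    roof = len(counts) * per_letter_roof
    total = sum(counts.values())
    return min(1.0, total / roof)

print(layer_weight({'t': 25, 'a': 5}))            # 30 counts over 2 letters -> 1.0, sure
print(layer_weight({c: 1 for c in 'abcdefgh'}))   # 8 counts over 8 letters -> ~0.14, unsure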
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 21, 2021, 11:12:04 am
WoM how many lines of code is your whole AI? And what is the goal of making the AI? Ex. cash? Helping humans invent solutions? Therapy?


@Yota, others... Also, many others feel the need to code each rule into their AI like they did in the 60s, lol, but let me try to explain why this is bad. An intelligent brain needs lots of data to be able to answer hard problems. This is because, given the context problem "i was walking down the ?", it must predict the rest of the solution to the question. Collecting 10GB of data lets it know how to answer the question context. I know, you feel the need to force it to check google, ask the user how they're doing, ask for data - all sorts of search, store, and action behaviour - like controlling what it takes as the question and how it will predict the Next Word... BUT, let me explain...

All these central AI behaviours that conduct its chores and do certain actions or [motors] are just next word prediction, based on sensory first. A brain only, and always, predicts what it will do. All a brain needs to do is predict the next word or pixel. Anything you could code it to do will only be to collect data; for example, sightseeing and watching your house get built is also collecting data. A brain only predicts data to determine where to collect data from, and hence what it predicts improves. It also predicts the question itself - it "makes" it pop up in its head, then makes the next word pop up too to answer it.

As said, AI needs to solve many big problems in life by inventing complex solutions. It needs to predict what the true problem is (immortality? or cryonics? or antifreeze? Yes, we specialize our data-collection domain of interest) and predict the solution - both. AI predicts both the true question and the solution.

I'm not sure what else you'd be coding into the AI... why? It has to answer big problems like cancer and energy storage. All you need to do is find patterns in data; the brain learns from history and merges similar words in a network...
Title: Re: Releasing full AGI/evolution research
Post by: WriterOfMinds on March 21, 2021, 04:30:47 pm
Quote
WoM how many lines of code is your whole AI?

Thousands. I haven't counted in a while. Sparse code is not my current goal. Getting something that works is. I can always optimize later.

I am not trying to explicitly code up a giant list of rules. What I am trying to do is create enough of a structural framework that the AI can do generalized reasoning and problem-solving, ground enough words that they actually have meaning for the AI and enable communication, and sketch out an imitation of psychology in broad strokes. I strongly suspect that the human brain is a high-information system ... i.e. if you literally simulated it, it would be ... millions of lines of code? More? I don't expect the behaviors I want to emerge out of a simplistic mush. I need to apply my own intelligence to confer intelligence on a computer.

Quote
A brain only, and always predicts what it will do.

I would say "decides what it will do." Predictions of what will happen ("if I do this, that probably happens") help guide the decision. The other major input would be goals. And to get this sort of predict-act feedback loop going, I would think you'd need a way for your AI to interact with an environment, and distinguish its own output from the environment's input. Crawling a static text dataset isn't the same thing.

Quote
And what is the goal of making the AI? Ex. cash? Helping humans invent solutions? Therapy?

My foremost goal is to have the joy and the learning experience of creating a mind, of bringing a new kind of "individual" into existence. I am writing an AI because I love AIs. To quote myself, since I already explained this to you in a long-ago thread:

Quote
I think I've always taken an interest in AGI because it's an example of the Rational Other -- something that differs from humans both physically and mentally, but can relate to humans on our level. Rational Others simply delight me for their own sake. I do think AI technology has the potential to be useful/powerful/altruistic, and that's part of how I justify the amount of time I spend on it, but my root motivation is not instrumental. Really I just want a meeting of the minds with a non-human ...

I am hoping that my AI could also serve as a kind of useful adviser to humans ... maybe ... if my work ever advances far enough to make that feasible, which it may not. I imagine him as a librarian or counselor of sorts; when I try to think of a fictional AI that he's similar to in purpose, the first one that always comes to mind is http://www.sciencefictionarchives.com/en/collections/68/dr-theopolis (http://www.sciencefictionarchives.com/en/collections/68/dr-theopolis). But again, Acuitas' primary job is to be a (simulated?) personality ... not a tool. It's not like I'm going to demand that the project achieve certain goals and toss it aside if that doesn't work out.

I have a cat. I don't keep her around and pay all her expenses because I expect her to do anything for me, I just love cats. She's my friend. Same deal with the AI.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on March 22, 2021, 04:13:45 pm
There are things worth living for. And there are even things worth dying for. Without the former, our lives would be empty. And without the latter, this world would be an incomplete place for us.

If you pursue eternal life, you might miss some great things that otherwise might be a part of your life. To complete your existence, you have to create. And to create, you have to destroy. If you choose to destroy others, what kind of a man would you be if you forbid everyone to destroy yourself?

Eternal life (or at least not aging) might be possible, but I'm not sure if I want it for myself. It might be tempting, but I like to return what I take. And maybe, just maybe, I want to return what someone very special to me takes.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on March 23, 2021, 12:54:25 am
https://en.wikipedia.org/wiki/Blind_men_and_an_elephant (https://en.wikipedia.org/wiki/Blind_men_and_an_elephant)

Quote
humans have a tendency to claim absolute truth based on their limited, subjective experience as they ignore other people's limited, subjective experiences which may be equally true
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 23, 2021, 05:30:12 pm
"Eternal life (or at least not aging) might be possible, but I'm not sure if I want it for myself."

Our world seeks life extension; some do sacrifice themselves so greater numbers survive. Things that don't seek survival get washed away; all you end up with is indestructible things, because they stay... Our universe isn't random: our world is becoming a pattern diamond in space, and it's soon going to grow. Things that are random don't survive; patterns survive. Patterns are survival; they're Life.

They'll operate in a 3D gravity-less, airless metal world where they can get more done easily. The more things repeat, the better they know where everything is; it'll be like a fractal. They'll be able to know where they are going without light...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 30, 2021, 12:04:38 pm
So, I'm kicking everyone from my AGI discord group now; we need to start over. I'll only work with 1 person at a time from now on. Few even read the rules... it wasn't "that" hard.

I'll have to find a concrete methodology here though, so that we can be on the same concrete slab too. A more gentle but informative approach... Will have to try again. We must, or we will never connect like openAI did. And even they fight a bit, I heard, I think; nothing is perfect. Though I do know the values I seek, and it's pretty important to understand them... We can't have just any differences, in some cases.

Or even simply find the right people; few want to work on AGI, and as a team.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 05, 2021, 09:29:49 pm
Found a C++ and a C generator; the C++ one appears not to be complete for some reason. The one below is easy to use: just download it and click the html link in the unzipped folder. It's called Lisa and looks like a mod of Blockly. I'll let you know IF it works for me.

https://github.com/dineshLL/lisa
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 06, 2021, 12:31:47 am
I wouldn't pay too much attention to C++ if I were you. It really is a Frankenstein's monster and is a completely different animal from C.

C has effectively stolen back most of the good features that C++ actually has and it is easy to learn, even if it is not easy to use safely. If you are looking to step up from Python then you probably should start learning the Rust programming language instead. (Disclaimer: I am not using Rust but probably will soon.)

Also, lose the training wheels, you shouldn't need them at your age.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 06, 2021, 10:02:36 am
My code is 117 lines long now with a bit better compression, and takes 17 seconds to compress 100,000 bytes using Pypy and some optimization. I also added adaptive branches, so it can learn 300-letter-long memories but pre-forgets long ones by not saving them fully if there is only 1 count on their tail in the trie: if it sees counts 3, 3, 2, 0 along a branch, it creates 4 ones and stops, extending it only 4 letters. This also solves the 20GB RAM issue - down to only 2GB now. I'm hiring a C++ programmer now for ~145USD and going to try Facebook's TransCoder too.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 07, 2021, 08:50:10 pm
My freelancer refunded me - not experienced enough and too busy. I think I'll wait though, because so far I've got my 110 seconds for 100,000 bytes fed in down to 10 seconds. Will probably try again later on.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 07, 2021, 09:20:22 pm
I've got a tip for you:

The number of patterns you need to do this is exponential. But that wouldn't be why I say it's not worth doing; it's the fact that you need real-world data, and there just isn't enough of it - you're going to need some procedural generation to fill in the cracks.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 08, 2021, 02:35:46 am
At what point does your software suddenly stop being just a data compressor and suddenly become an artificial intelligence? Is it the difference between taking 110 seconds to run and taking 9 seconds to run? Or is it the difference between achieving a 24 percent compression ratio and achieving a 14 percent compression ratio? Maybe you were hoping that by acquiring the right minions, someone would fill in the missing magic piece for you.

While what you have accomplished so far is ok for a beginner, you will waste your time and money if you pursue this particular design any further. You need to figure out what you're missing, and it's not just that you need to process the right data, although that is a big part of it.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on April 08, 2021, 02:18:19 pm
Korr, you are an expert on NNs. May I ask your opinion: how does Lock's work relate to generative ANNs?

I assume Lock predicts future sequences based on statistical analysis of past sequences. Don't kill me, but I have to admit this heavily reminds me of the GPT model, which can easily be modified to carry on conversations. Thus, all Lock has to do is predict/output the next response based on past request-response pairs. Though I'm a bit skeptical - he may want to upgrade the predictor algorithm - but basically... it should work, shouldn't it?
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 08, 2021, 06:30:05 pm
It can work, but not in its initial raw state; getting GPT to work takes a lot of tricks! Some of GPT is procedural generation, I would say: methods that work on the string information to generate the output, and that's the hard bit!

The pattern matcher isn't the gem of the program in the first place; it's only the basis of it.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 08, 2021, 10:11:24 pm
Interesting comments! I've thought about all the aspects of AGI countless times and am sure I've got the big pattern figured out. All the mechanisms are pattern based; it's amazing, and it explains to me all the things we handle, like recognizing upside-down objects. I've grown to accept the nuance of how things work. All there is in physics is patterns, and that's why we love things/tools that solve the whole puzzle. We generate [new] discoveries by blindly listening to known patterns - to make new solutions, far from just Q-A pairs regurgitated! All deeper patterns /are/ actually based on exact matches, yes - look up word2vec. At the deepest level there are trillions of rare patterns, unlike the Markov Chain way; you get these free by the branch effect, like physics makes all sorts of things using a few if-then rules. Even the act of looking backwards over a sentence is just a prediction, like the markov chain or word2vec algorithms do.

I'm not sure why infurl thinks I'm on the wrong path, because I've got a lot more big things to add and will certainly get down to ex. ~16MB per the Hutter Prize. I'm currently speeding up my python code so I can check how far I am by tweaking the parameters.

All the other NN variants of variants - auto-encoders, GANs, etc - all achieve the same thing: prediction benchmark scores on corpuses of ex. text. Mine is very similar to them all; mine simply doesn't use Gradient Descent, which you may know by the name Backprop.

My algorithm is AI. It is a predictor of the next letter, which is all we do - predict the next part of the sensory stream... this controls motors. AIs are the best data compressors, as shown on the Hutter Prize; Matt says this too. They predict, with probabilities for each next letter, what they think it will be based on past context. This then helps store a smaller number which holds the corrective answer to what the next letter really is; you elongate this number and store it, and it steers the predictions to remake the file. If the data has patterns in it, it can be predicted.

"Korr, you are an expert for NN"
Korrelan says, I think on his website or YouTube, that he is only an amateur, not a professional. If not, you should clarify in what area you mean.

The latest code can be found below. Also there is a timeit, 2 lines of code. You can modify the branch length at line 111 to 7, which is a great trade-off.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on April 08, 2021, 10:23:01 pm
"Korr, you are an expert for NN"
Korrelan says I think on his website or YouTube that he is only an amateur, not an professional. If not, you should clarify in what area you mean.

Lock, let's put aside our community interfacing skills for now. I'm looking at you through a rigid problem-solving capability lens, hoping to find something interesting in your work, and hoping your youth will not stand between you and your success.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 08, 2021, 10:54:07 pm
This man is driven!! It's good that you're optimistic. Maybe, I guess, you could say intelligence can be put in the form of predicting the next letter, but you have to get it right 100% - that's the tricky thing. And more issues. But I hope you get there.

Korrelan is not an amateur; his work is excellent. You shouldn't be so dismissive of other workers; everyone has something worthwhile to add to the situation.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 08, 2021, 10:58:01 pm
Well, it left that impression on me even though I don't really believe it. Maybe he means in the neurology department, as he stated neuroscientist, though that makes NO sense - his work is bio-driven all the way LOL.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 09, 2021, 09:09:39 pm
For 25USD I hired a freelancer to translate 17 lines of my python (the tree storage) to C++. He delivered the job exactly - if only AI could perfectly translate code, lol. So it was only, as expected, 3-4 times faster, though I am using pypy; you can see all 3 results below to compare. Pure python is indeed 10 times slower. All I'd need to get closer is a very fast computer, or to just go with the available parallelism etc I can add on... So it's not so worth it (maybe); I can rapidly work in peace in Python, and when I want, hire a cpp translator. Even in cpp it's not like you can run 100MB in just 10 hours - Green takes 32 hours. So yes, as I say, working on small data shows the same results as larger data; for the rare times it behaves differently on larger data, either bear with it or find the solution to scale across larger data.

with python:
?
?
112 for 10MB
53 for 5MB
28 secs 2.5MB
21 secs 1.25MB

with pypy
?
?
45.8 seconds for 10MB
21.9 for 5MB
8.8 for 2.5MB
4.3 for 1.25MB

C++
100 40MB
32 20MB
16 10MB
8 5MB
5 2.5MB
3 1.25MB

For those that want to see it, here it is below:
add the extension .cpp
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 09, 2021, 09:33:06 pm
If you did GPU code in python, python wouldn't slow it down, because it's transferred onto the video card and python isn't actually being called anymore.

But if you want to go really fast (like 1000 times faster), put it on an FPGA - then you'd be flying.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 09, 2021, 09:56:47 pm
hmm...

Ya, so you can get a faster AI than if you were to program it in Assembly or Machine Code. By crafting dedicated hardware, you will beat a general-purpose computer processor.

https://www.youtube.com/watch?v=vjBsywUSKWk

https://www.youtube.com/watch?v=bwoyQ_RnaiA

https://www.youtube.com/watch?v=_Wdz6fin7SA

I don't think this looks easy... show me some code to program them?
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 10, 2021, 12:01:29 am
It's not much harder than normal programming; you just have to work with individual bits at the gate level, instead of being able to use them all at once at the arithmetic level (normal programming).

Everything else is pretty much the same.

You can do it with truth tables or LUTs (lookup tables), because they actually work that way inside anyway.

Then it's just I->O -> O -> O -> O until you get to the other side, and it all counts as a single operation because it's hardware! That's why it's faster: on a computer you have to run each step one at a time; on an fpga it shoots through in one hit, out the other side, every clock cycle. The clocks are about 10 times slower, but other than that they go pretty well - way better.

There are tonnes of people who know how to do it; you could source a guy to do the fpga stuff for you, the same as for any other language.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 10, 2021, 12:16:44 am
https://bdtechtalks.com/2020/11/09/fpga-vs-gpu-deep-learning/

"%u201CTo program an FPGA, you need to assemble a team of hardware engineers who know how to develop FPGAs, hire a good architect who understands neural networks, spend a few years developing a hardware model, and compile it for an FPGA while facing the problem of reaching high usage or high frequency,%u201D Larzul says. %u201CMeanwhile, you need to have a extensive math skills to accurately compute the models with less precision and a team of software people to map the AI framework models to the hardware architecture.

Mipsology, Larzul%u2019s company, aims to bridge that gap with Zebra, a software platform that allows developers to easily port their deep learning code to FPGA hardware.

%u201CWe offer a software abstraction layer that conceals the complexity that would normally require high level FPGA expertise,%u201D Larzul says. %u201CSimply load Zebra, type a single Linux command and Zebra goes to work %u2013 it requires zero compiling, zero changes to your neural network, and zero new tools to learn.  And you can keep your GPU for training.%u201D"
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 10, 2021, 12:34:03 am
If you put your text database onto the fpga, you would get your output instantly.
So you would have a heap of excess; maybe it's a good idea to do many samples and pick, by some metric, the best of all the outputs - that would be taking advantage of the situation. You could get 100 million outputs a second, so you have to take advantage of that fact somehow to get something out of it.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 11, 2021, 11:11:36 pm
You know, my AI now has three things: short-term memory, long-term memory, and forgetting. Pretty cool stuff.

This is actually still the only forum I've posted to lots; I've barely posted to others, besides the AGI list. I intend to discuss deeper matters in my little tight group sooner or later, instead of blatantly in public. But I'll still do updates, like openai.com.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 12, 2021, 10:43:24 am
I hired a freelancer to translate 17 lines of my python (the tree storage) to C++.

Your C++ coder used Strings... Strings take a long time to process because boundary checking is added to the machine code on every line where a String is processed... Same with Managed Arrays... You have to use Fixed Arrays and manually check each char...

I can try to reprogram it in C++ if you like... I'm taking a break from my own project...

Is the latest version at https://encode.su/threads/3595-Star-Engine-AI-data-compressor ?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 12, 2021, 03:23:41 pm
That'd be awesome if you can fix his code to run faster. At the moment a 3/4-times increase doesn't seem worth it to change over to C++ until I get to large milestones, and hence I'd hire someone only a handful of times. But as you said, it may be faster without vectors/strings. Of course there are other algorithms to do a tree, but the goal was to see how much faster my python algorithm is in C++.

My current code is not the one on encode.su now; it is much smaller and faster and better at compressing larger data. I'll post it below. I can now compress 1,000,000 bytes to 241,559 bytes (I'm still adjusting the parameters today). So since mine is always 1.5MB ahead of Green's, I should be able to reach at least 20.3MB if my parameters are correct for larger data. Mine takes, with the pypy interpreter, ~13 seconds for 100,000 bytes, ~5 mins for 1MB, and 60 mins for 4MB. Green is in C++ and takes 32 hours for 100MB and ~21 mins for 10MB.

I have 4 attachments to add below, hold on. Sorry the names have to be different because this forum doesn't allow files with the same name to ever be uploaded again; infurl please fix that lol. Ok, the names are fine this time, though you'll need to download enwik8 and change the name to enwik8.txt in the program at the top - if I can't add the 4th big file.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 12, 2021, 10:20:39 pm
I have 4 attachments to add below, hold on. Sorry the names have to be different because this forum doesn't allow the same name filed to ever be uploaded again, infurl please fix that lol. Ok the names are fine this time, though you'll need to download enwik8 and change the name to enwik8.txt in the program at top. - if I can't add the 4th big file.

The names have to be different for the same reason that the names of the files in the directories on your storage devices have to be different, that is, so that you and the computer can tell them apart. If you wish to post new versions of your files then you should be using version numbers in the names to differentiate them. At the very least, include a date and time stamp in the name of the file e.g. yet-more-stuff-20210412.junk so that people looking at it don't have to wonder if they already have it or not. Use ISO format dates YYYYMMDD so that the file names sort in the correct order. What you really should be doing though is using github for this. The forum is for discussion, not for storing files.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 13, 2021, 08:48:22 am
That'd be awesome if you can fix his code to run faster

I couldn't get his file-open command to run because it's been deprecated in newer versions... Line: "freopen("enwik8.txt", "r", stdin);" .. So I replaced it with a standard windows file read, inputting into a Fixed Array (a large char array containing the full 100mb enwik8.txt), and it ran enwik8.txt in 567ms with the text output, and only 21ms without the text output... So I'm not sure the code was right... 'Count2' was only set to 50 instead of 10,000/100,000, also.

So I'm just replacing line by line of the full thing in C++... it's not too much code total
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 13, 2021, 04:49:18 pm
   for (int count2 = 0; count2 < 50; ++count2) {
      int node = 0;
      for (int i = count2; i < count2 + 15; ++i) {

The 50 is how many letters of the file to step.
The 15 is the branch/ window size that does the stepping.
15 is essentially what I use in my AI, but yes, the 50 is not eating much; try 1,000,000 for 1MB.

The output for 50 is (either is fine): see attachment.
Though it can be different if needed, of course.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 16, 2021, 12:18:11 pm
I tested just the small loop (as above) in a few different ways and found some things...

The baseline times for the other C++ conversion were:
(without data display)
10: 5ms
100: 36ms
1000: 189ms
10,000: 2.2s

In my version the Tree storage is 'fixed size character array/s' instead of String based. Tree storage is split between header & data, not all in one. Custom character search. "Stop" max set to 5.
(without data display.)
10 - 1,000: 1 - 2 ms. (+/- 1 ms)

I don't have perfect output yet; I'm mostly running into problems with exponential numbers, and stack overflows when going higher than 1000.

Also, all C++ projects run on only 1 thread (1 core) of the cpu (on a 6-core cpu that's 16% cpu) unless you explicitly say to use more threads/cores. So to get max performance you would need to look up the number of cpu cores and divide the work into that many groups; then the kernel/operating system should hopefully divide it per cpu core. Some cpus may be advanced enough to split it evenly and reach 100% cpu... that's if the time taken runs into several seconds/minutes...

There is definitely a lot of extra work for Strings...

I'll play with it over the next few days and try to get perfect output @ 10,000 bytes and see what the time for it is...

Is the header required? This section?
Code
'<mediaw', [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [3, 18, 33, 48, 63, 78, 93, 108, 123, 138]
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 16, 2021, 04:10:36 pm
The output doesn't have to have both numbers and letters in the same list/array, no.

Mine (my FULL algorithm, note!) does 100,000 bytes in about 14 seconds, and the small C++ code my freelancer made seems to be about 3 or 4 times faster.

Ok, so I didn't actually share my tree in python... solo it is of course much faster. I'm on my slower computer and don't want to test it, but it can do ex. 1MB in 10 seconds - though that's a rough guess, mostly from memory.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 19, 2021, 01:10:11 pm
I fixed the output and tested it at 1,000,000 bytes.

This is on one thread @ 2.6ghz (16% cpu usage)...  just testing the small C++ code of your freelancer versus my C++ code..

C++ freelancer:
10: 5ms
100: 36ms
1000: 189ms
10,000: 2.2s
100,000: 24.5s
1,000,000: out of ram error @ 1.5gb

C++ mine:
10: 1ms
100: 1ms
1,000: 3ms
10,000: 280ms
100,000: 4s
1,000,000: 92s (200mb stack size. 265mb ram usage total)

The output is the same for both.

The difference is the Freelancer's code uses Managed Strings, which use the Heap data storage type, which is much slower and constantly resizes itself.

My code uses Fixed Arrays and the Stack data storage type, which is faster, but you need to declare how much you use at program start... Even for 1,000,000 bytes analysed it only uses 200-265mb. It increases by ~4.4 times for each step between 10,000, 100,000, 1,000,000... so the next step is ~850mb for 10,000,000 bytes.

If your PC does the Freelancer's code of 100,000 bytes in 4 seconds, then your PC is 6 times faster than mine... so it would do my C++ code (100,000 bytes) in about 650ms.

If you want the output and/or code I can send it.. can't seem to post attachments.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 19, 2021, 02:04:24 pm
Thanks for testing that, that's cool it can be much faster; I wonder if I can do optimizations like that in Python too.

Currently my compression scores are below, and I should be able to reach at least 20.3MB if I tweak my parameters, as there is a pattern among my scores and there is still room to tweak a lot. But I can't on my slow code: it takes 5 hours for 10,000,000 bytes using Cython or pypy. The harder part that would increase my score is the line below. It moves forward as data gets larger - it should, I mean: ex. 9, 8, 7... then on 10x larger data it should be ex. 9, 8.6, 7.5... The first item is the hardcoded weight for layer 1... 2... 3... Obviously layer 1 has up to 256 different letter features, layer 2 has 256*256 possible, so the weight goes down like a sort of exponential curve. This is what I really need to tweak below, but it's hard on big data because I must test one item at a time, item1 adjusted up/down, then item2... I could normalize it, but it seems hard to understand.

 _25ofRoof = (w * [0.9, 0.9, 0.87, 0.83, 0.79, 0.73, 0.57, 0.46, 0.36, 0.31, 0.3, 0.28, 0.33, 0.61, 0.59, 0.53][len(predictions) - 1 - q]) * remaining

10,000 bytes in
3,295 bytes out
Shelwien's Green: 3,453
Byron Knoll's cmix: 2,146 (best compressor on Earth, until 1GB test)

100,000 bytes in
27,899 bytes out
Shelwien's Green: 29,390
Byron Knoll's cmix: 20,054

1,000,000 bytes in
241,348 bytes out
Shelwien's Green: 256,602
Byron Knoll's cmix: 176,388

10,000,000 bytes in
2,219,318 bytes out
Shelwien's Green: 2,349,214
Byron Knoll's cmix: 1,651,421

100MB bytes in
My bests above show I'm always 1.5MB ahead of Green's; I estimate I should get 20,300,000 bytes out if I change my parameters correctly
Shelwien's Green: 21,819,822
Byron Knoll's cmix: 14,838,332

Attached is current code in case i missed adding it.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 19, 2021, 02:14:42 pm
Oh, and the other thing I've got to fix one day - it would be great if someone knew how - is those if x else x if else x... those colorful long lines are supposed to be an exponential threshold... and it seems it's better as a curve with adjustable bumps, but I don't know how to make that in 1 or 2 lines of code. By exp curve I mean ex. 9.5, 9.4, 9, 6, 3, 2, 1.5, 1.2, 1... but adjustable, so I can move around multiple bumps and curves to give inputs a more precise, dedicated weighting.

Just got 2,218,935 for 10MB in 3.5 hours on my 6-year-old computer using Cython; I'll have to see if pypy is faster on the long runs.
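Re the adjustable-bump curve above, one candidate I might try is plain interpolation over control points (a sketch; it assumes numpy, and the points are made up - drag the ys around to move the bumps):

import numpy as np

xs = [0, 1, 2, 4, 8, 15]                 # where along the layers the bumps sit
ys = [9.5, 9.4, 9.0, 3.0, 1.5, 1.0]      # the curve height at each control point
curve = lambda q: np.interp(q, xs, ys)   # one line gives the whole curve between them

print(curve(3), curve(6))   # -> 6.0 2.25, smooth values between the control points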
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 20, 2021, 08:56:32 am
It's really impressive compression... I can't actually understand much of it at all, but I know the function of individual lines...

Not sure if Python can work with the Stack/fixed arrays; even C# (a java hybrid) phased it out because it's risky, can lead to accessing memory outside the program, and has security implications. So all fixed arrays in C# must have the unsafe{ } code tags and be fully certified to be published. It's the fastest way to do data processing, so they really should recognise it more.

There may be some Sine equation you can do for the ascending/descending graph with bumps. I'm not good with maths. There is a sine/cosine instruction for CPUs but I think it's expensive in cycles, so it may or may not be faster than multiple if-then-else statements...

I'm coming up with 344ms for 1,000,000 bytes, and 120mb data usage now. But the data output is questionable, so I'll try to convert the full thing to C++ (over this/next week).
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 20, 2021, 01:25:19 pm
2,218,110 now, code is below
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 20, 2021, 06:22:47 pm
BTW, above I excluded 3 lines which stop RAM from getting too large at the cost of some compression. Below is the tree with adjustable stopping for branch lengths, i.e. if the counts along a branch are 5, 4, 3, 3, 2 then it saves 1 more node, instead of adding a whole branch like 3, 1, 1, 1 - the idea being that having seen a tail only once, it should not save all of it yet; it takes time to earn storing the full string.

def insert_window(tree, window):
  # tree is one flat list, 3 slots per node: children chars, counts, child indices
  node = 0
  stop = 0
  for i in window:
    char_index = tree[node].find(i) + 1
    if char_index == 0:
      # unseen child: append the letter, a count of 1, and a pointer to a new node
      tree[node] = tree[node] + i
      tree[node + 1].append(1)
      tree[node + 2].append(len(tree))
      node = len(tree)
      tree.extend(('', [], []))
      if stop == 4: break  # the adjustable stop: don't fully store a once-seen tail
      stop += 1
    else:
      # seen before: strengthen the connection (count) and descend to the child
      tree[node + 1][char_index - 1] += 1
      node = tree[node + 2][char_index - 1]
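Minimal usage, in case anyone wants to run it (the setup here is my assumption, it wasn't in the post): start from one empty root triplet and push every 16-letter window of the file through it.

tree = ['', [], []]                  # empty root node: chars, counts, child indices
data = open('input.txt').read()
for pos in range(len(data) - 15):
    insert_window(tree, data[pos:pos + 16])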
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on April 21, 2021, 06:50:05 am
Do you use the Stop lines in processing 10,000,000?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 21, 2021, 03:29:48 pm
No, I didn't, and it uses something like 16GB or 24GB of RAM. On 100MB you'll need them; setting it to 2 or 1 is probably best - by 100MB it loses little compression and RAM will only go up a little. I'm trying 100MB for the first time, and so far it has used 14.25GB of RAM after 13 hours, set at =2. It might pass through.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 21, 2021, 11:42:55 pm
Oh, one more thing: the better way to do the "stop" method for stopping the storage explosion problem, so that there is no 'less compression', is this - instead of storing only 1 or a few next letters until you see more occurrences, simply point to the full line in RAM. Ex. the way I update my tree branch now is the first pair below, and the better way is the second:

h 4537, e 37, l 23, l 20
h 4538, e 38, l 24, l 21, o 1

h 4537, e 37, l 23, l 20
h 4537, e 37, l 23, l 20, goto letter 6345632 in enwik8 and use those 16 letters

Because storing every 1-letter offset of the file with a window of 16 letters makes the file at least 16 times larger, pointing to each 16 letters avoids that.

I'm still unsure how to do this exactly, because enwik8 is 100,000,000 bytes long, but maybe this is done for smaller files, and on big data it eventually erases most of these 'goto's.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 22, 2021, 01:14:43 pm
For Marcus Hutter's dream, 90% compression is not what you need.
For AGI, you require 99.999999999999999999999999999999999999999% - so close to 100% it's not funny.

Getting the first 9 is not it at all; it's how long the line of 9's is that you need.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 22, 2021, 04:16:03 pm
Put some thought into this, bro. The best AIs get the enwik8 dataset down to about 15MB from 100MB, and look at how awesome they are. The Transformer architecture is what makes up DALL-E on openAI.com; this is so close to AGI now that everyone talks like it's nearly AGI. It is not only 5% of the way to AGI, nor 20% - we've got 70% of how the human brain works in "working form"!! It's alien technology! You can't even tell anymore whether it's an AI you are talking to, or who created the video you're watching.

That's why we use 2 evaluations: lossless compression, and checking how wicked the text/image/etc completions/inventions it makes up are, artistically/intelligently. LC gives you a sanity check that it really is AI, and the subjective evaluation tells you what compression IS AGI: if you get 17.5MB and it generates poorly like LSTMs do, while Transformers get ex. 15.3MB and generate human-like material, that tells you which point is which, and LC tells you whether you reached a better point RELATIVELY. Ruler + where you are. Hotdog + bun. Metric + subjective. True + thinksTrue.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 24, 2021, 03:37:35 am
I doubt I'd be very impressed with it. But if you're happy, I won't get in the way! =)
It'll definitely work to a degree, but spotting the cracks is really easy to do if you stop being so positively biased about your results.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 24, 2021, 03:28:13 pm
An evaluation that tests AGI on just one single narrow task - like who can build the highest tower, run the farthest, or make the most balloons - does not test for AGI, because AGI is general purpose and needs to solve many problems. Using a diverse dataset like enwik8 and seeing which algorithm compresses it better checks, across a diverse set of real-world problems, whether you can predict the correct solution accurately. So instead of actually building the highest tower, you predict the solution in text or image, and you get better compression. You take the predicted solution directly, so if you find patterns well in data then you predict better and hence compress better. Patterns in the universe are what allow us to do or control/take advantage of anything. Hammers are re-used and sold, homes too; these are patterns/tools that are really good.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on April 24, 2021, 05:33:08 pm
Honestly, this is like watching a child playing with blocks. :2funny:
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 24, 2021, 10:51:09 pm
You continue to say that, yet have nothing in return to explain or show. I explain, and I show a simple but deep algorithm, and am coding it. While Transformers are incredible, the fact that their programmers cannot explain the things I explain, like how the future AI will learn new desired goals (AGI, not just food), or how current Transformers learn translation and recency priming so easily, shows it is very cloudy. You can go and read all you want on it and get nowhere; it's literally just a big backpropagation algorithm.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 25, 2021, 08:51:01 am
An evaluation for AGI that tests AGI on just one single narrow task like who can build the highest tower, or run the farthest, or make the most balloons, does not test for AGI because AGI is general purpose and needs to solve many problems.

Even if you are just asking your robot to move from one xyz position to another, it depends on how many obstacles are in the way, or whether it even has to communicate with someone to make it happen.

Even simple motivations can involve a complex hypothesis, if there's enough "environment" stopping it from getting even something simple done.

"you must brush your teeth"

What if it has to go all the way to the shop to get a toothbrush, then there is no toothpaste left in stock, and it has to look in a street directory for another supermarket... etc., etc. It gets quite tediously complicated at times, even for simple (and easily sensor-detectable!! :)) tasks.


If you let the computer decide what it wants to do itself, it's more sentient, but it's also a lot more dangerous; if you decide its motivation, it's safer because it's in less control of itself, and that's what you want, IMO.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on April 25, 2021, 03:34:21 pm
If you let the computer decide what it wants to do itself, it's more sentient, but it's also a lot more dangerous; if you decide its motivation, it's safer because it's in less control of itself, and that's what you want, IMO.

There was a computer game back then (from the beginnings of AI arcade gameplay a few years ago) where an AI learnt to play a specific game with a highest-level-wins strategy. The AI finally found a bug in the game and exploited it to cheat and survive longer.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 26, 2021, 08:19:02 am
That's not actually developing motivation. The computer's motivation is fixed: the score.
Title: Re: Releasing full AGI/evolution research
Post by: ivan.moony on April 26, 2021, 08:26:17 am
Yes, but look what evolution did with around 4 billion years of only a survive-and-reproduce strategy. More complex behaviors seem to come along given enough cycles spent on genetic mixing of the strongest ones.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 26, 2021, 02:53:45 pm
Yes, but look what evolution did with around 4 billion years of only a survive-and-reproduce strategy. More complex behaviors seem to come along given enough cycles spent on genetic mixing of the strongest ones.

If you want to develop computer logic (like an ASIC) out of random trialing, 20 bits of representation is about a million different things to test before you get the best one. But 20 bits isn't enough for anything; you require something more like 1000 bits, and there haven't been that many people born since the birth of the Earth. That's why I'm a little superstitious about evolution.

But I guess there could be some kind of optimization happening in nature, of course. It all depends on whether some kind of structure develops that can test things linearly instead of exponentially; even though that doesn't make sense in the form of one trial per life, it would be some other thing happening.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on April 26, 2021, 05:42:27 pm
Evolution used billions of years and massively parallel computation. It's the most boring, gruesome intelligence you can use to get a solution. The brain merges parent champion DNAs too, to try e.g. man+wing traits, except it's a created idea in the brain instead of an actual tool. You need to get this correct and understand it.

Anyway, saying a fixed goal is powerful: yes, but it's the slowest way to get a solution. AGI isn't explained like this; it does update its reward goals, but it isn't the moving and changing that makes it faster or non-fixed... it's that the pattern-finding and pattern-creation ways are better...
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on April 27, 2021, 10:43:56 am
We're all brought into life in an easy-feeling kind of way, not much warning to heed, feeling safe and secure, but we are all going to our deaths.
That's how I feel about it anyway... if you feel like you were warned enough, I don't think that...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 02, 2021, 04:41:13 pm
Somehow I've landed on the idea that it will be incredible to now use a HASH TABLE in my AI for memory storage. I'm sharing the idea below and asking for help, because maybe it won't work.

So let's say I manage to get a hash table working, and it uses very little memory and time. And let's say I manage to not need to store contexts (keys!), only predictions, assuming I only rarely store collision duplicates in the same slot (if I use the Cuckoo hashing algorithm), and those do keep their contexts to tell them apart. Such a design would look as follows:

input prompt:
Walking down a stree>?

Hash Table (brain memory):
[[t=3, b=7], [], [], [y=45], []]

"stree" turns into a code that tells the list to (instantly! :) ) access the first item "[t=3, b=7]" which are the predictions for this context "stree".

I'm wondering though, how would I do delayed recognition with this datastructure? E.g. "thanksZZZgiving" should activate the context (and therefore its predictions) of "thanksgiving". But in this datastructure I don't have any contexts. I could remove the ZZZ, hash the result, and search for it, but doing that for all possible delays seems slower than partially activating the context and then waiting for more activation at that node.

So how do you do hash table + delayed matching if you have no contexts?

Another question: can I use a hash table for 0-16 letter length matches (assuming I don't store context keys), or will that not work out?
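A minimal sketch of the layout described above (my own illustration; the table size, the use of Python's built-in hash(), and the names observe/predict are all assumptions, and collision handling is omitted): each bucket holds only prediction counts, and no context keys are stored.

# Sketch of a keyless prediction hash table, per the post above.
TABLE_SIZE = 2 ** 20
table = [None] * TABLE_SIZE            # each slot: {next_letter: count}

def slot(context):
    return hash(context) % TABLE_SIZE  # stand-in for a real hash function

def observe(context, next_letter):
    """Record that next_letter followed context."""
    s = slot(context)
    if table[s] is None:
        table[s] = {}
    table[s][next_letter] = table[s].get(next_letter, 0) + 1

def predict(context):
    """Return the prediction counts stored in context's bucket."""
    return table[slot(context)] or {}

text = "walking down a street"
for i in range(len(text) - 5):
    observe(text[i:i + 5], text[i + 5])
print(predict("stree"))                # {'t': 1}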
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 02, 2021, 08:41:27 pm
So isn't a hash table that stores only predictions, and only 1s and 0s (bit-wise prediction) counts in every cell, the best solution over neural networks etc.? Any big disadvantages? It seems to me all you really do is this, correct me if I'm wrong:

input = thanksgivin
generate what to search for = index 45
the hash table = [[1:562, 0:35], [1:77, 0:980], [1:4, 0:9] ...]
prediction counters found = [1:4637, 0:362]

I.e. it seems fast to know where to look in the list, and all you store is numbers, only up to 2 or 4 numbers per cell (plus any collisions along with their context key; only collisions need a context key). What could go wrong here? Is this great for holed matches, delayed matches, translation, etc.? Or are there future troubles here?

If so, I'm a little lost on how to compute the key hash; does anyone have a good small generator with very few collisions? The rest seems easy once I get this out of the way.
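One well-known small generator with a good collision profile is FNV-1a (a suggestion on my part, not something used in the thread); it is only a few lines:

# FNV-1a: a tiny, well-distributed, non-cryptographic string hash.
def fnv1a_32(key):
    h = 2166136261                      # 32-bit FNV offset basis
    for byte in key.encode("utf-8"):
        h ^= byte                       # fold the byte in...
        h = (h * 16777619) % 2 ** 32    # ...then multiply by the FNV prime
    return h

print(fnv1a_32("stree") % 50)           # bucket index for a 50-slot table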
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on May 03, 2021, 07:07:40 am
I'm not sure what you're meaning.

A prediction has to have a context or you can't predict anything.

Indeed you can do it bitwise if you want, but it's best to work at the bus size of your computer (e.g. 32 bits) so you can match all the bits at once in the ALU; it goes 32 times faster.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 03, 2021, 05:00:50 pm
32 bits? 64 is better!

Yes: you are handed 'hell', you see it in your memory with an 'o' at the end, so you predict hell>o. With a hash table, 'hell' is converted into an index by a modulus, which then accesses where its predictions are stored. 'hell' accesses the same index every time, so it will both drop its predictions there and see what can come next.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 03, 2021, 08:46:51 pm
Got a hash table working! I coded it myself, using someone's simple hash generator:

text = 'walking down the street and saw some birds flying north'
HT = [None] * 50                        # 50 buckets
for n in range(len(text) - 2):
    key = text[n:n + 2]                 # 2-letter context
    hashsum = 0
    for idx, c in enumerate(key):
        # fold each character into the hash, kept within table range
        hashsum = (hashsum + (idx + len(key)) ** ord(c)) % 50
    if HT[hashsum] is None:
        HT[hashsum] = [text[n:n + 3]]   # context plus its next letter
    else:
        HT[hashsum].append(text[n:n + 3])
print(HT)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 04, 2021, 06:24:57 pm
I got holed matches working some days ago and might, along with time-delayed matches, get my compression score down from 20.3MB to somewhere around 19.2 to 17MB. It isn't that hard for me to implement, but first I have to make my datastructure much faster and more memory-efficient to handle the extra searches. I'm thinking I might use the hash table only for context lengths 1-3, as the tree is pretty fast for longer branches: there are no branch splits, and it stores a context as one run in a branch, unlike a hash table. But I'm talking to someone on this matter.
Title: Re: Releasing full AGI/evolution research
Post by: MikeB on May 05, 2021, 04:47:00 pm
Hi Locksuit, I'm still working on a C++ conversion. It's 99% done. I also have a list of notes for your python version for possible speed improvements (centered around string checking removal). Do you have an email..?

The best overall text compressors seem to be in the range... (WinRAR 3.60b3)
30-60 seconds for 100MB (22MB. <256MB memory)
5-10 minutes for 1GB (220MB. <512MB memory)

If it could reach that, you could claim best overall imo... all the other compressors take too long, use too many resources, or just straight out use a GPU to process it.

I think the speed will be within that for the C++ version.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on May 05, 2021, 05:27:50 pm
That is amazing, wow, thank you for doing that. I will private message you my email.
Title: Re: Releasing full AGI/evolution research
Post by: infurl on May 05, 2021, 10:28:59 pm
It would be safer to use private messages to exchange email addresses. Posting email addresses publicly is ill-advised. MikeB please confirm when you have read this and I'll obfuscate the email address.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 10, 2021, 07:49:50 pm
big code update > https://encode.su/threads/3595-Star-Engine-AI-data-compressor?p=69902#post69902
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on June 19, 2021, 06:03:42 am
I tried cmix's pre-processor with my program - like how other top programs use pre-processors. Decompression matched. I got 19,477,251 bytes for the enwik8 file.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 01, 2021, 08:22:37 pm
How to think and learn; a good read if you have time:
https://louis030195.medium.com/what-you-do-not-learn-at-school-how-to-learn-d6809922cac
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 03, 2021, 01:11:57 am
These dogs are so intelligent, you can just see it in the videos below; they trained them:

https://www.youtube.com/watch?v=RJ9g4kmaXxg

https://www.youtube.com/watch?v=sq8yBfl5Fek
Title: Re: Releasing full AGI/evolution research
Post by: infurl on August 03, 2021, 01:28:14 am
Dogs are smart, no question. I remember once telling a colleague how my dog would pick up an old shoe and use it to knock on the door of the house when he wanted someone to come out and play with him. My colleague said that didn't prove the dog was smart because he probably saw someone knocking on the door. He shut up when I pointed out that being able to learn such a skill by watching someone else do it was pretty smart too.  :2funny:
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 04, 2021, 07:39:13 pm
My lab, almost every time he finished using the outside bathroom pen, would go up the steps and scratch the door, then again in ~20 seconds if no answer. If that failed ~2-3 times, he'd bark every maybe 5 or 10 seconds. No one taught him this or ever gave it any thought either. On entering he'd clean his feet by licking the top of each paw, lol; mom would clean his paws, so maybe it came from that. In the winter he'd kick back his back legs in the snow before going up the steps, lol! He'd chase a laser beam around the kitchen, and learnt after a dozen times that saying "laser beam" meant a red dot would soon appear, so he'd suddenly look for it on his legs as soon as I said it. And "who wants to go for a walk?" would make him very excited every time, hours of panting and unsettled waiting until he got said walk; he only ignored it if the walk was finished and the neuron had already fired, so to speak. The only things he wouldn't eat were olives and celery, yet he'd eat Kleenex. We caught him a few times sleeping on his back.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on August 05, 2021, 07:08:16 am
That's great. Your dog is lucky he had you guys there to appreciate him; not all dogs are that lucky...
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on August 06, 2021, 07:04:43 pm
Doggos are just cool...  :)
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 21, 2021, 09:55:30 am
So my adventure on me versus openAI.com continues:

So far the code I made from scratch (except the little pre-processor, though I mostly know how it works) got a score of 19,033,243 bytes losslessly compressed for the 100,000,000 bytes fed in (enwik8.txt). So I'm 4MB away now from where I "should" be. Still more to come. This is only the beginning of everything.

The text completion results you've seen I have since improved, as you can see in post #221 at the link below, but those came from my 20MB score, because the older code wasn't running the pre-processor during completion of text, and that makes a big difference.

https://encode.su/threads/3594-CM-design-discussion/page6

(project page)
https://encode.su/threads/3595-Star-Engine-AI-data-compressor

Not listed in the how-it-works at that link is that I am halfway through setting up hole matching and delay matching of contexts, plus delayed prediction (predicting a word ahead of time). And it works on the word level thanks to the preprocessor, so it can recognize "walked very fast to the" as "walked fast in the >>> new store they made" and predict ahead of time "new [store]", so we get "walked very fast to the store". Many matches pool their predictions to get a new set of predictions in the form of probabilities, which is then sent to an evaluation function (a sketch of the hole-matching idea is at the end of this post).

What's left to do after this: translation for recency boosting and matching; mirror ghosting (seeing a partial match where the 2 unmatched items are similar, so mine should be too, even though they are different topic words); patterns of delay and hole errors; and weighting tricks to tie it all together for clearer predictions, plus a slew of other little thingies.

Next time someone says they don't know what GPT is learning, or how the code works, tell them to look harder at the code and at the dataset: there are just several common patterns in it that you can find and that allow prediction, and it all starts with exact matches.
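A minimal sketch of the hole-matching idea mentioned above (my own illustration with hypothetical names, not the engine's actual code): score a stored context against the current one, tolerating a bounded number of mismatched "hole" positions.

def holed_match(ctx_a, ctx_b, max_holes=1):
    """Similarity in [0, 1]; 0.0 once the mismatch budget is exceeded."""
    n = min(len(ctx_a), len(ctx_b))
    holes = sum(1 for i in range(n) if ctx_a[i] != ctx_b[i])
    holes += abs(len(ctx_a) - len(ctx_b))   # length mismatch counts too
    if holes > max_holes:
        return 0.0
    return 1.0 - holes / max(len(ctx_a), len(ctx_b))

a = "walked very fast to the".split()
b = "walked very fast in the".split()
print(holed_match(a, b))   # 0.8: one hole ("to" vs "in"), still a match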
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on August 21, 2021, 12:04:56 pm
It's good you sound really dedicated to it; still exciting for ya?

I've got an optimization for SIMILARITY matching, but I haven't put it together yet to prove it.
I can use it to optimize this whole-screen multi-tracker; I could do lots of funny things with it I bet, but all I use it for is converting 2D to 3D.
But the similarity matching is also really good for robust classification.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 22, 2021, 12:46:49 am
Yes it's exciting.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on August 26, 2021, 03:15:36 pm
@magnus >> https://www.discogs.com/Gaetano-Parisio-Advanced-Techno-Research-610/release/31408

https://www.youtube.com/watch?v=LQZG94a-tUo
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on September 09, 2021, 08:53:49 am
I recently implemented, successfully, another of my well-thought-out mechanisms from about 1.5 years ago. I can still do a LOT to improve it. It gets close to results like those of word2vec. The code is very simple, with no backprop and no dimensions; it is 'like' a Markov chain. It seems it would be on par in speed, maybe faster. (A sketch of the general family of technique is just below; results follow after.)
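A minimal sketch of this family of technique (my reading of the description above, not the actual code): compare words by the cosine similarity of their raw co-occurrence count vectors, with no backprop or learned dimensions.

from collections import Counter

def neighbor_counts(words, window=2):
    """Map each word to a Counter of words seen within `window` of it."""
    ctx = {}
    for i, w in enumerate(words):
        c = ctx.setdefault(w, Counter())
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                c[words[j]] += 1
    return ctx

def similarity(ctx, a, b):
    """Cosine similarity of two words' co-occurrence count vectors."""
    ca, cb = ctx.get(a, Counter()), ctx.get(b, Counter())
    dot = sum(ca[w] * cb[w] for w in ca if w in cb)
    na = sum(v * v for v in ca.values()) ** 0.5
    nb = sum(v * v for v in cb.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

words = "the cat sat on the mat the dog sat on the rug".split()
ctx = neighbor_counts(words)
print(similarity(ctx, "cat", "dog"))   # > 0: both appear before "sat on"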


Words I decided to compare to computer (I tried some for both ex. ice=machine?):

COMPUTER?
software=0.2459529607379434
data=0.10863912932059587
food=0.09314391618522774
system=0.09214996731090673
workstation=0.08996734010779821
machine=0.08289191038358397
processor=0.07328534254644654
book=0.06885126154442518
vehicle=0.06391849258038963
equipment=0.06092936199605734
her=0.05948213538545814
could=0.055938417882751564
country=0.055922503225975244
house=0.05457888129428688
after=0.05030710594051818
device=0.05011226027097931
brain=0.04831150343586642
truck=0.047897439575114845
table=0.045383152221026325
bathroom=0.04361508308949373
website=0.038618616832237634
dog=0.03843687958021132
ice=0.03783619075122418
bird=0.036951109262121105
wind=0.035948801255487065
love=0.03561035863609414
flying=0.03356048147334977
cold=0.033147939257507
planet=0.03168334537528895
virus=0.030077051409700663
moon=0.029999859849798707
damage=0.029257813653811325
solid=0.028415578442110387
woman=0.027129976538562355
junk=0.02564883443546052
river=0.02487387235335279
liquid=0.024723840606259907
cook=0.02413970976254965
bed=0.024005750344440892
snow=0.022538512816464865
walking=0.02050292203122577
rocks=0.01984966418696772
cute=0.01930955491933795
insect=0.01877782252451295
rain=0.01835271820295515
seller=0.01615162512918414
wet=0.015500301209636577
shirt=0.015256542128461078
frozen=0.013651280569497054
aunt=0.01187750352063693


The ones at the bottom seem similar but not exactly the same, though they're probably also not too common. The ones at the top that tricked it a bit seem like "doing" words, so maybe that's why, or they're the opposite of walking.

WALKING?
driving = 0.21044045186876562                                                           
running = 0.1976446291245234                                                           
walk = 0.1453542583229366                                                           
writing = 0.14476934099845967
setting = 0.14044005111704908
run = 0.11585350318471342                                                           
showing = 0.10670925286053715
moving = 0.10276187471894652                                                           
ran = 0.09575972813795947                                                           
jogging = 0.09490445859872612                                                           
sat = 0.08808788863297107
listing = 0.07955469261793696
speeding = 0.07669261618306206                                                           
copying = 0.0724111798776637
storage = 0.06861070682726737
sinking = 0.06748433963257355
giving = 0.06544418722903171
travelling = 0.062310766954145634                                                           
keep = 0.06202661066360859
dragging = 0.06119199272065514                                                           
race = 0.060641316372697825                                                           
read = 0.05877505953312838
outlook = 0.05848871874415587
eating = 0.05813730992477973
yelling = 0.05796178343949045
sprint = 0.05777954205084714                                                           
dirt = 0.05691585652028159
gliding = 0.056538826399241085                                                           
dying = 0.05235399184443769
evil = 0.05021699487375353
red = 0.050065237331870294
destroy = 0.049109188730858906
ground = 0.04905793637463773
sink = 0.04886720954607757
wall = 0.04720451242701542
carrying = 0.047070126078466834
baby = 0.046262174228834683
storing = 0.04483907706584552
rocks = 0.04398416423223664
saying = 0.04311515253100906
morning = 0.042998219497421025
clean = 0.04268995374768349
scared = 0.04267515923566879
drinking = 0.042037375156711444
sing = 0.0414678427160083
danger = 0.04120551967552257
link = 0.0411287053856758
snow = 0.04104332262988895
sprinting = 0.04076433121019108                                                           
sleep = 0.04057560745458834
dog = 0.03983152091071468
racing = 0.039802531516239345                                                           
shy = 0.03949044585987261
before = 0.039412720896356446
sand = 0.03882668078121144
after = 0.038733711213697196
copy = 0.03872573979376678
house = 0.03855976793614278
stack = 0.0383572506586857
creating = 0.0380615237529231
math = 0.037792463509322964
book = 0.037773560080942375
ship = 0.037606963169615266
door = 0.03739384288747345
jog = 0.03694267515923567                                                           
moon = 0.03638788696067713
signal = 0.03596871494141447
buy = 0.03586964528628044
remove = 0.03567040232201887
could = 0.03554902127544266
woman = 0.03526271604037011
cat = 0.03511164150437104
study = 0.034045272575551115
flower = 0.03402524667088906
strolling = 0.03328025477707006                                                           
wood = 0.03323394246155422
sleeping = 0.032929807108435294
teeth = 0.03282308379349236
meat = 0.03265645035841163
plan = 0.03257815193193059
frozen = 0.03250685343810756
art = 0.03218456612958818
king = 0.0320070512454969
zone = 0.030628874400517172
sunk = 0.03050686728517014
ask = 0.029711296085633956
cook = 0.02971038784999052
screen = 0.029708776428939765
heat = 0.029667650057569078
death = 0.029653326682526283
cold = 0.029604458029519085
mail = 0.02953849749852788
animal = 0.028961362561792745
mother = 0.028886391588113024
floor = 0.028630057315460162
brain = 0.028626194487333244
bone = 0.02856364487976473
task = 0.02845147152190387
bottle = 0.027799094467040136
clouds = 0.02769873438663247
listed = 0.027527068602369684
technology = 0.026906107790891684
grip = 0.02678294076575614
teach = 0.02640449041051344
dinner = 0.026190940865461386
cash = 0.025976223652556772
data = 0.025922447630086946
medicine = 0.025235143711617922
wine = 0.024743449826675718
processor = 0.02457456437730253
trash = 0.024441486833349176
drying = 0.024203821656050957
meant = 0.023988991859611
software = 0.02317317045242939
equipment = 0.023087974713854702
mouth = 0.021855876108405146
interest = 0.02184529906257178
toy = 0.02172652686727923
gas = 0.021665965704562865
pacing = 0.021462470338453853                                                           
tramping = 0.021019108280254776                                                           
computer = 0.020502922031225772
clothing = 0.02033553858605764
theory = 0.02031352878816722
discover = 0.019818499520883825
trotting = 0.019643891140706426                                                           
molecules = 0.019547623552164134
junk = 0.019408422306740596
hurrying = 0.017197452229299363                                                           
striding = 0.016560509554140127                                                           
toad = 0.016498251999425315
cube = 0.015664221021795804
soak = 0.011464968152866241
windy = 0.01019108280254777
soothing = 0.009554140127388535
investing = 0.0070063694267515925
fridge = 0.0025477707006369425
limping = 0.0012738853503184713                                                           
workstation = 0.0012738853503184713


ICE?
snow=0.11322215741147319
cold=0.09415304391949597
her=0.07750462707068446
frozen=0.07259386871900203
wind=0.06986131470667552
country=0.06866124047133551
data=0.0645871971970835
solid=0.06261448184752398
rocks=0.061390092996170036
house=0.05842006036774735
ran=0.05690442407640973
cars=0.05625748000083986
planet=0.05620970244457469
runs=0.056124227880248775
girl=0.051638288508951295
drive=0.05115981747258963
glacier=0.05055836507585705
look=0.047316799815272176
woman=0.04572853445553063
crust=0.04283590121218987
ghost=0.0410167500080067
machine=0.040883908295542525
mother=0.04049203529605726
wet=0.04027958222957126
melted=0.03872938548932658
motor=0.03864754595853467
software=0.0380583676431136
computer=0.037836190751224186
moved=0.03551215128396339
jump=0.03371481037000622
virus=0.03369885239775037
iced=0.033190532013929595
equipment=0.03246905924605326
codes=0.028123422968783784
processor=0.026744615916900343
happy=0.02545016681442651
freeze=0.02211497724137182
crystals=0.018834258524980173
delete=0.01506740681998414
raced=0.01288659793814433
rode=0.011895321173671689
aunt=0.01155900120109418
flakes=0.010309278350515464
cute=0.007335448057097542
snows=0.0055511498810467885
exited=0.0049563838223632035
filesystems=0.0017842981760507533
leaped=0.0013877874702616971


Good again for the one below, fun... maybe it's sentiment-related, lol? word2vec predicts such things too.

SCARY?
horror = 0.07556657829470623                                                           
fun = 0.05543110184943995
easy = 0.05366677931733694
weird = 0.04407693331558632                                                           
odd = 0.04239610000544691                                                           
ghost = 0.03812263685681407                                                           
frightening = 0.038026607538802666                                                           
shocking = 0.03689217758985201                                                           
terrifying = 0.03225108225108225                                                           
simple = 0.030917441469698666
pig = 0.024313391526506283
evil = 0.023651862540751432                                                           
junk = 0.023577501635055592                                                           
shirt = 0.023345101500441306
lovely = 0.02325581395348837
book = 0.023116684215564277
scared = 0.022727272727272728                                                           
woman = 0.022632847095597812
snow = 0.021487603305785124
planet = 0.021460423634336676
difficult = 0.020738636363636365
moving = 0.020562314397930836
invest = 0.01996547756041427
afraid = 0.019905956112852664                                                           
storing = 0.01902468392534618
creepy = 0.01818181818181818                                                           
walking = 0.017921250723798493
mysterious = 0.0166015625                                                           
open = 0.015366033684493055
raced = 0.01526374859708193
morning = 0.015142782289756353
house = 0.015077087690287438
toad = 0.014730006835269992
share = 0.014660276289822385
sand = 0.014598540145985401
hardware = 0.013972828847130522
workstation = 0.013636363636363636
friendly = 0.013550135501355014
wall = 0.013167344984438592
food = 0.013074883423242206
computer = 0.012661172538056544
mother = 0.012565195446133139
teach = 0.01220865704772475
could = 0.012071042312719445
range = 0.011984021304926764
shocked = 0.011363636363636364                                                           
score = 0.010197368421052632
software = 0.009351734156528055
exited = 0.00909090909090909
jogging = 0.00909090909090909
ghostly = 0.008849557522123894                                                           
singing = 0.008845643520566122
cook = 0.008237986270022883
terror = 0.006811989100817439                                                           
brain = 0.006626173384870237
sitting = 0.00641112618724559
cell = 0.004984914075823166
speeding = 0.004545454545454545
system = 0.004424886933137758
gliding = 0.00425531914893617
jog = 0.0
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on September 10, 2021, 12:33:41 am
That would be really awesome teamed up with chatbot code, to get a better near-matching system going, I bet.   O0
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on November 06, 2021, 04:27:21 am
Still going at my architecture, probably, and GPT and AGI. But in the meantime, while this evolution is coming:

You know, I really liked the movie Spider-Man 3 (2007). They say it had too many villains, but in my eyes it was perfect; there was a lot of bad luck going on. He had, if I remember correctly, 4 villains all kind of at once, and that was cool. I liked all 4 too, especially Venom :) Next Rhino, then Sandman and Goblin. I love the darkness in it.

Edit: I am so confused, why is there no Rhino in that movie now... xD I remember a garden nighttime scene with someone in a rhino costume.
Title: Re: Releasing full AGI/evolution research
Post by: MagnusWootton on November 06, 2021, 11:20:56 am
That's cool. It definitely works, but it doesn't count unless it's running for real. You have to do all your own labour; no one's going to do it for you. That's how I treat the situation, anyhow.

OpenAI's new one that actually spits out working code is amazing; I just saw it recently.