Ai Dreams Forum

Member's Experiments & Projects => General Project Discussion => Topic started by: LOCKSUIT on December 25, 2019, 09:53:06 PM

Title: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 25, 2019, 09:53:06 PM
This is most of my notes, images, algorithms etc., summarized/unified in their most recent forms, here. I am 24 and started all this work at 18. I make discoveries in my brain using mostly vision (visual language; shapes, cats, etc., which have context and explain each other like entries in a dictionary, a small-world network of friendly connections).

https://www.youtube.com/watch?v=Us6gqYOMHuU

I have 2 more videos to share but they are not yet uploaded, and 2 more notes. The last note will have some more very recent good data, but the ones not yet given are of less immediate importance. The long file does still have a lot of recent knowledge in it. It's better you know all of it, at least the movie.

notes
https://paste.ee/p/mcnEk
https://paste.ee/p/kQLCx
https://paste.ee/p/CvSsB
https://paste.ee/p/lJYMP
https://paste.ee/p/EmCZt

Code-only of my advanced-ngram 'gpt2':
https://paste.ee/p/7DG3M
https://paste.ee/p/XvVp5
result:
The software was made on a
The software was made on a wide variety of devices, and operating apps and applications that users can easily read as an app for android. It is a bit of a difference, but i was able to get it. The developers are not going to make it through a web applications, and devices i have seen in the running for the mobile apps. Applications allows users to access applications development tools, and allow applications of the app store. A multimedia entertainment entertainment device, and allows platforms enabled access to hardware interfaces. Using a bit of html application app developers can enable users to access applications to investors, and provide a more thorough and use of development. The other a little entertainment media, and user development systems integration technology. Applications allows users to automatically provide access to modify, optimize capability allows users to easily enable. Both users and software systems, solutions allowing owners software solutions solutions to integrate widgets customers a day. And if you are accessing services product, and mobile applications remotely access to the software companies can easily automate application access to hardware devices hardware systems creators and technologies. Builders and developers are able to access the desktop applications, allowing users access allows users to
((I checked against the 400MB; there are no long copy-pastes, only 3-5 words at most))
https://www.youtube.com/watch?v=Mah0Bxyu-UI&t=2s

I almost have GPT-2 understood, as shown at the end of the movie, but need help. Anyone understand its inner workings? Looking to collaborate.

I recommend you do this as well and mentor each other.

more data by my top-of-mind pick:
AGI is an intelligent Turing Tape. It has an internal memory tape and an external memory tape - the notepad, the desktop, the internet. Like a Turing Tape it decides where to look/pay attention to, what state to be in now, and what to write, based on what it reads and what state it is in. The what/where brain paths. It will internally translate and change state by staying still or moving forwards/backwards in spacetime. It'll decide whether to look at the external desktop, and where to look - notepad? where on notepad? internet? where on internet?
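A rough sketch of that read/decide/write/move loop in Python (all names are made up; the decide() policy is just a placeholder for what would really be learned):

def decide(state, symbol):
    # Placeholder policy: the real system learns what state to enter,
    # what to write, and where to look next (the what/where paths).
    return state, symbol, +1      # new state, what to write, move

def run(tape, state=0, pos=0, steps=10):
    # One internal tape here; external tapes (notepad, desktop, internet)
    # would just be more tapes the policy can choose to attend to.
    for _ in range(steps):
        symbol = tape[pos] if 0 <= pos < len(tape) else None   # read
        state, to_write, move = decide(state, symbol)          # change state
        if 0 <= pos < len(tape):
            tape[pos] = to_write                               # write
        pos += move                                            # stay / forward / backward
    return tape

print(run(list("hello")))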

It's given Big Diverse Data and is trying to remove Big Diverse Data (Dropout/Death) so it can compress the network to lower Cost/Error and hence learn the general facets/patterns of the universe exponentially better, while still being able to re-generate missing data despite having a small-world network (all quantized dictionary words explain each other). It uses Backpropagation to adjust the weights so the input will activate the correct node at the end. That node can be activated by a few different sorts of images - side view of cat, front view of paw, cat ear, mountain; it's a multi-dimensional representation space. Since the network learns patterns (look up word2vec/GloVe; it's the same idea as seq2seq) by lowering error cost through Self-Attention evolution/self-recursion of data augmentation (self-imitation in your brain using quantized visual features/nodes), it therefore doesn't modify its structure by removing nodes; rather it adjusts the existing connection weights/strengths to remove error and ignores node-count Cost.
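As a toy illustration of "adjust the weights so the input activates the correct node" (just standard gradient descent on one weight layer with made-up numbers, not my actual code):

import numpy as np

# Toy version of "adjust connection weights to lower error Cost":
# one weight layer, gradient steps so each input activates the right output node.
np.random.seed(0)
X = np.random.rand(8, 4)                        # 8 fake inputs, 4 features each
target = np.eye(4)[[0, 1, 2, 3, 0, 1, 2, 3]]    # which node should light up
W = np.random.randn(4, 4) * 0.1

for step in range(500):
    out = 1 / (1 + np.exp(-X @ W))              # node activations
    error = out - target                        # Cost is mean squared error
    grad = X.T @ (error * out * (1 - out)) / len(X)
    W -= 1.0 * grad                             # backprop-style weight adjustment
print("final cost:", np.mean(error ** 2))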

Intelligence is defined as being flexible/general using little data, but walker bots are only able to solve what is in front of them; we need a thinker like GPT-2, and you can see the mind has evolved to simulate/forecast/predict the future using evolutionary mental RL self-imitation/self-recursion of data. And intelligence is for survival, immortality, trying to find food and breed to sustain life systems; it's just an infection/evolution of matter/energy, a data evolution.
Title: Re: Releasing full AGI/evolution research
Post by: Hopefully Something on December 26, 2019, 06:28:43 AM
Oh my gosh, you're so logical, I did not expect that. Use your power for good.  O0
Title: Re: Releasing full AGI/evolution research
Post by: Hopefully Something on December 26, 2019, 08:11:00 AM
You showed the optical illusions I posted! And talked about redundunduncy :)
Crazy text you is real life me, and real life you is chilled out Richard Feynman... This was too much I need to sleep...
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 02:17:24 PM
I think you can skip the heterarchy maybe.... simply the hierarchy nodes get activated, ex. the node cat, etc., which in parallel leak energy to their nearby context, ex. 'the cat ate' 'the cat ran' 'our cat went', and these handles leak energy to nearby context 'the dog ate' 'the dog ran' 'some dog went', and so on, proving 'some'='our' as well, including: if cat=zebra/horse and dog=zebra/horse then cat=dog! Hence no W2V, just on-the-fly activation by leaking connections. Solves typos, rearranged phrases, unknown words ex. superphobiascience, alternative words, related words, names, references ex. it/he, and blanks. Then for the candidate words, the winner is the one that is most frequent in knowledge (has energy) and in the Working Memory Activation Context (whose energy fades/leaks), most related to the story words (activation leak), and most favorite (has energy). This is how to recognize/understand a window - where you look/how wide - then which candidate Next Word to choose, and then you may also adapt it by translating it too.

It can be run faster by doing it other ways, but it is easier to understand explained this way.
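A tiny toy of the "leak energy through shared context handles" idea, so you can see it works on the fly with no trained W2V (toy corpus, made-up scoring):

# Relatedness by shared contexts instead of word2vec; toy data only,
# a real version would run over a big corpus.
from collections import defaultdict

corpus = "the cat ate . the cat ran . our cat went . the dog ate . the dog ran . some dog went".split()

contexts = defaultdict(set)
for i, w in enumerate(corpus):
    left = corpus[i - 1] if i > 0 else "<s>"
    right = corpus[i + 1] if i + 1 < len(corpus) else "</s>"
    contexts[w].add((left, right))

def related(a, b):
    # energy that leaks from a to b = how many context handles they share
    shared = contexts[a] & contexts[b]
    return len(shared) / max(1, len(contexts[a] | contexts[b]))

print(related("cat", "dog"))   # > 0: they share the 'ate', 'ran', 'went' handles
print(related("cat", "ate"))   # ~0: different role, no shared handles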
Title: Re: Releasing full AGI/evolution research
Post by: Art on December 26, 2019, 02:54:09 PM
Finally we get to hear the real Lock!! Good one! O0
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 03:58:54 PM
The hierarchy can self-organize to lower node count Cost (error) by re-arranging the connections that exist, to learn Byte Pair Encoding (segmentation) on the fly too, not just translation or sequence-building on the fly. You don't have to look at all areas/layers of the hierarchy.
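For anyone who hasn't seen it, Byte Pair Encoding segmentation is just a merge loop like this (generic sketch, not my hierarchy code):

# Minimal Byte Pair Encoding: repeatedly merge the most frequent adjacent pair,
# which is one way segmentation can be learned from raw characters.
from collections import Counter

def bpe(tokens, merges=5):
    tokens = list(tokens)
    for _ in range(merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

print(bpe("the cat ate the cat food"))   # characters merge into frequent chunks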
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 04:56:20 PM
Attention types: deciding which sense to pay attention to, ex. a buzzing noise or pop-up goals; where/how wide to window on that sense; and which Next Word to predict, using more attention.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 26, 2019, 05:38:12 PM
Think if you had a GPT-2 prompt only 1 word long, ex. scratching. It could be seen as scratch + ing, so let's say we had only scratch. The word to predict next is one seen after it before in the data, and the more frequent one is more likely chosen. This word scratch can also match related words - itch, scrape, etc. - and then we see what frequent word comes next after those. So the closest related word and the most frequent next word is chosen.

scratch the x64
scratch it x58
scratch back x38
itch my x76
itch shoulder x50

So my is predicted, because itch=scratch a lot and my is seen after itch x76 times. But we aren't done yet. The candidate words/tokens are now given an extra score, which is their relational score to the story words ('scratch' - no 'itch' exists) in your input prompt. So we end up picking shoulder, say. We may also then translate shoulder to adapt it, into ex. stomach if your story was about a stomach itch.
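Here is the same scoring as a toy, using the made-up counts above (the relatedness numbers are invented too):

# Candidate next words get frequency * how related the matched word is
# to the prompt word; a further bonus for relation to story words would
# then adapt the pick (e.g. shoulder -> stomach) as described.
counts = {
    ("scratch", "the"): 64, ("scratch", "it"): 58, ("scratch", "back"): 38,
    ("itch", "my"): 76, ("itch", "shoulder"): 50,
}
relatedness = {("scratch", "scratch"): 1.0, ("scratch", "itch"): 0.9}

def predict(prompt_word):
    scores = {}
    for (w, nxt), freq in counts.items():
        rel = relatedness.get((prompt_word, w), 0.0)
        scores[nxt] = scores.get(nxt, 0) + freq * rel
    return max(scores, key=scores.get), scores

print(predict("scratch"))   # 'my' wins: 76 * 0.9 = 68.4 beats 'the' at 64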

If we have 2 words in our generated story so far, 'scratch stomach', we can predict the next word using either word, but we want to use both, so we see what nodes/related nodes, including positional rearrangings, exist in the data, ex. scratch stomach, itch bladder, bladder itch, etc., and the more distantly they are positioned, the lesser a match it is, of course.

So say we've got 'first I go into the lake and stay there, second I freeze cold in the winter'. The 'first' votes on 'second' because it can be translated to 1st etc., by frequency and relation, but it is far back and there are other words, so it definitely has less vote, being 1/10th the weight and fading in energy by this time, yet it still has a strong vote too.

Say we find 'stomach itch' but it has a word in the middle or one missing, ex. 'stomach really itch' or just 'stomach'; this also gets less score, but still some.

What if I say 'second, this is that, third, I am this, but I won't say the next because I refuse to repeat myself'. Here I pay attention so as to ignore saying the word 'fourth'.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 27, 2019, 01:43:14 PM
Let me know if you want more, if no one says so then I assume no one is interested.
Title: Re: Releasing full AGI/evolution research
Post by: goaty on December 28, 2019, 10:49:11 AM
If you're sure this is the future of A.I. you should just pursue it further yourself. If you were to release it all early for completely free, what's new about it theoretically that's exclusive to your system?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 01:08:57 PM
It is free. I am sharing my knowledge/current stage. There's a lot of new knowledge there. The AGI I describe talks on its own to itself and asks us/tells us knowledge. It unsupervisedly collects internet data and researches, and learns patterns/building blocks, then it Answers Questions with RL in 4 steps. It transforms the data, old to new, then analyzes it again, old to new; this is bootstrapped Self-DataRecursion, data evolution, this is how evolution happens. And bigger brains are smarter through more context data. The tools/skills are another part of that and can be described by the text/visual language of the universe. I also tell you what will happen in the end, etc.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 04:47:21 PM
Storing, Forgetting, and Recall. Attention does all. Which sense or where to pay attention to. How much attention. To recognize it. To choose the Next Feature by looking at story words and which to ignore (forget). To Adjust the Next Feature. The nodes that store/recall are attentive / pop up as agenda questions.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 05:25:57 PM
Quote
Korrelan, I'm really sorry you won't be able to fully process the green/red colors being color blind but, it doesn't matter much lol.

Thanks for the individual consideration Lock, it's much appreciated, but it's ok because I can read.

You did leave that comment in though, before you posted it on every single forum known to man.  ;D

 :)

EDIT: OH! I see you have removed it from one non-email-based, editable forum... cool.

 :)
Title: Re: Releasing full AGI/evolution research
Post by: goaty on December 28, 2019, 05:54:04 PM
Looks like you've got a good system (I didn't think of it myself, sounds like a good technique), but is it intelligent yet? If not, your work isn't finished, just somewhere along the way.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 06:28:15 PM
I spammed OpenAi with it too hehe, more 2 come

I call this ignition phase, spread it like fire
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 06:30:59 PM
You did remove my name though? Yes?

Quote
I spammed OpenAi with it too hehe, more 2 come

Lock answer me... have you taken my name off this spam posting you're doing?


https://agi.topicbox.com/groups/agi/T7cbcba9a1ae63532/releasing-full-agi-evolution-research

https://groups.google.com/forum/#!topic/artificial-general-intelligence/U8-wPGwON_0

https://www.mail-archive.com/agi@agi.topicbox.com/msg03960.html

So not funny lock...

Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 07:37:55 PM
you're not on the OpenAI ones

and won't be in future posts/redirects

my notes pastees have you as K lol



That's where the doctors hang out, now that you infected everyone onto it.
Title: Re: Releasing full AGI/evolution research
Post by: Korrelan on December 28, 2019, 07:55:41 PM
Remove my name from all your spams.

 :idiot2:
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 28, 2019, 08:00:50 PM
only 2 allow edits but i'll try

All other names have already been made into code names prior.
Title: Re: Releasing full AGI/evolution research
Post by: Hopefully Something on December 30, 2019, 01:11:05 AM
So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word? 
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 30, 2019, 03:12:33 PM
Understanding Compression

To learn how to break the law of physics, we must understand it better.

https://paste.ee/p/kQLCx

"So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word?"
Yes, a given particle of Earth is defined by all of Earth's context (and then it re-checks the all against each again, a self-attentional SelfRecursion of Data-Improvement, like editing a Paper); an exponential explosion of heat is given to the core of Earth and self-extracts free energy from burning fuel. Brains do this, atoms do it, galaxies do it. That's why magnetic domains align and propagate brain waves in a brain, a team of brains, magnets, etc. AGI will be a collaborative project and already is too, we share data. Let's hug each other (real tight hug).

The big bang was unstable and decompressed. Planets re-compress. Atoms do it. Galaxies do it. A brain compresses to learn the facets of the universe by using data compression, so that it can burn fuel and extract free energy/data from old data (just like batteries, gasoline, and our stomachs). Data evolution, data-Self-Recursion.

Lossy and lossless compression both transform data from one form to another. When you compress a file losslessly, it actually is destroyed and gone, because it isn't the same file/data. Compressing/firing employees does this too. Luckily, being lossless, you can re-generate it back at the click of a button (or, if you destroy a drawing on your desk, re-draw it from memory), however it takes time to evolve it back, sometimes a VERY long time. Brute force to find the smallest compression of the Hutter Prize file would take extremely long. Intelligence is all about speed, evolving domains of nodes (cells, neurons, brains, cities) to find which out-pace each other. This aligns the domains of the brain/group to propagate brain waves faster through the cluster and have a bigger electro-magnetic potential. If we use lossy compression, you can actually get the exact file back, but it takes much longer.

A system in space will collect data to grow, then decompress, a self-extracting drive. This decompression is exponentially explosive and results in smaller agents that evolve to compress/extract so they can resist change. Energy (photons) propagates forward but can be pulled in by gravity and will loop around like in a motionless, cold battery. Change = energy release. Unstable. Equilibrium is the opposite. We've seen an algorithm can be run perfectly many times - compress, decompress, compress, repeat. To do this requires a form of equilibrium. Wear and tear affects it though. Yet our sperm/eggs have seen many generations. If the universe contracts back, Earth can emerge again by this self-organizing/attention physics. Different systems and their sizes evolve differently, but it is all based on electromagnetic compression/decompression; Earth, if it became nanobots, would simply grow in size and resist change/death approx. better.

Lossless compression is so fast because it's all contained in such a small place, like a core rod, and is very hot/related; lossy requires ex. the whole Earth, a form of brute force, and exponential hints/data evolve it back faster. Lossless, locally, without brains to discover the data, requires only a little data. The bigger a system is, the bigger the file you can re-create from nothing - a human brain can re-generate back almost anything. Lossless, based on how many particles are in the defined system (uncompressed file size, which needs a computer to store/run it), has a limit on how small it can become, and so does lossy, because Earth is finite in size during a given quantized period, and a file can be re-generated back quite fast if some of it is still around - the lossy file, even if incinerated, can be re-generated back based on how many particles make up Earth. Here we see a file can be compressed deeper the bigger the file is or the bigger the Earth is. With so little of the file left (even just the remaining physics if incinerated), it can come back based on large context, but has a limit/need (size of Earth/file data, time, and compute).

We see that communication/data tech builds on itself exponentially faster; bigger data = better intelligence, and it extracts exponentially more/better data (per a given system size). Earth is growing and heating up by collecting more mass and extracting/utilizing exponentially more energy, like nanobots will when they come. We will harvest Dyson Spheres. Our goal to resist change by finding/eating food and breeding (Darwinian survival) could Paperclip Effect us and explode ourselves! A cycle of compress, decompress.

Our goal is to compress data in our files, brains, teams, but also to expand our colony of data. Why? To resist change, to come to equilibrium (the end of evolution for a given system, exponentially faster). These colony mutants/tribes have longer, stabler lives, being so large and using their size to extract so much. The bigger a system is, the less it changes. Imagine destroying an instantly-repairing nanobot superOrganism? Can't. And the bigger a system is, the more weight/vote/context interaction (heat) is transmitted/infected, not just to extract free knowledge/heat (motion/energy) but also to fix issues/damage. My body/knowledge stays almost the same yet my cells/blood all change their spots for new ones, the air stays the same yet it blows around Earth, the heat in my walls stays the same yet the heat moves around. Earth is a fractal of pipes, veins, roads, and internet connections to propagate energy, ideas, blood, waste, traps, and negative electricity, simply to loop it around and re-use it. Distribution of data allows global, not just local, flow/alignments. It moves around and the system can resist change/repair, or emerge.

Our goal is to resist change by using large context/collaboration, by aligning random domains to get free energy/knowledge. We have to collect/grow big and digest/extract it so we can resist change better. We are doing both compression and decompression of data/energy and possibly are trying to equal them out so we can come to equilibrium jussst right in the middle of the 2 opposites/attractors. The system we become will be exponentially repairing/immune to change - compression and decompression - however we may be growing larger but less dense as it does so, to become approx. more immortal. We will likely need an exhaust/feed though; we will need a fine-tuned food source and radiation exit for our global utopia sphere/galactic disc loop string.

So we should be very interested in compression and decompression, i.e. Biggish Diverse Dropout - which data to destroy and remove/ignore/forget - and Big Diverse Data collection/creation by extracting free data using old data context to vote/weigh in. In the brain, we do compression and can basically still re-generate the ex. Hutter Prize file despite having a small decompression brain. The need to both ignore/attend is the same process in Dropout or in data collecting/harvesting, and the decompression process of choosing which to ignore/attend when extracting/collecting new data from old data is also the same process, and the compress/decompress processes are the same process too - which to remove and which to attend to - however to attend fast we need to remove fast, hence these 2 steps are not really the same process. However, when you do compress data and create a brain/team, it is easy to attend to the remaining keys. During extraction, you use what you Learned (patterns) to decide what to Generate. So they are 2 different processes I guess. Btw, when you build a heterarchy you need the hierarchy first, and may not even need the heterarchy! The connections of context handles are already laid. I was going to say, making relational connections doesn't compress data on its own, yet in effect it does, though.

Some concepts above were compression, decompression, equilibrium (no change/death), and exponentiality. We've seen how we grow mutants that resist change better by using both compression/decompression (destruction of neurons/ideas/employees/lives/Earth, and creation of such) so we can come to equilibrium exponentially faster by large context weight (which exponentially helps compression, and extraction during Generating (ex. GPT-2's 40GB and 1024-token view)). I'm still unsure if we are just growing and exploding. If the universe only expands then we will likely radiate.

Compression looks for patterns and leads to faster domain alignment/propagation and exponentially faster large brain waves/free energy extraction/re-generation from nothing. If we want to compress the Hutter Prize file the most, we will need to stop it from generating multiple choices from a given context (it still uses the context). We could sort all phrases in the file like 'and the' 'but the', 'so I' 'then I', and force it to discover the concept that leads to the re-used code 'the' or 'I'.
Title: Re: Releasing full AGI/evolution research
Post by: Hopefully Something on December 30, 2019, 09:19:07 PM
Resisting change is still change though :P. I'd say the goal is to resist entropy.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on December 30, 2019, 09:30:43 PM
Taking the right path is a lot less change than bumping into the burglar with a shotgun 0O. They simply breed/rejuvenate more than they die. The agent stays most similar to itself when, from standing like a statue, it bends down to grab an apple.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 04, 2020, 04:55:32 AM
I've got 10 YouTube subscribers now lol.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 08, 2020, 01:37:01 AM
I have my man from India working on the compressor algorithm for 25 USD. I am currently learning how they work and will shortly post my formal formula for AGI. In the meantime see my entries here: https://agi.topicbox.com/groups/agi

Layer Norm......see it now is just >
https://knowledge.insead.edu/operations/warning-do-not-just-average-predictions-6641

GANs compress data... they generate realistic data... so does lossless prediction... the data fed to it allows it to work on unseen data... because it's so similar

https://royvanrijn.com/blog/2010/02/compression-by-prediction/
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 09, 2020, 06:58:38 AM
My employee and I got the compression working. It is 5 bits per character; normally each char is 8 bpc. So the 100MB wiki8 would be about 63MB. Good for a start.
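The size estimate is just the bits-per-character ratio:

# 5 bits per character instead of 8
chars = 100_000_000                    # wiki8 is 100,000,000 bytes
print(chars * 5 / 8 / 1e6, "MB")       # ~62.5 MB, i.e. about 63MB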
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on January 18, 2020, 05:35:34 PM
My order-2 compressed the 100MB wiki8 file into exactly 40,572,450 bytes. Took exactly 12 hours lol, in Python. The dictionary (included in the 40MB) was 2,069,481 bytes. The decompressor was 4,910 bytes (also included in the 40MB). Code is attached for the non-believers. It's in Python so you know it was me, cus they are usually in C++ for speed. You can try it on the small input I uploaded. https://paste.ee/p/Cd7Va

The world record is 15MB. 25MB away lol!!!
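Not the attached code, but the general predict-and-count idea can be sketched like this (order-2 context with add-one smoothing; an arithmetic coder would then turn these probabilities into the actual output bits):

import math
from collections import defaultdict

def order2_bits(text):
    # Count what character follows each 2-character context, learning as we go,
    # and sum -log2(p) to estimate how many bits prediction needs per character.
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - 2):i]
        seen = counts[ctx]
        total = sum(seen.values())
        p = (seen[ch] + 1) / (total + 256)   # add-one smoothing over 256 byte values
        bits += -math.log2(p)
        seen[ch] += 1
    return bits

text = "the cat ate the cat food " * 200    # swap in any test file's contents
print(order2_bits(text) / len(text), "bits per character")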
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on February 24, 2020, 04:18:48 AM
It will move incredibly fast. AGIs can think/move a ton faster, replay skills perfectly, and erase bad parts. They can nest deep into thoughts: "the girl who saw the guy who wanted the man who said to him to go was here". They recall perfectly, have more memory, longer attention, don't sleep/eat/poop/nag, etc. AIs live longer than humans, can clone/download skills, etc. Many sensors/motors, many types of them, 3D vision using MRI and sims, wireless communication of visual thoughts, full cooperation, fully timed shared updates, can store facts instantly and fast when they read them, can see/feel nanobots to control them - we can't - and a lot lot more I won't list here. Advanced nanobots will eat Earth in a day. It's really cheap to gather microscopic data and make small replicators to scale up your computer fabrication and data intake and manipulation accuracy. The more data/processors/arms/eyes they get, and the better ones they get, the more such will they get!

Inventing 1 AGI and cloning it on a mass fabrication scale is all we need. The most powerful thing will not be inventing 1 AGI per se, it will be cloning workers on cheap replicating computer hardware, data, arms and eyes. I.e. scaling AGI and inventing AGI is all we need.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 22, 2020, 01:46:48 PM
Lol

https://rule-reasoning.apps.allenai.org/?p=The%20squirrel%20is%20young.%20%0AThe%20tiger%20is%20rough.%20%0AThe%20tiger%20eats%20the%20bear.%20%0AIf%20something%20eats%20the%20bear%20then%20it%20is%20red.%20%0AIf%20something%20is%20red%20and%20rough%20then%20the%20squirrel%20likes%20the%20tiger.&q=The%20squirrel%20likes%20the%20tiger.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 12:57:39 PM
Huge breakthroughs I made.

See the link below if you're new to neural realistic_future generators for text....aka AGI attention:
https://aidreams.co.uk/forum/index.php?topic=14561.75

Distributed networks are the most powerful Systems - brain and city government for decision forming. They are the most robust. And larger Systems are the most powerful Systems - big brains (big diverse data) and big teams. They are the most robust. Both allow you to go deep, fast, building a large concept/prediction based on many parts. With these de-centralized networks, you have duplicate data so that no human node or brain memory node has to be accessed/used by billions of tasks, nor take a long time to reach from all nodes globally. The sum of nodes recreates a node.

Prediction makes the future based on the past data/world state, and the human brain keeps an energized dialog state in its local agenda focus while a global sub-conscious attention votes on more-so un-desired nodes as well. Prediction is the decision process that is based on surrounding context in an "environment", be it a womb or a neuron. There are many factors/conditions that trigger actions/thoughts (same thing). To make a prediction of the future, you use the past context. Text generators do this. An exact match is the most basic way to see what occurs next. Word/letter frequency is used to choose more likely predictions. The brain is a physics simulator, with its image and sentence "thoughts". Just the act of a word or image/object appearing next results in truth.

In big data, you can get exponentially more out of it using intense/deep "translation" instead of exact matches only. So even if the truth appears to be said many times, it can be overridden by invisible truth deep in the data, which the data barely says it wants in life. It's all based on the frequency of what comes next in text. Deep translation lets it gather all the truth it needs. It's a simulation based on real data. This "deep translation" is the very evolution/"AGI" we seek. Data self-recursively evolves itself, and we do this in our own brain as well until coming to a settled-down, colder equilibrium. In the world before brains that simulate the world, the instinctive short-term direct-response primitive brain, and especially the environment itself like ponds and wombs, use context to evolve themselves by making decisions. But the first doesn't remember the past, and the second only remembers the past. The third compares the past to previous states.

So, all based on direct frequency (truth), Deep Translation (for human brains that simulate, not primitive, not raw physics) can extract new data from old data (hidden truth) and decide the future prediction (new truth), evolving the mass of data you're using to do this. Desired reward guides this to desired outcomes.

Deep Translation improves prediction for the Hutter Prize in all ways. And notice that attention, for deciding which question to ask yourself/others or whether to act it out in motors for real, is based on past context - the current state of the system/world.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 01:24:33 PM
See last post.

Oh and see, told yous, regenerating/repairing is used in all AI, here it comes up again:
https://www.youtube.com/watch?v=bXzauli1TyU
try it:
https://distill.pub/2020/growing-ca/
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 01:40:17 PM
they even mention "Embryogenetic Modeling"
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 02:09:40 PM
i found that link btw AFTER i wrote all meh text. See I'm spot on in every way lol....translation, context, etc
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 26, 2020, 03:10:04 PM
Similarly, we inhibit questions when we see the outcome we wanted, which is like cells building/branching during organ creation/regeneration.



notice the link says

"The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop."

Based on context and hardcoded desires, it grows its way forward.

"While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal"

“build an eye here”

"Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves."

"We will focus on Cellular Automata models as a roadmap for the effort of identifying cell-level rules which give rise to complex, regenerative behavior of the collective. CAs typically consist of a grid of cells being iteratively updated, with the same set of rules being applied to each cell at every step. The new state of a cell depends only on the states of the few cells in its immediate neighborhood. Despite their apparent simplicity, CAs often demonstrate rich, interesting behaviours, and have a long history of being applied to modeling biological phenomena."

"Typical cellular automata update all cells simultaneously. This implies the existence of a global clock, synchronizing all cells. Relying on global synchronisation is not something one expects from a self-organising system. We relax this requirement by assuming that each cell performs an update independently, waiting for a random time interval between updates"

Both the local and the global shape of the context (what, and where (position)) affect the prediction.
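For anyone who hasn't played with CAs, the quoted setup is easy to sketch (a generic Life-style rule here, nothing to do with the learned rules in the article):

import random

# Toy cellular automaton: the same local rule at every cell, and optionally
# a random subset of cells updating each step instead of a global clock.
def step(grid, asynchronous=False):
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    cells = [(y, x) for y in range(h) for x in range(w)]
    if asynchronous:
        cells = random.sample(cells, k=len(cells) // 2)   # no global synchronisation
    for y, x in cells:
        n = sum(grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - grid[y][x]
        new[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0   # local rule only
    return new

grid = [[random.randint(0, 1) for _ in range(16)] for _ in range(16)]
for _ in range(10):
    grid = step(grid, asynchronous=True)
print(sum(map(sum, grid)), "cells alive")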

"We can see that different training runs can lead to models with drastically different long term behaviours. Some tend to die out, some don’t seem to know how to stop growing, but some happen to be almost stable! How can we steer the training towards producing persistent patterns all the time?"

Sounds like GPT-2. When to finish a discovery sentence. Keep on topic until reach goal.

"we wanted the system to evolve from the seed pattern to the target pattern - a trajectory which we achieved in Experiment 1. Now, we want to avoid the instability we observed - which in our dynamical system metaphor consists of making the target pattern an attractor."

"Intuitively we claim that with longer time intervals and several applications of loss, the model is more likely to create an attractor for the target shape, as we iteratively mold the dynamics to return to the target pattern from wherever the system has decided to venture. However, longer time periods substantially increase the training time and more importantly, the memory requirements, given that the entire episode’s intermediate activations must be stored in memory for a backwards-pass to occur."

That sounds like Hutter Prize compressor improvement. Takes more RAM, takes longer, for better regeneration to target from Nothing (seed, compressed state).

"it’s been found that the target morphology is not hard coded by the DNA, but is maintained by a physiological circuit that stores a setpoint for this anatomical homeostasis"

We want to regenerate shape (sort the words/articles), and grow the organism/sentence as well. But avoid non-stop growth past the matured rest state goal and stop de-generation.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 08:49:13 PM
A good AGI predictor can answer this. Can you? Try!

"Those witches who were spotted on the house left in a hurry to see the monk in the cave near the canyon and there was the pot of gold they left and when they returned back they knew where to go if they wanted it back. They knew the keeper now owned it and if they waited too long then he would forever own it for now on."
Who owns what?
Possible Answers: Witches own monk/witches own canyon/monk owns house/monk owns cave/monk owns gold/cave owns pot/there was pot/he owns it
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 09:05:41 PM
I think that sentence is confusing and if the a.i. couldnt answer it probably wouldnt be so bad a thing.

The keeper of what?  u should dictate it.

Uve got something like word2vec in yours dont you - did u see my post?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 09:16:51 PM
omg! someone answer it lol....it's EASY.........easy!!!!.........plz?

no w2v yet, but i'll get my own in there and i'm coding it! :)))))))
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 09:31:52 PM
If ur understanding the text so literally,  the robot would be really easy to trick.
Yeh, its fine if your catering the sentences for it, and its still pretty cool i guess if it could parse its way through that mumbo jumbo you wrote.

Giving the robot a bullshit detector is very important,   its the big "assimilation of lies" problem.   There is a solution tho... thats what more important to me right now.

But I guess all NLP has that problem just about,  so no biggie.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 09:40:19 PM
it DOES flex to the text.....to get the best probable answer it can get.....u just cant answer my test above :)
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:01:39 PM
Monk owned the gold?
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 10:04:56 PM
Huuuuuuu! U got it!

GOOD JOB! AGI pet deserves a kibble treat. Pats heads lightly. Perr. Open mouth now.

Now we need the AGI TO DO THAT!!! Come on you guys! This is our last stand.
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:08:17 PM
What I want to know,  if its this easy to make A.G.I  how come it wasnt done years ago.  and what about this a.i. winter - lies?  To me it seems.
Title: Re: Releasing full AGI/evolution research
Post by: LOCKSUIT on March 27, 2020, 10:11:38 PM
Cus you gotta grab it by the balls and say "I'm gonna do it!".

I don't hear anyone hear saying it with enough courage!

That was a real typo btw, wow!
Title: Re: Releasing full AGI/evolution research
Post by: krayvonk on March 27, 2020, 10:42:49 PM
Yeh well what were ppl doing in the 80's  *scratching* their balls?!?!?

The whole vietnam war went thru killed everyone, and why didnt they have drones back then - makes NO sense.