Releasing full AGI/evolution research

  • 290 Replies
  • 190149 Views

Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Releasing full AGI/evolution research
« Reply #15 on: December 28, 2019, 06:30:59 pm »
You did remove my name though? Yes?

Quote
I spammed OpenAi with it too hehe, more 2 come

Lock, answer me... have you taken my name off this spam posting you're doing?


https://agi.topicbox.com/groups/agi/T7cbcba9a1ae63532/releasing-full-agi-evolution-research

https://groups.google.com/forum/#!topic/artificial-general-intelligence/U8-wPGwON_0

https://www.mail-archive.com/agi@agi.topicbox.com/msg03960.html

So not funny, Lock...

« Last Edit: December 28, 2019, 07:17:54 pm by Korrelan »
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #16 on: December 28, 2019, 07:37:55 pm »
you're not on the OpenAI ones

and won't be in future posts/redirects

my notes pastes have you as K lol



That's where the doctors hang out, now that you've infected everyone onto it.
Emergent          https://openai.com/blog/


Korrelan

  • Trusty Member
  • Eve
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Releasing full AGI/evolution research
« Reply #17 on: December 28, 2019, 07:55:41 pm »
Remove my name from all your spams.

 :idiot2:
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #18 on: December 28, 2019, 08:00:50 pm »
Only 2 allow it, but I'll try.

All other names have already been made into code names prior.
« Last Edit: December 28, 2019, 08:51:16 pm by LOCKSUIT »
Emergent          https://openai.com/blog/


HS

  • Trusty Member
  • Millennium Man
  • 1177
Re: Releasing full AGI/evolution research
« Reply #19 on: December 30, 2019, 01:11:05 am »
So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word? 


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #20 on: December 30, 2019, 03:12:33 pm »
Understanding Compression

To learn how to break the laws of physics, we must understand them better.

https://paste.ee/p/kQLCx

"So the meaning of a word is not contained within it but is instead described by the shape of the web of related words as observed from the vantage point of the word in question? Context is the actual word?"
Yes, a given particle of Earth is defined by all of Earth's context (and then it re-checks the all against each again, a self-attentional Self-Recursion of Data-Improvement, like editing a paper); an exponential explosion of heat is given to the core of Earth and it self-extracts free energy from burning fuel. Brains do this, atoms do it, galaxies do it. That's why magnetic domains align and propagate brain waves in a brain, a team of brains, magnets, etc. AGI will be a collaborative project and already is too; we share data. Let's hug each other (real tight hug).
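
That web-of-context idea is basically distributional semantics, and it is easy to demo. A minimal Python sketch (the toy corpus and window size are placeholder choices, not from this thread): each word's "meaning" is just the counts of its neighbours, and words that share contexts come out similar.

Code:
# A word is represented only by counts of the words around it;
# "cat" and "dog" share contexts, so their vectors end up close.
from collections import Counter, defaultdict
import math

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "a cat chased a mouse . a dog chased a ball .").split()
WINDOW = 2  # neighbours on each side that count as "context"

vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(vectors["cat"], vectors["dog"]))  # high: shared contexts
print(cosine(vectors["cat"], vectors["on"]))   # lower: different role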

The big bang was unstable and decompressed. Planets re-compress. Atoms do it. Galaxies do it. A brain compresses to learn the facets of the universe by using data compression, so that it can burn fuel and extract free energy/data from old data (just like batteries, gasoline, and our stomachs). Data evolution, data-Self-Recursion. Lossy/lossless compression both transform data from one form to another. When you compress a file losslessly, it actually is destroyed and gone because it isn't the same file/data. Compressing/firing employees does this too. Luckily, being lossless, you can re-generate it back at a click of a button (or if you destroy a drawing on your desk and re-draw it from memory); however, it takes time to evolve it back, sometimes a VERY long time. Brute force to find the smallest compression of the Hutter Prize file would take extremely long. Intelligence is all about speed, evolving domains of nodes (cells, neurons, brains, cities) to find which out-pace each other. This aligns the domains of the brain/group to propagate brain waves faster through the cluster and have a bigger electro-magnetic potential. If we use lossy compression, you can actually get the exact file back, but it takes much longer. A system in space will collect data to grow, then decompress, a self-extracting drive. This decompression is exponentially explosive and results in smaller agents that evolve to compress-extract so they can resist change. Energy (photons) propagates forward but can be pulled in by gravity and will loop around like in a motionless, cold battery. Change = energy release. Unstable. Equilibrium is the opposite. We've seen an algorithm can be run perfectly many times: compress, decompress, compress, repeat. To do this requires a form of equilibrium. Wear and tear affects it though. Yet our sperm/eggs have seen many generations. If the universe contracts back, Earth can emerge again by this self-organizing/attention physics. Different systems and their sizes evolve differently, but it is based on electromagnetic compression/decompression; Earth, if it became nanobots, would simply grow in size and resist change/death approx. better. Lossless compression is so fast because it's all contained in such a small place, like a core rod, and is very hot/related; lossy requires e.g. the whole Earth, a form of brute force, and exponential hints/data to evolve it back faster. Lossless, locally, without brains to discover the data, requires only little data. The bigger a system is, the bigger the file you can re-create from nothing - a human brain can re-generate back almost anything. Lossless, based on how many particles are in the defined system (uncompressed file size, which needs a computer to store/run it), has a limit on how small it can become, and so does lossy, because Earth is finite in size during a given quantized period, and a file can be re-generated back quite fast if some of it is still around - the lossy file, even if incinerated, can be re-generated back based on how many particles make up Earth. Here we see a file can be compressed deeper the bigger the file is or the bigger the Earth is. With so little of the file left (even just the remaining physics if incinerated) it can come back based on large context, but it has a limit/need (size of Earth/fileData, time, and compute).

We see the communication/data tech builds on itself exponentially faster; bigger data = better intelligence and extracts exponentially more/better data (per a given system size). Earth is growing and heating up by collecting more mass and extracting/utilizing exponentially more energy, like nanobots will when they come. We will harvest Dyson Spheres. Our goal to resist change by finding/eating food and breeding (Darwinian survival) could Paperclip Effect us and explode ourselves! A cycle of compress, decompress. Our goal is to compress data in our files, brains, teams, but also to expand our colony of data. Why? To resist change, to come to equilibrium (end of evolution for a given system, exponentially faster). These colony mutants/tribes have longer stable lives, being so large and using their size to extract so much. The bigger a system is, the less it changes. Imagine destroying an instantly-repairing nanobot superorganism? Can't. And the bigger a system, the more weight/vote/context interaction (heat) is transmitted/infected, not just to extract free knowledge/heat (motion/energy) but also to fix issues/damage. My body/knowledge stays almost the same, yet my cells/blood all change their spots for new ones; the air stays the same, yet it blows around Earth; the heat in my walls stays the same, yet the heat moves around. Earth is a fractal of pipes, veins, roads, and internet connections to propagate energy, ideas, blood, waste, traps, and negative electricity, simply to loop it around and re-use it. Distribution of data allows global, not just local, flow/alignments. It moves around and the system can resist change, repair, or emerge. Our goal is to resist change by using large context/collaboration, by aligning random domains to get free energy/knowledge. We have to collect/grow big and digest/extract it so we can resist change better. We are doing both compression and decompression of data/energy, and possibly are trying to equal them out so we can come to equilibrium just right in the middle of the 2 opposites/attractors. The system we become will be exponentially repairing/immune to change - compression and decompression - however we may be growing larger but less dense as we do so, to become approx. more immortal. We will likely need an exhaust/feed though; we will need a fine-tuned food source and radiation exit for our global utopia sphere/galactic disc loop string.

So we should be very interested in compression and decompression, i.e. Biggish Diverse Dropout - which data to destroy and remove/ignore/forget - and Big Diverse Data collection/creation by extracting free data using old-data context vote/weigh-in. In the brain, we do compression and can basically still re-generate the e.g. Hutter Prize file despite having a small decompression brain. The need to both ignore/attend is the same process in Dropout or data collecting/harvesting; the decompression process of choosing which new data to extract/collect from old data is also the same process; and the compress/decompress processes are the same process too - which to remove and which to attend. However, to attend fast we need to remove fast, hence these 2 steps are not really the same process. However, when you do compress data and create a brain/team, it is easy to attend to the remaining keys. During extraction, you use what you Learned (patterns) to decide what to Generate. So they are both 2 different processes, I guess. Btw, when you build a heterarchy you need the hierarchy first, and may not even need the heterarchy! The connections of context handles are already laid. I was going to say, making relational connections doesn't compress data on its own, yet in effect it does, though.

Some concepts above were compression, decompression, equilibrium (no change/death), and exponentiality. We've seen how we grow mutants that resist change better by using both compression/decompression (destruction of neurons/ideas/employees/lives/Earth, and creation of such) so we can come to equilibrium exponentially faster by large context weight (which exponentially helps compression, and extraction during Generating (e.g. GPT-2's 40GB and 1024-token view)). I'm still unsure if we are just growing and exploding. If the universe only expands then we will likely radiate.

Compression looks for patterns and leads to faster domain alignment/propagation and exponentially faster large brain waves / free energy extraction / re-generation from nothing. If we want to compress the Hutter Prize file the most, we will need to stop it from generating multiple choices from a given context (it still uses the context). We could sort all phrases in the file like 'and the'/'but the' and 'so I'/'then I', and force it to discover the concept that leads to the re-used code 'the' or 'I'.
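
That phrase-sorting idea can be sketched in a few lines of Python (the sample text below is a placeholder; run it on the real file instead): group every word by the words that precede it, and continuations like 'the' or 'I' that are reachable from many different contexts are candidates for a shared, re-used code.

Code:
# Find continuations shared by many different contexts; contexts
# that predict the same next word ("and the", "but the") could be
# merged into one concept by a compressor.
from collections import Counter, defaultdict

text = "and the but the so I then I and the when I".split()  # placeholder

preceders = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    preceders[nxt][prev] += 1

# Most widely shared continuations first.
for word, ctxs in sorted(preceders.items(), key=lambda kv: -len(kv[1])):
    print(word, dict(ctxs))
# 'I' is preceded by {'so', 'then', 'when'}; 'the' by {'and', 'but'}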
« Last Edit: December 30, 2019, 04:32:07 pm by LOCKSUIT »
Emergent          https://openai.com/blog/


HS

  • Trusty Member
  • Millennium Man
  • 1177
Re: Releasing full AGI/evolution research
« Reply #21 on: December 30, 2019, 09:19:07 pm »
Resisting change is still change though :P. I'd say the goal is to resist entropy.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #22 on: December 30, 2019, 09:30:43 pm »
Taking the right path is a lot less change than bumping into the burglar with a shotgun 0O. They simply breed/rejuvenate more than they die. The agent stays most similar when, starting from a statue-still pose, it bends down to grab an apple.
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #23 on: January 04, 2020, 04:55:32 am »
I've got 10 YouTube subscribers now lol.
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #24 on: January 08, 2020, 01:37:01 am »
I have my man on the compressor algorithm for 25 USD from India. I am currently learning how they work and will shortly post my formal formula for AGI. In the meantime see my entries here: https://agi.topicbox.com/groups/agi

Layer Norm... I see now it is just this:
https://knowledge.insead.edu/operations/warning-do-not-just-average-predictions-6641
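
The averaging that article discusses is just pooling several probability estimates (the toy numbers below are placeholders). The toy also shows why pooling is tempting: the averaged guess often lands closer to the truth than a typical individual guess, which is the article's starting point before its warning.

Code:
# Pool three predictors' estimates of P(next char = 'e') by a plain
# average, then compare squared errors against the actual outcome.
preds = [0.60, 0.70, 0.90]
avg = sum(preds) / len(preds)                  # 0.733...

truth = 1.0                                    # the char really was 'e'
mean_individual = sum((p - truth) ** 2 for p in preds) / len(preds)
print(mean_individual)                         # ~0.0867
print((avg - truth) ** 2)                      # ~0.0711: the average wins here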

GANs compress data... they generate realistic data... so does lossless prediction... the data fed to it allows it to work on unseen data... because it's so similar.

https://royvanrijn.com/blog/2010/02/compression-by-prediction/
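
The core trick, compression by prediction, can be sketched like this (my own minimal scheme for illustration, not necessarily the blog's exact coder): predict the next character from the current one; a correct guess costs 1 flag bit, a miss costs the flag plus an 8-bit literal. The decoder keeps an identical model and updates it the same way, so it stays in sync.

Code:
# Toy compression by prediction: guess the next char from the last
# one; a hit costs 1 bit, a miss costs 1 + 8 bits for a literal.
from collections import Counter, defaultdict

def compressed_bits(text):
    freq = defaultdict(Counter)        # last char -> next-char counts
    bits, prev = 0, ""
    for ch in text:
        guess = freq[prev].most_common(1)
        if guess and guess[0][0] == ch:
            bits += 1                  # flag: prediction was right
        else:
            bits += 1 + 8              # flag + raw 8-bit literal
        freq[prev][ch] += 1            # learn online; decoder mirrors this
        prev = ch
    return bits

text = "abababababababab"
print(compressed_bits(text), "bits vs", 8 * len(text), "raw")  # 40 vs 128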
« Last Edit: January 08, 2020, 03:08:33 am by LOCKSUIT »
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #25 on: January 09, 2020, 06:58:38 am »
My employee and I got the compression working. It is 5 bits per character; normally each char is 8 bpc. So the 100MB wiki8 would be about 63MB. Good for a start.
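
The arithmetic checks out: 2^5 = 32 symbols fit in 5 bits, and 100MB x 5/8 = 62.5MB. A minimal sketch of that fixed-width recoding (the 32-symbol alphabet below is a placeholder choice):

Code:
# Fixed 5-bit recoding: a 32-symbol alphabet packs to 5/8 the size.
ALPHABET = "abcdefghijklmnopqrstuvwxyz .,'-\n"   # placeholder, 32 symbols
assert len(ALPHABET) == 32
code = {ch: i for i, ch in enumerate(ALPHABET)}

def pack(text):
    bits = "".join(format(code[ch], "05b") for ch in text)
    bits += "0" * (-len(bits) % 8)               # pad to whole bytes
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

print(len(pack("hello world")), "bytes for 11 chars")  # 7 bytes, not 11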
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #26 on: January 18, 2020, 05:35:34 pm »
My order-2 compressor got the 100MB wiki8 file down to exactly 40,572,450 bytes. Took exactly 12 hours in Python lol. The dictionary (I included it in the 40MB) was 2,069,481 bytes. The decompressor was 4,910 bytes (also included in the 40MB). Code is attached for the non-believers. It's in Python, so you know it was me, 'cause they are usually in C++ for speed. You can try it on the small input I uploaded. https://paste.ee/p/Cd7Va

The world record is 15MB. 25MB away lol!!!
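
For anyone wanting to sanity-check numbers like these, an order-2 model's achievable size can be estimated without writing a full arithmetic coder: sum -log2 P(char | previous two chars) over the file. A minimal two-pass sketch (static whole-file counts, dictionary/model cost not included; the file path is a placeholder):

Code:
# Estimate order-2 compressed size: total bits is the sum of
# -log2 P(next char | previous two chars), using whole-file counts.
from collections import Counter, defaultdict
import math

def order2_bits(text):
    counts = defaultdict(Counter)
    for i in range(2, len(text)):                 # pass 1: count contexts
        counts[text[i-2:i]][text[i]] += 1
    bits = 0.0
    for i in range(2, len(text)):                 # pass 2: sum code costs
        ctx = counts[text[i-2:i]]
        bits += -math.log2(ctx[text[i]] / sum(ctx.values()))
    return bits

text = open("enwik8", encoding="latin-1").read()  # placeholder path
print(order2_bits(text) / 8 / 1e6, "MB, excluding the model/dictionary")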
Emergent          https://openai.com/blog/


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #27 on: February 24, 2020, 04:18:48 am »
It will move incredibly fast. AGIs can think/move a ton faster, replay skills perfectly, and erase bad parts. They can nest deep into thoughts: "the girl who saw the guy who wanted the man who said to him to go was here". They recall perfectly, have more memory and longer attention, don't sleep/eat/poop/nag, etc. AIs live longer than humans, can clone/download skills, etc. Many sensors/motors, many types of them, 3D vision using MRI and sims, wireless communication of visual thoughts, full cooperation, fully-timed shared updates; they can store facts instantly and fast when they read them, can see/feel nanobots to control them - we can't - and a lot, lot more I won't list here. Advanced nanobots will eat Earth in a day. It's really cheap to gather microscopic data and make small replicators to up your computer fabrication and data intake and manipulation accuracy. The more data/processors/arms/eyes they get, and the better ones they get, the more such they will get!

Inventing 1 AGI and cloning it on a mass fabrication scale is all we need. The most powerful thing will not be inventing 1 AGI per se; it will be cloning workers on cheap replicating computer hardware, data, arms, and eyes. I.e., inventing AGI and scaling AGI is all we need.
Emergent          https://openai.com/blog/



LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Prometheus
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Releasing full AGI/evolution research
« Reply #29 on: March 26, 2020, 12:57:39 pm »
Huge breakthroughs I've made.

See the link below if you're new to neural realistic-future generators for text... aka AGI attention:
https://aidreams.co.uk/forum/index.php?topic=14561.75

Distributed networks are the most powerful systems: brain and city government for decision forming. They are most robust. And larger systems are the most powerful systems: big brains (big diverse data) and big teams. They are most robust. Both allow you to go deep, fast, building a large concept/prediction based on many parts. With these de-centralized networks, you have duplicate data so that no human node or brain memory node has to be accessed/used by billions of tasks, nor takes a long time to complete/reach from all nodes globally. The sum of nodes recreates a node. Prediction makes the future based on the past data/world state, and the human brain keeps an energized dialog state in its local agenda focus while a global sub-conscious attention votes on more-so undesired nodes as well. Prediction is the decision process that is based on surrounding context in an "environment", be it a womb or a neuron. There are many factors/conditions that trigger actions/thoughts (same thing). To make a prediction of the future, you use the past context. Text generators do this. An exact match is the most basic way to see what occurs next. Word/letter frequency is used to choose more likely predictions. The brain is a physics simulator, with its image and sentence "thoughts". Just the act of a word or image/object appearing next results in truth. In big data, you can get exponentially more out of it using intense/deep "translation" instead of exact matches only. So even if the truth appears to be said many times, it can be overridden by invisible truth deep in the data that the data barely says it wants in life. It's all based on the frequency of what comes next in text. Deep translation lets it gather all the truth it needs. It's a simulation based on real data. This "deep translation" is the very evolution/"AGI" we seek. Data self-recursively evolves itself, and we do this in our own brain as well until coming to a settled-down, colder equilibrium. In the world before brains that simulate the world, the instinctive short-term direct-response primitive brain, and especially the environment itself like ponds and wombs, used context to evolve themselves by making decisions. But the first doesn't remember the past, and the second only remembers the past. The third compares the past to previous states.

So, all based on direct frequency (truth), Deep Translation (for human brains that simulate; not primitive, not raw physics) can extract new data from old data (hidden truth) and decide the future prediction (new truth), evolving the mass of data you're using to do this. Desired reward guides this to desired outcomes.

Deep Translation improves prediction for the Hutter Prize in all ways. And notice that attention, for deciding which question to ask yourself/others or whether to act it out in motors for real, is based on past context - the current state of the system/world.
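
The "exact match plus frequency" predictor described above is an n-gram lookup, and falling back from a long context to a shorter one is the simplest form of it. A minimal sketch (the toy corpus is a placeholder):

Code:
# Predict the next word by exact context match, longest context
# first, using the frequency of what came next in the data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()  # toy data

model = {1: defaultdict(Counter), 2: defaultdict(Counter)}
for i, w in enumerate(corpus):
    for n in (1, 2):
        if i >= n:
            model[n][tuple(corpus[i-n:i])][w] += 1

def predict(history):
    for n in (2, 1):                   # try the longer context first
        ctx = tuple(history[-n:])
        if ctx in model[n]:
            return model[n][ctx].most_common(1)[0][0]
    return None                        # unseen context: needs smoothing

print(predict(["and", "the"]))         # 'cat', from the exact bigram match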
Emergent          https://openai.com/blog/

 

