Human Style AGI

HS
Human Style AGI
« on: April 28, 2021, 09:49:40 pm »
It’s just possible that some measure of understanding is beginning to dawn on me regarding the process of creating a general intelligence from the ground level up. What I’ll be trying to do here is outline, based on what seems right to me, the fundamental principles involved in the functioning of a human-like, environmentally integrated intelligence.

HS
Re: Human Style AGI
« Reply #1 on: April 28, 2021, 09:50:39 pm »
Level 1: The Environmental Download

This has to take its time because we are downloading and modeling processes. Or rather, one process, which is later separated into useful artificial distinctions. First of all, such a system would need ways to detect what we would refer to as ambient matter and energy. Second, it would need a way to store such detections. Third, it would need a way to actively represent how these detections change with time. This creates a working model of reality. A sensitive membrane between outside and inside would allow the outside environment to inform an internal adaptive mechanism at high resolution, thus generating a detailed model of reality.
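
To make the detect/store/model triad concrete, here is a minimal toy sketch in Python. The sensor function and all the numbers are made-up placeholders; the point is only the loop of detecting, storing, and continuously adapting an internal model to the stream.

import numpy as np

# (1) detect ambient signals, (2) store the detections, (3) actively model
# how they change over time. The "sensor" is a stand-in: a few slow
# oscillations plus noise.
N = 16                                   # number of sensory channels
rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, N)

def read_sensors(t):
    """Stand-in for detection: one frame of 'ambient' measurements."""
    return np.sin(0.05 * t + phases) + 0.05 * rng.normal(size=N)

history = []                             # (2) raw storage of detections
W = np.zeros((N, N))                     # (3) a linear next-frame predictor

for t in range(2000):
    frame = read_sensors(t)
    if history:
        prev = history[-1]
        err = frame - W @ prev           # how wrong the internal model was
        W += 0.01 * np.outer(err, prev)  # nudge the model toward the world
    history.append(frame)

# After enough exposure, W holds a crude "working model of reality":
# given the last frame, it can roughly predict the next one.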

AI considerations like ethics are Level 5 behaviors (according to how I’ve decided to organize this), and as such they cannot cover all your bases. They are important, but they cannot be the groundwork of intelligence. A system using such concepts as its groundwork would probably go awry because, for example, it would eventually become apparent that it’s unethical to treat everything in terms of ethics. If all you have is a hammer, you treat everything like a nail, and your capacity for constructive behavior is reduced accordingly.

First, an intelligence must become integrated with the environment. Only after reading the room can you confidently generate a constructive contribution. I don’t think there can be an internal rule set appropriate to all situations. Only through a lack of predefined rules can an environment bring an organism fully online, as it were. Only once a brain has the grand scheme of things, and the nervous system has mimicked the workings of the environment, once it has read the room, or in this case grown a working model of reality, does it begin to divide its surroundings into parts exhibiting different behaviors (as we will see in Level 2).

For this, I’m partial to artificial physical neural nets. After some more thinking on the subject, I realized that I want to go wired for the power supply and wireless for the communication. As for what is communicated: just raw sensory data, fed into the net and channeled around like a fluid, with the help of material principles such as mass, acceleration, momentum, and conversions of energy, to make the data behave sensibly (like it would outside the brain, duh!). Reductions in friction due to “smoothed/worn” paths at input/output points, and net regions with recurring cycles or stable currents of data, could allow for active data storage, so that when the appropriate inputs are present, the connectome is ready to represent them.
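
And here is a toy illustration of the worn-path idea, again just a sketch with invented numbers rather than a design for the physical net: data hops between nodes, every traversal lowers the friction of the edge it used, and low-friction edges become more likely to be used again, so a recurring signal carves its own channel.

import numpy as np

n = 8
friction = np.ones((n, n))               # every path starts equally "rough"
rng = np.random.default_rng(0)

def next_hop(node):
    """Choose the next node, favoring low-friction (well-worn) edges."""
    weights = 1.0 / friction[node]
    weights[node] = 0.0                  # don't stay put
    return rng.choice(n, p=weights / weights.sum())

for _ in range(3000):                    # a recurring input at node 0,
    node, target = 0, n - 1              # flowing toward an output at node 7
    while node != target:
        nxt = next_hop(node)
        friction[node, nxt] *= 0.999     # the traversed path gets worn smoother
        node = nxt

# The lowest-friction entries now mark the channels this recurring signal has
# carved; that is the sense in which the net stands ready for familiar input.
print(np.round(friction, 2))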


Level 2: The Creation of Distinctions in the Previously Continuous Model of Reality

These distinctions arise based on what affects the system. No thinking, as such, is involved yet. The environment inducing particular effects on our system is what creates objects or processes out of the undelineated background. First the universe informs the connectome with itself; then the nature of the connectome emphasizes significant distinctions. ‘Significant’ here means that they interact with this particular system: they create change in the system, making them noticeable in a positive, negative, or arbitrary way. ‘Positive’ means a strengthening of the organism-environment system, ‘negative’ signifies the reverse, and ‘arbitrary’ means that the effort of an interaction would outweigh any cost or benefit derived from it.
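
As a tiny sketch of that last distinction (the strength numbers and the interaction cost are invented placeholders), the label comes purely from the change a stimulus induces in the system:

def classify_effect(strength_before, strength_after, interaction_cost):
    delta = strength_after - strength_before
    if abs(delta) <= interaction_cost:
        return "arbitrary"   # the effort outweighs any cost or benefit
    return "positive" if delta > 0 else "negative"

print(classify_effect(1.0, 1.5, 0.1))    # positive: the system is strengthened
print(classify_effect(1.0, 0.4, 0.1))    # negative: the reverse
print(classify_effect(1.0, 1.05, 0.1))   # arbitrary: not worth the interaction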


Level 3: The Combination of the Known Effects of Distinct Things on Each Subsystem (Organism & Environment), in an Attempt to Strengthen the Combined Supersystem

The environmental intelligence seeks to keep the environment capable of sustaining it. When both sides have an interest (conscious or not) in keeping each other going, the resulting supersystem becomes self-correcting. An environmentally disconnected system (intelligent or not) will eventually degrade and unspool into entropy. But by lending each other support, two systems can persist and climb into the future as a kind of ratchet-and-pulley combo.


Level 4: Multi-Agent Systems

A multi-agent system is where conscious attention should enter into the equation. I’ll loosely define consciousness (the whole notion of having “correct” definitions is of course ridiculous) as the recognition of ‘self’, and the realization of ‘other’ as also experiencing ‘self’. This allows a consciousness to project a framework of its own experience onto others, then reason about how their experience would differ from its own, based on environment, history, etc.

An attitude about an event creates an (event + opinion), which we call a situation. The more conscious an intelligence is, the more it can notice other conscious agents being involved in various external situations. I’ll call this situational intelligence; I believe such a thing is needed to understand group dynamics.


Level 5: The Emergence of Iterative Processes for Dealing with Situations / Group Dynamics

The nature of a multi-agent supersystem at human levels of complexity generates high-level thought patterns: various forms of narrative intelligence. The purpose of these iteratively constructive mental and behavioral processes is to provide a method for incremental progress, which, ideally, empowers an individual both to raise themselves up and to raise up the society they are a part of. However, just take a look at human history. Yikes. Level 5 is where things can go very wrong, or very right. What kind of narrative process can provide both great power and great responsibility?

The selection and conscious development of good yet powerful iterative processes is a big and important question. As yet, I don’t have a good answer. I’ll save it for Level 6.





So, the next step for me will be to study story structures, their philosophical underpinnings, the health of resultant/concurrent cultures/societies, and the quality of life of the individuals who are part of them. I’ve been reading and re-reading the highly recommended book on story structure, “The Fantasy Fiction Formula” by Deborah Chester (Jim Butcher’s writing teacher); it’s good stuff. I was amazed to see similarities between ruebot’s knowledge of behavior modification and the descriptions of emotionally satisfying story structure. When different approaches arrive at similar conclusions, it seems promising.


Also, Some Obligatory Poetic Notions on AGI: Shoutout to Lex Fridman for Rationalizing Radicalizing Russianizing me.

Concentrations of physical processes appear noticeably alive or self-running. If the universe as a whole is a self-sustaining or self-renewing set of interdependent properties, then it would follow that concentrations of such a substance would begin to act like little universes. Enough physics, enough of the patterns of the world, would begin to interrelate and endure as a system weaving itself together faster than entropy can jostle it apart. A neural net is then, in essence, a tiny shadow-universe simulating a fourth dimension with which to view itself and its origin, and developing a will of its own as it goes.

MagnusWootton
Re: Human Style AGI
« Reply #2 on: April 29, 2021, 06:35:18 am »
I'm working on my own system. It's not AGI, but it could be damn good if it had a shitload of processing power.
AIs are all doing the same thing, so I can draw similarities with my sys.

Level 1: The Environmental Download

In a raw, unprocessed form, your whole experience could be a movie of your sensory map from the day you were born to today. At 60 Hz, that would be about 75 billion photos if you were 40 years old, which is just feasibly storable on a large supercomputer.
If that were the only input into an animal, the entire model would be built using only this data as a source.
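
Quick check of that figure in Python (the per-frame size isn't something I've worked out, so the storage line just uses a hypothetical 100 KB per frame):

seconds = 40 * 365.25 * 24 * 3600        # ~1.26e9 seconds in 40 years
frames = seconds * 60                    # a 60 Hz sensory movie
print(f"{frames:,.0f} frames")           # ~75,738,240,000 -> ~75 billion

bytes_per_frame = 100_000                # hypothetical 100 KB per frame
print(f"{frames * bytes_per_frame / 1e15:.1f} PB")   # ~7.6 petabytes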

Level 2: The Creation of Distinctions in the Previously Continuous Model of Reality

My feedforward neural network forms based upon agreeing with the continuous stream of inputs, as a form of negative reinforcement. The robot is only allowed to achieve positive reinforcement whilst taking the negative reinforcement with it, or the system will cheat itself and malfunction.
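
Roughly what I mean by that coupling, as a little Python sketch (the names and numbers are just placeholders): a candidate action only gets to keep its positive reinforcement if it also carries the penalty for disagreeing with the sensory stream, otherwise the score is vetoed.

def coupled_score(positive_reward, disagreement_penalty, tolerance=0.1):
    if disagreement_penalty > tolerance:          # the plan ignores the senses
        return -disagreement_penalty              # only the penalty gets through
    return positive_reward - disagreement_penalty # reward, with the cost attached

print(coupled_score(1.0, 0.05))   # agrees with the stream -> 0.95
print(coupled_score(1.0, 0.80))   # disagrees -> -0.8, the reward is withheld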

Level 3: The Combination of the Known Effects of Distinct Things on Each Subsystem (Organism & Environment), in an Attempt to Strengthen the Combined Supersystem

This is about where I leave off for my AI system. Thinking isn't really possible in my system at all; it's going to be building a transform that agrees with the sensory input at the same time as agreeing with its motivation. It's not thinking, but it could be effective for doing menial tasks.

Level 4: Multi-Agent Systems

When my computer is using its model to predict into the future from where it is now, it can include other bodies, but it has to guess the physics hinge structure + motivations to do so. It's very similar to it just projecting ahead what it's going to do itself.
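
Something like this, as a toy sketch (every name and number is a made-up placeholder): the same forward model is rolled ahead for my own body and for the other bodies, just with guessed hinge parameters and a guessed motivation plugged in for the others.

from dataclasses import dataclass

@dataclass
class Body:
    position: float         # stand-in for a full pose
    hinge_stiffness: float  # physical structure (known for self, guessed for others)
    motivation: float       # goal direction (known for self, guessed for others)

def step(body, dt=0.1):
    """One forward-model step, applied identically to self and to others."""
    return Body(body.position + dt * body.motivation / body.hinge_stiffness,
                body.hinge_stiffness, body.motivation)

me = Body(position=0.0, hinge_stiffness=1.0, motivation=1.0)
other = Body(position=5.0, hinge_stiffness=2.0, motivation=-1.0)  # guessed values

for _ in range(10):                      # project both bodies into the future
    me, other = step(me), step(other)
print(round(me.position, 2), round(other.position, 2))   # 1.0 and 4.5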

Level 5: The Emergence of Iterative Processes for Dealing with Situations / Group Dynamics

My AI model can only use the feedforward neural network to generate behaviors, so if it's not in that transform, it won't do it.

HS
Re: Human Style AGI
« Reply #3 on: May 11, 2021, 06:50:43 am »
Level 6: Fundamental Assumptions and Expanding Choice

Since all these thought processes, methods, and beliefs for dealing with situations keep emerging, if there isn't a counterbalance they will eventually overpopulate, become resource-depleted, and attempt to survive, often by playing to the vulnerabilities of a general intelligence. Since other intelligent agents become dependent on the various methods of keeping society functioning, eventually the survival of these methods can take priority over their intended functions.

Therefore, attempting to correct this with yet more methods and beliefs seems foolish. Maybe what causes problems in societies are the very things meant to keep problems from occurring. The underlying assumption behind these efforts is that we were born somehow dysfunctional, and that we need to change ourselves quite a lot, through instituting various societal structures and belief systems, to become acceptable.

But what's really happening is that we are interfering with our intelligence in the process of trying to fix ourselves, so that people, and whole societies, tragically become more dysfunctional the more they attempt to control and improve themselves.

The solution to this spiral of doom would be to pick a different fundamental assumption for our operating system: thinking of freedom as a way to safety, instead of as a way into danger. So, this is kind of disappointing... I was in the midst of writing a whole thing piecing together the perfect philosophy that my human-style AGI would adhere to, which would turn it into a model citizen.

But the whole reason I can enjoy life and find meaning in it is my freedom. Same with others: I admire and respect people who willingly rise to meet difficult challenges; they grow and adapt in the process, and this seems like an obvious good to me. An allegedly perfect life philosophy or belief structure would create a perhaps initially highly agreeable, but ultimately static, intelligence.

For creating a general intelligence, developing ways to increase and expand choice, instead of limiting it, seems to be the right course of action. The thing to be afraid of isn't a completely free and powerful mind. The thing to be afraid of is an incomplete mind, a mind which hasn’t been granted full freedom of choice and understanding.

Therefore, my narrative thought process isn't going to prescribe any specific behavior; it is going to allow for the development of all possible behavior, so that the resulting intelligent being may become a precise and accurate expression of the universe. Then I guess we'll discover whether this is a good thing or not, but I'd like to give it a chance.

This is Human Style AGI after all, not Blank Slate AGI, lol. So of course I'll be developing/adapting some kind of narrative intelligence loop (though perhaps it will end up as more of a web) to lay the groundwork for a recognizable type of mind. But I'm going to be careful not to prescribe behavior, and instead to allow for intricate and adaptable personality development; something beyond our wisdom to program, which only the entirety of the environment could adequately inform.

So there it is: liberty and choice triumph over the "perfect" process. Up next, hopefully, the narrative web of freedom!



Obligatory Poetic Notions: Since the state of being dead is not one you can experience, and reality appears infinite, it seems to me that continuous experience is the only thing that can exist from our point of view. Possibly through different forms, like the idea of reincarnation. Possibly all experience is the same phenomenon, merely observing from different vantage points. Possibly I am the only "I" that can be me. Even if this is true, and billions of universes have to explode, implode, and reignite, eventually, by probability, this pattern constituting "me" will recur. Except of course, from my point of view, this would always happen instantly after I die.

MagnusWootton
Re: Human Style AGI
« Reply #5 on: May 14, 2021, 03:49:43 pm »
Sounds like you're not just after intelligence; you want something extra that actually works. :2funny:


[edit] But no, later on you said you wanted to grant it freedom of thought, and that's when all the dangerous things happen. [/edit]

If you make it aware of its own awareness, then maybe it won't want you to turn off the power button, and it's quite horrifying and unethical for me to ever make that happen. (But people have kids every day.)
« Last Edit: May 14, 2021, 04:25:05 pm by MagnusWootton »

HS
Re: Human Style AGI
« Reply #6 on: May 14, 2021, 04:21:21 pm »
 There's always gotta be one idiot in every group.  ;D

MagnusWootton
Re: Human Style AGI
« Reply #7 on: May 15, 2021, 11:39:31 am »
Did I offend you? You must have misinterpreted me; I think your ideas are really good.
You're right though, I usually am the "idiot" of the group wherever I go. I don't let it bother me. :)

HS
Re: Human Style AGI
« Reply #8 on: May 15, 2021, 02:57:56 pm »
No worries! I was referring to myself. My laughing emojis are always positive, and so are my angry emojis. Getting offended is a bit of a foreign concept to me, so I sometimes misjudge what will or won’t offend others. I know the feeling, lol.

I was just poking fun at myself. You were kind enough to note how grand and ambitious my AI dreams are, essentially saying that I was the only one reaching for the “actual” thing. I agree, it was funny, and I appreciated it.

At the same time, I’m also aware that most of the competent people around here tend to stick with what’s doable, which does provide a nice reality check.

But although I believe my dreams are probably only pipe dreams, I also believe that every place needs a pipe dreamer, or it's apt to become hopelessly dry and academic.  :sleeping:  ;D

See? It’s all good!

MagnusWootton
Re: Human Style AGI
« Reply #9 on: May 15, 2021, 05:10:09 pm »
Sorry, I was getting paranoid. I get hassled on the internet a lot... so I kinda always think it's happening even when it isn't.

It all could be doable, even the whole shabammer of AGI. My little theory just needs exponential power for it to start working (even though that would pretty much solve any issue with anything, hehe); then it's just a motivation problem, and maybe the solution to that is already out there on the internet. Even though no one has exponential power... do they...

infurl
Re: Human Style AGI
« Reply #10 on: May 15, 2021, 11:57:50 pm »
If you don't strive for the impossible, you won't know what to do that's doable. You need to have long-term goals and short-term goals. The long-term goals help you choose the best short-term goals.

HS
Re: Human Style AGI
« Reply #11 on: May 16, 2021, 01:50:45 am »
I’ve been hearing variations on this from multiple good sources. It’s like the difference between going on a road trip with a map and going without a map. Results may depend on what the purpose of the trip is, but yes, when trying to get somewhere specific in a timely fashion, bringing a map sure is the smart way to go. A similar principle seems to apply physically. Sailors needed distant objects like the sun and the North Star* to know what to do with things close at hand, such as the rudder. Otherwise they would’ve had a hell of a time trying to cross the Pacific.


*Incidentally, there was a huge solar flare on Proxima Centauri, so we may have to look elsewhere to find the aliens. Alternatively, those Centaurians could be very metal.

https://astronomy.com/news/2021/05/massive-flare-seen-on-the-closest-star-to-the-solar-system-what-it-means-for-chances-of-alien-neighbors


infurl
Re: Human Style AGI
« Reply #12 on: May 16, 2021, 02:05:01 am »
I think nobody with knowledge of stars would imagine for one moment that life could evolve on a planet near a star like Proxima Centauri. Red dwarf stars are so violent they frequently sterilize their neighbourhood and would kill everything that couldn't protect itself. However they are also by far the most common type of star in the universe and the longest lived by many orders of magnitude.

That would make them a very good place for an advanced technological civilization to colonize. Anyone with the technology to travel to one of these stars would surely have the technology to be able to survive there too, and they would be able to stay there for trillions of years, long after the rest of the universe was dead.

 

