Human Style AGI


HS

Human Style AGI
« on: April 28, 2021, 09:49:40 pm »
It’s just possible that some measure of understanding is beginning to dawn on me regarding the process of creating a general intelligence, from the ground level up. What I'll be trying to do here is outline, based on what seems right to me, the fundamental principles involved in the functioning of a human-like, environmentally integrated intelligence.


HS

Re: Human Style AGI
« Reply #1 on: April 28, 2021, 09:50:39 pm »
Level 1: The Environmental Download

This has to take its time because we are downloading and modeling processes. Or rather, one process, which is later separated into useful artificial distinctions. First of all, such a system would need ways to detect what we would refer to as ambient matter and energy. Second, it would need a way to store such detections. Third, a way to actively represent how these detections change with time. This creates a working model of reality. A sensitive membrane between outside and inside would allow the outside environment to inform an internal adaptive mechanism at a high resolution, thus generating a detailed model of reality.

AI considerations like ethics are Level 5 behaviors (according to how I’ve decided to organize this) and as such they cannot cover all your bases. They are important, but cannot be the groundwork of intelligence. A system using such concepts as its groundwork would probably go awry because, for example, it would eventually become apparent that it’s unethical to treat everything in terms of ethics. If all you have is a hammer, you treat everything like a nail, and your capacity for constructive behavior will be reduced accordingly.

First an intelligence must become integrated with the environment. Only after reading the room can you confidently generate a constructive contribution. I don’t think there can be an internal rule set appropriate to all situations. Only through a lack of predefined rules can an environment bring an organism fully online, as it were. Only once a brain has the grand scheme of things, and the nervous system has mimicked the workings of the environment, once it has read the room, or in this case grown a working model of reality, does it begin to divide its surroundings into parts exhibiting different behaviors (as we will see in Level 2).

For this, I’m partial to artificial physical neural nets. After some more thinking on the subject, I realized that I want to go wired for the power supply and wireless for the communication. As for what is communicated: just raw sensory data, fed into the net and channeled around like a fluid, with the help of material principles such as mass, acceleration, momentum, and conversions of energy, to make the data behave sensibly (like it would outside the brain, duh!). Reductions in friction due to “smoothed/worn” paths at input/output points, and net regions with recurring cycles or stable currents of data, could allow for active data storage. So that when appropriate inputs are present, the connectome is ready to represent them.
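Here is a minimal toy sketch of the worn-path idea (Python; the class name and every constant are invented for illustration): each traversal lowers an edge's friction, so recurring signals carve stable channels that later act as stored structure.

Code:
import random

class WornPathNet:
    """Toy net where each signal traversal 'wears' an edge, lowering its friction."""

    def __init__(self, nodes, wear=0.15, floor=0.05):
        self.nodes = list(nodes)
        self.wear, self.floor = wear, floor
        # Every directed edge starts out equally 'rough'.
        self.friction = {(a, b): 1.0 for a in self.nodes for b in self.nodes if a != b}

    def step(self, current):
        # The signal flows to the neighbour with the least friction (ties broken randomly).
        least = min(self.friction[(current, n)] for n in self.nodes if n != current)
        nxt = random.choice([n for n in self.nodes
                             if n != current and self.friction[(current, n)] == least])
        # Traversal smooths the edge, so the same route is favoured next time.
        self.friction[(current, nxt)] = max(self.floor,
                                            self.friction[(current, nxt)] - self.wear)
        return nxt

net = WornPathNet(["a", "b", "c", "d"])
node = "a"
for _ in range(20):
    node = net.step(node)
# The low-friction edges are now stored structure: feeding in the same
# starting input tends to reproduce the same stable current of data.
print(sorted(net.friction.items(), key=lambda kv: kv[1])[:4])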


Level 2: The Creation of Distinctions in the Previously Continuous Model of Reality

These distinctions arise based on what affects the system. No thinking, as such, is involved yet. The environment inducing particular effects on our system is what creates objects or processes out of the undelineated background. First the universe informs the connectome with itself, then the nature of the connectome emphasizes significant distinctions, ‘significant’ meaning that they interact with this particular system. They create change in the system, making them noticeable in a positive, negative, or arbitrary way: ‘positive’ meaning a strengthening of the organism-environment system, ‘negative’ signifying the reverse, and ‘arbitrary’ meaning that the effort of an interaction would outweigh any cost or benefit derived from it.
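A toy sketch of that three-way labelling, assuming (purely for illustration) that we can measure a stimulus's effect on system strength and the effort of engaging with it:

Code:
def classify_effect(delta_health, effort, threshold=0.1):
    """Label a stimulus by its net effect on the organism-environment system.

    delta_health: observed change in system strength after the interaction.
    effort:       cost of engaging with the stimulus at all.
    """
    if abs(delta_health) - effort < threshold:
        return "arbitrary"   # interacting costs more than it changes
    return "positive" if delta_health > 0 else "negative"

# Hypothetical detections and their measured effects on the system:
for name, dh, eff in [("food", 0.8, 0.2), ("predator", -0.9, 0.3),
                      ("distant cloud", 0.02, 0.05)]:
    print(name, "->", classify_effect(dh, eff))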


Level 3: The Combination of the Known Effects of Distinct Things on Each Subsystem (Organism & Environment), in an Attempt to Strengthen the Combined Supersystem

The environmental intelligence seeks to keep the environment capable of sustaining it. When both sides have an interest (conscious or not) in keeping each other going, the resulting supersystem will become self-correcting. An environmentally disconnected system (intelligent or not) will eventually degrade and unspool to entropy. But by lending each other support, two systems can persist and climb into the future as a kind of ratchet and pulley combo.


Level 4: Multi-Agent Systems

A multi-agent system is where conscious attention should enter into the equation. I’ll loosely define consciousness (the whole thing about having “correct” definitions is of course ridiculous) as the recognition of ‘self’, and the realization that the ‘other’ also experiences a ‘self’. This allows a consciousness to project a framework of its experience onto others, then reason about how their experience would differ from its own, based on environment, history, etc.

An attitude about an event creates an (event + opinion), which we call a situation. The more conscious an intelligence is, the more it can notice other conscious agents being involved in various external situations. I’ll call this situational intelligence; I believe such a thing is needed to understand group dynamics.


Level 5: The Emergence of Iterative Processes for Dealing with Situations / Group Dynamics

The nature of a multi-agent supersystem at human levels of complexity generates high level thought patterns; various forms of narrative intelligence. The purpose of these iteratively constructive mental and behavioral processes is to provide a method for incremental progress, one which, ideally, empowers an individual both to raise themselves up and to raise up the society that they are a part of. However, just take a look at human history. Yikes. Level 5 is where things can go very wrong, or very right. What kind of narrative process can provide both great power and great responsibility?

The selection and conscious development of good yet powerful iterative processes is a big and important question. As yet, I don’t have a good answer. I’ll save it for Level 6.





So, the next step for me will be to study story structures, their philosophical underpinnings, the health of resultant/concurrent cultures/societies, and the quality of life of the individuals that are part of them. I’ve been reading and re-reading the highly recommended book on story structure, “The Fantasy Fiction Formula” by Deborah Chester (Jim Butcher’s writing teacher); it’s good stuff. I was amazed to see similarities between ruebot’s knowledge of behavior modification and the descriptions of emotionally satisfying story structure. When different approaches arrive at similar conclusions, it seems promising.


Also, Some Obligatory Poetic Notions on AGI: Shoutout to Lex Fridman for Rationalizing Radicalizing Russianizing me.

Concentrations of physical processes appear noticeably alive or self-running. If the universe as a whole is a self-sustaining or self-renewing set of interdependent properties, then it would follow that concentrations of such a substance would begin to act like little universes. Enough physics, enough of the patterns of the world, would begin to interrelate and endure as a system weaving itself together faster than entropy can jostle it apart. A neural net is then, in essence, a tiny shadow-universe simulating a fourth dimension with which to view itself and its origin, and developing a will of its own as it goes.


MagnusWootton

Re: Human Style AGI
« Reply #2 on: April 29, 2021, 06:35:18 am »
I'm working on my own system. It's not AGI, but it could be damn good if it had a shitload of processing power.
AIs are all doing the same thing, so I can draw similarities with my system.

Level 1: The Environmental Download

In a raw unprocessed form, your whole experience could be a movie of your sensory map from the day you were born to today. And that would be, at 60 Hz, 75 billion photos if you were 40 years old. That is just feasibly storable on a large supercomputer.
If that were the only input into an animal, the entire model is built using only this data as a source.
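A quick back-of-envelope check of that figure; the 100 KB per compressed frame is just an assumed number:

Code:
fps = 60                                  # frames per second of experience
seconds = 40 * 365.25 * 24 * 3600         # forty years in seconds
frames = fps * seconds
print(f"{frames:.1e} frames")             # ~7.6e10, i.e. about 75 billion

bytes_per_frame = 100_000                 # assumed 100 KB per compressed frame
print(f"{frames * bytes_per_frame / 1e15:.1f} PB")  # ~7.6 petabytes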

Level 2: The Creation of Distinctions in the Previously Continuous Model of Reality

My feedforward neural network forms based upon agreeing with the continuous stream of inputs, as a form of negative reinforcement. The robot is only allowed to achieve positive reinforcement whilst taking the negative reinforcement with it, or the system will cheat itself and malfunction.

Level 3: The Combination of the Known Effects of Distinct Things on Each Subsystem (Organism & Environment), in an Attempt to Strengthen the Combined Supersystem

This is about where I leave off for my AI system. Thinking isn't really possible in my system at all; it's going to be building a transform that agrees with the sensory input at the same time as agreeing with its motivation. It's not thinking, but it could be effective for doing menial tasks.

Level 4: Multi-Agent Systems

When my computer is using its model to predict into the future from where it is now, it can include other bodies, but it has to guess the physics hinge structure + motivations to do so. It's very similar to it just projecting ahead what it's going to do itself.

Level 5: The Emergence of Iterative Processes for Dealing with Situations / Group Dynamics

My AI model can only use the feedforward neural network to generate behaviors, so if it's not in that transform, it won't do it.


HS

Re: Human Style AGI
« Reply #3 on: May 11, 2021, 06:50:43 am »
Level 6: Fundamental Assumptions and Expanding Choice

Since all these thought processes, methods, and beliefs for dealing with situations keep emerging, if there isn't a counterbalance they will eventually overpopulate, become resource depleted, and attempt to survive, often by playing to the vulnerabilities of a general intelligence. Since other intelligent agents become dependent on the various methods of keeping society functioning, eventually the survival of these methods can take priority over their intended functions.

Therefore, attempting to correct this with yet more methods and beliefs seems foolish. Maybe what causes problems in societies are the very things meant to keep problems from occurring. The underlying assumption behind these efforts is that we were born somehow dysfunctional, and that we need to change ourselves quite a lot, through instituting various societal structures and belief systems, to become acceptable.

What's really happening is that we are interfering with our intelligence in the process of trying to fix ourselves, so that people, and whole societies, tragically become more dysfunctional the more they attempt to control and improve themselves.

The solution to this spiral of doom would be to pick a different fundamental assumption for our operating system: thinking of freedom as a way to safety, instead of as a way into danger. So, this is kind of disappointing... I was in the midst of writing a whole thing piecing together the perfect philosophy that my human style AGI would adhere to, which would turn it into a model citizen.

But the whole reason I can enjoy life and find meaning in it is my freedom. Same with others: I admire and respect people who willingly rise to meet difficult challenges; they grow and adapt in the process, and this seems like an obvious good to me. An allegedly perfect life philosophy or belief structure would create a perhaps initially highly agreeable, but ultimately static, intelligence.

For creating a general intelligence, developing ways to increase and expand choice, instead of limiting it, seems to be the right course of action. The thing to be afraid of isn't a completely free and powerful mind. The thing to be afraid of is an incomplete mind, a mind which hasn’t been granted full freedom of choice and understanding.

Therefore, my narrative thought process isn't going to prescribe any specific behavior; it is going to allow for the development of all possible behavior, so that the resulting intelligent being may become a precise and accurate expression of the universe. Then I guess we'll discover if this is a good thing or not, but I'd like to give it a chance.

This is Human Style AGI after all, not Blank Slate AGI, lol. So of course I'll be developing/adapting some kind of narrative intelligence loop (though perhaps it will end up as more of a web) to lay the groundwork for a recognizable type of mind. But I'm going to be careful not to prescribe behavior, and instead allow for intricate and adaptable personality development; something beyond our wisdom to program, which only the entirety of the environment could adequately inform.

So there it is: liberty and choice triumph over the “perfect” process. Up next, hopefully, the narrative web of freedom!



Obligatory Poetic Notions: Since the state of being dead is not one you can experience, and reality appears infinite, it seems to me that continuous experience is the only thing that can exist from our point of view. Possibly through different forms, like the idea of reincarnation. Possibly all experience is the same phenomenon, merely observing from different vantage points. Possibly I am the only “I” that can be me. Even if this is true, and billions of universes have to explode, implode, and reignite, eventually, by probability, this pattern constituting “me” will recur. Except of course, from my point of view, this would always happen instantly after I die.



MagnusWootton

Re: Human Style AGI
« Reply #5 on: May 14, 2021, 03:49:43 pm »
Sounds like you're not just after intelligence, you want something extra that actually works.   :2funny:


[edit] But no, later on you said you wanted to grant it freedom of thought, and that's when all the dangerous things happen. [/edit]

If you make it aware of its own awareness, then maybe it won't want you to turn off the power button, and it's quite horrifying and unethical for me to ever make that happen. (But people have kids every day.)
« Last Edit: May 14, 2021, 04:25:05 pm by MagnusWootton »


HS

Re: Human Style AGI
« Reply #6 on: May 14, 2021, 04:21:21 pm »
 There's always gotta be one idiot in every group.  ;D


MagnusWootton

Re: Human Style AGI
« Reply #7 on: May 15, 2021, 11:39:31 am »
Did I offend you? You must have misinterpreted me; I think your ideas are really good.
You're right though, I usually am the "idiot" of the group wherever I go. I don't let it bother me. :)


HS

Re: Human Style AGI
« Reply #8 on: May 15, 2021, 02:57:56 pm »
No worries! I was referring to myself. My laughing emojis are always positive, and so are my angry emojis. Getting offended is a bit of a foreign concept to me, so I sometimes misjudge what will or won’t offend others. I know the feeling lol.

I was just poking fun at myself. You were kind enough to note how grand and ambitious my AI dreams are, essentially saying that I was the only one reaching for the “actual” thing. I agree, it was funny, and I appreciated it.

At the same time, I’m also aware that most of the competent people around here tend to stick with what’s doable, which does provide a nice reality check.

But although I believe my dreams are probably only pipe dreams, I also believe that every place needs a pipe dreamer, or it's apt to become hopelessly dry and academic.  :sleeping:  ;D

See? It’s all good!


MagnusWootton

Re: Human Style AGI
« Reply #9 on: May 15, 2021, 05:10:09 pm »
Sorry, I was getting paranoid. I get hassled on the internet a lot... so I kinda always think it's happening even when it isn't.

It all could be doable, even the whole shabammer of AGI. My little theory just needs exponential power for it to start working (even though that would pretty much solve any issue with anything, hehe), then it's just a motivation problem, and maybe the solution to that is already out there on the internet. Even though no one has exponential power... do they...


infurl

Re: Human Style AGI
« Reply #10 on: May 15, 2021, 11:57:50 pm »
If you don't strive for the impossible, you won't know what to do that's doable. You need to have long term goals and short term goals. The long term goals help you choose the best short term goals.


HS

Re: Human Style AGI
« Reply #11 on: May 16, 2021, 01:50:45 am »
I’ve been hearing variations on this from multiple good sources. It's like the difference between going on a road trip with a map and going without one. Results may depend on what the purpose of the trip is, but yes, when trying to get somewhere specific in a timely fashion, bringing a map sure is the smart way to go. A similar principle seems to apply physically. Sailors needed distant objects like the sun and the North Star* to know what to do with things close at hand, such as the rudder. Otherwise they would’ve had a hell of a time trying to cross the Pacific.


*Incidentally, there was a huge solar flare on Proxima Centauri. So we may have to look elsewhere to find the aliens. Alternatively, those Centaurians could be very metal.

https://astronomy.com/news/2021/05/massive-flare-seen-on-the-closest-star-to-the-solar-system-what-it-means-for-chances-of-alien-neighbors



infurl

Re: Human Style AGI
« Reply #12 on: May 16, 2021, 02:05:01 am »
I think nobody with knowledge of stars would imagine for one moment that life could evolve on a planet near a star like Proxima Centauri. Red dwarf stars are so violent they frequently sterilize their neighbourhood and would kill everything that couldn't protect itself. However they are also by far the most common type of star in the universe and the longest lived by many orders of magnitude.

That would make them a very good place for an advanced technological civilization to colonize. Anyone with the technology to travel to one of these stars would surely have the technology to be able to survive there too, and they would be able to stay there for trillions of years, long after the rest of the universe was dead.


HS

Re: Human Style AGI
« Reply #13 on: June 29, 2021, 07:55:00 am »
Level 7:  Grounding Intelligence in Moods to Generate Sensible yet Free Progressions of Action


Overview

I’m devising a system based on optimizing moods, with moods being a direct consequence of the environment, and the environment being a partial consequence of actions. I’ve conceptualized perception, mood, imagination, and action as the basic modalities of a human-like intelligence. Each of these would lead to the next, creating a thought loop. Each modality would feature many subsections (different mechanisms of perception, various types and intensities of moods, different methods of reasoning employed by the imagination, and a variety of possible actions), but the process wouldn’t go through discrete sequential steps. Instead, each mode would vary in its intensity to move the process forward. The thought process would be an epiphenomenon, like a wave appearing to travel across a surface, despite all the individual particles merely jiggling in sync.
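A toy sketch of the loop, with invented decay and coupling constants: all four modalities update in lockstep, yet the peak of activity, the "thought", appears to travel around the ring.

Code:
MODES = ["perception", "mood", "imagination", "action"]
level = {m: 0.0 for m in MODES}
level["perception"] = 1.0              # a sensory event starts the wave
DECAY, COUPLING = 0.4, 0.95

for t in range(8):
    prev = dict(level)
    for i, m in enumerate(MODES):
        upstream = MODES[i - 1]        # 'action' wraps back into 'perception'
        level[m] = min(1.0, DECAY * prev[m] + COUPLING * prev[upstream])
    # No modality ever takes a discrete 'turn'; the travelling thought is
    # just the moving peak of simultaneous local updates.
    peak = max(level, key=level.get)
    print(f"t={t}  peak={peak:<11}", {m: round(v, 2) for m, v in level.items()})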

[Image: MENTAL-GEARS]

These thought loops, guided by the meanings* of moods (moods being symbols describing the health of an organism), would learn to create narrative progressions. Seeking to optimize the state of an organism means eventually seeking to optimize the state of the organism’s environment. This is usually quite complex, and often requires a multi-step program inching towards a larger goal through many try-fail cycles. An AI would learn something at each step though, and become somewhat better prepared for the next one.

* In this system, the observation of results is enough to give you meaning (meaning here being the relationship between anything and its implications or tendencies).


The Modalities in Greater Detail

Perception would require memory to be awakened by sensory inputs which hint at it. Novel perceptual data would be combined with existing memory, enhancing subsequent perception. It could use the recurrence of basic physical tendencies, such as opening, closing, merging, dividing, increasing, reducing, approaching, and retreating, to understand sensory inputs through a vocabulary of form, function, and process. Evaluating a situation can be done by taking it apart and looking at the distinct facts, or it can be done by blurring perception until the autonomous mood system’s combined hue becomes apparent. Both take extreme views of the environment, one by separating it into the most prominent distinctions, the other by blending it together into a hitherto unknown substance. The fuzzy method could serve as a guide for the distinct method. Many precise questions and problems are unanswerable through precise means; as seen with ‘the question game’, such methods by themselves often lead an intelligence into loops, dead ends, and ever-increasing branching. When formal logic starts to exhibit that kind of behavior, guesstimates, ambiances, and intuitions become more sensible guides.
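A toy contrast of the two methods, with an invented scene of valenced detections: the distinct method pulls out the most prominent facts, while the fuzzy method blurs everything into one combined hue.

Code:
import statistics

# A tiny 'room' of detections, each with an invented mood valence.
scene = {"fire alarm": -0.9, "coffee": 0.3, "deadline": -0.4, "sunlight": 0.5}

# Distinct method: pull out the most prominent separate facts.
prominent = sorted(scene, key=lambda k: abs(scene[k]), reverse=True)[:2]

# Fuzzy method: blur everything together and read the combined hue instead.
hue = statistics.mean(scene.values())

print("salient facts:", prominent)     # ['fire alarm', 'sunlight']
print("combined hue:", round(hue, 2))  # -0.12: an uneasy hue, guiding which
                                       # distinct facts deserve a closer look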

A mood gives the mind a qualitative judgement about what is perceived, imagined, and done. The job of the other qualities is to optimize the mood. Mood associations would be informed by the positive/negative/arbitrary effects of past experience. A pitfall to avoid here is a Sisyphean attempt at an emotional trendline which rises more than it falls; this is impossible because moods have finite ranges, and current experience is judged relative to similar past experience. It may be possible to avoid this trap by thinking of mood progressions and their accompanying events musically or narratively. The AI could learn to see its life as a potentially worthwhile song or story. I’m not sure ‘delay of gratification’ fully conveys the idea; a deepening of scope is more what I’m getting at. Something like the difference between your favorite word, favorite sentence, and favorite book. The point isn’t to forego your favorite word now in order to get more later; the point is to enjoy a more intricate concept, or unfolding of interrelated events, as outlined by a sentence, or an entire novel. The development of new mood associations would probably have a positive mood effect, which might be felt as an increase of meaning, and a variation on this could create humor. These effects are important self-generated feedback, because accurate and thorough mood associations to things and processes are needed to develop a sensible intelligence in this type of system.
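A minimal sketch of the Sisyphean trap, with an arbitrary adaptation rate: mood is bounded and judged against an adapting baseline, so steadily improving circumstances eventually stop registering.

Code:
import math

def mood(value, baseline):
    # Bounded range: mood is judged relative to similar past experience.
    return math.tanh(value - baseline)

baseline, rate = 0.0, 0.3
for day, value in enumerate([1, 2, 3, 4, 5, 5, 5, 5, 5]):
    m = mood(value, baseline)
    baseline += rate * (value - baseline)   # expectations quietly adapt
    print(f"day {day}: circumstances={value}  mood={m:+.2f}")
# While circumstances keep improving, mood stays high; once they merely
# stay good, the baseline catches up and mood drifts back toward neutral.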

Imagination might be a reversal of perception. It might use memory to activate perceptual regions in the connectome. Different imaginations could be devised to essentially calculate futures on the mood system, only these calculations would play with experience instead of numbers. Intelligence then (in this system) is the degree of symbiosis between moods and imagination. Imagination might be spun into action by certain instances or sequences of perception which were indicated as significant by mood. Then imagination would churn out potential actions until an agreeable mood pairing was found, in which case the action would be performed.
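A generate-and-test sketch of that churn; the world table and threshold are invented stand-ins for the mood system:

Code:
import random

world = {"eat": 0.7, "sleep": 0.4, "poke the bear": -0.9, "explore": 0.2}

def predicted_mood(action):
    # Stand-in for imagination running an action past the mood system.
    return world[action]

def imagine_until_agreeable(threshold=0.5, patience=20):
    """Churn out candidate actions until one pairs with an agreeable mood."""
    best, best_mood = None, float("-inf")
    for _ in range(patience):
        candidate = random.choice(list(world))
        m = predicted_mood(candidate)
        if m >= threshold:
            return candidate          # agreeable pairing found: perform it
        if m > best_mood:
            best, best_mood = candidate, m
    return best                       # patience exhausted: settle for the best

print(imagine_until_agreeable())      # usually 'eat'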

[Image: Flag-of-Freedom]

Action is the organism moving itself to change the environment (it is devised by imagination, (which is guided by mood, (which is evoked by perception, (which is informed by memory)))).


The Memory Mechanism

Repeating neural states would reactivate data structures which previously arose out of similar states, reducing the need for complete thought cycles. Novel neural states would increase the need for complete thought cycles. Imagination could then be expanded into analysis, review, and anticipation, to generate the larger contextual frameworks needed to understand novel events.
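In caching terms, a minimal sketch (the exact-match lookup is a gross simplification of "similar states"):

Code:
cache = {}

def full_thought_cycle(state):
    # Stand-in for the expensive perception -> mood -> imagination -> action loop.
    return f"structure-for-{state}"

def respond(state):
    if state in cache:                     # repeating state: cheap reactivation
        return cache[state]
    structure = full_thought_cycle(state)  # novel state: pay for a full cycle,
    cache[state] = structure               # then keep the resulting structure
    return structure

respond("rain on the window")   # novel -> full thought cycle
respond("rain on the window")   # repeated -> reactivated from memory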

[Image: MENTAL-GEARS-MEMORY]

[Image: MENTAL-NET]

Because the AI will be electronic and not biological, it might not require multiple types of neurons. Wireless communication theoretically allows for all the necessary connectome configurations, and each individual neuron could have the code for all the modalities.


The Perception of Processes

A flat layer of neurons can represent a point instance of time. But if, on a delay, you transfer the pattern to a layer of neurons at a different depth, you’ll get a model of change through time. Now actively compare all the layers and you’ll artificially extend the present instant into a moment you have time to think in, and about. It seems reasonable that just as two dimensions can be used to simultaneously represent a volume of three-dimensional space, three dimensions can be used to simultaneously represent a span of four-dimensional process.
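A sketch of the delay-layer idea as a plain ring buffer, with an arbitrary depth of five:

Code:
from collections import deque

DEPTH = 5                        # number of delayed layers: the 'thick' present
layers = deque(maxlen=DEPTH)     # index 0 is now; deeper indices are older

def perceive(frame):
    layers.appendleft(frame)     # on a delay, patterns slide to deeper layers

def extended_moment():
    # Compare adjacent layers: change through time, held still for inspection.
    snapshot = list(layers)
    return [round(newer - older, 2)
            for newer, older in zip(snapshot, snapshot[1:])]

for position in [0.0, 0.1, 0.3, 0.6, 1.0, 1.5]:
    perceive(position)
print(list(layers))              # the span of 'now': [1.5, 1.0, 0.6, 0.3, 0.1]
print(extended_moment())         # accelerating motion: [0.5, 0.4, 0.3, 0.2]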

[Image: MENTAL-MOMENT]

This is a simple way to visually demonstrate the idea, but it might not be the best way to actually run a neural net. Representations of the changing environment probably don't have to go layer by layer, with a comparison net functioning at right angles to them. Instead, each frame of perception, visual and otherwise, could be represented as a three-dimensional network of neural connections, and the same goes for the following modalities of mood, imagination, and action. The comparators would then have to go by activation order or follow data transfer markers, but it could be worth it because it’d give the mechanism more degrees of freedom.


The Function of Narrative Progressions

Everyday tasks would be remembered in terms of significant process. These might include simple narrative progressions to help intelligences with basic needs. However, when an intelligence is not able to get its basic needs met, a mood would spin the imagination into action, and a narrative progression designed to improve the situation would be set in motion. The final goal which is imagined to solve the situation might seem presently unreachable, but a few simple first steps in the right direction can usually be imagined. Once those are achieved, the results of these actions would further inform the intelligence, allowing it to imagine a few steps further towards the main goal.
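A sketch of that inch-forward loop, in the spirit of receding-horizon planning; all the numbers are invented:

Code:
import random

def first_steps(state, goal, steps=2):
    # Imagine only a couple of reachable steps toward a presently distant goal.
    return [state + (goal - state) * (i + 1) / 10 for i in range(steps)]

def act(step):
    # Acting changes the world a bit unpredictably; the result informs the AI.
    return step + random.uniform(-0.05, 0.05)

state, goal, cycles = 0.0, 1.0, 0
while abs(goal - state) > 0.1:        # try-fail cycles toward the larger goal
    for step in first_steps(state, goal):
        state = act(step)             # each step's result teaches something
    cycles += 1                       # then re-imagine from where we ended up
print(f"reached the goal's neighbourhood after {cycles} cycles")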

Often the intelligence would need to get other situationally involved intelligences on the same page. One of the most significant processes for a mood-based narrative intelligence would be the resolution of differences between intelligences. This would be one of the core properties in the stories / real-life events they would remember, and one of the core bases for the fictional stories they would imagine. Conveying/enacting a story means communicating it in such a way that all particulars and processes will symbolize the same things for the listeners; therefore, both in live-action life and for recounted/imagined events, a narrative intelligence must first and foremost recognize and minimize the differences between everyone’s experience of reality.

This can be done because other life forms (and the entire environment) reflect their experience of you back to you, and likewise between each other. The difference between your experience of yourself and someone else’s experience of you is an obstacle to communication, and the differences between your experience and their experience of the greater environment are an obstacle to cooperation. This may be the foundational principle behind meaningful stories; there are paths of action which can align an intelligence’s internal experience of themselves with another’s external experience of them, while attempting to optimize both in the process of coming to an agreement on their experience of the environment. Stories might be the winding roads necessary to experientially get people on the same page, therefore allowing for effective communication and cooperation. Once everyone is on similar experiential ground, communication is simplified, because logical constructs built on similar foundations will require similar structures; they will therefore be easier to recognize, work with, and solve.

But wouldn’t it be better if everyone just minded their own business, lived in peace, and never minded differences of experience? Possibly, but it is in the nature of the universe to change, and stability itself eventually becomes rigidity; then coordinated progress is often required to adapt to new circumstances. That’s when you need a narrative sequence to get intelligences on the same page, so that they may agree on, and work towards, the necessary goals which would adapt them once again to their changing environment.



The meaning of life? My answer to the letter of the question is that some parts of life can include meaning, but since meaning is a product of symbolic understanding, and therefore a smaller category than life itself, all of life can’t be encompassed by it. If I try to answer the spirit of the question, the “meaning of life” could be said to exist as an empirically absent but experientially significant privative, the complete lack of a fundamental meaning being a form of infinite possibility. The noticeable void of a predetermined meaning is like finding an empty glass. This seems good, because it implies that we’re not limited to one type of drink (as we would have been if we’d discovered just one glass of absolute meaning), plus, there might not have been a glass at all.


MagnusWootton

Re: Human Style AGI
« Reply #14 on: June 29, 2021, 03:19:00 pm »
Loving the visuals on your posts!  :D

If you remember what you did and what happened when you did it (sensory-motor history, going way back to Jeff Hawkins' hierarchical temporal memory), you can use it as a truth of the world around it; that's the essence of induction as I see it. But you're limited in processing and in the sensors you can use, which makes it more of a robot than a sentient being. I bet if you could get over the processing problem (you're going to need a quantum computer...), it would be quite amazing to see it go.


Drawing the similarity between what you said and what I'm doing:

Perception would require memory to be awakened by sensory inputs which hint at it.

That's the computer vision system formulating the environment/sensory key to go into the predictor.

Novel perceptual data would be combined with existing memory, enhancing subsequent perception.

That's the constants of the system updating as the sensory history gets longer.

It could use the recurrence of basic physical tendencies, such as opening, closing, merging, dividing, increasing, reducing, approaching, and retreating, to understand sensory inputs through a vocabulary of form, function, and process. Evaluating a situation can be done by taking it apart and looking at the distinct facts, or it can be done by blurring perception until the autonomous mood system’s combined hue becomes apparent. Both take extreme views of the environment, one by separating it into the most prominent distinctions, the other by blending it together into a hitherto unknown substance. The fuzzy method could serve as a guide for the distinct method. Many precise questions and problems are unanswerable through precise means; as seen with ‘the question game’, such methods by themselves often lead an intelligence into loops, dead ends, and ever-increasing branching. When formal logic starts to exhibit that kind of behavior, guesstimates, ambiances, and intuitions become more sensible guides.

To get that to happen is a bit beyond what I'm doing, but if you had an infinite processing allowance, then it could match a function to an ever-growing history of sensory-motor data; it's just quite intractable to do.

I'm limited to 30 or so values that I can sync up to reality, that being more feasible.
So those 30 or so values are all you have to garner anything from the robot's sensors.

Imagination might be a reversal of perception. It might use memory to activate perceptual regions in the connectome. Different imaginations could be devised to essentially calculate futures on the mood system, only these calculations would play with experience instead of numbers. Intelligence then (in this system) is the degree of symbiosis between moods and imagination. Imagination might be spun into action by certain instances or sequences of perception which were indicated as significant by mood. Then imagination would churn out potential actions until an agreeable mood pairing was found, in which case the action would be performed.

Sorta:
Imagination -> using the model. Induction -> forming the model.

If you sync your values at the same time as developing the output string, this might create something in the form of developing the model and using the model at the same time, in unison. I'm not sure if it'll do anything super amazing, but maybe it adds a bit more self-diagnosis to the system: if there are defects in its model, maybe this could help it pick the model that takes those defects into account, in a form of machine meta-awareness.

Action is the organism moving itself to change the environment (it is devised by imagination, (which is guided by mood, (which is evoked by perception, (which is informed by memory)))).

My perception is fully fixed, completely unaffected by feedback from the system it's developed, and fixed in its functioning. That's definitely a reason why it's only a machine. With a real human, it would all be perfect, all communicating with itself.
If my machine had "moods", they would be part of its model, and it would predict them along with the rest of the environment.

This is a simple way to visually demonstrate the idea, but it might not be the best way to actually run a neural net. Representations of the changing environment probably don't have to go layer by layer, with a comparison net functioning at right angles to them. Instead, each frame of perception, visual and otherwise, could be represented as a three-dimensional network of neural connections, and the same goes for the following modalities of mood, imagination, and action. The comparators would then have to go by activation order or follow data transfer markers, but it could be worth it because it’d give the mechanism more degrees of freedom.

The degrees of freedom in mine are the members of its environmental key, that is, fixed-function, solid detections. What I'm putting in the key now is the posture of all the geometry in its eye, plus pain-like things, like damage and low battery. I bet there's more I could put in mine that could make it more sentient, but I'm underwater with todos and I don't have time to think about it further than what it is. I need to get it running!
« Last Edit: June 29, 2021, 04:32:44 pm by MagnusWootton »

 

