AI Dreams Forum

Artificial Intelligence => General AI Discussion => Topic started by: HS on January 09, 2019, 06:10:05 pm

Title: Conceptualizing General Intelligence
Post by: HS on January 09, 2019, 06:10:05 pm
General intelligence is a model of reality on a perpetual fall into the future. A self-predicting system which operates within its own blurry sphere of influence, with the "self" being perceived as the extent of that sphere of influence.

Yes? No? Got a different take on it?
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 10, 2019, 01:36:00 am
Yes, except for the 'self' part. We are machines; we do self-imitation (prediction) and like to think we are more, and surely the 'sensor' likes it when the robot says it is happy. But there is no self, unless you mean thinking in text/vision knowledge. Knowledge can describe any concept. Text is general.

General intelligence is the ability to do what humans can do across a wide spectrum of domains, not narrow single tasks like playing Snake. General intelligence, like humans, must change Earth, and that requires the GI to know about the Earth, have goals, and generate good plans fast without trying every possible combination of words. Then it carries them out by acting (informing us, or moving its own body).

To change Earth, you've got to have knowledge, you've got to have that desire/flame. Then the idea brightens in the mind of man, and you see it come to fruition.
Title: Re: Conceptualizing General Intelligence
Post by: Korrelan on January 11, 2019, 05:07:20 pm
@HS, nice definition… my first attempt would be…

The ability to adapt/predict an internal/external pattern, based on the recognition of internal/external patterns in general terms, in order to solve a problem relative to its environment and/or survival.

 :)
Title: Re: Conceptualizing General Intelligence
Post by: ivan.moony on January 11, 2019, 05:19:23 pm
Artificial intelligence would have nothing to do without living beings. Without them it could only predict some future states of the Universe or its parts, but what would it really do? Only with living beings come problems to solve: reaching some set of states that is more desirable than some other setup of the Universe.
Title: Re: Conceptualizing General Intelligence
Post by: HS on January 11, 2019, 06:21:14 pm
Quote
@HS, nice definition… my first attempt would be…

The ability to adapt/predict an internal/external pattern, based on the recognition of internal/external patterns in general terms, in order to solve a problem relative to its environment and/or survival.

 :)

That's a very Korr definition :) Very defined/precise.

Quote
Artificial intelligence would have nothing to do without living beings. Without them it could only predict some future states of the Universe or its parts, but what would it really do? Only with living beings come problems to solve: reaching some set of states that is more desirable than some other setup of the Universe.

Who would have guessed that problems make life worth living? Nice thing to realize.

Edit: Actually, I noticed this back when I used to get bored out of my mind on summer vacations during my school years.
Title: Re: Conceptualizing General Intelligence
Post by: Korrelan on January 11, 2019, 09:01:08 pm
Quote
Artificial intelligence would have nothing to do without living beings.

But what if the AGI itself is self-aware, conscious and... alive?

 :)
Title: Re: Conceptualizing General Intelligence
Post by: ivan.moony on January 11, 2019, 09:24:59 pm
Quote
Artificial intelligence would have nothing to do without living beings.

But what if the AGI itself is self-aware, conscious and... alive?

 :)

Self-aware, conscious and... alive? You mean it would feel real emotions, not virtual byte representations? If we poked it, it wouldn't just say "ouch", it would also feel the real pain, like we do?

Then it would make sense for such an AGI to be occupied with survival.

To make such an AGI, we'd need some answers about the phenomenon of life first. But how do we get those answers without experimenting on innocent beings? I wouldn't dare conduct such experiments without some explicit answers from the very God himself. And you know how hard he tries to hide himself, for whatever reason he has.

Maybe we are not mature enough as a species. Maybe we can hurt him by being irresponsible. And maybe he is just not up for having real children with us.

But maybe we can earn some faith by creating some decent simulations. If the concept proves worthy of being inhabited by real life, who knows, maybe he'll answer our prayers, and we'll finally hear a word or two from him.

A long way to go, anyway. I doubt we will witness it in our lifetimes. And what if it finally takes a death to create a life? Would we be up to it?
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 10:07:51 pm
Survival = seeking immortality, like Locksuit. Locksuit not dumb, see! :D ...


So, when I eat food, that is a problem I enjoy solving? Cool! Problems are good! ... But I don't want burglar problems!


"The ability to adapt/ predict an internal/ external pattern based on the recognition of internal/ external patterns in general terms, in order to solve a problem relative to its environment and/ or survival."

Yes. AGI will be installed with goals/desires, like its master, to stop death & pain first. It will be taught knowledge, through a medium like text, like a child, learning all about the world from DNA to code. It will have an internet connection 24/7, it will predict sequences of action, and it will talk to us...
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 10:16:06 pm
AGI has to figure out the future.

It has to do Sequence Prediction.

And tell us the answers.
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 10:20:01 pm
AGI is going to be a little researcher nut making discoveries. Putting 1 and 2 together and seeing the connections.
Title: Re: Conceptualizing General Intelligence
Post by: ivan.moony on January 11, 2019, 10:22:28 pm
Quote
AGI is going to be a little researcher nut making discoveries. Putting 1 and 2 together and seeing the connections.

That sounds like you, Lock  :D
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 10:23:11 pm
I agree. I too notice a little paradox here, a loop... my job is going to be taken!
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 10:27:05 pm
Sequence prediction requires a Generator & a Validator.

To validate things it is told, or things it generates, it has to reflect the input against what it already knows to be true. Old knowledge is used as the Validator. Convincing.

Old knowledge is also used as the Generator.
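
A minimal sketch of that generate-then-validate loop, assuming a toy bigram model stands in for "old knowledge" (all data and names here are made up for illustration):

from collections import defaultdict

# "old knowledge": a handful of remembered word sequences (invented data)
known_sequences = [
    "the cat sat on the mat".split(),
    "the cat ran up the tree".split(),
    "the dog sat on the rug".split(),
]

vocabulary = {w for seq in known_sequences for w in seq}

# bigram memory: which words have been seen following which
follows = defaultdict(set)
for seq in known_sequences:
    for a, b in zip(seq, seq[1:]):
        follows[a].add(b)

def generate(context):
    # Generator: naively propose every known word as a candidate next word
    return sorted(vocabulary)

def validate(context, candidate):
    # Validator: accept only what old knowledge has seen follow the context
    return candidate in follows[context]

accepted = [w for w in generate("cat") if validate("cat", w)]
print(accepted)  # ['ran', 'sat']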
Title: Re: Conceptualizing General Intelligence
Post by: ivan.moony on January 11, 2019, 10:33:55 pm
Quote
Sequence prediction requires a Generator & a Validator.

To validate things it is told, or things it generates, it has to reflect the input against what it already knows to be true. Old knowledge is used as the Validator. Convincing.

Old knowledge is also used as the Generator.

Funny you should say that right now. I'm currently researching the human-generation / computer-validation issue for a formal language for representing general knowledge. For generating, the knowledge is required to be non-contradictory (translated: true in at least one interpretation). For validating, the negation of the knowledge is required to be contradictory (translated: the knowledge is true in every interpretation).
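
For plain propositional logic those two checks can be written down directly with a brute-force truth table (a toy sketch, not the formal language I'm actually working on):

from itertools import product

def satisfiable(formula, n_vars):
    # generation check: true in at least one interpretation
    return any(formula(*vals) for vals in product([False, True], repeat=n_vars))

def valid(formula, n_vars):
    # validation check: true in every interpretation,
    # i.e. the formula's negation is contradictory
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

print(valid(lambda p: (not p) or p, 1))         # p -> p: True
print(satisfiable(lambda p: p and (not p), 1))  # p & ~p: False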
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:06:37 pm
You may think: hmm, but what if AGI is just reinforcement learning using motors? Put a baby on the ground in a simulated universe, give it rewards, and soon it will learn how to crawl, turn away from walls, run, put food in its mouth, and build rockets and houses for further reward, e.g. protection against (stopping) destructive storms. Sounds simple. It learns the actions by rewards! Especially if Lock has some sort of desire to make it work...

and then you get Lock's adventure from a year ago, uploaded to YouTube in HD on Dec 4, 2017!

https://www.youtube.com/watch?v=gFAhM0BYdJI

https://www.youtube.com/watch?v=fdMr8mAAfHM

So why did I stop? Because with some further knowledge I picked up, partially thanks to a friend of mine, I realized a lot of things in a fast cascade. The idea of reward = builds rockets is 'sound', but actually doing it requires the baby's mind to know all about the world, and to know the relations between things for general discovery. It has to have goals like 'rewards', change its goals, and have mini-goals; it has to know to look for metal or rock, mine it, use tools to make sheets, engines, etc., make a control system, and so on, to build a rocket. At every step of the way it would use motor actions or think 'text sequences' to do the things it discovers. If there's no discovered plan and it's hoping random movement will do the trick, then there's a huge search space of combinations. So it has to have goal rewarders, make plans, and reuse sequences it already knows, tuning them to the task, like catching a ball at an unseen new angle. Only this way can it come up with future sequences / action plans that are the likely answer to carry out, narrowing down its search space of possible answers instead of randomly jiggling its body or crawling in the hope of building a home to stop wolves attacking.
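
For contrast, here is roughly what the pure reward-driven part looks like: a minimal tabular Q-learning loop on a made-up 5-cell world (nothing to do with the videos above). It works precisely because the state space is tiny, which is the scaling problem just described:

import random

n_states, actions = 5, [-1, +1]   # tiny 1-D world: move left / move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:   # explore sometimes...
            a = random.choice(actions)
        else:                           # ...else act greedily (random tie-break)
            a = max(actions, key=lambda act: (Q[(s, act)], random.random()))
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the far right
        # standard Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in actions) - Q[(s, a)])
        s = s2

# learned greedy policy: always move right toward the reward
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])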
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:18:44 pm
Put another way: my old plan may have learned cues and action programs, but it has to think deeply about what the goal is, and about related knowledge, e.g. how a typewriter functions and looks and all its parts (e.g. the metal spokes are flat on top), and that metal is hard, not soft.

It has to come up with the plan of actions, yes, but then it has to use what it has learned and leverage it, so it can analogize it to different situations/reflections and come up with a plan fast, instead of making one from complete scratch with a huge search space.
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:25:24 pm
You may wonder, though: OK, so this baby is a bad idea, but if there's no baby/human, then who carries the plans out for real? So something has to learn motor actions, sequences, by reinforcement learning, and link them to visual or text knowledge like 'throw arm' or 'run a bit to the left' or 'put the block on top and shoot it 3 times'. And so then we have a bunch of little motor pieces that can be built into hierarchical long sequences, just like building blocks (see the sketch below).

After AGI acquires bodies, it will then also do external discovery, using feedback. Tests.
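
A minimal sketch of those hierarchical motor building blocks (every name here is invented): low-level primitives are callable actions, and higher-level skills are just sequences of lower-level names, expanded recursively:

primitives = {
    "step_left":  lambda: print("motor: step left"),
    "step_right": lambda: print("motor: step right"),
    "grab":       lambda: print("motor: close hand"),
    "release":    lambda: print("motor: open hand"),
}

skills = {
    "walk_left":  ["step_left", "step_left", "step_left"],
    "pick_up":    ["grab"],
    "move_block": ["walk_left", "pick_up", "step_right", "release"],
}

def execute(name):
    # expand a skill depth-first into primitives and run them
    if name in primitives:
        primitives[name]()
    else:
        for sub in skills[name]:
            execute(sub)

execute("move_block")  # 3 left steps, grab, right step, release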
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:35:21 pm
Imitation is where we accelerate the learning of motor programs, not just taught knowledge.
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:37:28 pm
Having been taught two hierarchies, one of motor and one of text knowledge (sequential sensory data; words are like non-sequential sensory data), and having linked them, you now narrow down its search space using old knowledge, sequence prediction, convincing, you know (as explained in the last posts).
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:51:04 pm
You might be able to evolve the AGI program using allowed modifications to nodes and links. Or you could code the whole thing, if you're smart enough.

AGI uses knowledge and generalization to discover desired answers, yes, inference. Certainly some evolutionary algorithm can be added to AGI to speed it up, but the reason it doesn't have a huge search space in the first place isn't the evolutionary algorithm, so it may not even be needed!
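
The 'allowed modifications' loop is easy to sketch in miniature (toy genome and fitness function, invented for illustration; real neuroevolution systems such as NEAT mutate actual nodes and links):

import random

def fitness(genome):
    # toy target: every gene close to 1.0 (a stand-in for "works better")
    return -sum((g - 1.0) ** 2 for g in genome)

genome = [random.uniform(-1, 1) for _ in range(5)]
for generation in range(500):
    child = list(genome)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)      # one allowed modification
    if fitness(child) > fitness(genome):  # keep the better variant
        genome = child

print([round(g, 2) for g in genome])      # genes drift toward 1.0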
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 11, 2019, 11:54:13 pm
How often do you, ivan or korrelan, as a human brain, generate 100 possible answers like an evolutionary algorithm? I never generate that many ideas. I always just 'have the answer' on my tongue.

Not 100% sure though, but definitely to some extent! Never 10,000. I don't even have time to generate that many!
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 12, 2019, 12:16:48 am
So yeah, we pass down to our children the imitated knowledge and motor actions (senses are actions too; it's just that motor must be MOTOR!, to actually carry things out).

We're giving them all the learned important building blocks (knowledge), and how to perform some of them.

Then man has desires! We then work in our gifted world of teachings, and utilize them, of course.
Title: Re: Conceptualizing General Intelligence
Post by: Korrelan on January 12, 2019, 01:30:18 pm
Quote
How often do you, ivan or korrelan, as a human brain, generate 100 possible answers like an evolutionary algorithm?

This depends on how you imagine the brain to function.

In my opinion an idea is constructed from hundreds of subconscious snippets of knowledge.

Evolutionary algorithms are not the total answer, but a similar schema does play a part in our intelligence. When we are trying to figure out a problem space, we tend to hold the main facets of the problem in 'short term memory' and then mentally apply different parameters whilst considering the repercussions/outcomes.

This smacks of a genetic algorithm schema, where proven past experience (wisdom) is mentally applied to a set of parameters.

Let’s take a simple example… is it quicker to walk or run?

Your subconscious is doing 90% of the work here. You are mentally conceptualising the parameters of walk/run; you didn't need to think about the concepts individually… you just knew automatically what walk/run meant, along with their relevant parameters, speed, etc.

All your consciousness has to do is consider the question being asked and compare the relevant speed parameters… and then speak/write the answer.

So in a way I believe we do have hundreds of sub-ideas for every thought/idea we have.

 :)
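
The walk/run example, as a toy sketch (numbers invented): the 'subconscious' is just a lookup of each concept's parameters, and the 'conscious' step only compares the one relevant parameter:

concepts = {
    "walk": {"speed_kmh": 5,  "effort": "low"},
    "run":  {"speed_kmh": 12, "effort": "high"},
}

def quicker(a, b):
    # conscious step: compare a single retrieved parameter
    return a if concepts[a]["speed_kmh"] > concepts[b]["speed_kmh"] else b

print(quicker("walk", "run"))  # run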
Title: Re: Conceptualizing General Intelligence
Post by: LOCKSUIT on January 12, 2019, 02:02:57 pm
"When we are trying to figure out a problem space, we tend to hold the main facets of the problem in ‘short term memory’ and then mentally apply different parameters whilst mentally considering the repercussions/ outcomes."

What we hold in Working Memory is the question(s). We do bring related things into WM, things that come with it, and consequences/appeals vote on the decision(s) found, based on old knowledge.


"This smacks of a genetic algorithm schema, where proven past experience (wisdom) is mentally applied to a set of parameters."

We use Long Term Memory to help discover the answer(s). The helping knowledge need not be proven valuable or ranked at all.


" "is it quicker to walk or run?" "

You imitate back the answer "it is quicker to run". You just need a little support/validation comparison. Here we already have the answers; we just need to narrow down. We may know "it is faster to jog". Or we may see "is it [quicker] to walk or run?" and so "run" wins. (Literally, lol, woho)
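
That last move, answering by retrieving the best-matching known sentence, can be sketched in a few lines (sentences invented for illustration):

question = "is it quicker to walk or run"
known = ["it is quicker to run", "it is faster to jog", "walking is healthy"]

def overlap(a, b):
    # score: how many words the two sentences share
    return len(set(a.split()) & set(b.split()))

best = max(known, key=lambda s: overlap(question, s))
print(best)  # "it is quicker to run" wins the narrow-down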