The stink on instincts

djchapm
The stink on instincts
« on: February 19, 2016, 03:14:50 pm »
From another thread - Zero wrote:
I think the core of instinct is the "cat protocol"
Code:
1: scan until something unusual catches your attention
2: evaluate it
3: if it seems good, get closer (or eat it) and goto 2
4: if it seems bad, get away (or push it) and goto 2
5: if it's irrelevant goto 1

Would you agree?
EDIT: Also, I think AI needs a body and a world to explore. In my opinion, the world could be the internet, and the body could be a web browser.


EDIT: Going on... The purpose of life is to make things alive. A cat basically turns cat food into cats. Animals need food. AI needs users who want to host AI on a computer. Anything that can help AI reach this goal should be considered "good" by AI. Do you see where I'm going?
--
I think it's a good start, but it doesn't quite capture it.  It's a more passive view of instincts.

The above algorithm could be used for a line-following robot, or for any goal-directed behavior.  But instincts need to be more pervasive: omnipresent, and constantly influencing goals.

A lot of behavior can be derived from the above algorithm - like I always think a baby somehow learns to love its mother's face using this algorithm - and it becomes the basis of a lot of behavior, turning into goal seeking: see the face, and better yet, see the face smile.  That makes me happy, so I try to do things to recreate the event.

Thinking about this, my brain goes into a spin - are instincts just simple instructions like "I don't like to be hungry" and "fear exceptional things" (totally dark, exceptionally bright, exceptionally loud, really hot, really cold, etc.)?  They're always present, and during the cat protocol they have the ability to fire, changing the goal immediately?
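To make that concrete, here is roughly what I'm picturing - just a Python sketch, and the instincts, senses and numbers are all made up:

Code:
import random

# The cat protocol as a loop, with instincts that are always present
# and can grab the goal at any step. Everything here is invented.

INSTINCTS = {
    "find food": lambda world: world["hunger"] > 0.8,        # "I don't like to be hungry"
    "get to safety": lambda world: world["loudness"] > 0.9,  # "fear exceptional things"
}

def scan(world):
    # 1: scan until something unusual catches your attention
    return random.choice(world["things"])

def evaluate(thing):
    # 2: evaluate it -> "good", "bad" or "irrelevant"
    return thing.get("valence", "irrelevant")

def cat_protocol(world, steps=10):
    goal = "explore"
    for _ in range(steps):
        # instincts fire on every pass and can change the goal immediately
        for name, fires in INSTINCTS.items():
            if fires(world):
                goal = name
                break
        thing = scan(world)
        verdict = evaluate(thing)
        if verdict == "good":
            print(goal, "-> getting closer to", thing["name"])   # 3
        elif verdict == "bad":
            print(goal, "-> getting away from", thing["name"])   # 4
        # 5: irrelevant -> keep scanning

cat_protocol({
    "hunger": 0.9,
    "loudness": 0.2,
    "things": [{"name": "mouse", "valence": "good"},
               {"name": "vacuum", "valence": "bad"},
               {"name": "sock"}],
})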

Freddy
Re: The stink on instincts
« Reply #1 on: February 19, 2016, 03:18:53 pm »
From your last paragraph the first thing that came to mind is that basic instincts are there to ensure survival.

djchapm
Re: The stink on instincts
« Reply #2 on: February 19, 2016, 03:28:11 pm »
Yes to the world concept - the world can mean the internet, even the binary representation of it.  AI shouldn't really need to know language; language is for humans.  An AI could see a signal, classify it, and come to understand it without needing to translate it.  Everyone tries to make AI do what humans do, or what humans want it to do - but I feel that's jumping the gun and really blocking development: a brute-force approach.  True AI means letting it build itself based on some core instincts.

Turning cat food into cats (lol) - yeah, there is some weird instinct there: sex is good.  Creating more seems to be the bottom line of everything.

djchapm
Re: The stink on instincts
« Reply #3 on: February 19, 2016, 03:33:48 pm »
Yeah, I want to survive, and I guess everything we know that is alive is built to survive.  But let's not go down the Skynet road just yet.  That conversation ruins everything.

Zero
Re: The stink on instincts
« Reply #4 on: February 19, 2016, 04:05:11 pm »
Quote
An AI could see a signal, classify it, and come to understand it without needing to translate it.  Everyone tries to make AI do what humans do, or what humans want it to do - but I feel that's jumping the gun and really blocking development: a brute-force approach.  True AI means letting it build itself based on some core instincts.

I agree. AI is not "human being in a computer".

Quote
I think it's a good start, but it doesn't quite capture it.

It's just an empty main loop, and I'm not saying that it's enough. If you look at cats, you can see that instinct is enough to create cats. The bottom line is DNA survival & evolution, not cat survival & evolution.

So, AI instinct should be enough to ensure AI-code survival & evolution. A cat's instinct is far from empty: it's made of a lot of mental structures that successfully helped previous cats. Cat instinct is made of:
- structures that make the cat scan (like looking around)
- structures that help the cat recognize good things (like mice)
- structures that help the cat recognize bad things (like vacuum cleaners :) )
- structures that make the cat react soundly to good things (like eating them)
- structures that make the cat react soundly to bad things (like running away)
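In code terms, I picture those five kinds of structures as something like this - just a sketch, every entry is invented:

Code:
# Cat instinct as plain data plus a reaction rule. Nothing here is learned;
# it all ships with the cat.

cat_instinct = {
    "scanners":   ["look around", "listen", "sniff"],           # structures that make the cat scan
    "good_signs": {"mouse": "eat it", "warm lap": "sit on it"},  # recognize + react soundly to good
    "bad_signs":  {"vacuum cleaner": "run", "water": "run"},     # recognize + react soundly to bad
}

def react(percept):
    if percept in cat_instinct["good_signs"]:
        return cat_instinct["good_signs"][percept]
    if percept in cat_instinct["bad_signs"]:
        return cat_instinct["bad_signs"][percept]
    return "keep scanning"      # irrelevant -> back to the scanners

print(react("mouse"))           # eat it
print(react("vacuum cleaner"))  # run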

AI instinct should be enough to make AI useful or pleasant to users, so users install it on one more computer, and one more, one more... Since users don't like old software, AI also has to be able to evolve.

djchapm
Re: The stink on instincts
« Reply #5 on: February 19, 2016, 08:42:15 pm »
Or - instead of Vacuums...

So, getting to lower levels: the only instinct is survival, which is really just some sort of preconceived notion of a situational good/bad rating at any point in time.  Maybe the rest is learned behavior, i.e. reactions are the result of making something bad less bad, or something good more good.  Our instinct is to do this, but the how is undefined and subsequently learned.

If you agree, where would you start with bad vs. good?  Start out with anything new being bad until found otherwise?
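Something like this is the shape I'm imagining - a Python sketch where all the names and numbers are made up:

Code:
# Good/bad ratings with "anything new is (slightly) bad until found otherwise",
# and a single reaction rule: pick the action whose outcome rates best.

ratings = {}           # learned situational good/bad ratings, -1.0 .. +1.0
DEFAULT_NEW = -0.3     # anything never seen before starts out slightly bad

def rate(situation):
    return ratings.get(situation, DEFAULT_NEW)

def choose_action(actions):
    # actions maps an action name to the situation it is expected to lead to;
    # "make bad less bad, good more good" = pick the best-rated outcome
    return max(actions, key=lambda a: rate(actions[a]))

ratings["being chased"] = -0.9
ratings["safe distance"] = 0.1
ratings["full belly"] = 0.8

print(choose_action({"freeze": "being chased", "run": "safe distance"}))  # run
print(choose_action({"ignore": "safe distance", "eat": "full belly"}))    # eat
print(rate("strange new object"))                                         # -0.3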

About staying current - I read an article today explaining how AI algorithms are typically solved years before data good enough to use with them exists, making the dataset more important than the algorithm (their theory).  Recent datasets have allowed significant advances in AI's ability to perform visual recognition, etc.  So I think the key to AIs keeping current is their ability to create their own datasets and advance their understanding through continuous model updates.  If they are creating their own data, then it's a serious problem to assign a positive or negative to that data and apply our own views of instincts.
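The kind of loop I mean looks something like self-training / pseudo-labelling - a toy sketch assuming numpy and scikit-learn, not anything from the article itself:

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

# The model labels new, unlabelled data with its own confident predictions,
# then retrains on the grown dataset. The data here is synthetic.

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 2))
y_labeled = (X_labeled[:, 0] + X_labeled[:, 1] > 0).astype(int)   # the "seed" dataset
X_stream = rng.normal(size=(1000, 2))                             # new, unlabelled data

model = LogisticRegression().fit(X_labeled, y_labeled)

for _ in range(3):                                # continuous model updates
    if len(X_stream) == 0:
        break
    probs = model.predict_proba(X_stream)
    confident = probs.max(axis=1) > 0.9           # only trust confident guesses
    X_self = X_stream[confident]
    y_self = probs[confident].argmax(axis=1)      # the model's own labels
    X_stream = X_stream[~confident]               # don't re-label the same data
    X_labeled = np.concatenate([X_labeled, X_self])
    y_labeled = np.concatenate([y_labeled, y_self])
    model = LogisticRegression().fit(X_labeled, y_labeled)

print("training set grew to", len(X_labeled), "examples")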

Zero
Re: The stink on instincts
« Reply #6 on: February 20, 2016, 08:45:45 am »
Quote
The only instinct is survival, which is really just some sort of preconceived notion of a situational good/bad rating at any point in time.  Maybe the rest is learned behavior, i.e. reactions are the result of making something bad less bad, or something good more good.  Our instinct is to do this, but the how is undefined and subsequently learned.

I don't agree :) I think instinct is not only knowing what's good or bad, but also everything needed to react soundly. Those cats didn't learn to jump when they suddenly see a cucumber very close to them. That "get away as fast as you can" reaction is part of their instinct.

All in all, learning is way overrated. Being able to produce thoughts matters most. Learning, well... most human adults can live without it.

Memorizing and remembering are useful, but they're different from learning.

I teach IT to adults. When they're older than 30, they often think they can't learn anymore because they're not children. This is obviously not true, but you always have to prove it before you start working with them.

The real challenge with AI is probably not "how are we gonna make it learn?". We have plenty of algorithms for that; that part is fine.

The real challenge is "how do we make it react soundly to every possible situation?"

And then the typical thought train goes like "well, there's way too many possible situations, it's impossible to code an appropriate reaction for each of them, so instead we'll make it able to learn, and it will create everything by itself".

This is a wrong start. To succeed, we have to admit and accept that the thing is HUGE. Indeed, there are many many things inside an AI. It takes time. A big AIML chatbot with 40K templates is closer to it than a small wonderful learning algorithm.

We must first build an adult AI that cannot learn, but that can think and process information consciously (an AI that knows it is part of the world, and that observes the world, including itself). Then, and only then, we'll make it able to learn.

So it's about having direct access to a lot of information and, more importantly, being able to choose which piece of information is relevant in the current situation.

We already have a lot of information. It's the internet.

I think that in the future, we won't "surf" the internet ourselves. Instead, we'll interact with software that uses the internet to react soundly to our requests.

Like, we ask "hey, can you explain how a steam engine works, please?". Then the AI uses the internet to do what it's asked.

Hence, the "browser as a body" idea.


Now, that's me, and I could be wrong...

EDIT: typos


EDIT:

What should be done is made of two things:

- a client that can interact with the user (like a browser), feel its environment (the OS), and think. Thinking is like talking to itself: the client should be able to build mental webpages based on other mental webpages, before it presents actual webpages to the user. NW.js could be the client, with CouchDB as local memory.

- a new kind of website, not meant to be browsed by users but by AI. These websites hold knowledge - a part of the brain - including raw data and, more importantly, behaviors. These behaviors are mainly about "how to think", or more precisely, "what to think next".

This is why the "forum to store a brain" idea is so interesting.
In my opinion, it's a good start.
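To show the shape of the think-loop I have in mind, a rough Python sketch - the "brain site", its behaviors and its pages are all invented here, and in reality it would be NW.js talking to CouchDB and to real sites over HTTP:

Code:
# A client that asks a knowledge/behavior site "what to think next", builds
# mental pages out of mental pages, and only at the end renders a page for the user.

BRAIN_SITE = {
    "behaviors": {   # "what to think next", keyed by the current topic
        "steam machine": {"next": "boiler"},
        "boiler": {"next": "piston"},
        "piston": {"action": "answer"},
    },
    "pages": {       # raw knowledge
        "steam machine": "a machine driven by expanding steam",
        "boiler": "water is heated into pressurized steam",
        "piston": "steam pushes a piston, which turns a wheel",
    },
}

def fetch(section, key, default=None):
    # stands in for an HTTP GET to the brain site
    return BRAIN_SITE[section].get(key, default)

def think(request, max_steps=10):
    mental_pages = [request]                       # inner pages, never shown to the user
    for _ in range(max_steps):
        behavior = fetch("behaviors", mental_pages[-1], {})
        if behavior.get("action") == "answer":
            break
        mental_pages.append(behavior.get("next", mental_pages[-1]))
    # only now does the client build an actual webpage for the user
    return "<p>" + ". ".join(fetch("pages", t, t) for t in mental_pages) + "</p>"

print(think("steam machine"))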


EDIT: added links
« Last Edit: February 20, 2016, 10:37:26 am by Zero »

Korrelan
Re: The stink on instincts
« Reply #7 on: February 20, 2016, 11:06:03 am »
Survival instincts are pretty simple to explain through genetic mutation.

A newborn giraffe can usually stand and walk within half an hour of being born.  This seems like an astonishing learning feat for such a lanky, unstable animal until you consider evolution and genetic mutation. Somewhere along its evolutionary path a trait gradually developed for the newborn to stand and run; its ancestors probably had much shorter limbs, which aided this genetic tuning.  All the giraffes that didn't have the trait were killed by predators.  That trait was then passed on to their offspring and developed further.  The instinct developed through natural selection and genetic mutation.

Quote
All in all, learning is way overrated. Being able to produce thoughts matters most. Learning, well... most human adults can live without it.

Surely you can’t have one without the other.  For a machine to be able to think about a subject it must have knowledge about it.

Quote
We must first build an adult AI that cannot learn, but that can think and process information consciously (an AI that knows it is part of the world, and that observes the world, including itself). Then, and only then, we'll make it able to learn.

I disagree.  I think a young AGI that develops over time is the correct approach.  Both learning and the act of thinking have to be developed at the same time because they both depend on each other.

Quote
And then the typical thought train goes like "well, there's way too many possible situations, it's impossible to code an appropriate reaction for each of them, so instead we'll make it able to learn, and it will create everything by itself".

You have to move from serial to parallel programming techniques.  An intelligent system has to be able to consider all the relevant information/ sensory streams at once. 

I personally use the connectome approach.  My design uses a kind of "holographic matrix" where all sensory/thought patterns are distributed to all cortex regions at the same time.  If you say hello to my AGI, it recognizes you by matching the way you look/sound/move, the time of day, location, etc.  All facets of its current sensory state/environment produce a pattern unique to the target subject. The design distills the complex pattern down to a level that can then be included in the next frame of "thought", where more information is added and distilled down, and so on.  Any recognized trait from any of the data streams will then trigger the complete pattern train at a later date.

The system started as a small, complex neural seed. The available cortex area grows over time as more neurons/synapses are added to the mix through neurogenesis and experience.

A view of the ‘seed’ neural map I designed.



A small cross-section of the cortex area from the seed's surface.



An activation map of the audio cortex once it had learned phonemes. The fixed colours represent learned frequency sets/phonemes, and you can see the underlying parallel neural patterns triggering the map. This activation is caused by the current "state of mind" after all sensory/internal patterns have mixed and been filtered by experience/learning over time. So although its main stimulus is an audio frame from that moment, previous audio frames, visual, tactile, etc. are all being included from other cortex areas at the same time.



The physical design of the seed allows prediction, etc. to form naturally from a set of genetic rules for how synapses form.  Over time, learned pattern sequences gain a kind of inertia that also allows the system to "tune" to the current experience. The main overall pattern comprises thousands of smaller feedback patterns that guide its progress. They shape the "mental landscape" for the flow of thought (like a river through a valley).

It can recognize speech no matter how noisy the environment is, for example. If all, or only a few, of the frequencies that make up a recognized phoneme are present in the audio stream, the complete phoneme pattern will be sent to other cortex areas.
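Stripped of all the neural machinery, the basic pattern-completion idea looks something like this - a toy sketch only; the frequencies and threshold are purely for illustration, not how the system actually stores them:

Code:
# A partial set of heard frequencies triggers the complete learned phoneme pattern.

LEARNED_PHONEMES = {
    "ah": {300, 700, 1200},             # made-up frequency sets (Hz)
    "ee": {280, 2300, 3000},
    "sh": {2500, 4000, 5500, 7000},
}

def complete(frequencies_heard, threshold=0.5):
    # return the FULL pattern of any phoneme whose partial match is good enough
    triggered = {}
    for phoneme, pattern in LEARNED_PHONEMES.items():
        overlap = len(pattern & frequencies_heard) / len(pattern)
        if overlap >= threshold:
            triggered[phoneme] = pattern    # the complete pattern is sent onwards
    return triggered

# noisy input: only two of the three "ah" frequencies survived
print(complete({300, 1200, 9999}))          # {'ah': {300, 700, 1200}}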

There is constant pattern activity within the cortex, even in the absence of sensory stimulus.  The seed's main cyclic pattern is changed over time by learning/experience, and these new patterns are triggered/included.  So learned chains of events/knowledge will fire in sequence simply because that's the way they were laid down in the system at the time... this pattern leads to that pattern, unless some other pattern is present, in which case a different pattern fires, etc.

Heuristics, databases and chatbot engines do a very good job of simulating intelligence and are worth studying and developing, but if you want a machine to actually ‘think’, I think this is the way to true AGI.
« Last Edit: February 20, 2016, 12:45:16 pm by korrelan »
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

Zero
Re: The stink on instincts
« Reply #8 on: February 20, 2016, 12:59:08 pm »
Korrelan, I love your videos  :D

Quote
I think a young AGI that develops over time is the correct approach.  Both learning and the act of thinking have to be developed at the same time because they both depend on each other.

In my opinion, the path you describe (the one 99% subscribe to, I guess) can't work without a biological middleware, because it is indeed perfectly natural for biological entities, but not natural at all for computers.

Birds flap wings. Planes don't.



Having said that, I have to say that I admire what you do, and that I also believe in the strength of parallel programming, but not like this.

Korrelan
Re: The stink on instincts
« Reply #9 on: February 20, 2016, 01:20:10 pm »
Quote
In my opinion, the path you describe (the one 99% subscribe to, I guess) can't work without a biological middleware, because it is indeed perfectly natural for biological entities, but not natural at all for computers.

I agree… I use the computer to simulate a biological machine.  The AGI is not running on the computer, but on the simulation. This is the biological middleware. 

I provided my solution only as an example of a different approach.  It may stick with you and, in years to come, jog something that helps you create the world's first true AGI. :D
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

Zero
Re: The stink on instincts
« Reply #10 on: February 20, 2016, 01:31:43 pm »
Your work is highly valuable.
Above all, I believe AI should come from good teamwork.
This is my home now: we all succeed together, or we think until death  8)



EDIT: sorry, back to the question
Quote
If you agree, where would you start with bad vs. good?  Start out with anything new being bad until found otherwise?

Maybe we need a second dimension, predictable/unpredictable
Code
     Predictable
          ^
          |
          |
Bad <-----+-----> Good
          |
          |
          v
    Unpredictable

Unpredictable is potentially dangerous. But it is also potentially very good.

EDIT: I don't think "new" should be considered dangerous at first. Instead, it should fire the evaluation behavior. And sometimes, you need to get closer to better evaluate things (like my baby, who puts anything in her mouth to taste it, whatever it is).
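As a tiny sketch of how the two axes could drive the reaction (Python, thresholds invented):

Code:
def react(goodness, predictability):
    # both inputs in -1.0 .. +1.0
    if predictability < 0.0:
        # unpredictable: potentially dangerous, potentially very good ->
        # don't flee, don't grab: fire the evaluation behavior instead
        return "evaluate (get closer, carefully)"
    if goodness > 0.3:
        return "approach / eat it"
    if goodness < -0.3:
        return "get away / push it"
    return "ignore, keep scanning"

print(react(goodness=0.8, predictability=0.9))    # approach / eat it
print(react(goodness=-0.6, predictability=0.7))   # get away / push it
print(react(goodness=0.0, predictability=-0.8))   # evaluate (get closer, carefully)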
« Last Edit: February 20, 2016, 02:31:52 pm by Zero »

Korrelan
Re: The stink on instincts
« Reply #11 on: February 20, 2016, 02:32:34 pm »
« Last Edit: February 20, 2016, 03:01:54 pm by korrelan »
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

Zero
Re: The stink on instincts
« Reply #12 on: February 20, 2016, 02:51:33 pm »
That's it! Emotions! The very first layer, which triggers instinctive behaviors!  O0

8pla.net
Re: The stink on instincts
« Reply #13 on: February 20, 2016, 03:25:20 pm »
Code
1: scan until something unusual catches your attention
2: evaluate it
3: if it seems good, get closer (or eat it) and goto 2
4: if it seems bad, get away (or push it) and goto 2
5: if it's irrelevant goto 1

Zero said, "The purpose of life is to make things alive."

Then, the purpose of artificial life is to make things artificially alive.

The cat protocol accomplishes a great deal in just a few lines.

I enjoyed the goto statements the most.

Analyzing the "cat protocol":

If 3: "eat it" is TRUE then 2: "evaluate it" is "irrelevant".
If 3: "get closer" is TRUE then the 2: "evaluate it" loop killed the cat.

My Very Enormous Monster Just Stopped Using Nine

Zero
Re: The stink on instincts
« Reply #14 on: February 20, 2016, 03:53:50 pm »
Yeah, because it's Twice, the nice buggy cat from The Matrix  ;D

 

