Conscious Control Flow


infurl

  • Administrator
  • ***********
  • Eve
  • *
  • 1372
  • Humans will disappoint you.
    • Home Page
Conscious Control Flow
« on: April 13, 2020, 11:23:08 pm »
https://arxiv.org/pdf/2004.04376.pdf

Quote
In this demo, we present ConsciousControlFlow(CCF), a prototype system to demonstrate conscious Artificial Intelligence (AI). The system is based on the computational model for consciousness and the hierarchy of needs. CCF supports typical scenarios to show the behaviors and the mental activities of conscious AI. We demonstrate that CCF provides a useful tool for effective machine consciousness demonstration and human behavior study assistance.

This paper presents a nice clean architecture for a practical implementation of conscious software.


LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: Conscious Control Flow
« Reply #1 on: April 14, 2020, 01:03:08 am »
Hey look, they got my idea in there finally! Food>money>respect. A hierarchy with rewards for nodes.

my special hierarchy:
https://ibb.co/vVbG2cp

It lost me early on, I'm sorry, I don't like the architecture. My hierarchy (which I haven't fully built yet) stores, all at the same time, a hierarchy of phrases with frequency weights that can handle positional delay and translation etc., Working Memory energy, and reward goals, in one hierarchy, seriously. And it can do a lot, lots!! Yes, a desired question is asked until it has been fulfilled, then the next-priority goal question is taken up. My hierarchy schema saves and updates long-term/short-term memory. Satisfaction for my hierarchy would be recognizing a node that it wanted to recognize; it just needs to make sure it knows how to get there! For example, you ask how to put a chair on top of a table, or ask which is true: will Earth become a giant single brain or many small brains? And so one answer is verified because it follows from/translates to the data more!
« Last Edit: April 14, 2020, 01:53:02 am by LOCKSUIT »
Emergent          https://openai.com/blog/


HS

  • Trusty Member
  • **********
  • Millennium Man
  • *
  • 1179
Re: Conscious Control Flow
« Reply #2 on: April 14, 2020, 01:45:02 am »
[Image: Levels]

I was just about to say, this looks like a combo of your word hierarchy and Ivan’s fractal ellipsoid search engine. You can zoom in on big concepts to see what they are made of. It's the same complexity at any magnification so that you can start moving in the right direction immediately, discovering where to go next in the process. Faster than front loading your cognition. Just one step in the right direction increases the resolution of applicable options! Allows you to make a mental motion like threading a needle. Speed first, then precision. Like the opposite of a ball bearing rolling towards a magnet.


infurl

Re: Conscious Control Flow
« Reply #3 on: April 14, 2020, 02:27:00 am »
Does anyone here ever read anything, or do you just look at the pictures? I would be disappointed and upset if these were actually my own pearls being wasted here.  :knuppel2:

It's based on Maslow's hierarchy of needs (an old idea, from psychology) and combines a modified version of that with a subsumption architecture (another old idea, from robotics). The paper is notable because it's a clean practical design based on ideas that have been around so long that they could be mistaken for common sense by anyone who had any.

Unfortunately to be truly intelligent, the design still depends on some magic black boxes that haven't been invented yet.
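For anyone who wants the flavor of that combination, here's a rough sketch (not from the paper; the needs, thresholds, and numbers are all invented) of a Maslow-style need stack arbitrated subsumption-style, where the lowest unsatisfied layer takes control:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str         # e.g. "air", "water", "food", "social"
    level: float      # current drive: 0.0 = satisfied, 1.0 = critical
    threshold: float  # drive above which this layer demands control

def arbitrate(needs):
    """Subsumption-style arbitration: scan from the most basic layer up
    and hand control to the first need whose drive exceeds its threshold;
    lower layers subsume (pre-empt) everything above them."""
    for need in needs:  # listed from most basic to least basic
        if need.level > need.threshold:
            return need.name
    return "idle"

needs = [
    Need("air",    0.0, 0.1),
    Need("water",  0.6, 0.3),
    Need("food",   0.9, 0.3),   # hungrier than thirsty, but...
    Need("social", 0.8, 0.5),
]
print(arbitrate(needs))  # "water": the more basic layer wins
```

The point of the layering is exactly infurl's: the ordering of the checks encodes the hierarchy of needs, so no weighting arithmetic is needed until two needs sit in the same layer.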


LOCKSUIT

Re: Conscious Control Flow
« Reply #4 on: April 14, 2020, 02:29:46 am »
I read the full thing. Notice I used the word 'Satisfaction' :) ?

It's still shit though. Prove me wrong. Lol.

It's utter nonsense, what does it even do!? Prediction? Oh ya.

Most of it is just wrong and unclear. It also mixes a virtual body into it.... That complicates it, and it probably can't do much with the body to complement a brain that can think about anything. The brain must think about complex answers; what kind of data can it collect from shitty R&D sims with no hands or motor cortices?

Stick to the brain-only path until it has a good reason to go into the R&D lab to look for specific data. So just feed it lots of diverse data from everywhere, and it can seek out desired webpages on the internet first too, btw.

Prediction+Reward is good, but keep the body out of it for now. And use an explainable hierarchy architecture!

Keep in mind a perfect body that doesn't need to sleep or walk or eat and just wants immortality needs to collect data, and THEN try R&D tests. That's all there is to AGI!! Really... I barely do R&D, I just invent new knowledge :) :) :)
« Last Edit: April 14, 2020, 02:50:31 am by LOCKSUIT »


HS

Re: Conscious Control Flow
« Reply #5 on: April 14, 2020, 05:02:24 am »
No worries infurl, only skimmed it so far, but I plan to take a closer look at the paper.


LOCKSUIT

Re: Conscious Control Flow
« Reply #6 on: April 14, 2020, 05:07:35 am »
Does anyone else agree it has a lot of jumble in it??

It's just all about reward and prediction and embodiment... I got nothing else from it. Please back it up, infurl! Explain.

It is not clear at all; it should hold no surprises for the reader (least surprise).


infurl

Re: Conscious Control Flow
« Reply #7 on: April 14, 2020, 05:18:52 am »
Quote
No worries infurl, only skimmed it so far, but I plan to take a closer look at the paper.

Thanks @HS, I hope you are able to get some benefit from it. I devote a lot of time to finding interesting and useful things for my own purposes, and I do try to mention them here whenever I find something that seems like it might be useful to somebody on this forum. I've even been known to post things that I don't agree with if I know of somebody who might be able to use them anyway.

Unfortunately it is becoming increasingly necessary to post things privately to avoid the frustration of having them pooped on by idiots who haven't been potty trained yet.



LOCKSUIT

Re: Conscious Control Flow
« Reply #8 on: April 14, 2020, 06:35:55 am »
Giving it another read. The thing here is I'm interested in a much larger and deeper architecture beyond these cheesy Mr. Professor papers that never explain much intuitively. Ya, there's some good in it, but it's not mind-blowing.

If anything, infurl linked us this paper right after I mentioned 'RL for thoughts'.

Its idea is to know how to get what it wants. It starts with e.g. 'eat food', sees no food, changes the solution for reaching the goal to 'search for food', that works, then removes the top-priority goal since it's solved. Doing so requires checking memory to confirm you own no food. This is akin to the brain mutating along branches up the hierarchy, looking for milestone goals to be recognized; prediction (and reward) is used to pick paths, and satisfaction/reward is used to say hey, stop, this is what we want. The pre-check for whether you can even do something in physics/on Earth in the first place is just prediction; you'll predict likely/true things if prediction is running well.
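That eat-food / no-food / search-for-food loop can be read as a goal stack: a blocked goal pushes a prerequisite subgoal, and satisfied goals pop off. A toy sketch (the goals and the world model here are made up, not from the paper):

```python
def pursue(goal, world):
    """Tiny goal stack: a blocked goal pushes its prerequisite; a
    satisfied goal pops off; otherwise the action executes directly."""
    stack, trace = [goal], []
    while stack:
        g = stack[-1]
        if world.get(g, False):            # goal already satisfied?
            trace.append(f"done: {g}")
            stack.pop()
        elif g == "eat food" and not world.get("have food", False):
            stack.append("have food")      # blocked: push prerequisite
        elif g == "have food":
            world["have food"] = True      # pretend the search succeeds
            trace.append("found food")
        else:
            world[g] = True                # execute the action itself
            trace.append(f"do: {g}")
    return trace

print(pursue("eat food", {}))
```

Running it shows the described order: the agent first acquires food (the subgoal), then eats, and each solved goal is removed from the top of the stack.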


WriterOfMinds

  • Trusty Member
  • ********
  • Replicant
  • *
  • 620
    • WriterOfMinds Blog
Re: Conscious Control Flow
« Reply #9 on: April 21, 2020, 06:05:38 am »
I had some spare time for technical papers this week (yay), so I read through this.

People sure like to use the word "consciousness" to mean a lot of things, don't they?  I would be a little more specific and describe what's in this paper as a model of executive function.

My biggest active question about executive function implementing a goal/need hierarchy is, how should it decide when one level has been "satisfied" so it can move to the next?  So I was watching for ideas that might be a help toward answering that question.

For me the most interesting thing in the paper was the bit about using Bayesian optimization to set the constants in the function that calculates the weights.  I've been in that position of having a reasonable function that combines multiple variables, and knowing what outcomes I wanted it to yield, but not really knowing how to tune it (outside trial and error).  This Bayesian optimization appears to be a formal method for tuning constants based on a set of test cases that are labeled with their desired outcomes.  That could be really useful for a wide variety of systems.
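The paper's Bayesian optimization is more sophisticated than this, but the tuning loop it formalizes can be sketched with plain random search over the constants, scored against hand-labelled test cases. The combining function and the cases below are invented for illustration, not taken from the paper:

```python
import random

# Hand-labelled test cases: (hunger, thirst, which need SHOULD win).
CASES = [
    (0.9, 0.2, "food"),
    (0.2, 0.9, "water"),
    (0.6, 0.5, "food"),
    (0.4, 0.5, "water"),
]

def need_weights(hunger, thirst, a, b):
    # Hypothetical combining function with two tunable constants.
    return {"food": a * hunger, "water": b * thirst}

def score(a, b):
    """Number of labelled cases the constants (a, b) get right."""
    hits = 0
    for hunger, thirst, want in CASES:
        weights = need_weights(hunger, thirst, a, b)
        hits += (max(weights, key=weights.get) == want)
    return hits

random.seed(0)
candidates = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
best = max(candidates, key=lambda ab: score(*ab))
print(score(*best), "of", len(CASES), "cases correct")
```

Bayesian optimization replaces the blind sampling with a surrogate model that proposes promising constants, so it needs far fewer evaluations, but the interface is the same: labelled outcomes in, tuned constants out.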

There is still important material missing, though.  I was thinking about issues like "when should an agent consider its food stockpile to be large enough?"  And the agent described in this paper doesn't seem to do any long-term planning or risk assessment; the need weights are only determined by how high its drives are at the moment.  So it searches for food and eats until the hunger value becomes negligible, then moves on to other things -- very simple, but also inadequate.

 (I was maintaining a stockpile of several months' worth of food long before the coronavirus panic-buying began. That has proven to be a very smart decision! But filling my whole house with canned goods, or turning into a "prepper" who spends every spare minute running a backyard subsistence farm, would be insane. Why did I stop exactly where I did? Why a few months' worth instead of three years' worth? I don't know! This is an example of me following instincts that I can't interrogate, which makes understanding the behavior well enough to replicate it in an AI very difficult.)

Another issue that isn't addressed is the possibility of pathological oscillations.  I remember reading somewhere (I forget where) about a hypothetical agent that works on solving whichever need has maximum value right this moment, the way the agent in the paper does.  Suppose that it is both hungry and thirsty, but its food and water are in different locations, and it can't carry them. It gets stuck in a horrible loop where it eats one mouthful of food, feels its hunger level drop below its thirst level, runs over to the water, drinks one sip, then runs back over to the food again ...

I assume the test agents in the paper avoid this problem by treating possible actions as atomic events rather than processes.  E.g. "eat" could mean "eat all the food you have" or "eat until full," and the need weights aren't re-evaluated until such a discrete action is complete.  A more sophisticated version of the agent might need to make finer-grained evaluations, and also have some mechanism for avoiding oscillations.  Perhaps what's missing here is an assessment of which of the needs on the table will be easiest (not just possible) to fulfill.
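The food/water dithering loop, and the commitment fix, are easy to simulate. In this toy model (integer drive units and numbers invented for the demo), the greedy agent ping-pongs between the two locations while the committed agent switches only twice:

```python
def run(hunger, thirst, hysteresis, floor=2):
    """Drain two integer drives toward `floor`, one unit per tick,
    and count how often the agent switches between needs."""
    levels = {"food": hunger, "water": thirst}
    current, switches = None, 0
    while max(levels.values()) > floor:
        if hysteresis and current is not None and levels[current] > floor:
            target = current  # stay committed until actually satisfied
        else:
            target = max(levels, key=levels.get)  # greedy re-evaluation
        if target != current:
            switches += 1     # one trip between the food and water spots
            current = target
        levels[target] -= 1
    return switches

print(run(9, 8, hysteresis=False))  # 12 switches: the dithering loop
print(run(9, 8, hysteresis=True))   # 2 switches: eat, then drink
```

The hysteresis rule is one cheap stand-in for the "easiest to fulfill" assessment: once a need is being serviced, its effective weight is boosted until it is genuinely satisfied.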


LOCKSUIT

Re: Conscious Control Flow
« Reply #10 on: April 21, 2020, 06:46:33 am »
Note I modified some of your quotes.

"how should it decide when one level has been "satisfied" so it can move to the next?"
"how much food should I stockpile?"
"eat until full, but what if it needs an extra-full tummy to get any work done, or enough weight to crush the ship door and escape its cell?"

> When it really recognizes it, with enough proof. It must be the best cumulative match/prediction.
> Based on context: if it knows a 3-month winter is likely and a 10-month one is much less likely, and so on, then that makes it predict 'gather more food, of amount X'.
> Again, based on context: you predict it is most likely/preferred to eat extra, so as to get the desired result X recognized in your retina.

All you do is ask a desired question (create the root of a story; it's also prediction) and try predicting answers using frequency/reward/induction until you find the path that makes you say, ah, now I recognize the answer much better, and it's the best I've found so far. Your prediction may be really good, but you know it isn't matching up, so you keep searching lol. Once satisfied with its best answer, the one that matches up with most of its related knowledge, it can also proceed further in steps to a deeper goal or motor procedure.


infurl

Re: Conscious Control Flow
« Reply #11 on: April 22, 2020, 12:58:38 am »
Quote
People sure like to use the word "consciousness" to mean a lot of things, don't they?  I would be a little more specific and describe what's in this paper as a model of executive function.

Calling it "consciousness" is undoubtedly an exaggeration, but terms like "intelligence" and "learning" are being applied everywhere with no less abandon nowadays. On the one hand I prefer more precise terms like the one that you suggest and on the other hand I can tolerate the more vague terms which indicate what we might be trying to achieve.

Quote
My biggest active question about executive function implementing a goal/need hierarchy is, how should it decide when one level has been "satisfied" so it can move to the next?  So I was watching for ideas that might be a help toward answering that question.

There's the assumption that the hierarchy is pretty well defined at the lower levels. Recently I heard of the rule of threes with respect to human survival. A person can survive for three minutes without air, three hours without shelter (in the more extreme climates), three days without water, and three weeks without food. Beyond that, our emotional and intellectual needs are a bit less urgent but ultimately just as important or our existence would be futile. If there were a tie, I would choose convenience, roll a die, or do nothing.
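One way to turn the rule of threes into actual need weights, purely as a sketch (the drive numbers are invented, and the timescales are the folk rule, not physiology), is to scale each drive by the inverse of its survival window:

```python
SURVIVAL_MINUTES = {
    "air":     3,             # three minutes without air
    "shelter": 3 * 60,        # three hours without shelter
    "water":   3 * 24 * 60,   # three days without water
    "food":    21 * 24 * 60,  # three weeks without food
}

def urgency(drives):
    """Scale each drive (0..1) by the inverse of its survival window,
    so a small deficit in a fast-acting need outranks a large deficit
    in a slow one."""
    return {need: drive / SURVIVAL_MINUTES[need]
            for need, drive in drives.items()}

drives = {"air": 0.05, "shelter": 0.2, "water": 0.7, "food": 0.9}
u = urgency(drives)
print(max(u, key=u.get))  # "air": a mild air deficit beats strong hunger
```

This bakes the well-defined lower levels of the hierarchy into the weights themselves, leaving the tie-breaking rule only for needs on comparable timescales.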

Quote
For me the most interesting thing in the paper was the bit about using Bayesian optimization to set the constants in the function that calculates the weights.  I've been in that position of having a reasonable function that combines multiple variables, and knowing what outcomes I wanted it to yield, but not really knowing how to tune it (outside trial and error).  This Bayesian optimization appears to be a formal method for tuning constants based on a set of test cases that are labeled with their desired outcomes.  That could be really useful for a wide variety of systems.

Have you experimented with electric motors using feedback loops? Implementing a PID (proportional/integral/differential) controller is a good way to get a feel for this sort of thing. There are three weighted variables of decreasing significance so you can "manually" optimize P, then I, then D in that order to get a marvellous result. Since contests are won or lost by optimization in the margins and most processes that require fine tuning are not nearly so clear cut, mastering techniques to handle those situations will be as invaluable as it is important.
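A toy PID loop makes the P-then-I-then-D tuning exercise concrete. This one pushes a frictionless unit mass toward a setpoint; the plant model and the gains are invented for the demo, not tuned for any real motor:

```python
def simulate(kp, ki, kd, setpoint=1.0, steps=2000, dt=0.05):
    """Drive a frictionless unit mass to `setpoint` with a PID force."""
    pos, vel, integral, prev_err = 0.0, 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - pos
        integral += err * dt            # I term accumulates past error
        deriv = (err - prev_err) / dt   # D term damps the approach
        prev_err = err
        force = kp * err + ki * integral + kd * deriv
        vel += force * dt               # unit mass, no friction
        pos += vel * dt
    return pos

# With these gains the mass settles close to the setpoint; try setting
# kd to zero to watch the undamped P-only oscillation.
print(simulate(kp=4.0, ki=0.5, kd=2.0))
```

Tuning in the P, I, D order works here for the reason infurl gives: each successive term corrects a smaller residual effect (speed of approach, then steady bias, then overshoot).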

Quote
There is still important material missing, though.  I was thinking about issues like "when should an agent consider its food stockpile to be large enough?"  And the agent described in this paper doesn't seem to do any long-term planning or risk assessment; the need weights are only determined by how high its drives are at the moment.  So it searches for food and eats until the hunger value becomes negligible, then moves on to other things -- very simple, but also inadequate.

That's where the magic black box is needed.

Quote
Another issue that isn't addressed is the possibility of pathological oscillations.  I remember reading somewhere (I forget where) about a hypothetical agent that works on solving whichever need has maximum value right this moment, the way the agent in the paper does.  Suppose that it is both hungry and thirsty, but its food and water are in different locations, and it can't carry them. It gets stuck in a horrible loop where it eats one mouthful of food, feels its hunger level drop below its thirst level, runs over to the water, drinks one sip, then runs back over to the food again ...

I assume the test agents in the paper avoid this problem by treating possible actions as atomic events rather than processes.  E.g. "eat" could mean "eat all the food you have" or "eat until full," and the need weights aren't re-evaluated until such a discrete action is complete.  A more sophisticated version of the agent might need to make finer-grained evaluations, and also have some mechanism for avoiding oscillations.  Perhaps what's missing here is an assessment of which of the needs on the table will be easiest (not just possible) to fulfill.

You are quite right. Operations must be atomic and implemented as transactions which either completely succeed or completely fail or the whole problem of management becomes unmanageable. When there can be more than one transaction in progress at one time you also need a way to avoid deadlock where two competing processes each possess a resource that the other one needs. For some more practical insight into this area, I'd recommend learning about ACID compliance which is a requirement of modern database systems.

https://en.wikipedia.org/wiki/ACID

It's a set of principles which guarantee some degree of success even in the face of unpredictable challenges and potential failures.
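The standard cure for the two-tasks/two-resources deadlock described above is a global lock ordering: every task acquires shared resources in the same canonical order, so no two tasks can each end up holding something the other needs. A sketch (the resource names are invented):

```python
import threading

food_lock, water_lock = threading.Lock(), threading.Lock()
LOCK_ORDER = [food_lock, water_lock]  # the one global acquisition order

def with_resources(locks, action):
    """Acquire every lock in the canonical global order, run the
    action, then release in reverse order."""
    ordered = sorted(locks, key=LOCK_ORDER.index)
    for lock in ordered:
        lock.acquire()
    try:
        return action()
    finally:
        for lock in reversed(ordered):
            lock.release()

# Two agents request the same resources in OPPOSITE orders, which could
# deadlock with naive acquisition; normalizing the order makes it safe.
results = []
t1 = threading.Thread(target=lambda: results.append(
    with_resources([food_lock, water_lock], lambda: "agent1 done")))
t2 = threading.Thread(target=lambda: results.append(
    with_resources([water_lock, food_lock], lambda: "agent2 done")))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['agent1 done', 'agent2 done']
```

Database systems use the same family of tricks (lock ordering, or detecting a cycle and aborting one transaction) to deliver the "I" in ACID.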


LOCKSUIT

Re: Conscious Control Flow
« Reply #12 on: April 22, 2020, 05:26:45 am »
So if a new discovery/new data doesn't satisfy you completely, and more than 50% of it isn't explained by or doesn't align with your knowledge, throw it away? But an answer is better than no answer... Note it can align 100% and be fully useless, e.g. spiders wear robes, etc., and my discovery is spiders are God.

 

