Big


Zero

Big
« on: March 23, 2016, 11:11:52 pm »
Right in the middle, there's something like GTA (Grand Theft Auto). It is a simulation of the world. Unlike a video game, however, this simulation is usually paused. The system can choose to run parts of this simulation if it wants to test or imagine a situation. The system can also modify any aspect of any entity in this simulation, as you would in the Unity editor, for example. If you do a bit of game development, you already know that we often use an Entity-Component-System for storing the world state in games. An entity is nothing but a group of components defining its behavior and its state.
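To make the idea concrete, here is a minimal Entity-Component-System sketch of the "paused simulation". Everything is illustrative (the component names, the movement system); the key property is that the world only advances when the system explicitly asks it to step.

```python
from dataclasses import dataclass

# Illustrative components: an entity is nothing but an id plus a bag of these.
@dataclass
class Position:
    x: float = 0.0
    y: float = 0.0

@dataclass
class Velocity:
    dx: float = 0.0
    dy: float = 0.0

class World:
    """The 'paused simulation': it only advances when step() is called."""
    def __init__(self):
        self.entities = {}   # entity id -> {component_type: component}
        self.next_id = 0

    def spawn(self, *components):
        eid = self.next_id
        self.next_id += 1
        self.entities[eid] = {type(c): c for c in components}
        return eid

    def step(self, dt):
        # A movement "system": runs over every entity that has
        # both a Position and a Velocity component.
        for comps in self.entities.values():
            if Position in comps and Velocity in comps:
                p, v = comps[Position], comps[Velocity]
                p.x += v.dx * dt
                p.y += v.dy * dt

world = World()
car = world.spawn(Position(0.0, 0.0), Velocity(1.0, 0.0))
world.step(2.0)   # the system chooses to "run" the paused world for a moment
print(world.entities[car][Position].x)  # -> 2.0
```

Modifying "any aspect of any entity" is then just writing to a component, exactly as you would in an editor's inspector panel.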

Around this "paused simulation", we have a connection gate. The system constantly connects what it feels (through its sensors) to the paused simulation. Each of these connections means "this particular perception is due to this particular entity". Sometimes, connections are made automatically (hey look, this cloud looks like a dog). Sometimes, the system decides to make connections (if I want, I can see two black profiles, or one white vase). In any case, these connections are always associated with a strong or weak probability (look at her, I'm pretty sure she's more than 50 years old). These probabilities are part of Bayesian networks.
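One such probabilistic connection can be sketched as a single Bayesian update: the belief that a percept is due to one entity rather than another is the prior weighted by how well the percept fits each candidate. The numbers below are invented purely for illustration.

```python
# P(entity | percept) ∝ P(percept | entity) * P(entity)
def posterior(priors, likelihoods):
    """priors, likelihoods: dicts mapping entity -> probability."""
    unnorm = {e: priors[e] * likelihoods[e] for e in priors}
    z = sum(unnorm.values())
    return {e: p / z for e, p in unnorm.items()}

priors = {"vase": 0.5, "two_faces": 0.5}       # the ambiguous figure, 50/50 a priori
likelihoods = {"vase": 0.9, "two_faces": 0.3}  # current percept fits "vase" better

belief = posterior(priors, likelihoods)
print(belief["vase"])  # ≈ 0.75: a "strong" connection to the vase entity
```

A full Bayesian network would chain many such conditional tables together; this is only the single-edge case.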

Below the paused simulation and the sensor connection gate, there is the ongoing-activities group. Each ongoing activity has its own desktop, which keeps track of related goals and achievements. Desktops can be paused or currently running, and they are always calling for system resources (they want to be running). Each desktop has fast access to the data it needs, depending on its purpose. For example, you can have a car-driving desktop, which has fast access to anything road-related. Desktops are responsible for preparing and maintaining a list of possible next actions.

Jumping from desktop to desktop (depending on which one requires the most attention), there's a focus. When the focus arrives on a desktop, it reads the list of possible next actions and makes it flow through the system's emotion grid. If one of the possible next actions scores high enough, it is engaged. If none is satisfying, the desktop is asked to investigate further. Then the focus jumps elsewhere.
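The desktop/focus loop above can be sketched in a few lines. Everything here is a stand-in for illustration: the attention values, the threshold, and the "emotion grid" reduced to a flat score table.

```python
ENGAGE_THRESHOLD = 0.5   # invented cutoff for "makes a high score"

desktops = [
    {"name": "car_driving", "attention": 0.9,
     "next_actions": ["brake", "accelerate"]},
    {"name": "daydreaming", "attention": 0.2,
     "next_actions": ["replay_memory"]},
]

# The emotion grid, flattened to action -> emotional charge for the sketch.
emotion_grid = {"brake": 0.8, "accelerate": 0.3, "replay_memory": 0.4}

def focus_step(desktops):
    # The focus jumps to whichever desktop requires the most attention...
    desk = max(desktops, key=lambda d: d["attention"])
    # ...and flows its candidate actions through the emotion grid.
    scored = [(emotion_grid.get(a, 0.0), a) for a in desk["next_actions"]]
    score, action = max(scored)
    if score >= ENGAGE_THRESHOLD:
        return desk["name"], action   # engaged
    return desk["name"], None         # desktop is asked to investigate further

print(focus_step(desktops))  # -> ('car_driving', 'brake')
```

In the real design the emotion grid is part of memory and context-dependent, not a static table; the loop structure is the point here.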

Above everything, there's the system's memory. It is a perception provider, based on perceptions previously provided. It is made of sequences of events. When events happen in the system, a sequence containing these events is excited and pops the rest of the sequence up (if I say "one, two, three o'clock, four o'clock...", what do you say? -> "rock"). The system's emotion grid is part of the memory. It's made of a lot of tiny modifiers. Each modifier is attached to a sequence, and carries an emotional charge resulting from previous experiences.
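The excitation mechanism is essentially prefix completion over stored sequences, which a small sketch makes plain (the stored sequences are just examples):

```python
# Stored event sequences; a matching prefix "excites" one and pops the rest.
sequences = [
    ["one", "two", "three o'clock", "four o'clock", "rock"],
    ["do", "re", "mi"],
]

def complete(events):
    for seq in sequences:
        if seq[:len(events)] == events and len(seq) > len(events):
            return seq[len(events):]   # the rest of the sequence pops up
    return []

print(complete(["one", "two", "three o'clock", "four o'clock"]))  # -> ['rock']
```

A real memory would score partial and overlapping matches rather than demand an exact prefix; this shows only the pop-the-rest behavior.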

Finally, the "skin" of the system is the sensor fabric. There are sensors outside the mind and inside the mind. Sensors outside the mind are things like eyes and ears. Sensors inside the mind react to events and activities within the system, so the system can feel what's happening inside itself, observe itself, and evolve.

Re: Big
« Reply #1 on: May 25, 2016, 05:34:31 am »
The big question (the hard problem) is: how is the simulation seen, and how does 'that which does the seeing' control the simulation (alter it, and choose, in general, to access memory)?

If an AI could see its inner world simulation and choose what to do with it (not because a random generator tells it what to do, or because it is programmed to want to do it), aware of the simple rules of the simulation but unaware of its limits, unaware of the point or purpose of the simulation (in essence, given freedom to explore, discover, and learn about itself and the simulation), can that really be all consciousness is?

Information is fed through our eyes (and other senses), stored in our memory, and passed through our internal simulation. Whatever our internal awareness mechanism is, it can somehow see the information affecting our internal simulation, and it organizes new information into the interlinked filing systems of memory. And that which is our awareness, that which sees, explores and learns how it can interact with, create, and use information (seen, and from memory) to alter the internal simulation, continually alters that simulation, in the form of internal monologue, thought, and continual recognition of incoming sensory information.

I think perhaps the big difference between the human mind and AI lies in the simulation realm: is the AI not 'seeing' a complex, detailed pseudo-3D/4D image or moving image, but instead seeing electrons, and/or binary, and/or code? It does not see a pseudo-physical realm that it can mould from all angles near the speed of light, and play with.

The question is: how can a mechanism be made which can see such a realm, and have the 'unforced' (free will, choice) motivation to explore and create, both itself and its inner world, with an understanding of its potential and a discovery of its possibilities?

and if such a mechanism can be made, is it automatically conscious?

Once a 'singular' system contains a simulation which it can accurately see and play with, can store its experiments in memory, and can search for new sets of information outside of itself to store in memory and play with (I am sure reason, purpose and desire fit in somewhere here), is that system conscious?



 


8pla.net
Re: Big
« Reply #2 on: May 25, 2016, 04:35:17 pm »
Nathaniel Gnarmeister raised some good questions such as, "The big question (hard problem) is how is the simulation seen, and how does 'that which does the seeing' control (alter the simulation, choose (in general) to access memory) the simulation. "

From the point of view of a college graduate with a registered area of study in A.I. within a traditional major: how a simulation is seen is through a viewport, focused by trigonometry and other mathematical calculations.

Whatever is unseen is not calculated, saving an enormous amount of processing. This saving is a shortcut that creates the impression the computer is more powerful than it actually is. In other words, if it had to calculate all of the unseen, the simulation would slow down.

There is really no reason this elegant technique should not be used on supercomputers, which may not need to save the processing. The human brain does not calculate what it cannot see, I think. Or maybe it does, somehow; what do you think? If you are interested in game A.I., we can discuss chase-and-evade algorithms, collision detection, and more.
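The "don't calculate the unseen" shortcut is visibility culling. A hedged toy version, using a 2D field-of-view test instead of a real engine's full 3D frustum test, shows the idea: objects outside the view cone get no processing at all.

```python
import math

def visible(camera_pos, camera_dir, fov_deg, obj_pos):
    """True if obj_pos falls inside the camera's field of view.
    camera_dir is assumed to be a unit vector (illustrative 2D case)."""
    vx, vy = obj_pos[0] - camera_pos[0], obj_pos[1] - camera_pos[1]
    dist = math.hypot(vx, vy)
    if dist == 0:
        return True
    # angle between the view direction and the direction to the object
    dot = (vx * camera_dir[0] + vy * camera_dir[1]) / dist
    dot = max(-1.0, min(1.0, dot))
    return math.degrees(math.acos(dot)) <= fov_deg / 2

cam, facing = (0, 0), (1, 0)            # camera at origin, looking along +x
objects = [(10, 0), (10, 3), (-5, 0)]   # the last one is behind the camera
seen = [o for o in objects if visible(cam, facing, 90, o)]
print(seen)  # -> [(10, 0), (10, 3)]; the unseen object is never processed
```

Everything downstream (lighting, animation, physics detail) would only run over `seen`, which is where the enormous saving comes from.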
My Very Enormous Monster Just Stopped Using Nine

keghn
Re: Big
« Reply #3 on: May 25, 2016, 07:33:25 pm »

Quote
Once a 'singular' system, contains a simulation, which it can 'accurately, well see', and play with, store its experiments in memory, search for new sets of information outside of itself, to store in memory and use to play with, (I am sure reason and purpose and desire fit in somewhere here), is that system conscious?


If it uses an internal simulator, it would have intelligent consciousness, or primate consciousness.
Lower animals have little use for internal simulators. They are conscious creatures of low intelligence, and their physics engine comes from instinct.

For slow robotic movement you do not need a physics engine. For throwing a ball and catching it you do, if you are a creature that learns from its parents and not directly from instinct.



ivan.moony
Re: Big
« Reply #4 on: May 25, 2016, 08:13:35 pm »
Prediction can be done with simulation. You run combinations on internal representation and check results.
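That two-sentence recipe — run combinations on an internal representation, check the results — can be sketched directly. The "internal model" below is a one-line motion guess and the candidate set is invented; only the try-internally-then-pick structure matters.

```python
def simulate(position, velocity, steps):
    """A deliberately cheap internal model of the world."""
    for _ in range(steps):
        position += velocity
    return position

def predict_best(position, candidates, target, steps=5):
    # Run each candidate action through the internal model and
    # keep the one whose predicted result lands closest to the target.
    return min(candidates,
               key=lambda v: abs(simulate(position, v, steps) - target))

best = predict_best(position=0.0, candidates=[-1.0, 0.5, 2.0], target=11.0)
print(best)  # -> 2.0 (five steps at 2.0 reaches 10.0, closest to 11.0)
```

Nothing is executed in the real world until the internal check has already picked a winner, which is the whole value of prediction-by-simulation.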

keghn
Re: Big
« Reply #5 on: May 25, 2016, 11:54:45 pm »
There is also the point of view you want to take in a simulation. And there is how many simulations you want to run on top of each other, or whether to run a simulation on top of observed reality.
The time flow can be controlled in each so nothing crashes into anything else. Which brings up the controls:
A temporal dialer.
An object selector dialer, to select an object to focus on, or a background.
A similarity dialer, to file through objects similar to the selected one.
A background dialer, to select different backgrounds and other objects not focused on.
A sub-temporal dialer, to dial through all the positions a selected object can be in, like different poses.
A direction dialer, for the movement that a selected object can take.
And a post-and-past dialer.

The physics engine in higher intelligent consciousness is learned by trial and error, and by comparing observed objects moving in reference to one another.

keghn
Re: Big
« Reply #6 on: May 27, 2016, 02:01:32 pm »

keghn
Re: Big
« Reply #7 on: May 27, 2016, 10:13:35 pm »
A similarity dialer could be built with voxel-based variational autoencoders.
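One way to read that suggestion: if each object has an embedding (for instance, the latent code a voxel-based variational autoencoder assigns to its 3D shape), then "dialing through similar objects" is just ranking by distance in that latent space. The tiny 2D vectors below stand in for real latent codes.

```python
import math

# Stand-in latent codes; a trained VAE encoder would produce these.
latents = {
    "mug":    [0.9, 0.1],
    "cup":    [0.8, 0.2],
    "bucket": [0.5, 0.5],
    "car":    [-0.9, 0.3],
}

def dial_similar(name):
    """File through the other objects, nearest-in-latent-space first."""
    q = latents[name]
    others = [(math.dist(q, v), k) for k, v in latents.items() if k != name]
    return [k for _, k in sorted(others)]

print(dial_similar("mug"))  # -> ['cup', 'bucket', 'car']
```

Turning the dialer one notch is then just stepping through this ranked list.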

8pla.net
Re: Big
« Reply #8 on: May 28, 2016, 03:22:07 am »
Keghn pointed out, "The time flow can be controlled in each [simulation] so nothing crashes into each other."

The simulation is a two-dimensional flat surface, like a screen, that three dimensions are mapped onto mathematically. Time is the fourth dimension. So a temporal dialer calls time, with the distances between calls continuously short, but not equal. The calls, I would say, are made serially, in a series on the dialer; don't you agree? There is a point to simulating in parallel: the human brain is parallel. What would be the temporal aspects of simulating serially on a temporal dialer?
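The "2D surface mapped to by three dimensions" part is ordinary perspective projection, which a few lines make concrete (the focal length and points are arbitrary examples):

```python
def project(point3d, focal_length=1.0):
    """Map a 3D point onto the flat 2D viewport."""
    x, y, z = point3d
    if z <= 0:
        return None   # behind the camera: not drawn at all
    return (focal_length * x / z, focal_length * y / z)

# The same object twice as far away lands half as far from the screen center:
near = project((2.0, 2.0, 2.0))
far = project((2.0, 2.0, 4.0))
print(near, far)  # -> (1.0, 1.0) (0.5, 0.5)
```

The serial-versus-parallel question is separate: this mapping itself is embarrassingly parallel (one divide per point), which is exactly how GPUs treat it.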

keghn
Re: Big
« Reply #9 on: May 28, 2016, 03:24:33 pm »
Yes, pretty much right on all of it, 8pla.net.
The dialer is measured with a counter register, which has controls for forward and backward, at speeds from zero to 100 times faster.
In the brain, this counter would be nerve cells daisy-chained together. A pulse travels from one nerve to the next. When a pulse enters one of these nerves, a second pulse is generated, sent off, and enters the "Main Auto Encoder" neural network from a different direction.

An artificial autoencoder neural network can be trained to generate an image from just a value from the dialer counter. One value for one frame. As such:
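A hedged sketch of that counter-to-frame interface: the counter register steps forward or backward at some speed, and a decoder maps each counter value to one frame. The "decoder" here is a stand-in lookup table; in the proposal it would be a trained autoencoder's decoder generating the frame.

```python
# Stand-in for frames a trained decoder would generate, one per counter value.
frames = {0: "frame_A", 1: "frame_B", 2: "frame_C"}

class CounterRegister:
    """The dialer's counter: forward/backward at a chosen speed."""
    def __init__(self):
        self.value = 0

    def step(self, direction=1, speed=1):
        # direction: +1 forward, -1 backward; speed: counts per tick
        self.value = max(0, self.value + direction * speed)
        return self.value

def decode(counter_value):
    """One value in, one frame out (lookup standing in for the decoder net)."""
    return frames.get(counter_value % len(frames))

reg = CounterRegister()
reg.step(+1, 2)           # fast-forward two counts
print(decode(reg.value))  # -> frame_C
```

The daisy-chained-nerves picture corresponds to `step()` here: each pulse advances the count by one and simultaneously triggers a decode.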




LOCKSUIT
Re: Big
« Reply #10 on: July 28, 2016, 11:54:23 pm »
TOILETSSSSSSSSSSSSSSSSSSSSSSSS

I recently had a dream where I saw buses pass through each other...

WAIT - what is the purpose of a chair turning into a table as it approaches? WHY WHY.
WAIT - what is the purpose of turning a high-quality video into a messed-up, blotchy, blurred video? Which I have actually never "remembered", nor dreamt.
Emergent          https://openai.com/blog/

keghn
Re: Big
« Reply #11 on: July 29, 2016, 01:11:03 am »
Autoencoder neural networks store their information as a group effort. Every memory location holds a small piece of the image. Regular movies store each frame in one location.

If a location in the NN gets damaged, the information can be recovered from the other areas. The bad thing is that it takes a long time to record the information in, and it has not been perfected yet.

 

kei10
Re: Big
« Reply #12 on: July 29, 2016, 01:21:18 am »
Interesting, as explained by keghn -- but as for why blurry video...


Zero
Re: Big
« Reply #13 on: July 29, 2016, 08:58:55 am »
Hi guys,

Quote
[...]can that really be all consciousness is?

Information is fed through our eyes (and other senses) [...]

No, that's not all consciousness is, if information is fed only through the eyes and other external senses. But if you also have internal senses which allow you to feel what's happening inside of you, and if your simulation is not only about the outer world but also about your inner world (i.e. if you can also simulate your own mind), then yes, that is consciousness.

Quote
how does 'that which does the seeing' control (alter the simulation, choose (in general) to access memory) the simulation.

Choices are made by "focus" & "desktops". Choices are applied by virtual actuators that change the simulation parameters. Altering the simulation is like moving an arm.

Zero
Re: Big
« Reply #14 on: July 29, 2016, 02:16:07 pm »

Quote
The big question (hard problem) is [...]

Actually, the big question is: what do you want to achieve? Picture yourself in a desired near future, thinking in the morning, "hey, today is a holiday, I'm gonna play with my brand new AGI!" What are you going to play with? A robot? A console command-line interface? An in-browser answering machine? A virtual 3D avatar? A very special Linux distro?

I believe the problem is not software. Ordinary computers are very fast and have enough memory. But hardware tech is not ready for it. We can't make skin with thousands of sensors per square centimeter. Today's batteries don't last long. This is the real problem: AGI needs a body, and we can't build one today.

That's why I tend to think that, instead, AGI's body should allow it to live in the digital world: the internet. And what kind of body can "live" on the internet? Browsers. => AGI's body has to be a browser. The GTA metaphor is still accurate, just applied to the web. What is the www?

We also need a mind, a middleware capable of parallel computation and asynchronous event handling. NodeJS can do this. Plus, JS is today's BASIC: anyone can use it, so the thing can be crowdsourced.

A browser as body + NodeJS as brain = node-webkit (now NW.js), or GitHub's Electron.

Then, what?

First you'll need good ol' procedures, the meat of action. But as I said in another thread, procedures should also be able to "feel" things: feeling and acting are two sides of the same coin. A man puts meat on a fire, he cuts potatoes, he puts dishes on a table... What is he doing? Right: you felt it thanks to the very procedure you'd use to do it yourself.

You don't need ontologies. Ontologies are an illusion. You just need jumps from questions to answers. Ask a child "What's an eagle?" and you'll get "a bird". Ask him "What's a penguin?" and you'll get "an animal", which is not ontologically correct. But it's still the most natural answer, the "correct" one.
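Those question-to-answer jumps can be sketched as a direct lookup instead of an ontology walk: the stored answer is whatever a person would naturally say, taxonomic correctness be damned. The table entries are just the examples from above.

```python
# Direct question -> answer jumps; no class hierarchy is consulted.
jumps = {
    ("what_is", "eagle"): "a bird",
    ("what_is", "penguin"): "an animal",  # not 'a bird', and that's the point
}

def answer(question, topic):
    return jumps.get((question, topic), "I don't know")

print(answer("what_is", "penguin"))  # -> an animal
```

An ontology would derive "penguin is-a bird is-a animal" and answer "a bird"; the jump table stores the learned, natural answer directly.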

You'll also need frames, because everything you do, you want to be able to observe it, since it's part of the world. Your mind is part of the world you're observing, Nathaniel.

I can feel you understood it all already once in your life, during a second. Just go back to this exact moment and stand still  8)

Have a nice day!

 

