Recent Posts

(Ignorant assumptions plus bigger tweaks let you generate likely-good sense/motor 'actions', used in short-term or long-term creation of knowledge. Ex. "hmm, let's go down that tunnel", or generating "motor can use rods" and then quickly (short-term) validating that generation.)

Now, if you have many low-level objects and only one high-level objective, the hierarchy must converge: under the single top-level objective sit, say, 3 objects, then 5, then 10, then 25, with perhaps 328 near the bottom area and 100 below that. It grows wider from the start onwards but shrinks to one node at the top. Say the top objective of AGI is immortality: of the 7 nodes below AGI, maybe only 5 of them actually make AGI, and all 5 are needed. Using heuristic desires, you narrow the path up the hierarchy's layers until, at a certain level of complexity, you find a desired node you call AGI or immortality. You actually begin with food/sex rewards, so those mark the starting nodes at a certain layer.
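The shape described above — one node at the top, layers widening toward the bottom — can be sketched as a simple tree walk. All node names and layer sizes here are illustrative assumptions, not a spec:

```python
# A minimal sketch of the objective hierarchy: one top-level objective
# ("immortality") with layers that widen toward the bottom.
hierarchy = {
    "immortality": ["AGI", "health", "backup"],  # 3 nodes under the top
    "AGI": ["learning", "memory", "planning", "motor", "senses"],
    # ... deeper layers would keep widening toward raw sense/motor primitives
}

def layer_sizes(root, tree):
    """Breadth-first walk: return how many nodes sit on each layer."""
    sizes, frontier = [], [root]
    while frontier:
        sizes.append(len(frontier))
        frontier = [c for n in frontier for c in tree.get(n, [])]
    return sizes

print(layer_sizes("immortality", hierarchy))  # [1, 3, 5]
```

The layer widths grow going down and collapse to 1 at the top, which is the convergence the paragraph describes.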

As with motor 'action' networks, sensory 'action' networks can use Hindsight Experience Replay (HER) too. You generate an idea 'action' and see the result: you successfully make a discovery, just not the one you wanted. Then in the future, say you look at a topic ex. robotics, find the discovery you made in that field, and trace backwards how you came to it. You can discover many things (even just a generation that's not validated, or a validation of a generation), or be taught many facts already discovered by others; ex. you teach a Christian the AI field, and at first they don't believe it, desire it, think about it, or use it, but they come back later and do (say they become interested in the validation of a generated fact). Hence your ranks on facts (your HER goals) morph/change.
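The HER idea above can be sketched in a few lines: when a generated 'action' misses its intended goal, the transition is stored a second time with the achieved outcome relabelled as the goal, so the miss becomes a usable success. The buffer format and the robotics example strings are toy assumptions:

```python
# Sketch of Hindsight Experience Replay (HER) applied to idea generation:
# a discovery that missed the original goal is re-stored as a success
# for the goal it *did* reach.
replay_buffer = []

def store_with_hindsight(state, action, achieved, intended_goal):
    # Original transition: a failure with respect to the intended goal.
    replay_buffer.append((state, action, intended_goal, achieved == intended_goal))
    # Hindsight relabel: pretend the achieved outcome was the goal all along.
    replay_buffer.append((state, action, achieved, True))

store_with_hindsight("robotics", "combine motor+rods", "walking rig", "flying rig")
print(replay_buffer)  # second entry is the relabelled success
```

Revisiting the field later and "finding the discovery you made" corresponds to replaying that relabelled transition.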

If you parse Wiki, the AGI can learn lots fast. This would build the low level of the network and refine it very quickly, though the higher layers less quickly; still darn quick compared to manual teaching! Facts like "There is functioning tools." or "cats love" will all contain snippets like "there is", and that will overwrite/refine the low-level language snippets many times, but much less so for the large snippet node "cats love good food and dogs want good food". The low level will begin to have triangles popping out of it like a crown, with the low level as the bottom of the crown hat: many facts use the same atomic snippets, but on a larger scale are very different.
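The frequency effect above — short snippets refined many times, long ones rarely — shows up directly if you count n-grams over a tiny toy corpus (the facts below are assumptions for illustration):

```python
# Short snippets recur across many facts and get refined far more often
# than long snippets, which are nearly unique.
from collections import Counter

facts = ["there is functioning tools", "there is good food",
         "cats love good food", "dogs want good food"]

def ngrams(text, n):
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

counts = Counter(g for f in facts for n in (2, 4) for g in ngrams(f, n))
print(counts["good food"])            # 3 -- the short snippet is reused often
print(counts["cats love good food"])  # 1 -- the long snippet is rare
```

The low-level "bottom of the crown" is exactly the set of high-count short snippets; the "triangles popping out" are the low-count large ones.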
General Chat / Re: Can you guess the lyrics of this song?
« Last post by LOCKSUIT on Today at 02:14:37 am »
Now we know why those sirens go off at that part!
If my assumptions are the case, then from here on I see myself making huge jumps at a steadily faster pace, because I have already made multiple huge, incredibly greedy jumps in the past months.
Boom. Look at this Two Minute Papers video, watch the full 2 minutes of it. Notice it says curiosity (ignorant assumptions), and more!

Right, like what's the point of getting your hands dirty tweaking an image recognizer when you won't discover the AGI framework that way? Sit back, on the high level. Not that learning all the lingo in deep nets isn't a good idea; in fact it's probably what I need now, I decided right this moment, since I'm working on the 2nd-level symbolic network above the object-recognizer network, which is not much different from image recognizers.
Now let's take "Cats love good food on a trip to Hawaii because snowflakes are something that makes no sense." Here some pieces can fit together, ex. "good food", "cats love", "cats love good food", but not the large snippet beginning at "snowflakes...". Now let's find something new that can fit together (fits = common; doesn't fit = legal, just uncommon?) but isn't useful/common/important: "cats can move giant stars". Of course verb-object verb-object chains of fitting pieces fit, ex. "bird eats tree and leaves cat on box talking to cube", but not "cat swims cat swims barn eats plane". "New mountains have massive resources that AGI can use" fits and is newer near the bigger layers, since humans have already discovered many of the smaller levels ex. fish eat, birds fly, motors turn, motors and rods make robots. I think impossible, uncommon combos in the higher layers (focus on higher-layer combos to understand this, not bottom-level ones) don't fit because they can't and don't happen in physics. So the term "fits" means legal/common/possible-in-physics. But a combo can still be useless: not used and uncommon, yet it works. So use possible combos; absolutely we'll be generating new combos at low and high layers, ex. "dogs build new stars because cats eat clouds", and use heuristic desires so as to generate likely-good knowledge/tool discoveries.
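One crude way to operationalize the "fits" test above is to score a new combo by whether each of its adjacent word pairs is attested in known facts. The corpus and the all-pairs criterion are toy assumptions; a real "possible in physics" check would need far more than co-occurrence:

```python
# Score a generated combo by whether its adjacent word pairs ever
# co-occur in known facts ("fits" = every pair is attested).
known = ["cats love good food", "dogs want good food", "birds fly",
         "motors turn", "motors and rods make robots"]
pairs = {(a, b) for f in known for a, b in zip(f.split(), f.split()[1:])}

def fits(combo):
    ws = combo.split()
    return all((a, b) in pairs for a, b in zip(ws, ws[1:]))

print(fits("cats love good food"))  # True  -- every pair is attested
print(fits("cat swims barn eats"))  # False -- unattested pairs
```

A combo that passes this test can still be useless, which is where the heuristic desires come in: they rank the legal combos by expected reward.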
General AI Discussion / Blindsight and rat-brain supercomputers
« Last post by infurl on June 24, 2018, 10:24:15 pm »

Here is a talk by Peter Watts about the nature of consciousness. It is simultaneously fascinating, hilarious, inspiring and frightening. Just wait till you get to the part about the guy who made a supercomputer using rat brains wired together.  :o
You may think: yeah, nose and eye can combine to create a face, and so on higher up, but can diverse, unrelated facts/tools combine too? I.e., are you sure we should build from "hammer canBreak glass" up to a hierarchy network like "cats love good food" or "If the woman was holding a purse and a rake, ..."? YES. They are made of little atoms of facts/parts. Plus, like image recognition, there is a single goal at the top: food, immortality, AGI, and so on lesser down. And back to the small atomic parts: they do feel like mini/sub-rewards, ever notice? Notice also that not only does "mr smith, a top city doctor" go together, but this particular creation may happen to be a very useful, rewarding discovery/invention, unlike other fitting ones that can fit but aren't important or common.
(Slightly edited to be more clear:) We recognize, say, a face (in the 1st-level NETWORK), then we use ex. face, purse, motor, etc. to come up with sentences/ideas in our symbolic 2nd-level network (not 2nd-level layer, 2nd-level network), ex. "If the woman was holding a purse and a rake, she could be doing multiple things at once.", each being an object recognized and transformed into a single symbol.

Like image recognition (but with the top of the hierarchy blanker), if it generates a face/AGI it'll know it by recognizing that dot product. But the higher levels it goes, the more is missing: it only knows the bottom and top objects. It can find intermediate points, like "oh, between edges and face are nose and eye". It can also extend, say, a middle object in the hierarchy: "I want to create AGI, and that links to seeing it walk around, know commonsense, and form discoveries". In a sense it can go top-down or bottom-up from the top objective node, the bottom, or any object in the middle.
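The dot-product recognition mentioned above can be sketched minimally: a generated pattern is "recognized" when its normalized dot product (cosine similarity) with a stored template vector clears a threshold. The vectors and the 0.9 threshold are illustrative assumptions:

```python
# Recognize a generated pattern by its dot product against a stored template.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

face_template = [1.0, 0.8, 0.2, 0.0]   # stored "face" object
generated     = [0.9, 0.7, 0.3, 0.1]   # candidate the network generated

recognized = cosine(generated, face_template) > 0.9  # toy threshold
print(recognized)  # True
```

The same check works at any layer of the hierarchy, which is what lets the search run top-down or bottom-up from any known object.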

In working up the hierarchy, the process for discovery/success should be the same at every level, with possibly steady discoveries being made: if trying an edge and a curve together with tweaks to discover a level-2 type of object works 1/8th of the time, it may be the same at higher levels. I find my own knowledge/smarts getting exponential, as is our technology/evolution, because we now have a language/structure: a sequential hierarchy for acquiring, discovering, and implementing knowledge/tools and motor skills. It can be in the form of text, audio, or vision, ex. looking at "hammer smash glass" (though, as realized, more like "If the woman was holding a purse and a rake, she could be doing multiple things at once."). Everything is much more understandable/straightforward now.

Before, I was saying you've got a language ex. "hammer canBreak glass" that can be either text or vision, ex. looking at a hammer smashing glass. But now I see the language as a hierarchy network, ex. "good food", "cats love", "cats love good food": parsing, finding useful common features, and combining them into deeper layers. My whole project just got structured.
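"Finding useful common features and combining them into deeper layers" can be sketched as repeatedly merging the most frequent adjacent pair of units into one node, much like byte-pair encoding. The corpus and single-merge step below are toy assumptions:

```python
# Merge the most common adjacent pair across the corpus into a single
# deeper-layer node (one BPE-style merge step).
from collections import Counter

corpus = [["cats", "love", "good", "food"],
          ["dogs", "want", "good", "food"],
          ["there", "is", "good", "food"]]

def merge_best(seqs):
    pairs = Counter((a, b) for s in seqs for a, b in zip(s, s[1:]))
    (a, b), _ = pairs.most_common(1)[0]
    merged = []
    for s in seqs:
        out, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) == (a, b):
                out.append(a + " " + b); i += 2
            else:
                out.append(s[i]); i += 1
        merged.append(out)
    return merged, a + " " + b

corpus, node = merge_best(corpus)
print(node)       # "good food" -- the most common pair becomes one node
print(corpus[0])  # ['cats', 'love', 'good food']
```

Iterating this step grows the hierarchy upward: "good food" becomes a unit, then "cats love good food" can form at the next layer.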
edited the end there, read that.
