Compound learning

  • 16 Replies
  • 7246 Views

ranch vermin

  • Not much time left.
  • Terminator
  • *********
  • 947
  • Its nearly time!
Compound learning
« on: June 03, 2015, 09:23:49 am »
If an AI system was any good, it wouldn't have a linear rate of learning. It's supposed to have exponential learning, so the more it knows, the more it can learn.

That's the equivalent of making cells respond more often to the sensor as it learns, reducing the sensor difference. So as I learn, I learn new ways to fire the cells, and as my cells fire more, I get to learn more from the cells firing.

That's got to be either a recipe for success (compounding knowledge) or a recipe for complete disaster (compounding nonsense).
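The distinction can be sketched in a few lines of Python; the growth rates here are illustrative, not taken from any actual AI system:

```python
# A toy comparison (rates are invented): linear learning adds a fixed
# amount per step; compound learning adds in proportion to what is known.
def linear_learning(steps, rate=1.0):
    knowledge = 1.0
    for _ in range(steps):
        knowledge += rate
    return knowledge

def compound_learning(steps, rate=0.1):
    knowledge = 1.0
    for _ in range(steps):
        knowledge += rate * knowledge  # the more it knows, the more it learns
    return knowledge

# Given enough steps the compounder overtakes any fixed-rate learner
assert compound_learning(100) > linear_learning(100)
```

The same multiplier applied to noise instead of knowledge is what makes the "compounding nonsense" failure mode just as fast.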



Korrelan

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1454
  • Look into my eyes! WOAH!
    • YouTube
Re: Compound learning
« Reply #1 on: June 03, 2015, 11:01:38 am »
Surely an AGI has to ‘compound’ all knowledge, even nonsense, otherwise how would it know it's nonsense? I think knowledge is based on a hierarchical structure: simple information at the base/beginning, combining into more and more complex structures the higher/longer you go.

An AGI's knowledge structure is not anything like a standard pyramid though, because high-level learning has to guide/influence new low-level sensory learning and vice versa; it is based more on time than dimension. A newly discovered low-level pattern has to be able to influence all other learned patterns. We do this by storing each facet of a piece of knowledge in a different location in our cortex and then recombining the knowledge when required. So we have only a couple of locations that store the meaning/concept of, say, water; if we learn something new about water it will be integrated into all knowledge that includes water, and water is of course itself stored/broken down into its facets. Just like we combine lines to recognise letters and letters to recognise words etc, we can convey any knowledge by simply recombining 26 letters.


If your system had an ‘alphabet’ composed of a number of standard vector angular-velocity changes, then by recombining the ‘alphabet’ in different sequences you could recognise/describe objects. I would also store the ‘alphabet’ as positions/vectors in a matrix, not as numbers, so you can generalise from the position of a new incoming vector velocity what the new object looks similar to… for categorising etc.
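A rough Python sketch of what storing the 'alphabet' as vectors rather than opaque numbers could look like; the letter names and vectors are invented for illustration:

```python
import math

# Hypothetical 'alphabet' stored as (dx, dy) velocity vectors rather than
# numeric codes, so a new incoming velocity can be generalised to the
# nearest known letter for categorising.
alphabet = {
    "up":    (0.0, 1.0),
    "right": (1.0, 0.0),
    "down":  (0.0, -1.0),
    "left":  (-1.0, 0.0),
}

def closest_letter(vx, vy):
    """Classify an incoming velocity by its nearest stored vector."""
    return min(alphabet, key=lambda name: math.dist(alphabet[name], (vx, vy)))

# A noisy, mostly-upward motion still generalises to 'up'
assert closest_letter(0.1, 0.9) == "up"
```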

Anyway I’m ranting… My AGI’s learning curve is not exponential; it learns low-level information at a steady rate, but combines the data into more and more complex ‘thoughts’ over time.

Hmmm… perhaps it might be exponential.  (grabs calculator lol)  :)
« Last Edit: June 03, 2015, 11:55:13 am by korrelan »
It thunk... therefore it is!...    /    Project Page    /    KorrTecx Website

ranch vermin
Re: Compound learning
« Reply #2 on: June 03, 2015, 12:01:48 pm »
I value your rantings.

We are heading down a similar track. I'd say you and me are sorta sensor-first-based AI type guys... like all neural net people. We are the ones developing the small yet working development tricks off the sensor, just not getting quite to the real intelligence. :)

Quote
A newly discovered low level pattern has to be able to influence all other learned patterns.

Compound is like saying that the other way round, to me: all learnt patterns influence a newly discovered low-level pattern (how it will enter the system). Although you can read it either way to mean whatever you want. :)

In other words, because of what I know, I can apply this new knowledge to it without even knowing what it is at first... which I had to work my way up to begin with. So the phrases tell the words how to combine, and the words tell the letters how to combine, to become a step up from just raw data and more of a symbolic cause. Raw data will now insert itself in places where it would not otherwise have known where to go, just as the letters themselves do.


I wonder who's going to finish this damn thing off in implementation. I wonder if I'll actually finally work it out (because it was obvious!!!) and then it'll end up uncomputable to a degree, or something crap will happen and I won't be able to finish it.

I hope not. I also wonder where my greed will stop in wanting to brain-child intelligence, and when I will be satisfied with the toys I've made... because I'm not sure how much of a genius it makes me, if at all... so it's got to be for the toys I'm going to get.

Korrelan
Re: Compound learning
« Reply #3 on: June 03, 2015, 01:02:10 pm »
If processor load/speed is becoming a problem, try segmenting the video with a square or circular matrix, then only track the patches of pixels that occur under the intersections (see D).



Once you have your ‘alphabet’ of velocity changes that relate to the template, you can easily apply the template to other areas too (see A, B, C).

You can also use a scale factor to compensate for perspective.

If a particular rotational/movement vector is detected, you could apply all known templates containing that vector/speed/variation, with the correct intersection over the pixel group, to see if any other matches are found. :)
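The intersection idea above can be sketched in Python (this is not korrelan's actual code; the grid spacing and patch size are arbitrary):

```python
# Instead of tracking every pixel, sample only small patches centred on
# the intersections of a square grid laid over the frame.
def grid_intersections(width, height, spacing):
    """Yield the (x, y) intersection points of a square grid."""
    for y in range(spacing, height, spacing):
        for x in range(spacing, width, spacing):
            yield (x, y)

def patch(frame, x, y, half=5):
    """Extract the square of pixels around one intersection."""
    return [row[x - half:x + half] for row in frame[y - half:y + half]]

# A 640x480 frame with 40-pixel spacing gives 165 patches to track
# instead of 307,200 individual pixels.
points = list(grid_intersections(640, 480, 40))
assert len(points) == 165
```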


« Last Edit: June 03, 2015, 01:50:27 pm by korrelan »


Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 633
    • AI / robot merchandise
Re: Compound learning
« Reply #4 on: June 03, 2015, 01:14:41 pm »
Quote
If an AI system was any good, it wouldn't have a linear rate of learning. It's supposed to have exponential learning, so the more it knows, the more it can learn.
It's called "combinatorial explosion" in AI expert systems. Without regulation, it is a recipe for an exponential decrease in speed and reliability.
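For a sense of scale, a quick Python illustration of how combination counts blow up as a knowledge base grows (the numbers are purely illustrative):

```python
from math import comb

# Counting k-fact combinations from an n-fact knowledge base: the search
# space grows much faster than the base itself.
def combinations_of(n, k):
    return comb(n, k)

assert combinations_of(20, 3) == 1140
assert combinations_of(40, 3) == 9880   # 2x the facts, ~8.7x the triples
```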
CO2 retains heat. More CO2 in the air = hotter climate.


ivan.moony

  • Trusty Member
  • ************
  • Bishop
  • *
  • 1729
    • mind-child
Re: Compound learning
« Reply #5 on: June 03, 2015, 01:58:56 pm »
Quote
If an AI system was any good, it wouldn't have a linear rate of learning. It's supposed to have exponential learning, so the more it knows, the more it can learn.
Quote
It's called "combinatorial explosion" in AI expert systems. Without regulation, it is a recipe for an exponential decrease in speed and reliability.
This can be resolved to some degree with genetic algorithms: you pick for checking just the particles that have more often participated in finding good formulas. With a reasonably sized knowledge base you can never check all possible combinations. Did you ever think of something and say "it's so simple, how didn't I think of it before?" It's because there are too many combinations to check them all, so we check only some, hoping that we will be lucky with the checked subset.
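One hedged sketch of that idea in Python: sample candidate 'particles' weighted by how often each has featured in good past results, rather than enumerating every combination (the scores are invented):

```python
import random

# Sketch of the genetic-algorithm idea: particles that appeared in good
# formulas before get higher scores and are checked more often.
scores = {"a": 5, "b": 1, "c": 8, "d": 1}

def sample_candidates(k, rng):
    """Draw k particles, biased toward historically useful ones."""
    names = list(scores)
    weights = [scores[name] for name in names]
    return rng.choices(names, weights=weights, k=k)

picks = sample_candidates(1000, random.Random(0))
assert picks.count("c") > picks.count("b")   # high scorers dominate the checks
```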

ranch vermin
Re: Compound learning
« Reply #6 on: June 03, 2015, 02:05:32 pm »
Korrelan, that looks excellent and useful! (I'm stuck tracking every pixel, but I don't quite understand the optimization.) I have a question: what do you plan on doing with the 2D velocities? I think tracking's only purpose is to convert perspective to orthogonal, and maybe help with a bit of inferencing; then a (fancy, with all the mod cons) 4D (3D space + 1D time, rotation invariance, and symbolic interchange - that's the crazy idea so far) chain model can handle the rest.

Don, combinatorial explosion I think is going 2^bitcount, and you can't linearly address anything on a single GPU texture beyond feasibly 24 bits or so. Otherwise you're dealing with storing what's there and disregarding what isn't, because you can't store it all.

What I meant was the problem of a compounding learning system making a mistake and then learning on its mistake: the computer failing to assimilate the sensor properly. Nonsense is perfectly sensical to learn, as long as it happened, but the nonsense I was referring to is failure to assimilate the sensor. So mistakes have to be weeded out before ever being learnt upon.

Ivan, that sounds interesting as a way to get over the combinatorial explosion, although I only meant trying to store the stupid model. But I think if you invariate the data properly, with fewer cells representing the information, gathering what is the same, then this will also deal with the problem, just with some fancy Hebbian-learning-based system (+1, -1). Of course it will always be a problem when trying to do something like this.
It's impossible to store models this large in RAM alone, so best get out the disc paging, boys. :)

Korrelan
Re: Compound learning
« Reply #7 on: June 03, 2015, 03:03:51 pm »
Combinatorial explosion can also be largely avoided by having a high number of input variables, i.e. high resolution of sensory data. E.g. when searching for a word combination, take into consideration the overall topic, who is saying it, the time of day, the current weather, the location etc… the more information considered by the system, the narrower wide tree searches become (chat bots). A variable stack/array that contains the current mindset, i.e. last topic=cars, current topic=holiday, main subject=beach, time=any, user=bob, can be used to help affect how the system interprets a sentence.

I avoid it by using prediction. As soon as a pattern is recognised (because it's been used before and learned from experience), the system predicts all possible known combinations within the current extent of its knowledge; possible pathways are either primed or inhibited for the next pattern, which greatly limits the parameters/paths for the next possible pattern.
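A minimal Python sketch of that priming idea, with invented pattern names and co-occurrence counts:

```python
# Once a pattern fires, only the transitions seen after it in past
# experience are primed; everything else is effectively inhibited for
# this cycle, shrinking the search.
transitions = {
    "edge_left": {"edge_top": 12, "corner": 7, "blob": 1},
    "corner":    {"edge_right": 9, "blob": 2},
}

def primed_candidates(current, threshold=2):
    """Next patterns whose past co-occurrence passes the threshold."""
    return {p for p, n in transitions.get(current, {}).items() if n >= threshold}

assert primed_candidates("edge_left") == {"edge_top", "corner"}
```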

Quote
…the nonsense I was referring to is failure to assimilate the sensor. So mistakes have to be weeded out before ever being learnt upon.

Mistakes can be weeded out automatically. A system should learn all sensory input, then strengthen or weaken its value depending on how often it's proved correct over time… this is part of learning.

Quote
I have a question, what do you plan on doing with the 2d velocities?

If I was writing a system like yours I would store the velocity tracks on a 2D map, and then use the map areas to match/ find similar tracks.  The shaded areas below would be the three tracks of sets of pixels under the intersections on the polar grid over time.  Each set of video frames would produce a set of tracks converted into the 2D space on the map.



By segmenting the map it would be easy to assign a unique code to that particular pattern; this could then be assigned a name, e.g. a cat.

Imagine a stack of these 2D maps on top of each other. Any pixel track plotted at any location can be followed down through the stack and another match found; the rest of the plots on the two maps can then be checked against each other. If a % match is found overall then it's been recognised. You could store the intersection pixel plots for shape primitives (cube, sphere etc) and build up a 3D scene using 3D primitives from the maps.
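A toy Python version of the stacked-map % match; the maps here are flattened binary lists, and the names and cutoff are illustrative:

```python
# Each stored object is a binary map of where its pixel tracks landed;
# recognition is the percentage of filled cells two maps share.
def percent_match(map_a, map_b):
    """Shared filled cells as a fraction of all filled cells."""
    filled = sum(1 for a, b in zip(map_a, map_b) if a or b)
    shared = sum(1 for a, b in zip(map_a, map_b) if a and b)
    return shared / filled if filled else 0.0

def best_match(query, stack, cutoff=0.8):
    """Follow the query down the stack; return the best name over cutoff."""
    name, score = max(((n, percent_match(query, m)) for n, m in stack.items()),
                      key=lambda item: item[1])
    return name if score >= cutoff else None

stack = {"cat": [1, 1, 0, 1], "ball": [0, 0, 1, 1]}
assert best_match([1, 1, 0, 1], stack) == "cat"
```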

I haven’t given it much thought but it seems to work in my head.

Korrelan
Re: Compound learning
« Reply #8 on: June 03, 2015, 05:36:15 pm »
Ok I’ve thought some more…



Start with a 2D array (B) and store the trig coords to remap each intersection into the polar grid (A). Use a higher-resolution grid/matrix as your processor allows.

Using the polar coords, track the movement of a set of pixels (say… 10 by 10) at an intersection in X,Y (C) over several frames. Assuming time is a constant, we can re-plot the movement of the pixels onto a new graph (D) with time on the X axis. This is a letter in your ‘alphabet’. There will be a large number of possible tracks, and you might want to implement an averaging formula or fuzzy logic to find/match them in your database/array. I would segment the graph (D) and use a % match against the database. Anyway… this track (D) has returned a 90% match, which we can translate to (E) in our 2D array/matrix (B). You would have to run this for every intersection in (A) against the selected video frames. You would then end up with a template (B) containing a unique pattern of matches from your ‘alphabet’ database that represents the viewed object/scene. The unique track (B) can be given a name, e.g. sphere; there could be hundreds of these tracks per object viewed, depending on how many learning sessions have been run. Together they would represent how (A) saw the groups of pixels move on the object's surface over a period of time/frames. To recognise an object you could again do a % search of all the stored tracks (K) against the track to be recognised.

Oh yeh! If you train it on generated shape primitives and store the shape, center of rotation etc with each track, you could later superimpose the primitives on the video scene.

There’s also something bugging me about negating scale and rotational invariance by how you extract the data from (A) along the arrows x and y; more thought needed.

Well… It’s a starting point :)

Edit: Got it! When % searching the stack of tracks, if you offset the grid position by 1 in the X plane (looping X times) and use wrap-around for the data, you will gain rotational invariance… doing the same in the Y plane will give scale invariance.
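The wrap-around trick can be sketched in Python with plain lists standing in for one row of the grid; the track values are placeholders:

```python
# On the polar grid the X axis is angle, so cyclically shifting a stored
# track and re-matching gives rotational invariance.
def rotations(track):
    """Yield every cyclic X-offset of a track, with wrap-around."""
    for offset in range(len(track)):
        yield track[offset:] + track[:offset]

def matches_any_rotation(query, stored):
    """True if the query equals the stored track under some rotation."""
    return any(query == r for r in rotations(stored))

stored = ["a", "b", "c", "d"]
assert matches_any_rotation(["c", "d", "a", "b"], stored)   # rotated copy
assert not matches_any_rotation(["a", "c", "b", "d"], stored)
```

The same loop run over the Y axis (radius) would stand in for the scale-invariance half of the trick.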
« Last Edit: June 03, 2015, 07:08:01 pm by korrelan »

ranch vermin
Re: Compound learning
« Reply #9 on: June 03, 2015, 07:06:55 pm »
I see you're classing through motion... I don't quite get every bit of it. Thanks for sharing with me, and good brainstorming, man!
By my thinking, I would simply use the track on the detection sphere-grid as the name into the match into the classes, and then use the relative ratio of distances from the centre, adding rotational invariance by using angle differences in the actual unique identifier. Then you could share classes between similar movements, as well as getting geometric invariance through algebraic constants, not for loops.

I just worked out what I was doing here, with this compounding idea.

Here's synonym detection in language processing:
http://nl.ijs.si/isjt12/proceedings/isjt2012_21.pdf

If you do a synonym pass, connecting the words to the same cell, then each time you do a new pass you will compound your synonyms.

Error will compound, and I don't know how to detect it. And maybe it hasn't got a connection to animation and sculpture like I thought it did... but I'll have to keep thinking about it, because how they compound could be pretty cool, since as far as the computer's concerned they are statistically inseparable. XD

I'm thinking for my first synonym detector, I'll just pick the two words from the text I'm studying with the closest matching string of counted probabilities after them, and see what I get.

I do it 2 words at a time only, but you need lots of 1- and 2-word counts, just so you can eventually collapse the English and form your little taxonomy of the synonymy, to get my maximum possible ounce of worth out of its nonsense.

Then just keep compounding it (reparsing it) and see what it does to text generation! Using the synonym bags instead of the raw data, and picking a random synonym each time it generates a token, because as far as the computer is concerned they are statistically equal. XD
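A tiny Python sketch of the "closest matching counted probabilities after" idea, on an invented corpus:

```python
from collections import Counter, defaultdict

# Words followed by similar next-word distributions become synonym
# candidates and can be connected to the same cell.
corpus = ("the big dog ran home . the large dog ran away . "
          "the big cat sat . the large cat sat .").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def overlap(w1, w2):
    """Shared next-word mass between two words' contexts (0..1)."""
    c1, c2 = following[w1], following[w2]
    return sum((c1 & c2).values()) / max(sum(c1.values()), sum(c2.values()))

# 'big' and 'large' are followed by the same words, so they group together
assert overlap("big", "large") > overlap("big", "ran")
```

Reparsing with the merged bags and counting again is the compounding step; it is also where errors would compound, since a wrong merge pollutes every later count.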

Korrelan
Re: Compound learning
« Reply #10 on: June 03, 2015, 07:34:38 pm »
Quote
and good brain storming man!

Yeah! I've not thought about serial programming methods in ages; blew a few cobwebs out.

My AGI actually uses multiple methods because its visual cortex is based on the standard mammalian one. Edge, point, shade gradient, contrast gradient, motion and colour blob all come from the same visual orientation map.

The downside is... look at the graph on the right... this is my AGI base code... I'm going to call it... WTF language lol.




Cheers  :D

« Last Edit: June 03, 2015, 08:00:35 pm by korrelan »

ranch vermin
Re: Compound learning
« Reply #11 on: June 03, 2015, 08:49:13 pm »
Sorry, I couldn't see it; the video seems to be private.

Your orientation map looks cool (I actually found you on YouTube before here), and your nets look really thrilling to look at; mine just look like a bunch of boring light globes, but I don't mind the aesthetic anyway. :)


Sorry for my self-indulgence, but I just noticed my title is AUTOBOT now!!! Damn, I wish I was a Predacon from Beast Wars; I'll have that sexy evil spider-girl bot for my shag friend.

ranch vermin
Re: Compound learning
« Reply #12 on: June 03, 2015, 09:15:42 pm »
Cool, I saw it now.

Is that operating on CPU or GPU?

And what are you going to do with the WTF spikes? :)
« Last Edit: June 03, 2015, 09:39:08 pm by ranch vermin »

Korrelan
Re: Compound learning
« Reply #13 on: June 08, 2015, 12:14:18 am »
Soz for the delay; I've been working on my bot and watching the DRC finals etc.

The software is running on multiple CPU threads at the moment but can easily be moved to GPU (16 * Titan 980 Ti :D); it's a massively parallel design, and all my sensory inputs (vision, audio, tactile etc) have loop buffers so I can simulate real-time inputs whilst running the simulation at slow speeds (>1ms cycle). Each sensory input can run on a virtual/physically separate machine, piped through a 1GB network or the internet (I have 16 physical CPUs available ATM, currently using 2).

My AGI design is based on biological concepts. I grow each one from a small network and then use neurogenesis etc during ‘sleep cycles’ to enhance its function. Once a network has begun integrating sensory information it cannot be edited, because all senses (vision, balance etc) go to all areas of the cortex and sub-cortical structures (kinda holographic); altering any area physically will destroy its ability to communicate its learning to all others, as they are synced through circadian rhythms. I can, however, influence the network by altering attributes.

The video clip shows one cortical column and its local synapse/dendrite tree receiving and mixing audio and visual sensory trains. The spikes on the graph are the internal language of the frontal-to-back/visual ‘executive’ network whilst receiving additional input from this particular column; the base ‘seed’ network has 500 columns so far and growing. This one developed epilepsy brought on by a visual input frequency of 2 cycles per second toward the red end of the spectrum. I altered the axons between the visual cortex and LGN and started again.

The real intelligence comes from the system learning which patterns combine correctly to produce/add to the overall pattern (resting to executive), which is the full concept or ‘thought’ pattern. I think we get ‘eureka’ moments when two concept patterns are so similar that they merge and link all the underlying learning from both patterns into one.

When we think about a problem, all the relevant patterns for each facet of the problem are running and firing other linked knowledge that shares the same facets. Now if we have already learned that a particular combination of facets produces a correct result in one instance, because the same facets are in the new problem… eureka. :)

ranch vermin
Re: Compound learning
« Reply #14 on: July 27, 2015, 02:37:09 pm »
16 Titans! God, that's going to hit your wallet!!!!
I like the idea of the sensory data coming through a network, but the latency would be pretty bad.

Neurogenesis? So your network is producing something?
All I can produce in mine is geometric invariance, like scrolling + rotation, nothing new other than that, and it's an issue when meeting something new. I've got a faint idea: just by OR-ing enough together (what this thread is about), you should be able to OR the hell out of it and get some kind of basic knowledge base working by connecting enough together.

Also, I was wondering how many cells you are running at once - not that that is a direct indicator of intelligence; it depends on how you use them too.

How far off finishing this beast into a working sentience are you?

 

