1st near neighbour solutions

ranch vermin
1st near neighbour solutions
« on: March 01, 2018, 07:20:45 am »
An exact pattern match is nowhere near as powerful as a near-matching system. Just imagine your AI hitting a dead end: with a near-matching system you can match to the closest differing string instead of an exact one, get back onto your stored pattern track, and playback out of a pattern store works a lot better.

But a naive near neighbour system has to compare against the whole pattern store on every access, so it is roughly a square worse to run than exact matching.

There is the kd-tree implementation, and neural nets/backprop nets do it too, but they have to run their expensive matrix of synapses.

If it were comparing a photo to a store of photos (which is a hypercube case):
The naive approach counts 1:1 against every picture in the store per query, even though doing all of that gives the distance to everything in the system, whereas the kd-tree implementation, and my *combo tree* implementation, work like a little soldier running into the spatial division tree, trying to find the answer in the fewest moves without being tricked into picking the wrong one.
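For a concrete baseline, here is a minimal brute-force sketch of that naive approach (Python, assuming each picture is packed into an integer bitstring so the distance is just a popcount of the XOR); it compares the query against every stored picture on every lookup, which is exactly the square cost described above:

# Naive 1st nearest neighbour over binary patterns (the hypercube case).
# Each picture is assumed packed into a Python int, one bit per pixel.

def hamming(a, b):
    # Number of pixels that differ between the two pictures.
    return bin(a ^ b).count("1")

def nearest(query, store):
    # Linear scan: compare the query against every stored picture.
    best_index, best_dist = -1, None
    for i, picture in enumerate(store):
        d = hamming(query, picture)
        if best_dist is None or d < best_dist:
            best_index, best_dist = i, d
    return best_index, best_dist

# Example with 8-pixel pictures:
store = [0b10110010, 0b01110000, 0b10110011]
print(nearest(0b10110110, store))   # -> (0, 1): one pixel flip away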

[NOTE: I don't mean near neighbour in a 2D-only case, I mean N dimensions, up to 2048 and more. Bit of a step up in difficulty there.]
« Last Edit: March 01, 2018, 10:12:21 am by ranch vermin »

ranch vermin
Re: 1st near neighbour solutions
« Reply #1 on: March 01, 2018, 08:51:59 am »
So if anyone has a solution I haven't heard of for 1st NN in the high-dimensional case, can you please let me know about it?


OK, so here's one I'm working on at the moment.
(DISCLAIMER: I think this got a bit involved and boring, but I'm putting it here so at least I can read it back, because my mind is getting very forgetful. Also, this is doing *n dimensions of a binary hypercube*, so it's a little harder than the examples on YouTube that only do 2 dimensions; that task is so simple I wouldn't bother with it.)


I was writing out this more specialised personal implementation, but I forgot how it worked!!!
But thankfully I just remembered.

It's similar in the end to a kd-tree: it has to run a number of passes before you get the result, but it could still be a fairly decent optimization.

Say we are getting the near matches of pictures.

So say you have an N-ary tree, where N is how many pixels are in your picture (each pixel a 1 or a 0).

If you imagine a number line which works like this: every pixel exists on every pathway, and the index is which pixel is being flipped, so you get a picture by picking one position along it, but one picture appears in many places.

1234123412341234123412341234123412341234123412341234123412341234  ... ->  ->
1111222233334444111122223333444411112222333344441111222233334444  ... ->  ->
1111111111111111222222222222222233333333333333334444444444444444  ... ->  ->
1111111111111111111111111111111111111111111111111111111111111111  ... ->  ->   
                                                  ^- picture                  ^-another picture some pixel flips away
So 1 means the first pixel is flipped, 2 the second, 3 the third, and so on. It is admittedly flawed in that a pixel can be flipped twice or more (the flips cancel out). Any one of those lines is a slot in which any one of the pixels can be flipped.

If my picture has n pixels, there are 2^n possible pictures, but if I lay it out along this pixel-flipping line it means I have n^n positions, so pictures are repeated along it.
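Reading the diagram as base-n counting, a position along the line can be decoded without ever writing the line out. A small sketch of that reading (Python, my own hypothetical names, treating pixel 1 as the lowest bit; this is just my interpretation of the diagram):

# The k-th stacked line in the diagram is the k-th base-n digit of the
# position, so a position decodes into the sequence of pixels it flips.

def flip_sequence(position, n_pixels):
    # Base-n digits of the position, least significant first (top line first).
    digits = []
    for _ in range(n_pixels):
        digits.append(position % n_pixels)    # 0-based pixel index
        position //= n_pixels
    return digits

def picture_at(base_picture, position, n_pixels):
    # Apply the flips to a base picture (flipping the same pixel twice
    # cancels, which is the flaw mentioned above).
    pic = base_picture
    for pixel in flip_sequence(position, n_pixels):
        pic ^= 1 << pixel
    return pic

# 4-pixel example matching the diagram: position 50 decodes to pixels 3,1,4,1
print([d + 1 for d in flip_sequence(50, 4)])   # -> [3, 1, 4, 1]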

What's good about this structure is that I can go left or right from a position, and the first stored picture I hit is the nearest neighbour, that many pixel flips away.

So if I have a set of pictures in my model, they all appear on this line.

But because writing out every combination of a picture, with every pixel flipped in every slot, would be too computational and too big to put down, you have to treat this line in a more theoretical way, in the form of combinatorial invariance.

So I have an N-ary tree, N being the number of pixels, and every junction of the tree is a pixel in the image.

The idea is that when I advance through the tree, I do it in a combinatorially invariant way: I take any one of the key members (which has to be a vector) and use it to access the next node of the tree, and I keep doing this until all slots have been used, and I end up with a distance from the query pattern.

If I add one more restriction, that I must not pick a combination that is *less* distant than the last, and I keep taking samples while making sure I don't go the same direction as before, I should descend upon the result.


But I still have to check whether this works... my mind is terribly leaving me.

I call it a combinatorially invariant tree, as a solution for 1st NN (in its proper case, a hypercube), a way to get the power of machine learning going without using backprop nets, in a more pattern-matching type of method.

My search for the best matching scheme made me steer away from backprop nets, because of the cumbersome square matrix they have to run to give me a weighting between objects, and towards trees, because they assign things instantly, but they have the major flaw of no near matchability.


« Last Edit: March 01, 2018, 10:21:13 am by ranch vermin »

Korrelan
Re: 1st near neighbour solutions
« Reply #2 on: March 01, 2018, 10:39:18 am »
Hi Ranch

Even without using any kind of neural net, I still think a feature-based recognition system is the way to go.

As with a convolutional network, the first stage of any recognition schema is building/getting a set of primitives: 2D blocks of, say, 6x6 pixels that can be applied/searched for against an image to extract details. The blocks would comprise gradients and line segments of various angles.

So as an example, run a line/edge filter against a video frame, then search the total 2D area looking for matches against a library of 6x6 primitives. Note the relative positions/vectors/tensors of the found primitives against each other. This could then be stored/searched in a hierarchical manner. You could probably do this with just a brute-force search, without the use of neural nets.

If you have enough basic primitives to match/cover most of the video frame's details, then just storing the primitives' indexes would give a decent compression ratio, and also allow you to rebuild the original image to an extent.
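A rough sketch of that index-and-rebuild step (Python with NumPy; the primitive library, block size, and sum-of-absolute-differences match over non-overlapping 6x6 blocks are all assumptions here, not a fixed recipe):

import numpy as np

def encode_blocks(frame, primitives, block=6):
    # frame:      2D grayscale array (H x W), e.g. an edge-filtered video frame.
    # primitives: array of shape (K, block, block), the 6x6 library.
    # Returns, for each non-overlapping 6x6 block, the index of the
    # best-matching primitive (brute-force sum of absolute differences).
    h, w = frame.shape
    rows, cols = h // block, w // block
    indexes = np.zeros((rows, cols), dtype=np.int32)
    for r in range(rows):
        for c in range(cols):
            patch = frame[r*block:(r+1)*block, c*block:(c+1)*block]
            errors = np.abs(primitives.astype(float) - patch).sum(axis=(1, 2))
            indexes[r, c] = int(errors.argmin())
    return indexes

def decode_blocks(indexes, primitives, block=6):
    # Rebuild an approximate image from just the stored primitive indexes.
    rows, cols = indexes.shape
    out = np.zeros((rows * block, cols * block), dtype=primitives.dtype)
    for r in range(rows):
        for c in range(cols):
            out[r*block:(r+1)*block, c*block:(c+1)*block] = primitives[indexes[r, c]]
    return out

Storing only the index map is where the compression would come from, and decode_blocks gives the approximate rebuild.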

 :)

Ed: Are you looking for a non-neural method of searching images, or a better/faster hierarchical tree?

ranch vermin
Re: 1st near neighbour solutions
« Reply #3 on: March 01, 2018, 11:04:33 am »
Ah I see, so I have fewer dimensions to look through if I use a fixed number of features? This implementation is doing it against the actual dot/dimension count exactly. Maybe I'm making a mistake...

About those feature hierarchies, I have a bit of experience there and can say you cover a lot of ground with very few of them (well, fewer than I would first expect), so I definitely share that feeling. I remember from doing some work with them that they respond quite well to random images they weren't trained on.

This system could involve that, indeed; it's a prior step to what you actually want to do with the design.

The difference with doing it this way is that if not all features come up, I can still get a match to the closest stored pattern that shares the most features, FROM the whole memory of indices at once. That is slow normally, but it is fixable with a bit of a headache and some work that gets it going faster than just running through the lot (even though, like you said, it would be fewer dimensions with features...).

A fun thing I can do after I get this done is run a video frame set into it and make it skip at near-matching moments to make loops :)  Machine learning art is tripped out.

[EDIT] I'm going to have a hard think and come back...

ranch vermin
Re: 1st near neighbour solutions
« Reply #4 on: March 01, 2018, 11:20:21 am »
I'm back from thinking, and I've decided I'm going to do it this way: hardcore counting all the pixels in a near match should work, as long as I can pull it off. Using features I'd have to make sure I did it right, and brute forcing would still take too long over the whole memory at once. Call me mad Magnus, I'm still going to do it this way! :)

Korrelan
Re: 1st near neighbour solutions
« Reply #5 on: March 01, 2018, 12:02:53 pm »
Cool… who needs hair anyway lol.

A couple of other things to consider: work from a centre point in a polar coordinate schema; if you map your vectors relative to a centre point it will help with scale/rotational invariance at a later date.

Use a detail-density highlighting algorithm to find points of interest/high detail/saliency and then move your centre point of focus there. Most images/video frames have large areas of solid colour etc., so you could limit your search hierarchy massively.

I think you would gain in the long run from working outwards from a centre point of focus. I understand the top-left 0,0 can move in a video frame sequence anyway, but working from a centre would give more versatility.
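A minimal sketch of that centre-relative sampling (Python with NumPy; the centre point, grid sizes and nearest-pixel lookup are my own assumptions, not a prescription):

import numpy as np

def unroll_polar(img, cx, cy, n_r=64, n_theta=128):
    # Sample a grayscale image on a polar grid around a chosen centre of focus.
    # Rows of the result are radii and columns are angles, so a rotation about
    # the centre becomes a horizontal shift, and scale stretches along the rows.
    h, w = img.shape
    r_max = np.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    rs = np.linspace(0.0, r_max, n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r_grid, t_grid = np.meshgrid(rs, thetas, indexing="ij")
    xs = np.clip(np.rint(cx + r_grid * np.cos(t_grid)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(cy + r_grid * np.sin(t_grid)).astype(int), 0, h - 1)
    return img[ys, xs]   # shape (n_r, n_theta); the un-rolled X,Y image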

Good luck and let us know how it pans out.

 :)

ED: Something like this... you can still un-roll the polar coordinates to form an X,Y coordinate image, shown upper left.

https://www.youtube.com/watch?v=O-wCh_mw7cU

 :)
« Last Edit: March 01, 2018, 12:38:30 pm by korrelan »

ranch vermin
Re: 1st near neighbour solutions
« Reply #6 on: March 01, 2018, 01:02:16 pm »
There's nothing on top of my head, God just tricks me in the mirror into thinking it's still there! :)

I read that, and it sounds like how the "semantic balance" would possibly go. Thanks, but if I'm counting the pixels raw I must have some secret weapon, given how primitively I've taken on the situation. >:D

Good luck with the optimization for your AI! Maybe having a GPU isn't as important as I first thought!

keghn
Re: 1st near neighbour solutions
« Reply #7 on: March 02, 2018, 04:04:57 pm »
I'm going with chasing the outline of whatever is in focus. Turn the whole world into a sketch and store it in memory, as a form of compression.
Then later it is converted back to the original when needed.

 

