So if anyone has a solution I haven't heard of for first-nearest-neighbour (1-NN) search in the high-dimensional case, can you please let me know about it?

OK, so here's one I'm working on at the moment.

(DISCLAIMER: I think this got a bit involved and boring, but I'm putting it here so at least I can read it back, because my memory is getting very forgetful. Also, this is doing *n dimensions of a binary hypercube*, so it's a little harder than the examples on YouTube that only do 2 dimensions; that task is so simple I wouldn't bother with it.)

I was writing out a more specialised personal implementation, but I forgot how it worked!

But thankfully I just remembered.

It ends up similar to a k-d tree: it has to run several passes before you get the result, but it could still be a fairly decent optimization.

Say we are getting the near matches of pictures.
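For reference, here is what the brute-force version of that task looks like, assuming each picture is packed into an integer bitmask (the representation and names here are my choice, the post doesn't fix one):

```python
def hamming(a, b):
    # Number of pixels that differ between two bitmask pictures.
    return bin(a ^ b).count("1")

def nearest(query, pictures):
    # Brute-force 1-NN: one distance check per stored picture.
    return min(pictures, key=lambda p: hamming(query, p))

pictures = [0b0000, 0b1011, 0b1100]
print(bin(nearest(0b1010, pictures)))  # 0b1011, one flip away
```

Everything below is about avoiding that linear scan.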

So say you have an N-ary tree, where N is the number of pixels in your picture, each pixel being a 1 or a 0.

Imagine a number line that works like this: every pixel exists on every pathway, and the digit at each level is which pixel is being flipped. Picking one position along the line picks out one picture, but one picture appears in many places.

1234123412341234123412341234123412341234123412341234123412341234 ... -> ->

1111222233334444111122223333444411112222333344441111222233334444 ... -> ->

1111111111111111222222222222222233333333333333334444444444444444 ... -> ->

1111111111111111111111111111111111111111111111111111111111111111 ... -> ->

^- picture ^-another picture some pixel flips away

So 1 means the first pixel is flipped, 2 the second, 3 the third, and so on. (It's actually flawed in that a pixel can be flipped twice or more.) Any one of those lines is a slot for any one of the pixels to flip against.
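The lines above can be generated mechanically: at level k, each pixel index repeats n^k times. A small sketch (taking n = 4 pixels and printing indices 1-based, to match the sketch above):

```python
n = 4            # pixels per picture
width = n ** 3   # enough columns to show the repeating pattern

for level in range(3):
    run = n ** level  # how long each pixel index repeats at this level
    line = "".join(str((col // run) % n + 1) for col in range(width))
    print(line)
```

The first line printed is `1234` repeated, the second `1111222233334444` repeated, and so on, exactly as drawn above.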

If my picture has n pixels, there are 2^n possible pictures, but in this pixel-flipping dimension there are n^n different flip sequences, so the pictures are repeated along the line.
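To see the repetition concretely, here is a tiny enumeration (n = 3, my choice) of all n^n flip sequences applied to one starting picture. The count of distinct results is much smaller than n^n; in fact, because every sequence here applies exactly n flips, parity even hides some of the 2^n pictures:

```python
from itertools import product

n = 3
start = (0, 0, 0)  # tiny 3-pixel picture

def apply_flips(pic, seq):
    pic = list(pic)
    for i in seq:
        pic[i] ^= 1  # a pixel can be flipped twice or more, as noted above
    return tuple(pic)

sequences = list(product(range(n), repeat=n))  # n^n = 27 flip sequences
reached = {apply_flips(start, s) for s in sequences}
print(len(sequences), len(reached))  # 27 sequences, only 4 distinct pictures
```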

What's good about this structure is that I can go left or right from a position, and the first stored picture I hit is the nearest neighbour, that many pixel flips away.

So if I have some number of pictures in my model, they all appear somewhere on this line.

But because writing out every combination of a picture, with every pixel swapping location, would be too computationally expensive and too much to store, you have to get this line in a more theoretical way, in the form of combinatorial invariance.

So I have an N-ary tree, N being the number of pixels, and every junction of the tree is a pixel in the image.

The idea is that when I advance along the tree, I do it in a combinatorially invariant way: I take any one of the key members (which has to be a vector) and use it to access the next node of the tree, and I do this until all slots have been used, and I end up with a distance from the query pattern.
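I can only guess at the exact node layout, but here is one reading of the invariance claim: if each junction consumes one slot (pixel index) and accumulates a mismatch, the final distance is the same whatever order the slots are taken in. All the names here are mine, a sketch rather than the tree itself:

```python
from itertools import permutations

stored = (1, 0, 1, 1)  # a stored picture, 4 pixels
query  = (1, 1, 1, 0)  # the query pattern

def walk(stored, query, order):
    # Consume each slot exactly once, accumulating mismatches.
    return sum(stored[i] != query[i] for i in order)

# Any order of consuming the slots gives the same distance.
dists = {walk(stored, query, p) for p in permutations(range(4))}
print(dists)  # {2}
```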

If I add one more restriction, that I must not pick a combination that was *less* distant than the last, and I keep taking samples while making sure I don't go the same direction as before, I should descend upon the result.
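And here is a rough sketch of how I read that descent rule, using flat lists instead of the tree, bitmask pictures, and a never-reuse-a-slot restriction. All of this is my guess at the intent, not the tree itself, and a greedy walk like this isn't guaranteed to find the true nearest neighbour:

```python
def hamming(a, b):
    return bin(a ^ b).count("1")

def descend(query, pictures, n):
    # Guessed rule: flip one unused pixel per pass, keeping only flips
    # that move closer to some stored picture; stop when no unused
    # slot helps any more.
    current, used = query, set()
    while True:
        best_now = min(hamming(current, p) for p in pictures)
        moves = [
            (min(hamming(current ^ (1 << i), p) for p in pictures), i)
            for i in range(n) if i not in used
        ]
        if not moves:
            break
        d, i = min(moves)
        if d >= best_now:
            break  # no unused direction gets us any closer
        used.add(i)
        current ^= 1 << i
    return min(pictures, key=lambda p: hamming(current, p))

pictures = [0b0000, 0b1111]
print(bin(descend(0b1110, pictures, 4)))  # 0b1111
```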

But I still have to check whether this actually works... my memory is terribly leaving me.

I call it a combinatorially invariant tree: a solution for 1-NN (in its proper case, a hypercube), and a way of getting the power of machine learning going without using backprop nets, in a more pattern-matching type of method.

My search for the best matching setup steered me away from backprop nets, because of the cumbersome square of computation they have to run to give me a weighting between objects, and towards trees, because they are instantly-assigning type things, but they just have the major flaw of not having near-matchability.