Basic Feature Detection

With any kind of feature detection you are basically looking for a pattern within a pattern.
The simplest method is to apply a line/ outline convolution filter to your image and then search for known recognisable pixel patterns. The simplest pixel pattern is a 3x3 block (fig A); there are various ways of defining the block, but the simplest is to assign a binary value to each pixel surrounding the X/Y point being scanned. So, using this encoding method, a value of 20 would equal a P1/ top-left right angle.
This is the method I used to get the outlines of the shapes in this vid, except I use the recognised binary shapes to move a point around a perimeter. I could easily mark the corners of shapes by noting the binary numbers for corners, etc.
A 5x5 (top right) or greater dimension block can be used, but you would probably need a different encoding method; for example, storing the pixel vector positions relative to the centre x,y as a simple list of vectors to be checked.
You then scan your image, summing the binary values of the surrounding pixels of each x,y point; the value returned can be easily checked against an array of known shapes (P1). Store the positions/ vectors in an array/ stack. This method is easy because you are just checking a one-byte (0 to 255) value against a known array of byte/ shape values.
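The scan above can be sketched in a few lines of Python. Note the bit ordering here (top-left = bit 0, clockwise) is an assumption; the original may assign the bits differently, which would change which byte value maps to which corner shape.

```python
def encode_3x3(img, x, y):
    """Sum the binary values of the 8 neighbours of (x, y) into one byte."""
    # Neighbour offsets in an assumed clockwise order starting top-left.
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx]:       # outline pixel set?
            code |= 1 << bit
    return code                        # one byte, 0 to 255

def scan(img, known_shapes):
    """Return (x, y, code) for every point whose byte matches a known shape."""
    found = []
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            code = encode_3x3(img, x, y)
            if code in known_shapes:
                found.append((x, y, code))
    return found
```

Because each 3x3 pattern collapses to a single byte, the "array of known shapes" is just a set of byte values, so the per-pixel check is a single lookup.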
Shape Recognition

Once you have all the vectors for the found feature patterns and their relative positions to each other, you can then find shapes (fig B) by checking for alignments etc.
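As a minimal sketch of such an alignment check (my own illustration, not taken from the original), three stored corner vectors lie on one edge exactly when the cross product of their difference vectors is zero:

```python
def collinear(p, q, r, tol=0):
    """True if points p, q, r (each an (x, y) pair) align within tol.

    Uses the 2D cross product of (q - p) and (r - p); zero means the
    three points sit on one straight line.
    """
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(cross) <= tol
```

Tests like this, applied to the stored corner positions, are enough to pick out straight edges, and equal side lengths or right angles can be checked the same way with distances and dot products.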
Eigen Vectors

This is another method for checking one pattern against another without using an outline convolution filter; though usually a contrast filter is applied first to bring the values of the image being checked in line with our set of eigen features.
The RGB or greyscale values of the pixels surrounding the x,y point being scanned are subtracted from the stored feature's pixel values, and the differences are run through a linear distance formula to see how similar they are. Rather than a simple binary array of numbers to check against, this method uses small shaded bitmaps similar to (fig A, top right).
's1 = greyscale for position 1 (on the binary grid above)
'c1 = greyscale for position 1 from the image being checked
v1 = s1 - c1
'do the same for all nine surrounding pixels, storing v1..v9
dist = SQR((v1*v1) + (v2*v2) + (v3*v3) + ... for all nine surrounding pixels)
The dist returned is a measure of how similar the block of pixels being checked is to the block of pixels in our stored array of eigen corner shapes.
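The pseudocode above translates directly into Python; this sketch assumes the patches are passed as flat lists of greyscale values in matching order:

```python
from math import sqrt

def patch_distance(stored, checked):
    """Euclidean distance between two equal-sized greyscale patches.

    stored:  pixel values of the eigen feature (e.g. a corner bitmap)
    checked: pixel values around the x,y point being scanned
    """
    total = 0
    for s, c in zip(stored, checked):
        v = s - c                 # v1 = s1 - c1 from the pseudocode
        total += v * v
    return sqrt(total)            # dist = SQR(sum of squares)
```

A distance of zero means an exact match; the scan keeps whichever stored feature gives the smallest distance at each point, usually below some threshold.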
http://www.wikihow.com/Find-the-Distance-Between-Two-Points

This method has the advantage that 'eyes/ mouths' can be defined as small blocks of pixels that can be checked against an image.
Motion Detection/ Optical Flow

If you have the relative positions for known eigen features within an image, you can check them against the next image in the video stream and measure the displacement to log what's moved, how far, and how fast.
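For a single matched feature the bookkeeping is just vector subtraction; a minimal sketch, assuming the feature's position is known in two consecutive frames and the frame interval dt is in seconds:

```python
from math import hypot

def displacement(p_prev, p_next, dt):
    """Motion of one matched feature between two frames.

    Returns the displacement vector, the distance moved (pixels),
    and the speed (pixels per second, given frame interval dt).
    """
    dx = p_next[0] - p_prev[0]
    dy = p_next[1] - p_prev[1]
    dist = hypot(dx, dy)           # Euclidean distance moved
    return (dx, dy), dist, dist / dt
```

Running this over every matched feature gives a sparse optical-flow field: one motion vector per tracked corner or eigen feature.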
Other Methods

There are loads of other methods for finding corners/ features in images. You could, for example, detect pixel changes around a circumference centred on your x,y scan point (fig C), using the relative angles to find corners.
You could use the binary method to get a rough idea of where the corners are, and then apply a more precise method to each found vector to weed out false positives.
My Method

Because my AGI is based on the human connectome/ nervous system, I use a model of the human visual system to detect features.
Neurons in the AGI's visual cortex become trained to recognise lines/ corners etc. through experience, and only fire when their receptive fields detect their chosen pattern of inputs. This is like running several convolution filters at once, as scale/ rotation invariance, gradients and movement can all be learned by the same V1 cortex model.