Here's my picture of it: http://advancessss.deviantart.com/art/Language-668322152?ga_submit_new=10%3A1489225699
I found that my AI is not a neural network - its core is a matching mechanism, not a true NN the way CNNs and similar architectures are.
When it came to recognizing or speaking many words/sentences, my AI didn't work - it selects a single match to recognize or speak, but it ISN'T able to recognize a lot NOR say a lot. That exploit showed me that we use a NEURAL NETWORK to recognize sentences and to output long lines of talk.
I've had enough dreams/imaginations, and last night's dream proved it - NEVER-BEFORE-SEEN (created) thousands, maybe millions, of STRAIGHT lines forming circuit-board-like shapes, in VARYING and BLENDING colors, each aligned to the others. I zoomed in, stopped, zoomed in, stopped (by zooming into a 3D word, or by auto-creating what it would look like), and went over to other areas. It would've taken a year to oil-paint for any AI in my brain!
This is where the Auto-Creator comes in. It creates something new but similar to what you visually/auditorily/etc. want. It can start with noise, or it can acquire stored 2D/3D segmented pixels/points/mesh/particles (even liquid/gas) plus stored videos of such, in either of two ways. The first way: it hears a phrase like "trump in box", which matches stored material like "flower in vase" or "Elvis said vejocazo smell"; it gathers the segmentations/sounds and does what my example picture shows (it can even create e.g. a 2D mesh from 3D particles, or any other combination of what it acquired, plus create videos of it). The second way: a Language/Image Neural Network, as my other example picture showed, selects the segmentations or sounds of Elvis (without the key objects or the creation being said - it's as if you're talking in Elvis's voice, it just flows out from nowhere) and creates the product. Notice that only Language summons it.
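The first way described above - match a phrase against stored segmentations, gather them, and blend into something new (or start from noise when nothing matches) - can be sketched roughly like this. Everything here is a hypothetical illustration: the tag store, the flat lists standing in for pixels/mesh, and the averaging-plus-noise blend rule are my own stand-ins, not the author's actual mechanism.

```python
import random

# Hypothetical store of "segmented" assets, keyed by word tags.
# Each value is a tiny flat list standing in for stored
# pixels/points/mesh/sound samples.
STORE = {
    ("flower", "vase"): [0.1, 0.9, 0.4],
    ("trump", "box"):   [0.7, 0.2, 0.5],
    ("elvis", "voice"): [0.3, 0.3, 0.8],
}

def auto_create(phrase, noise_level=0.05, seed=0):
    """Gather stored segmentations whose tags appear in the phrase,
    average them together, and perturb with a little noise so the
    result is new-but-similar. With no match, start from pure noise."""
    words = set(phrase.lower().split())
    matches = [asset for tags, asset in STORE.items() if words & set(tags)]
    random.seed(seed)
    if not matches:
        return [random.random() for _ in range(3)]   # start from noise
    blended = [sum(col) / len(col) for col in zip(*matches)]
    return [x + random.uniform(-noise_level, noise_level) for x in blended]
```

For example, `auto_create("trump in box")` pulls only the ("trump", "box") asset and jitters it, while a phrase mentioning both "elvis" and "flower" blends two assets - the same gather-and-combine idea, just on toy data.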
You can individually select as many objects (even videos) as you want, or select all, copy, then paste the copies one by one, e.g. each slanted differently in the air. You can cut an object perfectly (or with a jagged/swervy cut if you want), wherever you want on the object or objects you're staring at on a table or in an imaginary 3D world. You can see it fly onto the table upright or slanted (even through the table if you want), morph or get dents pushed inward, copy along a straight line (usually, unless you want a swerve), scale, rotate, flip, or become glass in place of a computer speaker. You can crop and overlay images or sounds as if hearing 3 things simultaneously, combine songs/sounds/videos/images into a new type of song/sound/video/image, fade even the overlays to 100% see-through, or see a video of them waving in the air like a long cloth. Each stays where it is unless it's moving in the reconstructed or imaginary 3D world, which gets ray-traced (even stationary as a video, so you can look at it for a while and remember it afterward) and sent to you as Partially-"Seen" FaintSense input. During creation it can use excitatory and inhibitory signals for what does and doesn't work.
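A few of the editing operations above (copy-paste, slant/rotate, scale, overlay) are just geometric transforms. Here is a minimal sketch of what they might look like on an object stored as a list of 2D points - the point-list representation and these helper names are my own assumptions for illustration, not the author's system.

```python
import math

def copy_paste(obj, dx, dy):
    """Paste a copy of the object, offset from the original."""
    return [(x + dx, y + dy) for x, y in obj]

def rotate(obj, degrees):
    """Rotate (slant) the object about the origin."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return [(x * c - y * s, x * s + y * c) for x, y in obj]

def scale(obj, factor):
    """Grow or shrink the object uniformly."""
    return [(x * factor, y * factor) for x, y in obj]

def overlay(a, b, alpha=0.5):
    """Blend two same-sized objects point by point, like overlaying
    two images or hearing two sounds simultaneously."""
    return [(ax * alpha + bx * (1 - alpha), ay * alpha + by * (1 - alpha))
            for (ax, ay), (bx, by) in zip(a, b)]
```

Pasting several copies "each slanted differently" would then just be repeated `copy_paste` plus `rotate` calls with different angles; a 100% see-through fade corresponds to `alpha` reaching 0 or 1 in `overlay`.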
We could use this Language NN to control the Auto-Creator within an Evolutionary Algorithm, combined with trying every possible neural/atomic configuration, to find AI, technologies, and knowledge.
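The evolutionary loop above can be sketched in a few lines. In this toy version a plain scoring function stands in for the Language NN judging the Auto-Creator's outputs (its excitatory/inhibitory signals become keep-or-discard selection); the genome-as-number-list encoding, the mutation size, and the example target are all assumptions made for the sketch.

```python
import random

def evolve(fitness, length=8, pop_size=20, generations=40, seed=1):
    """Minimal evolutionary algorithm: keep the best half of the
    population (selection), mutate survivors to make children,
    repeat. `fitness` plays the role of the controlling Language NN."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                     # selection
        children = [[g + rng.gauss(0, 0.1) for g in parent]  # mutation
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for the LNN's judgment: prefer genomes near all-ones.
target_fitness = lambda genome: -sum((g - 1.0) ** 2 for g in genome)
```

Swapping `target_fitness` for a learned model's score is the whole idea: the search procedure stays the same, only the judge changes.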
The LNN could also be used for recognizing/remembering vision and for recognizing/building technologies and knowledge. So could the Auto-Creator, the Evolutionary Algorithm, and trying all possibilities.