Ai Dreams Forum
Member's Experiments & Projects => AI Programming => Topic started by: goaty on September 06, 2019, 09:43:34 am
-
Lockster made me do it, and I'm just showing off.
https://www.youtube.com/watch?v=fW2VjLWWmQU&feature=youtu.be
-
I am supreme goddess of all, master of none
-
Here's something more edumacational - this one actually highlights the forks; other than that it's a bit of a cheating device. O0
https://www.youtube.com/watch?v=8bv6nxm35rc&feature=youtu.be
Thanks Lock for giving me the big 1 gig text store.
(https://scontent.fper6-1.fna.fbcdn.net/v/t1.0-9/70229285_987587431589314_809856134230835200_o.jpg?_nc_cat=106&_nc_oc=AQmDoSEbp4n4WdsApYrE-oGhPjJ9VLvcVaw_1V4xdk0fLh8nMTP9NEmUvI_6RExri8I&_nc_ht=scontent.fper6-1.fna&oh=8d9baad43de06d17c50f9f6053dd0c63&oe=5E0C905B)
-
Don't forget to tell them it runs on GPU power!
-
Here's my code that dispatches the work to the video card.
<code>
// Reset the closest-match score.
INVOKE(dc, derezz0, 0, views, nullptr, nullptr, 0, output_uav, 1, 1, 1);

// Bind the query plus every corpus page as shader resource views.
views[0] = query_srv[0];
for (j = 0; j < PAGES; j++) { views[j + 1] = biblepage_srv[j]; }

// Here's the big GPU bastard call! This thing launches 2048x2048x16 threads
// and still goes relatively quickish - it's the limit of my little card.
INVOKE(dc, runmatch, PAGES + 1, views, nullptr, nullptr, 0, output_uav,
       (BIBLEPAGE_SIZE * 4 * SQRTPAGES) / 32, (BIBLEPAGE_SIZE * SQRTPAGES) / 32, 1);

// Step the query along.
views[0] = query_srv[0];
views[1] = output_srv;
INVOKE(dc, stepquery, 2, views, nullptr, nullptr, 0, query_uav[1], 1, 1, 1);
</code>
-
Looks like I've gone and embarrassed myself again - banned off all seven seas of the internet.
(https://scontent.fper6-1.fna.fbcdn.net/v/t1.0-9/69563038_987725258242198_8424255571426803712_n.jpg?_nc_cat=110&_nc_oc=AQlzkKpCtuHjhK9kc79mAwKF8EF5_P3ZEuVsxH1ffaM-sJMHUH9hoGijNGhTYlZQ9RE&_nc_ht=scontent.fper6-1.fna&oh=5925ad5bd15787cb860da42542aabf30&oe=5DFD1ECC)
-
The algorithm searches up to 250 MB of text for an exact match on the GPU. It comes out just as fast as a CPU in the end :P
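The brute scan is trivially parallel because every starting offset in the corpus can be tested independently - on the GPU, one thread per offset. Here's a minimal CPU sketch of that per-offset test (`brute_find` is my own illustration, not the actual shader):

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Brute-force exact match: test every starting offset in the corpus.
// Each iteration of the outer loop is independent, which is exactly
// what makes this a good fit for one-thread-per-offset on a GPU.
std::vector<size_t> brute_find(const std::string& corpus, const std::string& query) {
    std::vector<size_t> hits;
    if (query.empty() || corpus.size() < query.size()) return hits;
    for (size_t off = 0; off + query.size() <= corpus.size(); ++off) {
        // On the GPU this comparison would be one thread's whole job.
        if (std::memcmp(corpus.data() + off, query.data(), query.size()) == 0)
            hits.push_back(off);
    }
    return hits;
}
```

No index structure is needed, which is the whole point of the approach - the price is touching every byte of the corpus on every search.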
-
But Lock, you don't realize the beauty of the situation.
It's working without any acceleration structure! And that's going to mean something, so back to work I go.
The next thing I'm adding is order invariance (which would work whether it's a linear search or not).
Then it should sound similar to Yoda - it'll start saying verbs like they are nouns.
And a synonym generator (it's done simply: words which have identical surroundings become the same symbol, and you can repeat it as long as you want, but each compounding guarantees less sense).
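That merge-by-identical-surroundings idea can be sketched as one pass: bucket each word by its (left, right) neighbours, then give every word in a bucket the same symbol. This is a minimal sketch under my own assumptions (`merge_by_context` and the single-pass, first-bucket-wins policy are illustrative, not the author's code):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Group words that occur between the same (left, right) neighbours and
// assign every word in a group the same symbol id. Repeating this pass
// on the relabelled stream compounds the merging - and the loss of sense.
std::map<std::string, int> merge_by_context(const std::vector<std::string>& words) {
    std::map<std::pair<std::string, std::string>, std::set<std::string>> buckets;
    for (size_t i = 1; i + 1 < words.size(); ++i)
        buckets[{words[i - 1], words[i + 1]}].insert(words[i]);

    std::map<std::string, int> symbol;  // word -> merged symbol id
    int next_id = 0;
    for (const auto& [ctx, group] : buckets) {
        ++next_id;
        for (const auto& w : group)
            if (!symbol.count(w)) symbol[w] = next_id;  // first bucket wins
    }
    return symbol;
}
```

For example, in the stream "the cat sat the dog sat", both "cat" and "dog" occur between "the" and "sat", so they collapse into one symbol.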
-
My synonyms are way better ;)
The GPU has it all spread across many cores, like random access, but it's as slow as a CPU.
-
The more you say words are substitutable with each other, the more jumps your system gets, and then the more attention span you can give it for a given amount of data in the system.
But - how do you get it to communicate instead of just spitting out a random story?
-
But - how do you get it to communicate instead of just spitting out a random story?
I think that's why we're all here :)
-
So what's happening here exactly? Are you just running Lock's generator on your computer? Running Lock's generator with your own improvements? Or is this a different generator entirely?
-
Lock's generator is wayyyyy better ;)
This is just the most basic simple search engine running on GPU.
-
So what's happening here exactly? Are you just running Lock's generator on your computer? Running Lock's generator with your own improvements? Or is this a different generator entirely?
Lock gave me an idea: just keep 60 MB or so of raw ASCII text, then on every new letter brute-scan it with all the GPU power of my GTX 980 - that way I don't need to store it in a binary tree, and I can actually get lossy matching as well.
Turns out it's pretty wasteful, and it needs a proper binary tree to go faster.
I have an idea of how to get it chatting, but I need a source of proper chat text. I also have an idea that if I search during the jumps, I can make the robot try to direct the other speaker to say a word, or change their mood, if I have that in the jump information.
I think that if I had videos of people chatting to each other, I could use computer vision with computer audio to get a useful set of data, but I think brute-searching it all is a bit of a waste of time (it was a failure - when I said it was running quick, I only meant that it was linear searching/brute forcing), so I'll get back to it later.
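The lossy matching mentioned above fits the same brute-scan shape: instead of demanding that every byte agree, score each offset by how many query bytes match and keep the best scorer (which is what a "closest match score" in the dispatch code would track). A minimal CPU sketch, with `best_near_match` as my own hypothetical name:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>

// Near (lossy) matching: score every offset by the number of agreeing
// bytes, and keep the best-scoring offset. A binary tree can't do this
// directly, because a single mismatched byte sends you down the wrong branch.
std::pair<size_t, size_t> best_near_match(const std::string& corpus,
                                          const std::string& query) {
    size_t best_off = 0, best_score = 0;
    for (size_t off = 0; off + query.size() <= corpus.size(); ++off) {
        size_t score = 0;
        for (size_t k = 0; k < query.size(); ++k)
            if (corpus[off + k] == query[k]) ++score;
        if (score > best_score) { best_score = score; best_off = off; }
    }
    return {best_off, best_score};  // offset of the closest match and its score
}
```

Searching "hello world" for the misspelt query "warld" still lands on "world" (4 of 5 bytes agree), which an exact-match index would simply miss.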
-
I proved tree storage is worse - you waste RAM like there's no tomorrow. You can see my new little invention has extra-low memory and extra-fast search. Btw, for next-word prediction, searching 1 GB as one search is not long at all. That is not the issue...
Actually, let me think about binary trees while I sleep - I think I know of a way to store them without the huge overhead they make.
-
You have to pay some kind of price no matter what you do, and the speed binary trees afford you means paying the 10x (or whatever) to get the overlappings is worth the benefit.
You need more experience Lock, we all do - the picture only gets clearer the more you implement.
-
Oh yeah, even if trees were the same memory, you've gotta save the overlaps haha, cos you chop the data every yyyw herr rre
-
This is near-matched; with binary trees you can only get exact matches.
(https://scontent.fper6-1.fna.fbcdn.net/v/t1.0-9/70399680_990799894601401_140866479108128768_n.jpg?_nc_cat=109&_nc_oc=AQm5s9jc3d5ywqKbblLITVUhTErEOGhSNbsbxmlaLBPrqdC-V_NYRfC_cvZ3vJsvzN8&_nc_ht=scontent.fper6-1.fna&oh=432eb1e10e3fb2888cb8519a12588440&oe=5DFB5BC8)