Recent Posts

11
General Chat / Re: Dendrite Processing
« Last post by frankinstien on November 27, 2021, 05:42:06 pm »
When you realize that a neuron can have 100K-plus connections, where the combination of those inputs determines whether the neuron will fire, you can appreciate a beautiful quality of this kind of system. Neurons are very good at identifying partial features of data, since the combinations of inputs can vary quite a bit depending on how the neuron has configured its dendritic spines. For example, if the neuron statistically needs only 50 coincident inputs out of, say, 100 candidates to trigger a firing, then from a combinational perspective, meaning the order is not important, there are C(100, 50) ≈ 1.00891E29 combinations! The beauty of this is that the neuron can literally respond to situations, i.e. combinations, that it hasn't even trained on.
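
A quick sanity check of that figure (a minimal Python sketch; the 100-candidate, 50-needed values are just the example numbers from above):

Code:
import math

# Ways to choose which 50 of 100 candidate inputs coincide
# to trigger a firing; order is not important.
combos = math.comb(100, 50)
print(f"{combos:.5e}")  # 1.00891e+29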

I've been establishing a means of partial feature identification through hash tables, but lookups that mimic what a neuron does would require a key for each combination, which is not practical. There is, however, a way to do this without a key for every combination. So how can it be achieved, you may ask? If each feature is its own key for each context or concept, then I need only count how many features any concept shares with the inputs being received. Yes, I have to apply the test for all inputs to each concept, and that may seem a bit clumsy at first, but I can constrain the lookups on a per-context basis, which shrinks the number of comparisons I need to do. And since these lookups are not linear and can be done concurrently, such a process can quickly sift through a lot of instances to find qualified candidates. A sketch of the idea is below.
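
A minimal sketch of the idea in Python (the names and the toy concepts are mine, purely for illustration): each feature maps to the concepts that carry it, and matching is just counting hits per concept against the inputs received.

Code:
from collections import Counter, defaultdict

# Inverted index: one key per feature, not one key per combination.
feature_index = defaultdict(set)

def register_concept(concept_id, features):
    for f in features:
        feature_index[f].add(concept_id)

def match(inputs, threshold):
    """Count how many input features each concept shares; concepts
    at or above the threshold qualify, so partial matches fall out
    for free without enumerating combinations."""
    hits = Counter()
    for f in inputs:
        for concept in feature_index.get(f, ()):
            hits[concept] += 1
    return [c for c, n in hits.items() if n >= threshold]

register_concept("dog", {"fur", "tail", "barks"})
register_concept("cat", {"fur", "tail", "meows"})
print(match({"fur", "tail", "barks"}, threshold=3))  # ['dog']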

To give you an idea of how much can be done using this approach, here's an article on a GPU hash table. The author used a GTX 1060 and achieved 300 million insertions and 500 million deletions per second; the lookup rate was not stated, but it would be higher than the deletion rate. With a more powerful GPU, say an RTX 3080, which is roughly 7 times more capable than a GTX 1060, lookups on the order of 3.5 billion per second are not unfeasible. From an NLP perspective, with GPU cards having 8 to 10 GB of memory, a concept encoded as a simple integer identity plus a complex of features encoded numerically in bytes (using 16-bit floats for vectors where needed) has a footprint well within 1 KB even with 100 features. Filling up to 80% of the GPU's memory with concepts allows for 8 million of them, where a full match against some set of inputs could be achieved in 0.01 seconds, and partial matches take even less time!  :o
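
Back-of-the-envelope arithmetic for that footprint claim (a sketch; the per-field sizes are my assumptions based on the encoding described above):

Code:
# Rough per-concept footprint: a 4-byte integer identity,
# 100 features at one byte each, plus a few float16 vectors.
identity = 4
features = 100 * 1
vectors = 4 * 32 * 2                   # e.g. four 32-dim float16 vectors
print(identity + features + vectors)   # 360 bytes, well under 1 KB

gpu_budget = 0.8 * 10 * 1024**3        # 80% of a 10 GB card
print(gpu_budget // 1024)              # ~8.4 million concepts at 1 KB each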
12
General AI Discussion / Re: Artificial God?
« Last post by LOCKSUIT on November 27, 2021, 02:49:29 pm »
Sentient beings seem to be all about input/output. A being perceives some input and responds to it with some output. That output is then fed back into the input, which generates some other output, and so on, while the being lives.

The current situation is that we have a way to train artificial neural networks on supercomputers and then run them on average computers, providing an output for a certain input. But certain outputs are unacceptable, or ethically questionable, so it wouldn't be wise to always pick the first output from the stack. The solution might be to program a counterpart to ANNs, which we could call "God". The God's purpose would be to judge whether a certain output may be performed, or whether another output should be requested for judgement.

Now we have two entities to worry about: a sentient being and a God. The sentient being seems more or less solved, as it could be simulated by an ANN, but the God... that should be a real complication, shouldn't it? The question I would like to raise is: "How do we program a universal God that handles any trained ANN instance?"
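
A minimal sketch of that generator/judge split (Python; every name here is hypothetical, the point is only the control flow: keep requesting outputs until the "God" accepts one):

Code:
def respond(model, judge, state, max_tries=10):
    """Ask the ANN for candidate outputs; the judge decides whether
    each one may be performed or another should be requested."""
    for _ in range(max_tries):
        output = model.generate(state)
        if judge.acceptable(state, output):
            return output
        state = state.with_rejection(output)  # ask for another candidate
    return None  # nothing acceptable was found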

You may have seen GPT-3, DALL-E, JUKEBOX, and CODEX on openAI.com, and now NUWA made by Microsoft. You probably haven't had the time to fully understand how they work and how legit they are, but if you also look at Facebook's Blender, you'll see it is GPT-X with the added ability to guide what it says and avoid what it shouldn't say. One similar project called this taming, or forcing, the model. It's not a god; it's just another part of the ANN, just like embeddings, recency priming, etc.
13
General Project Discussion / Re: Concept Modeling
« Last post by infurl on November 27, 2021, 12:25:28 am »
Have you explored semantic parsing techniques such as Combinatory Categorial Grammar?

https://groups.inf.ed.ac.uk/ccg/software.html
14
General Project Discussion / Re: Concept Modeling
« Last post by frankinstien on November 26, 2021, 11:10:18 pm »
Working with the parent-child relationships from the NLP parser made it hard to figure out what states and relationships exist amongst the different segments of a sentence, and the sequence of the phrases wasn't available either. The parser does have a child-index method, but I need a bigger-picture structure to work out relationships based on parts of speech and word descriptions or definitions. So the vertical structure is a self-similar object built on a descriptor base abstraction class. This allows a sentence to be related to individual words, phrases, and other sentences. With this kind of structure, iterating through it is easy enough, and it also lends itself to parallel analysis, so all phrases can be processed simultaneously; the big helper is that one can jump around the sentence for a look-ahead process that helps prep the analysis. A sketch of the structure is below.
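
A minimal sketch of what I mean by self-similar (Python for brevity; the names are illustrative, not my actual classes): words, phrases, and sentences all derive from one descriptor base, so the same relations apply at every level and phrases can be analyzed in parallel.

Code:
from concurrent.futures import ThreadPoolExecutor

class Descriptor:
    """Base abstraction shared by words, phrases, and sentences."""
    def __init__(self, text, part_of_speech=None, children=()):
        self.text = text
        self.part_of_speech = part_of_speech
        self.children = list(children)  # ordered, so phrase sequence is kept

    def analyze(self):
        # All child phrases can be processed simultaneously.
        with ThreadPoolExecutor() as pool:
            list(pool.map(Descriptor.analyze, self.children))

phrase = Descriptor("the red ball", "NP", [
    Descriptor("the", "DT"), Descriptor("red", "JJ"), Descriptor("ball", "NN")])
sentence = Descriptor("the red ball rolled",
                      children=[phrase, Descriptor("rolled", "VBD")])
sentence.analyze()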

15
General AI Discussion / Re: Artificial God?
« Last post by HS on November 26, 2021, 06:56:42 pm »
I think a sentient/conscious ANN would be better off learning from the feedback of the environment itself. Having an unconscious program limiting the actions of a conscious being seems like it would be dystopian. But if we're talking about an ANN that's just an idea generator, then a "God" could be a good solution. Maybe the "God" could apply a version of Maslow's Hierarchy to each sentient being within its sphere of influence. Then it could let through outputs from the ANN with a high probability of optimizing each estimated hierarchy.
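
A toy sketch of that filtering idea (Python; everything here is hypothetical, including the estimate function, which would have to predict how an output changes a need level):

Code:
NEEDS = ["physiological", "safety", "belonging", "esteem", "self_actualization"]

def hierarchy_score(candidate, beings, estimate):
    """Average expected improvement across all beings and all needs."""
    scores = [estimate(b, candidate, n) for b in beings for n in NEEDS]
    return sum(scores) / len(scores)

def let_through(candidates, beings, estimate, threshold=0.0):
    # Only outputs likely to improve the estimated hierarchies pass.
    return [c for c in candidates
            if hierarchy_score(c, beings, estimate) > threshold]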
16
General Project Discussion / Re: Pattern based NLP
« Last post by MikeB on November 26, 2021, 06:17:06 am »
An update.

I changed the IIR filters to resonators centered around 600 Hz, 1250 Hz, and 3150 Hz, and now have double the signal-to-noise ratio with more stable numbers.

This amplifies the signal for certain types of sounds, but for it to work in general I feel I need about 10 filters, centered around different frequencies.

One FIR filter or a Fast Fourier Transform (the normal approaches to speech recognition) is approximately 50-100x slower than one pre-calculated IIR resonator filter, so there's plenty of room... A sketch of such a resonator is below.
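
For reference, here's roughly what one pre-calculated two-pole resonator costs per sample (a Python sketch; the 600 Hz center, 16 kHz sample rate, and 100 Hz bandwidth are example values, and the gain term is only a rough normalization):

Code:
import math

def resonator(samples, f0, fs, bandwidth):
    """Two-pole IIR resonator centered on f0 Hz. Coefficients are
    pre-calculated, so the loop is ~3 multiply-adds per sample."""
    r = math.exp(-math.pi * bandwidth / fs)            # pole radius
    b1 = 2.0 * r * math.cos(2.0 * math.pi * f0 / fs)   # feedback coefficients
    b2 = -r * r
    gain = 1.0 - r * r                                 # rough peak normalization
    y1 = y2 = 0.0
    out = []
    for x in samples:
        y = gain * x + b1 * y1 + b2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# e.g. one of the three filters: 600 Hz center at a 16 kHz sample rate
# filtered = resonator(mic_samples, f0=600, fs=16000, bandwidth=100)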

The signal is only 12.5%-25% above the noise background, and you need to speak close to the mic, so the SNR needs to be improved by at least 2x again for this to work...

17
AI Programming / Re: java/kotlin to python
« Last post by chattable on November 25, 2021, 04:23:32 pm »
Is there documentation for using your chatbot with Python?

18
General AI Discussion / Re: Artificial God?
« Last post by ivan.moony on November 25, 2021, 11:06:52 am »
Abuse is a common answer to abuse. I wish there were some other way.
19
General AI Discussion / Re: Artificial God?
« Last post by MagnusWootton on November 24, 2021, 08:57:44 pm »
Computers require feedback from the output back to the input to be a finished machine; otherwise you're stuck with a feedforward perceptron (even though that works anyway).

When you're preventing rape and abuse with abuse, it isn't the solution...

20
General AI Discussion / Re: Artificial God?
« Last post by yotamarker on November 24, 2021, 07:15:55 pm »
I've tried something like that:
https://www.yotamarker.com/t46-yotamarker-artificial-intelligence-walkthrough-and-source-code?highlight=bees+knees#65

When the program feeds on its own output, you simply jam it up:
it can't learn, and it eventually loses "steam" and becomes silent.

(.net makes me wanna puke)