My HAl Rig

  • 43 Replies
  • 12888 Views

frankinstien

Re: My HAl Rig
« Reply #30 on: November 08, 2021, 09:09:02 pm »
Quote
If you think logic-based AI like NARS makes sense because it tries to use less data/resources and looks at the context more intelligently with "constraints/rules" to predict the next word, you are mistaking some things... NARS may have some good ideas, but they should be able to be added, and should, to GPT. I am not sure what NARS does, but I have a feeling it needs humans to write in properties of things, relationships, verbs, etc. (in Narsese, not natural language), which is too much work to be practical. Then it can say "I bought a ___", where the blank can be anything, because after "bought" it can be anything; but after "a snake", it can't really be just anything: "a snake bit me" is OK, "a snake gift sold" is a bit uncommon, "a snake car" is not. So "bought" allows more possible things to follow, other words don't, and some require even very similar matches. Look: a similar word to "car" is truck/van/vehicle/etc.; a different word than "car" is anything. And so that is how the mechanism works. It can learn this, and predict for unseen words that they too can probably go there if most other words do, or most other felines do.

You're looking at this problem from the ANN's regression standpoint, where it looks at patterns of text and associates them with some term or response. That's not how the symbolic approach works. When you read, you don't predict; you evaluate what is being said. So humans don't need an example of "snake car", because we can evaluate whether the term "snake" is a noun or an adjective that describes a car. Now, if you've never heard of a "snake car", you'd ask what a "snake car" is. An ANN can't ask what a snake car is, but a symbolic approach can. The symbolic approach can learn immediately, without having to re-train on a gazillion GBs of data. 8)
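Here's a minimal sketch of that evaluate-then-ask idea in Python (hypothetical names and lexicon, not my actual system):

```python
# Hypothetical sketch: evaluate an unknown compound instead of
# predicting over it, then learn the answer in one shot.

KNOWN_COMPOUNDS = {("glass", "jar"), ("race", "car")}  # concepts learned so far

def evaluate_compound(modifier: str, head: str) -> str:
    """Evaluate a noun-noun pair; ask a mentor if it is unknown."""
    if (modifier, head) in KNOWN_COMPOUNDS:
        return f"'{modifier} {head}' is a known concept."
    # A symbolic system can ask rather than guess:
    answer = input(f"What is a '{modifier} {head}'? ")
    # Immediate learning: store the new concept, no retraining pass.
    KNOWN_COMPOUNDS.add((modifier, head))
    return f"Learned: a {modifier} {head} is {answer}."

print(evaluate_compound("snake", "car"))
```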


LOCKSUIT

Re: My HAl Rig
« Reply #31 on: November 08, 2021, 09:23:51 pm »
Umm... an ANN can ask what a snake car is...

You need to take, e.g., GPT and make it usually predict (using a favoring force) "ask what X is" whenever it recognizes "I'm not confident at predicting the next word".

You give it context, it predicts what is common or favorite (food / ask twice! / computers), then it predicts again further.

No, not retrain. It's called fine-tuning (continued training), prompt engineering (topic: poem, write like <title> newline <poem briefing>, etc.), and goal learning (it learns to predict 'ASI' all day now; new goal, always on its mind!). That controls the whole model.
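Like, a minimal sketch of that trigger (assuming a model that exposes next-word probabilities; all names made up):

```python
import math

# Hypothetical sketch of "ask when unconfident": if the next-word
# distribution is too flat (high entropy), ask instead of guessing.

def entropy(probs: dict) -> float:
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def predict_or_ask(context: str, next_word_probs: dict, threshold: float = 3.0) -> str:
    if entropy(next_word_probs) > threshold:       # unconfident
        return f"ask: what is '{context.split()[-1]}'?"
    return max(next_word_probs, key=next_word_probs.get)

# Confident: one word dominates, so just predict it.
print(predict_or_ask("a snake bit", {"me": 0.9, "him": 0.1}))
# Unconfident: a flat 10-way split has entropy ~3.3, so it asks.
flat = {w: 0.1 for w in "abcdefghij"}
print(predict_or_ask("a snake car", flat))
```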


frankinstien

Re: My HAl Rig
« Reply #32 on: November 08, 2021, 09:48:20 pm »
Quote
Umm... an ANN can ask what a snake car is...

You need to take, e.g., GPT and make it usually predict (using a favoring force) "ask what X is" whenever it recognizes "I'm not confident at predicting the next word".

You give it context, it predicts what is common or favorite (food / ask twice! / computers), then it predicts again further.

No, not retrain. It's called fine-tuning (continued training), prompt engineering (topic: poem, write like <title> newline <poem briefing>, etc.), and goal learning (it learns to predict 'ASI' all day now; new goal, always on its mind!). That controls the whole model.

Seriously, you're comparing fine-tuning to the immediate learning step that a symbolic approach can do? Also, you don't have to pay to teach the symbolic approach.

And yet again, if you look at the page I referenced: because of the huge amount of resources used by GPT-3, you always have to check whether you're abiding by OpenAI's rules...


LOCKSUIT

Re: My HAl Rig
« Reply #33 on: November 08, 2021, 10:06:34 pm »
The Persona that Facebook's Blender uses, also called taming/forcing/controlling the model, can immediately make it strongly favor predicting something new.

As for getting a new memory 'into' the network: GPT uses backprop, so it may be a bit hard to make it 'store' something new without forgetting other memories (maybe; I'm unsure). But if we used a trie tree, you could quickly store exactly just the new memory, or a relationship like 'snake car' being similar to 'car' by, e.g., 70%.
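A tiny sketch of that trie storage (made-up structure, just to show the one-shot insert):

```python
# Hypothetical sketch: storing a new memory in a trie touches only the
# nodes on its path, so nothing else is forgotten -- unlike backprop.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0        # times this prefix has been seen
        self.similar = {}     # e.g. {'car': 0.7} relation weights

def insert(root: TrieNode, words: list) -> TrieNode:
    node = root
    for w in words:
        node = node.children.setdefault(w, TrieNode())
        node.count += 1
    return node               # leaf node of the stored phrase

root = TrieNode()
leaf = insert(root, ["snake", "car"])
leaf.similar["car"] = 0.7     # 'snake car' is similar to 'car' by ex. 70%
```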



Anyway, I'm not saying your way won't work; I just don't know what your way is. Can you explain how your AI can take a natural prompt (a human English sentence question, or an image) and generate the rest of that image or sentence? Like, where does it get the next word from, and how does it know that 'we > ate' is more likely than 'we > floored'? How does it deal with 1234567>, 12th567>, 1 2 3 4 5 6 7>, and 12567>? Mine uses delay and hole matching, and merges all matches to get one set of predicted words, softmaxed (each word gets some % of the 100% predicted).
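Roughly like this (a toy sketch of match-and-merge; my full delay/hole matching does more):

```python
from collections import Counter

# Hypothetical sketch: find every earlier window that matches the
# context (allowing one 'hole'), merge what followed each match, and
# normalize the counts so each predicted word gets a % of 100%.

corpus = "we ate food . we ate fish . we saw food".split()

def predict(corpus: list, context: list, holes: int = 1) -> dict:
    n = len(context)
    merged = Counter()
    for i in range(len(corpus) - n):
        window = corpus[i:i + n]
        mismatches = sum(a != b for a, b in zip(window, context))
        if mismatches <= holes:            # hole matching
            merged[corpus[i + n]] += 1     # collect the follower
    total = sum(merged.values())
    return {w: c / total for w, c in merged.items()}

print(predict(corpus, ["we", "ate"]))   # {'food': 0.67, 'fish': 0.33} (rounded)
```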

What are the rules in your AI? Give me an answer like this: nests, inheritance, hierarchy, goals, invariant slots (we bought <anything it learnt goes here> today), exponential function, multi-sensory, human in the loop (I dislike these AIs), etc. Then I can understand the parts of your AI, narrow down on them, and see them all easily.


frankinstien

Re: My HAl Rig
« Reply #34 on: November 08, 2021, 11:49:47 pm »
The approach here is not finding text patterns and applying what seems to be the best fit, but evaluating the logic of a sentence and computing a logical response. So your examples of 1234567, 123th67, and 1 2 3 4 5 6 7 would be handled as follows:

[image: screenshot of the NLP parse, not preserved]
Notice that the NLP broke the sequence into two tokens; if you look up the part of speech, those items resolve to CD (cardinal number). The NLP saw a break in the sequence when it hit 't', which is not part of a CD. As it evaluated 'th67' it realized there is no word within that sequence, so it labeled it a form of number. When the logic starts to evaluate the sentence's meaning, it will validate whether 'th67' is actually a number, which it isn't, and will ask a mentor for clarification. Note that further down the sentence it labels 1234567 as a single number.
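A simplified sketch of that segmentation step (not the actual NLP code; the lexicon is a stand-in):

```python
import re

# Hypothetical sketch: split a token into digit runs and letter runs,
# tag pure-digit chunks as CD (cardinal number), and flag chunks that
# are neither a number nor a known word for mentor clarification.

LEXICON = {"main", "street", "jar"}        # stand-in word list

def segment(token: str) -> list:
    tagged = []
    for chunk in re.findall(r"\d+|[a-zA-Z]+\d*", token):
        if chunk.isdigit():
            tagged.append((chunk, "CD"))
        elif chunk.lower() in LEXICON:
            tagged.append((chunk, "WORD"))
        else:
            tagged.append((chunk, f"ask mentor: what is '{chunk}'?"))
    return tagged

print(segment("123th67"))    # [('123', 'CD'), ('th67', "ask mentor: ...")]
print(segment("1234567"))    # [('1234567', 'CD')]
```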

Now look at this image:

[image: screenshot of the NLP parse of 123main24, not preserved]
I replaced '123th67' with '123main24', and notice that the NLP didn't break up the sequence, because it found a real word, "main"; so it could be an address with missing spaces, or maybe not. Notice that it identified it as a cardinal number. The logic that evaluates the relationships between words will notice that the CD label on the sequence is incorrect, and this will prompt a correction: add spaces between the numbers and the word "main", because the wording of the sentence implies an address. If the sentence did not imply an address, it instead asks what 123main24 is.

And here is the list of 1 2 3 4 5 6 7, where the NLP identified each one to be a number.

[image: screenshot of the NLP parse, not preserved]
And to answer the question of how it predicts when needed: the approach does something similar, in that it anticipates a contingency based on its experience. There is also partial feature fit, where it can associate with an image or audio it has encountered before and apply whatever algorithm(s) to process it. This includes lists of functions it has learned in the past to reach an intended goal.


LOCKSUIT

Re: My HAl Rig
« Reply #35 on: November 09, 2021, 04:57:06 pm »
Are you trying to build AGI / human level AI?

If so, then how would your AI solve some problem like IDK, let's say this:

The rabbit was trapped in a glass jar and Jimmy saw this, and a lake of lava was in front of Jimmy too surrounding the rabbit. So to help him, Jimmy ___________________________________?


frankinstien

Re: My HAl Rig
« Reply #36 on: November 10, 2021, 03:06:56 am »
Quote
Are you trying to build AGI / human level AI?

If so, then how would your AI solve some problem like IDK, let's say this:

The rabbit was trapped in a glass jar and Jimmy saw this, and a lake of lava was in front of Jimmy too surrounding the rabbit. So to help him, Jimmy ___________________________________?

The sentence has certain concepts that Amanda needs to learn first, such as what "lava" is. From that perspective, a descriptor object for lava needs to be created, with vectors such as the class "threat", which includes labels like hot, cold, danger, injury, destruction, pain, etc. I am working on building a speech-recognition interface to build descriptor objects from conversations. Again, because I can constrain the problem domain, achieving something similar to what Google did to order a pizza over the phone, in order to build descriptor objects, is possible. There also needs to be a sense of self that relates to qualities of emotional states. Then Amanda needs to learn to empathize, and to what degree, which requires descriptors of empathy with emotional nestings. And since the rabbit is surrounded by the lava, there may not be a means to rescue it. Then again, if Amanda had been bitten by rabbits in the past, there might not be much empathy for them.  >:D
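Roughly, a descriptor object might look like this (the schema here is a simplified sketch, not Amanda's actual code):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a descriptor object for "lava": a named
# concept carrying class vectors that the logic layer can query.

@dataclass
class Descriptor:
    name: str
    is_a: list = field(default_factory=list)
    classes: dict = field(default_factory=dict)

lava = Descriptor(
    name="lava",
    is_a=["molten rock", "liquid"],
    classes={"threat": ["hot", "danger", "injury", "destruction", "pain"]},
)

# The evaluation logic can then weigh a plan against the vectors:
if "danger" in lava.classes.get("threat", []):
    print("Crossing lava has consequences; weigh the pain against the rescue.")
```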

You see, I'm not trying to build a chatbot that answers by scanning pet owners' posts about rabbits, about how they would rescue the rabbit at all costs, and says something like "I'll think of something, or call someone to help me rescue the rabbit", which is what most would think is such a human-like response. I want to teach Amanda to be human, not to respond using statistics of words that are usually associated with a topic.

So, I still have more work to do. Right now Amanda just knows that a rabbit is a form of animal :)





LOCKSUIT

Re: My HAl Rig
« Reply #37 on: November 10, 2021, 05:36:53 pm »
You want to build human level AI? AGI?


my solution BTW :) >>

The rabbit was trapped in a glass jar and Jimmy saw this, and a lake of lava was in front of Jimmy too surrounding the rabbit. So to help him, Jimmy took a big jump over the lake of lava, went up to the glass jar and twisted it as hard as he could, and grabbed the rabbit out of the glass jar.


I'm still very unsure how you can get around "GPT"/my AI's mechanisms. For example, to answer the question above you need to heavily use pattern-finding mechanisms: take at least the key words (trapped, rabbit, lava lake, jar) and their order strung together (the sentence), and that is basically how you pull the next word (the answer). "I walked down the > street", so you can answer it even if you see "I then actually walked so fast down some new long long >"; recency tells us maybe "long" should come next, but eventually to stop, too, from boredom. I call this match recognition, then prediction. I don't understand how your AI gets the next word to predict. I'd like a full, clear explanation of all the mechanisms that change the predictions it finds for an unseen context/problem.
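A bare-bones sketch of recognition-then-prediction with recency (toy code; my AI does more than this):

```python
from collections import defaultdict

# Hypothetical sketch: recognize earlier occurrences of the last word,
# predict their followers, and weight recent matches higher (recency).

def predict_next(history: list, decay: float = 0.9):
    scores = defaultdict(float)
    last = history[-1]
    for i, w in enumerate(history[:-1]):
        if w == last:                                  # recognition
            recency = decay ** (len(history) - 2 - i)  # newer = stronger
            scores[history[i + 1]] += recency          # then prediction
    return max(scores, key=scores.get) if scores else None

tokens = "i then actually walked so fast down some new long long".split()
print(predict_next(tokens))   # 'long' -- until boredom says stop
```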


frankinstien

Re: My HAl Rig
« Reply #38 on: November 10, 2021, 07:47:44 pm »
Quote
You want to build human level AI? AGI?

my solution BTW :) >>

The rabbit was trapped in a glass jar and Jimmy saw this, and a lake of lava was in front of Jimmy too surrounding the rabbit. So to help him, Jimmy took a big jump over the lake of lava, went up to the glass jar and twisted it as hard as he could, and grabbed the rabbit out of the glass jar.

I'm still very unsure how you can get around "GPT"/my AI's mechanisms. For example, to answer the question above you need to heavily use pattern-finding mechanisms: take at least the key words (trapped, rabbit, lava lake, jar) and their order strung together (the sentence), and that is basically how you pull the next word (the answer). "I walked down the > street", so you can answer it even if you see "I then actually walked so fast down some new long long >"; recency tells us maybe "long" should come next, but eventually to stop, too, from boredom. I call this match recognition, then prediction. I don't understand how your AI gets the next word to predict. I'd like a full, clear explanation of all the mechanisms that change the predictions it finds for an unseen context/problem.

Well, what if the lava bank is too wide to jump over? Why didn't your solution ask the question "How wide is the lava bank?" and/or "Whose rabbit is it anyway?" See, your AI has no self-awareness; it just responds with the words that best fit the statistics it collected. My approach uses word vectors to determine abilities, threats, etc., plus experience (Amanda needs to know that jumping has an efficacy, which she can learn from conversations depicting various scenarios where jumping was involved). It applies vectors to verbs, adjectives, etc. to determine logical relationships that imply contexts of action and relations to consequences, such as: can Jimmy jump, and is it worth jumping if there is too much pain should Jimmy fail to make it across?
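As a toy illustration of weighing action against consequence (invented numbers, not Amanda's internals):

```python
# Hypothetical sketch: act only when the expected reward of jumping
# outweighs the expected pain of falling into the lava.

def worth_acting(p_success: float, reward: float, pain_cost: float) -> bool:
    expected = p_success * reward - (1 - p_success) * pain_cost
    return expected > 0

# Narrow lava bank: likely to make it, so the rescue is worth it.
print(worth_acting(p_success=0.9, reward=10, pain_cost=50))   # True
# Wide lava bank: better to first ask "how wide is the lava bank?"
print(worth_acting(p_success=0.2, reward=10, pain_cost=50))   # False
```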


LOCKSUIT

Re: My HAl Rig
« Reply #39 on: November 10, 2021, 09:15:02 pm »
"Why didn't your solution ask the question: "How wide is the lava bank?" and/or "Whose rabbit is it anyway?" See, your AI has no self-awareness"

I think you have no self-awareness lol. Why didn't you answer my question at the top of my last post? 0>0 Or my "how does yours work" question? X>X

My AI, if finished, and GPT too, can easily be trained on chat data or made to ask multiple questions before writing a sentence completion/extension. That is very easy to add to GPT.


You can see below how DALL-E could (in my plans) be made to have motor executive control, and desires to think a certain way (RL), and to look around in Notepad using motors and do tasks like https://openai.com/blog/grade-school-math/

>> something I had written:

My 1st question/theory was: if I predict/imagine a scene/movie like DALL-E does, that already describes visually all of the expected motor actions naturally (of my plan, e.g., to raise my arm up), then all that's needed now is the limbs to act (each limb in the image is activated, with its rotation and speed) and the cerebellum to correct numerous small errors so the input matches the desired target image stored in the brain. No motor cortex is needed; sensory hierarchies store the same things. Only leaf 'limb nodes' are needed.

And so my 2nd question was: how can I think of the movie, and decide to do it or not do it? For example, DALL-E might spit out a prediction "<a movie of raising a hand> + DO IT!!", so clearly it has a plan in mind and is also going to do it in real life. The problem with this theory, though, is that I can think of the scene and predict the do_it and still withhold myself from acting it out in real life. I know it needs to think of a movie plan, and I know it needs to predict the 'do it' memory; it can't just predict the movie plan and expect RL to handle deciding whether to do it or withhold itself. It must predict using sensory data, because it is all context-based and requires simply a prediction to decide to do it in real life, and RL should be used to control sensory prediction, like Facebook's Blender chatbot (which is cooler than GPT because it uses word desires/goals, a forcing called Persona). I'm thinking now that maybe my goal that says not_to_do_it is strong enough, hence when I predict to do_it and don't do it, I am not actually hearing it, but in the background the weight is still stronger. To understand what I mean, see Facebook's Blender chatbot: it uses Persona, forcing certain words in the background no matter what other words are heard/said. So it is against me no matter if I scream in my brain 'do it' constantly and in different ways, e.g. 'act it!', 'move!', 'initiate plans!'.
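A tiny sketch of what I mean by that background forcing (my reading of Persona, not Facebook's actual code):

```python
# Hypothetical sketch: a Persona is a constant background bias added to
# certain words' scores at every step, so the standing goal outweighs
# whatever the in-context 'screaming' predicts.

PERSONA_BIAS = {"withhold": 5.0}          # the goal, always on mind

def decide(context_scores: dict) -> str:
    biased = {w: s + PERSONA_BIAS.get(w, 0.0)
              for w, s in context_scores.items()}
    return max(biased, key=biased.get)

# Even a strong in-context 'do it' loses to the background weight:
print(decide({"do_it": 4.0, "withhold": 1.0}))   # 'withhold'
```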


MagnusWootton

Re: My HAl Rig
« Reply #40 on: November 10, 2021, 11:12:30 pm »
I'm a little different in that I don't think I think like my AI does. I think it's got huge intelligence deficits.  :2funny:


frankinstien

Re: My HAl Rig
« Reply #41 on: November 12, 2021, 02:26:48 am »
Quote
RL should be used to control sensory prediction, like Facebook's Blender chatbot (which is cooler than GPT because it uses word desires/goals, a forcing called Persona). I'm thinking now that maybe my goal that says not_to_do_it is strong enough, hence when I predict to do_it and don't do it, I am not actually hearing it, but in the background the weight is still stronger. To understand what I mean, see Facebook's Blender chatbot: it uses Persona, forcing certain words in the background no matter what other words are heard/said. So it is against me no matter if I scream in my brain 'do it' constantly and in different ways, e.g. 'act it!', 'move!', 'initiate plans!'.

Really? Look at the videos below of the Blender chatbot. The hype behind this stuff is pretty high. This reminds me of a tool I built for a company, which could highlight a circuit trace on a schematic diagram and even continue the trace across all the disparate pages a circuit could be part of. Everyone thought the machine knew the connectedness of the components. But in fact it did not; it only drew a highlight on the circuit lines that connected the components. But that's all a human needs to get the impression that the machine has a higher level of intelligence. Get it? These bots respond in complete sentences, but they aren't really understanding what the person is saying, or what they themselves are saying in response. Looking at the video below, the responses are what has been the norm for chatbots for the last decade. I mean, in the first video the bot responds with "I have a crush on my coworker", which we know the bot doesn't have, so it's just random gibberish that is baked just well enough that an ANN can apply it. The guy in the video is then elated that it responded the way it did, stating "It acts so human!" Remember the schematic tool? Same kind of thing here as well...

Oh, and I tried to install ParlAI. What a nightmare! The documentation is terrible. The install bombed because Rust wasn't on my machine, but the documentation never mentioned that Rust had to be installed. Then issues with Torch; not good so far. In any case, looking at the videos, the response time of this chatbot is terrible.

[embedded Blender chatbot demo videos, not preserved]

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • *******************
  • Prometheus
  • *
  • 4659
  • First it wiggles, then it is rewarded.
    • Main Project Thread
Re: My HAl Rig
« Reply #42 on: November 12, 2021, 05:10:00 am »
You keep saying "baked-in replies". That is far from how GPT works; GPT is literally the opposite, and that is why DALL-E, as shown on openai.com, can complete the rest of an unseen image you hand it so well. Transformers also achieve the best general-purpose context-prediction scores, and it's no wonder. And I know how to make an easy-to-explain architecture 'like' GPT, so I understand completely, to the bones, 'how' it solves unseen prompts.

No, nope: Blender is not a scripted chatbot and doesn't just get told to say 'cars'. It is GPT or something similar, and when you tell it to predict 'cars' more often, that leaks to similar words like 'trucks' and 'goods'. This controls the whole model, so it literally sounds like a car hobbyist. You can set its goals/personas as phrases too, or at least in theory, obviously.
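Here's a toy sketch of that leaking (made-up vectors, just to show the idea):

```python
import math

# Hypothetical sketch: a persona boost on 'cars' spreads to nearby
# words in embedding space, so 'trucks' and 'goods' get favored too.

EMB = {"cars": (1.0, 0.1), "trucks": (0.9, 0.2),
       "goods": (0.6, 0.5), "poetry": (0.0, 1.0)}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def persona_boost(word: str, persona: str = "cars", strength: float = 2.0):
    return strength * cosine(EMB[word], EMB[persona])

for w in EMB:
    print(w, round(persona_boost(w), 2))
# cars 2.0, trucks ~1.99, goods ~1.66, poetry ~0.2 -- the boost leaks.
```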

And this guy below went a step further and gave it a notepad diary. We are getting close to AGI now, if this were DALL-E (DALL-E is multi-sensory).

ebook:
https://www.barnesandnoble.com/w/natural-language-cognitive-architecture-david-shapiro/1139957470?ean=2940162202622

If you want a paperback version of the book you can buy it here:
https://www.barnesandnoble.com/w/natural-language-cognitive-architecture-david-shapiro/1139957470?ean=9781668513118



MagnusWootton

Re: My HAl Rig
« Reply #43 on: November 12, 2021, 11:56:39 am »
Locky's right: if you just have a sentence, "cat jumped over the fence", you could say the computer knows nothing, it's just a string pattern, right? But it actually still counts as a tiny bit of knowledge. It's even less than cat or fence or jump; it only knows one little tiny thing, and that's just the string itself.

Why OpenAI calls it a transformer, IMO, is because it's going to take this string and try to get more out of it, by putting it through processes which make it more plastic and more useful for different uses/applications.

There is a lot of information there, but if you just chuck it in a Markov chain it doesn't count for much, and that's how frankinstien is right: it's not in a form where it's useful yet, unless you make it useful.
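Like, all a bigram Markov chain gets out of that one sentence is this (minimal sketch):

```python
from collections import defaultdict

# Minimal sketch: the only 'knowledge' a bigram Markov chain extracts
# from one sentence is which word followed which -- the string itself.

sentence = "cat jumped over the fence".split()
chain = defaultdict(list)
for a, b in zip(sentence, sentence[1:]):
    chain[a].append(b)

print(dict(chain))
# {'cat': ['jumped'], 'jumped': ['over'], 'over': ['the'], 'the': ['fence']}
```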

 

