The Athena Project

  • 184 Replies
  • 97419 Views
*

8pla.net

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1307
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: The Athena Project
« Reply #75 on: December 24, 2014, 03:09:24 pm »
I currently have some Artificial Neural Networks (ANN) which I really enjoy as a hobby at Elizabot.com. 

What's fun about interacting with them is that, while simple, they feel sort of like pets, or unique A.I. experiences that are alive.

One ANN learns how three random color numbers for red, green, and blue, combined, may turn out more red or more blue, and then it mixes those colors together, like a bucket of paint, so you may visually check its learning. 

Another ANN attempts very simple NLP learning equations and inequalities between our world and a ball.

What feels so natural about ANNs, I think, may be that each learning opportunity, provided by you as training, is subject to failure.

What are your thoughts about Artificial Neural Networks?

Happy Holidays, Snowman.

« Last Edit: December 24, 2014, 03:32:47 pm by 8pla.net »
My Very Enormous Monster Just Stopped Using Nine

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6860
  • Mostly Harmless
Re: The Athena Project
« Reply #76 on: December 26, 2014, 08:56:05 pm »
Freddy:
I haven't spent tons of time working on the cube. I wish I was half as talented as you. I've seen your YouTube vids showing your work with Unity. It was amazing. A person couldn't ask for a better avatar than a woman standing in beautiful scenery. Even if the AI was poor (not saying it is), people would still buy it just for the quality you've managed to assemble.

Thanks for the kind words. It's like anything, the longer you chip away at it the better you get, just like your own project. I've been working on the graphics side of things for some time now, and it's nice to finally have some good results. A lot of dead ends, but a lot learnt at the same time too. I'm glad people have enjoyed what I have got done so far.

The bot attached to Jess in that longer video was just for demonstration. In the new year, now that I have built my AIML interpreter, I can start working on Jess's character and conversations. She'll live on my Linux box which I built for her a while back. I'll customise the interpreter as I come to things that I think would be good to add.

Looking forward to what we can all make in the coming year :)  8)

*

Snowman

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 145
    • minervaai.com
Re: The Athena Project
« Reply #77 on: December 27, 2014, 03:01:26 am »
Don (wildman) Patrick :D :

I actually never really looked too deeply into professional NLP techniques. I just figured I was experienced enough by my excessive talking to just wing it :P .

I want to write down a thought here. Please give me some of your thoughts about this.
I was discovering that there are two types of relationship structures. There is a simple relationship, which consists of a relationship, category, and entry. There is another type of relationship we know as a sentence structure.

Here are three examples of how a simple relationship can be used:

relationship: ownership
category: Aaron
entry:gun

relationship: action
category: Aaron
entry: shoots

relationship: property
category: Aaron
entry: height

The first example shows ownership. If you were to search 'what does Aaron own', then you would discover the entry 'gun'. I am using the 'category' to show the more significant word. The second example shows that an action Aaron can do is 'shoot'. The final example indicates that one property of Aaron is height. So if you want to get a list of properties of Aaron you would find 'height'. As a result of knowing this, I wrote a VB class to manage and search these relationships.
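A minimal sketch of that kind of store, in Python rather than VB (the class and method names here are my own invention, not Snowman's):

```python
# A minimal relationship store: each fact is a (relationship, category, entry) triple.
class RelationStore:
    def __init__(self):
        self.triples = []

    def add(self, relationship, category, entry):
        self.triples.append((relationship, category, entry))

    def search(self, relationship=None, category=None, entry=None):
        """Return every triple matching the given fields (None acts as a wildcard)."""
        return [t for t in self.triples
                if (relationship is None or t[0] == relationship)
                and (category is None or t[1] == category)
                and (entry is None or t[2] == entry)]

store = RelationStore()
store.add("ownership", "Aaron", "gun")
store.add("action", "Aaron", "shoots")
store.add("property", "Aaron", "height")

# "What does Aaron own?" -> ['gun']
owned = [e for _, _, e in store.search("ownership", "Aaron")]
```

The wildcard search is what makes the triple layout useful: the same table answers "what does Aaron own", "who owns a gun", and "list Aaron's properties" without separate indexes.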
   
So that's one kind of relationship. The other type is a sentence-structure-type relationship. We presume that everything in a sentence is related to everything else. Offhand, I can't think of an exception to this. A sentence usually expresses a single thought. So if every word is related to each other in some way, it is its own type of relationship. I think a sentence is a list of basic simple relationships.

ie. A dog runs home.

relationship: agent
category: <this sentence>
entry: a dog

relationship: action
category: <this sentence>
entry: runs

relationship:  destination
category: <action>
entry: home


The action (runs) and agent (dog) are related directly to the sentence while the destination (home) is related to the action (runs). So the destination is related to the sentence via the action.

If you were to lookup “what runs home?” it would respond with 'a dog runs home'.
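A rough Python sketch of that lookup, assuming a sentence is stored as its surface text plus its triples (the `lookup` function and its keyword-argument style are my invention, just for illustration):

```python
# Each stored sentence keeps its surface text plus its relationship triples.
sentences = [
    {"text": "A dog runs home",
     "triples": [("agent", "<this sentence>", "a dog"),
                 ("action", "<this sentence>", "runs"),
                 ("destination", "<action>", "home")]},
]

def lookup(**wanted):
    """Find a sentence whose relationships match the query, e.g.
    lookup(action="runs", destination="home"), and return its full text."""
    for s in sentences:
        found = {rel: entry for rel, _, entry in s["triples"]}
        if all(found.get(rel) == entry for rel, entry in wanted.items()):
            return s["text"]
    return None

# lookup(action="runs", destination="home") -> "A dog runs home"
```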

Here's some terminology I came up with. This is only a sample and is not comprehensive. However, it goes further than just finding the do-er, relationship, done-with, done-at list that you mentioned.

:Terminology:

essential:
Agent = the object doing the action
Action = the event the agent incurs

non-essential:
Patient = the object that is directly affected by the action
Instrument = the object used by the agent to accomplish the action
Recipient = the object that received the Patient
Direction = the direction an action is taking
Location = the location of the action or object
Time  = start-time, duration, end-time of action
Cause = the action that initiated the sentence's action to begin
Effect = the action initiated as a result of the sentence's action
Count = the count of actions or objects

Assign = the direct assignment of attributes to objects or actions
      ie. agent(bill) assign(weight(fat) height(6'5”)) aka. Bill is fat and is 6'5” tall.
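That role list could be enforced with a small frame builder, sketched here in Python (all names are mine, not part of the terminology above):

```python
# Essential and non-essential semantic roles, per the terminology above.
ESSENTIAL = {"agent", "action"}
OPTIONAL = {"patient", "instrument", "recipient", "direction", "location",
            "time", "cause", "effect", "count", "assign"}

def make_frame(**roles):
    """Build a sentence frame; a valid frame needs at least agent and action."""
    unknown = set(roles) - ESSENTIAL - OPTIONAL
    if unknown:
        raise ValueError(f"unknown roles: {unknown}")
    if not ESSENTIAL <= set(roles):
        raise ValueError("a frame needs at least an agent and an action")
    return roles

# "Bill is fat and is 6'5\" tall" as a frame with a direct assignment:
frame = make_frame(agent="Bill", action="be",
                   assign={"weight": "fat", "height": "6'5\""})
```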


On a side note:
Now, I think WordNet likes dealing with word relationships. It also deals with conceptual lookups. I want to call these ladder lookups because it climbs a ladder of meaning in order to figure out what the user means.

What the Ai already know:
Ai: bird can fly
Ai: list of bird (robin, duck, ostrich)
Ai: ostrich can not fly

Question: can robin fly?
Ai: robin is bird, bird can fly, robin can fly
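That ladder lookup fits in a few lines of Python (illustrative names only); note the ostrich's specific fact must override the inherited one:

```python
# Facts: class membership and (possibly negated) abilities.
is_a = {"robin": "bird", "duck": "bird", "ostrich": "bird"}
can = {("bird", "fly"): True, ("ostrich", "fly"): False}

def can_do(thing, ability):
    """Check the thing itself first; otherwise inherit from its class."""
    if (thing, ability) in can:        # a specific fact wins (ostrich)
        return can[(thing, ability)]
    parent = is_a.get(thing)
    if parent is not None:
        return can_do(parent, ability) # climb the ladder of meaning
    return None                        # unknown

# can_do("robin", "fly")   -> True  (robin is bird, bird can fly)
# can_do("ostrich", "fly") -> False (specific exception overrides)
```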

Have you tried this before?

Hello 8pla.net .. if that is your real name ;)

I like to take one step at a time. I move between lots of AI subjects to keep things fresh. My desire, if possible, is to integrate NN with NLP like I've stated in an earlier post. Occasionally, I will lay back... or go on long walks and just brainstorm ideas on how to do this. I saw that thread you posted about ANN and went and visited it online. I see you are using categories, I guess a feed-forward network. I don't know all the details but I'm working on it.

You have three inputs and one output. Place in three words and get back a true or false. I guess the ultimate goal would be to put anything into it and get anything you want out. Basically, it learns logic from lots and lots of lessons until it can figure things out on its own, which I think is the hallmark of NN. Usually, people hard-code logic by hand into programs because it takes such a long time to teach those same rules to an NN. What's usually necessary is a mix between the two to create an advantage.

Human minds have both hard-coded aspects (ie. instincts) as well as regular NN. I suppose if someone could hard-code logic directly into NN without one having to teach it, it would make one hardy AI. I suppose I've been integrating them by putting them side by side. I'm not sure which is better at my current knowledge level.

*

8pla.net

  • Trusty Member
  • ***********
  • Eve
  • *
  • 1307
  • TV News. Pub. UAL (PhD). Robitron Mod. LPC Judge.
    • 8pla.net
Re: The Athena Project
« Reply #78 on: December 27, 2014, 03:29:29 am »
Quote
ie. A dog runs home.

relationship: agent
category: <this sentence>
entry: a dog

relationship: action
category: <this sentence>
entry: runs

relationship:  destination
category: <action>
entry: home

Have you given any thought to SVO yet?   

SVO is very common.  It stands for Subject Verb Object.

There is a lot published about SVO, if you find that useful. 

With SVO, it may look something like this, for example.



This sentence: A dog runs home.

relationship: agent
category: A dog runs home.

relationship: start
category: <subject>
entry: a dog

relationship: action
category: <verb>
entry: runs

relationship: destination
category: <object>
entry: home


I am sure you can improve on this example.  It is only meant to share thoughts for discussion purposes (not as a correction or a suggestion of any kind).    By the way, I was watching you on YouTube today.  It was your video demonstrating the Haptek player.
My Very Enormous Monster Just Stopped Using Nine

*

Snowman

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 145
    • minervaai.com
Re: The Athena Project
« Reply #79 on: December 27, 2014, 06:28:55 am »
I'm not going to take offence. I was just using some stuff I learned from linguistics, and a few other things I learned on my own.

subject = agent
verb = action
object = patient

I'm trying to separate the true meaning behind language, and so sometimes I have to back away from what I've been taught in public schools to get a fresh perspective. For instance, an English verb may have an ending of 'ing' as in walking. But that 'ing' only shows the progressive aspect of the real verb 'walk'. 'Ing' is just a way of adding extra meaning to a verb in English. However, an 'action' as I have defined it (or the name of an action) will always be the base verb.

In English:
I will be walking.

My VL language:
agent(I) action(walk(time(future progressive_

Of course, I don't have my VL language perfectly defined... but it's more of a means to an end, so sometimes I'm not consistent.
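For what it's worth, a toy parser for this VL notation might look like the sketch below (Python; I am assuming the trailing '_' is shorthand for closing all still-open parentheses, which may not match the intended notation):

```python
def parse_vl(text):
    """Parse VL text like 'agent(I) action(walk(time(future progressive_'
    into nested (name, children) tuples."""
    if text.endswith("_"):  # assumed shorthand: close every open paren
        text = text[:-1]
        text += ")" * (text.count("(") - text.count(")"))
    pos = 0

    def parse_list():
        nonlocal pos
        items, token = [], ""

        def flush():
            nonlocal token
            if token.strip():
                items.append((token.strip(), []))
            token = ""

        while pos < len(text):
            ch = text[pos]
            pos += 1
            if ch == "(":
                name, token = token.strip(), ""
                items.append((name, parse_list()))
            elif ch == ")":
                flush()
                return items
            else:
                token += ch
        flush()
        return items

    return parse_list()

# parse_vl("agent(I) action(walk(time(future progressive_")
# -> [('agent', [('I', [])]), ('action', [('walk', [('time', [...])])])]
```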

Thanks for trying to keep the discussion rolling. I really appreciate it.

It's been a while since I've seen that video on the Haptek player. I wonder if that program works anymore. It's hard to tell because no one gives me feedback. I'm not even sure if the Athena Reader works for anyone. If it is working then everyone seems to be keeping it a secret. It's a little discouraging for a programmer.



*

ranch vermin

  • Not much time left.
  • Terminator
  • *********
  • 947
  • Its nearly time!
Re: The Athena Project
« Reply #80 on: December 27, 2014, 10:57:24 am »
(i scratched what was here before)

Here's another personal brainstorm.

An NLP data extraction process isn't complete until you've used the data.

In levels of complexity->
1) simple query based system
2) running monologue  (like a word predictor, except this time from a knowledge base.)
3) chat bot
4) full narrative sim, with people talking inside it, with motives.

« Last Edit: December 27, 2014, 12:05:36 pm by ranch vermin »

*

Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 633
    • AI / robot merchandise
Re: The Athena Project
« Reply #81 on: December 27, 2014, 04:05:01 pm »
Winging it is fine  O0. I only read the gist of scientific papers. Grammar does contribute, but I believe one should structure the information according to meaning for best consistency. As Art illustrated, grammar is not consistent when one looks at meaning.
Some scientists believe that the indirect object should be a fourth element to agent-action-patient structures, but as one can have multiple indirect objects I preferred the most basic structure possible. In my experience, the more basic you stay, the more flexible and powerful your system gets. Yours may be even simpler than mine as it connects two elements instead of three. The question then, however, is how the program is to recall that these relationships are related to each other when looking up who shoots who.

Your list of elements looks pretty complete. It is interesting that you consider recipient and instrument as separate roles; it makes me wonder if there are more roles that an indirect object can have. A few more things to take into account: action and ability are different relationships, e.g. a bird can fly, but that does not mean it is currently flying. Some relationships can also be theoretical ("might"), desirable ("want"), or necessary ("must"). On advanced territory, both specifications of time and location can be relative to events or objects, e.g. "5 seconds after I set the alarm, it went off". And in a similar way to your <this sentence> sentence relationships, an agent, patient or action may be an entire event, e.g. "It (event X) took me by surprise." I think you are handling this by cause and effect, but I'm not sure if those are the only relationships one could have between events. Interesting.

Quote
Question: can robin fly?
Ai: robin is bird, bird can fly, robin can fly
Indeed I have done this, and greater scientists before me. This is (inductive) inference, a proven and powerful way to know more than one strictly learned. Inference is not very common among chatbots (exceptions exist, e.g. Mitsuku) but all the more common in AI with knowledge databases (e.g. "expert systems"). It works quite well and can enhance just about any word-related information process. However, I've found it very hard to find examples of conversational AI that use methods of inference as a central mechanism.
To cut a long story short, here's a demo of my program that shows what I covered in the area of NLP last year. You'll notice two such inference chains at the end of it: http://artistdetective.com/arckon21_test.swf

I find it interesting to follow your train of thought.
CO2 retains heat. More CO2 in the air = hotter climate.

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: The Athena Project
« Reply #82 on: December 27, 2014, 08:11:30 pm »
Don,

That was nicely done and it's apparent you've invested a good amount of work in it so far.

I liked the speed reading part of that text file too! Nice that the bot not only "digested" the
file but found a new word as well.

Can or does it "remember" or learn new information on the fly, or does new info go into a separate file / log
for later examination by you before it becomes part of the bot's memory?

Does your bot have compartments for its memories? Long term, short term, ephemeral (short lived and without real meaning or significance - it rained today)?

Thanks for sharing your labor Don!
In the world of AI, it's the thought that counts!

*

Snowman

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 145
    • minervaai.com
Re: The Athena Project
« Reply #83 on: December 28, 2014, 07:53:37 am »
I was theorizing that the term 'meaning' is just another relationship between itself and other words. For instance, you can have synonyms, which are two words with the same meaning. Sometimes we define terms by higher or lower level words. We can define words by their characteristics. For example:

A car is a type of vehicle.
A car can carry people.
A car is made of metal parts.   

Again, these are just relationships. The word 'vehicle' is a higher level word in terms of concept when compared to car. It is a hypernym. I'll try to translate these into relationship terms.

relationship: hypernym (conceptual category)
category: car
entry: vehicle

relationship: carry
category: car
entry: people

relationship: meronym (parts of)
category: car
entry: metal parts

So meaning is created by the relationships we feed the AI, and not by some list of definitions it can recall on demand. A table can be devised that matches words to similar meanings. These words are called polysemous words. However, some words have slightly different meanings but are almost the same. That's where understanding the differences in words comes into play. I think a relationships database can provide this... maybe. I've seen a chart before that cross-matched meanings with words.

Quote
“The question then however is how the program is to recall that these relationships are related to each other when looking up who shoots who.”

That's why we need two types of relationship databases. One is the Relative Relationships, the other, Sentence Relationships. The Relative Relationships DB will contain basic information. Now let's say we parse this simple sentence “The ball rolls in circles” into our RR DB.

relationship: action
category: ball
entry: roll

relationship: adverbial (option)
category: roll
entry: circles

So here we demonstrate that the only thing this RR database will tell you is that a 'ball can roll' and that the action word 'roll' has an option of doing so 'circularly'. It does not store that a 'certain ball rolled in circles'. That is what the SR (Sentence Relationships) DB is for. In that database we will store the entire sentence relationship as one unit. Below is an example of what the storage of a sentence into a relationship structure might look like.

<sentence>
   relationship: agent
   category: <sentence>
   entry: ball

   relationship: action
   category: <sentence>
   entry: rolls

   relationship: adverbial (option)
   category: rolls
   entry: circles
</sentence>

(Just for the fun of it. Here is my VL equivalent.)
sen(agent(ball) action(name(roll) option(circle_

So if we search for 'what rolls in circles' in our SR DB we will find the complete sentence and, therefore, will discover that the word 'ball' satisfies that query.
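One way this two-database split could be sketched (Python; the function and variable names are my own invention):

```python
# Relative Relationships DB: deduplicated general fragments ("a ball can roll").
rr_db = set()
# Sentence Relationships DB: whole sentences, each kept as a unit.
sr_db = []

def learn(sentence_text, triples):
    rr_db.update(triples)  # general knowledge survives even if sentences are pruned
    sr_db.append({"text": sentence_text, "triples": list(triples)})

learn("The ball rolls in circles",
      [("action", "ball", "roll"),
       ("adverbial", "roll", "circles")])

def what_does(action, option):
    """Query the SR DB: which agent performs `action` with `option`?"""
    for s in sr_db:
        rels = {(r, c): e for r, c, e in s["triples"]}
        if rels.get(("adverbial", action)) == option:
            for r, c, e in s["triples"]:
                if r == "action" and e == action:
                    return c  # the category of the action is the agent
    return None

# what_does("roll", "circles") -> "ball"
```

The RR set only knows that balls can roll; the SR list is what lets the query recover that this particular ball rolled in circles.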

Quote
“it makes me wonder if there are more roles that an indirect object can have”

I question the validity of indirect objects. In fact, objects themselves are not always needed, as with intransitive verbs. The only thing that makes a sentence valid is the agent and action. Although either one can be implied, they are still required to make a complete thought. Objects, indirect objects, and everything else is just additional information. I define an action as a base verb: no tenses, no auxiliaries. Might be, could be, can be is just additional. You already know that a verb can be viewed as a noun, because a noun is defined as a person, place, thing, or idea. So 'The Run' is an idea, therefore making it a noun. An 'idea' can be a complete sentence, or as grammar will call it, 'an independent clause'. So this is acceptable in VL too. I like using VL because it helps me divide the information up into bite-sized pieces. I then take those pieces and can make an SR or RR database more easily.

The running of the bulls is scary.
sen(agent(bull(plural) agent(run(progressive))) assign(scary))
or
sen(agent(running of bulls) assign(scary)) 

Quote
“5 seconds after I set the alarm, it went off”
Here's my attempt at writing this in VL :P
 
sen(cause(segment(agent(I) action(set) patient(alarm))) effect(segment(agent(alarm) assign(state(option(off))) delay(5, seconds_

Now let's try to rewrite it in relative terms. Yep, this would be my first attempt at writing a complex sentence in these terms. So this might be scrapped for something better later on. Brain starting to smoke a little. Where's Art when you need him? ;)

Basically, I'm trying to show how a complex sentence can be stored in terms that can be easily mined for data. The RR DB is just a general knowledge database that saves parts of sentences. It would be a smaller database and potentially faster. However, the SR DB will be more complete, not losing information from any sentence it has learned. Although, over time this database would grow huge and potentially be very slow to search.

Here it is in Relative terms:
<sentence>

   relationship: cause
   category: <sentence>
   entry: <cause/segment>

   relationship: effect
   category: <sentence>
   entry: <effect/segment>

   relationship: delay
   category: <sentence>
   entry: 5 seconds

   relationship: agent
   category: cause/segment
   entry: I

   relationship: action
   category: cause/segment
   entry: set

   relationship: patient
   category: cause/segment
   entry: alarm

   relationship: agent
   category: effect/segment
   entry: alarm

   relationship: assign
   category: effect/segment
   entry: <effect/segment/assign>

   relationship: state
   category: effect/segment/assign
   entry: off
</sentence>

Quote
“but I'm not sure if those are the only relationships one could have between events. “

There are actually several other word categories (I've been calling them keywords) that I did not list besides patient, instrument, recipient, time, etc. I went pretty far working on this visual language. I guess I should attempt to make a video showcasing the grammar. Then I can compare it to what I'm learning with English grammar.

Discussing all this is helping me get a better grasp of it all. Although, I can't really describe the big picture that I already have in my head. 

A bird might fly.
sen(agent(bird) action(fly(possible(might_

*

Snowman

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 145
    • minervaai.com
Re: The Athena Project
« Reply #84 on: December 28, 2014, 08:06:41 am »
Hey ranch vermin

Quote
In levels of complexity->
1) simple query based system
2) running monologue  (like a word predictor, except this time from a knowledge base.)
3) chat bot
4) full narrative sim, with people talking inside it, with motives.

Give me some examples of what you mean.
I suppose you mean this for #1
User: can a bird sing?
Ai: Yes.

#2
User: I ate some pie today?
Ai: I guess you will eat pie tomorrow.

#3
User: I found a rock in my shoe.
Ai: You can find rocks anywhere.

#4
Ai: Boy, I'm very bored today.
Ai: Well, I guess I shouldn't be.
Ai: I wonder if Art is reading this?


*

Don Patrick

  • Trusty Member
  • ********
  • Replicant
  • *
  • 633
    • AI / robot merchandise
Re: The Athena Project
« Reply #85 on: December 28, 2014, 11:13:27 am »
 :) That sounds like a plan. So you're basically clustering related relationships by means of a "sentence relationship" database, while also keeping a fragmented "relative relationship" database (which is most useful for inferences). I suppose I sort of have that too, except I store only the relative relationships permanently on file and only keep the sentence relationships around during runtime for conversational purposes, and because of that I have greater difficulty piecing things back together again. The benefit is a compact database, but with your idea, perhaps you could make the sentence database as memory address pointers to the relative relationships or something. I like where this is going.
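That pointer idea could look something like this rough Python sketch, with row IDs standing in for memory addresses (all names hypothetical):

```python
# Store each relative relationship once; sentences are just lists of row IDs.
rr_rows = []   # id -> (relationship, category, entry)
rr_index = {}  # triple -> id, for deduplication

def intern(triple):
    """Return the row ID for a triple, adding it only if unseen."""
    if triple not in rr_index:
        rr_index[triple] = len(rr_rows)
        rr_rows.append(triple)
    return rr_index[triple]

sentences = []  # each sentence: a list of rr row IDs

def learn(triples):
    sentences.append([intern(t) for t in triples])

learn([("agent", "<s>", "a dog"), ("action", "<s>", "runs")])
learn([("agent", "<s>", "a dog"), ("action", "<s>", "barks")])

# "a dog" is stored once; both sentences reference the same row ID.
shared = sentences[0][0] == sentences[1][0]
```

The sentence database stays compact because repeated fragments are never duplicated, while the full sentence structure can still be pieced back together from the IDs.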

I try not to diverge too much from Snowman's project here, Art ;), but I appreciate your interest. The answers are: It stores new facts to its memory files on the fly, including mistakes (hence no public access), from which moment on it can recall them. I do monitor its processes but only correct things afterwards when necessary. The AI has one single perpetual, corrigible, time-indexed memory. It only marks some completely unspecified or hypothetical subjects for deletion from memory later, something I have yet to automate. At this stage of testing I just reset its memory every now and then. Memory division is an optimisation that I haven't needed yet.
CO2 retains heat. More CO2 in the air = hotter climate.

*

ranch vermin

  • Not much time left.
  • Terminator
  • *********
  • 947
  • Its nearly time!
Re: The Athena Project
« Reply #86 on: December 28, 2014, 02:18:43 pm »
I just meant you need to do something with the data once collected, whatever it is. :) but it's obvious. hehe

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: The Athena Project
« Reply #87 on: December 28, 2014, 04:46:43 pm »
Yes Snowman, I am always reading and watching and lurking in the shadows of...ok...nevermind.

Looks interesting but how would your system handle some of the tough ones like, "Next Thursday there will be tryouts for the choir. They need all the help they can get!" or "I shot an elephant in my pajamas" or "Time flies like an arrow - fruit flies like a banana."?

Enough of the play-on-word examples, but also as important as the double meaning of record, wind, wound (etc.) and these words: two, too, to, won, one, etc.

How about one dismissive category, as we've experienced (briefly  ;) ) in the past... ephemeral words / phrases / sentences? These can actually be as bad as those ridiculous Facebook and Twitter postings regarding every facet of a person's life, as if other people really cared that you walked your turtle or bathed your goldfish!  "I ate some pie today. - I found a rock in my shoe."  These, while included in conversation to perhaps help embellish an experience to another, are certainly not important and not worthy of being retained by a bot for future use. (Unless it's giving some advice to beginning rock climbers to wear well-laced boots to avoid getting rocks or stones in one's shoes.)

Years ago there was a section that dealt with Ephemeral knowledge in the Zabaware forums and why it was important to not keep unimportant information.

Question is, how will or does your bot handle it?
« Last Edit: December 29, 2014, 01:33:16 am by Art »
In the world of AI, it's the thought that counts!

*

ranch vermin

  • Not much time left.
  • Terminator
  • *********
  • 947
  • Its nearly time!
Re: The Athena Project
« Reply #88 on: December 28, 2014, 07:14:29 pm »
Maybe it would be good to keep its knowledge factual; it has to know what's reality and what are silly metaphors.
It's not important to me to handle that.  (Watson is the AI handling the cryptic questions.)

I just want so much knowledge, and for it to tell me random, never-ending, objectively simple stories (kiddie books) about it... while I go about mine. Then have people speaking inside the narration, and then I can take the narration from the speaking object's point of view, talking to the user.

That's where I'm at now; recording playback chains is where I came from... this is rough...
« Last Edit: December 28, 2014, 10:32:30 pm by ranch vermin »

*

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Trusty Member
  • **********************
  • Colossus
  • *
  • 5865
Re: The Athena Project
« Reply #89 on: December 29, 2014, 01:45:06 am »
ranch,

That is sort of my point exactly. We don't need EVERY bit, snippet or blurb of gossip or tripe. What we do need is info or, perhaps, access to the information. Most don't have access like Watson, which had access to 200 million pages of structured and unstructured content consuming four terabytes of disk storage, including the full text of Wikipedia. It had NO Internet access during the games.

If you want "factual" up-to-the-minute information then a connection to the net would be in order. If you simply wish for your bot to be a stand-alone chatbot for personal use, then you could buy a 1 terabyte drive for less than $100 and construct a pretty significant database of knowledge.

One other consideration is the code for separating the good from the bad. The net is full of information of all kinds, but not all of it is acceptable or usable information.

It is no easy task and some have been at it for quite some time and still no concrete solutions. Time changes and so do ideas, structures, methods and technology. Write it, test it, modify it, get others to test it, release it or just have fun with it and learn from it. At this point, the bot really doesn't care. ;)
In the world of AI, it's the thought that counts!

 

