Also very complicated at first glance
Yes, the learning curve might be a bit steep. It's all seriously out of the box. I was hoping to alleviate some of that pain with a good UI. I'm also not expecting many people to ever build networks completely from scratch, as I did, but rather to add to the already existing code (for instance, make AICI copy files, based on an example), or to take a few existing modules to create a new network (for instance, take the 'core functionality', a module to do calculus and a module for visual processing). People with a programming background should find many things familiar, like variables, assignments and conditional statements (if/do/while/for each). At its core it's very similar to a tuned-up stack machine.
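The thread doesn't spell out the execution model, but the "tuned-up stack machine" comparison can be illustrated with a minimal sketch. Everything below (the instruction names and the `run` helper) is my own invention for illustration, not the network's actual instruction set:

```python
# Minimal stack machine sketch: instructions push/pop values on an
# operand stack, variables live in an environment, and a conditional
# jump gives you if/while-style control flow.
def run(program, env=None):
    env = env or {}          # variable storage (assignments)
    stack, pc = [], 0        # operand stack and program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "load":
            stack.append(env[args[0]])
        elif op == "store":
            env[args[0]] = stack.pop()
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "jump_if_false":  # conditional statements build on this
            if not stack.pop():
                pc = args[0]
                continue
        pc += 1
    return env, stack

# x = 2 + 3
env, _ = run([("push", 2), ("push", 3), ("add",), ("store", "x")])
```

A real network would of course have far richer instructions, but the variable/assignment/conditional vocabulary maps onto this model directly.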
Have you managed to get the correct answer yet?
Almost. I bumped into a small delay this weekend (a bug fix caused the core to run a little faster, revealing a few new deadlock situations, which had to be fixed to get back to testing). This week, the bugs should be removed from the count algorithm if there are no more unexpected issues. Of course, I will then have to move the search code over to the noun resolvers, which don't have the fancy search yet.
Have you tried using easier input:
Fewer statements are not necessarily easier for AICI. It doesn't matter how many or how few you give the algorithm. What's important is the structural complexity of the statements, their order, their completeness and whether they can be learned ('on' is not yet handled correctly by AICI, and it's a word whose meaning can't be learned; it needs to be coded). Also, I wanted to test a few more (hidden) situations that aren't included in your statements. Here's a detailed description of each statement:
EG:
User: I have 2 hands.
User: Each of my hands, has 5 fingers.
User: How many fingers do have on 2 hands?
Also have you tried putting in some basic mathematics.
This is actually easier to implement than the counter. The network has mathematical instructions (+ - / * %), though AICI doesn't yet know the word 'plus'. Once it knows this, it is simply a matter of attaching the 'add' instruction.
EG:
User: what is two plus two.
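As a rough illustration of "attaching the 'add' instruction" to the word 'plus', here is a toy dispatch table. The table and function names are hypothetical, not AICI's actual code:

```python
import operator

# Hypothetical word -> instruction table: once the system "knows" the
# word 'plus', answering "what is two plus two" is just dispatching the
# parsed operands to the attached arithmetic instruction.
WORD_TO_OP = {"plus": operator.add, "minus": operator.sub,
              "times": operator.mul}
NUMBER_WORDS = {"two": 2, "three": 3, "five": 5}

def eval_phrase(words):
    a, op_word, b = words                 # e.g. ["two", "plus", "two"]
    return WORD_TO_OP[op_word](NUMBER_WORDS[a], NUMBER_WORDS[b])
```

The hard part in the real system is the parsing, not the arithmetic, which is presumably why this is easier than the counter.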
User: I have 2 hands.
This would almost work. I still have to do 'each', similarly to 'both'. If you were to leave out that word and the comma, it would work. You basically declared the smallest amount of data that needs to be supplied to do the calculation.
User: Each of my hands, has 5 fingers.
User: How many fingers do have?
I must confess that although I have read your thread, it was difficult for me to understand what is actually going on.
Yeah, the blog structure doesn't help, since you see the end first. Perhaps it would be easier to start with the earlier posts. The application's help file also tries to explain some of it, and the white paper explains yet another part.
To me, it becomes more interesting when I can try out things like:
-a human has 2 hands, 5 fingers each -> multiple items not yet handled correctly
-i am a human
-i am missing 1 finger -> negation is not yet handled correctly.
or: my left hand only has 4 fingers -> overwriting of abstract data in the concrete is not yet done
-how many fingers do i have.
@sunama: In case you're interested, AICI can now count stuff.
already done, check http://janbogaerts.name/ (http://janbogaerts.name/)
I think the previous version can already do that. You can try the AICI client. In any case, I'll be releasing a new version of the client tomorrow or so, then you can try it for yourself.
That said, the association is done similarly to how a thesaurus works. In fact, in the designer, this data is displayed (and can be edited) in the form of a thesaurus.
When an unknown word is found that is parsed as the 'value' of something and the 'what' that got extracted out of the sentence is 'color', the network asks the word 'color' to handle this unknown (in other words, the word 'color' needs to tag and bag the word 'green'). 'Color' creates the thesaurus relationship and indicates that the word 'green' can be used as adjective and noun.
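A toy model of this "tag and bag" step might look like the sketch below; the data structures are my own simplification, not the network's:

```python
# When 'green' is unknown and the parse says its 'what' is 'color',
# the word 'color' is asked to handle the unknown: it registers the
# new word in the thesaurus and marks its possible parts of speech.
thesaurus = {}   # head word -> set of related (narrower) words
pos_tags = {}    # word -> set of parts of speech it may be used as

def handle_unknown(head, unknown):
    thesaurus.setdefault(head, set()).add(unknown)
    pos_tags[unknown] = {"adjective", "noun"}  # as described in the post

handle_unknown("color", "green")
```

In the real network, the thesaurus relationship and the POS tags would of course be neurons and links rather than dicts.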
Is this association done automatically or does the user have to feed the AI with the relationship between 'color' and 'green'?
This depends. Currently, when you say 'my eyes are green' and 'green' is not yet known, it will always ask what green means, even though you had previously said something like 'we are now going to talk about colors'. Though this is technically fairly simple to add, I just haven't gotten that far yet.
How does AICI know that 'colour' has to handle 'green'?
Perhaps first a little bit about the network: you can look at it as a free-form database. If you want to create a structure (record, class,...), you can have a neuron representing the object. Fields + values are done with links: the field name is represented by the link meaning, the value is where the link points to. A table (list, collection,...) is done with clusters, which are simply specialized neurons that can also be used as the object, value or field name.
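As a rough analogy for this free-form database (neurons, links whose meaning is the field name, clusters as tables), here is a sketch with class names of my own choosing:

```python
import itertools

# A neuron is just an identity; a link carries a 'meaning' neuron (the
# field name) and points at a value neuron; a cluster is a specialized
# neuron that additionally owns children (a table/list).
_ids = itertools.count()

class Neuron:
    def __init__(self, label=None):
        self.id = next(_ids)
        self.label = label
        self.links = {}        # meaning-neuron -> value-neuron

class Cluster(Neuron):
    def __init__(self, label=None):
        super().__init__(label)
        self.children = []     # ordered members, like a table

# record: person.color = green  (field name is the link meaning)
person, color, green = Neuron("person"), Neuron("color"), Neuron("green")
person.links[color] = green

# a list done with a cluster
colors = Cluster("colors")
colors.children.append(green)
```

Note how the same `green` neuron can be a link value and a cluster member at once, which is what makes the database "free-form".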
Right now, I am concentrating on getting my AI to retrieve data. Once I have this reliably complete, then I shall move onto calculations.Ok, I understand.
Right now, my AI doesn't classify each single word as a verb, adjective, noun, etc. as I feel this is a bit superfluous.
How are you handling multiple 'meanings' of a word? For instance, 'mail' can be a verb or a noun, and when it's a noun, it can mean multiple things. There are very few English words that have only 1 meaning (even 'be' has multiple). I don't just classify into noun, verb,...: I classify for each single possible meaning.
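A per-sense lexicon, as opposed to a single POS tag per word, could be sketched like this (the entries are invented for illustration):

```python
# Each word maps to a list of senses; each sense has its own part of
# speech and gloss, so 'mail' is three classifiable meanings, not one.
LEXICON = {
    "mail": [
        {"pos": "verb", "gloss": "to send by post"},
        {"pos": "noun", "gloss": "letters and parcels"},
        {"pos": "noun", "gloss": "armour made of rings"},
    ],
}

def senses(word, pos=None):
    """All senses of a word, optionally filtered by part of speech."""
    return [s for s in LEXICON.get(word, [])
            if pos is None or s["pos"] == pos]
```

Disambiguation then becomes choosing among these senses from context, which is exactly where the processor splitting described later in the thread comes in.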
This would require immense memory/hard disk/cpu power
This is true. Might I suggest keeping an eye on this, since it is the determining design factor of the core: it's not like you can keep all your data in memory, and existing (relational?) database systems tend to be far too slow.
With all due respect, a car doesn't have tiers...it has TIRES.
My bad, spelling has clearly never been my major strength. I am not very good at it in Dutch, which is my native tongue, nor am I any good at it in English. I am even worse in French and German, but if need be, I'll manage. How well do you spell in other languages?
While there are many words that sound alike and some that are
Other people's bots I have come across indeed don't appear to handle this very well. Please note, this is not some future feature I would like to build in: it was one of the core design issues. AICI can already do this, so thanks for the luck, but it's not needed ;D.
spelled the same but have different meanings it is difficult to
"teach" that information to a bot.
Good luck with that.
got it, let me make a small demo with 'mail' as in :
send a mail to x or print mail and mail x or something similar.
or what about this:
I am blue -> feel down
my eyes are blue -> color.
This last example is already possible, since there is no special verb involved.
The break is well placed
Are you a student? And if so, what courses are you taking?
I'm not a student.
I've got to be honest. If you were cooking meat and asked me to go buy the rub I'd have to ask you what the hell you were talking about. ;D
copy 'x' to 'y'It's a bit similar to Art's 'record' example, but a bit closer to the pc world ;)
delete the copy
Art, were you thinking of something like this:
(https://aidreams.co.uk/forum/proxy.php?request=http%3A%2F%2Fjanbogaerts.name%2Fwp-content%2Fuploads%2F2010%2F06%2Fimage2.png&hash=cbe521421b7aff115d69e5553224e45e0f8ab9e9)
Q3. Can your AI relate words yet, in the follow sense:
user: my eyes are green.
user: what colour are my eyes?
Can your AI relate that green is a colour?
Yep, TrueAndroids is getting it. If 'green' was known, it would simply store the data. If 'green' was unknown, it would ask what it is.
One difference: to request the info again, you'd have to ask 'what color are my eyes'.
Also, is your AI written in Prolog?
No, AICI is made with my neural network designer. The neural network designer is a WPF app; the core is currently C#.
Q1. What sort of duration is it taking for your AI to reply to your questions? Less than 1 second?
This depends greatly on the complexity of the sentence, the number of filters/shortcuts I have been able to add, and the hardware it is running on.
Can you give us some idea please?
Speaking of colors (or colours)...
A person can be GREEN with envy...
so sad that they're BLUE or
so angry, they're seeing RED or
so happy they're tickled PINK or
so afraid to try something they're YELLOW.
So you see...colors can exceed the limits of a crayon box....;)
OK... don't get me started on the or, oar / to, two, too / Won, one / tri, try / rite, right types of words.... :D
** I also agree that this thread would be a good addition to the Concepts section Freddy!
Initially, all input always starts on 1 processor: it's the processor itself that splits into multiple instances, all of them identical to the first, but with 1 small difference in each. For instance, if a processor wants to transform the word 'call' into its part of speech and finds 2 possibilities (that can not yet be reduced to 1), the processor will perform a split, so both possibilities can be tried.
It's exactly this trick which is not possible on current hardware (there is no x86 instruction to perform this, and it's impossible to set up a single core correctly for it using any combination of x86 instructions).
Surely by using 2 (or more) threads, you can achieve your aim of splitting up the analysis of the input data?
In short: no. Why: the analysis of 1 input string isn't split over multiple processors; the string is analysed in multiple different ways. When we begin, we don't know in how many different ways the analysis needs to be done.
It then decides to split the calculation.
The calculation isn't split. A single processor still needs to process the entire sentence. It is not that I divide the workload and put the final result back together: 1 processor calculates the entire result on its own.
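The splitting behaviour described above (clone the whole processor per ambiguous reading; each clone analyses the entire sentence on its own, with no divide-and-conquer) can be sketched as follows. The tiny POS table is invented for the example:

```python
import copy

# When a word has several possible parts of speech, the processor
# clones itself, one clone per reading, and each clone carries the
# WHOLE analysis forward on its own.
POS = {"call": ["verb", "noun"], "me": ["pronoun"], "now": ["adverb"]}

def analyse(words, state=None):
    state = state or []                  # POS choices made so far
    if not words:
        return [state]                   # one finished analysis
    results = []
    for pos in POS.get(words[0], ["unknown"]):
        clone = copy.deepcopy(state)     # the "split": identical copy,
        clone.append((words[0], pos))    # with 1 small difference each
        results.extend(analyse(words[1:], clone))
    return results

analyses = analyse(["call", "me", "now"])  # 2 readings of 'call' -> 2 analyses
```

Here the clones run sequentially; the point of the real design is that each clone is an independent processor, which is exactly the step ordinary x86 threads don't give you for free.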
Now, also, at present, if I gave the following input:
My name is Abu.
What is my name?
How many threads (approximately), will be used to derive a response to the 2nd line of input?
Hi, Sunama,
Thanks for asking. Yeah, I guess I lost track of time a little. I've been as busy as a bee, working on the next release. You probably beat me by a week or so.
Anyway, lots of stuff has changed, both in Aici and the designer. I've mostly focused on improving current stuff (no new tricks like counting). I've added 2 new stages to Aici: one to do a grammatical analysis of the possible results (used for things like making a distinction between 'a' as an article/determiner or as a noun, like the letter). The other stage is for semantic processing. Where I previously only did a simple conversion, I now have a complete stage that allows me to do further filtering, based on semantics, though this needs further work (I'm currently doing the analysis in a fixed order, which I think needs to be changed to a variable one). I've also changed the final output stage into something more elaborate, with 3 sub-stages to simulate conversation techniques (you know: acknowledge what was said, state your own opinion about what was said, formulate a new question. If only I could do that). Unfortunately, I'm not yet making use of this last technique in the output rendering as I should.
The time spent on the designer and core mostly went into getting the 'prototype taste' out of them. I did some memory and performance profiling, which has so far produced some impressive results. I've also managed to find a few very nasty threading bugs, but there's at least 1 more left.
And finally, I've also made a web-based version of Aici. I hadn't done any announcements on that one yet, because it was still behaving funky at times, though this should be better now.
Pffjeuuwie. That was more than I imagined it to be in my head. Anyway, in a week, perhaps 2 I should be doing a new major release.
As previously mentioned, a new release was imminent, so here we are. I've updated the designer, Aici Client and also the Aici-web (http://janbogaerts.name/index.php/2010/11/01/nnd-0-5-1/).
The next step will most likely be complex sentences. I need them to make the system more 'teachable'.
Could you perhaps tell me what you did exactly?
Never mind, I did a quick try and I think I know what you saw.
Hi Sunama,
Could you perhaps tell me what you did exactly? I can't seem to find the log in which something goes wrong due to out-of-order processing. That said, I don't suppose it's hard to get such behaviour with the web-based version at the moment. I am getting the type of time-outs that suggests the server is overloaded.
You are correct though, Aici currently doesn't have anything in place to make certain that the input order is preserved during execution (which it should).
Disabling the send button itself would perhaps be a bit too much. What would happen if there was no reply? Its parallel processing method makes it not so evident to keep track of which thread belongs to which input (though it is possible to add this if really required). That, together with the fact that multiple users can chat at the same time, makes it not so convenient to disable the send button during processing.
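For what it's worth, one common way to preserve input order despite parallel processing is to tag each input with a sequence number and hold back finished replies until their turn. A minimal sketch (not Aici's actual mechanism):

```python
import heapq

# Replies may finish out of order, but are released strictly in the
# order their inputs were tagged.
class OrderedReplies:
    def __init__(self):
        self.next_seq = 0        # next sequence number to hand out
        self.release_at = 0      # next sequence number to release
        self.pending = []        # min-heap of (seq, reply)

    def tag(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def finished(self, seq, reply):
        heapq.heappush(self.pending, (seq, reply))
        released = []
        while self.pending and self.pending[0][0] == self.release_at:
            released.append(heapq.heappop(self.pending)[1])
            self.release_at += 1
        return released

o = OrderedReplies()
a, b = o.tag(), o.tag()
held = o.finished(b, "second")    # held back: 'first' isn't done yet
both = o.finished(a, "first")     # releases both, in input order
```

This keeps the send button enabled and works per-conversation even with multiple simultaneous users.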
user: my eyes are green
Sunama, it turned out there was still a bug in the prev version which made this fail (I missed it originally). Things like this should work now.
user: what colour are my eyes?
AI: confusion!!!
It couldn't process this information.
WMM is ok, if a little clunky, but it's usable.
Yes, I was missing a little more control over timing, but for my purposes, and skill level, it's just fine.
I really like the way you can teach it (can I say 'it'?) and the way things are learned.
Oh, it's definitely an 'it': more like a string of tricks, so I'd label it more of a recording device, for the moment.
so, would the solution be:
Try something like:
usa is a country
usa has a president
the current president of usa is obama
would this work?
//callback for var-path
//builds a new input pattern, based on the last input statement found in the log.
//The input pattern is added to a new rule (which is added to a special 'learned' topic).
//The rule is linked to the CallbackIn, which is an output pattern.
//CallbackOut will contain the newly created input pattern when done (single textneuron).
Cluster LearnPattern(code)
{
    this()
    {
        var iInputs = GetLastInputFromLog();
        int iCount = Count(iInputs);
        if(iCount > 0)
        {
            string iSpace = " ";
            CallbackOut = CCToS(Interleaf(ref(iInputs), ref(iSpace))); //convert the list of words back to a single sentence, each word separated by a space, so we have a visual representation for the pattern.
            var iCurrent = iInputs[0];
            AddLink(CallbackOut, iCurrent, ParsedPatternStart);
            for(int i = 1; i < iCount; i++)
            {
                var iNext = new(Neuron);
                AddLink(iCurrent, iNext, iInputs[i]);
                iCurrent = iNext;
            }
            AddLink(iCurrent, CallbackOut, Rule);
            var iRule = MakeCluster(Rule, CallbackOut); //add it to a new rule
            var iOutput = MakeCluster(OutputPatterns, CallbackIn); //link the rule with the output.
            AddLink(iRule, iOutput, OutputPatterns);
            AddChild(AutoLearned, iRule); //add the rule to the default topic.
        }
    }
    GetLastInputFromLog(): var
    {
        var log = GetFirstOut(OutputSin, CurrentConversation);
        if(Count(log) > 0 && ChildCount(log) > 2)
        {
            var iFound = ChildAt(log, ChildCount(log) - 3);
            return GetChildrenRange(iFound, 0, ChildCount(iFound) - 1); //each log item also contains a date-time stamp at the end, which has to be removed.
        }
    }
}
//callback for var-path
//builds an output pattern based on the content of CallbackIn and returns this (through CallbackOut).
Cluster BuildOutputPattern(code)
{
    this()
    {
        int iCount = Count(CallbackIn);
        if(iCount > 0)
        {
            CallbackOut = CCToS(CallbackIn); //convert the list of words back to a single sentence, to form the neuron that represents the pattern (also useful for the editors).
            var iParsedPattern = MakeCluster(ParsedPatternOutput, CallbackIn);
            AddLink(CallbackOut, iParsedPattern, ParsedPatternOutput);
        }
    }
}
You: kiss kiss
bot: What should I reply to that?
You: kisses for you too
bot: Ok, I see. Ask the question again, see if I got it.
You: kiss kiss
bot: kisses for you too
Yes, that's still for the desktop version. Perhaps this can point to the download page: http://bragisoft.com/download/ (http://bragisoft.com/download/)
Aici is an infant competition and showcase bot for resonating neural networks. Its abilities and performance will be improved as time goes by and experience is gathered from competing against other bots.
/*
Activates a series of functions to modify the asset defined in CallbackIn.
Then looks up a pattern, based on how the value for 'CallbackIn' maps to that pattern. CallbackIn should be an asset; the attributes of all the children make up the pattern name.
Finally, activates the 'Output' section of the pattern that was found (which can be a topic or rule) and returns the result that was rendered. When the input was a topic, the 'questions' section will be activated; if it was a rule, the 'conditionals/output' section will be rendered.
You can optionally specify an argument, value 'true' or 'false', to indicate if the do-patterns also need to be executed or not.
Execution is done in another thread, to make certain that any var changes done by the rendering don't affect any other parts.
*/
Cluster CreateOutput
{
    this()
    {
        var iFound = ^Bot.TextRenderers.*; //get all the callbacks that need to be called before getting a pattern to render.
        foreach(iFound in iFound)
        {
            PushValue(CallbackIn); //need to manually add the function arguments for the callback.
            PushValue(^(iFound).Args);
            call(^(iFound).Callback); //a renderer is an asset with children: Callback (value = the cluster to call) and 'Args' (possible extra values for the callback, supplied by the user). The callback usually modifies the cluster found in CallbackIn, so pass that along as well.
        }
        var iPath;
        foreach(var iChild in CallbackIn) //get all the attribute values so we can build the name of the pattern to activate.
            Add(ref(iPath), GetFirstOut(iChild, Statics.Attribute));
        iPath = GetCommonParentWithMeaning(Statics.CompoundWord, iPath); //get the compound from all the pattern names; this links to the pattern.
        if(Count(iPath) > 0)
        {
            var iPattern = GetFirstIn(iPath, Statics.NameOfMember);
            if(Count(iPattern) > 0)
            {
                var exec = New(neuroncluster);
                var callresult = New(neuroncluster);
                if (Count(callbackargs) > 0)
                    AddChild(exec, callbackargs);
                AddLink(exec, callresult, Render);
                AddInfo(exec, callresult, Render, OutputSin, PatternMatcher.SplitResults, iPattern);
                BlockedSolve(exec);
                CallbackOut = Union(CallbackOut, GetChildren(callresult));
                Add(ref(PatternMatcher.ItemsToFreeze), GetChildren(exec));
                Delete(exec, callresult);
                return; //it worked, let's get out of here so that no error is rendered.
            }
        }
        Error("Can't map the path '" iPath "' to a pattern");
    }
}
I see you have a new team member - congratulations
Yep, thanks. He offered to help, which is always welcome.
I was wondering if your bot can run as a web service yet? So say I have it set up on my server, I can query the bot in a similar way to how Pandorabots works?
Yes. Though it's been a while since I tested this feature. There is currently a simple API interface (I believe there is 1 for regular text and 1 for XML). It's also pretty easy to add new API interfaces.
Also, how is the character creator going and what is in the pipeline, if anything?
That's currently in the deep-freezer. I have too many other things to do right now, like finishing up the integration of the full code features...
I'll attach the file in case you have some time to look at it
Great. I'll have a look at it. I'm currently working on this bit anyway: I've changed a lot of stuff getting the system AIML compatible, including the 'if', 'while',... stuff. I already fixed a lot of things but haven't made a new installer yet.
Sometimes when there is an error(?) the chatbot avatar disappears.
Yes, this is something I also noticed sometimes, but I've never dug into it yet.
One other thing is that my Ivona Emma voice is not working. I have it selected but it's still using the old Microsoft Anna voice.
The system allows you to switch between managed and unmanaged MSAPI (tools/options). Perhaps try the other setting from the one you have now.
I have a few questions on the web deployment side, but I don't want to bombard you!
No worries, just ask.
I think I already fixed the problem but haven't released it. When I loaded up the aiml file, it worked fine for me, no errors and always another bit of trivia.
Ohh,.... got it to work with the Emma voice, here are the settings :)
Glad to hear you got it working.
I found a change in the naming of animations. Tyler was not blinking. I had this in the CSS file: <AnimationName>New Animation</AnimationName>
Which I changed to: <AnimationName>blink</AnimationName>
(And a second occurrence too)
Now she blinks again, looks like spaces are not supported now. I only mention this in case it affects your documentation...
As for the web version, what does my server need to have? I am assuming it has to be an MS server?
Yes. You need an MS web server that supports MVC 3 (which is normally the case) and .NET 4 (these days also not a problem). At least, these are the current specs. I have not yet tried this, but it should be possible to build a web server in Mono and have it run on Linux.
When scrolling through larger sets of data it freezes, and sometimes it sends you to the bottom of the data. So it's difficult to navigate.
Yes, this is a problem caused by the WPF library. With larger topics, I need to turn on 'virtualization' or it uses tons of memory and takes a couple of minutes to load, but then you get the problem you just described. I know how it can be solved, but that requires a couple of months to fix (basically a complete re-write of the UI component, using the lowest level of drawing APIs instead of default WPF).
The 'find' feature seems to freeze the program, unless I am not using it right. Say I load 'update1' and use 'find' to search for 'What is your name'. The search results appear, but when I click on one the whole thing freezes.
Yes, I recently fixed an issue with the search features that had crept in there after adding some of the features for AIML support. I haven't yet tested it again on the Alice set though, which is fairly big. I might also have to add some speed improvements here and there.
So I currently solved it by providing a master-detail view, which remains fast. In the editor's local toolbar, the last 2 buttons allow you to switch between views.
About the webserver: I haven't tried this, but there is this: http://www.mono-project.com/Mod_mono (http://www.mono-project.com/Mod_mono)
You: what is my name ?
bot: Your name ? Is that correct English?
You: My name is Freddy
bot: How can I help you,#user.name:nameorvalue.
You: What is my name ?
bot: I don't know. What is your name?. I'm so happy we are having this conversation.
Still getting the avatar disappearing...
Yes, I hadn't done anything about that yet. Do you know how to easily reproduce this?
Notice how the variable is used rather than the name supplied. And it looks like the username is not being recorded.
Yep, something's definitely going wrong there. I'll have a look into it. Thanks.
In general it's really looking good though, well done
Thanks.
So, I think I fixed most of the issues with the AIML importer and the character engine. There's a new release available from the download site.
Freddy, you are working with an SSD hard drive, right? How is the operational speed of Alice on your computer? There is currently still a bottleneck when a large block of data needs to be read from disk. I have some possible solutions in mind, but I was wondering what happens on SSD disks.
I've also written a series of articles about the library itself:
If you mean generally chatting to the bot, then it is fine; responses all seem to come within a second or less as far as I can tell.
That's interesting. Not on my pc. It all depends on the order. If I start with something like 'What is your name' ('what is' is often defined in Alice), I get a slower response, but because of the buffering this gets better, though it can occasionally drop again.
OK, busy today so will take a look asap.
No worries....