NND & AICI

  • 155 Replies
  • 51330 Views
*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #120 on: July 17, 2012, 01:50:10 pm »
 O0

*

victorshulist

  • Trusty Member
  • ****
  • Electric Dreamer
  • *
  • 118
Re: NND & AICI
« Reply #121 on: July 23, 2012, 09:00:28 pm »
Good to know you are continuing work on your project.

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #122 on: July 24, 2012, 11:22:24 am »
Yep, I think it's about to enter an interesting phase.  I'm currently finishing up 'refactoring' the source code that I exported out of the network (it was a bit of a monolithic chunk of C-like text that needed some touching up to make it more readable). Next on the list: get a new language feature up and running (dubbed 'bindings', which basically replaces the structured types found in many other programming languages). This will bring the '^noun.text' and '#bot.mem.who' style of writing, currently only available in the pattern definitions, to the network's scripting language. After that, get the dreadful 'sync' feature out of the Android version, which will be possible once I've replaced the 'do-pattern parser' with the full scripting language (so that 'if' and 'while' type statements are available in the 'do-patterns').
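
(As an aside, to give a rough idea of what those dotted-path bindings amount to, here's a small sketch in plain Python; the data layout and names are made up for the illustration, not how things are actually stored in the network.)

Code
# Illustrative sketch only: dotted-path 'bindings' resolved against nested data.
# The data layout and names here are invented for the example.

def resolve_binding(root: dict, path: str):
    """Walk a dotted path like 'mem.who' through nested dictionaries."""
    node = root
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            raise KeyError(f"binding '{path}' failed at '{part}'")
        node = node[part]
    return node

bot = {"name": "Aici", "mem": {"who": "Freddy"}}
print(resolve_binding(bot, "mem.who"))   # -> Freddy
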
And then, finally, the interesting bit: separate the input pattern definitions from the output patterns and put an extra layer in between that generates or forms the output. The input patterns will be defined the way I've already started with the Aici bot (in the form of 'subject-verb-object'), and the output patterns will be in a similar form. In between, there can be several different layers, like 'not', 'likes/dislikes', introvert/extravert or 'uncertainty' layers, each of which gently modifies the output.
With the right layers in between, you could have the start of a creative bot (the output doesn't need to be rendered as text, but could be transformed into other things).
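
To sketch that decoupling in plain Python (the layer names, the 'frame' data shape and the rendering step below are made-up simplifications, not the actual network code):

Code
# Sketch of an input -> layers -> output pipeline; layer names and data shape are assumptions.

def negation_layer(frame: dict) -> dict:
    if frame.get("negated"):
        frame["verb"] = "do not " + frame["verb"]
    return frame

def uncertainty_layer(frame: dict) -> dict:
    if frame.get("certainty", 1.0) < 0.5:
        frame.setdefault("hedges", []).append("maybe")
    return frame

LAYERS = [negation_layer, uncertainty_layer]

def create_output(frame: dict) -> str:
    for layer in LAYERS:                      # each layer gently modifies the frame
        frame = layer(frame)
    hedge = " ".join(frame.get("hedges", []))
    parts = [frame["subject"], hedge, frame["verb"], frame["object"]]
    return " ".join(p for p in parts if p)

print(create_output({"subject": "I", "verb": "like", "object": "music",
                     "certainty": 0.3}))      # -> "I maybe like music"
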
Also, I'm not yet certain what to call this new programming language. I was originally thinking of 'neural network language', but its syntax is very much C-based (more C# than C or C++), so I'm now thinking of perhaps calling it neural C. In the end, it probably won't matter much, as I'll most likely be the only one ever using it (though I believe it is an incredible platform to develop any type of AI with).

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #123 on: August 09, 2012, 12:47:13 pm »
Small update: the 'bindings' feature in the new programming language is almost working; I'm currently getting the bugs out. I've also written the initial code snippet responsible for decoupling input from output and putting the 'creativity' in between; the actual layers in between haven't been written yet.
Here's what the output-creation algorithm looks like at the moment (not tested):

Code
/*
    Activates a series of functions to modify the asset defined in CallbackIn.
    Then looks up a pattern based on how the value of 'CallbackIn' maps to that pattern: CallbackIn should be an asset, and the attributes of all its children make up the pattern name.
    Finally, activates the 'Output' section of the pattern that was found (which can be a topic or a rule) and returns the rendered result. When the input was a topic, the 'questions' section will be activated; if it was a rule, the 'conditionals/output' section will be rendered.
    You can optionally specify an argument with value 'true' or 'false' to indicate whether the do-patterns also need to be executed.
    Execution is done in another thread, to make certain that any var changes done by the rendering don't affect any other parts.
*/
   Cluster CreateOutput
   {
      this()
      {
         var iFound = ^Bot.TextRenderers.*;                                                  //get all the callbacks that need to be called before getting a pattern to render.   
         foreach(iFound in iFound)
         {
            PushValue(callbackIn);                                                           //need to manually add the function arguments for the callback.
            PushValue(^(iFound).Args);
            call(^(iFound).Callback);                                                        //a renderer is an asset with children: Callback (value = the cluster to call), and 'Args' (possible extra values for the callback, that were supplied by the user). The callback usually modifies the cluster found in CallbackIn, so pass that also along.   
         }
         var iPath;
         foreach(var iChild in CallbackIn)                                                   //get all the attribute values so we can build the name of the pattern to activate.
            Add(ref(iPath), GetFirstOut(iChild, Statics.Attribute));
         iPath = GetCommonParentWithMeaning(Statics.CompoundWord, iPath);                    //get the compound from all the pattern names, this links to the pattern.
         if(count(iPath) > 0)
         {
            var iPattern = GetFirstIn(iPath, Statics.NameOfMember);
            if(Count(iPattern) > 0)
            {
               var exec = New(neuroncluster);
               var callresult = New(neuroncluster);
               if (Count(callbackargs) > 0)
                  AddChild(exec, callbackargs);
               AddLink(exec, callresult, Render);
               AddInfo(exec, callresult, Render, OutputSin, PatternMatcher.SplitResults, iPattern);
               BlockedSolve(exec);
               callbackout = Union(callbackout, GetChildren(callresult));
               Add(ref(PatternMatcher.ItemsToFreeze), GetChildren(exec));
               Delete(exec, callresult);
               return;                                                                             //it worked, lets get out of here so that there is no error rendered.
            }
         }
         Error("Can't map the path '" iPath "' to a pattern");
      }
   }
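
Roughly the same flow, paraphrased in plain Python and leaving out the threading, freezing and cleanup (all helper names and data shapes below are invented for the illustration):

Code
# Paraphrase of the CreateOutput flow above in ordinary Python (names are illustrative).

def create_output(callback_in, renderers, patterns):
    # 1. Let every registered text renderer modify the input asset first.
    for renderer in renderers:
        renderer["callback"](callback_in, renderer.get("args"))

    # 2. Build the pattern name from the attributes of the input's children.
    parts = [child["attribute"] for child in callback_in["children"]]
    pattern_name = " ".join(parts)           # stands in for GetCommonParentWithMeaning

    # 3. Render the 'output' section of the pattern that was found.
    pattern = patterns.get(pattern_name)
    if pattern is None:
        raise LookupError(f"Can't map the path '{pattern_name}' to a pattern")
    return pattern["render_output"](callback_in)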

I'm a bit in limbo about what to do next. I need to finish these layers, add more pattern definitions (input + output), and I still need to fix the Android version.
BUT, I just found this Kaggle thing, which is soooo tempting. There are a couple of contests in there that I think I can do (within the timeframe of the contest). Some are actually fairly similar to what I did for the QA1WATT bot (which was written in something like a day, maybe two). Others are in areas that I've been preparing for since long ago (visual pattern matching, for instance).
Furthermore, with Google and Nuance getting into the personal assistant market, I'm feeling seriously outclassed and will always have to play a catch-up game with them, starting with getting rid of my startup bugs and ending with a proper STT engine (that would take me years).
So, I think I'm first going to try to enter this competition and focus on that for a little bit (once the binding bugs are done).

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #124 on: September 06, 2012, 08:32:21 am »
So, I recently entered my first Kaggle competition: http://www.kaggle.com/c/detecting-insults-in-social-commentary I'm currently in 66th position, so somewhere in the middle of the pack. The final scores will be tested against a new dataset (the current results are based on a known dataset, so you could manually label things), which means scores can still change seriously for better or worse. So, fingers crossed!!
It's been an interesting competition so far; I again managed to seriously improve things, though I still have lots of work to do. I'm hoping to get a spell-checking algorithm finished before the deadline, which should push my results up some points.
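
(For the general idea only: a minimal edit-distance-1 spell corrector looks something like the Python sketch below; this is a textbook-style illustration, not necessarily what I'll end up using.)

Code
# Minimal edit-distance-1 spell corrector (generic illustration only).
from collections import Counter

VOCAB = Counter("you are a stupid idiot this is not an insult".split())  # toy word counts

def edits1(word):
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    if word in VOCAB:
        return word
    candidates = [w for w in edits1(word) if w in VOCAB]
    return max(candidates, key=VOCAB.get) if candidates else word

print(correct("stupdi"))   # -> "stupid"
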
Anyway, I'm definitely going to enter some more competitions on this site. Since forming teams is allowed in most competitions, I was wondering if anyone feels like forming a team?

I've been doing this work in the new programming language, with the bindings and all. It works like a charm; I'm now able to easily customise the pattern-matching algorithm depending on the requirements.
Also, I'm adding a new 'query' feature (like SQL databases), built on the new programming language. This is all so I can properly work with the 'big datasets' some of the competitions use (data files of several gigs are not unusual).
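The rough idea, sketched in plain Python rather than in the new language (the file name and column names are hypothetical), is to stream and filter the data instead of loading it all at once:

Code
# Generic streaming-query sketch over a big CSV file (nothing NND-specific).
import csv

def query(path, where, select):
    """Stream rows from a CSV file, keep those matching 'where', yield the 'select' columns."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if where(row):
                yield {col: row[col] for col in select}

# e.g. pull the comments labelled as insults out of a (hypothetical) training file:
# for hit in query("train.csv", where=lambda r: r["Insult"] == "1", select=["Comment"]):
#     print(hit["Comment"])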

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6856
  • Mostly Harmless
Re: NND & AICI
« Reply #125 on: January 27, 2013, 05:06:54 pm »
Bragi, I see you have a new team member - congratulations  :)

I was wondering if your bot can run as a web service yet? So, say I have it set up on my server, could I query the bot in a similar way to how Pandorabots works?

Is that something currently supported or planned?

Also, how is the character creator going, and what is in the pipeline, if anything?

Many thanks  :)

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #126 on: January 27, 2013, 05:53:53 pm »
Quote
I see you have a new team member - congratulations
Yep, thanks. He offered to help, which is always welcome.

Quote
I was wondering if your bot can run as a web service yet ?  So say I have it set up on my server, I can query the bot in a similar way to how Pandorabots works ?
Yes, though it's been a while since I tested this feature. There is currently a simple API interface (I believe there is one for regular text and one for XML). It's also pretty easy to add new API interfaces.
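
(Just to sketch what querying a bot over HTTP usually boils down to; the endpoint and parameter name below are placeholders, not the actual NND API.)

Code
# Hypothetical example only: the URL and parameter name are placeholders.
import urllib.parse
import urllib.request

def ask_bot(text, endpoint="http://localhost:8000/chat"):
    url = endpoint + "?" + urllib.parse.urlencode({"input": text})
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# print(ask_bot("Hello there"))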

Quote
Also, how is the character creator going and what is in the pipeline if anything ?
That's currently in the deep freezer. I have too many other things to do right now, like finishing up the integration of the full code features...

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6856
  • Mostly Harmless
Re: NND & AICI
« Reply #127 on: January 27, 2013, 09:35:03 pm »
I've downloaded NND and it's working here.  Sometimes when there is an error(?) the chatbot avatar disappears.  Closing the chatbot view and reopening it brings her back.  Mika is cute, she makes me want to make another avatar again, like I did with Tyler.

Well, first off I thought I would load up some AIML just to start with, then maybe I can work out the more complex stuff in time.  Although AIML seems pretty complex to me at times when I see what SquareBear does.

I have this trivia.aiml which appears to import and it shows stuff in the project tree.  But when I type in a trigger like...

Tell me some trivia.

I get : bot: No output defined!

I'll attach the file in case you have some time to look at it :)

One other thing is that my Ivona Emma voice is not working.  I have it selected but it's still using the old Microsoft Anna voice.

I have a few questions on the web deployment side, but I don't want to bombard you !

Cheers :)

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #128 on: January 28, 2013, 07:33:03 am »
Quote
I'll attach the file in case you have some time to look at it
Great, I'll have a look at it. I'm currently working on this bit anyway: I've changed a lot of stuff to make the system AIML-compatible and to include the 'if', 'while',... stuff. I've already fixed a lot of things but haven't made a new installer yet.

Quote
Some times when there is an error(?) the chatbot avatar disappears.
Yes, this is something I've also noticed sometimes, but I haven't dug into it yet.

Quote
One other thing is that my Ivona Emma voice is not working.  I have it selected but it's still using the old Microsoft Anna voice.
The system allows you to switch between managed and unmanaged MSAPI (Tools/Options). Perhaps it works with the other setting than the one you have now.


Quote
I have a few questions on the web deployment side, but I don't want to bombard you !
No worries, just ask.

I'll make a new release later today, because I've fixed a lot of silly errors over the last few days that had crept in again...

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #129 on: January 28, 2013, 08:53:22 am »
Freddy,
I think I've already fixed the problem but haven't released it yet. When I loaded up the AIML file, it worked fine for me: no errors, and always another bit of trivia.

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6856
  • Mostly Harmless
Re: NND & AICI
« Reply #130 on: January 28, 2013, 02:04:57 pm »
Quote
The system allows you to switch between managed and unmanaged MSAPI (Tools/Options). Perhaps it works with the other setting than the one you have now.

Not that...I tried all three settings and I still get croaky Anna.

Quote
I think I already fixed the problem but haven't released it. When I loaded up the aiml file, it worked fine for me, no errors and always another bit of trivia.

Cool  O0

As for the web version: what does my server need to have? I'm assuming it has to be an MS server?

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6856
  • Mostly Harmless
Re: NND & AICI
« Reply #131 on: January 28, 2013, 02:07:00 pm »
Ohh,.... got it to work with Emma voice, here are the settings :)

*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6856
  • Mostly Harmless
Re: NND & AICI
« Reply #132 on: January 28, 2013, 02:33:13 pm »
I found a change in the naming of animations.  Tyler was not blinking.  I had this in the CSS file :

Code
<AnimationName>New Animation</AnimationName>

Which I changed to :

Code
<AnimationName>blink</AnimationName>

(And a second occurrence too)

Now she blinks again, looks like spaces are not supported now.  I only mention this in case it affects your documentation...

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #133 on: January 28, 2013, 03:08:36 pm »
Quote
Ohh,.... got it to work with Emma voice, here are the settings :)
Glad to hear you got it working.
Yep, that's the thing with those MSAPI voices: not all of them support the same things, and Emma probably didn't support SSML. That's why I added the option to select these various settings.

*

Bragi

  • Trusty Member
  • ********
  • Replicant
  • *
  • 564
    • Neural network design blog
Re: NND & AICI
« Reply #134 on: January 28, 2013, 03:10:18 pm »
Quote
I found a change in the naming of animations.  Tyler was not blinking.  I had this in the CSS file :

Code
<AnimationName>New Animation</AnimationName>

Which I changed to :

Code
<AnimationName>blink</AnimationName>

(And a second occurrence too)

Now she blinks again, looks like spaces are not supported now.  I only mention this in case it affects your documentation...

I don't think I ever tried this before. It's been a while since I did anything serious to this part of the application. I'll see if I can put this in the documentation somewhere.
Thanks.

 

