AGI methods

  • 56 Replies
  • 14814 Views
*

Freddy

  • Administrator
  • **********************
  • Colossus
  • *
  • 6178
  • Mostly Harmless
Re: AGI methods
« Reply #45 on: April 08, 2015, 10:05:04 pm »
It's hard to come up with new names. I'm working on a new scripting language that was inspired by my work making my AIML interpreter. It's very simple. But yep, names: I went through a whole list of mythical creatures and characters before I found something.

CogLog sounds catchy and easy to remember.

I don't know if this is useful to you. I'm going to try to use it to check that my script is valid; I'm using it to help create my language. It can be made to return results too, but in that respect it seems to have its limitations. The main work of the interpreter still has to be done separately. But I think it will be useful.

http://www.codeproject.com/Articles/28294/a-Tiny-Parser-Generator-v
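A validity check of the sort described above can also be sketched by hand. This is a hypothetical JavaScript example (not TinyPG output, and not Freddy's actual grammar): it only tests whether a line of a toy AIML-like script is well-formed.

```javascript
// Hypothetical toy rule format:  PATTERN => template text
// where PATTERN is one or more uppercase words or * wildcards.
function isValidRule(line) {
  const rule = /^([A-Z*]+(?: [A-Z*]+)*) => (.+)$/;
  return rule.test(line.trim());
}
```

A real interpreter would of course build a parse tree rather than just accept or reject, but a check like this is enough to flag malformed script lines early.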
« Last Edit: April 08, 2015, 10:31:59 pm by Freddy »

*

ivan.moony

  • Trusty Member
  • *********
  • Terminator
  • *
  • 757
  • look, a star is falling
Re: AGI methods
« Reply #46 on: April 08, 2015, 10:21:45 pm »
@Freddy: Thanks for the link, I like it. An interesting compiler generator for .NET, and it has a lot of users too :)
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

ivan.moony

Re: AGI methods
« Reply #47 on: May 30, 2015, 02:12:26 pm »
I ported Moony Parser from the Earley algorithm to a recursive descent algorithm. I gained some speedup, but I still have to reimplement left recursion support to follow the natural priority of parsing rules. More intelligent tag support is done.

I'm looking forward to a natural language processing toolkit (among other uses) one day. I'm thinking about renaming the project to "Metafigure".
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

ivan.moony

Re: AGI methods
« Reply #48 on: June 10, 2015, 06:56:01 pm »
The intuitive, natural ordering of left recursion is now done in the parser. The algorithm used to take the least deep left recursion it could, but this is now reimplemented to follow plain logical behaviour (what is noted first gets parsed first). It wasn't such a trivial task, but I've made it. It wasn't so important either, but I wanted to do it.

I'm still waiting for JavaScript 6 (ES6) multiline template strings.

What's next? Fixing bugs, and allowing parsing rules and parsed texts to be mixed in the same file or string. After that, things might get interesting with deduction...
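The "what is noted first gets parsed first" behaviour can be illustrated with the classic left-recursion rewrite. This JavaScript sketch is illustrative only, not Moony Parser's code: the rule `expr = expr "-" num | num` would send a naive recursive-descent parser into infinite recursion, so it is rewritten as `expr = num ("-" num)*` and folded from the left, which preserves the natural left-associative meaning.

```javascript
// Toy grammar:  expr = expr "-" num | num
// Rewritten as: expr = num ("-" num)*  and folded from the left,
// so "8-3-2" parses as (8-3)-2, not 8-(3-2).
function parseExpr(src) {
  let pos = 0;
  function num() {
    const m = /^\d+/.exec(src.slice(pos));
    if (!m) throw new Error("number expected at position " + pos);
    pos += m[0].length;
    return Number(m[0]);
  }
  let value = num();
  while (src[pos] === "-") {
    pos++;            // consume "-"
    value -= num();   // fold left: (…((n1 - n2) - n3)…)
  }
  return value;
}
```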
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

ivan.moony

Re: AGI methods
« Reply #49 on: July 13, 2015, 05:53:28 pm »
How about a natural language search engine?

A web crawler would visit sites, extract data and store it in a central server database. The server would provide a search engine that answers natural language questions like: gimme sites where I can buy an airplane ticket to the North Pole for less than $500. If an answer is not found, the server could offer to e-mail the user when one becomes available.
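The front end of such an engine has to turn the natural-language question into something the index can execute. A hypothetical JavaScript sketch (the pattern matching here is illustrative only; a real system would use a proper NLP parser, not regexes):

```javascript
// Hypothetical: extract structured constraints from a shopping question
// so the search index can filter on them.
function parseQuestion(q) {
  const dest = /ticket to ([A-Za-z ]+?) for/.exec(q);
  const price = /less than \$(\d+)/.exec(q);
  return {
    product: /airplane ticket/.test(q) ? "airplane ticket" : null,
    destination: dest ? dest[1] : null,
    maxPrice: price ? Number(price[1]) : null,
  };
}
```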
Wherever you see a nice spot, plant another knowledge tree :favicon:


*

ivan.moony

Re: AGI methods
« Reply #51 on: July 13, 2015, 06:58:59 pm »
Not bad for today (in fact very good), but people are talking about Web 3.0 (the semantic web), which is aware of specific tabular or otherwise structured data. There are two possible paths:

1. to introduce new standards for storing web pages and data that would be accepted by the majority of independent www sites
2. to analyze existing web sites through natural language processing.

I'm impressed by the possibility of having a big knowledge base (or an index) of everything that is available on the web. That way new knowledge could be inferred or extracted by analyzing the existing knowledge. A good start could be importing the YAGO-SUMO or DBpedia knowledge base and extending it later with custom web-crawler data. This web crawling could even be done from the client side (when searching for something, crawl one or several other uncrawled sites). The resulting indexes (structured knowledge) would then be submitted to the central server, gradually growing the global index. This client-side index building would be real distributed computing, and there is a lot of idle time on users' machines.

Be patient, I'm building a universal knowledge base file format that could serve as an index for any knowledge item, including data and code.
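One common shape for such a universal format is subject-predicate-object triples, which is how RDF-style knowledge bases like DBpedia and YAGO store everything. A minimal, hypothetical JavaScript sketch (not Ivan's actual file format):

```javascript
// Minimal triple store: every knowledge item is a
// (subject, predicate, object) record, queried with null as a wildcard.
class TripleStore {
  constructor() { this.triples = []; }
  add(s, p, o) { this.triples.push({ s, p, o }); }
  query(s, p, o) {
    return this.triples.filter(t =>
      (s === null || t.s === s) &&
      (p === null || t.p === p) &&
      (o === null || t.o === o));
  }
}
```

Because code and data references are just more objects, the same three-column shape can index both, which is what makes the format "universal".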
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

infurl

  • Trusty Member
  • *******
  • Starship Trooper
  • *
  • 276
  • Humans will disappoint you.
    • Home Page
Re: AGI methods
« Reply #52 on: July 20, 2015, 12:14:46 am »
Have you considered using the Tomita (GLR) parsing algorithm? It was invented in 1985 (i.e. quite recently in computer science terms) and is many orders of magnitude faster than the Earley parsing algorithm for "interesting" input. It can handle unrestricted context-free grammars and ambiguity very easily, making it suitable for parsing natural language efficiently. There are numerous references and examples on the web.

By the way, if you are finding that recursive descent parsing is faster than the Earley algorithm, you are probably doing something wrong. However, you can get a substantial speed boost by using function memoization, which will turn a recursive descent parser into a pseudo chart parser like Earley or Tomita, and it's fairly simple to do.
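Memoization of this kind caches each rule's result by (input, position), so no rule is ever re-evaluated at the same spot. This is the idea behind packrat parsing; the JavaScript below is an illustrative sketch, not anyone's production parser.

```javascript
// Wrap a parsing rule so results are cached per (input, position):
// each rule runs at most once at each position, which is what gives
// packrat parsers their linear-time behaviour on PEGs.
function memoize(rule) {
  const cache = new Map();
  return function (src, pos) {
    const key = src + "\u0000" + pos; // one entry per (input, position)
    if (!cache.has(key)) cache.set(key, rule(src, pos));
    return cache.get(key);
  };
}

// Example rule: match one or more digits, returning [value, nextPos],
// or null on failure.
const number = memoize(function (src, pos) {
  const m = /^\d+/.exec(src.slice(pos));
  return m ? [Number(m[0]), pos + m[0].length] : null;
});
```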

*

ivan.moony

Re: AGI methods
« Reply #53 on: July 20, 2015, 10:29:25 am »
Infurl, thank you for your insight. Tomita is too complicated for me to understand, so I'm chasing some alternatives. If I didn't have other options I'd probably use Tomita, but I've found something simpler.

I'm experimenting with a recursive descent algorithm with memoization. So far it turns out to be 10x faster than the Earley version. I also added support for direct and indirect left recursion (with an adjusted memoization technique) and it works well. I can easily switch between CFG mode and PEG mode (PEG is 4x faster than CFG); I'll probably add a keyword so the user can choose which mode to use for which fragments of the grammar. I'm also adding predicates to the parsing algorithm, so you can parse e.g. a word (just letters, ended by a space) and then immediately check a database to see whether the word is there (modified or not by some grammar rules; I need this for natural language parsing). I also have an option for handling ambiguity in CFG mode, but I'm still not sure what to do with it.
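The predicate idea, parse a word and then immediately check it against a database, might look roughly like this (hypothetical names; a `Set` stands in for the database):

```javascript
// Hypothetical semantic predicate: the grammar-level match (letters only)
// succeeds, but the overall match fails unless the word also passes a
// lookup in an external store.
const lexicon = new Set(["star", "falling", "tree"]);

function matchKnownWord(src, pos) {
  const m = /^[a-z]+/i.exec(src.slice(pos));
  if (!m) return null;                                 // grammar match failed
  if (!lexicon.has(m[0].toLowerCase())) return null;   // predicate failed
  return { word: m[0], next: pos + m[0].length };
}
```

The useful property is that the predicate runs during parsing, so unknown words prune bad parse branches immediately instead of after a full parse.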
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

ivan.moony

Re: AGI methods
« Reply #54 on: July 22, 2015, 01:04:55 pm »
I just thought of something. A starting knowledge base would be a few megabytes in size, so it's a fast solution while a smart bot is booting. When a user wants to talk about something that is not in the knowledge base, the smart bot could search Wikipedia for the relevant topic, parse it by NLP and upload the new knowledge in order to talk to the user.
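That fallback flow could be sketched like this; `fetchTopic` is a hypothetical injected function standing in for the Wikipedia search + NLP step, so the caching logic can be shown (and tested) without a network:

```javascript
// Answer from the local KB when possible; otherwise fetch, cache the
// result in the KB, and answer from that.
async function answer(topic, kb, fetchTopic) {
  if (kb.has(topic)) return kb.get(topic);   // fast path: booted KB
  const knowledge = await fetchTopic(topic); // e.g. fetch + NLP-parse Wikipedia
  kb.set(topic, knowledge);                  // upload into the KB for next time
  return knowledge;
}
```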
Wherever you see a nice spot, plant another knowledge tree :favicon:

*

infurl

Re: AGI methods
« Reply #55 on: July 22, 2015, 10:00:45 pm »
Quote from: ivan.moony
When a user wants to talk about something that is not in knowledge base, a smart bot could search Wikipedia for relevant theme, parse it by NLP and upload the new knowledge to talk to user.

Yesterday I came across this library which is designed to assist with tasks like that.

http://blog.dlib.net/2014/07/mitie-v02-released-now-includes-python.html

*

ivan.moony

Re: AGI methods
« Reply #56 on: August 03, 2015, 01:26:07 pm »
Drifting the plan a little bit further:

So a knowledge base (KB) of the Web and Wikipedia (as a very good information source) would be maintained on a central server. Users could browse the KB as tree-like, self-expanding texts, where the sources of the information would be offered for navigation. The KB would be presented in a readable way, summing up all the knowledge previously collected, and when a user wants to visit a source site, she/he can navigate to it. Like a Google search, but more intelligent.

So, how to obtain the KB? When users are browsing the web, the KB is updated with new information from the sites they visit. This could be done with an iframe inside the HTML where sites would be shown. Around the iframe would be navigation buttons, bookmark tabs and KB extracts for navigating the web. The primary source would be Wikipedia with its outbound links. If a user can't find what she/he is searching for, she/he can open any standard search engine inside the iframe, and again the visited sites get NLP-parsed and indexed.

And now comes the interesting part: users would choose where to invest their processor time through web workers (the multithreading mechanism of HTML scripting). For example, a user says: I want to dedicate one thread of my processor to the discovery of new drugs for curing cancer. The machine then induces, abduces and deduces related unknown knowledge from facts already stored in the KB. When it finds an interesting discovery, it alerts the user and the new knowledge extension becomes part of the shared KB. I know where I want to invest my processor time: in finding artificial food (I guess it's a kind of chemistry) :)
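The deduction step can be illustrated with a toy forward-chaining loop: rules derive new facts from known ones until nothing new appears. In a browser this loop would run inside a Web Worker so it never blocks the page; the rule format here is hypothetical.

```javascript
// Toy forward chaining: repeatedly apply rules of the form
// { if: [...facts], then: fact } until a fixed point is reached.
function deduce(facts, rules) {
  const known = new Set(facts);
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      if (rule.if.every(f => known.has(f)) && !known.has(rule.then)) {
        known.add(rule.then);
        changed = true;
      }
    }
  }
  return known;
}
```

Real induction and abduction are much harder than this deductive closure, but the shape is the same: idle CPU time turning stored facts into candidate new knowledge.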

Yeah, that could be a life: while you are playing around the web, your computer thinks and improves the world of science. If this works out, I wouldn't need a behaviour algorithm for human-like AI, because all I need is a subset of AI for improving the world of science. Anyway, I'm kind of afraid of autonomous machines; it would be a great responsibility to build one.

A lot of work to do in the future and I like the current plan.
Wherever you see a nice spot, plant another knowledge tree :favicon:

 

