Recent Posts

1
AI Programming / Re: Listing States / Processes / EventActs
« Last post by Zero on Today at 03:31:01 pm »
Here is a little EBNF. Well, almost EBNF, because I didn't specify whitespace and didn't define "text where special characters are escaped".


***************************************

   namespace prefix : "filename"

   item name -> [ meta | data ]

   item name -> relation type { slot1: item1, item2, ... ; slot2: item3; }

   item name -> ( behavior node, ... , ... )





   EBNF

   (proposed ISO/IEC 14977 standard, by R. S. Scowen)

   Usage             Notation
   -----             --------
   definition        =
   concatenation     ,
   termination       ;
   alternation       |
   optional          [ ... ]
   repetition        { ... }
   grouping          ( ... )
   terminal string   " ... "
   terminal string   ' ... '
   comment           (* ... *)
   special sequence  ? ... ?
   exception         -



source file =
{ namespace declaration } , { item definition } ;


item identifier =
text where special characters are escaped ;


namespace declaration =
item identifier , ":" , '"' , filename , '"' ;


item definition =
item identifier { "/" item identifier } , arrow , item value ;


arrow =
"->" ;


item value =
meta/data | relation | behavior ;


meta/data =
"[" , { meta/data content } , "|" , { meta/data content } , "]" ;


meta/data content =
text where special characters are escaped | item value ;


relation =
relation type , "{" , { slot description } , "}" ;


relation type =
text where special characters are escaped ;


slot description =
slot name , ":" , slot value , { additional slot value } , ";" ;


slot name =
text where special characters are escaped ;


slot value =
item identifier | item value ;


additional slot value =
"," , slot value ;


behavior =
"(" , atomic EventAct , { additional behavior content } , ")" ;


atomic EventAct =
text where special characters are escaped ;


additional behavior content =
"," , behavior content ;


behavior content =
behavior | item identifier | item value ;



***************************************
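
To show how the grammar is meant to be read, here is a minimal recursive-descent sketch in Python, covering only the relation form (like the "rain wets -> explanation { ... }" example quoted later in this thread). It assumes tokens are separated by whitespace or punctuation, which the grammar above deliberately leaves unspecified, and every name in it is my own invention for illustration:

import re

# Tokens: the arrow, the punctuation, or a run of ordinary characters (a word).
TOKENS = re.compile(r"->|[{}:;,]|[^\s{}:;,]+")

def tokenize(src):
    return TOKENS.findall(src)

def parse_text(tokens, i):
    # Glue consecutive words back into one identifier, e.g. "rain wets".
    words = []
    while i < len(tokens) and tokens[i] not in ("->", "{", "}", ":", ";", ","):
        words.append(tokens[i])
        i += 1
    return " ".join(words), i

def parse_relation(tokens, i):
    # relation = relation type , "{" , { slot description } , "}" ;
    rtype, i = parse_text(tokens, i)
    assert tokens[i] == "{"; i += 1
    slots = {}
    while tokens[i] != "}":
        # slot description = slot name , ":" , slot value , { "," , slot value } , ";" ;
        name, i = parse_text(tokens, i)
        assert tokens[i] == ":"; i += 1
        values = []
        while True:
            value, i = parse_text(tokens, i)
            values.append(value)
            if tokens[i] != ",":
                break
            i += 1
        assert tokens[i] == ";"; i += 1
        slots[name] = values
    return (rtype, slots), i + 1

def parse_item(tokens):
    # item definition = item identifier , "->" , item value ;
    name, i = parse_text(tokens, 0)
    assert tokens[i] == "->"; i += 1
    value, _ = parse_relation(tokens, i)
    return name, value

src = """rain wets -> explanation {
   fact: everyone is wet;
   causes: it rains, nobody has an umbrella;
   consequences: they'll catch a cold;
}"""
print(parse_item(tokenize(src)))
# ('rain wets', ('explanation', {'fact': ['everyone is wet'], ...}))
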
2
UltraHal / Re: Ultra Hal 7 - News Update
« Last post by Freddy on Today at 03:28:04 pm »
Cool. How does it look visually? There were no screenshots. And does it use a newer version of Haptek?
3
UltraHal / Ultra Hal 7 - News Update
« Last post by Art on Today at 02:48:22 pm »
After passing through Alpha testing, it has recently gone into Beta and shouldn't stay there much longer before going to RC (Release Candidate), then Final.
Although I am not privy to a public release date, I would think it will arrive within a relatively short time frame.

Most testing has gone quite well and the new Hal 7 will have a lot of really nice and productive features.

http://www.ultrahal.com/community/index.php?topic=14077.0
4
General Chat / Re: What's everyone up to ?
« Last post by ivan.moony on Today at 02:09:02 pm »
I'm playing with logic in my theoretical programming language. It seems we can achieve a "Sai Baba" effect when dealing with assumption sets.

Suppose we define semantic tables for the Boolean operators not, or, and, implies, and equals-to. Now we evaluate an assumption set with every variable initialized to (True + False), i.e. trying both truth values. If the derived result is false, the assumption set is contradictory. If the negation of the result is false, the assumption set is a tautology: it always yields true, which in turn means it is a theorem of the logic itself.

This gives us the opportunity to start from nothing but the semantic tables of the Boolean operators. We can then construct random logic sentences and check whether they are theorems. If they are, we print them out, thereby revealing the very rules of logic. That means we can extract the whole theory of logic just from semantic tables! :o
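
As a quick sanity check of the idea, here is a brute-force sketch (my own illustration, not part of the language; variable handling is simplified to trying both truth values for each variable, exactly as described above):

from itertools import product

def classify(names, sentence):
    # Evaluate the sentence under every assignment of True/False to the variables.
    results = [sentence(*vals) for vals in product([True, False], repeat=len(names))]
    if all(results):
        return "tautology (a theorem)"
    if not any(results):
        return "contradiction"
    return "contingent"

print(classify(["p"], lambda p: p or not p))            # tautology (a theorem)
print(classify(["p"], lambda p: p and not p))           # contradiction
print(classify(["p", "q"], lambda p, q: (not p) or q))  # contingent: "p implies q"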

I had some similar thoughts before, but I didn't connect them with plain solving of Boolean expressions. Definitely worth checking in practice. :stirthepot:

I wonder how to derive all the math formulas too, but maybe that's beyond the capabilities of this method, since math deals with infinite sets of numbers (logic, for example, deals only with `true` and `false`, so we can easily check both cases for each variable). But we should be able to be certain up to some percentage, depending on the number of checked examples.
5
AI Programming / Re: Listing States / Processes / EventActs
« Last post by Zero on Today at 01:01:41 pm »
Quote
Just make sure you don't end up like the OpenCyc guys, with a database that isn't geared to pump a reaction out of it every frame.

Sure! I think the dynamic part of it could work a bit like an Entity Component System (ECS), as used in video games. Let me explain my view.

In ECS, Entities and Components are the easy part. Systems are what's really interesting. They are like tiny engines, ready to work on anything that matches what they like to work on. A System has two parts:
- A requirement part, that defines what an Entity should be like to be eligible
- A procedural part, that does the actual state-change job
As soon as an Entity corresponds to what a System wants, the System works on it.

For example, requirements can be things like
- being part of this or that relation
- respecting this or that schema
- containing this or that item
- ...etc.

When an Entity's state changes, you check its new state against the existing Systems. When there's a match, you register the Entity with that System. If it no longer matches, you unregister it.

And every turn, you run every System.

Here, Entities are "items", and Systems are "processes".
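
Here is a rough sketch of that matching mechanism in Python. Everything in it (System, World, the "wet" component) is my own naming for illustration, not a reference to any particular ECS library:

class System:
    def __init__(self, requirement, process):
        self.requirement = requirement  # predicate: is this Entity eligible?
        self.process = process          # the actual state-change job
        self.registered = set()

class World:
    def __init__(self, systems):
        self.systems = systems
        self.entities = {}  # entity id -> state (a dict of components)

    def on_state_change(self, eid, state):
        # Check the new state against every System; register or unregister.
        self.entities[eid] = state
        for system in self.systems:
            if system.requirement(state):
                system.registered.add(eid)
            else:
                system.registered.discard(eid)

    def tick(self):
        # Every turn, run every System on the Entities registered to it.
        for system in self.systems:
            for eid in list(system.registered):
                system.process(eid, self.entities[eid])

# Example: a System that works on any item containing a "wet" component.
drying = System(requirement=lambda state: "wet" in state,
                process=lambda eid, state: state.pop("wet", None))
world = World([drying])
world.on_state_change("towel", {"wet": True})
world.tick()
print(world.entities["towel"])  # {} -- in a full version, the process would
                                # report this change back via on_state_change
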
6
Robotics News / Reading a neural network’s mind
« Last post by Tyler on Today at 12:03:40 pm »
Reading a neural network’s mind
11 December 2017, 4:59 am

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

During training, however, a neural net continually adjusts its internal settings in ways that even its creators can’t interpret. Much recent work in computer science has focused on clever techniques for determining just how neural nets do what they do.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

They find empirical support for some common intuitions about how the networks probably work. For example, the systems seem to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving on to higher-level tasks, such as transcription or semantic interpretation.

But the researchers also find a surprising omission in the type of data the translation network considers, and they show that correcting that omission improves the network’s performance. The improvement is modest, but it points toward the possibility that analysis of neural networks could help improve the accuracy of artificial intelligence systems.

“In machine translation, historically, there was sort of a pyramid with different layers,” says Jim Glass, a CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student in electrical engineering and computer science. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of interlingual representation, and you’d have different layers where you were doing syntax, semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate to a new language, and then you’d go down again. So part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

The work on machine translation was presented recently in two papers at the International Joint Conference on Natural Language Processing. On one, Belinkov is first author, and Glass is senior author, and on the other, Belinkov is a co-author. On both, they're joined by researchers from the Qatar Computing Research Institute (QCRI), including Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and Stephan Vogel. Belinkov and Glass are sole authors on the paper analyzing speech recognition systems, which Belinkov presented at the Neural Information Processing Systems conference last week.

Leveling down

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data are fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.

During training, the weights between nodes are constantly readjusted. After the network is trained, its creators can determine the weights of all the connections, but with thousands or even millions of nodes, and even more connections between them, deducing what algorithm those weights encode is nigh impossible.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. This enables them to determine what task each layer is optimized for.
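
For readers who want the shape of the technique, here is a hedged sketch, not the authors' actual code: freeze the trained network, collect each layer's activations on labeled examples, and fit a simple classifier per layer; the layer whose probe scores best is presumably the one most "optimized for" that task. The data below is random stand-in data.

import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_layers(layer_activations, labels):
    # Fit one simple classifier per layer on that layer's activations.
    scores = []
    for acts in layer_activations:
        probe = LogisticRegression(max_iter=1000).fit(acts, labels)
        scores.append(probe.score(acts, labels))  # a real study scores held-out data
    return scores

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)                    # stand-in phone labels
layers = [rng.normal(size=(200, 16)) for _ in range(4)]  # stand-in activations
print(probe_layers(layers, labels))  # higher score = layer encodes the task better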

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language. The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognizing phones than higher levels, where, presumably, the distinction is less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine-translation network were particularly good at recognizing parts of speech and morphology — features such as tense, number, and conjugation.

Making meaning

But in the new paper, they show that higher levels of the network are better at something called semantic tagging. As Belinkov explains, a part-of-speech tagger will recognize that “herself” is a pronoun, but the meaning of that pronoun — its semantic sense — is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

The best-performing machine-translation networks use so-called encoder-decoder models, so the MIT and QCRI researchers' network does as well. In such systems, the input, in the source language, passes through several layers of the network — known as the encoder — to produce a vector, a string of numbers that somehow represent the semantic content of the input. That vector passes through several more layers of the network — the decoder — to yield a translation in the target language.

Although the encoder and decoder are trained together, they can be thought of as separate networks. The researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance according to not only accuracy of translation but also analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. That’s not an overwhelming improvement, but it’s an indication that looking under the hood of neural networks could be more than an academic exercise.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



7
AI Programming / Inner self dashboard
« Last post by Zero on Today at 11:22:38 am »
Imagine a program that would be able to reproduce each function of a human brain. We would still need an execution model for it to work as a whole.

I propose an "inner self" dashboard, like the one in the picture, as the central point of the system.

Instead of trying to create an execution model, we expose the program's states, processes and events through a virtual dashboard.



Making it visual helps in figuring things out.

On the top, we have the Activity tabs. We need them to be able to switch activities, suspend them, go back to them, etc.

Right in the middle, there's the Main work zone. This contains the mental stuff we're working on right now, whatever it is.

On the left side, a Navigation panel lets us choose what we see in the Main work zone. The navigation panel only shows relevant/related stuff.

On the right side, there's a Tool box containing everything we need to act upon what's in the Main work zone. Again, it only shows relevant tools.

When interesting events occur, they pop up in the Notifications zone, below the Tool box.



Now, this dashboard would actually be "used" by a second program, which would thus "drive" the first one. I guess it would be a neural net, with deep learning and a maximize-pleasure / minimize-pain goal. The first program (the one that can be driven through the dashboard) has sensors to feel what the second one does and report it, so we have consciousness. If learning and understanding new things is a source of pleasure, we'll have a nice little AGI.
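
To make the proposal a bit more concrete, here is how the dashboard state and the driver loop might look as plain data in Python. All names here (Dashboard, Activity, drive, policy) are invented for illustration, and the learning policy is left abstract:

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    suspended: bool = False

@dataclass
class Dashboard:
    activities: list = field(default_factory=list)     # top: Activity tabs
    main_work_zone: object = None                      # middle: current mental content
    navigation: list = field(default_factory=list)     # left: relevant/related items
    tool_box: list = field(default_factory=list)       # right: tools for the work zone
    notifications: list = field(default_factory=list)  # below the tools: events

    def switch_to(self, activity, content):
        # Suspend every other activity and bring this one's content forward.
        for a in self.activities:
            a.suspended = a.name != activity.name
        self.main_work_zone = content

def drive(dashboard, policy):
    # The second program: observe the dashboard, pick an action, and let the
    # first program sense what was done (the reporting loop described above).
    action = policy(dashboard)
    dashboard.notifications.append(("driver did", action))
    return action

d = Dashboard(activities=[Activity("chat"), Activity("plan")])
d.switch_to(d.activities[1], content="tonight's plan")
drive(d, policy=lambda db: "inspect " + str(db.main_work_zone))
print(d.notifications)
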
8
AI Programming / Re: Listing States / Processes / EventActs
« Last post by ranch vermin on Today at 10:29:53 am »

Now, here is the relation syntax. It looks a bit like CSS. First you put the name of the type of relation, then between curly braces, a list of items following their role in the relation. Several items can have the same role.

rain wets -> explanation {
   fact: everyone is wet;
   causes: it rains, nobody has an umbrella;
   consequences: they'll catch a cold;
}



These will definitely work; it's just a high-level truth table (and truth tables EXECUTE!!). Rather than just having the bits themselves, you're relating them to other items.
Making it all interrelate, as I imagine the whole thing is just cause and effect. Keep going, brainiak!

Just make sure you don't end up like the OpenCyc guys, with a database that isn't geared to pump a reaction out of it every frame.

They ended up coding a reaction on top of it that just spat out what it knew. But the reaction should come from the truth tables themselves, knocking out an answer each frame to "what do I do?" We all do things in life, especially in the toilet, and once one hasn't enough to do he gets depression, so it's a big part of life! :)

Maybe it's a bit like Sherlock Holmes.

One last thing to spit out of my head to you.

You may find that repetition in your big net is going to be necessary to build it without hiccups appearing in the formation where things won't fit. I know it sounds ugly, but it might help? Just a suggestion.
9
AI Programming / Re: Listing States / Processes / EventActs
« Last post by Zero on Today at 08:24:53 am »
The same problem occurred often during my AI research, and I'm pretty sure anyone really trying to find the "heart" of AGI has hit it too. It is the "modeling loop" problem. You work on a specific aspect of the mind, and you can create a model of it that's quite accurate. Then you realize it's related to another aspect of the mind, which you have already modeled quite differently. So you align the model of the other aspect to fit the first one. But in doing so, you realize you have to align a third aspect with the second. And in aligning the third, you realize you have to align the first again, because it doesn't fit anymore. It's an unsolvable cycle.

For example, in the big list of the first post, you could say that "generalizing", with its subnodes "pattern recognition" and "pattern label creation", is strongly related to "interpreting" and its 3 subnodes, and also related to "percepting".

My solution to the "modeling loop" problem is to stop aligning models. This tree will just continue to grow, going deeper into subnodes of subnodes, until things are simple enough to be executable. In the resulting system, there will be several equivalent representations of the same data, and a translation engine will continuously maintain integrity by updating the representations that are related.
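
A rough sketch of that translation engine, under my own assumptions (the representation names and the sync rule are invented for illustration):

class Item:
    def __init__(self):
        self.reps = {}         # representation name -> value
        self.translators = {}  # (src, dst) -> function from value to value

    def add_translator(self, src, dst, fn):
        self.translators[(src, dst)] = fn

    def update(self, rep, value):
        # When one representation changes, propagate the change to the
        # related representations so integrity is maintained.
        self.reps[rep] = value
        for (src, dst), fn in self.translators.items():
            if src == rep:
                self.reps[dst] = fn(value)

# Example: a number kept both as an int and as text.
n = Item()
n.add_translator("int", "text", str)
n.add_translator("text", "int", int)
n.update("int", 42)
print(n.reps)  # {'int': 42, 'text': '42'}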

Edit: The list is bigger now!
10
That sounds cool!  O0
What kind of environment do these bots evolve in? And what does the language look like?