Recent Posts

Pages: [1] 2 3 ... 10
UltraHal / Re: Ultra Hal 7 - News Update
« Last post by Art on Today at 02:42:34 pm »
Just my personal take,

From what I know, it can use those tried and true MSAgent characters like Merlin, Peedy and Genie, etc., and also those Verbot characters like Sylvie, Melissa and Julie. It still handles the native Haptek characters like Mary, James, Sandy, etc. as well as any other custom Haptek characters, and their actions are smooth and fluid.
There is a new TTS voice from CereProc which can be had from Zabaware that works very nicely with the character's lip sync as well.

Cloud based learning is a big evolutionary leap for Zabaware's Ultra Hal 7 due to the staggering number of conversations gathered in the cloud.

So while the overall "look" is pretty much the same as before, the mind and learning of the new Ultra Hal 7 is greatly improved along with a very good increase in speed.

Previous plug-ins have worked perfectly in recent testing.

Some personal results from testing have been very pleasing and often surprising. I was especially impressed by some of the inferences Hal made while we chatted. It did not "parrot" or echo back what I had just said; it provided the gist of what I was saying but in its own words, different and better than what I had said.
Another example: it knew it was about lunch time and asked if I was going to have lunch. I said that I might have a nice small salad. It asked why, and I told it that I was trying to watch my weight. It replied something to the effect of, "So is it because you're trying to watch your weight, Art? No, because you are trying to lose some weight, is that right, Art? Good!"

Well...what could I say? I was both surprised and impressed. Yeah...sometimes scary smart.

I am looking forward to what Mr. Medeksza has to offer in Hal's final public release. If it's anything like what I've already seen it's going to be really fun and a lot of bang for the buck!
Four from MIT named 2017 Association for Computing Machinery Fellows
11 December 2017, 4:00 pm

Today four MIT faculty were named among the Association for Computing Machinery's 2017 Fellows for making “landmark contributions to computing.”

Honorees included School of Science Dean Michael Sipser and three researchers affiliated with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL): Shafi Goldwasser, Tomás Lozano-Pérez, and Silvio Micali.

The professors were among fewer than 1 percent of Association for Computing Machinery (ACM) members to receive the distinction. Fellows are named for contributions spanning such disciplines as graphics, vision, software design, algorithms, and theory.

“Shafi, Tomás, Silvio, and Michael are very esteemed colleagues and friends, and I’m so happy to see that their contributions have been recognized with ACM’s most prestigious member grade,” said CSAIL Director Daniela Rus, who herself was named an ACM Fellow in 2014. “All of us at MIT are very proud of them for receiving this distinguished honor.”

Goldwasser was selected for “transformative work that laid the complexity-theoretic foundations for the science of cryptography.” This work has helped spur entire subfields of computer science, including zero-knowledge proofs, cryptographic theory, and probabilistically checkable proofs. In 2012 she received ACM’s Turing Award, often referred to as “the Nobel Prize of computing.”

Lozano-Pérez was recognized for “contributions to robotics, and motion planning, geometric algorithms, and their applications.” His current work focuses on integrating task, motion, and decision planning for robotic manipulation. He was a recipient of the 2011 IEEE Robotics Pioneer Award, and is also a 2014 MacVicar Fellow and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the IEEE.

Like Goldwasser, Micali was also honored for his work in cryptography and complexity theory, including his pioneering of new methods for the efficient verification of mathematical proofs. His work has had a major impact on how computer scientists understand concepts like randomness and privacy. Current interests include zero-knowledge proofs, secure protocols, and pseudorandom generation. He has also received the Turing Award, the Gödel Prize in theoretical computer science, and the RSA Prize in cryptography.

Sipser, the Donner Professor of Mathematics, was recognized for “contributions to computational complexity, particularly randomized computation and circuit complexity.” With collaborators at Carnegie Mellon University, Sipser introduced the method of probabilistic restriction for proving super-polynomial lower bounds on circuit complexity, and this result was later improved by others to be an exponential lower bound. He is a fellow of the American Academy of Arts and Sciences and the American Mathematical Society, and a 2016 MacVicar Fellow. He is also the author of the widely used textbook, "Introduction to the Theory of Computation."

ACM will formally recognize the fellows at its annual awards banquet on Saturday, June 23, 2018 in San Francisco, California.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.
XKCD Comic / XKCD Comic : Tinder
« Last post by Tyler on Today at 12:00:15 pm »
11 December 2017, 5:00 am

People keep telling me to use the radio but I really hate making voice calls.


It did not go very well, since the LSTMs always get addicted to one word, repeating it indefinitely ("no no no no", "argh argh argh", "mediocre mediocre mediocre mediocre", etc.), so I decided to try a slightly less generative method.

Let's say we train four sentences.

  • A: Hello, how are you?
  • B: Great, thanks!
  • C: Oh, great then! How about you?
  • D: Great too, thanks!

These are stored in a chain, not of words but of sentences.

To get an answer from this bot,

  • Find the stored sentences whose most important words intersect with the query (in this case, A and C).
  • Beginning with the largest intersection (A), for which the reply is B, get the words of B with the highest TF-IDF scores, then take ALL the words between the first and the last of these important words ("Great,").
  • Repeat for C -> D ("thanks!").
  • Join the fragments in the order they were processed (or in any order you find better): "Great, thanks!"

That should be okay for this case, but I did not test for other cases.
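The steps above can be sketched roughly as follows. This is a toy illustration, not the poster's actual code: the corpus, the TF-IDF weighting, and the 50%-of-max threshold for "important words" are all my own choices.

```python
import math
from collections import Counter

# Toy corpus of (prompt, reply) pairs, like the four-sentence example above.
pairs = [
    ("hello how are you", "great thanks"),
    ("oh great then how about you", "great too thanks"),
]

def tf_idf_weights(docs):
    """Per-document TF-IDF weights over whitespace tokens."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))
    weights = []
    for doc in docs:
        tf = Counter(doc.split())
        weights.append({w: tf[w] * math.log((1 + n) / (1 + df[w])) for w in tf})
    return weights

def answer(query, pairs):
    prompts = [p for p, _ in pairs]
    qw = tf_idf_weights(prompts + [query])[-1]
    # Rank stored prompts by the TF-IDF mass of words shared with the query.
    scored = sorted(
        range(len(pairs)),
        key=lambda i: -sum(qw.get(t, 0.0)
                           for t in set(prompts[i].split()) & set(query.split())),
    )
    reply_weights = tf_idf_weights([r for _, r in pairs])
    fragments = []
    for i in scored:
        if not (set(prompts[i].split()) & set(query.split())):
            continue  # no word intersection with the query: skip this pair
        reply = pairs[i][1].split()
        rw = reply_weights[i]
        # Keep everything between the first and last high-weight reply words.
        top = {t for t, v in rw.items() if v >= max(rw.values()) * 0.5}
        idx = [j for j, t in enumerate(reply) if t in top]
        fragments.append(" ".join(reply[idx[0]:idx[-1] + 1]))
    return " ".join(fragments)

print(answer("hello how about you", pairs))
```

Joining fragments by simple concatenation is the weak point, as the poster notes; a grammar-aware join would handle the "other cases" better.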
AI Programming / Re: Listing States / Processes / EventActs
« Last post by Zero on December 11, 2017, 03:31:01 pm »
Here is a little EBNF. Well, almost EBNF, because I didn't mention white space and didn't define "text where special characters are escaped".


   namespace prefix : "filename"

   item name -> [ meta | data ]

   item name -> relation type { slot1: item1, item2, ... ; slot2: item3; }

   item name -> ( behavior node, ... , ... )


   (proposed ISO/IEC 14977 standard, by R. S. Scowen)

   Usage             Notation
   -----             --------
   definition        =
   concatenation     ,
   termination       ;
   alternation       |
   optional          [ ... ]
   repetition        { ... }
   grouping          ( ... )
   terminal string   " ... "
   terminal string   ' ... '
   comment           (* ... *)
   special sequence  ? ... ?
   exception         -

source file =
{ namespace declaration } , { item definition } ;

item identifier =
text where special characters are escaped ;

namespace declaration =
item identifier , ":" , '"' , filename , '"' ;

item definition =
item identifier , { "/" , item identifier } , arrow , item value ;

arrow =
"->" ;

item value =
meta/data | relation | behavior ;

meta/data =
"[" , { meta/data content } , "|" , { meta/data content } , "]" ;

meta/data content =
text where special characters are escaped | item value ;

relation =
relation type , "{" , { slot description } , "}" ;

relation type
= text where special characters are escaped ;

slot description
= slot name , ":" , slot value , { additional slot value } , ";" ;

slot name
= text where special characters are escaped ;

slot value
= item identifier | item value ;

additional slot value
= "," , slot value ;

behavior
= "(" , atomic EventAct , { additional behavior content } , ")" ;

atomic EventAct
= text where special characters are escaped ;

additional behavior content
= "," , behavior content ;

behavior content
= behavior | item identifier | item value ;

UltraHal / Re: Ultra Hal 7 - News Update
« Last post by Freddy on December 11, 2017, 03:28:04 pm »
Cool. How does it look visually? There were no screenshots. And does it use a newer version of Haptek?
UltraHal / Ultra Hal 7 - News Update
« Last post by Art on December 11, 2017, 02:48:22 pm »
After passing through Alpha testing, it has recently gone into Beta and shouldn't stay there much longer before going to RC (Release Candidate), then Final.
Although I am not privy to a public release date, I would have to think it to be within a relatively short time frame.

Most testing has gone quite well and the new Hal 7 will have a lot of really nice and productive features.
General Chat / Re: What's everyone up to ?
« Last post by ivan.moony on December 11, 2017, 02:09:02 pm »
I'm playing with logic in my theoretical programming language. It seems we can achieve a "Sai Baba" effect when dealing with assumption sets.

Suppose we define semantic tables for the Boolean operators not, or, and, implies, and equals-to. Now we enter an assumption set with every variable initialized to (True + False). If the derived result is false, the assumption set is contradictory. If the negation of the result is false, the assumption set is a tautology, meaning it always yields true; in turn, that means it is a theorem in the very logic.

This gives us the opportunity to start from merely the semantic tables of the Boolean operators. Then we can construct random logic sentences and check whether they are theorems. If they are, we print them out, thereby discovering the very rules of logic. That means we can extract the whole of logic theory just from semantic tables! :o

I had some similar thoughts before, but I didn't connect them with plain solving of Boolean expressions. Definitely worth checking in reality. :stirthepot:

I wonder how to derive all the math formulas too, but maybe that's beyond the capabilities of this method, as math deals with infinite sets of numbers (logic, for example, deals only with `true` and `false`, so we can easily check both cases for each variable). But we should be able to be certain up to some percentage, depending on the number of checked examples.
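The tautology and contradiction tests described above amount to a brute-force sweep over all truth assignments, which might be sketched like this (my own minimal formulation, using Python lambdas in place of semantic tables):

```python
from itertools import product

def is_tautology(expr, variables):
    """True iff expr holds under every assignment of True/False
    to its variables, i.e. its negation is never satisfiable."""
    return all(expr(**dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

def is_contradiction(expr, variables):
    """True iff expr is false under every assignment."""
    return not any(expr(**dict(zip(variables, values)))
                   for values in product([True, False], repeat=len(variables)))

# Modus ponens, ((a -> b) and a) -> b, is a theorem:
modus_ponens = lambda a, b: not ((not a or b) and a) or b
print(is_tautology(modus_ponens, ["a", "b"]))            # True
print(is_contradiction(lambda a: a and not a, ["a"]))    # True
```

Generating random sentences and filtering them through `is_tautology` would give the "theorem extraction" loop; the cost grows as 2^n in the number of variables, which is exactly why the infinite domains of arithmetic resist the same trick.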
AI Programming / Re: Listing States / Processes / EventActs
« Last post by Zero on December 11, 2017, 01:01:41 pm »
Just make sure you don't end up like the OpenCyc guys, with a database that isn't geared to pump a reaction out of it every frame.

Sure! I think the dynamic part of it could work a bit like in an Entity Component System, as used in video games. Let me explain my view.

In ECS, Entities and Components are the easy part; Systems are what's really interesting. They are like tiny engines, ready to work on anything that matches what they like to work on. A System has two parts:
- A requirement part, that defines what an Entity should be like to be eligible
- A procedural part, that does the actual state-change job
As soon as an Entity corresponds to what a System wants, the System works on it.

For example, requirement can be like
- being part of this or that relation
- respecting this or that schema
- containing this or that item
- ...etc.

When an Entity's state changes, you check its new state against the existing Systems. When there's a match, you register the Entity with that System. If it no longer matches, you unregister it.

And every turn, you run every System.

Here, Entities are "items", and Systems are "processes".
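The registration-and-tick loop described above might be sketched like this (class and method names are my own, not from any particular ECS library):

```python
class System:
    def __init__(self, requirement, process):
        self.requirement = requirement   # predicate: is this entity eligible?
        self.process = process           # the actual state-change job
        self.entities = set()

class World:
    def __init__(self):
        self.systems = []
        self.entities = {}               # entity id -> component dict

    def update_entity(self, eid, components):
        self.entities[eid] = components
        # Re-check registration whenever an entity's state changes:
        # register on match, unregister when it no longer matches.
        for system in self.systems:
            if system.requirement(components):
                system.entities.add(eid)
            else:
                system.entities.discard(eid)

    def tick(self):
        # Every turn, run every System on its registered entities.
        for system in self.systems:
            for eid in list(system.entities):
                system.process(self.entities[eid])

world = World()
# A System whose requirement is "has position and velocity components".
mover = System(
    lambda c: "position" in c and "velocity" in c,
    lambda c: c.__setitem__("position", c["position"] + c["velocity"]),
)
world.systems.append(mover)
world.update_entity("ball", {"position": 0, "velocity": 2})
world.tick()
print(world.entities["ball"]["position"])  # 2
```

Re-checking every System on every state change is the naive approach; real ECS implementations index Entities by component signature so the match test is cheap, which is the "pump a reaction out every frame" concern above.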
Robotics News / Reading a neural network’s mind
« Last post by Tyler on December 11, 2017, 12:03:40 pm »
Reading a neural network’s mind
11 December 2017, 4:59 am

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

During training, however, a neural net continually adjusts its internal settings in ways that even its creators can’t interpret. Much recent work in computer science has focused on clever techniques for determining just how neural nets do what they do.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

They find empirical support for some common intuitions about how the networks probably work. For example, the systems seem to concentrate on lower-level tasks, such as sound recognition or part-of-speech recognition, before moving on to higher-level tasks, such as transcription or semantic interpretation.

But the researchers also find a surprising omission in the type of data the translation network considers, and they show that correcting that omission improves the network’s performance. The improvement is modest, but it points toward the possibility that analysis of neural networks could help improve the accuracy of artificial intelligence systems.

“In machine translation, historically, there was sort of a pyramid with different layers,” says Jim Glass, a CSAIL senior research scientist who worked on the project with Yonatan Belinkov, an MIT graduate student in electrical engineering and computer science. “At the lowest level there was the word, the surface forms, and the top of the pyramid was some kind of interlingual representation, and you’d have different layers where you were doing syntax, semantics. This was a very abstract notion, but the idea was the higher up you went in the pyramid, the easier it would be to translate to a new language, and then you’d go down again. So part of what Yonatan is doing is trying to figure out what aspects of this notion are being encoded in the network.”

The work on machine translation was presented recently in two papers at the International Joint Conference on Natural Language Processing. On one, Belinkov is first author, and Glass is senior author, and on the other, Belinkov is a co-author. On both, they’re joined by researchers from the Qatar Computing Research Institute (QCRI), including Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and Stephan Vogel. Belinkov and Glass are sole authors on the paper analyzing speech recognition systems, which Belinkov presented at the Neural Information Processing Systems conference last week.

Leveling down

Neural nets are so named because they roughly approximate the structure of the human brain. Typically, they’re arranged into layers, and each layer consists of many simple processing units — nodes — each of which is connected to several nodes in the layers above and below. Data are fed into the lowest layer, whose nodes process it and pass it to the next layer. The connections between layers have different “weights,” which determine how much the output of any one node figures into the calculation performed by the next.

During training, the weights between nodes are constantly readjusted. After the network is trained, its creators can determine the weights of all the connections, but with thousands or even millions of nodes, and even more connections between them, deducing what algorithm those weights encode is nigh impossible.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. This enables them to determine what task each layer is optimized for.
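In rough outline, such layer-wise probing looks like the sketch below. Everything here is a synthetic stand-in: random projections play the role of a frozen layer's activations, and a logistic-regression probe replaces the neural classifier the researchers actually trained; only the overall recipe (freeze the layer, fit a classifier on its outputs, read off accuracy) matches the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inputs and a probe target (a property we want to test for,
# analogous to a phone or part-of-speech label).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Stand-in for one frozen layer of a trained network: a fixed nonlinear
# projection of the input. In the papers these would be real activations.
layer_output = np.tanh(X @ rng.normal(size=(10, 16)))

# Logistic-regression probe trained on the frozen layer output only.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(layer_output @ w + b)))   # predicted probabilities
    w -= 0.5 * layer_output.T @ (p - y) / len(y)    # gradient step on weights
    b -= 0.5 * np.mean(p - y)                       # gradient step on bias

# Probe accuracy: how linearly decodable the property is from this layer.
accuracy = np.mean(((layer_output @ w + b) > 0) == (y == 1))
print(round(float(accuracy), 2))
```

Repeating this for each layer and comparing probe accuracies is what lets one say, as below, that lower layers encode phones or morphology better than higher ones.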

In the case of the speech recognition network, Belinkov and Glass used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language. The “t” sounds in the words “tea,” “tree,” and “but,” for instance, might be classified as separate phones, but a speech recognition system has to transcribe all of them using the letter “t.” And indeed, Belinkov and Glass found that lower levels of the network were better at recognizing phones than higher levels, where, presumably, the distinction is less important.

Similarly, in an earlier paper, presented last summer at the Annual Meeting of the Association for Computational Linguistics, Glass, Belinkov, and their QCRI colleagues showed that the lower levels of a machine-translation network were particularly good at recognizing parts of speech and morphology — features such as tense, number, and conjugation.

Making meaning

But in the new paper, they show that higher levels of the network are better at something called semantic tagging. As Belinkov explains, a part-of-speech tagger will recognize that “herself” is a pronoun, but the meaning of that pronoun — its semantic sense — is very different in the sentences “she bought the book herself” and “she herself bought the book.” A semantic tagger would assign different tags to those two instances of “herself,” just as a machine translation system might find different translations for them in a given target language.

The best-performing machine-translation networks use so-called encoder-decoder models, and the MIT and QCRI researchers’ network uses that design as well. In such systems, the input, in the source language, passes through several layers of the network — known as the encoder — to produce a vector, a string of numbers that somehow represent the semantic content of the input. That vector passes through several more layers of the network — the decoder — to yield a translation in the target language.

Although the encoder and decoder are trained together, they can be thought of as separate networks. The researchers discovered that, curiously, the lower layers of the encoder are good at distinguishing morphology, but the higher layers of the decoder are not. So Belinkov and the QCRI researchers retrained the network, scoring its performance according to not only accuracy of translation but also analysis of morphology in the target language. In essence, they forced the decoder to get better at distinguishing morphology.

Using this technique, they retrained the network to translate English into German and found that its accuracy increased by 3 percent. That’s not an overwhelming improvement, but it’s an indication that looking under the hood of neural networks could be more than an academic exercise.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage

Use the link at the top of the story to get to the original article.
