
Zero

Mixing Tcl and Io in General Project Discussion



As you know, I like programming languages. Here I do a little thought experiment: mixing John Ousterhout's language Tcl and Steve Dekorte's language Io.

An interesting feature of Tcl is that evaluation isn't automatic: code is executed only if you put it inside square brackets. The square brackets and their content are then replaced by the result of the code inside them (Tcl calls this command substitution).

set x 24
set y 18
set z "$x + $y is [expr $x + $y]"
puts $z
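(Run through tclsh, this prints: 24 + 18 is 42 -- the bracketed expr call is substituted into the string before set ever sees it.)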


Io is modern and minimalist. An object is a list of slots, where each slot may contain data or code. You call an object's method by sending it a message that names the corresponding slot.

Account := Object clone
Account balance := 0
Account deposit := method(amount,
    balance = balance + amount
)

account := Account clone
account deposit(10.00)
account balance println
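(The clone starts with its balance slot at 0; deposit updates it, and the last line prints the new balance, 10.)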




Mixing them would give something like this:

set Dog [Object clone];

Dog set greet [
   fun {
      print "Hello, [arg]. I'm [name].";
   };
];

set my-dog [Dog clone];

my-dog
   set name "Maxy the dog",
   greet John;
   
-> Hello, John. I'm Maxy the dog.


"set" means "create a slot in yourself, with this name and this value".

In the first line, we're talking to the Environment, saying: create a slot named "Dog". The value of this slot will be the result of the code in square brackets [Object clone].

With "Object clone", we're talking to the object "Object", asking it to return a clone of itself.

The general form of a command is
command ::= <subject> <verb> <arguments> { "," <verb> <arguments> } ";"

Example:
obj part3 subPartB doThis like that, doThat like this;
where
"obj part3 subPartB" is the subject
"doThis" is a function with arguments "like" and "that"
"doThat" is a function with arguments "like" and "this"

I choose not to use named function arguments. Arguments are accessible through "arg", which contains the list of every argument the function was called with.
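As a sketch of how such a command might be parsed (my assumptions, not spelled out above: the subject chain ends at the first word that names a method, and clauses split on "," and ";"):

# Hypothetical Python sketch of the grammar:
# command ::= <subject> <verb> <arguments> { "," <verb> <arguments> } ";"
def parse_command(tokens, is_method):
    # Everything before the first method name is the subject chain,
    # e.g. "obj part3 subPartB".
    i = 0
    while i < len(tokens) and not is_method(tokens[i]):
        i += 1
    subject, rest = tokens[:i], tokens[i:]
    # Each comma-separated clause is one verb plus its arguments.
    clauses, current = [], []
    for tok in rest:
        if tok in (",", ";"):
            if current:
                clauses.append((current[0], current[1:]))
            current = []
        else:
            current.append(tok)
    return subject, clauses

tokens = "obj part3 subPartB doThis like that , doThat like this ;".split()
print(parse_command(tokens, is_method=lambda t: t.startswith("do")))
# -> (['obj', 'part3', 'subPartB'],
#     [('doThis', ['like', 'that']), ('doThat', ['like', 'this'])])

In a real interpreter, is_method would look each word up in the subject's slots rather than using the toy startswith check.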

This would blend perfectly with the pub/sub semantics of Birdy.

4 Comments | Started Today at 01:51:47 pm

Zero

Programming language designed specifically for AGI in General AI Discussion

It's Xmas soon, how about making a wishlist!

Can you imagine a programming language designed specifically for Artificial General Intelligence... What features should it have? Would it be compiled or interpreted, or both at the same time? Dynamic or static typing? Strong or weak? What paradigm(s)? What syntax? What semantics? What ecosystem? What code-sharing system?

Don't be shy, you can ask for anything, like an entire deep NN running on a GPU defined as easily as a one-line JavaScript function, plus access to Python's NLP tools, all in the same place!

What's a conscious-mind developer's dream language?

EDIT: and don't tell me "C", unreality  ;)

30 Comments | Started November 16, 2017, 02:56:14 pm

ivan.moony

Theory of Everything in General AI Discussion

Just wondering, are physicists taking the wrong approach to understanding the Universe? If you think about it, they look for math formulas involving all kinds of constants, test them in experiments, and report their results on understanding matter and its dynamics.

Now, wouldn't a search for a general knowledge-base definition and inference method be potentially more fruitful? What if we could define in general what a problem is, in the sense of understanding it and solving it, applicable to some region of space-time? With a single algorithm mimicking what a human brain does, we could catch all the flies in one move.

Maybe a physics formula isn't quite what we are looking for to understand the Universe. Maybe that formula should take the form of a knowledge system that describes not only how things behave, but also what *could* be known from some range of experimental data. And that formula would be, guess what, an artificial intelligence inference algorithm!

9 Comments | Started Today at 03:51:17 pm

Freddy

When VR meets reality – how live concerts could be enhanced... in Virtual Reality

I was looking at the cost of tickets to see a band next year at the UK O2 arena and was daunted by the prices. I've seen a lot of bands, but at smaller venues; I've never been to a big venue like that. I'm not sure I like sitting a mile from the band or, at my age, jumping around in the mosh pit. Anyway, this got me thinking about VR concerts, so I Googled and found a few articles; this one interested me most.

I've 'seen' bands like the Corrs via VR, but from video recorded by fans, and it was close to being there, so maybe this will take off.

When VR meets reality – how live concerts could be enhanced by 21st-century opera glasses | The Independent

Quote
What do Coldplay, Stevie Wonder and the Imagine Dragons all have in common? The answer, and well done if you got this right, is that they have all had recent live shows broadcast in virtual reality.

Music fans with VR headsets like Google Daydream or Samsung Gear VR have had the chance to feel like they were at these shows without ever having to leave their couches. Underground dance streamer Boiler Room has been experimenting with something similar for VR clubs. Earlier this year it transmitted a DJ set via headsets from Berlin, for example.

Full article at The Independent : http://www.independent.co.uk/life-style/gadgets-and-tech/how-live-concerts-could-be-enhanced-by-21st-century-opera-glasses-a8002606.html

5 Comments | Started November 22, 2017, 06:07:50 pm

Tyler

Chasing complexity in Robotics News

Chasing complexity
22 November 2017, 5:00 am

In his junior year of high school, Ryan Williams transferred from the public school in his hometown of Florette, Alabama — “essentially a courthouse and a couple gas stations,” as he describes it — to the Alabama School of Math and Science in Mobile.

Although he had been writing computer programs since age 7 — often without the benefit of a computer to run them on — Williams had never taken a computer science class. Now that he was finally enrolled in one, he found it boring, and he was not shy about saying so in class.

Eventually, his frustrated teacher pulled a heavy white book off of a shelf, dumped it dramatically on Williams’s desk, and told him to look up the problem described in the final chapter. “If you can solve that,” he said, “then you can complain.”

The book was “Introduction to Algorithms,” co-written by MIT computer scientists Charles Leiserson and Ron Rivest and one of their students, and the problem at the back was the question of P vs. NP, which is frequently described as the most important outstanding problem in theoretical computer science.

Twenty-two years later, having joined the MIT electrical engineering and computer science faculty with tenure this year, Williams is now a colleague of both Leiserson and Rivest, in the Theory of Computing Group at MIT’s Computer Science and Artificial Intelligence Laboratory. And while he hasn’t solved the problem of P vs. NP — nobody has — he’s made one of the most important recent contributions toward its solution.

P vs. NP is a problem in the field of computational complexity. P is a set of relatively easy problems, and NP is a set of problems some of which appear to be diabolically hard. If P = NP, then the apparently hard problems are actually easy. Few people think that’s the case, but no one’s been able to prove it isn’t.
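To make the asymmetry concrete, here is a minimal sketch (my illustration, not from the article) using Boolean satisfiability, the canonical NP-complete problem: verifying a proposed solution takes time linear in the formula, while the only known general way to find one is essentially to try all 2^n assignments.

# A formula is a list of clauses; a literal +i means variable i is
# true, -i means it is false.
from itertools import product

def check(formula, assignment):
    # Verifying a candidate solution is fast: linear in the formula size.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def solve(formula, n_vars):
    # Finding a solution: no trick known that beats near-brute-force
    # in general, so try all 2^n assignments.
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if check(formula, assignment):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve([[1, 2], [-1, 3], [-2, -3]], 3))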

As a postdoc at IBM’s Almaden Research Center, Williams proved a result about a larger set of problems, known as NEXP, showing that they can’t be solved efficiently by a class of computational circuits called ACC. That may sound obscure, but when he published his result, in 2010, the complexity theorist Scott Aaronson — then at MIT, now at the University of Texas — wrote on his blog, “The result is one of the most spectacular of the decade.”

“We all knew the goal was to walk on the moon (i.e., prove P≠NP and related separations),” Aaronson later added, “and what Ryan has done is to build a model rocket that gets a couple hundred feet into the air, whereas all the previous rockets had suffered from long-identified technical limitations that had kept them from getting up more than 50 feet. … It’s entirely plausible that those improvements really are a nontrivial step on the way to the moon.”

Basic principles

Williams is the son of a mother who taught grade school and a father who ran his own construction firm and whose family indoctrinated Williams into one side of a deep Alabamian social divide — the side that roots for Auburn in the annual Auburn-Alabama football game.

Most of his father’s construction contracts were to dig swimming pools, and when Williams was in high school and college, he was frequently his father’s only assistant. His father ran the backhoe, and Williams followed behind the bucket, digging out rocks and roots, smoothing the ground, and measuring the grade with a laser level.

His father was such a backhoe virtuoso that, Williams says, “If I was going too slow, he would take the edge of the bucket and start flattening the ground and raking it himself. He would say, ‘Point the level here and see if it’s grade.’”

In first grade, having scored highly on a standardized test the year before, Williams began taking a class one day a week at a school for gifted children on the opposite side of the county. He was entranced by the school’s Apple II computer and learned to program in Basic. The next year, the class had a different teacher and the computer was gone, but Williams kept writing Basic programs nonetheless.

For three straight years, from 8th through 10th grade, he and a partner won a statewide programming competition, writing in the oft-derided Basic language. They competed as an undersized team, even though the state Technology Fair sponsored an individual competition as well. “It just didn’t seem fun to spend two or three hours straight programming by myself,” Williams says.

After his junior-year introduction to the P vs. NP problem, Williams was determined to study theoretical computer science in college. He ended up at Cornell University, studying with Juris Hartmanis, a pioneer in complexity theory and a winner of the Turing Award, the Nobel Prize of computer science. Williams also introduced his Yankee classmates to the ardor of Alabamian football fandom, commandeering communal televisions for the annual Auburn-Alabama games.

“It was pretty clear to the other people who wanted to watch television that, no, I needed it more, and that maybe I was willing to fistfight,” Williams says.

After graduating, he did a one-year master’s degree at Cornell and contributed a single-authored paper to a major conference in theoretical computer science. Then he headed to Carnegie Mellon University and graduate study with another Turing-Award-winning complexity theorist, Manuel Blum.

Leaps and bounds

Blum told Williams that he was interested in two topics: k-anonymity — a measure of data privacy — and consciousness. K-anonymity seemed slightly more tractable, so Williams dove into it. Within weeks, he had proven that calculating optimal k-anonymity — the minimum number of redactions necessary to protect the privacy of someone’s personal data — was an NP-complete problem, meaning that it was (unless someone proves P equal to NP) prohibitively time consuming to compute.
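For a concrete feel of the property in question (a sketch of mine, not Williams's construction): checking that a table is k-anonymous, meaning every combination of quasi-identifier values is shared by at least k rows, is easy; what Williams proved hard is minimizing the redactions needed to get there.

from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    # Group rows by their quasi-identifier values; every group must
    # contain at least k rows. (Column names here are made up.)
    groups = Counter(tuple(row[c] for c in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

rows = [
    {"zip": "02139", "age": "3*"},
    {"zip": "02139", "age": "3*"},
    {"zip": "02142", "age": "4*"},
]
print(is_k_anonymous(rows, ["zip", "age"], 2))  # False: one row is unique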

Such proofs depend on the calculation of lower bounds — the minimum number of computational steps necessary to solve particular problems. As a potential thesis project, Williams began considering lower bounds on NP-complete problems when solved by computers with extremely limited memory. The hope was that establishing lower bounds in such artificially constrained cases would point the way toward establishing them in the more general case.

“I had studied these things for years, and at some point it occurred to me that these things have a pattern,” Williams says. His dissertation ended up being an automated technique for proving lower bounds in the context of memory-constrained computing. “I wrote a computer program whose output — when certified by a human — is proving that there are no efficient programs for this other problem,” he says.

After graduating, Williams did one-year postdocs at both CMU and the Institute for Advanced Study, in Princeton, New Jersey. Then came his research fellowship at IBM and his “spectacular” result.

That result came from an attempt to bridge a divide within theoretical computer science, between researchers who work on computational complexity and those who design algorithms. Computational-complexity research is seen as more abstract, because it seeks to make general claims about every possible algorithm that might be brought to bear on a particular problem: None can do better than some lower bound. Algorithm design seems more concrete, since it aims at simply beating the running time of the best algorithm developed so far.

But in fact, Williams argues, the problems are more symmetric than they first appear, because establishing an algorithm’s minimum running time requires generalizing about every possible instance of a particular problem that it will ever have to face. Williams wondered whether he could exploit this symmetry, adapting techniques from algorithm design to establish lower bounds.

“Reasoning about lower bounds just seems really hard, but yet, when it comes to designing algorithms to solve the problem, it’s somehow just more natural for people to think about,” Williams says. “People are just naturally problem solvers. Maybe if you phrased the problem the right way, it would become an algorithmic problem.”

Computational jiu-jitsu

Any NP-complete problem can be represented as a logic circuit — a combination of the elementary computing elements that underlie all real-world computing. Solving the problem is equivalent to finding a set of inputs to the circuit that will yield a given output.

Suppose that, for a particular class of circuits, a clever programmer can devise a method for finding inputs that’s slightly more efficient than solving a generic NP-complete problem. Then, Williams showed, it’s possible to construct a mathematical function that those circuits cannot implement efficiently.

It’s a bit of computational jiu-jitsu: By finding a better algorithm, the computer scientist proves that a circuit isn’t powerful enough to perform another computational task. And that establishes a lower bound on that circuit’s performance.

First, Williams proved the theoretical connection between algorithms and lower bounds, which was dramatic enough, but then he proved that it applied to a very particular class of circuits.

“This is essentially the circuit class where progress on P not equal to NP stopped in the mid-’80s,” Williams explains. “We were gradually building up some steam with slightly better, slightly better lower bounds, but it completely stopped in its tracks because of this one pesky little class that nobody could get a handle on.”

Since Williams’s breakthrough paper, both he and other complexity theorists have used his technique for translating between algorithms and lower bounds to prove results about other classes of problems. But, he explains, that translation cuts both ways: Sometimes, a failed attempt at establishing a lower bound can be translated into a more efficient algorithm for solving some other problem. Williams estimates that he has published as many papers in the field of algorithm design as he has in the field of computational complexity.

“I’m lucky,” he says. “I can even publish my failures.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.

3 Comments | Started November 22, 2017, 12:01:04 pm

Art

What defines Human? in General Chat

I came across this question yesterday and gave it quite a bit of thought so now it's your turn.

What defines a Human? Standing & walking upright? (What about someone with no legs, then?)
Speech? Hearing? Touch? Smell? Taste? (There are many people who are unable to perform some or many of these tasks.)
So arms are out as well.

Fast forward a few years (decade +- perhaps)...
If a person were in a horrible accident and left without a face or skull, but his or her brain was still alive, functioning, and kept in a laboratory vessel, would it still be Human?

How do we measure what a Human is rather than what makes us human?

What if the person's brain could still communicate with others via speech synthesis and audio input or brain to image transfer?

Is that person still Human? or a Human?

*(With apologies to my friend, Mav). O0

Also no intended poke at the heads in Jars from TV's Futurama.


40 Comments | Started November 07, 2017, 09:53:27 pm

unreality

Brain vs computer in General AI Discussion

Interesting. According to Intel, the Homo sapiens brain runs at around 1000 Hz. It’s believed the human brain has around 100 trillion synapses, although some believe it could be as high as 1000 trillion. Multiply the two and you get 1E+17 to 1E+18.

Modern CPUs have about 800 million transistors and operate at around 3 GHz. Multiply the two and you get about 2E+18.

That’s a ratio of 2 to 20 (2E+18 / 1E+18 = 2 and 2E+18 / 1E+17 = 20). The average ratio is (2 + 20) / 2 = 11.


The human brain, while thinking hard, takes about 20 watts. A web server is a good comparison because servers don’t need high-power graphics cards and they have a lot of RAM, which is something my AI requires. A server under load takes about 200 watts.

That’s a ratio of 10 (200 / 20). Interesting. The synapse*frequency ratio between CPU and brain taken from above is 11:1, and the power ratio is 10:1. Pretty much the same, it seems.


What about neurons? I haven’t seen any valid way to map neurons onto a computer. A few academics believe neurons have memory. A neuron is very simple; it’s nowhere near a CPU. DRAM memory chips are made of capacitors and transistors-- 8 of each per byte. There are 100 billion neurons in the human brain. A 128GB server would have about 1000 billion capacitors and transistors. That’s a 10:1 ratio. I’ve often said that my AI should have at least 500 GB of RAM, but then again I’m shooting for ASI-- beyond the human.
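The whole back-of-the-envelope comparison fits in a few lines (every figure is the post’s rough estimate, nothing measured):

brain_ops_low  = 1e3 * 1e14  # 1000 Hz * 100 trillion synapses = 1e17
brain_ops_high = 1e3 * 1e15  # 1000 Hz * 1000 trillion synapses = 1e18
cpu_ops        = 8e8 * 3e9   # 800M transistors * 3 GHz = 2.4e18 (~2E+18 above)

print(cpu_ops / brain_ops_high)  # 2.4 -> the "2" after rounding
print(cpu_ops / brain_ops_low)   # 24  -> the "20" after rounding
print(200 / 20)                  # power ratio, server vs brain: 10

bit_cells = 128e9 * 8            # 128 GB of DRAM: ~1e12 one-transistor,
print(bit_cells / 100e9)         # one-capacitor cells vs 1e11 neurons: ~10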

Idk, just thinking out loud, but the match between the synapse*frequency ratio and the power ratio is interesting. Sure, the above figures will probably vary a lot. The fact that they’re even remotely close is striking.

If a 128 GB RAM server had the correct AI code, I truly believe it could be comparable to humans in thinking ability. I probably spend way way way too much time thinking about this lol. I'm like obsessed with it. After one has spent so much time analyzing something, they can get a good gut feeling about it. My gut feeling is shouting out loud that the hardware is already here, and that it's time to start coding the AI the correct way. The code is what's missing. Evolution has had millions of years to work on the brain's program. We're trying to accomplish the same thing in software in just a few decades. It's not easy, but it will happen!

6 Comments | Started Today at 06:36:03 am

Don Patrick

The most sensational news ever! in AI News

https://artistdetective.wordpress.com/2017/11/17/the-most-sensational-news-ever

I've composed an overview of some of the most wrongly sensationalised news stories from this decade, in part thanks to our very own Tyler bot for providing them  ;)
Did I forget any particularly big stories?

10 Comments | Started November 17, 2017, 02:47:26 pm

systematic

A system for Machine Behaviour in General Project Discussion

I feel that it is important to start with the overall process required for intelligent action in robotics, and then to work our way down from there to the actual details of the system.

I have linked a spreadsheet that delineates a general process that I believe would work.

https://1drv.ms/x/s!Ap39DLuUU_9bi2AAsZKoB2T1WGY4

2 Comments | Started November 13, 2017, 11:39:48 am

unreality

Robot walks 40.5 miles in marathon in General AI Discussion

This cute robot walked 40.5 miles in a marathon and needed only 16 watts to walk, of which only 11 watts went to the motors. I read that a typical human burns about 190 kilocalories per hour, which comes to roughly 220 watts (190 kcal/h x 4184 J/kcal / 3600 s/h). I don't know how loud electric motors are inside an encased robot that has sound-absorbing material; maybe with that material you can't hear them at all. There are also contactless motors that are extremely quiet. Noisy gears can be replaced with rubber substitutes or contactless induction methods. There are a lot more options now. The technology to make a real Synth body undetectable by humans is ready and waiting for the ASI. :)

11 Comments | Started November 22, 2017, 07:28:16 pm
What are the main techniques for the development of a good chatbot?

What are the main techniques for the development of a good chatbot? in Articles

Chatbots are among the most useful and reliable technological helpers for those who own e-commerce websites and similar resources. However, an important problem is that people may not know which technologies are best suited to achieving their goals. In today's article, you can get more familiar with the most important principles of chatbot building.

Oct 12, 2017, 01:31:00 am
Kweri

Kweri in Chatbots - English

Kweri asks you questions of brilliance and stupidity. Provide correct answers to win. Type ‘Y’ for yes and ‘N’ for no!

Links:

FB Messenger
https://www.messenger.com/t/kweri.chat

Telegram
https://telegram.me/kweribot

Slack
https://slack.com/apps/A5JKP5TND-kweri

Kik
http://taell.me/kweri-kik

Line
http://taell.me/kweri-line/

Skype
http://taell.me/kweri-skype/

Oct 12, 2017, 01:24:37 am
The Conversational Interface: Talking to Smart Devices

The Conversational Interface: Talking to Smart Devices in Books

This book provides a comprehensive introduction to the conversational interface, which is becoming the main mode of interaction with virtual personal assistants, smart devices, various types of wearables, and social robots. The book consists of four parts: Part I presents the background to conversational interfaces, examining past and present work on spoken language interaction with computers; Part II covers the various technologies that are required to build a conversational interface, along with practical chapters and exercises using open source tools; Part III looks at interactions with smart devices, wearables, and robots, then goes on to discuss the role of emotion and personality in the conversational interface; Part IV examines methods for evaluating conversational interfaces and discusses future directions.

Aug 17, 2017, 02:51:19 am
Explained: Neural networks

Explained: Neural networks in Articles

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years.

Jul 26, 2017, 23:42:33 pm
It's Alive

It's Alive in Chatbots - English

[Messenger] Enjoy making your bot with our user-friendly interface. No coding skills necessary. Publish your bot in a click.

Once LIVE on your Facebook Page, it is integrated within the “Messages” of your page. This means your bot is allowed (or not) to interact with and answer people who contact you through the private “Messages” feature of your Facebook Page, or directly through the Messenger App. You can view all the conversations directly in your Facebook account. This also means that no one needs to download an app; messages are sent directly as notifications to your users.

Jul 11, 2017, 17:18:27 pm
Star Wars: The Last Jedi

Star Wars: The Last Jedi in Robots in Movies

Star Wars: The Last Jedi (also known as Star Wars: Episode VIII – The Last Jedi) is an upcoming American epic space opera film written and directed by Rian Johnson. It is the second film in the Star Wars sequel trilogy, following Star Wars: The Force Awakens (2015).

Having taken her first steps into a larger world, Rey continues her epic journey with Finn, Poe and Luke Skywalker in the next chapter of the saga.

Release date : December 2017

Jul 10, 2017, 10:39:45 am
Alien: Covenant

Alien: Covenant in Robots in Movies

In 2104 the colonization ship Covenant is bound for a remote planet, Origae-6, with two thousand colonists and a thousand human embryos onboard. The ship is monitored by Walter, a newer synthetic physically resembling the earlier David model, albeit with some modifications. A stellar neutrino burst damages the ship, killing some of the colonists. Walter orders the ship's computer to wake the crew from stasis, but the ship's captain, Jake Branson, dies when his stasis pod malfunctions. While repairing the ship, the crew picks up a radio transmission from a nearby unknown planet, dubbed by Ricks as "planet number 4". Against the objections of Daniels, Branson's widow, now-Captain Oram decides to investigate.

Jul 08, 2017, 05:52:25 am
Black Eyed Peas - Imma Be Rocking That Body

Black Eyed Peas - Imma Be Rocking That Body in Video

For the robots of course...

Jul 05, 2017, 22:02:31 pm
Winnie

Winnie in Assistants

[Messenger] The Chatbot That Helps You Launch Your Website.

Jul 04, 2017, 23:56:00 pm