Recent Posts

21
Forum Feedback / Re: New Theme
« Last post by Freddy on June 28, 2017, 04:51:56 pm »
Quote
Appearance looks fine here. The only thing is that I can't connect from the home page; I have to be inside the forum to connect. But it's not important.
I really like the new look and feel.

Same here. I'll see if I can think of anything.

Quote
So fast my refresh arrow doesn't change!

Hehe, yes, very quick now :)
22
General Chat / Re: What's everyone up to ?
« Last post by LOCKSUIT on June 28, 2017, 04:33:09 pm »



I like this car.
23
General AI Discussion / Re: Pass-through machines
« Last post by ivan.moony on June 28, 2017, 03:13:37 pm »
Quote
Don't forget that at output, there are not only limbs, but also several mental functionalities, like the abilities to focus willingly on something, to swap current tasks, to abandon tasks, to open a new temporary task, to learn something, to speak to oneself, to choose to interpret input in a certain way (look at this cloud, it looks like a sheep), ...etc. A lot of "mental actions" that we can choose to perform.
I see mental actions such as free will, focusing attention, and forming knowledge as changing function compositions, i.e. changing the formations that connect input and output. At the same time, there can be many input-output connections, and by passing parameters to the input we can choose which path is used to perform actions, depending on those parameters. Sometimes it is necessary to carry some mutable data through several input-output cycles. In those cases, each cycle can output the (modified) data directly to the next cycle's input, carrying the data forward or advancing it until the final result appears. For example, we can pass a topic of conversation through multiple hear/say cycles, and that would influence how our answers are formed.

All of this seems so natural once you dive into the functional programming paradigm. Anything could be done in each cycle, from making a footstep to making a step in solving an algebra expression. By predicting where chaining different steps leads us, we can plan which combination of actions to perform to get the machine into a certain state.
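
As a tiny illustration of that "carry data through cycles" idea, here is a minimal Racket sketch (all the names and the topic-detection rule are just made up for the example): each cycle is a function from (heard input, carried topic) to (reply, carried topic), and a conversation is a fold over those cycles.

Code:
#lang racket

;; One hear/say cycle: takes what was heard plus the carried topic,
;; returns what is said plus the (possibly updated) topic.
(define (hear/say heard topic)
  (define new-topic (if (string-contains? heard "weather") "weather" topic))
  (values (format "About ~a: I heard \"~a\"." new-topic heard) new-topic))

;; Carry the topic through several cycles by threading it from one cycle to the next.
(define (converse inputs initial-topic)
  (for/fold ([topic initial-topic]
             [replies '()]
             #:result (reverse replies))
            ([heard (in-list inputs)])
    (define-values (reply new-topic) (hear/say heard topic))
    (values new-topic (cons reply replies))))

(converse (list "nice weather today" "indeed") "small talk")
;; each reply is shaped by the topic carried over from earlier cycles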
24
General AI Discussion / Re: A (rather fantastical) idea on how AI might start.
« Last post by Zero on June 28, 2017, 12:45:16 pm »
An incomplete AGI could be built, with humans acting as the missing parts.

It would be a website, with user registration. Users could work in the emotional center, in the attention center, in the prediction center, in the perception center... every part of a mind. The website would behave as a whole, a single entity, a giant composite being, half-human, half-machine. It could even have political weight. A new kind of organization. Then, step by step, humans working in this entity would be replaced by programs.
25
General Chat / Re: What's everyone up to ?
« Last post by Art on June 28, 2017, 12:17:21 pm »
Data - I agree, and I also own a hybrid vehicle that typically gets me between 50 and 65 mpg (depending on driving conditions and behavior).
It shuts the gasoline motor off at a stop light and quietly runs on electric until it starts moving forward and the speed increases enough for the gasoline engine to come on again. The whole transition is quiet and barely noticeable.
I love my car, and it is one of the few cars that afford me a decent amount of head room, given my size.
26
Robotics News / Testing new networking protocols
« Last post by Tyler on June 28, 2017, 12:00:21 pm »
Testing new networking protocols
21 March 2017, 8:00 pm

The transmission control protocol, or TCP, which manages traffic on the internet, was first proposed in 1974. Some version of TCP still regulates data transfer in most major data centers, the huge warehouses of servers maintained by popular websites.

That’s not because TCP is perfect or because computer scientists have had trouble coming up with possible alternatives; it’s because those alternatives are too hard to test. The routers in data center networks have their traffic management protocols hardwired into them. Testing a new protocol means replacing the existing network hardware with either reconfigurable chips, which are labor-intensive to program, or software-controlled routers, which are so slow that they render large-scale testing impractical.

At the Usenix Symposium on Networked Systems Design and Implementation later this month, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present a system for testing new traffic management protocols that requires no alteration to network hardware but still works at realistic speeds — 20 times as fast as networks of software-controlled routers.

The system maintains a compact, efficient computational model of a network running the new protocol, with virtual data packets that bounce around among virtual routers. On the basis of the model, it schedules transmissions on the real network to produce the same traffic patterns. Researchers could thus run real web applications on the network servers and get an accurate sense of how the new protocol would affect their performance.

“The way it works is, when an endpoint wants to send a [data] packet, it first sends a request to this centralized emulator,” says Amy Ousterhout, a graduate student in electrical engineering and computer science (EECS) and first author on the new paper. “The emulator emulates in software the scheme that you want to experiment with in your network. Then it tells the endpoint when to send the packet so that it will arrive at its destination as though it had traversed a network running the programmed scheme.”

Ousterhout is joined on the paper by her advisor, Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science; Jonathan Perry, a graduate student in EECS; and Petr Lapukhov of Facebook.

Traffic control

Each packet of data sent over a computer network has two parts: the header and the payload. The payload contains the data the recipient is interested in — image data, audio data, text data, and so on. The header contains the sender’s address, the recipient’s address, and other information that routers and end users can use to manage transmissions.
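
For illustration only, that split might be modelled like this (a toy Racket sketch, not any real packet format):

Code:
#lang racket

;; Toy packet: the header carries routing metadata,
;; the payload carries the data the recipient actually wants.
(struct header (sender recipient priority) #:transparent)
(struct packet (hdr payload) #:transparent)

(define example
  (packet (header "10.0.0.1" "10.0.0.2" 'normal)
          "...image, audio or text data..."))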

When multiple packets reach a router at the same time, they’re put into a queue and processed sequentially. With TCP, if the queue gets too long, subsequent packets are simply dropped; they never reach their recipients. When a sending computer realizes that its packets are being dropped, it cuts its transmission rate in half, then slowly ratchets it back up.
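
As a toy sketch of that behaviour (not real TCP code, just the idea in Racket):

Code:
#lang racket

;; Sender's reaction to loss: halve the rate after a drop,
;; otherwise creep back up slowly.
(define (next-rate current-rate dropped?)
  (if dropped?
      (max 1 (quotient current-rate 2))  ; cut the transmission rate in half
      (+ current-rate 1)))               ; slowly ratchet it back up

;; Router queue with a fixed capacity: packets beyond it are simply dropped.
(define (enqueue queue new-packet capacity)
  (if (< (length queue) capacity)
      (values (append queue (list new-packet)) #f)  ; accepted
      (values queue #t)))                           ; queue full, packet dropped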

A better protocol might enable a router to flip bits in packet headers to let end users know that the network is congested, so they can throttle back transmission rates before packets get dropped. Or it might assign different types of packets different priorities, and keep the transmission rates up as long as the high-priority traffic is still getting through. These are the types of strategies that computer scientists are interested in testing out on real networks.
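
Sketched in the same toy style (again an illustration, not any specific protocol), those two strategies might look like:

Code:
#lang racket

(struct pkt (sender recipient priority congestion-bit) #:transparent)

;; Instead of dropping, flip the packet's congestion bit when the queue grows long,
;; so end users can throttle back before anything is lost.
(define (mark-if-congested p queue-length threshold)
  (if (> queue-length threshold)
      (struct-copy pkt p [congestion-bit #t])
      p))

;; Serve high-priority packets first, as long as any are waiting.
(define (next-to-send queue)
  (define high (filter (lambda (p) (eq? (pkt-priority p) 'high)) queue))
  (cond [(pair? high)  (first high)]
        [(pair? queue) (first queue)]
        [else          #f]))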

Speedy simulation

With the MIT researchers’ new system, called Flexplane, the emulator, which models a network running the new protocol, uses only packets’ header data, reducing its computational burden. In fact, it doesn’t necessarily use all the header data — just the fields that are relevant to implementing the new protocol.

When a server on the real network wants to transmit data, it sends a request to the emulator, which sends a dummy packet over a virtual network governed by the new protocol. When the dummy packet reaches its destination, the emulator tells the real server that it can go ahead and send its real packet.

If, while passing through the virtual network, a dummy packet has some of its header bits flipped, the real server flips the corresponding bits in the real packet before sending it. If a clogged router on the virtual network drops a dummy packet, the corresponding real packet is never sent. And if, on the virtual network, a higher-priority dummy packet reaches a router after a lower-priority packet but jumps ahead of it in the queue, then on the real network, the higher-priority packet is sent first.

The servers on the network thus see the same packets in the same sequence that they would if the real routers were running the new protocol. There’s a slight delay between the first request issued by the first server and the first transmission instruction issued by the emulator. But thereafter, the servers issue packets at normal network speeds.
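
Paraphrasing the description above as a toy Racket sketch (my reading of it, not Flexplane's actual code; the scheme under test is just a function passed in):

Code:
#lang racket

;; A request carries only the header fields the experimental scheme cares about.
(struct request (src dst header-bits) #:transparent)

;; The emulator runs a dummy packet through the virtual network and reports
;; the outcome back to the real sender.
(define (emulate req run-virtual-network)
  (match (run-virtual-network req)
    [(list 'delivered modified-bits)
     (list 'send-now modified-bits)]  ; send the real packet, mirroring any bit flips
    [(list 'dropped)
     (list 'suppress)]))              ; the real packet is never sent

;; Example stand-in scheme: marks a congestion bit but delivers everything.
(define (always-deliver req)
  (list 'delivered (cons 'congestion (request-header-bits req))))

(emulate (request "serverA" "serverB" '()) always-deliver)
;; => '(send-now (congestion))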

The ability to use real servers running real web applications offers a significant advantage over another popular technique for testing new network management schemes: software simulation, which generally uses statistical patterns to characterize the applications’ behavior in a computationally efficient manner.

“Being able to try real workloads is critical for testing the practical impact of a network design and to diagnose problems for these designs,” says Minlan Yu, an associate professor of computer science at Yale University. “This is because many problems happen at the interactions between applications and the network stack” — the set of networking protocols loaded on each server — “which are hard to understand by simply simulating the traffic.”

“Flexplane takes an interesting approach of sending abstract packets through the emulated data-plane resource management solutions and then feeding back the modified real packets to the real network,” Yu adds. “This is a smart idea that achieves both high link speed and programmability. I hope we can build up a community using the FlexPlane test bed for testing new resource management solutions.”

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robotics - Artificial Intelligence

Reprinted with permission of MIT News: MIT News homepage

27
General Chatbots and Software / Re: Is there a "real time" chatbot engine?
« Last post by Zero on June 28, 2017, 11:45:49 am »
OK, I get it. Then maybe a good chatbot engine would have to construct its replies. The process of finding the most appropriate answer wouldn't yield a pre-made sentence filled with captures from wildcards, but rather a "WannaSay" object with subject, verb, and complement properties; another process would then build the sentence from this WannaSay object.
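
Something like this, maybe (a quick Racket sketch; the names are just placeholders):

Code:
#lang racket

;; The matching process would produce a WannaSay, not a finished sentence.
(struct wanna-say (subject verb complement) #:transparent)

;; A separate process then builds the sentence from the WannaSay object.
(define (build-sentence ws)
  (string-append (wanna-say-subject ws) " "
                 (wanna-say-verb ws) " "
                 (wanna-say-complement ws) "."))

(build-sentence (wanna-say "I" "like" "that idea"))
;; => "I like that idea."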
28
Forum Feedback / Re: New Theme
« Last post by LOCKSUIT on June 28, 2017, 11:44:03 am »
So fast my refresh arrow doesn't change!
29
Forum Feedback / Re: New Theme
« Last post by Zero on June 28, 2017, 11:39:29 am »
Appearance looks fine here. The only thing is that I can't connect from the home page; I have to be inside the forum to connect. But it's not important.
I really like the new look and feel.
30
General AI Discussion / Re: Pass-through machines
« Last post by Zero on June 28, 2017, 10:24:59 am »
Yes, "functional" belongs to the "declarative" family.

Neurons can certainly be seen as functions. This is a very interesting path to AGI.

Don't forget that at output, there are not only limbs, but also several mental functionalities, like the abilities to focus willingly on something, to swap current tasks, to abandon tasks, to open a new temporary task, to learn something, to speak to oneself, to choose to interpret input in a certain way (look at this cloud, it looks like a sheep), ...etc. A lot of "mental actions" that we can choose to perform.

I'll use Racket since that's the language I'm currently swimming in.

So imagine a lot of functional neurons: feurons. Each feuron is an object with the following properties:
- the computation function (a Racket function, like "+" or something more elaborate)
- a list of "follower" feurons, which receive this feuron's output as part of their input
- a "backlink" list of followed feurons, from which this feuron takes its own inputs

Imagine there is a big list of computation functions available, say 200 user-coded functions. This list represents all possible types of feurons.
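
In Racket that could start out as something like this (just a sketch of the data, with a handful of built-ins standing in for the 200 user-coded functions):

Code:
#lang racket

;; A feuron: a computation function plus its wiring.
(struct feuron (compute     ; a Racket function, like + or something more elaborate
                followers   ; feurons that receive this feuron's output as input
                backlinks)  ; feurons this feuron takes its own inputs from
  #:transparent #:mutable)

;; Stand-in for the big list of user-coded computation functions,
;; i.e. all possible types of feurons.
(define function-pool (list + * max min add1 abs))

;; A fresh feuron whose computation function is picked at random from the pool.
(define (random-feuron)
  (feuron (list-ref function-pool (random (length function-pool)))
          '()
          '()))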

Now, I would go this way.

Start with an initial wiring schema, like Korrelan's connectome. Then go step by step.

At each step, do this:

First do your math, so you know the output of every feuron.

Then try to find similarities between results. For example, notice that feuron #2554 and feuron #9810 currently have similar results. You can also search for similarities between current results and results obtained a few steps ago.

For each found similarity, something has to be done. But it's still a little blurry in my mind for now. It's about creating new links. Perhaps Korrelan could come in and tell us what to do at this point.

Then, natural selection needs to happen. You create a lot of new feurons, arbitrarily connected to existing ones (which ones?), with computation functions randomly chosen from our list of 200 functions. At the same time, you need to let useless feurons die (which is related to the similarity thing: useless feurons seem to end up completely outside the system, computing nonsense).

Then you do your next step.
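
As a rough skeleton of one step (self-contained, so the feuron bits are repeated; the rewiring-on-similarity and "useless" tests are exactly the blurry parts, so they are left open here):

Code:
#lang racket

(struct feuron (compute followers backlinks) #:transparent #:mutable)
(define function-pool (list add1 sub1 abs (lambda (x) (* x x))))
(define (random-feuron)
  (feuron (list-ref function-pool (random (length function-pool))) '() '()))

;; 1. Do the math: every feuron's output for this step (here applied to one shared input).
(define (compute-all feurons input)
  (for/list ([f (in-list feurons)]) ((feuron-compute f) input)))

;; 2. Find similarities: index pairs of feurons whose current results match.
(define (find-similarities outputs)
  (for*/list ([i (in-range (length outputs))]
              [j (in-range (add1 i) (length outputs))]
              #:when (equal? (list-ref outputs i) (list-ref outputs j)))
    (cons i j)))

;; One step: compute, find similarities, then "natural selection" by adding
;; randomly chosen newcomers. What to do with the similar pairs, and how to
;; detect useless feurons, is left open.
(define (step feurons input)
  (define outputs (compute-all feurons input))
  (define similar (find-similarities outputs))
  (define newcomers (build-list 5 (lambda (i) (random-feuron))))
  (values (append feurons newcomers) similar))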

This is all very approximate, but I think the answer lies somewhere in that direction.

What do you think, guys?


EDIT:

From the wikipedia article on Hebbian theory:
Quote
The theory is often summarized by Siegrid Löwel's phrase: "Cells that fire together, wire together." However, this summary should not be taken literally. Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can only occur if cell A fires just before, not at the same time as, cell B. This important aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence.

What needs to be found is a function that takes cell A's output as input and produces the same output as another cell B. Perhaps. :)

Also, each type of feuron should come with its own linking machinery: specific ways to link feurons of this type to other feurons, because it really depends on what this type of feuron eats and produces.
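
A naive way to look for such a function, given the outputs of A and B recorded over the same steps (another sketch, reusing a small stand-in pool):

Code:
#lang racket

;; Look through the pool for an f such that (f a) matches b at every recorded step.
(define function-pool (list add1 sub1 abs (lambda (x) (* 2 x))))

(define (explains? f a-outputs b-outputs)
  (andmap (lambda (a b) (equal? (f a) b)) a-outputs b-outputs))

(define (find-link a-outputs b-outputs)
  (findf (lambda (f) (explains? f a-outputs b-outputs)) function-pool))

(find-link '(1 2 3) '(2 4 6))
;; => the doubling function from the pool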