Recent Posts

Pages: [1] 2 3 ... 10
1
XKCD Comic / Re: XKCD Comic : Scientific Paper Graph Quality
« Last post by ranch vermin on January 23, 2018, 10:42:03 pm »
It's not the presentation that's the whole quality of the product.
2
General AI Discussion / Re: Self Aware
« Last post by keghn on January 23, 2018, 04:42:40 pm »
Drones learn to navigate autonomously by imitating cars and bicycles: 

https://phys.org/news/2018-01-drones-autonomously-imitating-cars-bicycles.html 

Blade Runner (2049) - Best Visual Effects - 1080p: 

https://www.youtube.com/watch?time_continue=1&v=BGSu7uMpxME
3
General Project Discussion / Re: AiDL language draft
« Last post by ivan.moony on January 23, 2018, 04:09:01 pm »
Quote
A -> B
C -> D
follows:
(A or B) -> (C and D)
If there's no typo in this quote, then I don't get it. How could (A or B) be related to (C and D), since there's no link between "A -> B" and "C -> D"?

Sorry, it was a typo. I meant: (A or C) -> (B and D). But I miscalculated again; sorry, this statement is also false.

What I really meant (I hope I've got it right this time) was this:
A -> B
A -> C
entails:
A -> (B and C)

and the other way around:
B -> A
C -> A
entails:
(B or C) -> A

This much is logical to me, and it is a kind of common sense, if you think about it. I derived it a while ago by connecting the pairs of formulas with `and` and doing a bit of classical logic inference, using the resolution method and these rules:
(A -> B) <-> (¬A or B)
((A and B) or C ) <-> ((A or C) and (B or C))
((A or B) and C) <-> ((A and C) or (B and C))
¬(A and B) <-> (¬A or ¬B)
¬(A or B) <-> (¬A and ¬B)
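The two derived rules are easy to machine-check; here is a quick truth-table sweep in Python (my own sanity check, not from the thread):

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is equivalent to (not p) or q."""
    return (not p) or q

# Sweep all 8 truth assignments of A, B, C and check both derivations:
#   (A -> B) and (A -> C)  is equivalent to  A -> (B and C)
#   (B -> A) and (C -> A)  is equivalent to  (B or C) -> A
for a, b, c in product([False, True], repeat=3):
    assert (implies(a, b) and implies(a, c)) == implies(a, b and c)
    assert (implies(b, a) and implies(c, a)) == implies(b or c, a)

print("both equivalences hold for all 8 assignments")
```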

Quote
Your remark about CFG and logic is exciting. Could they be expressed and manipulated uniformly? Do they both belong to the same superset?

It seems, for now, that logic alone is not suitable for processing streams of data, though it is good for isolated stream (sequence) elements. There would have to be some sequence-operator extension to logic, maybe in the form of predicates; but then each predicate parameter (as a sequence element) would have to implement its own little sub-logic, connected to the one above, for references and recursion. A very inspiring question, and a very good scientific paper could be extracted from the CFG-logic analogy. Proving a contradiction in a grammar would mean that the grammar could never parse anything (it would always report an error upon parsing), so the contradiction could be considered an error in the grammar.
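That last point can be made concrete. Here is a small sketch of mine (a made-up grammar encoding, not from the thread): a nonterminal that can never derive a terminal string is the grammar-side analogue of a contradiction, and if the start symbol is such a nonterminal, the grammar parses nothing. A fixpoint pass finds the "productive" nonterminals:

```python
def productive(grammar):
    """grammar: dict mapping nonterminal -> list of alternatives,
    each alternative a list of symbols; terminals are plain strings
    that do not appear as keys. Returns the set of nonterminals
    that can derive at least one terminal string."""
    done = set()
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            if nt in done:
                continue
            # An alternative bottoms out if every symbol in it is either
            # already known productive or a terminal.
            if any(all(s in done or s not in grammar for s in alt)
                   for alt in alts):
                done.add(nt)
                changed = True
    return done

# S -> A 'x';  A -> A 'y'  -- A can never bottom out, so S parses nothing.
g = {"S": [["A", "x"]], "A": [["A", "y"]]}
print(productive(g))  # empty set: both S and A are unproductive
```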
4
General Project Discussion / Re: AiDL language draft
« Last post by Zero on January 23, 2018, 02:38:38 pm »
Quote
A -> B
C -> D
follows:
(A or B) -> (C and D)
If there's no typo in this quote, then I don't get it. How could (A or B) be related to (C and D), since there's no link between "A -> B" and "C -> D"?

Your remark about CFG and logic is exciting. Could they be expressed and manipulated uniformly? Do they both belong to the same superset?
5
Robotics News / Engineers design artificial synapse for “brain-on-a-chip” hardware
« Last post by Tyler on January 23, 2018, 12:00:05 pm »
Engineers design artificial synapse for “brain-on-a-chip” hardware
22 January 2018, 3:59 pm

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwritten recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
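As a rough illustration of the kind of simulation described, here is a toy forward pass (my own sketch with made-up layer sizes, not the authors' code) in which every "synapse" weight is perturbed by roughly 4 percent variation, the device-to-device figure reported for the silicon-germanium synapses:

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    """Weighted sums through noisy synapses, squashed by tanh.
    Each weight is multiplied by a ~4% Gaussian variation factor,
    mimicking device-to-device nonuniformity."""
    out = []
    for col in zip(*weights):  # one column of weights per output neuron
        s = sum(x * w * random.gauss(1.0, 0.04)
                for x, w in zip(inputs, col))
        out.append(math.tanh(s))
    return out

# Made-up sizes: 16 input neurons, 8 hidden, 4 output.
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]  # input -> hidden
W2 = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]   # hidden -> output

x = [random.random() for _ in range(16)]      # one "handwritten" input sample
output = layer(layer(x, W1), W2)
print(max(range(4), key=output.__getitem__))  # most active output neuron
```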

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.
6
XKCD Comic / XKCD Comic : Scientific Paper Graph Quality
« Last post by Tyler on January 23, 2018, 12:00:04 pm »
Scientific Paper Graph Quality
22 January 2018, 5:00 am

The worst are graphs with qualitative, vaguely-labeled axes and very little actual data.

Source: xkcd.com

7
General Project Discussion / Re: AiDL language draft
« Last post by ivan.moony on January 23, 2018, 10:09:22 am »
If I understood the question, you are right about `or`.

If you skim over the context-free grammar (CFG) definition, from which BNF is derived, you'll notice that there is only this kind of production: `A -> B C D ...`, and repeating the same left side `A` makes an `or` set. This follows from the logic operator of consequence `->`, where from:
A -> B
C -> D
follows:
(A or B) -> (C and D)

That is how the logic field works: left sides get `or`-ed and right sides get `and`-ed. Repeating the same letter multiple times over multiple lines, on the left or the right side, works wonders.

Note that CFG productions draw arrows in the opposite direction from logic. What is written in CFG as
A -> B C D
should in logic be written as
B C D -> A
or, more readably,
A <- B C D
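One way to picture this correspondence (a sketch of mine, not part of the draft): read each production `A -> B C D` as the Horn clause `(B and C and D) -> A` and forward-chain from the terminals. The body of each clause gives the `and`; repeating a left-hand side gives the `or`:

```python
# Each rule (head, body) encodes the clause: body[0] and body[1] ... -> head.
rules = [
    ("S", ["NP", "VP"]),      # S <- NP and VP
    ("NP", ["noun"]),         # NP <- noun
    ("NP", ["det", "noun"]),  # a second rule for NP acts as `or`
    ("VP", ["verb"]),
]
facts = {"noun", "verb"}      # terminals we have "observed"

# Forward chaining: fire every clause whose whole body is known,
# until nothing new can be derived.
changed = True
while changed:
    changed = False
    for head, body in rules:
        if head not in facts and all(b in facts for b in body):
            facts.add(head)
            changed = True

print(sorted(facts))  # NP, VP and S all become derivable
```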
8
General Project Discussion / Re: AiDL language draft
« Last post by Zero on January 23, 2018, 09:45:15 am »
Ok, harder than I thought.

The simplest form of syntax description would be a multiset of pairs... and when the left-hand side appears several times, it means "or". Am I wrong?
9
General Project Discussion / Re: B-Bot
« Last post by ivan.moony on January 23, 2018, 09:35:41 am »
The Universe took 13.7 billion years to create us, and we are still slow sometimes.
10
General Project Discussion / Re: B-Bot
« Last post by ranch vermin on January 23, 2018, 08:56:37 am »
I don't blame you for it being slow; machine learning is worse than raytracing, workload-wise.