Recent Posts

1
New Users Please Post Here / Re: Spikes in neural networks
« Last post by korrelan on February 22, 2018, 01:24:28 pm »
Hi 157239n, and welcome.

Sorry my reply was so short… I was typing on my phone and I hate typing on my phone lol.

I agree it is a weird one. The ‘spike’ seems to be a compounding problem, and the randomness of the epoch at which it happens points me towards something like a local-minima scenario, where the variability of the backprop/training is causing an accumulating error event, though how it recovers and continues improving is a little confusing lol.

Obviously, as all the code is newly/personally hand written and not from a regular tried-and-tested library/source, I’m assuming the graphs are correct, though the error and accuracy graphs seem to contradict each other during the spike event: the error rate rises as the accuracy increases.

If you are confident your code is correct, then have you checked for buffering problems with the OS you are using? Depending on how you are decoding/presenting the image data… are you confident the images are presented correctly on each iteration? Check variable scope/resolution/accuracy within your code; it could be an inherent problem with the Java interpreter/compiler you are using. If you are using external libraries (sigmoid, etc.), check them for accuracy.

Try presenting the training data in a different order, adding noise to the weights, or gradually increasing the weight-adjustment rate as the system learns (momentum).
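For the momentum/noise part, the usual textbook version carries over a fraction of the previous weight update; something roughly like this (a quick Processing-style sketch with made-up names, untested against your setup):

Code:
float learningRate = 0.04;   // your current learning constant
float momentum     = 0.9;    // fraction of the previous update carried over
float noiseScale   = 0.001;  // tiny random nudge added to each weight
float[][] velocity;          // one entry per weight, kept between iterations

void updateWeights(float[][] weights, float[][] gradient) {
  if (velocity == null) velocity = new float[weights.length][weights[0].length];
  for (int i = 0; i < weights.length; i++) {
    for (int j = 0; j < weights[i].length; j++) {
      // carry some of the last update forward, then step down the gradient
      velocity[i][j] = momentum * velocity[i][j] - learningRate * gradient[i][j];
      weights[i][j] += velocity[i][j] + random(-noiseScale, noiseScale);
    }
  }
}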

I hope you find the solution… let me know if you do.
2
Robotics News / MIT Sandbox’s first acquired startup gives back
« Last post by Tyler on February 22, 2018, 12:00:24 pm »
MIT Sandbox’s first acquired startup gives back
21 February 2018, 6:00 pm

In spring 2016, the Institute launched MIT Sandbox, a program that provides fledgling entrepreneurs with funding, mentorship, education, and other support to help shape and launch early-stage ventures.

Two years later, the program is celebrating a major entrepreneurial milestone: Escher Reality, an augmented-reality (AR) startup from the first Sandbox cohort, has been acquired by an AR-game-developer giant — and plans to give back to the program that helped it launch.

Escher is developing a back-end platform for AR that allows, for the first time, cross-platform and fully multiplayer games. It also enables “persistent experiences,” as the startup calls them, in which an AR game remembers where a user places digital objects in a physical play area — to return to, or for others to find — and recognizes when other users, connected on other devices, interact with those objects.

On Feb. 1, the startup announced that it sold to Niantic, the major AR-game developer, founded by ex-Google engineers, that’s behind 2016’s massive hit “Pokémon Go.”

Of the roughly 700 startups that so far have gone through Sandbox, several have raised significant rounds and are generating revenue, but Escher is the first that’s penned an exit deal. Attributing part of its early success to Sandbox — especially as a startup operating in a very young industry — Escher co-founders are now giving back to the program.

“The Sandbox is phenomenal,” says Ross Finman, who co-founded Escher with Diana Hu, a graduate of Carnegie Mellon University. “It gives students freedom to explore any idea … and funds and mentorship and a bunch of resources to work with. If someone wants to pursue something, there’s no reason they can’t do it. We are excited to be the first exit coming out of Sandbox and to be the first ones giving back.”

Sandbox aims to open pathways for a range of student innovators, who may have an idea, an experimental technology, or even a specific startup in mind. Sandbox connects students with educational experiences and mentoring, and provides up to $25,000 in funding. Sandbox collaborates with other MIT programs, such as the Martin Trust Center for MIT Entrepreneurship, the Venture Mentoring Service (VMS), and the Gordon Engineering Leadership Program. The program accepts three cohorts per year, in the spring, summer, and fall.

Participating entrepreneurs sign a voluntary pledge to give back if they find success outside the program. Escher plans to give back more than originally pledged. The gift means a lot, emotionally and practically, says Jinane Abounadi, executive director of Sandbox.

“On the emotional side, we’re so proud of the outcome and proud to have been part of the journey,” she says. On the practical side: “Our assumption is that when people are able to give back, they will, so the next generation of students has the same opportunities. That the first company that exits is generously giving back — it validates that assumption.”

Vice Chancellor Ian A. Waitz, now faculty director of Sandbox, who launched the program as dean of the School of Engineering, met Finman when the program was first being conceived, a couple of years before its launch. “In the interim, he sent me 43 emails asking, ‘When will it happen? We really need this!’” Waitz says. “His passion and enthusiasm are part of what drove the effort forward, the same way they have driven Escher forward. So, it is a wonderful coincidence that his team is the first Sandbox team to have an exit.”

Entrepreneur from the start

At MIT, Finman was deeply involved in the entrepreneurial ecosystem — including as a student advisor during the formation of Sandbox.

In 2014, he was on the founding team of StartMIT (then Start6), founded by Anantha Chandrakasan, the current dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, along with other students, faculty, and administrators. The Independent Activities Period program provides students and postdocs opportunities to participate in venture-building activities and attend events with alumni and Boston innovation leaders. Participants also work with mentors and take field trips to local startups to explore the culture.

StartMIT organizers noted Finman’s enthusiasm. When Abounadi started to put together a student advisory group for Sandbox in 2015, Finman’s name immediately came up from Waitz, who was a key organizer of StartMIT. “I got an email from Ian that told me to contact Ross because he was so passionate about entrepreneurship,” she says.

Finman provided Sandbox organizers with insights into what resources students may need to succeed. For Finman, the opportunity to help was indicative of MIT’s student-focused ecosystem. “MIT is always asking, ‘What do students need?’” he says.

Around the same time Sandbox launched, Finman and Hu were drafting startup ideas. The co-founders had met as undergraduates studying artificial intelligence and computer vision at Carnegie Mellon years before. After graduating, Hu worked as an engineer at Intel, while Finman enrolled at MIT, where he developed advanced computer vision algorithms for robots in the Computer Science and Artificial Intelligence Laboratory’s Marine Robotics Group.

In early 2016, the two decided to bring their combined computer vision experience to a then-emerging field: AR. Most technological development and funding was focused on virtual reality at the time, Finman says, while very few startups were developing AR technologies. “We decided to be contrarian,” he says.

Founding Escher that spring, they were accepted to Sandbox in the first summer cohort, with only a vague idea for advancing AR capabilities. Then, that July, “Pokémon Go” took the world by storm. “That’s when things really took off,” Finman says. “We said, ‘How do you take this to the next level?’”

In and out of Sandbox

Playing “Pokémon Go,” the two noticed a few lacking features: multiplayer and cross-platform capabilities, and the ability to save games.

In Sandbox, the team developed early proofs of concept. To fix the cross-platform issue, they developed a “unifying layer” that connects disparate devices to the same game. To provide multiplayer capabilities, they found a way to synchronize devices with low latency. Then they used algorithmic techniques to help each device recognize where, say, one Pokémon character is in relation to another in the real world.

The other issue — “a freakishly hard problem,” Finman says — was saving games. When players exit an AR game, they lose all data and need to start over. The startup developed algorithms that let devices always recognize the exact spot where a digital object had been in a physical space, each time a game is restarted.

But perhaps more importantly, Finman says, the co-founders in Sandbox worked on marketing the technology in an industry that was just getting its footing. “In early days, it was putting together demos of user interfaces for developers and gamers,” he says. “For the early markets that many MIT companies go after, you have to ask, ‘How do you get it out to customers?’”

That Sandbox work got Escher that fall into the National Science Foundation’s I-Corps Program, hosted on campus by the VMS. Over the course of the program, they interviewed 100 AR and game developers in about seven weeks, which really helped them understand the market.

In June 2017, Escher joined Y Combinator, a seed accelerator in Silicon Valley. That August, Apple released its ARKit for AR software developers. Then, the many AR limitations — such as lack of multiplayer and cross-platform capabilities — that Escher had been working on for well over a year became apparent to developers. That caught the eye of Niantic, which quickly scooped up the startup.

Now the Escher team of six joins Niantic’s AR development team, as Niantic gears up for the 2018 release of its next title, “Harry Potter: Wizards Unite.”

“[Niantic] has been a trailblazer of AR and getting people excited about AR. That’s why it’s such a good match for us,” Finman says, adding, “Because who doesn’t want to play wizard battles in the real world?”

For Abounadi, Escher’s story is a prime example of what Sandbox stands for. Escher entered the program with only an idea for a brand-new technology that had no definite application in an emerging industry. “We expect that a good portion of the ideas people work on in Sandbox will fit exactly that mold,” she says. “Because it’s MIT, and we have people pushing the boundaries on new technologies.”

Source: MIT News - CSAIL - Robotics - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robots - Artificial intelligence

Reprinted with permission of MIT News : MIT News homepage



Use the link at the top of the story to get to the original article.
3
New Users Please Post Here / Re: Spikes in neural networks
« Last post by 157239n on February 22, 2018, 08:54:19 am »
It could be a local minima problem, but then why would the network change at all later on? It stays idle for a while and then suddenly gets better. :-\
4
AI News / Re: Noam Chomsky Has Weighed In On A.I. Where Do You stand?
« Last post by keghn on February 22, 2018, 01:43:14 am »

My model will auto-generate its own language if no other already exists.
Did humans speak through cave art? Ancient drawings and language's origins:   
   
https://www.sciencedaily.com/releases/2018/02/180221122923.htm   

5
New Users Please Post Here / Re: Spikes in neural networks
« Last post by korrelan on February 22, 2018, 01:24:49 am »
Local minima problem?

 :)
6
Bot Conversations / Re: Young Alpha
« Last post by LOCKSUIT on February 22, 2018, 12:33:08 am »
Is this the order you got? Correct the list if not.



no
false
wrong
incorrect
true?
why?
never?
neither

yes
true
definitely
correct
either

maybe
fuzzy
could be
intermediate

ok
fair

questions/answers/remarks+randomness
7
General Robotics Talk / Re: They Are Coming For US!
« Last post by LOCKSUIT on February 21, 2018, 08:27:54 pm »
Forget that video LOL. Here's the REAL version!

https://www.wired.com/story/watch-a-human-try-to-fight-off-that-door-opening-robot-dog/

Lol, everyone (even me) gets upset over the guy with the hockey stick.
8
AI News / Re: Noam Chomsky Has Weighed In On A.I. Where Do You stand?
« Last post by keghn on February 21, 2018, 08:09:37 pm »
The database is built up in an unsupervised fashion. Three methods can be used (a rough sketch of the first one follows the list).
First) Look for two or more matching patterns at different parts of the raw data with Kolmogorov complexity algorithms:
https://en.wikipedia.org/wiki/Kolmogorov_complexity

Together with Frieze group patterns:
https://en.wikipedia.org/wiki/Frieze_group
http://www.scientificcomputing.com/article/2015/09/patterns-are-math-we-love-look

Second) Have an internal doodle board, draw something on it, and then go look for it in the data. A genetic algorithm evolves the doodles from simple to complex.

Third) Randomly set the weights in a very simple neural network, and then go see if it will detect something in the data. To see what it detects, a doodle board generates images in an evolving way until the detector NN activates. The detector NN is evolved from simple to complex. This type is an unsupervised generative adversarial neural network.
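As a rough illustration of the first method above, one common stand-in for Kolmogorov complexity is the compressed size of the data; the normalized compression distance built from it comes out small when two chunks of raw data share structure. A Processing/Java sketch (illustrative only, with made-up names, not part of the model described here):

Code:
import java.util.zip.Deflater;

// Compressed size as a crude upper bound on Kolmogorov complexity.
int compressedSize(byte[] data) {
  Deflater d = new Deflater(Deflater.BEST_COMPRESSION);
  d.setInput(data);
  d.finish();
  byte[] chunk = new byte[1024];
  int total = 0;
  while (!d.finished()) {
    total += d.deflate(chunk);
  }
  d.end();
  return total;
}

// Normalized compression distance: near 0 when a and b share structure,
// near 1 when they have little in common.
float ncd(byte[] a, byte[] b) {
  byte[] ab = new byte[a.length + b.length];
  System.arraycopy(a, 0, ab, 0, a.length);
  System.arraycopy(b, 0, ab, a.length, b.length);
  int ca = compressedSize(a);
  int cb = compressedSize(b);
  int cab = compressedSize(ab);
  return (cab - Math.min(ca, cb)) / (float) Math.max(ca, cb);
}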

Once a pattern is detected, a conscious focus can track it, manipulate it, and compare it with others. The detected pattern is converted into an internal symbol and stored in memory to build up a language of synesthesia symbols, and is also used as a programming code instruction.

The AGI will be running code from the very first time: initial starter code. But with no detected symbols yet, it is running a search program to look for them. For the AGI to run right it needs to use code along with symbols. New internal symbols will be used as new instructions in its own ever-growing code. So there will be a working memory where programs are built up and then jumped into like function calls. But there will be a back-step system in case the code crashes.
This code will be its soul, to be uploaded at some time in the future.

9
New Users Please Post Here / Spikes in neural networks
« Last post by 157239n on February 21, 2018, 07:49:45 pm »
Hi guys, I'm new here (literally created this account 5 minutes ago) and I want to ask a technical question about AI. I'm not sure whether this forum is suitable for technical questions or not, but here it is:

Context:
So I have been doing AI for a long time now. I'm just curious about it and have no intention for now to apply AI in any field. I have been testing different kinds of neural networks lately to see which work and which don't. I program all of these in Processing, which is based on Java, and I don't use any libraries like Tensorflow at all. Because of that, it'll be hard to give out the code, as you'd need a lot of time to comprehend it, so I'll just give the setup and the results I found here:

Network setup:
Goal: recognize the MNIST handwritten digits (really popular dataset in AI).
Architecture: plain feed forward network
Dimension: 28*28+1 neurons (input layer. Last input always equal to 1 in all samples to replicate the bias), 50 neurons (hidden layer), 10 neurons (output layer).
Correct answer format: if 0 is presented, the first neuron in the output layer should be 1 and all the others 0, and likewise for every other digit.
Activation function: sigmoid
Learning constant: 0.04 (plain backpropagation; a rough sketch of one training step follows this list).
Learning rule: Delta learning rule
Training samples: 1000
Batch size: 150 (the last batch has size 100)
Testing samples: 100
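For concreteness, one training step under this setup would look roughly like the sketch below (Processing-style, with made-up names and the bias folded into the extra always-1 input as described; this is just the textbook delta rule for one sample, not my actual code, and with batches you would accumulate the updates over the batch before applying them):

Code:
float eta = 0.04;  // learning constant

float sigmoid(float z) { return 1.0f / (1.0f + (float) Math.exp(-z)); }

// x: 785 inputs (784 pixels + constant 1), target: one-hot vector of 10,
// wHidden: 50x785 weights, wOut: 10x50 weights
void trainOne(float[] x, float[] target, float[][] wHidden, float[][] wOut) {
  int nIn = x.length, nHid = wHidden.length, nOut = wOut.length;

  // forward pass
  float[] h = new float[nHid];
  for (int j = 0; j < nHid; j++) {
    float z = 0;
    for (int i = 0; i < nIn; i++) z += wHidden[j][i] * x[i];
    h[j] = sigmoid(z);
  }
  float[] y = new float[nOut];
  for (int k = 0; k < nOut; k++) {
    float z = 0;
    for (int j = 0; j < nHid; j++) z += wOut[k][j] * h[j];
    y[k] = sigmoid(z);
  }

  // output-layer deltas for the quadratic cost: (y - t) * sigma'(z)
  float[] dOut = new float[nOut];
  for (int k = 0; k < nOut; k++) dOut[k] = (y[k] - target[k]) * y[k] * (1 - y[k]);

  // hidden-layer deltas, backpropagated through the output weights
  float[] dHid = new float[nHid];
  for (int j = 0; j < nHid; j++) {
    float s = 0;
    for (int k = 0; k < nOut; k++) s += wOut[k][j] * dOut[k];
    dHid[j] = s * h[j] * (1 - h[j]);
  }

  // gradient-descent weight updates
  for (int k = 0; k < nOut; k++)
    for (int j = 0; j < nHid; j++) wOut[k][j] -= eta * dOut[k] * h[j];
  for (int j = 0; j < nHid; j++)
    for (int i = 0; i < nIn; i++) wHidden[j][i] -= eta * dHid[j] * x[i];
}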

Evaluation functions used (a rough code sketch of these follows the list):
Error: Sum over all samples(Sum over all output neurons(  abs(prediction-answer)  ))
Confidence: Sum over all samples(Sum over all output neurons(  min(1-answer, answer)  ))/number of samples. The more the network is confident that it is correct, the lower this value is.
Accuracy: Sum over all samples(  kronecker delta(max(all output neurons), max(all correct answer neurons))  )/number of samples
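In code, the three metrics above would look roughly like the sketch below (Processing-style, made-up names; note two reading assumptions: "answer" inside the confidence formula is taken to mean the network's output, since that is what makes "lower means more confident" work, and the kronecker delta is taken to compare argmax positions):

Code:
// predictions[s][k]: network output for sample s, neuron k
// targets[s][k]: one-hot correct answer for sample s, neuron k
float errorMetric(float[][] predictions, float[][] targets) {
  float sum = 0;
  for (int s = 0; s < predictions.length; s++)
    for (int k = 0; k < predictions[s].length; k++)
      sum += abs(predictions[s][k] - targets[s][k]);
  return sum;
}

// Lower = more confident: each output is close to a hard 0 or 1.
float confidenceMetric(float[][] predictions) {
  float sum = 0;
  for (int s = 0; s < predictions.length; s++)
    for (int k = 0; k < predictions[s].length; k++)
      sum += min(1 - predictions[s][k], predictions[s][k]);
  return sum / predictions.length;
}

// Fraction of samples where the most active output neuron matches the label.
float accuracyMetric(float[][] predictions, float[][] targets) {
  int correct = 0;
  for (int s = 0; s < predictions.length; s++)
    if (argMax(predictions[s]) == argMax(targets[s])) correct++;
  return correct / (float) predictions.length;
}

int argMax(float[] v) {
  int best = 0;
  for (int i = 1; i < v.length; i++) if (v[i] > v[best]) best = i;
  return best;
}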

Extension evaluation functions:
Confidence: confidence on the first batch of the training samples
Real confidence: confidence on the testing samples
Accuracy: accuracy on the first batch of the training samples
Real accuracy: accuracy on the testing samples

These are the graphs I collected from running this:



This is a closer look at the beginning:



Observations:
- Accuracy and real accuracy graphs closely match each other in shape
- Confidence and real confidence graphs closely match each other in shape
- Accuracy keeps improving over time, but most of the time it's really quiet and unchanging and increases only at specific points (let's call them spikes)
- When a spike occurs, confidence increases dramatically, accuracy increases by a bit and error nudges a bit and slopes downward faster, eventually reaching a new, lower equilibrium.

Further experiments:
- The spikes occur very randomly and you can't really tell one is coming up
- Spikes stop happening when you reach an accuracy of around 0.96 (96% correct)
- Sometimes the error drops sharply to 0 and rises back up to its nominal value just before a spike happens.
- When I run it with a learning constant of 0.02, the spikes don't appear at all and the error races towards 0.96 right away. A summary graph of it can be found here: http://157239n.com/frame%203.png
- At first I suspected the spikes were due to the network experiencing learning slowdown, owing to the nature of the sigmoid activation function and the quadratic cost function. That can be dealt with using a cross-entropy cost function, but then why don't the spikes appear with a learning constant of 0.02, if the network is still stuck with learning slowdown? I am working on implementing the cross-entropy cost function to see what happens, but in the meantime that's all the information I've got (the two output-layer gradients are sketched below).
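For reference, the textbook difference between the two costs shows up only in the output-layer delta; with the quadratic cost the sigma'(z) factor is what causes the slowdown when an output saturates, and cross-entropy cancels it. A rough Processing-style sketch (made-up names, just illustrating the standard formulas):

Code:
// Output-layer delta for one neuron, given sigmoid output y and target t.

// Quadratic cost C = 0.5*(y - t)^2 keeps the sigma'(z) = y*(1 - y) factor,
// so the gradient nearly vanishes when y saturates near 0 or 1.
float deltaQuadratic(float y, float t) {
  return (y - t) * y * (1 - y);
}

// Cross-entropy cost C = -[t*ln(y) + (1 - t)*ln(1 - y)] cancels that factor,
// so the gradient stays large even when the neuron is saturated and wrong.
float deltaCrossEntropy(float y, float t) {
  return y - t;
}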

My question:
So my question is: how can those spikes be explained? What are those spikes anyway? What causes them? Can I somehow trigger a spike so that the network can continue to learn?

Thanks in advance if anyone knows about this. Let me know if you have any questions.
10
Bot Conversations / Re: Young Alpha
« Last post by 8pla.net on February 21, 2018, 05:11:34 pm »
Ranch,

Your feedback helped train it on more rules. It turns out handling "depends" can be applied as a synergy. You may notice the results have since become slightly more consistent.
