Recent Posts

1
General Project Discussion / Re: Logos formal language specification
« Last post by Zero on Today at 09:38:00 am »
Quote
Thank you, Zero, for the kind words :)

Code:

«(a ‹⋅› b) ‹+› (a ‹⋅› c)»

(higher (a (lower ⋅) b) (lower +) (a (lower ⋅) c) )


actually it is:
Code:

«(a ‹⋅› b) ‹+› (a ‹⋅› c)»

higher((a (lower ⋅) b) (lower +) (a (lower ⋅) c) )


Sorry Ivan, can we go over this again please? I didn't understand. I'm interested in the potential s-expression translation one could make of an a-expression. Why is "higher" outside of the parens?
2
Want to learn how to train an artificial intelligence model? Ask a friend.
25 June 2019, 6:50 pm

The MIT Machine Intelligence Community began with a few friends meeting over pizza to discuss landmark papers in machine learning. Three years later, the undergraduate club boasts 500 members, an active Slack channel, and an impressive lineup of student-led reading groups and workshops meant to demystify machine learning and artificial intelligence (AI) generally. This year, MIC and MIT Quest for Intelligence joined forces to advance their common cause of making AI tools accessible to all.

Starting last fall, the MIT Quest opened its offices to MIC members and extended access to IBM and Google-donated cloud credits, providing a boost of computing power to students previously limited to running their AI models on desktop machines loaded with extra graphics processors. The MIT Quest and MIC are now collaborating on a host of projects, independently and through MIT’s Undergraduate Research Opportunities Program (UROP).

“We heard about their mission to spread machine learning to all undergrads and thought, ‘That’s what we’re trying to do — let’s do it together!’” says Joshua Joseph, chief software engineer with the MIT Quest Bridge.

A makerspace for AI

U.S. Army ROTC students Ian Miller and Rishi Shah came to MIC for the free cloud credits, but stayed for the workshop on neural computing sticks. A compute stick allows mobile devices to do image processing on the fly, and when the cadets learned what one could do, they knew their idea for a portable computer vision system would work.

“Without that, we’d have to send images to a central place to do all this computing,” says Miller, a rising junior. “It would have been a logistical headache.”

Built in two months, for $200, their wallet-sized device is designed to plug into a tablet strapped to an Army soldier’s chest and scan the surrounding area for cars and people. With more training, they say, it could learn to spot cellphones and guns. In May, the cadets demo'd their device at MIT’s Soldier Design Competition and were invited by an Army sergeant to visit Fort Devens to continue working on it.

Rose Wang, a rising senior majoring in computer science, was also drawn to MIC by the free cloud credits, and a chance to work on projects with Quest and other students. This spring, she used IBM cloud credits to run a reinforcement learning model that’s part of her research with MIT Professor Jonathan How, training robot agents to cooperate on tasks that involve limited communication and information. She recently presented her results at a workshop at the International Conference on Machine Learning.

“It helped me try out different techniques without worrying about the compute bottleneck and running out of resources,” she says.

Improving AI access at MIT

The MIC has launched several AI projects of its own. The most ambitious is Monkey, a container-based, cloud-native service that would allow MIT undergraduates to log in and train an AI model from anywhere, tracking the training as it progresses and managing the credits allotted to each student. On a Friday afternoon in April, the team gathered in a Quest conference room as Michael Silver, a rising senior, sketched out the modules Monkey would need.

As Silver scrawled the words "Docker Image Build Service" on the board, the student assigned to research the module apologized. “I didn’t make much progress on it because I had three midterms!” he said.

The planning continued, with Steven Shriver, a software engineer with the Quest Bridge, interjecting bits of advice. The students had assumed the container service they planned to use, Docker, would be secure. It isn’t.

“Well, I guess we have another task here,” said Silver, adding the word “security” to the white board.

Later, the sketch would be turned into a design document and shared with the two UROP students helping to execute Monkey. The team hopes to launch sometime next year.

“The coding isn’t the difficult part,” says UROP student Amanda Li, a member of MIC Dev-Ops. “It’s exploring the server side of machine learning — Docker, Google Cloud, and the API. The most important thing I’ve learned is how to efficiently design and pipeline a project as big as this.”

Silver knew he wanted to be an AI engineer in 2016, when the computer program AlphaGo defeated the world’s reigning Go champion. As a senior at Boston University Academy, Silver worked on natural language processing in the lab of MIT Professor Boris Katz, and has continued to work with Katz since coming to MIT. Seeking more coding experience, he left HackMIT, where he had been co-director, to join MIC Dev-Ops.

“A lot of students read about machine learning models, but have no idea how to train one,” he says. “Even if you know how to train one, you’d need to save up a few thousand dollars to buy the GPUs to do it. MIC lets students interested in machine learning reach that next level.”

Conceived by MIC members, a second project is focused on making AI research papers posted on arXiv easier to explore. Nearly 14,000 academic papers are uploaded each month to the site, and although papers are tagged by field, drilling into subtopics can be overwhelming.

Wang, for one, grew frustrated while doing a basic literature search on reinforcement learning. “You have a ton of data and no effective way of representing it to the user,” she says. “It would have been useful to see the papers in a larger context, and to explore by number of citations or their relevance to each other.”

A third MIC project focuses on crawling MIT’s hundreds of listservs for AI-related talks and events to populate a Google calendar. The tool will be closely patterned after an app Silver helped build during MIT’s Independent Activities Period in January. Called Dormsp.am, the app classifies listserv emails sent to MIT undergraduates and plugs them into a calendar-email client. Students can then search for events by day or by a color-coded topic, such as tech, food, or jobs. Once Dormsp.am launches, Silver will adapt it to search for and post AI-related events at MIT to an MIC calendar.

Silver says the team spent extra time on the user interface, taking a page from MIT Professor Daniel Jackson’s Software Studio class. “This is an app that can live or die on its usability, so the front end is really important,” he says.  

Wang is now collaborating with Moin Nadeem, MIC’s outgoing president, to build the visualization tool. It’s exactly the kind of hands-on experience MIC was intended to provide, says Nadeem, a rising senior. “Students learn fundamental concepts in class but don’t know how to implement them,” he says. “I’m trying to build what freshman me would have liked to have had: a community of people excited to do interesting stuff with machine learning.”

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News : MIT News homepage



3
Human Experience and Psychology / Re: What thought next
« Last post by Zero on June 26, 2019, 10:38:36 am »
Quote
Yup! So do I! It'll require some thought, but it's just too good to give up on. I'm gonna do my best to incorporate it into my project.

We should keep in mind that while it might be part of the implementation, it could also just be an a posteriori observation one would make when looking at a well-designed system. Wow, this sentence is weird. Am I being understandable?

Quote
any one philosophy, or modus operandi, doesn't apply well to all situations

Oh yes, I've been feeling this very often during my research. I think I finally got something that does apply well to anything, at least the way I see it. I strongly believe there isn't one single AGI solution, but many ways it can be achieved.

Quote
First, you should be careful of the word "blackboard" because there exists a variation on expert systems called a blackboard system (https://en.wikipedia.org/wiki/Blackboard_system), so I'm not sure if you're referring to a blackboard system or the common notion of a blackboard.

This wikipedia article describes exactly what I'm talking about. A blackboard.
Please don't presume I don't know what I'm talking about.
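A minimal sketch of the blackboard-system idea in Python (purely illustrative; the knowledge sources and data below are made up, not taken from anyone's actual project): a shared data store that independent "knowledge sources" read from and write to, coordinated by a simple controller.

Code:

# Minimal blackboard-system sketch (illustrative only): a shared data store
# that independent "knowledge sources" inspect and update in turns.

blackboard = {"raw_text": "the bee stung me", "tokens": None, "topic": None}

def tokenizer(bb):
    # Contributes tokens once raw text is available.
    if bb["raw_text"] and bb["tokens"] is None:
        bb["tokens"] = bb["raw_text"].split()

def topic_guesser(bb):
    # Contributes a topic once tokens are available.
    if bb["tokens"] and bb["topic"] is None:
        bb["topic"] = "insects" if "bee" in bb["tokens"] else "unknown"

knowledge_sources = [tokenizer, topic_guesser]

# A simple controller: keep letting sources act until nothing changes.
changed = True
while changed:
    before = dict(blackboard)
    for source in knowledge_sources:
        source(blackboard)
    changed = blackboard != before

print(blackboard)   # tokens and topic have been filled in cooperatively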

Quote
Close, but not quite.

I know, I wasn't suggesting that we were doing it identically. On the contrary! You do FOA with a single node; I do it differently because I don't think one node is enough to represent the current state of mind. What do you think?

Quote
P.S.--This system builds hierarchies and free will into it. (1) Hierarchies: Because interrupts will inevitably occur (hunger, restlessness, etc.), this keeps the system motivated to keep changing its state in order to survive. Survival is the #1 goal. (2) Free will: In some search modes, such as free association or random selections between equally appealing alternative actions, the system's behavior is not completely predictable. This is not a deterministic system.

Do you consider that free will is only a result of randomness?
4
General Project Discussion / Re: Logos formal language specification
« Last post by Zero on June 25, 2019, 09:26:59 pm »
Remaking Logo in Logos would be cool. I love Logo.
6
XKCD Comic / XKCD Comic : Motivated Reasoning Olympics
« Last post by Tyler on June 25, 2019, 12:00:41 pm »
Motivated Reasoning Olympics
24 June 2019, 5:00 am



Source: xkcd.com

7
General Project Discussion / Re: Logos formal language specification
« Last post by goaty on June 25, 2019, 09:47:00 am »
logo is a cool name,  trademark it!  :D

.
8
General Project Discussion / Re: Logos formal language specification
« Last post by ivan.moony on June 25, 2019, 08:05:09 am »
Logos is merely a translator from whatever to whatever. That's the definition of a term rewriting system. You can imagine translating natural language commands into sequences that output electricity of the proper strength on the proper wires, needed to move eyes, head, limbs, ... Moreover, if logic is involved, we can detect contradictions (usually derivations of the form `A /\ ~A`), indicating that what we are translating is a falsehood / lie.
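To make the term-rewriting idea concrete, here is a minimal sketch in Python (illustrative only, not how Logos itself is implemented; the term encoding and rule format are hypothetical). Terms are nested tuples, a rule maps a pattern to a replacement, and a single logic rule rewrites any `A /\ ~A` shape into a contradiction marker.

Code:

# Minimal term-rewriting sketch (illustrative only; not how Logos is built).
# Terms are nested tuples like ("and", "A", ("not", "A")); rule variables
# are strings starting with "?".

def match(pattern, term, bindings=None):
    """Return variable bindings if pattern matches term, else None."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings and bindings[pattern] != term:
            return None
        bindings[pattern] = term
        return bindings
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(template, bindings):
    """Fill a rule's right-hand side with the matched bindings."""
    if isinstance(template, str) and template.startswith("?"):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite_once(term, rules):
    """Apply the first rule that matches the term or one of its subterms."""
    for lhs, rhs in rules:
        b = match(lhs, term)
        if b is not None:
            return substitute(rhs, b), True
    if isinstance(term, tuple):
        for i, sub in enumerate(term):
            new_sub, changed = rewrite_once(sub, rules)
            if changed:
                return term[:i] + (new_sub,) + term[i + 1:], True
    return term, False

# One logic rule: A /\ ~A rewrites to a contradiction marker.
RULES = [(("and", "?a", ("not", "?a")), "CONTRADICTION")]

term = ("and", "wet", ("not", "wet"))
changed = True
while changed:
    term, changed = rewrite_once(term, RULES)
print(term)  # -> CONTRADICTION, i.e. the translated statement is a falsehood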
9
Thanks for the replies.

So stepper motors can already step a nanometre?     I spose that's how they make standard micrometres…  ???
 (as in mechanically, with cogs,  not just purely magnetic.)

.
10
Human Experience and Psychology / Re: What thought next
« Last post by AndyGoode on June 25, 2019, 12:49:26 am »
Quote
Oversimplifying it, there's a "mind blackboard" containing graphs of current thoughts. The content of this blackboard, I believe, corresponds to your single-node FOA.

Close, but not quite.

First, you should be careful of the word "blackboard" because there exists a variation on expert systems called a blackboard system (https://en.wikipedia.org/wiki/Blackboard_system), so I'm not sure if you're referring to a blackboard system or the common notion of a blackboard.

Either way, I am using a symmetric directed graph (https://en.wikipedia.org/wiki/Directed_graph) to represent knowledge in general, where each node might be a neural network node, semantic net node, tree node, digraph node, or whatever, and represents a specific concept. That entire graph would represent whatever your system has in its memory, and my FOA is a single highlighted node from that graph that represents the single concept that the system is currently thinking about. Only one node can be the FOA at any point in time, and I'm assuming that the next FOA can be only a node that has a link to the current FOA. It's sort of like a lighted marquee, where the changing selection of which light is lit can give the illusion of motion through the network.
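A minimal sketch of that marquee-style movement in Python (purely illustrative; the graph, concept names, and random selection rule below are made up, not taken from the author's actual system): a graph stored as adjacency lists, with the single FOA allowed to jump only to a neighbour of the current node.

Code:

import random

# A tiny concept graph stored as adjacency lists (hypothetical concepts).
# The FOA is a single highlighted node; it may only jump to a neighbour
# of the current node, like one lit bulb moving along a marquee.
graph = {
    "bee":    ["sting", "flower"],
    "sting":  ["pain"],
    "flower": ["bee", "garden"],
    "pain":   ["bee"],
    "garden": ["flower"],
}

foa = "flower"                      # the node currently being thought about
for step in range(5):
    neighbours = graph.get(foa, [])
    if not neighbours:              # no outgoing links: nowhere to jump
        break
    # Free-association mode: choose randomly among equally appealing
    # neighbours, so the trajectory is not completely predictable.
    foa = random.choice(neighbours)
    print(step, "FOA is now:", foa)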

In the following diagram you can see a single FOA change location across three time steps. The graph stays the same, and is mostly independent of what is going on with the FOA (unless maybe you suddenly remove part of the graph where the FOA was active, in which case I don't know what would happen):



P.S.--This system has hierarchies and free will built into it. (1) Hierarchies: Because interrupts will inevitably occur (hunger, restlessness, etc.), this keeps the system motivated to keep changing its state in order to survive. Survival is the #1 goal. (2) Free will: In some search modes, such as free association or random selections between equally appealing alternative actions, the system's behavior is not completely predictable. This is not a deterministic system.

I also forgot to mention that when the FOA jumps to a different node, that new FOA can be in a different layer, such as in the Real layer, such as if a bee sting causes an interrupt during the solution of a problem of lesser importance in the Virtual layer.
