Recent Posts

1
Posting this link once more because everyone should see it; it's such a hallmark of the latest AI work that we can now try out:
https://deepai.org/ai-image-processing
2
Robotics News / Gauging language proficiency through eye movement
« Last post by Tyler on Today at 12:00:53 pm »
Gauging language proficiency through eye movement
23 May 2018, 4:59 am

A study by MIT researchers has uncovered a new way of telling how well people are learning English: tracking their eyes.

That’s right. Using data generated by cameras trained on readers’ eyes, the research team has found that patterns of eye movement — particularly how long people’s eyes rest on certain words — correlate strongly with performance on standardized tests of English as a second language.

“To a large extent [eye movement] captures linguistic proficiency, as we can measure it against benchmarks of standardized tests,” says Yevgeni Berzak, a postdoc in MIT’s Department of Brain and Cognitive Sciences (BCS) and co-author of a new paper outlining the research. He adds: “The signal of eye movement during reading is very rich and very informative.”

Indeed, the researchers even suggest the new method has potential use as a testing tool. “It has real potential applications,” says Roger Levy, an associate professor in BCS and another of the study’s co-authors.

The paper, “Assessing Language Proficiency from Eye Movements in Reading,” is being published in the Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. The authors are Berzak, a postdoc in the Computational Psycholinguistics Group in BCS; Boris Katz, a principal research scientist and head of the InfoLab Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL); and Levy, who also directs the Computational Psycholinguistics Lab in BCS.

The illusion of continuity

The study delves into a phenomenon of reading that we may never notice, no matter how much we read: Our eyes do not move continuously along a string of text, but instead fixate on particular words for roughly 200 to 250 milliseconds at a time. We also make jumps, called saccades, from one word to another that last about 1/20 of a second.

“Although you have a subjective experience of a continuous, smooth pass over text, that’s absolutely not what your eyes are doing,” says Levy. “Your eyes are jumping around, mostly forward, sometimes backward. Your mind stitches together a smooth experience. … It’s a testimony to the ability of the mind to create illusions.”
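As a rough illustration of what that jumping around looks like in data (this is general eye-tracking background, not the study's own pipeline): gaze recordings are commonly segmented into fixations and saccades with a simple velocity threshold, as in the minimal Python sketch below. The sample rate, threshold, and data layout here are illustrative assumptions.

```python
def detect_fixations(samples, hz=1000, vel_thresh=100.0):
    """samples: (x_deg, y_deg) gaze positions recorded at a fixed rate (hz).
    Returns (start_ms, end_ms) spans where angular velocity stays below
    vel_thresh degrees/second, i.e. candidate fixations."""
    dt_s = 1.0 / hz                                   # seconds between samples
    fixations, start = [], None
    for i in range(1, len(samples)):
        dx = samples[i][0] - samples[i - 1][0]
        dy = samples[i][1] - samples[i - 1][1]
        vel = (dx * dx + dy * dy) ** 0.5 / dt_s       # degrees per second
        if vel < vel_thresh:                          # slow: inside a fixation
            if start is None:
                start = (i - 1) * dt_s * 1000.0
        else:                                         # fast: a saccade
            if start is not None:
                fixations.append((start, i * dt_s * 1000.0))
                start = None
    if start is not None:
        fixations.append((start, len(samples) * dt_s * 1000.0))
    return fixations

# Example: 300 ms of steady gaze, one jump, then 250 ms of steady gaze.
gaze = [(1.0, 1.0)] * 300 + [(5.0, 1.0)] * 250
print(detect_fixations(gaze))   # two spans, roughly 300 ms and 250 ms
```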

But if you are learning a new language, your eyes may dwell on particular words for longer periods of time, as you try to comprehend the text. The particular pattern of eye movement, for this reason, can reveal a lot about comprehension, at least when analyzed in a clearly defined context.

To conduct the study, the researchers used a dataset of eye-movement records from earlier work by Berzak. The dataset covers 145 students of English as a second language, divided almost evenly among four native languages — Chinese, Japanese, Portuguese, and Spanish — as well as 37 native English speakers.

The readers were given 156 sentences to read, half of which were part of a “fixed test” in which everyone in the study read the same sentences. The recordings enabled the research team to focus intensively on fixation durations — the length of time readers’ eyes rested on particular words.

The research team called the set of metrics they used the “EyeScore.” After evaluating how it correlated with the Michigan English Test (MET) and the Test of English as a Foreign Language (TOEFL), they concluded in the paper that the EyeScore method produced “competitive results” with the standardized tests, “further strengthening the evidence for the ability of our approach to capture language proficiency.”

As a result, the authors write, the new method is “the first proof of concept for a system which utilizes eye tracking to measure linguistic ability.”
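The paper's actual EyeScore features and model are more involved than can be summarized here, so the following is only a hedged sketch of the general shape of such an approach: aggregate each reader's per-word fixation times into a small feature vector, fit a regression against an external proficiency score, and evaluate by cross-validation. The feature choices and placeholder data are assumptions, not the authors' method. Requires numpy and scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def reader_features(word_fix_ms):
    """Summary statistics over one reader's per-word total fixation times."""
    d = np.asarray(word_fix_ms, dtype=float)
    return [d.mean(), d.std(), np.median(d), (d == 0).mean()]  # last: skip rate

rng = np.random.default_rng(0)
# Placeholder data standing in for the 145 readers described above.
X = np.array([reader_features(rng.gamma(2.0, 120.0, size=600))
              for _ in range(145)])
y = rng.normal(600.0, 50.0, size=145)   # placeholder test-like scores

# Cross-validated R^2 of a ridge regression from gaze features to test score.
print(cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2"))
```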

Sentence by sentence

Other scholars say the study is an interesting addition to the research literature on the subject.

“The method [used in the study] is very innovative and — in my opinion — holds much promise for using eye-tracking technology to its full potential,” says Erik Reichle, head of the Department of Psychology at Macquarie University in Sydney, Australia, who has conducted many eye-movement tracking experiments. Reichle adds that he suspects the paper “will have a big impact in a number of different fields, including those more directly related to second-language learning.”

As the researchers see it, the current study is just one step on a longer journey of exploration about the interactions of language and cognition.

As Katz says, “The bigger question is, how does language affect your brain?” Given that we only began processing written text within the last several thousand years, he notes, our reading ability is an example of the “amazing plasticity” of the brain. Before too long, he adds, “We could actually be in a position to start answering these questions.”

Levy, for his part, thinks that it may be possible to make these eye tests about reading more specific. Rather than evaluating reader comprehension over a corpus of 156 sentences, as the current study did, experts might be able to render more definitive judgments about even smaller strings of text.

“One thing that we would hope to do in the future that we haven’t done yet, for example, is ask, on a sentence-by-sentence basis, to what extent can we tell how well you understood a sentence by the eye movements you made when you read it,” Levy says. “That’s an open question nobody’s answered. We hope we might be able to do that in the future.”

The study was supported, in part, by MIT’s Center for Brains, Minds, and Machines, through a National Science Foundation grant.

Source: MIT News, Computer Science and Artificial Intelligence Laboratory (CSAIL)

Reprinted with permission of MIT News: MIT News homepage



Use the link at the top of the story to get to the original article.
3
XKCD Comic / XKCD Comic : Business Update
« Last post by Tyler on Today at 12:00:53 pm »
Business Update
23 May 2018, 5:00 am

Our customers keep sending us their personal information, even though we've repeatedly asked them to stop. The EU told me I'm the heir to some ancient European throne that makes me exempt from the GDPR, but we should probably still try to fix that.

Source: xkcd.com

4
Home Made Robots / Re: soldering motor attachments
« Last post by ranch vermin on Today at 08:59:07 am »
yeh Art, probably not a good idea to solder the motors together, but use of lead could be in the design if it wasn't used at a critical bearing point.

I'm not sure what to do; maybe the plastic is a better bet. I think making a good cube between the 90-degree motors would be strong enough, then you can't shake the robot apart.

Bit by bit, the body is coming together. The brain is the more important thing to get; it's what all the robots you see today are missing.
5
Home Made Robots / Re: soldering motor attachments
« Last post by Art on Today at 02:58:38 am »
There are many searchable videos/articles on converting recyclable plastics like milk containers and such to solid, usable plastic items that can be repurposed for a variety of things.

Solder is nothing more than a bridge between two metallic points in order to maintain or allow conductivity, not structure.

Many of those low-cost camera gimbals might do the trick for your needs.

Lastly, there is the ever-trusty J.B. Weld here in the States, which even turns clear after being mixed, holds with around 4,400 psi of strength, and yet remains fileable, sandable, drillable, and paintable. Fairly cheap as well!

Good luck on your project, but no structural solder; a lot of it is still made with lead.
6
General Project Discussion / Re: Wish Lists
« Last post by ranch vermin on May 23, 2018, 12:46:11 pm »
confibularity. pack of fixes.
7
Home Made Robots / Re: soldering motor attachments
« Last post by ranch vermin on May 23, 2018, 12:43:19 pm »
Maybe if you bake sugar bonds in an oven... :)

There might be some way to do it; sometimes things only work if they are done just right (like mixing sugar with something and getting something more than a hard ginger nut). Ginger nuts are pretty hard, but there could be a way to get them tougher; it takes many iterations and experiments. =)
8
General Project Discussion / Re: Wish Lists
« Last post by Art on May 23, 2018, 12:42:34 pm »
@ infurl,

"...That way I can seamlessly merge all the tuples, and hence all the data, into a single body of knowledge, complete with provenance and confidence levels...."

Perhaps you have seen that Blackadder sketch where Dr. Johnson announces his compiled and completed English Dictionary. Completed, that is, until Blackadder offers his own "contrafibularity," causing the wise doctor to pencil in this newest word. ;)

Will it ever be finished? Probably not, but I do admire your goal and determination! Best of luck in this endeavor!
9
General Chatbots and Software / Re: KorrBot
« Last post by LOCKSUIT on May 23, 2018, 12:28:56 pm »
Yep, that's better in all ways. Human speech only developed not long ago; before that, visual sentences were all that animals used. "Hammer smashing crystals." There's no way to interpret it wrongly. It's universal. First you teach it, then it discovers more.

:)
10
General Chatbots and Software / Re: KorrBot
« Last post by ranch vermin on May 23, 2018, 12:24:05 pm »
wow, first time I've seen this. Done a very good job showing off the power of assignment.

This would be cool for a supervised learning/implanting technique.

What I would think to do if I was making this would be to combine it with machine vision; then the robot could search through the video for the labels, and maybe you could get the robot to have "implantable methods" that are just written to it in plain English (see the toy sketch at the end of this post).

Cause how else would you do it? You need labels or you can't tell the robot anything; otherwise it's just stuck in its primitive goal, or you're doing basically this but in a more primitive way.

Why can't you just tell the robot and then it learns? Because there's the problem where it can't distinguish the language labels from the actual things they associate with, and this would skip past that issue.
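Something like this toy Python sketch, say (everything here is made up for illustration, not KorrBot's actual code): keep a table of what the vision system currently detects, and resolve the words of a plain-English instruction against it.

```python
# Toy sketch of grounding language labels in vision (illustrative only).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. output of an object detector on a video frame
    confidence: float

def ground_instruction(instruction, detections):
    """Return the instruction words that match something currently in view."""
    seen = {d.label.lower() for d in detections if d.confidence > 0.5}
    return [w for w in instruction.lower().split() if w in seen]

frame = [Detection("hammer", 0.9), Detection("crystal", 0.8)]
print(ground_instruction("smash the crystal with the hammer", frame))
# -> ['crystal', 'hammer']
```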
