Recent Posts

1
Robotics News / Re: Deep learning with point clouds
« Last post by LOCKSUIT on Today at 05:08:20 PM »
Quote from ivan.moony: Makes me wonder how creatures that map the world with ultrasound have a completely different higher-order sense compared to human-like vision. Bats, for example. We can only imagine what it feels like to have a surrounding 3D map footprint inside the brain, processed merely through the ears. It seems like a completely different sense with a complicated processing algorithm. Amazing Nature...

https://www.youtube.com/watch?v=m8liMvCKeiQ
2
Robotics News / Re: Deep learning with point clouds
« Last post by ivan.moony on Today at 01:25:17 PM »
Makes me wonder how creatures that map the world with ultrasound have a completely different higher-order sense compared to human-like vision. Bats, for example. We can only imagine what it feels like to have a surrounding 3D map footprint inside the brain, processed merely through the ears. It seems like a completely different sense with a complicated processing algorithm. Amazing Nature...
3
Robotics News / Deep learning with point clouds
« Last post by Tyler on Today at 12:04:32 PM »
Deep learning with point clouds
21 October 2019, 5:10 pm

If you’ve ever seen a self-driving car in the wild, you might wonder about that spinning cylinder on top of it.

It’s a “lidar sensor,” and it’s what allows the car to navigate the world. By sending out pulses of infrared light and measuring the time it takes for them to bounce off objects, the sensor creates a “point cloud” that builds a 3D snapshot of the car’s surroundings.
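
As a rough illustration (not from the article), here is a minimal Python sketch of how idealized time-of-flight returns become 3D points; real sensors add per-beam calibration and noise filtering:

import numpy as np

C = 299_792_458.0  # speed of light in m/s

def returns_to_points(tof, azimuth, elevation):
    """tof: round-trip pulse times (s); angles in radians; returns (N, 3)."""
    r = C * tof / 2.0                          # one-way range per pulse
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)        # the "point cloud"

# Two hypothetical returns: roughly 15 m and 18 m targets.
points = returns_to_points(
    np.array([100e-9, 120e-9]),
    np.radians([0.0, 45.0]),
    np.radians([0.0, 2.0]),
)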

Making sense of raw point-cloud data is difficult, and before the age of machine learning it traditionally required highly trained engineers to tediously specify which qualities they wanted to capture by hand. But in a new series of papers out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers show that they can use deep learning to automatically process point clouds for a wide range of 3D-imaging applications.

“In computer vision and machine learning today, 90 percent of the advances deal only with two-dimensional images,” says MIT Professor Justin Solomon, who was senior author of the new series of papers spearheaded by PhD student Yue Wang. “Our work aims to address a fundamental need to better represent the 3D world, with application not just in autonomous driving, but any field that requires understanding 3D shapes.”

Most previous approaches haven’t been especially successful at capturing the patterns from data that are needed to get meaningful information out of a bunch of 3D points in space. But in one of the team’s papers, they showed that their “EdgeConv” method of analyzing point clouds using a type of neural network called a dynamic graph convolutional neural network allowed them to classify and segment individual objects.
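
The core EdgeConv operation can be sketched compactly. The following PyTorch snippet (illustrative sizes and a simplified single-layer MLP, not the authors' code) builds a k-nearest-neighbor graph, forms edge features from each point and its neighbor offsets, and max-pools over the neighbors:

import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    """Simplified single EdgeConv layer; k and feature sizes are illustrative."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                      # x: (N, in_dim), one point cloud
        dists = torch.cdist(x, x)              # pairwise distances (N, N)
        idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]  # drop self
        neighbors = x[idx]                     # (N, k, in_dim)
        center = x.unsqueeze(1).expand_as(neighbors)
        edges = torch.cat([center, neighbors - center], dim=-1)
        return self.mlp(edges).max(dim=1).values  # max-pool over neighbors

features = EdgeConv(3, 64)(torch.randn(1024, 3))  # (1024, 64) per-point features

In the full dynamic graph CNN, the k-NN graph is recomputed in feature space at each layer, which is what lets the network capture the hierarchical patterns Kehl describes.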

“By building ‘graphs’ of neighboring points, the algorithm can capture hierarchical patterns and therefore infer multiple types of generic information that can be used by a myriad of downstream tasks,” says Wadim Kehl, a machine learning scientist at Toyota Research Institute who was not involved in the work.

In addition to developing EdgeConv, the team also explored other specific aspects of point-cloud processing. For example, one challenge is that most sensors change perspectives as they move around the 3D world; every time we take a new scan of the same object, its position may be different than the last time we saw it. To merge multiple point clouds together into a single detailed view of the world, you need to align multiple 3D points in a process called “registration.”

Registration is vital for many forms of imaging, from satellite data to medical procedures. For example, when a doctor has to take multiple magnetic resonance imaging scans of a patient over time, registration is what makes it possible to align the scans to see what’s changed.

“Registration is what allows us to integrate 3D data from different sources into a common coordinate system,” says Wang. “Without it, we wouldn’t actually be able to get as meaningful information from all these methods that have been developed.”
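
For intuition, classical rigid registration reduces to a well-known linear-algebra step once correspondences are known: the SVD-based Kabsch/Procrustes solution. A minimal NumPy sketch with made-up data:

import numpy as np

def kabsch(src, dst):
    """src, dst: (N, 3) corresponding points; returns rotation R, translation t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

# Hypothetical check: a cloud shifted by a known translation is recovered.
src = np.random.rand(100, 3)
dst = src + np.array([1.0, 0.0, 0.5])
R, t = kabsch(src, dst)                        # R ~ identity, t ~ (1, 0, 0.5)

The hard part in practice is finding the correspondences in the first place, which is exactly what the learned methods below address.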

Solomon and Wang’s second paper demonstrates a new registration algorithm called “Deep Closest Point” (DCP) that was shown to better find a point cloud’s distinguishing patterns, points, and edges (known as “local features”) in order to align it with other point clouds. This is especially important for such tasks as enabling self-driving cars to situate themselves in a scene (“localization”), as well as for robotic hands to locate and grasp individual objects.
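
A hedged sketch of the kind of soft-matching step that learning-based methods such as DCP use: compare learned per-point features with a softmax to form virtual corresponding points, which can then be fed to the SVD solve above (details such as the attention module are omitted; this is an illustration, not the published implementation):

import torch

def soft_correspondences(feat_src, feat_dst, dst):
    """feat_*: (N, D) learned per-point features; dst: (N, 3) target points."""
    scores = feat_src @ feat_dst.T             # feature similarity (N, N)
    weights = torch.softmax(scores, dim=1)     # soft match for each source point
    return weights @ dst                       # virtual matched points (N, 3)

The virtual matches play the role of known correspondences, so the same SVD step recovers the rotation and translation, and because every step is differentiable, the feature network can be trained end to end.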

One limitation of DCP is that it assumes we can see an entire shape instead of just one side. This means it can’t handle the more difficult task of aligning partial views of shapes (known as “partial-to-partial registration”). As a result, in a third paper the researchers presented an improved algorithm for this task that they call the Partial Registration Network (PRNet).
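
One simple heuristic in the spirit of PRNet's keypoint-based matching (an illustration, not the paper's exact procedure) is to keep only the most distinctive points, here scored by feature norm, before matching, so that geometry invisible in the other partial scan is less likely to be matched:

import torch

def select_keypoints(points, feats, m=128):
    """points: (N, 3); feats: (N, D); keep the m most salient points."""
    idx = feats.norm(dim=1).topk(m).indices
    return points[idx], feats[idx]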

Solomon says that existing 3D data tends to be “quite messy and unstructured compared to 2D images and photographs.” His team sought to figure out how to get meaningful information out of all that disorganized 3D data without the controlled environment that a lot of machine learning technologies now require.

A key observation behind the success of DCP and PRNet is the idea that a critical aspect of point-cloud processing is context. The geometric features on point cloud A that suggest the best ways to align it to point cloud B may be different from the features needed to align it to point cloud C. For example, in partial registration, an interesting part of a shape in one point cloud may not be visible in the other — making it useless for registration.

Wang says that the team’s tools have already been deployed by many researchers in the computer vision community and beyond. Even physicists are using them for an application the CSAIL team had never considered: particle physics.

Moving forward, the researchers hope to use the algorithms on real-world data, including data gathered from self-driving cars. Wang says they also plan to explore the potential of training their systems using self-supervised learning, to minimize the amount of human annotation needed.

Solomon and Wang were the two sole authors of the DCP and PRNet papers. Their co-authors on the EdgeConv paper were research assistant Yongbin Sun and Professor Sanjay Sarma of MIT, alongside postdoc Ziwei Liu of University of California at Berkeley and Professor Michael M. Bronstein of Imperial College London.

The projects were supported, in part, by the U.S. Air Force, the U.S. Army Research Office, Amazon, Google Research, IBM, the National Science Foundation, the Skoltech-MIT Next Generation Program, and the Toyota Research Institute.

Source: MIT News - Computer Science and Artificial Intelligence Laboratory (CSAIL) - Robotics - Artificial Intelligence

Reprinted with permission of MIT News: MIT News homepage

Use the link at the top of the story to get to the original article.
4
General Chat / Re: Now you see it...
« Last post by Art on Today at 02:12:23 AM »
Please stick to the posted video and try not to venture off into unknown thrill-seeking. My post was only meant to show some current technology that actually exists, not to sensationalize.

Turning people into puddles of slush is not exactly pertinent to the originally posted video or to the intent of this website. This is a family-friendly site.

Thank you.
5
General Chat / Re: Now you see it...
« Last post by AndyGoode on Today at 01:32:48 AM »
Quote: I'm not debunking any of this stuff, but I think it's important to understand the tech being used to record these phenomena.

I'll go with that. I think somebody mentioned that elsewhere, too. Now please explain the other videos. :)
Also, if you want links to the Iraqi doctor describing the bizarre burning, decapitation, disemboweling, and vehicle-shrinking effects of a lightning-like beam shot from a tank, or to the Panamanian eyewitness describing soldiers in the street being turned into a puddle of slush, I'll try to find those, too.
6
Gaming / Re: Can your PC run it?
« Last post by LOCKSUIT on October 22, 2019, 06:47:43 PM »
I thought you were going to say that we can run it in the cloud.
7
Gaming / Can your PC run it?
« Last post by Art on October 22, 2019, 05:13:27 PM »
That's always been a conundrum... whether your PC would be able to run a specific game (or not).
Don't buy a game if you're not sure it will play on your computer.

Well, wonder no longer.
A friend pointed me to a really nice site today and it works surprisingly well.

Can You Run It? A Free, Quick and Private evaluation/diagnosis of your computer in seconds.

https://www.systemrequirementslab.com/cyri-score
8
Home Made Robots / Re: advancements on the spider pully leg design
« Last post by goaty on October 22, 2019, 04:41:24 PM »
"
Between quality, cost, and time, you can only choose two.

You can have high quality & cheap,    but that requires lots of time.
You can have cheap, and fast,    but the quality will suffer.
You can have high quality and fast,    but its gonna cost a lot.
"

The highest quality thing in the highest quantity will come in due time, at a high cost :) ..... the result is the lowest cost lol.

If you don't count time or intellectual difficulty, it costs 0c forever after.
9
Hi sal,

I am a Project Manager at Greyfly, and we offer an artificial intelligence tool. I want people to know its benefits. My website is www.greyfly.co.uk. Visit and use the tool.

How does your AI improve project success, exactly? I'm not sure what you would feed it or how it would know, unless it used natural language processing.
10
General Chat / Re: Now you see it...
« Last post by LOCKSUIT on October 22, 2019, 03:42:07 PM »
You can literally see in the background that the other guy is also camouflaging; Korrelan is correct.

