Ai Dreams Forum
Artificial Intelligence => AI News => Topic started by: infurl on October 31, 2020, 05:21:57 am
-
https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/
This is the kind of mathematical insight that can have a great impact and make the world a better place.
The whole thing is extremely clever, and also makes the method more generalizable. Previous deep-learning methods had to be trained separately for every type of fluid, whereas this one only needs to be trained once to handle all of them, as confirmed by the researchers’ experiments.
“Having good, fine-grained weather predictions on a global scale is such a challenging problem,” she says, “and even on the biggest supercomputers, we can’t do it at a global scale today. So if we can use these methods to speed up the entire pipeline, that would be tremendously impactful.”
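The core trick the article describes is doing the learning in Fourier space: transform the input field to frequency space, mix only the lowest frequencies with learned weights, and transform back. Here's a minimal 1-D toy sketch of that spectral-convolution idea (my own illustration, not the researchers' code; the random `weights` stand in for what training would learn):

```python
import numpy as np

def fourier_layer(v, weights, modes):
    """One spectral-convolution layer in the style of a Fourier neural
    operator: FFT the signal, scale only the lowest `modes` frequencies
    by learned complex weights, then inverse-FFT back to physical space."""
    v_hat = np.fft.rfft(v)                     # frequency-space view of v
    out_hat = np.zeros_like(v_hat)
    out_hat[:modes] = weights * v_hat[:modes]  # mix low modes, drop the rest
    return np.fft.irfft(out_hat, n=v.size)     # back to physical space

rng = np.random.default_rng(0)
n, modes = 64, 8
v = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))  # toy input field
weights = rng.standard_normal(modes) + 1j * rng.standard_normal(modes)
u = fourier_layer(v, weights, modes)
print(u.shape)
```

Because the layer acts on frequencies rather than on a fixed grid, the same learned weights can in principle be applied at different resolutions, which is one reason the approach generalizes so well.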
-
Math is almost perfect.
-
IMHO that minor fragment of imperfection is exactly what makes math perfect. See Gödel's incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems) (simplified: (1) if a system can prove everything, it is inconsistent, and (2) if it is consistent, there are truths it cannot prove).
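For reference, the standard statements behind that paraphrase (textbook formulations, sketched in LaTeX):

```latex
% First incompleteness theorem: for any consistent, effectively
% axiomatizable theory $T$ interpreting basic arithmetic, there is a
% sentence $G_T$ that $T$ can neither prove nor refute:
\[
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T .
\]
% Second incompleteness theorem: such a $T$ cannot prove its own
% consistency:
\[
T \nvdash \mathrm{Con}(T).
\]
```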
-
People do tend to use incomplete reasoning to approach internal consistency. Maybe this phenomenon is rooted in some truth. As far as I know, if you invent a completely defined system, you don’t get emergent phenomena. Maybe the universe generates novelty due to a small break in internal consistency. So, there could be this relatively small “error” or “variation” which propagates through the various un-sandboxed systems running our simulation, annoying our creators. ;D
"There is a crack in everything, that's how the light gets in." - Leonard Cohen