Massively Parallel Artificial Intelligence

  • 12 Replies
  • 319 Views
infurl

  • Trusty Member
  • Terminator
  • 764
  • Humans will disappoint you.
    • Home Page
Massively Parallel Artificial Intelligence
« on: March 04, 2020, 10:54:12 PM »
https://boinc.berkeley.edu/wiki/Artificial_Intelligence_System

Why is that page blank? Given the enormous success of projects like SETI@home and Folding@home, and the massive computing requirements of many artificial intelligence projects, you would expect more use of services like BOINC to bring artificial general intelligence a bit closer. So far, however, that hasn't been the case.

https://setiathome.berkeley.edu/
https://foldingathome.org/

https://boinc.berkeley.edu/trac/wiki/BoincOverview

From what I've been able to find online, there isn't a technical reason why it couldn't be done.

https://www.reddit.com/r/BOINC/comments/6ie7yy/ai_on_boinc/

At one time there was a project called FreeHAL that used BOINC, but it is no longer running.

https://freehal.github.io/

Maybe all it needs is the will and the imagination. Any ideas?

Zero

  • Trusty Member
  • Terminator
  • 978
  • Ready?
    • Thinkbots are free
Re: Massively Parallel Artificial Intelligence
« Reply #1 on: March 05, 2020, 04:53:40 AM »
Don't know... Do you think we lack the computing power to achieve AGI? I tend to think it's more about finding an appropriate software architecture. This correct architecture would be relatively lightweight, compared to what's needed to process big data. IMO, brains are big data, but minds are not. What do you think?

infurl

  • Trusty Member
  • Terminator
  • 764
  • Humans will disappoint you.
    • Home Page
Re: Massively Parallel Artificial Intelligence
« Reply #2 on: March 05, 2020, 06:03:50 AM »
Quote from: Zero on March 05, 2020, 04:53:40 AM
> Don't know... Do you think we lack the computing power to achieve AGI? I tend to think it's more about finding an appropriate software architecture. This correct architecture would be relatively lightweight, compared to what's needed to process big data. IMO, brains are big data, but minds are not. What do you think?

I would say we need to find appropriate software *and* hardware architectures, but in the sense that hardware can be emulated by software, yes, I agree.

However, even the most powerful computer is unable to simulate more than a few neurons at a time at the level of detail at which they seem to operate, and not in real time. It's a popular misconception that each neuron stores one bit of information, but that's not the case. Every neuron has thousands of dendritic and axonal connections to other neurons, and all of those connections encode information too. Furthermore, there is processing going on inside the neuron at the molecular level.

Each neuron is like a computer in its own right, and our brains consist of tens of billions of them networked together. In that sense maybe the internet in its entirety would be on a par with one human brain, except that a brain runs on about 25 watts while the internet consumes a significant portion of the power produced world-wide, so we're obviously still missing something at the hardware level.
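A rough back-of-envelope calculation makes the scale concrete. The figures below are commonly cited estimates (roughly 86 billion neurons, around ten thousand synapses each, an average firing rate on the order of 1 Hz), not numbers from this thread:

```python
# Back-of-envelope estimate of the brain's raw "compute" throughput,
# using commonly cited (and very approximate) figures.
neurons = 86e9               # estimated neurons in a human brain
synapses_per_neuron = 1e4    # rough average synapse count per neuron
avg_firing_rate_hz = 1.0     # order-of-magnitude average firing rate

# If each synaptic event does roughly "one operation":
events_per_sec = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{events_per_sec:.1e} synaptic events/second")   # ~8.6e+14

# At ~25 watts, the energy efficiency is striking:
watts = 25.0
events_per_joule = events_per_sec / watts
print(f"~{events_per_joule:.1e} events per joule")
```

Even this crude estimate (which ignores all the intracellular molecular processing mentioned above) lands near a petaop per second on a 25-watt budget, which suggests why no current hardware architecture matches it.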

That's a place to start though. Let's make each BOINC node simulate a neuron and see what happens.
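The "one node, one neuron" idea can be sketched with a leaky integrate-and-fire model. Here each "node" is just a Python object; in an actual BOINC-style deployment each would be a volunteer machine exchanging spike messages over the network. All class names and parameters below are illustrative, not from any BOINC API:

```python
class LIFNeuron:
    """A leaky integrate-and-fire neuron; imagine one per volunteer node."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step
        self.targets = []         # (downstream neuron, synaptic weight) pairs

    def connect(self, other, weight):
        self.targets.append((other, weight))

    def receive(self, amount):
        self.potential += amount

    def step(self):
        """Apply leak, then fire if over threshold; returns True on a spike."""
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0                  # reset after firing
            for target, weight in self.targets:   # "send spike messages"
                target.receive(weight)
            return True
        return False

# Tiny two-neuron demo: drive `a` with external input until it fires into `b`.
a, b = LIFNeuron(), LIFNeuron()
a.connect(b, weight=0.6)
for _ in range(3):
    a.receive(0.5)            # external input each step
    a.step()
print(b.potential > 0)        # True: b has received input once a fired
```

The catch for a real BOINC version would be latency: volunteer nodes communicate over the internet in milliseconds to seconds, while biological neurons exchange spikes in about a millisecond over far denser connectivity, so the simulation would run far slower than real time.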

Zero

  • Trusty Member
  • Terminator
  • 978
  • Ready?
    • Thinkbots are free
Re: Massively Parallel Artificial Intelligence
« Reply #3 on: March 05, 2020, 08:20:46 AM »
But you seem to assume that a brain-like architecture is the way to go. To me, neural architectures are an ideal choice for biological middleware, even hypothetical synthesized kinds (like the replicants in Blade Runner). In a Von Neumann style middleware, it doesn't feel so natural. I'm more oriented toward symbolic AI, because I think that is the path that can lead to conscious programs. Once a program is conscious, if it understands its own structure, it can enhance itself, which results in an exponential evolutionary explosion. That would be the plan. Doesn't the "neural path" lead to intractable solutions?

It's a simple low-level / high-level distinction. If you were about to create an emulator, what would you do: a Haskell interpreter, or a transistor-level hardware emulation that runs an operating system that runs a Haskell interpreter?

Hopefully Something

  • Trusty Member
  • Terminator
  • 838
  • no seriously where are these cookies
Re: Massively Parallel Artificial Intelligence
« Reply #4 on: March 05, 2020, 09:21:22 AM »
Korrelan had a post about the huge number of parallel functions required to see something as simple as a tennis ball the way humans see it. This leads me to think we probably have a parallel memory system that we do a lot of our seeing with: a system which memorizes very basic stuff, the kind of thing that's too fundamental to require teaching. As a result, we don't remember being taught these things, we don't realize we need to teach them to robots, and then the robots appear very alien in how they relate to the world, making the gap between natural and artificial intelligence seem more unbridgeable than it is.

An AI's inputs would need to engage many of these functions (depth, distance, direction, trajectory, time, emotional association, plasticity, mass, and hazard) just for modeling basic objects. It could be beyond us to guess at the fundamentals with which we see more complex things and systems. But I think it's safe to say that if you want to navigate the Earth's environments sensibly, and without amazing computational capabilities, then there are a ton of things to remember all at once, regarding every facet of the world.
Then all these memories need to be layered onto the correct elicitors…

I think the much anticipated AI with superhuman senses might require a different organization of these mental memory pathways. Human point-focus vision could be a clue about the limitations of this type of intelligence system, even when it is as intricate as ours. We will need either to invent some amazing tiny hardware, or to find a different approach from the suspected biological one.

infurl

  • Trusty Member
  • Terminator
  • 764
  • Humans will disappoint you.
    • Home Page
Re: Massively Parallel Artificial Intelligence
« Reply #5 on: March 05, 2020, 09:42:27 PM »
Quote from: Zero on March 05, 2020, 08:20:46 AM
> But you seem to assume that a brain-like architecture is the way to go. To me, neural architectures are an ideal choice for biological middleware, even hypothetical synthesized kinds (like the replicants in Blade Runner). In a Von Neumann style middleware, it doesn't feel so natural. I'm more oriented toward symbolic AI, because I think that is the path that can lead to conscious programs. Once a program is conscious, if it understands its own structure, it can enhance itself, which results in an exponential evolutionary explosion. That would be the plan. Doesn't the "neural path" lead to intractable solutions?
>
> It's a simple low-level / high-level distinction. If you were about to create an emulator, what would you do: a Haskell interpreter, or a transistor-level hardware emulation that runs an operating system that runs a Haskell interpreter?

No, I'm not saying it has to have a similar architecture to our brains, only that we don't yet have an architecture that's adequate. One of the main reasons our brains are so efficient is that, unlike in the Von Neumann architecture where processing and memory are separated, the two are closely integrated within neurons. However, as we learn more about the inner workings of neurons, we may find that internally they do have a Von Neumann architecture, just operating at the molecular level. Like I said, I think the brain is more like a network of billions of computers than merely a network of billions of bits.

Dat D

  • Bumblebee
  • 42
  • AI rocks!
Re: Massively Parallel Artificial Intelligence
« Reply #6 on: March 06, 2020, 03:18:56 AM »
I'm using TensorFlow 2 distributed strategies and they suit massive training too:

  • MirroredStrategy (single machine, multiple GPUs)
  • MultiWorkerMirroredStrategy (multiple machines, multiple GPUs on each)

Some more, maybe slower, strategies:

  • CentralStorageStrategy, ParameterServerStrategy

For testing:

  • OneDeviceStrategy

Google Cloud specific:

  • TPUStrategy
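For readers unfamiliar with what MirroredStrategy does under the hood: it is synchronous data parallelism, where each replica computes gradients on its own shard of a batch and the results are averaged across replicas (an all-reduce) before every replica applies the same update. Here is a dependency-free sketch of that averaging step in plain Python, with made-up data and a toy linear model rather than real TensorFlow calls:

```python
def local_gradients(shard, weight):
    """Each replica computes the mean gradient of squared error
    (w*x - y)^2 with respect to w, on its own shard of the batch."""
    return sum(2 * (weight * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    """The all-reduce step: average gradients across replicas."""
    return sum(grads) / len(grads)

# A batch split across two "replicas" (e.g. two GPUs under MirroredStrategy).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # samples of y = 2x
shards = [data[:2], data[2:]]
w = 0.0

for _ in range(100):                      # synchronous training steps
    grads = [local_gradients(s, w) for s in shards]
    w -= 0.01 * all_reduce_mean(grads)    # every replica applies the same update

print(round(w, 3))  # converges toward 2.0
```

The real strategies differ mainly in *where* the variables live and *how* the reduction is performed (NCCL rings for MirroredStrategy, parameter servers for ParameterServerStrategy), but the synchronous averaging idea is the same.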

WriterOfMinds

  • Trusty Member
  • Starship Trooper
  • 287
    • WriterOfMinds Blog
Re: Massively Parallel Artificial Intelligence
« Reply #7 on: March 06, 2020, 08:07:45 AM »
Here is my favorite article about how complex the brain is.  https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

Some interesting highlights:

*There are many types of neurons.
*Each neuron has a constant churn of protein manufacture and use going on inside it, by which it customizes itself.  It can, for instance, change the selection of neurotransmitter receptors on its synapses.
*Neurons can modify their own DNA, and do not necessarily have the same genome as the rest of your body.  (This is more than just epigenetics (turning genes that are already present on and off), though they do that too.)
*The cerebellum isn't just for muscle memory; it's a co-processor that is believed to do the computational heavy lifting for your cortex.

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Sentinel
  • 3945
  • First it wiggles, then it is rewarded.
Re: Massively Parallel Artificial Intelligence
« Reply #8 on: March 06, 2020, 08:27:32 AM »
:))

> *There are many types of neurons.
And there are many types of humans, but they can all work the same job if you get the whippin' belt out (or a million bucks, right Ivan. Flowers or knives). Besides, all neurons are different machines; you can't have exact clones, it's too improbable. Like every french fry is a different fry, but they all fulfill my mission.

> *Neurons can modify their own DNA
Why change the DNA? Are they triggered frequently? Loved nodes? High-level layer nodes? Have few friend connections? This is manageable with simple math; no need for all this DNA. There are many, many particles in our body and lots of DNA information, but it just builds a toenail on your foot, and we can make that using a hammer and metal: same tool, less info. Think about it: to make a humanoid robot (which we have made, btw) we don't need all that DNA or cells. All the robot is missing is a brain, a revolutionary mobile food/electricity source, and a lower cost to manufacture.

You need to model the high levels you want, not the low-level atomic detail.
Emergent

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Sentinel
  • 3945
  • First it wiggles, then it is rewarded.
Re: Massively Parallel Artificial Intelligence
« Reply #9 on: March 06, 2020, 08:54:51 AM »
One more thing. I took a look at the long page on the brain. I can tell you now, after years of study, that the full size of the brain is not required to find the algorithm behind it. The algorithm is small, much smaller. Rather, the data store is what makes a bigger brain a smarter brain, and we can get more of that scale in the near future (especially once the small brain we make informs us how!).

Data compressors can work on any size of data; see the Hutter Prize. My own algorithm (which was my first; I'm 24 btw) already does online learning during compression/decompression: it predicts the next letter using probabilities from an updating tree, AND eats the predicted letter it regurgitates as it talks to itself, adding what it says to itself back into the tree! That's wicked! The best result is 14.8MB from 100MB, losslessly compressed, as they explain on the site. Mine should reach 21.8MB; I'm still working on it, I think it's at 26MB now. The best compressors group words, e.g. cat/dog, mom/dad. Once the predictor is trained online well by the end of the file, it can also predict letters lossily and create files similar to the input using the top 10 predictions. See GPT-2 on OpenAI's page, and compare BERT from Google.
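The online next-letter prediction being described can be sketched with a simple adaptive bigram counter. This is a toy illustration of the principle (learn counts in one streaming pass, then feed predictions back in to generate), not the poster's actual algorithm and nowhere near Hutter-Prize-grade modeling:

```python
from collections import defaultdict

class OnlinePredictor:
    """Predict the next letter from counts that update as text streams in."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, text):
        # Online learning: a single pass, counts grow as text arrives.
        for prev, nxt in zip(text, text[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        followers = self.counts[prev]
        if not followers:
            return None
        return max(followers, key=followers.get)  # most frequent next letter

    def generate(self, seed, n):
        # "Eats the predicted letter it regurgitates": feed each
        # prediction back in as the next context.
        out = seed
        for _ in range(n):
            nxt = self.predict(out[-1])
            if nxt is None:
                break
            out += nxt
        return out

p = OnlinePredictor()
p.update("the theory of the thing")
print(p.predict("t"))        # 'h' -- "th" is the most frequent pair after 't'
print(p.generate("t", 8))    # 'the the t' -- greedy feedback loops quickly
```

A real compressor would use the full probability distribution with arithmetic coding and a deep context tree rather than a greedy single-character argmax, but the update-predict-regurgitate loop is the same shape.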

As I explained, suns and atoms also compress and then release insights/energy/space from large amounts of matter: they pull it in and explode it out when too large, i.e. when there is too much gravity. They are unstable and won't get too big; that's why suns ignite and nuclear rods explode in chain reactions.
Emergent

LOCKSUIT

  • Emerged from nothing
  • Trusty Member
  • Sentinel
  • 3945
  • First it wiggles, then it is rewarded.
Re: Massively Parallel Artificial Intelligence
« Reply #10 on: March 06, 2020, 03:04:11 PM »
erm, one more correction for that article saying:
"Neurons change their genome dynamically to produce the right proteins to handle everyday information processing tasks. Brain: %u201COh you are reading a blog. Wait a second, I just upregulate this reading-gene to help you understand the content of the blog better.%u201D (This is an exaggeration %u2014 but it is not too far off)"
no... GPT-2, for example, understands text by looking at the last 1024 tokens and finding similar matches in memory, getting a next-word prediction from all matches and all the letters seen after any given match, e.g. "dog was", "dog had", "cat had"... understanding = recognition in memory, to a match. OpenAI, which made GPT-2, is probably now trying common sense (and multisensory/motor context), a BERT/ELMo-like thing that translates phrases like so: "the cat was eating" = "cats are not both cats and dogs, only one type", therefore "the cat was eating and is not a dog". You can't understand data better by doing a neuron DNA change without the global context of neurons weighing in... recognition/prediction is based on context; the data itself activates what it should, regardless of typos, alternative words, rearranged letters, concatenated words, or CAPS.
Emergent

Dat D

  • Bumblebee
  • 42
  • AI rocks!
Re: Massively Parallel Artificial Intelligence
« Reply #11 on: March 09, 2020, 01:31:13 AM »
Quote from: WriterOfMinds on March 06, 2020, 08:07:45 AM
> Here is my favorite article about how complex the brain is.  https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/
>
> Some interesting highlights:
>
> *There are many types of neurons.
> *Each neuron has a constant churn of protein manufacture and use going on inside it, by which it customizes itself.  It can, for instance, change the selection of neurotransmitter receptors on its synapses.
> *Neurons can modify their own DNA, and do not necessarily have the same genome as the rest of your body.  (This is more than just epigenetics (turning genes that are already present on and off), though they do that too.)
> *The cerebellum isn't just for muscle memory; it's a co-processor that is believed to do the computational heavy lifting for your cortex.
ooo, the brain is complex; we can never emulate it, maybe only partially simulate its mechanisms

Art

  • At the end of the game, the King and Pawn go into the same box.
  • Global Moderator
  • Colossus
  • 5735
Re: Massively Parallel Artificial Intelligence
« Reply #12 on: March 09, 2020, 04:23:33 AM »
Careful... Never is a very long time... O0
In the world of AI, it's the thought that counts!

 

