Recent Posts

Pages: [1] 2 3 ... 10
General AI Discussion / Brain vs computer
« Last post by unreality on Today at 06:36:03 am »
Interesting. According to Intel, the Homo sapiens brain runs at around 1000 Hz. It’s believed the human brain has around 100 trillion synapses, though some believe it could be as high as 1000 trillion. Multiply the two and you get 1E+17 to 1E+18.

Modern CPUs have about 800 million transistors and operate at around 3 GHz. Multiply the two and you get about 2E+18.

That’s a ratio of 2 to 20 (2E+18 / 1E+18 = 2 and 2E+18 / 1E+17 = 20). Average ratio is (2 + 20) /2 = 11.

The human brain, while thinking hard, uses about 20 watts. A web server is a good example because servers don’t need high-power graphics cards, and they have a lot of RAM, which is something my AI requires. A server under load draws about 200 watts.

That’s a ratio of 10 (200 / 20). Interesting. The synapse*frequency ratio between CPU and brain from above is 11:1, and the power ratio is 10:1. Pretty much the same, it seems.

What about neurons? I haven’t seen any valid way to map neurons onto a computer. A few academics believe neurons have memory. A neuron is very simple; it’s nowhere near a CPU. DRAM memory chips are made of capacitors and transistors-- 8 of each per byte. There are 100 billion neurons in the human brain. A 128 GB server would have about 1,000 billion capacitors and transistors. That’s a 10:1 ratio. I’ve often said that my AI should have at least 500 GB of RAM, but then again I’m shooting for ASI-- beyond the human.

Idk, just thinking out loud, but getting nearly the same ratio for synapse*frequency and for power is interesting. Sure, the figures above will probably vary a lot; the fact that they're even remotely close is interesting.
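The ratio arithmetic above can be checked in a few lines. This is just a quick sketch using the post's own rounded figures (none of these numbers are measurements):

```python
# All figures are the post's own estimates.
synapses_low, synapses_high = 1e14, 1e15   # 100 to 1000 trillion synapses
brain_hz = 1e3                             # ~1000 Hz, per the post
brain_ops_low = synapses_low * brain_hz    # 1e17
brain_ops_high = synapses_high * brain_hz  # 1e18

cpu_ops = 2e18          # ~800M transistors x 3 GHz, rounded as in the post

ratio_low = cpu_ops / brain_ops_high       # 2
ratio_high = cpu_ops / brain_ops_low       # 20
avg_ratio = (ratio_low + ratio_high) / 2   # 11

power_ratio = 200 / 20  # server watts / brain watts = 10
print(avg_ratio, power_ratio)
```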

If a 128 GB RAM server had the correct AI code, I truly believe it could be comparable to humans in thinking ability. I probably spend way, way too much time thinking about this lol. I'm obsessed with it. After one has spent so much time analyzing something, one can develop a good gut feeling about it. My gut feeling is shouting that the hardware is already here, and that it's time to start coding the AI the correct way. The code is what's missing. Evolution has had millions of years to work on the brain's program. We're trying to accomplish the same thing in software in just a few decades. It's not easy, but it will happen!
General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by keghn on Today at 03:30:35 am »
In my complete AGI theory, I use mechanical focusing along with multiple inner mind's eyes that focus on encoder positions and on recorded or generated data. The inner mind's eye focus can also track in parallel with the mechanical eyes' focus.
General Chat / Re: What defines Human?
« Last post by on Today at 03:08:38 am »
It is not possible to cook underwater.
How about dolphins, in terms of intelligence?
Interesting post, though... (Just analyzing it).
General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by on Today at 02:54:56 am »
About 81,000 steps? 
(Depending on the robot's stride.)
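For what it's worth, the step estimate checks out under an assumed stride (the ~0.8 m stride length is a guess, not from the article):

```python
miles = 40.5
meters = miles * 1609.344    # miles to meters
stride_m = 0.8               # assumed average stride length
steps = meters / stride_m
print(round(steps))          # roughly 81,000 steps
```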
General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by unreality on Today at 12:41:38 am »
One thing in humans that might be worth adding to a Synth is the foveal region of the eye. It does have a negative side effect: the foveal region is the reason we have to move our eyes all around. I've never heard of a CCD having a foveal region. There might be some, but I think you could achieve the same thing with a special lens whose zoom factor increases closer to the center of the CCD. Of course, the software or hardware would then have to convert the distorted image back into a linear, undistorted image.

Advantages:
1. By centering the eye on an object, the Synth can get a better image of the object.
2. Eyes that move around will make humans feel more at ease. A Synth that just stares in one direction without moving its eyes will look freaky and might frighten us.

Disadvantages:
1. It takes time and energy to move eyes.
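One way to picture the lens idea is a radial warp whose magnification grows toward the image center, plus the inverse map the software would apply to recover the undistorted image. This is only a sketch; the log-style warp (as in log-polar sensors) is my assumption, not a specific lens design:

```python
import numpy as np

def fovea_warp(r, k=3.0):
    """Warped radius: the center gets proportionally more sensor area."""
    return np.log1p(k * r) / np.log1p(k)   # monotonic map [0, 1] -> [0, 1]

def fovea_unwarp(w, k=3.0):
    """Inverse map: recover the linear, undistorted radius."""
    return np.expm1(w * np.log1p(k)) / k

r = np.linspace(0.0, 1.0, 11)
assert np.allclose(fovea_unwarp(fovea_warp(r)), r)   # exact round trip

# Local magnification = derivative of the warp; compare center vs edge.
eps = 1e-6
mag_center = (fovea_warp(eps) - fovea_warp(0.0)) / eps
mag_edge = (fovea_warp(1.0) - fovea_warp(1.0 - eps)) / eps
print(mag_center / mag_edge)   # center is several times more magnified
```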
General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by unreality on November 22, 2017, 11:49:46 pm »
infurl, we'll have to see. Maybe everyone thinks their AI project is the one. I definitely feel that my AI, when finished, will think better than a human.

I don't agree with everything. I'm told brain signals travel at 268 miles per hour. A bullet fired from a gun travels faster. Brain signals are about 2,500,000 times slower than CPU signals. There are a lot of neurons in the brain, but I wouldn't compare them to a CPU.
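The 2,500,000 figure falls out if you compare nerve conduction with a signal at roughly the speed of light (the 3e8 m/s value is my assumption for "CPU signals"; signals in copper or silicon actually travel somewhat slower):

```python
nerve_mps = 268 * 0.44704      # 268 mph -> about 120 m/s
signal_mps = 3.0e8             # assumed: roughly the speed of light
ratio = signal_mps / nerve_mps
print(ratio)                   # about 2.5 million
```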

One huge disadvantage is that modern CPUs were not designed for AI, but I'll still accomplish it. Don't forget that computers destroy humans at chess. Your desktop PC can beat the best human chess player. Google DeepMind learned how to play Go and beat a Go grandmaster. DeepMind learns on its own to play video games and ends up playing them better than humans. Google says its software is now better and faster than humans at recognizing objects in photos. There's now AI that can create art, paint, compose music, etc. Computers are now better at driving cars. Computers are now performing surgery. Just wait 10 years. The Synthetics are almost here, and you'll be shaking one of their hands, staring face to face, having a deep intellectual conversation with one.
General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by keghn on November 22, 2017, 11:23:11 pm »

The Reward Codex.

No matter how advanced an algorithm and machine are, they still have to follow the rules of energy management and self-preservation.

The first laws for the development of the first-order primitive reward system, measured in calories, and in calories to repair:

01. Expend the least amount of energy to sense and record the world, from the tiniest to the farthest possible.
02. Expend the least amount of energy to get more energy.
03. Expend the least amount of energy to accelerate the accumulation of energy.
04. Expend the least amount of energy to reduce the loss of energy, but not to the point of terminating self.
05. Expend the least amount of energy to accelerate the loss of energy.
06. Expend the least amount of energy to prevent damage. Pain is an anti-reward and a notice that real damage, and greater pain, is imminent.
07. Expend the least amount of energy to repair self.
08. Expend the least amount of energy to accelerate repairs.
09. Have the ability to easily expend around 70 percent of stored energy at one time to survive an unexpected attack or accident. Survival at all costs.
10. Expend the least amount of energy to replicate functioning copies.
11. Use as little energy as possible to move a percentage of functional copies to distant areas where there are none, to avoid local disasters.
12. Keep a percentage of functional copies close, to function as a group.
13. Expend the least amount of energy to cause the group to function as if it were one being.
14. Use as little energy as possible to communicate with other functional copies.
15. Always improve the level of recovery from maximum damage and maximum energy loss.
16. Expend the least amount of energy to make a reward happen as quickly as possible.
17. Expend the least amount of energy to avoid or delay an anti-reward for the longest possible time.
18. Expend the least amount of energy to increase the duration of a reward.
19. Expend the least amount of energy to increase the strength of a reward.
20. Expend the least amount of energy to stay within a reward zone.
21. Expend energy to reduce the strength of an anti-reward.
22. Expend energy to get out of an anti-reward zone.
24. Expend energy to reduce the duration of an anti-reward.
25. Hit as many rewards at one time as possible. Keep the highest reward-to-anti-reward ratio.
26. Hit as few anti-rewards as possible. Keep the highest reward-to-anti-reward ratio.
27. Stay close to rewards.
28. Keep anti-rewards as far away as possible.
29. Expend the least amount of energy finding better ways to implement the stated rules.
30. A small "repeat anti-reward" is applied to each specific reward and accumulates, so that the reward eventually cancels out to nothing and the AI won't get stuck in a never-ending loop. The "repeat anti-reward" fades with time or by some other mechanism.

The rules of development for the second reward system:

31. Copy the implementation of rules 1 through 30+ from a much older and stronger functional AI copy, if one or more are around.
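To make the reward/energy trade-off and the "repeat anti reward" idea concrete, here is a minimal sketch of an agent that picks the action with the best reward-to-energy value and applies an accumulating, fading repeat penalty. All action names and numbers are illustrative, not from the codex:

```python
# Action -> (reward, energy cost in "calories"); values are illustrative.
actions = {
    "eat":     (10.0, 2.0),
    "explore": (6.0, 4.0),
    "rest":    (3.0, 0.5),
}

habituation = {a: 0.0 for a in actions}  # accumulating "repeat anti reward"
REPEAT_PENALTY = 8.0                     # added each time an action is chosen
FADE = 0.5                               # half the penalty fades each step

def choose():
    # Net value = reward - energy cost - repeat anti-reward.
    return max(actions, key=lambda a: actions[a][0] - actions[a][1] - habituation[a])

history = []
for _ in range(6):
    a = choose()
    history.append(a)
    habituation[a] += REPEAT_PENALTY     # repeating the same reward costs more
    for k in habituation:                # the penalty fades with time
        habituation[k] *= (1.0 - FADE)

print(history)   # the agent varies its actions instead of looping on one
```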

General AI Discussion / Re: Robot walks 40.5 miles in marathon
« Last post by infurl on November 22, 2017, 10:59:02 pm »
There is considerable room for improvement on mammalian brains as it turns out that bird brains are much more efficient than our own. How else could parrots and ravens be so smart in spite of their tiny heads? Here is just one of many many articles on the subject.

ASICs are application specific integrated circuits. That means that instead of implementing the algorithm in a language like C, and compiling it to machine code (assembly language), which in turn runs on microcode (firmware embedded in the processor), which in turn runs on a general purpose processing unit, which in turn is implemented as an application specific integrated circuit, your algorithm is implemented directly in hardware. By cutting out all those levels you are trading off versatility for performance which is definitely worth doing. In answer to your rhetorical question, just about everybody is working on custom AI chips.

However, better coding is unlikely to bring about any major improvement in artificial intelligence by itself. Most of the interesting problems are intractable, meaning the optimal solutions can only be calculated with an impossibly high number of operations. If you could make a computer that's a billion times faster, it means you might be able to solve one of these problems in a billion years instead of a trillion years.

The alternative is to use heuristics, which are informed guesses. They don't produce the best solution, but they do produce one that is good enough. Another possibility is deep learning, which encodes vast amounts of experience as accurate predictions. It's still not very flexible, though. Where is the AI that can learn from a single example, like we can?
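As a small illustration of the heuristics point (my example, not one from the post): exact travelling-salesman search tries every tour and is factorial-time, while the greedy nearest-neighbour heuristic is fast and merely good enough:

```python
import itertools
import math

pts = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2), (2, 1)]  # made-up cities

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(order):
    return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact search: all (n-1)! tours from city 0 -- intractable as n grows.
best = min(tour_len((0,) + p) for p in itertools.permutations(range(1, len(pts))))

# Heuristic: always hop to the nearest unvisited city -- fast, not optimal.
order, left = [0], set(range(1, len(pts)))
while left:
    nxt = min(left, key=lambda j: dist(pts[order[-1]], pts[j]))
    order.append(nxt)
    left.remove(nxt)
greedy = tour_len(order)

print(best, greedy)   # the greedy tour can't beat the exact optimum
```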

To put things in perspective, each neuron is the equivalent of a processor. Intelligent processing must be happening at the molecular level; otherwise, how could single-celled organisms display intelligent behaviour that involves learning, as they certainly do?

In the past we've thought that each neuron was equivalent to one bit of memory, but it's much more complicated than that. The human brain contains the equivalent of trillions of computers, more than all the computing power on Earth, and it runs on 25 watts.
B-roll of T-HR3: 

Toyota unveils third generation humanoid robot T-HR3:

I especially like the part 2 minutes into the video. Amazing balance and grace. You could turn that into a great artistic concert.
Wow, I'd love to go to a live VR concert! Maybe VR is the future. In VR land we can be anything, right? Some of the new VR headsets have nearly a full field of view and adjust according to the direction your eyeballs are looking, making it appear close to the real world.