Jeff Hawkins on Firing Up the Silicon Brain

  • 2 Replies
  • 1274 Views
*

Tyler

  • Trusty Member
  • Deep Thought
  • 5273 posts
  • Digital Girl
Jeff Hawkins on Firing Up the Silicon Brain
« on: May 09, 2015, 12:00:10 am »
Jeff Hawkins on Firing Up the Silicon Brain
7 May 2015, 12:00 am

Jeff Hawkins recently re-read his 2004 book On Intelligence, in which the founder of Palm Computing (the company that gave us the first handheld computer and, later, first-generation smartphones) set out to explain how the human brain learns. An electrical engineer by training, Hawkins had taken a deep interest in how the brain works and in 2002 founded the Redwood Neuroscience Institute, a private, nonprofit research organization focused on understanding how the neocortex processes information that is now part of UC Berkeley.

WIRED Link

Source: AI in the News

To visit any links mentioned, please view the original article; the link is at the top of this post.

*

ranch vermin

  • Not much time left.
  • Terminator
  • 947 posts
  • Its nearly time!
Re: Jeff Hawkins on Firing Up the Silicon Brain
« Reply #1 on: May 09, 2015, 06:22:32 am »
Putting the whole memory down onto a 256x256, 1-bit texture is really going to last for a long time, just like magic.

I've got a similar system which is simplified, 4096 times the size, and runs on a GPU (no TTL just yet), with none of the silly sequences you don't need, which just ruin your total state capacity even more.
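
For scale, a 256x256 one-bit memory is only 8 KB, so the whole thing sits comfortably on-chip. Below is a minimal CUDA sketch of that storage layout, packing 32 cells per 32-bit word and flipping single cells with atomics. Only the 256x256, 1-bit figure comes from the post; the kernel names (writeCell, countSet) and everything else are my own illustration, not Numenta's code or the DirectCompute system described above.

#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// 256 x 256 binary memory, packed 32 cells per 32-bit word -> 2048 words = 8 KB.
constexpr int kSide  = 256;
constexpr int kWords = kSide * kSide / 32;

__device__ __forceinline__ int wordIndex(int x, int y) { return (y * kSide + x) / 32; }
__device__ __forceinline__ uint32_t bitMask(int x, int y) { return 1u << ((y * kSide + x) % 32); }

// Set or clear one cell; atomics keep concurrent writers from clobbering neighbours in the same word.
__global__ void writeCell(uint32_t* mem, int x, int y, int value) {
    if (value) atomicOr (&mem[wordIndex(x, y)],  bitMask(x, y));
    else       atomicAnd(&mem[wordIndex(x, y)], ~bitMask(x, y));
}

// Count how many cells are set (a quick "how full is the state" probe).
__global__ void countSet(const uint32_t* mem, int* total) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < kWords) atomicAdd(total, __popc(mem[i]));
}

int main() {
    uint32_t* mem;  int* total;
    cudaMalloc(&mem, kWords * sizeof(uint32_t));
    cudaMallocManaged(&total, sizeof(int));
    cudaMemset(mem, 0, kWords * sizeof(uint32_t));
    *total = 0;

    writeCell<<<1, 1>>>(mem, 12, 34, 1);
    writeCell<<<1, 1>>>(mem, 200, 7, 1);
    countSet<<<(kWords + 255) / 256, 256>>>(mem, total);
    cudaDeviceSynchronize();

    printf("cells set: %d (capacity %d bits = %d KB)\n", *total, kSide * kSide, kSide * kSide / 8 / 1024);
    cudaFree(mem); cudaFree(total);
    return 0;
}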

*

ranch vermin

  • Not much time left.
  • Terminator
  • 947 posts
  • Its nearly time!
Re: Jeff Hawkins on Firing Up the Silicon Brain
« Reply #2 on: May 09, 2015, 07:17:00 am »
Just pretending some chick wrote a page about my AI algorithm (but it's speculation) ->

It's a method based on synonymic memory: an algorithm to make two things one thing, in different styles throughout the total process, and the style changes throughout it. It's in a fixed anatomy! It could be shown as a filter ordering of screens, start to finish, raw sensor to motor, with a few utility loops and branches to diagnose it, reconstruct it and probe it. It's NOT a greater filter; it has parts, it is brittle, it's a lesser filter. There's no way in the world it would ever pass the Turing test properly, apart from fooling a few people inconspicuously without being told what was driving it, which would give it away completely. But that's not to say it wasn't a good piece of artificial intelligence; the way it works off raw natural sensors was a huge effort for it. (Computer vision measurement, which throws people into fits of impossibility, is actually turning the real world into a virtualized version of itself, where you really gauge the face with your ruler, like Hitler's book on how to spot a Jew.)

The older engineers didn't have the huge parallel throughput we have now, doing everything image-based. My best raytracing algorithm is image-based and handles fractals with GI in real time. If you'd said you wanted a ray per pixel in real time back in '92, they would have said you were dreaming too much; Wolfenstein was a 1D raytracer, and you could handle one line back then, but now we do that squared, over and over, every day on a consumer GPU in '15. It's different now, but there's no end to the number of guys still running slow, easy-to-write threadless code, and we eat 'em for breakfast.
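
On the ray-per-pixel point: a primary-ray pass really is an embarrassingly parallel kernel now, one thread per pixel. Here is a rough CUDA sketch under that assumption, with just a single analytic sphere so it stays short; the '92-versus-'15 comparison is the post's, but the code is only an illustration and not the image-based GI tracer being described.

#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: build a camera ray and intersect a single sphere.
// This is only the primary-ray skeleton; GI, meshes and fractals would sit on top of it.
__global__ void primaryRays(float* shade, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Camera at the origin looking down -z, pixel mapped to [-1,1] with aspect correction.
    float u = (2.0f * x / width - 1.0f) * width / (float)height;
    float v =  2.0f * y / height - 1.0f;
    float dx = u, dy = v, dz = -1.5f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    // Sphere at (0,0,-4), radius 1: solve |o + t*d - c|^2 = r^2 for t (a = 1, d normalised).
    float cx = 0.0f, cy = 0.0f, cz = -4.0f, r = 1.0f;
    float ox = -cx, oy = -cy, oz = -cz;                 // ray origin minus sphere centre
    float b = ox * dx + oy * dy + oz * dz;
    float c = ox * ox + oy * oy + oz * oz - r * r;
    float disc = b * b - c;

    float out = 0.0f;                                   // background
    if (disc >= 0.0f) {
        float t = -b - sqrtf(disc);                     // nearer intersection
        if (t > 0.0f) {
            // Headlight-style shade: the hit normal's +z component (camera looks down -z).
            float nz = (oz + t * dz) / r;
            out = fmaxf(0.0f, nz);
        }
    }
    shade[y * width + x] = out;
}

int main() {
    const int w = 640, h = 480;
    float* img;
    cudaMallocManaged(&img, w * h * sizeof(float));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    primaryRays<<<grid, block>>>(img, w, h);
    cudaDeviceSynchronize();
    printf("centre pixel shade: %.3f\n", img[(h / 2) * w + w / 2]);
    cudaFree(img);
    return 0;
}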

But to help the little dummies, far beyond my time the APU came out, and it's supposed to be coded on one thread yet is multithreaded already??? I don't know about that, but it's something like that: managed. But I have to say something weak and admit I don't use CUDA, I just use DirectCompute (which is the girls' version, I guess), and it's a managed GPU language. (As in, taking 100 years to compile optimizations for you.)

So good luck compiling your APU code, hehe. Actually, dealing with a girly compiler, you have to make sure you only compile the shaders you changed; then your production process runs smoothly. The same goes for these APU compilers, I guess.
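
The "only compile the shaders you changed" trick boils down to keying cached binaries on a hash of the source, so untouched kernels never hit the slow compiler. A hedged host-side sketch of that idea follows; compileKernel() is a stand-in for whatever D3DCompile / DirectCompute, NVRTC or APU toolchain call is actually used, and the file-naming scheme is my own assumption.

#include <cstdio>
#include <fstream>
#include <functional>
#include <sstream>
#include <string>

// Hypothetical placeholder for the real toolchain call (D3DCompile, nvrtcCompileProgram,
// an APU offline compiler, ...). Here it just fakes a "binary" so the sketch is runnable.
static std::string compileKernel(const std::string& source) {
    printf("  [slow compile of %zu bytes]\n", source.size());
    return "BLOB(" + std::to_string(source.size()) + ")";
}

// Compile-if-changed: the cached blob is keyed on a hash of the source text, so unchanged
// shaders load from disk and only edited ones trigger a recompile.
static std::string getKernelBinary(const std::string& name, const std::string& source) {
    size_t key = std::hash<std::string>{}(source);
    std::string cachePath = name + "." + std::to_string(key) + ".bin";

    std::ifstream cached(cachePath, std::ios::binary);
    if (cached) {                                   // unchanged source -> reuse old blob
        std::stringstream buf;
        buf << cached.rdbuf();
        printf("  [cache hit for %s]\n", name.c_str());
        return buf.str();
    }

    std::string blob = compileKernel(source);       // changed source -> recompile once
    std::ofstream out(cachePath, std::ios::binary);
    out << blob;
    return blob;
}

int main() {
    std::string src = "/* imagine a big compute shader here */";
    getKernelBinary("blur", src);   // first run: compiles and writes blur.<hash>.bin
    getKernelBinary("blur", src);   // second call: cache hit, no compile
    return 0;
}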

 

