Locksuit,
Honestly, you're extremely and unrealistically overoptimistic about quantum computers. Quantum computers are not a panacea, especially for artificial intelligence. *Most* problems cannot be sped up appreciably by quantum computers at all, and that *especially* includes the most important ones, like NP-complete problems (many of which are graph problems) and AI problems: no known quantum algorithm solves NP-complete problems efficiently. The situation parallels classical parallel computing: only if a problem is inherently parallelizable can parallel processing (classical or quantum) speed it up by any appreciable amount, per Amdahl's law.

There is one known, modest, possible benefit for AI: Grover's algorithm running on a quantum computer gives a quadratic speedup for linear search through an unsorted database, but that will likely have limited utility in practical AI problems. Maybe it could help a vision system search many possible identifications of a viewed object, but I'm not even sure of that. Certain optimization problems could also be solved well by quantum computers, but again, how that ability applies to AI is not very clear, at least not to me. The people most interested in quantum computers are government folks who want to read private, encrypted messages, which Shor's algorithm can do, but that application is unrelated to AI. All of which suggests that quantum computers are not of interest to the common man, other than their potential for privacy threats.
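To make the two quantitative points above concrete, here's a small back-of-the-envelope sketch (illustrative numbers only, not a claim about any particular machine): Amdahl's law caps the overall speedup by the fraction of the workload that can actually be accelerated, and Grover's algorithm buys only a quadratic, not exponential, reduction in search queries.

```python
import math

def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Even an *infinite* speedup applied to half the workload at most doubles throughput:
print(amdahl_speedup(0.5, float("inf")))  # -> 2.0

# Grover's search over N unsorted items: a classical scan needs ~N/2 lookups
# on average, while Grover needs ~(pi/4) * sqrt(N) quantum queries.
N = 1_000_000
classical_queries = N / 2
grover_queries = (math.pi / 4) * math.sqrt(N)
print(f"classical ~{classical_queries:.0f} queries, Grover ~{grover_queries:.0f} queries")
```

A quadratic saving is real, but notice it shrinks a million-item search from hundreds of thousands of lookups to hundreds of queries, nothing like the exponential gains people imagine, and only for the searchable portion of the workload.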
https://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf
https://quantiki.org/wiki/grovers-search-algorithm
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0538e/BABFGJGF.html

I know I like to talk about quantum computers, as do others on AI forums, but personally that's only because I've always been fascinated by quantum mechanics, not because I have any interest or hope in applying quantum computers to AI. Looking to quantum computers for a big advance in AI is the wrong place to look: that's a simple-minded solution that government folks and uninformed laymen reach for, and you don't want to be in either of those groups.
The real situation is even more extreme than what I just described: Marvin Minsky ("the father of artificial intelligence") and others believe that faster computing is virtually *useless* for producing AGI because existing hardware is already *far* more than adequate for the task. This is called "hardware overhang".
https://aiimpacts.org/hardware-overhang/

What Marvin Minsky and Jeff Hawkins (and I) believe is that the key to AGI is a clever *organization* of what we already have. In other words, it's an intellectual problem, not a hardware problem, which in turn means anybody could make a breakthrough without needing Google-sized research teams working with cryogenic engineers, physics experts, and number theory experts. That's very good news for the common man like you and me and everybody else on this forum.
(p. 36)
According to functionalism, being intelligent or having a mind is purely a property of organization and has nothing inherently to do with what you're organized out of. A mind exists in any system whose constituent parts have the right causal relationship with each other, but those parts can just as validly be neurons, silicon chips, or anything else. Clearly, this view is standard issue to any would-be builder of intelligent machines. Consider: Would a game of chess be any less real if it was played with a salt shaker standing in for a lost knight piece? Clearly not.
Hawkins, Jeff. 2004. On Intelligence. New York: Times Books.