Hi, Art!
Just to clarify: when I say I'm "miffed with so much modern emphasis on machine learning", I don't mean the general idea of machines learning things, or unsupervised learning as a general concept. I mean specifically neural networks and similar techniques that nowadays claim the "machine learning" label, and that usually involve massive amounts of data, brute-force number crunching and opaque, cryptic, subsymbolic internal representations. And I don't resent the emphasis ON them so much as the emphasis AWAY FROM symbolic AI, which IMO happened for no good reason.

Why do I believe there's no good reason? For one, expressiveness has been deliberately kept below what had already been not just imagined but implemented, in the name of performance, decidability and other "good behaviour" traits. We are still dealing with straitjackets like Horn clauses (Prolog, Datalog, rule engines, SWRL...) and Description Logic (OWL, mostly) when the standard should be things like SUO-KIF, CLIF, IKL and beyond, i.e. higher-order syntax with full first-order semantics (see the first sketch below).

Another sign is that, to this day, I haven't found a robust, modern, well-supported, open-source CNL that can handle time, states and events (the second sketch below shows the kind of thing I mean). I'd love to be proven wrong on this particular point. It's not like it would require a crazy amount of work; in fact, I've seen one or two doctoral dissertations that seem to do just that, but it's as if nobody cares, maybe because everyone is playing with neural networks.
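To make the expressiveness complaint concrete, here is a minimal sketch; the relation names (parent, ancestor, part_of, subpart, TransitiveRelation) are mine, purely for illustration. In pure Horn logic, as in Datalog or SWRL, a pattern like transitivity has to be restated for every relation, because a rule cannot quantify over relations (setting aside Prolog's extra-logical call/N workarounds):

    % Horn clauses: the transitive-closure pattern, restated per relation.
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

    subpart(X, Y) :- part_of(X, Y).
    subpart(X, Z) :- part_of(X, Y), subpart(Y, Z).

In CLIF (SUO-KIF is the same idea, with => in place of if) you state the pattern once, with a variable in relation position, and the semantics remains first-order:

    /* Transitivity stated once, for any relation r. */
    (forall (r)
      (if (TransitiveRelation r)
          (forall (x y z)
            (if (and (r x y) (r y z))
                (r x z)))))

    (TransitiveRelation ancestor)
    (TransitiveRelation subpart)

That single axiom is exactly the kind of thing the decidability-first mindset forbids.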
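As for time, states and events, here is the kind of thing I'd want a CNL to do: compile a controlled-English sentence into a logical form where the events themselves are reified individuals that can be ordered in time. The sentence and the vocabulary (Insertion, Dispensing, agent, patient, before) are hypothetical, a neo-Davidsonian sketch in CLIF rather than the output of any existing tool:

    A customer inserts a card before the machine dispenses the cash.

    (exists (c k m y e1 e2)
      (and (Customer c) (Card k) (Machine m) (Cash y)
           (Insertion e1) (agent e1 c) (patient e1 k)
           (Dispensing e2) (agent e2 m) (patient e2 y)
           (before e1 e2)))

Once events are ordinary individuals, temporal ordering, states holding over intervals and the like become plain relations over them; that is precisely the layer most rule-oriented CNLs leave out.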
That said, even if an AI has to be "spoon-fed" everything it knows, it can still be extremely useful and fun to use, especially if the spoon-feeding process is improved by a powerful CNL, perhaps in combination with equally good visual and speech UI modes. After all, virtually all the software we use today is "dumb" in that sense, unable to learn anything on its own, yet we still find it useful and fun. Having better expert systems that are easier to teach and train seems a worthy goal, even if they can't learn anything substantial on their own after they are deemed ready for production.
Besides, the concept of "learning" sounds somewhat nebulous. At one extreme, we may say of a human being that they "never learn" if they keep making what we consider similar mistakes, even though their dumbest decisions are way smarter than anything current AI can come up with. At the other extreme, even entering new data into a spreadsheet increases the amount of knowledge it contains, so in a way we are "teaching" the spreadsheet new information and it is "learning" something.
Lastly, I think there are reasons to believe that doing away with the symbolic paradigm and putting something like neural networks in charge of the AI would have a number of nasty consequences from a practical POV. That kind of AI would be nothing like a tool or an assistant; it would be more like a daemon or a wild alien beast, perhaps more aptly described as "intelligent artificial life" than as AI proper. Sure, building a wild, uncontrollable AI may well be easier than building a reliable, trustworthy AI that is just as smart, and that kind of research will probably be needed, with all due precautions. But I wouldn't trust that AI even to set the speed of an electric toothbrush.
OK, I think I've shared enough controversial opinions for one post.