I spotted problems with your first three claims immediately. The later claims in your list looked more believable, but I didn't read them in detail. I've also thought about the first three topics for many years, which is probably why the flaws in your reasoning jumped out at me right away.
1. Neural networks
It's true that *current* neural networks are very limited: they cannot think or reason, they learn extremely slowly, they handle only one task at a time, and so on. But the (2nd) film never said what improvements had been made in its futuristic neural networks, and a great many improvements are possible. Jeff Hawkins argues (in my words) that the field is sluggish and dying because academic pressure to publish crowds out deep thinking, and because the academic emphasis on math stifles creativity:
(p. 26)
I thought the field would move on to more realistic networks, but it didn't. Because these simple neural networks were able to do interesting things, research seemed to stop right there, for years. They had found a new and interesting tool, and overnight thousands of scientists, engineers, and students were getting grants, earning PhDs, and writing books about neural networks. Companies were formed to use neural networks to predict the stock market, process loan applications, verify signatures, and perform hundreds of other pattern classification applications. Although the intent of the founders of the field might have been more general, the field became dominated by people who weren't interested in understanding how the brain works, or understanding what intelligence is.

The popular press didn't understand this distinction well.

(p. 27)
Newspapers, magazines, and TV science programs presented neural networks as being "brainlike" or working on the "same principles as the brain." Unlike AI, where everything had to be programmed, neural nets learned by example, which seemed, well, somehow more intelligent. One prominent demonstration was NetTalk. This neural network learned to map sequences of letters onto spoken sounds. As the network was trained on printed text, it started sounding like a computer voice reading the words. It was easy to imagine that, with a little more time, neural networks would be conversing with humans. NetTalk was incorrectly heralded on national news as a machine learning to read. NetTalk was a great exhibition, but what it was actually doing bordered on the trivial. It didn't read, it didn't understand, and was of little practical value. It just matched letter combinations to predefined sound patterns.
Hawkins, Jeff. 2004. On Intelligence. New York: Times Books.
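To make Hawkins's point concrete, here is a minimal Python sketch of the kind of letter-to-sound mapping NetTalk performed. The real NetTalk learned its mapping with a small neural network over a seven-letter sliding window; the three-letter window and the phoneme table entries below are invented for illustration, and the point is only that the task reduces to matching letter contexts to predefined sound patterns:

```python
# Toy illustration of the kind of mapping NetTalk performed: each letter,
# together with its immediate context, is matched to a phoneme. The real
# NetTalk learned this mapping with a small network over a seven-letter
# window; a hypothetical lookup table shows how shallow the task is.

# Hypothetical 3-letter context -> phoneme entries ('_' marks a word edge).
PHONEME_TABLE = {
    "_ca": "k",   # 'c' at the start of a word, followed by 'a'
    "cat": "ae",  # 'a' between 'c' and 't'
    "at_": "t",   # 't' at the end of a word
}

def to_phonemes(word, window=3):
    """Map each letter of `word` to a phoneme using its surrounding context."""
    padded = "_" + word + "_"
    return [PHONEME_TABLE.get(padded[i:i + window], "?") for i in range(len(word))]

print(to_phonemes("cat"))  # ['k', 'ae', 't']
```

No reading or understanding anywhere, just context matching, which is exactly Hawkins's complaint.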
2. Accidental self-awareness
I once designed a neural network intended to think more like a human, and when I considered what that design could do, I realized that two types of self-awareness arose from it automatically. So yes, self-awareness can arise accidentally. There also exist multiple types of self-awareness, so you should research those types and specify which ones you mean. The first type, awareness of one's own body, is trivial and has been used in robots for decades (see the sketch below).
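Here is a hedged sketch of that first, trivial type: an internal self-model of the robot's own body, checked against its sensors. The class, joint names, and tolerance are all hypothetical; no specific robot API is assumed.

```python
# Minimal sketch of "body" self-awareness: the robot keeps an internal
# model of where it believes its own joints are, and detects when sensed
# reality disagrees with that self-model.

class ProprioceptiveRobot:
    def __init__(self, joint_names):
        # Internal self-model: where the robot believes its joints are.
        self.believed_angles = {name: 0.0 for name in joint_names}

    def command(self, joint, angle):
        """Move a joint and update the self-model accordingly."""
        self.believed_angles[joint] = angle

    def self_check(self, sensed_angles, tolerance=2.0):
        """Compare the self-model against sensor readings (degrees)."""
        discrepancies = {}
        for joint, believed in self.believed_angles.items():
            error = abs(sensed_angles[joint] - believed)
            if error > tolerance:
                discrepancies[joint] = error
        return discrepancies  # empty dict: body matches self-model

robot = ProprioceptiveRobot(["elbow", "wrist"])
robot.command("elbow", 90.0)
print(robot.self_check({"elbow": 45.0, "wrist": 0.0}))  # {'elbow': 45.0}
```

The robot "knows" something about itself (its own body state) without any grand machinery, which is why I call this type trivial.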
3. Selective generalisation
Your logic is sound *if* you assume Skynet had no human-specific knowledge. However, governments collect an extremely large amount of data on virtually every individual on the planet and apply machine learning to generalize from that data (I saw a reference to this generalization effort recently). If Skynet had access to such a database, it could easily derive accurate generalities about the human species and its typical behavior, and know full well how humans would react to numerous scenarios.
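As a rough illustration of how such generalities fall out of ordinary supervised learning, here is a toy sketch. The features, labels, and scenario are entirely invented, and scikit-learn's DecisionTreeClassifier stands in for whatever a real system would run:

```python
# Toy version of the generalization step: given per-person records, learn
# a population-level rule about how people respond to a scenario.

from sklearn.tree import DecisionTreeClassifier

# Hypothetical records: [age, owns_vehicle, lives_in_city] -> fled (1) or
# sheltered in place (0) during a past emergency.
X = [
    [25, 1, 1], [40, 1, 0], [33, 0, 1],
    [60, 0, 1], [19, 1, 1], [51, 1, 0],
]
y = [1, 1, 0, 0, 1, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The model now encodes a (crude) generality about the species, e.g.
# "vehicle owners tend to flee." Query it for a new individual:
print(model.predict([[30, 1, 1]]))  # -> [1]
```

Scale the invented six rows up to billions of real surveillance records and the learned generalities stop being crude.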
If you get stuck finding references for my claims, I might help you out if I have the time.