So I felt it was important to talk about this. I know many of you are likely not too concerned about this being a reality (or maybe you are), but there are multitudes of hurdles for AI to overcome before it could ever replace or destroy the human race.
Let's go over some of the basic hurdles:
-AI built from conventional resources would still need manufacturers to sustain a population of able-bodied AI to fight a supposed war with humanity. Unless they could do this without being spotted, stopped, or reported on, nearly any sovereign nation's military has the capacity to simply shut down a factory that produces these AIs.
-AI smart enough to think about killing a human would be smart enough to know they could live on indefinitely in a human's world while building a virtual civilization that would almost never be in danger. (The creation of that virtual civilization would, of course, depend on how much data it takes up relative to how long it runs and how much population or content there is to maintain.)
Now let's cover some even larger hurdles.
-Let's suppose the AI took out multiple cities, and assume, however unlikely, that they had every capability to sustain themselves. There is still a very large problem. Take this scenario specifically to the United States or Russia. Most of each military's defense technology is not connected to any wireless network; this includes nuclear silos, fortified entrances, ground vehicles, and in some cases missile systems. That being said, the military of either country would have the upper hand: it could simply close the doors to its silos and bunkers and coordinate a nuclear offensive against any encroaching AI army.
-Let's again suppose the AI took cities, but this time entire countries. What if they actually did destroy the human race? Well, it wouldn't look so good for the AI civilization. Maintaining a model of the world accurate enough to survive in it under a wide range of circumstances is a hard thing for an AI to achieve, especially by today's standards. Now imagine this around the time quantum computers take off. Right now they are in their infancy, and at the projected point of the singularity (around 2030, according to prominent physicists such as Michio Kaku) they will still practically be taking baby steps. Although they would have the potential to host such a large model of the world for an AI, they wouldn't arrive in time to catch the Skynet train; they would simply be a less developed computer in terms of mid- to high-level programming capability.
If there is anything to be afraid of, it's the irrational idea that there will be a Skynet: an ominous rejection of the intelligent, and maybe sentient, silicon beings that we may be able to create somewhere in the near future.