I'm a professional in the field of AI. I'm new to this forum, but I've been in a lot of AI forums before, so I'll give my opinions, like them or not.
First, this concern is so common that I recommend every AI forum create a subforum for this question alone: many newcomers ask it, and the same question asked over and over becomes annoying after a while. Another such perennial topic is consciousness.
Second, the main thrust of my answer, even without considering your specific questions, is always two-fold: (1) Any new technology (fire, coherent light, nuclear energy, genetics, etc.) is potentially dangerous, but in itself every force of nature is neutral, neither inherently good nor bad, so the outcome depends only on how people handle that technology. (2) By far the biggest danger of AI will be its misuse by human beings (especially by the elites and the military), not AI itself deciding to exterminate our species or anything like that.
As for your specific questions...
(1) Are we to become robots as the next stage of human evolution?
See Hans Moravec's book "Mind Children". I agree with Moravec that robots and humans will *tend* to merge in the future, and that AI is a natural part of evolution, but in any kind of free world people should always be free to choose what they do with their own bodies.
(2) and (3) are already covered in my generic statement.
(4) Is it even a good idea to continue researching AI to the point where it attains awareness?
Please research the different types of "self-awareness" before asking such questions. I claim that intelligence *requires* self-awareness of at least two types in order to function correctly, so self-awareness in AI is inevitable.
I want to emphasize that I believe AI is potentially very dangerous, but not because of any kind of Terminator-type scenario. Rather, it is because human beings are an absolutely despicable species, and the more power they have, the more corrupt they become, so in the hands of very powerful people, that corruption will strongly motivate them to use AI against the human race, whether the machines themselves have any opinion about the human race or not.

Be sure you understand my meaning, though: I'm saying that the general population must be allowed to freely own, develop, and use AI, to keep a balance of power and fight back against such corrupt leaders, *especially* if those leaders try to convince the general population that AI must be regulated because they deem it too dangerous for the common person to own. In other words, when AI is outlawed, only outlaws will own AI.
Ever see the old film "Forbidden Planet"? Just substitute AI for the Krell's mind-amplification machine, and the scenario I mean is the same as in the film, with the same result: self-destruction.