As you mentioned, consciousness as it relates to humans is going to be quite different from consciousness as it relates to machines, and it is not even clear whether, by definition, we can determine a machine to be conscious at all.
That's why Searle's Chinese Room thought experiment is so important - it gives us a way to "become" the machine, the CPU inside the Chinese Room, and see exactly what is different about a conscious machine, and how its consciousness would be equivalent to human consciousness.
The Searle Chinese Room thought experiment: you are inside a room, playing the role of the machine or CPU. From outside comes an input - a slip of paper with Chinese writing on it (and you don't know Chinese) - and the question to answer is "What, if anything, is your conclusion?" You then use a book of rules to work out a reply and output your answer in Chinese. It is clear that at no time did you actually (semantically) understand anything in the Chinese that came in or went out - the whole time it is all just gibberish squiggles to you; you have no consciousness (semantic understanding) of what is on the input slip or of what you write as output. Searle's point is that this is exactly the situation for syntax-based, classical (unconscious) computer systems - it is always just syntactic squiggles in and squiggles out, with no actual understanding of anything and no consciousness of it.
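To make the "book of rules" concrete, here is a minimal sketch in Python of what the person in the room is doing. The rule entries and names below are invented placeholders, purely for illustration - the point is only that nothing in the program ever assigns meaning to the symbols; it just matches squiggles and copies out other squiggles.

# A toy "Chinese Room": opaque symbols in, opaque symbols out, by rule lookup.
# The rulebook entries are made-up placeholders; the program never interprets
# what any of the symbols mean.

RULEBOOK = {
    # squiggles in              ->  squiggles out
    "清晨的时候被雷雨声叫醒": "没有结论",
    "发觉自己沉在被窝里": "没有结论",
}

def room(input_slip: str) -> str:
    # Pure syntactic manipulation: look up the input squiggles and copy out
    # whatever squiggles the matching rule specifies.
    return RULEBOOK.get(input_slip, "？")

print(room("清晨的时候被雷雨声叫醒"))  # prints squiggles the "room" never understood

Whether that table is tiny or astronomically large, the structure is the same: lookup and copying, with no understanding anywhere in the loop.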
Let's see if we can experience the Searle Chinese Room.
Classical Computer System - syntactic I/O, so no machine consciousness present

Input Slip (in Chinese, representing syntactic squiggles to you):
-----------------------------------------------------------------------------------------------------------------------------
1. 清晨的时候被雷雨声叫醒.
2. 发觉自己沉在被窝里.
3. 惺忪着眼竖着耳朵听.
-------------------------------------
What, if anything, is your conclusion?
------------------------------------------------------------------------------------------------------------------------------
I think you will agree that you - like a syntactic classical computer - have no understanding or consciousness of what the input says.
And if you had a really good book of rules, you could, in theory, use it to combine these meaningless squiggles into an answer in Chinese - but once again, you would still be clueless as to what that answer says, with zero consciousness of what you write as output.
This is exactly the situation for syntactic classical computers - clever, but with no consciousness of what's coming in or going out. Now feel the difference when an input slip comes in, in English this time.
Post-Classical Computer System - semantic, reason-based I/O, so machine consciousness is present

Input Slip (in English, representing semantic components to you, which you can understand):
-----------------------------------------------------------------------------------------------------------------------------
1. No eggs of mine, that are new, have been boiled.
2. All my eggs in the refrigerator are fit to eat.
3. No unboiled eggs of mine are fit to eat.
-------------------------------------
What, if anything, is your conclusion?
------------------------------------------------------------------------------------------------------------------------------
Do you feel the difference? You can actually (semantically) understand what's on the input slip and what your English answer says - in other words, you are cognitively conscious of the input slip and of your English output. And that is exactly how post-classical conscious machines experience it, and how they differ from classical systems: they actually understand, or are cognitively conscious of, their I/O.
(* 5 bonus points for whoever can get the correct conclusion *)
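And for anyone who wants to check their answer: below is a rough sketch in Python of how the three egg statements chain together once you reason over their meanings rather than over squiggles. The predicate labels (new, not boiled, not fit, not in fridge) are just names I picked for illustration, and statement 2 is used in its contrapositive form.

# Forward-chaining over the meanings of the three statements about eggs.
RULES = [
    ({"new"},        "not boiled"),     # 1. no new eggs of mine have been boiled
    ({"not boiled"}, "not fit"),        # 3. no unboiled eggs of mine are fit to eat
    ({"not fit"},    "not in fridge"),  # 2. (contrapositive) eggs unfit to eat are not in my refrigerator
]

def conclusions(facts):
    facts = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new follows
        changed = False
        for premises, result in RULES:
            if premises <= facts and result not in facts:
                facts.add(result)
                changed = True
    return facts

print(conclusions({"new"}))             # derives: new, not boiled, not fit, not in fridge

The chain new -> not boiled -> not fit to eat -> not in the refrigerator is exactly the kind of step-by-step semantic reasoning that the Chinese version of the slip never let you perform.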