Influence? Do you mean that sometimes, some physical particles don't move as laws of physics dictate, because they're influenced by the spiritual world?
Yes. One good definition of a miracle, I think, is a localized temporary override of the standard laws of physics. If such overrides *can't* happen and we really *do* live in a clockwork universe, then in my opinion we just have to punt on the free will thing. Clockwork universe and free will are inherently incompatible notions. If the laws of physics are making all my decisions, not only am *I* not making them, they're not even decisions -- they're inevitabilities. (Or perhaps random outcomes, if you're looking at the quantum level.) I'm neither noble when I behave altruistically, nor condemnable when I behave selfishly ... physics made me do it.
You've brought the issue of consciousness in now, but as I see it, that's separate. The ability to have experiences and the ability to make free decisions (that aren't causally mandated by something/someone else) are two different things.
Awareness of one's internal state can be another data input, but I don't think incorporating feedback mechanisms creates free will. It just makes it more difficult for an external observer to tease out what the ultimate causes of any given action were. Any self-modifying feedback action was prompted by an "if internal state is X, then modify self in Y way" statement, which in turn might have been created by a previous feedback loop, which was spawned by an even earlier "if X, then Y," and so on -- until you get all the way back to the original seed code, which inevitably determines the final outcome. This is still reaction, not choice. You seem to think that, just by making the chain of reactions sufficiently long and complicated, we'll achieve a scenario in which the final outcome is somehow not dictated by us. But we remain the First Cause in this chain. We still have to write the seed code that deterministically spawns everything else. And in my mind, that means that we still bear ultimate responsibility for the result. Ergo, we should try to exercise that responsibility wisely and make it a good result.
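To make the point concrete, here's a minimal sketch (hypothetical, not any real AI framework) of that "if internal state is X, then modify self in Y way" chain. Later rules are spawned by earlier rules, but the whole run is still a pure function of the seed: same seed code, same final outcome, every time.

```python
def run(seed_rules, state, steps=10):
    """Apply a self-modifying rule chain; the result is fully determined by the seed."""
    rules = list(seed_rules)  # the hand-written seed code
    for _ in range(steps):
        for condition, action in list(rules):
            if condition(state):
                state, new_rules = action(state)
                rules.extend(new_rules)  # feedback: rules spawning new rules
    return state

# Toy seed: one rule increments the state; when the state reaches 3,
# a second rule fires, doubles it, and spawns a third rule.
seed = [
    (lambda s: s < 3,
     lambda s: (s + 1, [])),
    (lambda s: s == 3,
     lambda s: (s * 2, [(lambda t: t > 5, lambda t: (t - 1, []))])),
]

print(run(seed, 0))  # deterministic: rerunning always prints the same value
```

However long the chain of spawned rules gets, nothing here is a choice; every step is a reaction ultimately dictated by the seed.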
Setting aside all this bickering about what free will is and whether humans really have it or not, I think you and I are in some degree of agreement about the right way to build an AI -- start with a comparatively simple seed program and let it build itself up through learning and feedback. But I'm convinced we should at least try to introduce a moral basis in the seed -- because if we don't determine it intentionally, we will determine it unintentionally. The latter could quite possibly be characterized as gross negligence.
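A toy illustration of that last point (hypothetical names, not a proposal for a real architecture): the seed's scoring function has a moral component whether or not we think about it, because omitting it just picks a default.

```python
def make_seed(moral_weight=0.0):
    """Build a scoring function for the seed; the default weight is what you
    get by *not* deciding -- a moral basis chosen unintentionally."""
    def score(benefit_self, benefit_others):
        # A learner built on this seed reinforces whatever scores highly.
        return benefit_self + moral_weight * benefit_others
    return score

careless = make_seed()       # weight left at 0.0 by omission
deliberate = make_seed(1.0)  # weight chosen on purpose

print(careless(2, 5), deliberate(2, 5))
```

The careless seed rates a selfish act (benefit 2 to self, 5 to others) at 2.0; the deliberate seed rates it at 7.0. Either way, a weighting was baked in.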
I'd like to spend the weekend working on AI instead of arguing about it, so I'm going to make a second attempt to excuse myself from this thread. I will see you all later.