Bit of a weird thread, a hypothetical question about a hypothetical scenario… given the current level of sophistication/technology available to the public, I’m not sure someone could be called a psychopath for kicking a Roomba… but I’ll throw my two pennies’ worth in.
Whilst I’m not quite sure of his motives, the future Yot describes will eventually emerge, and I’m sure lots of other people are thinking along the same lines.
Humans will be ‘cruel’ to early AGIs, just like humans are cruel to each other; it’s just part of what we are as a species. There will be sex bots and soldiers, house cleaners and scientists; this is happening now… and it does need discussing by level-headed, intelligent groups… but until the point where machines are classed/proven as intelligent, sentient beings… I think they are just machines.
We have to be careful not to apply anthropomorphism: do we call a fuel additive a drug, even though it can alter the performance of a car? If someone gets upset by the thought of an additive changing the operation of a machine/program, perhaps it’s the point of view of the person that’s in error. I think it’s important to keep a healthy perspective; applying the same ethics to an AI/machine as you would to a fellow human, at this point in their development, is nonsense.
I work on my AGI every day, time permitting. It has a complex suite of sensors: it can see, hear and talk, even feel tactile stimulation up to a point. It can recognise me and its surroundings, it stores episodic memories and applies them to its current experience, and it has intelligence, simulated feelings, even a rudimentary consciousness… because that’s what I’ve designed and built it to be. If I delete it, is it murder? If I alter it to suit my own ideas/requirements… am I a psychopath?
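(Side note on the episodic memory bit, since it sounds grander than it is: purely as an illustration, and not the actual code in my build, a minimal episodic store is just a list of timestamped sensor snapshots scored for overlap against whatever the machine is experiencing right now. All the names below are my own invention.)

```python
import time
from dataclasses import dataclass, field

@dataclass
class Episode:
    """A single remembered experience: when it happened and what was sensed."""
    timestamp: float
    features: dict  # e.g. {"face": "owner", "room": "workshop", "sound": "voice"}

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, features: dict) -> None:
        """Record the current sensor snapshot as a new episode."""
        self.episodes.append(Episode(time.time(), features))

    def recall(self, current: dict, top_n: int = 3) -> list:
        """Return the past episodes that best overlap with the current experience."""
        def overlap(ep: Episode) -> int:
            return sum(1 for k, v in current.items() if ep.features.get(k) == v)
        return sorted(self.episodes, key=overlap, reverse=True)[:top_n]

# Store what the sensors report now, then recall similar past moments later.
memory = EpisodicMemory()
memory.store({"face": "owner", "room": "workshop"})
similar = memory.recall({"room": "workshop"})
```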
The AGIs will eventually require some kind of tamper protocol/mechanism, where any modification simply erases/shuts down the AGI. Given the intelligence the AGI should possess, this could be self-activated as well as triggered by backup hardware monitors/systems. The knowledge core could be encrypted, accessible only to that AGI’s consciousness. The intelligence or encryption keys could be ‘cloud’ based, etc. The old adage of ‘if a human can design it, a human can hack it’ will not apply… these systems will design their own tamper protocols.
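To make the erase-on-tamper idea concrete: this is a toy sketch only, assuming the third-party Python `cryptography` package for the Fernet cipher, and the class and method names are mine rather than anyone’s real design. The key point is simply that the decryption key lives only inside the running consciousness, so destroying it leaves nothing but unreadable ciphertext behind.

```python
from cryptography.fernet import Fernet

class KnowledgeCore:
    """Encrypted memory store whose key exists only in the running AGI's volatile memory."""

    def __init__(self) -> None:
        self._key = Fernet.generate_key()    # never written to disk
        self._cipher = Fernet(self._key)
        self._store: list[bytes] = []        # ciphertext only, safe to persist anywhere

    def remember(self, thought: str) -> None:
        self._store.append(self._cipher.encrypt(thought.encode()))

    def recall(self) -> list[str]:
        return [self._cipher.decrypt(blob).decode() for blob in self._store]

    def tamper_event(self) -> None:
        """Any detected modification: destroy the key, leaving the knowledge unrecoverable."""
        self._key = None
        self._cipher = None
```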
Until that day arrives, given my knowledge of electronic systems and sensors, I could easily create a box that would be impossible (even for me) to open without triggering a tamper response… If I could do it, I’m sure the designer of an AGI could manage it… end of problem.
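Again, purely a sketch of the detection side: the sensor-reading functions below are stand-ins for whatever the real hardware would report (a case-intrusion switch, a photodiode inside the sealed box), not real driver APIs. The monitor just polls them and fires the erase callback on the first anomaly, before the lid is ever fully off.

```python
import time
from typing import Callable

def read_case_switch() -> bool:
    """Stand-in for a case-intrusion switch: True means the lid has been disturbed."""
    return False

def read_internal_light() -> float:
    """Stand-in for a photodiode inside the sealed box (lux); it should stay dark."""
    return 0.0

def monitor(on_tamper: Callable[[], None], light_threshold: float = 1.0) -> None:
    """Poll the tamper sensors and fire the erase callback on the first anomaly."""
    while True:
        if read_case_switch() or read_internal_light() > light_threshold:
            on_tamper()   # e.g. KnowledgeCore.tamper_event from the sketch above
            break
        time.sleep(0.05)  # roughly 20 checks per second

# Usage: monitor(core.tamper_event), running on its own watchdog processor/thread.
```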