This is more or less my line of thinking too. Today's computer systems just don't possess behaviors that weren't designed into them. Directly or indirectly, intentionally or unintentionally, a computer does everything it does because it was explicitly told to, not because it decided to do something on its own. Even bugs and viruses are examples of explicitly telling it to do something, even though it might appear to have a "mind of its own" while misbehaving.
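To make that concrete with a toy example (purely hypothetical, not taken from any real system): a program that seems to misbehave is still just executing the instructions it was given. Here the "weird" output comes from a one-character off-by-one mistake, not from anything the computer decided to do.

```python
# Purely illustrative: a "misbehaving" program whose surprising behavior
# is still nothing but the instructions it was given.
def average(scores):
    total = 0
    for i in range(len(scores) + 1):      # off-by-one: walks one step past the end
        total += scores[i % len(scores)]  # silently wraps around, double-counting the first score
    return total / len(scores)

print(average([80, 90, 100]))  # prints 116.66..., not 90 -- looks "willful", but it's just the code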
A system given moral decision-making capabilities based on Utilitarianism or Kantianism would inherit the behavioral problems of those philosophies, so in that sense I can see how it could backfire to a certain extent. And it's true that bugs and oversights in the programming can lead to unexpected behavior and undesirable results. But at the end of the day, computers as they are designed today don't "think" the way humans do.
They aren't good at finding patterns the way humans are, and they cannot make assumptions on their own in the absence of facts. We could not make them actually have emotions or desires; we could only make them appear to have those traits through some sort of simulation (which, again, would have explicit instructions for how to do so).
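As a rough sketch of what I mean by simulation (all of the names and rules here are made up for illustration): a program can be made to appear to have moods, but every "feeling" is just an explicit rule someone wrote down in advance.

```python
# Purely illustrative sketch: "emotion" as a lookup table.
# The program appears to have feelings, but each reaction is spelled out ahead of time.
MOOD_RULES = {
    "compliment": "happy",
    "insult": "angry",
    "ignored": "sad",
}

def react(event):
    mood = MOOD_RULES.get(event, "neutral")  # no rule for this event? fall back to indifference
    return f"I feel {mood}."

print(react("compliment"))  # "I feel happy." -- a canned response, not an inner life
```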
As we tie our entire infrastructure together with computers, I think there are other concerns about how the whole system could be brought down that are far more relevant and realistic in the foreseeable future. Computers and AI are still light years away from being able to take over as self-serving slave masters.