The programmers may not be programming the cars to make these types of heuristic decisions (at this point), but when they are designing their algorithms they are taking these possible quandaries into account. They have to. How can they not?
They're designed to be safe, not to be as safe as possible. So when a robot does something unsafe, it's almost certainly the result of the robot not understanding its environment correctly, or a software bug.
And while you'd think that we'd have to take those kinds of things into account, that's actually not quite true. At least not yet. I'd expect that the developers would be more than happy to let the car's choice in that situation be governed by emergent behavior rather than engineering the system around a predetermined set of mores.
One of the problems we run into is the human tendency to anthropomorphize things that appear to act with human intelligence. A robot makes a decision and we assume that it's making that decision for the same reasons we would make that same decision. Then a similar situation arises and we assume that the robot would again make the same choices as we do, for the same reasons. But it doesn't, because the robot has a vastly different representation of the world. What appears to be similar can actually be vastly different.
Take the example of a kid running out in front of an autonomous car. If we had enough time, we'd (probably) conclude that the best thing to do is apply full brakes and avoid the kid at all costs, even if it meant a head-on collision. We might even be able to determine what the driver of the oncoming car would do, given what they can see.
Developing an autonomy system, you're not thinking anywhere near that level. You're thinking: okay, my LIDAR is picking up a substantial point cloud that is moving across the street at a speed consistent with a child running. The color imaging suite may have had enough time to give me a most likely object type and a probability. I probably know there's a car in the other lane, but it's doubtful that I've spent or will spend a whole lot of time evaluating trajectories that go into oncoming traffic, because that's an exceptional behavior and I don't want my system to be able to execute exceptional behaviors willy-nilly. It's just too hard to debug, and the emergent behaviors are too unpredictable. Furthermore, in the back of my head I'm wondering: what happens if the robot's understanding of the world is wrong? A cardboard box blowing across the street on a windy day might look like a child to our sensors. I don't want to be responsible for a car swerving into oncoming traffic and killing people because a kite blew across the road.
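To make that concrete, here's a minimal sketch of the kind of gating I mean. It's purely illustrative: the Detection fields, the labels, and the 0.95 threshold are assumptions made up for the example, not anything from a real perception stack.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    lateral_speed_mps: float   # how fast the tracked point cloud is moving across our lane
    label: str                 # best guess from the imaging pipeline, e.g. "pedestrian"
    label_confidence: float    # 0.0 - 1.0 from the classifier


def allowed_responses(det: Detection) -> list[str]:
    """Return the maneuvers the planner is permitted to consider.

    Swerving toward oncoming traffic is an exceptional behavior, so it only
    gets unlocked when perception is highly confident; a cardboard box or a
    kite can produce a track that looks a lot like a running child.
    """
    responses = ["brake_in_lane"]  # always available, always debuggable
    if (det.label == "pedestrian"
            and det.label_confidence > 0.95
            and det.lateral_speed_mps > 0.5):
        # A real system would also verify the escape lane is clear; this sketch
        # only illustrates the confidence gating, not the trajectory search.
        responses.append("evaluate_evasive_trajectories")
    return responses


print(allowed_responses(Detection(2.0, "pedestrian", 0.60)))  # ['brake_in_lane']
print(allowed_responses(Detection(2.0, "pedestrian", 0.97)))  # both options
```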
So a more realistic situation is this: as the car is driving around, it continuously evaluates emergency trajectories. If for whatever reason the car doesn't have one, or can't find enough, then it probably stops or slows down until it does. So at every point in time there is always one or more bailout strategies. If the robot detects something moving in front of the vehicle that necessitates an emergency maneuver, then it will either take the first safe trajectory, the last one that was deemed to be safe, or come to a full stop.
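Here's a rough sketch of that loop, again with made-up names (plan_bailouts, the toy world dictionary, the maneuver strings) rather than any real planner API. The point is just the structure: always keep a verified bailout on hand, slow down when you can't find one, and fall back to the last known-safe plan or a full stop when something jumps out in front of the vehicle.

```python
def plan_bailouts(world: dict) -> list[str]:
    """Stand-in for an emergency-trajectory planner: return the escape
    maneuvers that can currently be verified as safe."""
    options = []
    if world.get("clear_ahead_m", 0.0) > 30.0:
        options.append("hard_brake_in_lane")
    if world.get("shoulder_clear", False):
        options.append("pull_to_shoulder")
    return options


def control_step(world: dict, last_safe_bailout: str | None) -> tuple[str, str | None]:
    """One planning cycle: keep a bailout on hand, use it if something jumps
    out in front of the vehicle, otherwise keep driving."""
    bailouts = plan_bailouts(world)

    if world.get("obstacle_in_path", False):
        # Emergency: prefer a bailout verified against the current world, fall
        # back to the last one deemed safe, and use a full stop as the default.
        if bailouts:
            return bailouts[0], bailouts[0]
        return last_safe_bailout or "full_stop", last_safe_bailout

    if not bailouts:
        # No verified escape route right now: slow down until one exists.
        return "reduce_speed", last_safe_bailout

    # Nominal driving: remember the freshest safe bailout for the next cycle.
    return "follow_nominal_route", bailouts[0]


# Example cycle: a child-sized track appears in the lane while a bailout exists.
action, remembered = control_step(
    {"clear_ahead_m": 45.0, "shoulder_clear": True, "obstacle_in_path": True}, None)
print(action)  # hard_brake_in_lane
```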
The moral dilemma we're dealing with is this: autonomous cars will save lives. However, they will also cost them. They will certainly save more lives than they cost, but people don't care about the car accidents that didn't happen. You can't find the people who didn't die because a robot was driving instead of a more fallible human, but you can certainly find the people who did die because of a robot's mistake. That has been, and probably will continue to be, the biggest fear in the industry. If autonomous cars reduce the number of fatalities by 100x but still occasionally (though far less frequently than humans) kill people, they'll be known as killer robots.
And just to be clear, I don't work for Google or on self-driving cars, but I have been designing autonomy systems for 15 (ugh...) years.