
Google says self-driving car hits municipal bus in minor crash

That was a great movie for the first few minutes, until it was time to feed the pet, which was a balloon with feet. It all went downhill from there. LOL.

And by downhill you mean amazing, right?!?
 
Perhaps.

But I suspect it will open up a whole new genre of lawsuits against the auto manufacturers, since driver error or misuse has now been removed from the liability equation!

Why should we care if liability is transferred to the vultures at the Big Three?

Far fewer crashes means everything in that industry will take a hit. Even the auto insurers are pissing themselves, worried that their product will not only be less profitable but that the legal mandate to purchase insurance may disappear. Lawyers, cops, and mechanics will be especially hard up. I know, poor them, always thrilled to see more crashes.

But so long as a human is able to override the automated driving system and take control of the wheel (I suspect this option will become illegal as soon as another drunken asshole crashes into a crowd at a homecoming parade), there will be liability.
 
Which "laws"?

Most civil liability has a huge common law component.


And in any event, I'm not aware of any jurisdiction with a statute limiting the common law in the way you suggest. If your non-self-driving car has a defective gas pedal which gets stuck, causing you to get into an accident, you're not simply stuck with the liability. If the person you hit sues you, you turn around and sue the manufacturer.

Been a while since I took civil procedure, but I'm pretty sure it's called a "third party complaint"



I don't see why it would be any different with self-driving cars, unless a state decides to protect self-driving car manufacturers with a law that makes it impossible to sue the manufacturer with a third party complaint in those situations.

You need to do some research before you start saying what you think.

CA and NV state laws already require a person to be behind the wheel of the car, self-driving or not. Other states only allow them on designated roads.

While Google has said they want any ticket, that is just sheer folly.
You own the car; you are responsible for its operation, just like with any other motor vehicle.

You assume the liability. Unless you can prove that it was Google's fault you wrecked, you have no case against Google.
It is up to you as the car owner to ensure that your car is working in accordance with any laws.

If you think there is a software glitch, then you should have taken it in and gotten it maintained or fixed.
 
Creating software that can take into account the unpredictable nature of human drivers isn't going to be a piece of cake.

If the software is found at fault, it makes little sense to go after the 'rider' in that autonomous vehicle; they weren't driving.

Who does that leave as the next likely lawsuit target? The autonomous vehicle manufacturer, right?

Actually it does, because they are the ones responsible for the operation of the vehicle.
They would have to prove that the software was faulty.

Then again, if they knew something was wrong with the car, they should have taken it in to get it fixed.
I don't see Google or any other car company opening themselves up to millions and billions of dollars in lawsuits.
 
Which "laws"?

Most civil liability has a huge common law component.

And in any event, I'm not aware of any jurisdiction with a statute limiting the common law in the way you suggest. If your non-self-driving car has a defective gas pedal which gets stuck, causing you to get into an accident, you're not simply stuck with the liability. If the person you hit sues you, you turn around and sue the manufacturer.

Been a while since I took civil procedure, but I'm pretty sure it's called a "third party complaint"

I don't see why it would be any different with self-driving cars, unless a state decides to protect self-driving car manufacturers with a law that makes it impossible to sue the manufacturer with a third party complaint in those situations.



You need to do some research before you start saying what you think.

CA and NV state laws already require a person to be behind the wheel of the car, self-driving or not. Other states only allow them on designated roads.

While Google has said they want any ticket, that is just sheer folly.
You own the car; you are responsible for its operation, just like with any other motor vehicle.

You assume the liability. Unless you can prove that it was Google's fault you wrecked, you have no case against Google.
It is up to you as the car owner to ensure that your car is working in accordance with any laws.

If you think there is a software glitch, then you should have taken it in and gotten it maintained or fixed.



Your post is incoherent, wrong, and has nothing to do with my points about the law of civil liability.

You should get some legal experience before you start saying what you think about the law.
 
Actually it does, because they are the ones responsible for the operation of the vehicle.
They would have to prove that the software was faulty.

How is it that a human driver can operate (drive) an autonomous vehicle which has no operating (driving) controls?

Then again, if they knew something was wrong with the car, they should have taken it in to get it fixed.
I don't see Google or any other car company opening themselves up to millions and billions of dollars in lawsuits.

How do you 'take' the car, the one without driving controls, to the place to get it fixed, without it driving itself there? Autonomous tow truck?
 
Actually it does, because they are the ones responsible for the operation of the vehicle.
They would have to prove that the software was faulty.
Then again, if they knew something was wrong with the car, they should have taken it in to get it fixed.
I don't see Google or any other car company opening themselves up to millions and billions of dollars in lawsuits.

You are simply ignorant about the law. That or this is a concerted trolling effort.



If there's a manufacturing defect that the operator should not reasonably have been aware of, and that defect is the actual and proximate cause of the accident, then what happens in court is: The person who is hit sues the person who hit them. The person who hit them in turn sues the manufacturer for the defect that caused the accident.

It doesn't matter whether the "defect" is bad self-driving software or a gas pedal that gets stuck in a normal car.

There is no rule that the person in the driver's seat is liable for accidents no matter what.



You.
Are.
Just.
Wrong.
 
How is it that a human driver can operate (drive) an autonomous vehicle which has no operating (driving) controls?



How do you 'take' the car, the one without driving controls, to the place to get it fixed, without it driving itself there? Autonomous tow truck?

Have you never seen a driverless car? They all have driving controls.
They are required to have a manual mode.

You, as the operator or owner of the car, are responsible for the car and any damage it might incur or do.
 
You are simply ignorant about the law. That or this is a concerted trolling effort.

Yes, you are ignorant of the law.
CA and NV require drivers behind the wheel.

If there's a manufacturing defect that the operator should not reasonably have been aware of, and that defect is the actual and proximate cause of the accident, then what happens in court is: The person who is hit sues the person who hit them. The person who hit them in turn sues the manufacturer for the defect that caused the accident.

You have to prove that it was a manufacturing defect.

It doesn't matter whether the "defect" is bad self-driving software or a gas pedal that gets stuck in a normal car.
The burden of proof is on you. Your negligence in maintaining your vehicle is on you.
If you think or suspect something is wrong with your car, then it is on you to see that it gets fixed.
It is not the maker's fault if you ignore it.

There is no rule that the person in the driver's seat is liable for accidents no matter what.

Actually, you are liable, along with possibly the owner of the car if you are driving someone else's.
When your friend crashes your car: The rules of auto liability



You.
Are.
Just.
Wrong.

Yes, you are. You seriously need to do research before you post; then you will be able to carry on a conversation about the subject matter.
 
The programmers may not be programming the cars to make these types of heuristic decisions (at this point), but when they are designing their algorithms they are taking these possible quandaries into account. They have to. How can they not?

They're designed to be safe, not as safe as can be. So when a robot does something unsafe, it's almost certainly a result of the robot not understanding its environment correctly or a software bug.

And while you'd think that we'd have to take those kinds of things into account, that's actually not quite true. At least not yet. I'd expect that the developers would be more than happy to allow the car's choice in that situation to be governed by emergent behavior rather than engineering the system to a predetermined set of mores.

One of the problems we run into is the human tendency to anthropomorphize things that appear to act with human intelligence. A robot makes a decision and we assume that it's making that decision for the same reasons we would make that same decision. Then a similar situation arises and we assume that the robot would again make the same choices as we do, for the same reasons. But it doesn't, because the robot has a vastly different representation of the world. What appears to be similar can actually be vastly different.

Take the example of a kid running out in front of an autonomous car. If we had enough time, we'd (probably) conclude that the best thing to do is full brakes and avoid the kid at all costs, even if it meant a head-on collision. We might even be able to determine what the driver of the oncoming car would do, given what they can see.

Developing an autonomy system, you're not thinking anywhere near that level. You're thinking: okay, my LIDAR is picking up a substantial point cloud of data that is moving across the street at a speed consistent with a child running. The color imaging suite may have had enough time to give me a most likely object type and a probability. I probably know there's a car in the other lane, but it's doubtful that I've spent, or will spend, a whole lot of time evaluating trajectories that go into oncoming traffic, because that's an exceptional behavior and I don't want my system to be able to execute exceptional behaviors willy-nilly. It's just too hard to debug, and the emergent behaviors are too unpredictable. Furthermore, in the back of my head I'm wondering: what happens if the robot's understanding of the world is wrong? A cardboard box blowing across the street on a windy day might look like a child to our sensors. I don't want to be responsible for a car swerving into oncoming traffic and killing people because a kite blew across the road.
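
To make that concrete, here's a toy sketch of the kind of reasoning I mean. Everything below is invented for this post (the names, classes, and thresholds); it has nothing to do with Google's actual code.

# Purely illustrative: invented names and thresholds, not anyone's real stack.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    speed_mps: float         # estimated speed of the point-cloud cluster
    heading_deg: float       # direction of travel relative to our lane
    label: str               # best guess from the imaging suite ("child", "box", ...)
    label_confidence: float  # probability attached to that guess

def allowed_emergency_actions(obj: TrackedObject) -> list:
    """Decide which emergency behaviors the planner is even allowed to consider."""
    crossing = 45.0 < obj.heading_deg < 135.0     # roughly perpendicular to our lane
    child_like_speed = 1.0 < obj.speed_mps < 4.0  # walking/running pace

    actions = ["brake_hard"]                      # hard braking is always on the table
    if crossing and child_like_speed and obj.label_confidence > 0.7:
        actions.append("swerve_within_lane")      # note: no swerve into the oncoming lane;
                                                  # that stays an exceptional behavior
    return actions

The point is that the system reasons over clusters, speeds, and confidences, not over "a child's life versus a head-on collision", and the risky part of the action space is deliberately kept small.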

So a more realistic situation is this: as the car is driving around, it continuously evaluates emergency trajectories. If for whatever reason the car doesn't have one, or can't find enough, then it probably stops or slows down until it does. So at every point in time, there is always one or more bailout strategies. If the robot detects something moving in front of the vehicle that necessitates an emergency maneuver, then it will either take the first safe one, the last one that was deemed to be safe, or come to a full stop.
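
In pseudocode-ish Python, that loop might look roughly like this. The interfaces (planner, world, car) are made up just to show the shape of the idea, not anyone's real planner.

# Made-up sketch of a continuous bailout-evaluation loop.
def control_cycle(world, planner, car):
    # Keep a current set of vetted emergency ("bailout") trajectories at all times.
    bailouts = planner.find_emergency_trajectories(world)

    if not bailouts:
        # No known escape route: don't press on at speed, slow down until one exists.
        car.slow_down()
        return

    if world.requires_emergency_maneuver():
        # Take the first trajectory still judged safe, otherwise the last one that
        # was judged safe on a previous cycle, otherwise just stop.
        still_safe = [t for t in bailouts if planner.still_safe(t, world)]
        if still_safe:
            car.execute(still_safe[0])
        elif planner.last_known_safe is not None:
            car.execute(planner.last_known_safe)
        else:
            car.full_stop()
        return

    # Normal driving: remember the best bailout for the next cycle and carry on.
    planner.last_known_safe = bailouts[0]
    car.follow_nominal_plan()

Nothing in there is weighing lives; it's just bookkeeping over precomputed escape routes.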

The moral dilemma we're dealing with is this: autonomous cars will save lives. However, they will also cost them. They will certainly save more lives than they cost, but people don't care about the car accidents that didn't happen. You can't find the people who didn't die because a robot was driving instead of a more fallible human, but you can certainly find the people who did die because of a robot's mistake. That has been, and probably will continue to be, the biggest fear in the industry. If autonomous cars reduce the number of fatalities by 100x but still occasionally (though far less frequently than humans) kill people, they'll be known as killer robots.

And just to be clear, I don't work for Google or on self-driving cars, but I have been designing autonomy systems for 15 (ugh...) years.
 
Have you never seen a driverless car? They all have driving controls.
They are required to have a manual mode.

You, as the operator or owner of the car, are responsible for the car and any damage it might incur or do.

The Real Reason Google's Self-Driving Car Doesn't Have Controls

That's not true. And actually, relying on a human to handle emergency situations makes the car less safe. A person is only good at evaluating what to do in an emergency if they're mentally prepared and have situational awareness. Dropping a person into a situation where they're expected to make the right decision will result in a very high proportion of humans making bad decisions. Autonomy developers understand this, so adopting that kind of "the operator is responsible" mentality won't hold up in court.
 
They're designed to be safe, not as safe as can be. So when a robot does something unsafe, it's almost certainly a result of the robot not understanding its environment correctly or a software bug.

And while you'd think that we'd have to take those kinds of things into account, that's actually not quite true. At least not yet. I'd expect that the developers would be more than happy to allow the car's choice in that situation to be governed by emergent behavior rather than engineering the system to a predetermined set of mores.

One of the problems we run into is the human tendency to anthropomorphize things that appear to act with human intelligence. A robot makes a decision and we assume that it's making that decision for the same reasons we would make that same decision. Then a similar situation arises and we assume that the robot would again make the same choices as we do, for the same reasons. But it doesn't, because the robot has a vastly different representation of the world. What appears to be similar can actually be vastly different.

Take the example of a kid running out in front of an autonomous car. If we had enough time, we'd (probably) conclude that the best thing to do is full brakes and avoid the kid at all costs, even if it meant a head-on collision. We might even be able to determine what the driver of the oncoming car would do, given what they can see.

Developing an autonomy system, you're not thinking anywhere near that level. You're thinking: okay, my LIDAR is picking up a substantial point cloud of data that is moving across the street at a speed consistent with a child running. The color imaging suite may have had enough time to give me a most likely object type and a probability. I probably know there's a car in the other lane, but it's doubtful that I've spent, or will spend, a whole lot of time evaluating trajectories that go into oncoming traffic, because that's an exceptional behavior and I don't want my system to be able to execute exceptional behaviors willy-nilly. It's just too hard to debug, and the emergent behaviors are too unpredictable. Furthermore, in the back of my head I'm wondering: what happens if the robot's understanding of the world is wrong? A cardboard box blowing across the street on a windy day might look like a child to our sensors. I don't want to be responsible for a car swerving into oncoming traffic and killing people because a kite blew across the road.

So a more realistic situation is this: as the car is driving around, it continuously evaluates emergency trajectories. If for whatever reason the car doesn't have one, or can't find enough, then it probably stops or slows down until it does. So at every point in time, there is always one or more bailout strategies. If the robot detects something moving in front of the vehicle that necessitates an emergency maneuver, then it will either take the first safe one, the last one that was deemed to be safe, or come to a full stop.

<deleted to meet forum post size-limit requirements>
Good post.

Thank you.

I spent much of my life in hardware design, and as part of my undergrad degree I had to fulfill a C.S. component, so I have a limited understanding of the complexities S/W involves.

S/W engineering is tough stuff! From my experience, S/W is almost an art. Not everyone can do it well.

S/W guys (with BSCSs) are often totally lost when attempting to design hardware because of their lack of training in electronics, but I suspect they could do it if they had taken a BSEE instead. And most guys who get through a good BSEE can do basic hardware design, usually only needing training in specific standards & practices.

But I've seen brand-new BSCS guys fall on their faces in a high-level professional S/W engineering environment. Conversely, I've seen BSEEs with little training in C.S. rise to write some decent code; I have no idea how they do it, because I'm not very good at it.

So based upon the above, I do feel S/W is an art that some have & others don't!

I may make an exception for analog hardware design, though; analog is an art too, and I've seen new BSEEs that just don't seem to have an in-depth, intuitive grasp of the physical properties and phenomena involved in analog electromagnetics, but they may be fine in the digital realm as long as they don't have to lay out any circuit boards or run complex cabling.
 
The Real Reason Google's Self-Driving Car Doesn't Have Controls

That's not true. And actually, relying on a human to handle emergency situations makes the car less safe. A person is only good at evaluating what to do in an emergency if they're mentally prepared and have situational awareness. Dropping a person into a situation where they're expected to make the right decision will result in a very high proportion of humans making bad decisions. Autonomy developers understand this, so adopting that kind of "the operator is responsible" mentality won't hold up in court.

Google's self-driving cars failed 272 times in 14 months (Wired UK)

You are wrong.
The fact is that they have to have manual controls for humans because the laws regarding the cars themselves require that they do, for situations like the one posted in this article.

Automated or not, whoever operates or owns the vehicle is responsible for the liability for damages.
 
(Moved to evade a troll)
 
Just.... keep your derp to yourself.

Yep, you do, don't you.
You've pretty much been proven wrong this whole thread so far,

from laws requiring drivers in the car
to liability for damage caused by the car.

You should probably do more research before you start telling other people they are wrong.
 
You are simply ignorant about the law. That or this is a concerted trolling effort.

If there's a manufacturing defect that the operator should not reasonably have been aware of, and that defect is the actual and proximate cause of the accident, then what happens in court is: The person who is hit sues the person who hit them. The person who hit them in turn sues the manufacturer for the defect that caused the accident.

It doesn't matter whether the "defect" is bad self-driving software or a gas pedal that gets stuck in a normal car.

There is no rule that the person in the driver's seat is liable for accidents no matter what.

You.
Are.
Just.
Wrong.


CA and NV require drivers behind the wheel

You should read posts before responding to them, even if you just want to troll the poster. This has nothing to do with anything I said.



You have to prove that it was a manufacturing defect. The burden of proof is on you.

Yes, that's what happens in court after lawsuits have been filed. I take it you were unable to read my statements about what happens in court in these situations?


Your negligence in maintaining your vehicle is on you. If you think or suspect something is wrong with your car, then it is on you to see that it gets fixed. It is not the maker's fault if you ignore it.

That's why I said this: "If there's a manufacturing defect that the operator should not reasonably have been aware of"



Actually, you are liable, along with possibly the owner of the car if you are driving someone else's.
When your friend crashes your car: The rules of auto liability

This irrelevant babble has nothing to do with my point.
 
Yep, you do, don't you.
You've pretty much been proven wrong this whole thread so far,
from laws requiring drivers in the car
to liability for damage caused by the car.
You should probably do more research before you start telling other people they are wrong.

You should stop lying and trolling.
 
You should stop lying and trolling.

Projection arguments don't help you in this case.
You seem unable to actually discuss the issue.

I suggest reading a bit more on the rules and regulations that places have on driverless cars, then coming back and actually discussing the topic.
 
Damn, Google is going for a self-driving car that is very realistic. It drives just like some of the drivers here in Houston. What's their next trick? Virtual road rage?

Article is here.

Google should stick to the Internet and leave the automobile stuff to Ford, GM, etc.
 
Have you never seen a driverless car? They all have driving controls.
They are required to have a manual mode.

You, as the operator or owner of the car, are responsible for the car and any damage it might incur or do.

No, I've never been in a driverless car. I was not aware they had standard controls and a manual mode. So, how quickly can the car change from automated mode into driver-controlled mode? In the middle of an impending accident?
 
No, I've never been in a driverless car. I was not aware they had standard controls and a manual mode. So, how quickly can the car change from automated mode into driver-controlled mode? In the middle of an impending accident?

Most are done by simply pressing the brake pedal. Please see the article I posted on this a few posts ago.
I should correct one thing: it says that they have to have some kind of steering device and pedals to be roadworthy.

It seems one of the Google cars has a gamepad of sorts.

Take a look inside Google's cute little self-driving car | The Verge

At this stage in testing, the law dictates that Google's cars need a steering device and pedals to be roadworthy for tests, but Google's cars have no steering column, using instead a gamepad-sized device for both functions.

CA and NV law also require someone to be in the driver's seat.
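
Roughly, the brake-pedal handoff works like this (my own rough sketch of the idea, not Google's or any manufacturer's actual code):

# Rough sketch of the handoff: any driver input on the brake (or a dedicated
# takeover control) drops the car out of automated mode. Names are invented.
AUTOMATED, MANUAL = "automated", "manual"

def update_drive_mode(mode, brake_pressed, takeover_pressed):
    """Driver input always wins: a brake or takeover press switches to manual."""
    if mode == AUTOMATED and (brake_pressed or takeover_pressed):
        return MANUAL
    return mode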
 
No, I've never been in a driverless car. I was not aware they had standard controls and a manual mode. So, how quickly can the car change from automated mode into driver-controlled mode? In the middle of an impending accident?
I think ludin is kinda pulling it out of thin air.

My guess is that you'll see the same amount of manual controls in autonomous cars as you have on other autonomous technologies, such as escalators, elevators and other automated transit systems (like monorails at airports). By that I mean there'll probably be a big red "STOP" button but not much else.

Likewise, we'll probably see a lot of lawsuits aimed at the manufacturers of these technologies, no matter the insistence to the contrary.
 
Most are done by simply pressing the brake pedal. Please see the article I posted on this a few posts ago.
I should correct one thing: it says that they have to have some kind of steering device and pedals to be roadworthy.

It seems one of the Google cars has a gamepad of sorts.

Take a look inside Google's cute little self-driving car | The Verge

At this stage in testing, the law dictates that Google's cars need a steering device and pedals to be roadworthy for tests, but Google's cars have no steering column, using instead a gamepad-sized device for both functions.

CA and NV law also require someone to be in the driver's seat.

Thank you, Ludin. Most informative post.
 
I think ludin is kinda pulling it out of thin air.

My guess is that you'll see the same amount of manual controls in autonomous cars as you have on other autonomous technologies, such as escalators, elevators and other automated transit systems (like monorails at airports). By that I mean there'll probably be a big red "STOP" button but not much else.

Likewise, we'll probably see a lot of lawsuits aimed at the manufacturers of these technologies, no matter the insistence to the contrary.

Why not try reading some of the articles posted, so that instead of making baseless claims you actually know what we are discussing.
No one is pulling anything out of thin air.

Laws require more than just a stop button.
Read the articles and educate yourself on how this works instead of just throwing stuff out there.
 