
Built to Kill

Image credit: https://www.flickr.com/photos/61457605@N04/5598523831/in/photolist-9wHU5X-xUG7H-9wHUsr-xUFQQ-8bhh7Z-cocYvA-9wHUg6-8R4GYg-8R7PFA-8bhhiP-8vP3az-5Fni77-8R7PLw-8R7Qfh-ADuSK-64hFXR-5Fi1qX-4UWMYs-daiDxv-5FngAy-74A8FA-9ZwyX-4tTh3Z-3qVB3-oq3CNJ-daizyc-eaLLrR-bnz2YY-xcaLy-5G3oNL-8R4HyT-8R7Qvw-8R4GHD-ammQSE-74A9b7-aDVrea-8R7Q1f-8R4HtM-8PyTT-74wetK-5FY8wR-ciGDdb-8R7PVL-bnz2QQ-74wfbp-8R4GCc-GZdxZ-74A8Bd-8R4Gyv-9KFGgS

During a conversation with a lead engineer working on the Google self-driving car project, the engineer mentioned that the car would be programmed to consistently break the speed limit, travelling on average 10 mph over any posted limit. Why design a car to deliberately break the law? Primarily for safety: since nearly every human-operated car around the Google car is likely to be speeding, travelling slower than the surrounding traffic would put the Google car's passengers at greater risk of an accident.
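To make that behavior concrete, here is a minimal sketch of such a speed policy in Python. The real control software is not public, so the function name, its inputs, and the use of the 10 mph figure as a hard ceiling are all assumptions for illustration only.

```python
def target_speed(posted_limit_mph: float, traffic_speed_mph: float) -> float:
    """Aim to match surrounding traffic, but never exceed the posted
    limit by more than the assumed 10 mph allowance."""
    return min(traffic_speed_mph, posted_limit_mph + 10.0)


if __name__ == "__main__":
    print(target_speed(65, 72))  # traffic at 72 in a 65 zone -> match it (72.0)
    print(target_speed(65, 85))  # traffic at 85 in a 65 zone -> cap at 75.0
```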

While this seems like a sensible, if somewhat alarming, design decision on the part of Google's engineers, the choice to build a system that subverts our laws raises important questions about how we build ethics into our functional requirements. The question is well framed by Emily Dreyfus in a Wired article titled “The Robot Car of Tomorrow Might Just be Programmed to Hit You”. The article presents the reader with a very specific edge case: if a Google car is about to crash and sits in the middle lane between two cars, how does it decide which car to crash into?

For instance, an algorithm could be built into the Google car that reads sensor data to determine the size and make of the two neighboring cars. In our scenario, say the car on the left is a Ford Explorer and the car on the right is a Smart Car. Conceivably, the algorithm would determine that the Explorer is better equipped to absorb a crash, so the car steers into the SUV and leaves the Smart Car untouched.
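As a sketch of what such an algorithm might look like, the snippet below ranks the two detected vehicles by a crude crashworthiness proxy (curb weight) and picks the heavier one as the collision target. The class, field names, and weights are hypothetical; a real perception stack and crash model would be far more involved.

```python
from dataclasses import dataclass


@dataclass
class DetectedVehicle:
    lane: str             # "left" or "right"
    make_model: str       # e.g. "Ford Explorer"
    curb_weight_kg: int   # crude proxy for how well it absorbs an impact


def choose_collision_target(left: DetectedVehicle, right: DetectedVehicle) -> DetectedVehicle:
    """Steer toward the vehicle judged better able to survive the impact,
    approximated here purely by curb weight."""
    return left if left.curb_weight_kg >= right.curb_weight_kg else right


if __name__ == "__main__":
    explorer = DetectedVehicle("left", "Ford Explorer", 2100)
    smart_car = DetectedVehicle("right", "Smart Fortwo", 900)
    target = choose_collision_target(explorer, smart_car)
    print(f"Steering toward the {target.make_model} in the {target.lane} lane")
```

Note that the bias discussed below is baked directly into that single comparison: the heavier vehicle is always the one selected.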

The logic of this edge case is clear: “The Explorer is better equipped to handle a moving collision, so it should be targeted for the crash rather than the Smart Car, which would likely crumple and kill its driver.” The problem is that this approach builds bias into the Google car's decision making, in essence targeting SUVs for crashes over every other kind of car. If buyers knew this, SUV sales might suffer as people avoid purchasing a vehicle that could be singled out in a collision. Do we really want to unleash a legion of cars equipped with advanced targeting systems?

The answer is obviously “no,” but what alternatives does this edge case leave us? There is no good alternative and no single correct moral choice in this scenario, yet some choice has to be made. This is exactly the sort of problem that robust requirements analysis can surface early, so it receives full consideration before anything gets built or tested. Essentially, this killer car is a requirements problem. If we don't ask it to select a target, it won't. If we ask it to speed, it will speed. But what should we specify when no good choices present themselves?

In situations like this, the worst thing a product can do is hide this messy moral calculus from its users. As representatives of a product's users, we need to understand how hidden features affect the user's experience with that product. To this end, the best thing a product can do is expose settings that give users the power to choose the outcome they are most comfortable with. Framed as a requirement, that might read:

“As a Google Car driver, I must have the ability to disable crash-optimization settings, so my car doesn't choose a target during a collision.”
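As a sketch of how that requirement might surface in the product, the snippet below models a driver preference that is off by default. The setting name, the default value, and the “brake and hold the lane” fallback are all assumptions for illustration, not anything Google has described.

```python
from dataclasses import dataclass


@dataclass
class DriverPreferences:
    # When False (the assumed default), the car never ranks nearby vehicles
    # as collision targets; in an unavoidable crash it brakes and holds its lane.
    crash_optimization_enabled: bool = False


def plan_emergency_maneuver(prefs: DriverPreferences, preferred_target_lane: str) -> str:
    """Return the maneuver to take when a collision is unavoidable."""
    if not prefs.crash_optimization_enabled:
        return "brake_and_hold_lane"
    return f"steer_toward_{preferred_target_lane}"  # opt-in targeting only


if __name__ == "__main__":
    prefs = DriverPreferences()                    # driver kept the default
    print(plan_emergency_maneuver(prefs, "left"))  # -> brake_and_hold_lane
```

Exposing the choice this way keeps the moral calculus visible and puts it where it belongs: with the user.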
