Ethics in the workplace
As developers, we are often one of the last lines of defense against potentially dangerous and unethical practices.
We’re approaching a time when software will drive the vehicle that takes your family to soccer practice. AI programs already help doctors diagnose disease, and it’s not hard to imagine them recommending prescription drugs soon, too.
Ethics in Technology
Ten million self-driving cars will be on the road by 2020, according to an in-depth report by Business Insider Intelligence. Proponents of autonomous vehicles say the technology has the potential to benefit society in a range of ways, from boosting economic productivity to reducing urban congestion. But others, including some potential consumers and corporate risk managers, have expressed serious concerns about the cybersecurity of the so-called fleet of the future.
Experts say that self-driving cars will be particularly susceptible to hackers. What makes them so vulnerable?
The answer depends on what kind of self-driving car we are talking about and how connected that car is to the outside world. If the car offloads significant computation to the cloud, needs some form of internet connectivity to function, or relies entirely on outside sensors to make its decisions, then yes, it may be susceptible to hackers.
As a result, car manufacturers, like other industries, are developing defense techniques to prevent attacks against their systems. I am not a car expert, but clearly, the fewer security vulnerabilities your software has, the less vulnerable you are to attack. Hence, I would imagine that a lot of effort is going into designing secure, reliable systems. I would also guess that, as in passenger airplanes, cars of the future will run separate computer networks, so that one potentially compromised network cannot affect the car’s other, more sensitive networks.
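As a rough illustration of that segmentation idea, consider a gateway that sits between an untrusted network (say, infotainment) and a safety-critical one (drive control) and forwards only an explicit allowlist of message types. The network names and message IDs below are invented for illustration; this is a minimal sketch of the principle, not any real vehicle's design.

```python
# Hypothetical gateway between two in-vehicle networks: a message from the
# untrusted infotainment bus reaches the drive-control bus only if its ID
# is on an explicit allowlist. IDs here are made up for illustration.

ALLOWED_IDS = {0x101, 0x102}  # e.g. harmless status messages

def forward(message_id: int, payload: bytes, drive_bus: list) -> bool:
    """Forward a message to the drive bus only if its ID is allowlisted."""
    if message_id in ALLOWED_IDS:
        drive_bus.append((message_id, payload))
        return True
    return False  # dropped: a compromised infotainment unit can't inject it

drive_bus = []
forward(0x101, b"\x01", drive_bus)  # allowed through the gateway
forward(0x7DF, b"\xFF", drive_bus)  # blocked: not on the allowlist
```

The point of the design is that even a fully compromised node on the untrusted side can only ever emit messages the gateway was already willing to pass, which limits the blast radius of a successful attack.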