Machine Ethics

Machine ethics, huh? Yeah, we’re going there. Not in the crazy, tin-foil-hat-wearing way, either. In the scary, turn-the-corner-and-it’s-already-there kind of way. It’s something we’ll need to come to terms with, and here’s why.

Short Version:

As we introduce more and more automation into our lives, choices about human and asset safety will need to be made. Will these choices always be in the hands of humans? Are they entirely in the hands of humans now?

More and more, the answer might be no. Believe it or not, open- vs. closed-source development could also weigh in on this in terms of the moral compass we use. Ultimately, until we invent true AI, our programmers are our machines’ moral voice. Unlike medical professionals, who have evolved ethics boards and sworn oaths to defend the very people they serve, IT people have only themselves and their employer’s code of conduct to be governed by.

If IT then bleeds into all other trades, should it not evolve these same mechanisms? Should the ethical, social and cultural training other professions receive not extend into IT as well? Only the future can tell how the trade will change, but until then, it’s entirely up to the insight of companies and their employees to make sure things don’t go off the rails. In some cases, literally.

TL;DR

Morals and ethics pop up in our everyday decisions, so it stands to reason that they pop up in our trades as well. In IT, because of its wide adoption, ethical and moral choices are becoming more and more common. In the case of IT, however, it can sometimes be a single person, team or company choosing what is right or wrong for everyone using the technology worldwide.

Often, we are merely operating within a predefined set of rules in order to use the technology we need in the first place (example: every password complexity screen, ever!).
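Even that mundane screen is a moral policy written down as code. Here’s a throwaway sketch, purely illustrative; the length and character rules below are assumptions I made up, just like any real policy is assumptions somebody else made up:

```python
import re

# Illustrative only: one programmer's idea of "complex enough",
# silently imposed on every user of the system.
def is_acceptable_password(pw: str) -> bool:
    return (len(pw) >= 12
            and re.search(r"[A-Z]", pw) is not None        # an uppercase letter
            and re.search(r"[0-9]", pw) is not None        # a digit
            and re.search(r"[^A-Za-z0-9]", pw) is not None)  # a symbol

print(is_acceptable_password("correct horse battery staple"))  # False: no uppercase, no digit
```

Long, memorable passphrases fail; short, forgettable ones pass. Nobody asked you; someone decided.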

I’m not going to get all worst-case scenario on you yet; that comes in a few moments. I will provide an example or two, though, and you can draw your own conclusions as to whether there is something that can be done to properly shape this.

Imagine you have a smart fridge. Your new smart fridge does a whole host of things, not the least of which is scanning the barcodes of your food to check for expiration dates. As a programmer, or a product designer, you’re immediately presented with a decision: what happens when we find out a food’s expiration date has been reached?

The normal response is: we trigger an alert. That’s fine and dandy, but what type of alert? For how long? Maybe the fridge will link to your social media accounts, or have one of its own, and alert you from that. Maybe the company will run its own email service, and as long as the fridge can reach the public internet, it will email you at set intervals about these outdated items. In each of these cases there are certain responsibilities for the user and for the appliance. The user can govern themselves, but who governs the appliance?

OK. So, in this hypothetical scenario your milk has gone bad and your fridge has sent you some kind of alert. What happens when the alert is ignored? Do you constantly alert the user until the offending dairy has been removed? Do you let the user silence alerts? If so, do you assume the risk of their potential food poisoning, or are you free of responsibility because of an agreement the user clicked “Agree” on without reading it first?
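Just to show how fast those questions turn into code, here’s a minimal sketch of one possible alert policy. Everything in it is hypothetical: the item model, the grace period, the mute switch. Notice that every parameter is one of the questions above, pre-answered by a programmer:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical smart-fridge alert policy. The names, thresholds and
# mute behaviour are illustrative assumptions, not a real appliance API.

@dataclass
class Item:
    name: str
    expires: date

def should_alert(item: Item, today: date, muted: bool,
                 grace: timedelta = timedelta(days=0)) -> bool:
    """Decide whether to nag the user about this item today.

    Each argument encodes an ethical choice made for the user:
    - muted: may the user silence alerts, assuming the risk themselves?
    - grace: do we trust the printed date blindly, or allow a margin
      because "best before" is not the same thing as "unsafe"?
    """
    if muted:
        return False  # the user took on the risk by silencing alerts
    return today > item.expires + grace

milk = Item("milk", expires=date(2024, 5, 1))
print(should_alert(milk, today=date(2024, 5, 3), muted=False))  # True
```

Whoever picks the defaults for `muted` and `grace` is making the call for every household that buys the fridge.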

What happens when the user listens? Maybe the milk was perfectly fine, and because of YOUR alert, they threw out two litres. Not good when you consider everyone with your smart appliance, in your town, who shops at the same store and probably has milk with the same date on it. Think of the waste.

I get it. The milk scenario isn’t life or death unless you’re in a vulnerable population (the very young and the elderly are more likely to become critically ill, or die, from food poisoning). But think about the worldwide waste that could occur simply because your fridge was smart enough to scan a barcode, but too dumb to recognize that the food wasn’t actually bad. Even worse, based on your smart device’s alert, a concerned parent or guardian might ignore their better judgement and waste that food even though they felt it was fine for human consumption.

You can see that the more automation is introduced, the more logic and input are needed to make the right choices. It’s not a far leap to take this reasoning into other areas, like transportation. If crashing into a wall causes the least amount of damage and serves the overall good of everyone around you, will your car kill you intentionally (idea sourced from Science Daily)?

If the metric for who gets to live in that scenario, based on the program the car is running, is the number of people in the vehicle, will people start cheating the system and tricking the weight sensors in their car seats to get a better chance of automated survival?
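It helps to see what such a metric might look like as code, because it makes one thing plain: whoever writes this function is writing the moral policy. To be clear, this is a hypothetical illustration, not how any real vehicle works; the function names, weight threshold and tie-breaking rule are all my assumptions:

```python
# Hypothetical illustration only: no real vehicle exposes logic like this.
# It shows how a "minimize harm" policy reduces to a comparison someone
# programmed, and how a spoofable input (seat weight sensors) feeds it.

def occupants_from_seat_sensors(seat_weights_kg: list[float],
                                min_person_kg: float = 20.0) -> int:
    """Estimate occupant count; trivially fooled by a well-placed weight."""
    return sum(1 for w in seat_weights_kg if w >= min_person_kg)

def choose_maneuver(occupants: int, pedestrians_at_risk: int) -> str:
    """Crude utilitarian policy: protect whichever group is larger.

    The head-counting, the tie-breaking rule and the blind trust in the
    sensors are all ethical choices made long before the crash.
    """
    if pedestrians_at_risk > occupants:
        return "swerve into wall"  # sacrifice the occupants
    return "stay on course"        # protect the occupants

# A 70 kg driver plus two 25 kg sandbags reads as three occupants.
print(choose_maneuver(occupants_from_seat_sensors([70, 25, 25]), 2))  # stay on course
```

Two sandbags on the back seat, and the “utilitarian” calculus now counts three occupants. The policy didn’t change; the inputs were gamed.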

With every example you can see how it boils down to the system’s design, what the designer or programmer values most, and how the device is used by the people who adopt it. Ultimately, the employees behind a product answer to a company or team leader whose goal is delivering to retail for a profit. The key there is the profit bit. Can we trust that? Do we have a choice? In the case of open-source projects, the responsibility falls to the project’s global contributors. Are you going to agree with the masses, or fork your own version of the project and move forward? Even if you do, will yours be the globally dominant version?

As technology creeps into vehicles, medical equipment, safety equipment, utilities and everything else under the sun, we might find ourselves picking a product or service less on our favourite brand and more on our moral values.

Since profit is directly tied to technology, this could become very political, if it hasn’t already. Unfortunately, even though technology is rooted in science, profit could dictate the ethical and moral choices a company makes. Rather than assisting the people using the technology, or providing options for the greater good, we might push companies toward sensor-packed toys instead of globally meaningful products.

As consumers, and as the IT professionals who provide these services, we have a global responsibility. Let’s not lose sight of that, and recognize that our buying power can drive not only innovation, but a company’s moral and ethical choices. Combining all of that could lead to real-world change that we can all be a part of.

As always, feel free to share, like, post or whatever other media lingo you use. If you have any comments, leave them in the comments section below. If you want to get in contact, email me at comments@digishock.ca.
