The ethics of software

November 23, 2014 · Posted in Blog 

I have tweeted (@bobbowie) a couple of times recently about the ethics of software. Increasingly, we see examples of software controlling machines that can be involved in life-or-death situations. This could be weapons that choose which target to strike (see the New York Times piece here), or self-driving cars that might have to decide between a manoeuvre that leads to the death of the driver (hitting a wall) and hitting a person in the street. (See the BBC Radio 4 programme, The Digital Human.)

This raises a number of ethical issues.
(1) What is the moral culpability of programmers? If you program the software that makes that decision, are you responsible for any deaths?
(2) Should that code be reviewed by an ethics committee, such as you find in hospitals, that approves or blocks the program? Who should sit on that committee?
(3) What does this say about the nature of freedom and morality in 'pre-conscious' robots?
(4) Should the taking of life be reserved for human action? Is there any moral difference between the soldier controlling a drone by remote and the technologist who programs the drone to seek out targets?

This week's ethics starter for ten!
