Two driverless cars stuffed with passengers are ABOUT TO CRASH - who should take the hit?

Big data ethics doesn't need philosophy – boffin

Comment Data analysts don’t need philosophers to help them with ethical decisions - the “science” can figure that out, a top boffin said this week.

Asked about the ethics of big data, the head of Imperial College’s new Big Data dept said: “We treat [ethical issues] not philosophically, but scientifically”.

It’s a startling assertion. Possibly the most startling since Graham Taylor declared that “in football, time and space are the same thing” – a bold metaphysical challenge from the former England and Villa manager.

Dr Yike Guo heads Imperial College London’s Data Science Institute, which brings together researchers from the uni’s medical, engineering, and natural sciences departments, as well as its business school. A few of the many research projects being undertaken are listed here (PDF).

Big Data raises lots of ethical questions, as we begin to trust rule-based machines with decisions, once made by humans, that affect life or death. Last week Fujitsu CTO Joseph Reger raised the example of two autonomously driven vehicles, both containing human passengers, en route for an “inevitable” head-on collision on a mountain road.

Asked Reger: should one vehicle “sacrifice itself” on the basis of who was in each car? What if one contained children, and one didn’t? And what, he suggested, if one vehicle was a motorbike whose rider wasn’t wearing a helmet? Should the cars be allowed to “punish” the helmetless rider? Reger strongly believes ethicists should be involved in such decisions.
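For the avoidance of doubt, neither Reger nor Imperial has published such a rule – but it is worth seeing what one looks like once it is written down. What follows is an entirely hypothetical sketch, in Python, of a hard-coded “who yields” function for the two-car dilemma above; every weight and tie-break in it is a human judgement wearing a lab coat.

# Hypothetical illustration only: a crude "who yields" rule for the
# two-vehicle dilemma described above. Nothing here is anyone's real
# proposal; every number is a human choice dressed up as code.

def who_yields(car_a, car_b):
    """Return which vehicle 'sacrifices itself' under a naive rule set.

    Each argument is a dict such as:
        {"occupants": 4, "children": 2, "helmeted": True}
    """
    def score(car):
        # Arbitrary weights: each occupant counts once, children count
        # again on top, and a helmetless rider is "punished" with a
        # one-point penalty. Someone chose every one of these numbers.
        return car["occupants"] + car["children"] - (0 if car["helmeted"] else 1)

    # The lower-scoring vehicle yields; ties go against car_a -
    # another arbitrary choice hiding in plain sight.
    return "car_a" if score(car_a) <= score(car_b) else "car_b"


if __name__ == "__main__":
    motorbike = {"occupants": 1, "children": 0, "helmeted": False}
    family_car = {"occupants": 4, "children": 2, "helmeted": True}
    print(who_yields(motorbike, family_car))  # -> "car_a": the rider loses

Whatever numbers you pick, the “scientific” bit does no ethical work at all: the ethics went in at the moment somebody typed them.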

We do actually have a few hundred years of thinking about ethics to draw on, much of which is useful, and philosophy at least allows us to smoke out a dubious argument. But Silicon Valley is distinguished by its ahistorical approach: it's a new world, and there's nothing to learn from the past.

Society should have a say too, Reger thought. But Imperial’s Big Data unit is taking a much narrower approach, avoiding ethical dilemmas in its pursuit of technical “solutions”.

Half of the data scientists polled shortly after Facebook’s mood-manipulation experiment was publicised wanted ethical guidelines clarified. But not all of them did – which leaves a great yawning chasm between the two camps.

Maybe boffins such as Guo hope “the computer” can tell us right from wrong, possessing a God-like authority. But this would require us to pretend that nobody programmed the computer, and that it arrived at its decision using magic. ®
