Big question: Who gets the blame if a cyborg drops a kid on its head?

Boffins demand panic button to control wacky brain-machine interfaces in the future

Who is responsible if a robot controlled by a human brain drops, say, a baby?

It's a bizarre question, but one worth asking, according to scientists who raise a host of ethical and social questions around brain-machine interfaces (BMIs) in a Policy Forum article in Science.

Research in this area began in the 1970s, when the US military research arm DARPA entertained the possibility of using human brains to control devices. The idea has since gone mainstream, with Facebook CEO Mark Zuckerberg and Neuralink CEO Elon Musk investing in neural laces capable of merging man and machine as cyborgs.

"Brainjacking" can potentially be a serious issue, the researchers said. Facebook is aiming to create a system that "can type 100 words per minute straight from your brain." Data about brain activity needs to be protected if it can potentially reveal sensitive information and communications.

Professor Niels Birbaumer, co-author of the paper and a Senior Research Fellow at the Wyss Center for Bio and Neuroengineering in Geneva, Switzerland, said in a statement: "The protection of sensitive neuronal data from people with complete paralysis who use a BMI as their only means of communication is particularly important. Successful calibration of their BMI depends on brain responses to personal questions provided by the family (for example, 'Your daughter's name is Emily?').

"Strict data protection must be applied to all people involved. This includes protecting the personal information asked in questions as well as the protection of neuronal data to ensure the device functions correctly."

Autonomy is another tricky area – back to the baby question. If a human used a semi-autonomous robot to pick up a baby and the bot accidentally dropped it, who is to blame? Given that you'd want to be able to stop the robot before it lost its grip, the authors propose an emergency stop function in all autonomous systems controlled via brain-machine interfaces.

It's a solution similar to what has been proposed for AI agents by DeepMind and the Future of Humanity Institute at the University of Oxford.

John Donoghue, co-author of the paper and director of the Wyss Center, said, "Although we still don't fully understand how the brain works, we are moving closer to being able to reliably decode certain brain signals.

"We shouldn't be complacent about what this could mean for society. We must carefully consider the consequences of living alongside semi-intelligent brain-controlled machines, and we should be ready with mechanisms to ensure their safe and ethical use." ®
