Complex automation won't make fleshbags obsolete, not when the end result is this dumb

We're messy, expensive, lazy, difficult – and entirely necessary

Column Somewhere in the second hour of sorting through a handful of travel reservations that had been added to my calendar, I started to suspect I'd been lied to – by a computer.

Searching in vain for a ticket that, judging by the entries in my calendar, I seemed to have received, I realised that an itinerary I'd drawn up as a suggested travel plan had been emailed back to me, ingested by Gmail, parsed by GCal, and added to my calendar as a convenience.
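To see how this kind of echo chamber gets built, here's a minimal sketch – hypothetical names throughout, and nothing to do with Google's actual code – of a parser that treats anything flight-shaped in an email as a confirmed booking, no matter who wrote it:

    import re
    from dataclasses import dataclass

    @dataclass
    class CalendarEvent:
        title: str
        date: str

    # The pattern only looks for something flight-shaped; it has no way
    # to tell a confirmed booking from a daydream.
    FLIGHT_PATTERN = re.compile(
        r"(?P<flight>[A-Z]{2}\d{2,4})\s+on\s+(?P<date>\d{4}-\d{2}-\d{2})"
    )

    def parse_itinerary(email_body):
        """Turn anything that looks like a flight into a calendar entry."""
        return [
            CalendarEvent(title=f"Flight {m['flight']}", date=m["date"])
            for m in FLIGHT_PATTERN.finditer(email_body)
        ]

    # A suggested plan, mailed back to its own author...
    draft = "How about QF93 on 2017-06-12, returning QF94 on 2017-06-26?"
    for event in parse_itinerary(draft):
        print(event)  # ...lands in the calendar as though it were a booked ticket

A human glancing at that message would notice there's no booking reference and no airline on the From: line. The pattern-matcher, by design, never thinks to look.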

What I saw in my calendar was simply my digital reflection, created by layers of automation – all quite clever in themselves, but in aggregate almost perversely stupid, the natural product of a range of design decisions aimed at removing most of the thought from our communications.

We embrace automation because we're under the impression that it saves time. But when it goes wrong – and it always goes wrong at some point – the reinforcing nature of the errors tends to obscure the cause. These systems end up covering for one another, filling in the blanks, until everything has been scrawled over in gibberish.

Automation brings two things front of mind in most people: A) that it will put us all out of work, or B) that it will rise up and kill us. I'm beginning to wonder if it's not a bit from column A and a bit from column B. First, automation takes away the need to think about anything; then, once we're infantilised within the cocoon of smooth automation, a stupid mistake amplifies to lethality, all because there were never enough humans in the loop, keeping a steady hand and a watchful eye on things.

One thing becomes more certain as we increase the depth and complexity of automation: the value of humans. We're messy, expensive, lazy, difficult – and entirely necessary. We keep the big picture in mind at all times because that's how you survive on the plains of Africa. It's a unique human gift, one that we never recognised until just a few moments ago, and one that's going to grow in importance as we continue to make the world smart – if we design for it.

Many of these systems assume the primacy of automation – that it's always going to work as desired, when it merely works as designed. Too much of the time, designers live so far from the rough edges of their creations that they never see beyond their own desires, realised in inexpensive, scalable automation.

Machines can scale, but the interfaces between machines and human beings? We're learning those don't scale nearly so well. Unfortunately, these machines tend to be so opaque in their operations that it's difficult to inspect their processes for the sorts of logical errors that, though invisible to them, would appear glaringly obvious to us. Automation without transparency makes the unpredictable dangerous.

Given that many machine learning systems cannot explain what they know, but merely perform it, the tension between automation and transparency will only grow. In a calendaring system the dangers seem minimal (though it is possible to imagine scheduling automation generating a flash crowd through innocent accident). In something like an autonomous vehicle or a drone, the menace of lethality necessitates an open approach. We need to design these systems in the open, and openly oversee their operations – not with more layers of automation, but with more of us expensive, lazy, difficult and entirely necessary humans.

To feel at all secure in a world of pervasive, complex automation, we'll need to keep an eye on things – everywhere, all the time. Here, the economic drive toward automation flashes its silver lining.

We will need a new generation of workers – "robot minders" who spend their careers watching intently, tuning constantly, and keeping all of our very powerful yet very stupid automation from thoughtlessly hurting us. ®
