
Azure lost track of some virtual machine backups for almost 13 hours

So soz, says Redmond: Your VMs were still there, Azure just forgot about them

Microsoft's Azure cloud has had a nasty hiccup that saw it unable to find virtual machines recently added to its backup service.

As Microsoft's status page records, from “04:40 to 17:36 UTC 17 Feb 2016, customers using Azure Backup Service might be unable to discover newly added IaaS Virtual Machines within their old Backup Vaults (created before Feb 7th) when performing explicit discovery operation.”

Which may well have meant some brown trouser moments for sysadmins who'd bought into the “Cloud operators are so colossal and careful that the chances of data disappearing are vanishingly small” argument.

The good news is that the data never went anywhere. Microsoft says “A recent upgrade updated a property with a wrong value resulting in discovery failures for existing backup vaults.”

Once the glitch was understood, Azure admins “Updated the property to the correct value which made the discoveries successful.”

Microsoft now promises to “Review the service update logic and implement checks to prevent this from reoccurring.”
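
Microsoft hasn't said what those checks will look like, but the general shape is familiar to anyone who's been bitten by a bad rollout: once an update lands, compare the properties the service actually holds against the values it's supposed to hold, and halt or roll back on any mismatch. The Python sketch below is our own illustration of that pattern, not Microsoft's code; the property names and the fetch_live_properties lookup are hypothetical stand-ins.

    # A minimal sketch of the kind of post-update check Microsoft describes,
    # not its actual tooling. Property names and the lookup function are
    # hypothetical stand-ins for illustration only.

    EXPECTED_PROPERTIES = {
        # Values a healthy backup vault is supposed to carry (illustrative).
        "discovery_enabled": True,
        "container_type": "IaaSVM",
    }


    def fetch_live_properties(vault_id: str) -> dict:
        """Stand-in for a real configuration lookup against the service."""
        # Simulate the 17 Feb scenario: an upgrade has flipped one property
        # to a wrong value, which is exactly what the check should catch.
        return {"discovery_enabled": False, "container_type": "IaaSVM"}


    def validate_after_update(vault_id: str) -> list[str]:
        """Compare live properties with expected ones; return any mismatches."""
        live = fetch_live_properties(vault_id)
        return [
            f"{name}: expected {expected!r}, got {live.get(name)!r}"
            for name, expected in EXPECTED_PROPERTIES.items()
            if live.get(name) != expected
        ]


    if __name__ == "__main__":
        problems = validate_after_update("example-backup-vault")
        if problems:
            # A real pipeline would block or roll back the rollout here.
            raise SystemExit("post-update check failed:\n" + "\n".join(problems))
        print("post-update check passed")

Run as-is, the sketch simulates the failure mode Microsoft describes: one property comes back with the wrong value and the check refuses to wave the rollout through.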

As is proper. And perhaps Microsoft could send some clean undies to the afflicted, too?

The incident is one of many in which small operational and configuration errors take down clouds. Google, for example, endured a 21-hour outage after a case-sensitive variable name was entered incorrectly, and another caused by a patch that wasn't applied to all of its servers.

As ever, the problem isn't the technology: it's the wetware in front of the keyboard. And even the planet's biggest operators haven't yet figured out how to completely prevent that problem. ®
