
Google right to be forgotten 'looks odd and clumsy'

And: 'Hope he knows someone who's heard of encryption'

QuoTW Something was rotten in the state of the internet this week, though it’s unclear who’s to blame. Reports started emerging from major news outlets that links to their stories had been pulled from the European web as part of Google’s new commitment to the EU right to be forgotten.

Was Google hamming things up by letting slip the whitewashing of the rich and nefarious and making it seem like the inevitable result of the European ruling? Or were our ruling lords and masters once more treading their great big muddy regulatory boots all over our pristine beloved free internet, trampling the poor innocent Chocolate Factory in the process? F**k knows, but it’s certainly shaping up to be something of a sticky mess.

One of the most quoted instances of article-pulling was a well-known 2007 piece by BBC TV journo Robert Peston about Merrill Lynch boss Stan O’Neal.

Peston was quick to point out that the article had been pulled, but he wasn’t keen to leap immediately to Google’s side.

He quite rightly pointed out that the piece was unlikely to meet the standard set by the court that any removal request must concern information that’s “inadequate, irrelevant or no longer relevant”. Governments may be working hard to persuade us that the financial crisis and the ensuing Great Recession are over, but no one’s memory is that bloody short. He also said:

There is an argument that in removing the blog, Google is confirming the fears of many in the industry that the "right to be forgotten" will be abused to curb freedom of expression and to suppress legitimate journalism that is in the public interest.

To be fair to Google, it opposed the European court ruling. But its implementation of it looks odd, perhaps clumsy.

Peston isn’t the only journo to have had his work pulled: a number of other outlets have come forward with examples of missing links as well, and it’s a worrying trend. But you do have to wonder why Google is taking the time to let all these media types know that the links to their great works have disappeared from the search results.

This was also the week when everyone found out that Facebook was even more invasive than they had thought, and had actually allowed researchers to experiment on them without their knowledge.

The social network allowed researchers from Cornell University to manipulate people’s News Feeds in order to see if they could make them sad or happy on cue.

The uni said:

The researchers reduced the amount of either positive or negative stories that appeared in the news feed of 689,003 randomly selected Facebook users, and found that the so-called 'emotional contagion' effect worked both ways.

Sadly, however, they seemed rather unconcerned about the ethical considerations of making people happy or unhappy on a whim – and the language of the study did nothing to dispel the idea that regular Facebookers were nothing more than guinea pigs:

We test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.

Both the uni and the study’s authors slid the whole thing under Facebook’s Data Use Policy, claiming that simply setting up an account on the network amounted to consent to being experimented on.

That tricksome piece of reasoning was met by a hefty pile of outrage with a dollop of scepticism on top from a large number of people, including data commissioners in Blighty and Ireland. The UK’s Information Commissioner’s Office told The Register that it was working with its counterpart in the Emerald Isle to find out just what Facebook thought it was up to:

We're aware of this issue, and will be speaking to Facebook, as well as liaising with the Irish data protection authority, to learn more about the circumstances.

Facebook, however, still seemed to reckon that it hadn’t done anything wrong, trotting out the same statement on the whole thing for a couple of days:

This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible.

A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.

News of the authorities getting involved prompted these additional thoughts from the social network:

When someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction.

Companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word "research" or not.

But it wasn’t until yesterday that Sheryl Sandberg, Facebook COO, “apologised”. Or rather, she did that fake apology bit corporates are so good at, where they say we didn’t do anything wrong, but if we made YOU feel like we did something wrong, we’re totally sorry about that, even though it wasn’t wrong, so really we’re just sorry that you have feelings.

She told the Wall Street Journal:

This was part of ongoing research companies do to test different products, and that was what it was. It was poorly communicated. And for that communication we apologise. We never meant to upset you.

Meanwhile, Microsoft admitted this week that it had disrupted the service for a number of legitimate users of No-IP’s dynamic DNS while trying to fend off a cyber-attack. Legal beak David Finn said:

Yesterday morning, Microsoft took steps to disrupt a cyber-attack that surreptitiously installed malware on millions of devices without their owners' knowledge through the abuse of No-IP, an Internet solutions service.

Due to a technical error, however, some customers whose devices were not infected by the malware experienced a temporary loss of service. As of 6am Pacific time today, all service was restored. We regret any inconvenience these customers experienced.

But No-IP disputed the firm’s claims that everything was all sorted out. Spokeswoman Natalie Goguen said:

Services were not restored at 6am, in fact they are still not up at this moment. At 6am, they seemed to make a change to forward on the good traffic, but it didn’t do anything. Although they seem to be trying to take corrective measures, DNS is hard, and they don’t seem to be very good at it.

The whole problem was caused when Microsoft got a temporary restraining order against No-IP after showing that malware writers were using the dynamic DNS service to sell and control nearly 250 types of naughty software. Redmond was supposed to filter out any No-IP subdomains linked to malicious activity and let the legitimate ones through, but instead it left an estimated four million customers cut off.
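
For the curious, here’s roughly the kind of per-subdomain triage Redmond was meant to be doing - a toy sketch, mind, not Microsoft’s actual plumbing. The hostnames and blocklist are invented; the sinkhole address is the one Kaspersky Lab mentions below.

```python
# Illustrative sketch only - not Microsoft's actual takedown tooling.
# The hostnames and blocklist below are invented for demonstration; the
# sinkhole address is the one Kaspersky Lab mentions further down.
import socket

SINKHOLE_IP = "204.95.99.59"

# Hypothetical set of subdomains tied to malicious activity
BLOCKLIST = {"evil-c2.no-ip.example", "dropper.no-ip.example"}

def resolve_filtered(hostname: str) -> str:
    """Send blocklisted names to the sinkhole; resolve everything else normally."""
    if hostname.lower() in BLOCKLIST:
        return SINKHOLE_IP
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    for name in ("evil-c2.no-ip.example", "example.com"):
        print(name, "->", resolve_filtered(name))
```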

However, despite the clumsy execution, security folk reckoned the move was a solid one. Kaspersky Lab’s Costin Raiu said:

Based on our statistics, the shutdown has affected in some form at least 25 per cent of the APT groups we are tracking. Some of these hosts that were previously used in large and sophisticated cyberespionage operations are now pointing to what appears to be a Microsoft sinkhole, at 204.95.99.59.

In Blighty, the government is trying to put its stamp on the mighty Internet of Things by coming up with its very own specification – HyperCat.

The government’s consortium, which includes 40 tech firms from IBM to ARM, reckons the open specification should help IoT devices discover each other, using metadata and standards like HTTPS, RESTful APIs and JSON. HyperCat mouthpiece Pilgrim Beart, boss of start-up 1248, told a presser:

What HyperCat does is it unlocks the ability to 'Google' IoT data effectively. You can now crawl it, you can index it, you can do all the things that machines might do. HyperCat does not solve every possible IoT interoperability problem by design, it's a very thin layer. All it does is solve the discovery problem.
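
To make the ‘Google for IoT data’ claim a little more concrete, here’s a minimal sketch of what crawling a HyperCat-style catalogue might look like - the URL is invented and the field names are a simplified approximation, not gospel from the spec itself.

```python
# A minimal sketch of the "crawl it, index it" idea: fetch a HyperCat-style
# JSON catalogue over HTTPS and list the resources it points at. The URL is
# made up, and the field names are a simplified approximation of the
# catalogue format rather than chapter and verse from the published spec.
import json
import urllib.request

CATALOGUE_URL = "https://iot.example.com/cat"  # hypothetical catalogue endpoint

def list_catalogue_items(url: str) -> list:
    """Fetch a JSON catalogue and return the href of each listed resource."""
    with urllib.request.urlopen(url) as resp:
        catalogue = json.load(resp)
    return [item.get("href", "") for item in catalogue.get("items", [])]

if __name__ == "__main__":
    for href in list_catalogue_items(CATALOGUE_URL):
        print(href)
```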

Though it’s something of a narrow spec, its creators are keen to talk HyperCat - and, by extension, the Modest British Empire - up. Justin Anderson of IoT outfit Flexeye said:

A standard isn't quite as sexy as a wearable or some nice glasses that are going to give you all the information you want.

[But] what started the world wide web? A simple British standard. It spread like wildfire... the power of a standard is phenomenal.

So, it’s awesome and Tim Berners-Lee-inspired, but is it safe? Beart thinks so:

HyperCat has a very strong security model; it's a very simple one. It's the same one the web uses.

Ah. Right you are then. Anything else?

But privacy is a much more complex problem because that is about humans deciding who sees what under certain circumstances.

Beart may think he’s got that whole security issue sewn up, but the Reg commentards aren’t so sure. John Styles reckoned the idea that HyperCat might be using the same security as the web could be a belated April Fool’s joke, and many others agreed.

Esme said:

Not that I was going to beforehand, but that comment (about the security model HyperCat has) pretty much ensures I'll resist any IoT-enabled devices for as long as I possibly can. Clearly the person giving the HyperCat quote hasn't a clue.

While Herman said:

As secure as HTTP. Woohoo. I hope he at least knows someone that heard about encryption.

®
