
UK aims for 'openness and fairness' in its AI Strategy – unless we're talking about favoured contractors

Or the government itself, of course

It has been more than a month since the launch of the UK government's AI Strategy which, the authors said, "represents the start of a step-change for AI in the UK," and The Register, for one, has not forgotten.

While the strategy promises to "embed" supposed British values such as fairness and "openness" in the development and use of AI in the UK, the events leading up to its launch – and in particular the behaviour of the government itself – tell a rather different story, and a worrying one given the likely impact of AI on society and the economy.

Some of the moves made by the UK over the first 18 months of the pandemic took place under the cover of emergency legislation, including deals inked by the government with a host of private tech firms in March 2020 to help deliver the NHS COVID-19 response.

One of these was the NHS COVID-19 data store, a project bringing together disparate medical and organisational data from across the national health service, with US spy-tech firm Palantir at its heart – although Google, Amazon, Microsoft, and AI firm Faculty all hold contracts to work on the platform. Planners in the government's response team were said to have found it useful, but it also attracted controversy. Then, in December last year, the contract was extended for another two years, again without scrutiny.

In May this year, a broad-based campaign group wrote to (then) UK Health Secretary Matt Hancock (yes, "vast majority" of the UK are "onside" with GP data grab Hancock). The letter called for greater openness around the government's embrace of this gang of private technology vendors. The campaigners soon found they had to threaten court action to get the private-sector contracts published, after those contracts were awarded without open competition.

Faculty and its CEO, Marc Warner, for one, had no trouble getting close to government circles – a closeness that might prompt the UK's leaders to be more mindful about asking private-sector players to help them with the business of governance.

According to the testimony of former chief advisor to the Prime Minister, Dominic Cummings, in front of the Health Select Committee, the CEO was present during much of the decision-making in the crucial early stages of the pandemic, when Cummings was still advising the PM.

Reports from The Guardian – which Warner would later fail to deny – suggested he used his relationship with Cummings to influence Whitehall. "It felt like he was bragging about it," a senior source said, adding Warner would casually tell officials: "Don't worry, I'll text Dom," or "I'm talking to Dom later."

Faculty said Warner wanted to talk to The Register about the government's AI strategy in the week leading up to the policy document's publication, but he later proved unable to speak to us. He wasn't the only one. A host of other key private and public figures who'd normally cheerfully provide their take found themselves speechless.

To be fair to Faculty, it might have been unable to speak to us because of the terms of its contract, or due to concerns over commercially sensitive information – we don't know. What we do know is that the firm was awarded a £2m Home Office contract, without competition, for Innovation Law Enforcement (I-LE).

The tender documents offered few details about how AI might be used in law enforcement and when asked, the Home Office simply said: "We are unable to share further information since it's commercially sensitive."

So much for openness.

We are hoping to get more information from the private firm, which one could argue is less duty-bound than our country's leadership to give it to us. We have sent a list of questions via the company's PR firm. Given Faculty's history, and reports about its government contracts, it seems fair to ask, for the sake of openness, how many public-sector contracts it has been awarded and how many of those were awarded after open competition. It did not respond to these questions specifically.

It did, however, provide a statement saying: "Faculty is a highly experienced AI specialist that has delivered work for over 350 clients across 23 sectors of the economy and in 13 countries. We have strong governance procedures in place and all of our contracts with the government are won through the proper processes, in line with procurement rules."

Openness in government contracting is not only a question of fairness. If the UK is serious about developing the nation's industry in AI – or indeed any high-tech industry – it needs fair and open competition for the billions of taxpayer pounds it spends in the tech market.

Google's AI subsidiary DeepMind was also closely involved in the UK's pandemic response.

DeepMind co-founder Mustafa Suleyman, now veep for AI policy, was reportedly approached by NHSX to help work with patient data, including discussing whether Google's cloud products were suitable for its data store project. In his role as chief advisor to the prime minister, Dominic Cummings brought Demis Hassabis, CEO and co-founder of DeepMind, into the heart of government decision-making, according to his select committee testimony [PDF].

Public procurement – what can go wrong

What's at stake when emergency contracts – not just to Palantir and Google and the like, but to many other vendors during the pandemic – escape scrutiny or circumvent the usual bidding and tendering process?

Peter Smith, former president of the Chartered Institute of Purchasing and Supply, told The Register that studies of countries including South Africa have shown that favouritism and nepotism in public procurement mean suppliers tend either to withdraw from the market or to cut investment in technology, products, and services – putting the money instead into employing an ex-minister as a non-exec or advisor, and into wining and dining government officials.

He went on to say that the recent spate of stories about a lack of openness in government contracts could damage how the UK is seen as a place to invest.

"We're in danger of moving from a country where we felt public procurement was in the upper quartile in the world, to a place where we're slipping down the league table," said Smith, who works as a consultant, having held senior roles in the public and private sector.

The picture in public procurement could thus cut against the government's ambitions in AI – and it is not just Faculty that has a close relationship with the government and a hand in its AI strategy. As mentioned, Google was part of the group behind the NHS COVID-19 data store deal, and again it took the pressure of legal letters to get that arrangement aired in the public domain.

British government's AI strategy and citizens' data rights

DeepMind got prime spot on the press release for the UK AI Strategy, under the banner of a "new 10-year plan to make the UK a global AI superpower."

"AI could deliver transformational benefits for the UK and the world – accelerating discoveries in science and unlocking progress," Hassabis said in the pre-canned publicity material.

Part of the UK's vision for its AI strategy is an industry "with clear rules [and] applied ethical principles."

But Google, DeepMind's parent company, has found it difficult to get out of the AI ethics quagmire.

A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind in breach of data protection laws. Mishcon de Reya launched the legal action in September 2021, saying it plans a representative action on behalf of Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.

DeepMind worked with Google and the Royal Free London NHS Foundation Trust under an arrangement formed in 2015. In 2017, Google's use of medical records from the hospital's patients to test a software algorithm was deemed legally "inappropriate" by Dame Fiona Caldicott, National Data Guardian at the Department of Health.

Law firm Linklaters carried out a third-party audit of the data-processing arrangement between the Royal Free and DeepMind, and concluded the approach was lawful.

Separately, Timnit Gebru, former co-lead of the Chocolate Factory's "ethical artificial intelligence team", left under controversial circumstances in December last year after her managers asked her either to withdraw an as-yet-unpublished paper or to remove the names of Google employees from it.

In the time since leaving the search giant, Gebru has marked out a stance on AI ethics. In a recent interview with Bloomberg, she said labour and whistleblower protections were the "baseline" for making sure AI was fair in its application.

"Anything we do without that kind of protection is fundamentally going to be superficial, because the moment you push a little bit, the company's going to come down hard," she said.

Among the long list of organisations and companies adding their names to the UK government's AI Strategy, who would back her stance?

We asked DeepMind, Benevolent AI CEO and co-chair of Global Partnership on Artificial Intelligence Joanna Shields, Alan Turing Institute professor Sir Adrian Smith, CEO of Tech Nation Gerard Grech, president of techUK Jacqueline de Rojas, and Nvidia veep David Hogan if they had thoughts on the issue.

None of them responded to the specific point, although we have included the responses we did receive in the box below.

While the UK has legal whistleblower protection in certain scenarios, it applies only to law-breaking, damage to the environment, and risks to the health and safety of individuals. Where the law is unclear on AI, it is uncertain what protection whistleblowers might get.

Meanwhile, proposals from the Home Office suggest a public interest defence for whistleblowing might be removed.

The only way is ethics

On questions of AI ethics, the focus has been on data. Historical data, created by humans in a particular social context, can lead to biased results when used to train AI and ML systems – as in the case of the sexist AI recruitment tool that Amazon scrapped shortly after its introduction.

An industry has developed around these questions, with vendors offering tools that scan datasets for bias and flag features that can act as proxies for protected attributes such as race – postcodes, for example, as sketched below.
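To make the proxy idea concrete, here is a minimal sketch of the kind of check such tools perform – assuming, hypothetically, a dataset with postcode and ethnicity columns. The file name and column names are our own illustration, not any particular vendor's product:

    # Minimal proxy-detection sketch: if a nominally neutral feature
    # (postcode) predicts a protected attribute (ethnicity) well above
    # chance, a model trained on it can discriminate by proxy.
    # The dataset and column names here are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import OneHotEncoder

    df = pd.read_csv("applicants.csv")            # hypothetical dataset
    X = OneHotEncoder(handle_unknown="ignore").fit_transform(df[["postcode"]])
    y = df["ethnicity"]                           # protected attribute

    # Balanced accuracy far above chance suggests postcode leaks
    # the protected attribute, i.e. it is acting as a proxy.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=5, scoring="balanced_accuracy")
    print(f"Mean balanced accuracy: {scores.mean():.2f}")

If the score sits well above what random guessing would achieve, dropping the raw attribute from the training data clearly isn't enough; the model can reconstruct it from the proxy.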

But for some, the problem of AI ethics runs deeper than the training data alone. A paper shared on Twitter by former Google ethics expert Gebru found that, far from considering the wider societal impact of their work, a sample of 100 influential machine learning papers defined and applied values supporting the centralisation of power.

"Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities," the paper said [PDF].

Speaking to The Register, paper co-author Ravit Dotan, a postdoctoral researcher at the Center for the Philosophy of Science at the University of Pittsburgh, said the point of the study was to examine the values behind ML research and researchers' motivations.

"Who is the target, the beneficiary? Is it people within the discipline or is it a broader community? Or is it Big Tech? We wanted to see how authors intend to satisfy [that target]. We also wanted to understand the funding structure better," she said.

The paper also looked at whether ML researchers considered the negative consequences of their work. The vast majority did not. "It was very rare to see any kind of work addressing potential negative consequences, even in papers where you would really expect it, such as those looking at the manipulation of videos," Dotan said.

In a world where deepfake porn is prompting those whose likenesses have been stolen (mostly women) to fight for tighter regulation, the negative consequences of image manipulation seem all too evident.

In her interview with Bloomberg, Gebru also called for the regulation of AI companies. "Government agencies' jobs should be expanded to investigate and audit these companies, and there should be standards that have to be followed if you're going to use AI in high-stakes scenarios," she said.

But the UK's AI strategy is vague on regulation.

Although it acknowledges trends like deepfakes and AI-driven misinformation might be risks, it promises only to "publish a set of quantitative indicators... to provide transparency on our progress and to hold ourselves to account."

It promises that "the UK public sector will lead the way by setting an example for the safe and ethical deployment of AI through how it governs its own use of the technology."

It adds that the UK will "seek to engage early with countries on AI governance, to promote open society values and defend human rights.

"Having exited the EU, we have the opportunity to build on our world-leading regulatory regime by setting out a pro-innovation approach, one that drives prosperity and builds trust in the use of AI.

"We will consider what outcomes we want to achieve and how best to realise them, across existing regulators' remits and consider the role that standards, assurance, and international engagement plays."

And data protection regulation? Even murkier...

One existing regulator, the Information Commissioner's Office, is already engaged with proposed changes to data protection law following the UK's departure from the EU. The government review has provoked alarm as it proposes watering down individuals' rights to challenge decisions made about them by AI.

Meanwhile, the UK has published guidance on AI ethics in the public sector, developed by the Alan Turing Institute, an AI body formed by five leading UK universities. This was followed by the government's Ethics, Transparency and Accountability Framework for Automated Decision-Making, launched in May 2021.

Critics might argue that guidance and frameworks do not amount to law and remain untested. The government has promised to publish a White Paper – or policy document – on governing and regulating AI next year.

A government spokesperson sent us a statement after initially only wanting to brief The Reg on background:

"We are committed to ensuring AI is developed in a responsible way. We have published extensive guidance on how firms can use the technology ethically and transparently and issued guidance so workers in the field can report wrongdoing while retaining their employment protections. We are also going to publish a White Paper on governing and regulating AI as part of our new national AI Strategy."

At the launch of the AI Strategy, business secretary Kwasi Kwarteng described his desire to "supercharge our already admirable starting position" in AI. But it will take more than words to convince the wider world. Observers will want to see more openness in public-sector contracting, and in the government's approach to AI ethics, to back up the government's ambition. ®
