Twitter uses HackerOne bounties to find biases in its image-cropping AI model

Claims it's the first algorithmic bias bounty competition

Twitter's saliency algorithm – otherwise known as its automated image cropping tool – has a problem with gender and race bias. The micro-blogging service is hoping to fix it by offering what it reckons is the industry's first algorithmic bias bounty competition.

The saliency algorithm employed by Twitter uses machine learning to crop images around the spot where a viewer's eyes are predicted to land first. In the fall of 2020, some users complained the image cropping favoured light skin over dark, and women's legs and breasts over their faces.
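
For readers curious what a saliency-driven crop looks like in practice, the sketch below is a minimal illustration only, not Twitter's released model: it assumes some predictor has already produced a saliency map (one score per pixel), and the function name and crop logic are hypothetical.

```python
# Hypothetical sketch of saliency-driven cropping. This is not Twitter's
# actual implementation; it simply crops a fixed-size window around the
# highest-scoring point of a precomputed saliency map.
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    # Assumes crop_h and crop_w are no larger than the image dimensions.
    # Locate the most "interesting" pixel according to the saliency map.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Centre the crop window on that point, clamped to the image bounds.
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

A real model would also weigh competing salient regions and target aspect ratios rather than a single argmax, but the basic idea is the same: the crop follows wherever the model says attention goes.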

Twitter promised to investigate, decided the machine learning code it employed was not really ready to have the keys to the castle, and gave the image-cropping controls back to humans. Employees published an academic paper describing the problem, and the company made its code public through GitHub.

Now the company has re-shared the saliency model and its code, and asked for help improving it.

Twitter META Director Rumman Chowdhury and Product Manager Jutta Williams wrote on the company blog:

We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.

The competition is run through HackerOne as part of the DEF CON AI Village, with five cash prizes ranging from US$500 to US$3,500 up for grabs. The bounty program is open until August 6, and the winners will be announced on August 8 at a Twitter DEF CON workshop.

"Your mission is to demonstrate what potential harms such an algorithm may introduce," states Twitter's HackerOne post.

To score a bounty, participants must find harms that come from the process of cropping or displaying images and videos. Extra points will be awarded to participants who detail harms falling on marginalized communities. The harms can be either malicious or unintentional – although unintentional harms seem to gain more points, as do situations that affect more users or are more severely damaging to a person's well-being.

Participants employing denial-of-service, model inversion, or black-box model extraction or copying attacks will be disqualified, as will submissions that lead to exploitable behaviours such as remote code execution.

It's an interesting problem for Twitter to be outsourcing, and if successful it will have implications reaching into facial recognition technology and related AI problems. As one digital security professional puts it, the task amounts to reverse-engineering human biases.

Twitter has, of course, outsourced assessments of its biases before – albeit in a very ad hoc fashion as millions of users daily damn or praise the tweets it serves. The company has also frequently been accused of bias against politically conservative voices. Opening itself to a formal assessment of algorithmic bias will probably do little to silence its critics. ®
