Twitter just ran the first-ever AI bias bounty
Yesterday, Twitter announced the winners of a brand-new bounty contest. But unlike the “bug bounties” typically offered by tech companies – which reward those who spot security holes and site vulnerabilities – this challenge focused on something completely different.
It was billed as the industry’s first-ever algorithmic bias bounty contest. It kicked off on July 30 and was led by Rumman Chowdhury, who heads Twitter’s Machine Learning Ethics, Transparency and Accountability (META) team, and Jutta Williams, a product manager on the META team.
How it worked
Participants were given access to the code underlying Twitter’s saliency algorithm for cropping images, which predicts the ideal way to crop and display an image on Twitter.
- In May, the company found that this saliency model exhibited gender and racial biases and moved away from using the technique.
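To give a flavor of how saliency-based cropping works, here is a minimal sketch (an illustration only, not Twitter’s actual model or code): the model produces a saliency map scoring how “interesting” each pixel is, and the crop is the window that captures the most saliency.

```python
# Minimal sketch of saliency-based image cropping (illustrative, not
# Twitter's actual model): score every candidate crop window by the
# total predicted saliency it contains, and keep the highest-scoring one.
import numpy as np

def best_crop(saliency: np.ndarray, crop_h: int, crop_w: int) -> tuple:
    """Return (top, left) of the crop window with the most saliency mass."""
    H, W = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy saliency map with a single "hot spot" near the bottom-right corner.
sal = np.zeros((6, 6))
sal[4, 4] = 1.0
print(best_crop(sal, 3, 3))  # picks a window that covers pixel (4, 4)
```

The bias findings came from exactly this kind of selection step: if the learned saliency scores systematically favor certain faces, the chosen window systematically centers them.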
To pull off the competition, Chowdhury told us, META needed to work with multiple teams across Twitter. Overall, she said, the most difficult task was creating a rubric for algorithmic harms and biases.
- “There’s a lot of remarkable research and work on taxonomies of harms and biases, but we could find very little that specifically broke them down into granular harms or granular biases, and was able to enumerate what that might look like – and attach value to it as well,” Chowdhury said.
- Scoring was ultimately based on a series of factors, including the type of harm (e.g., stereotyping, psychological harm), the severity or impact of the harm, and the number of users affected.
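The factors above can be sketched as a simple weighted score. The field names, point values, and weights here are hypothetical stand-ins for illustration – Twitter has not published its rubric in this form:

```python
# Hypothetical harm-scoring rubric along the factors described (harm type,
# severity/impact, users affected). All weights and point values are
# illustrative assumptions, not Twitter's actual rubric.
import math
from dataclasses import dataclass

HARM_TYPE_POINTS = {"stereotyping": 20, "psychological": 15, "erasure": 25}

@dataclass
class Submission:
    harm_type: str
    severity: int        # 1 (minor) .. 5 (severe)
    users_affected: int  # estimated number of impacted users

def rubric_score(s: Submission) -> float:
    # Log-scale the audience size so huge user counts don't dominate.
    reach = math.log10(max(s.users_affected, 1))
    return HARM_TYPE_POINTS.get(s.harm_type, 10) + 5 * s.severity + 10 * reach

a = Submission("stereotyping", severity=4, users_affected=1_000_000)
b = Submission("psychological", severity=2, users_affected=1_000)
print(rubric_score(a), rubric_score(b))  # a outranks b
```

The point of such a rubric is comparability: judges can rank very different bias reports on one scale instead of arguing each on its own terms.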
Winners, winners: Twitter gave out five awards (and a few honorable mentions) that day: first place ($3,500), second ($1,000), third ($500), most innovative ($1,000), and most generalizable ($1,000). A panel of four judges from the AI and infosec worlds ranked the submissions according to the META rubric.
Looking forward… Questions remain about the participation of members of affected communities, the type of community the program is building, the extent to which Twitter will translate individual program results into systemic changes, and whether the tech industry as a whole will embrace the practice.
But Camille François, who co-leads a project on algorithmic harms at the Algorithmic Justice League, told us it was encouraging to see progress in assessing how AI systems affect people.
“I think we are very excited to see how this plays out,” François said. “We want more communities and affected parties to participate. … We know there is an appetite for people who say, ‘Hey, I just feel affected by this. … Let’s disclose this together.’”—HF