Twitter’s photo cropping algorithm favors young, skinny women

In May, Twitter said it would stop using an artificial intelligence algorithm that was found to favor white and female faces when automatically cropping images.

Now, an unusual competition that probes AI programs for misbehavior has found that the same algorithm, which identifies the most important areas of an image, also discriminates by age and weight, and favors text in English and other Western languages.

The top entry, from Bogdan Kulynych, a graduate student in computer security at EPFL in Switzerland, shows how Twitter’s image-cropping algorithm favors thinner and younger-looking faces. Kulynych used a deepfake technique to automatically generate variations of a face and then tested the cropping algorithm to see how it responded.
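The general shape of such a test is easy to sketch: generate controlled variants of a face, score each one with the cropping model’s saliency predictor, and see which variant the model ranks highest. The snippet below is a hypothetical Python harness rather than Kulynych’s actual code; saliency_map and the file names are placeholders standing in for Twitter’s model and for the generated images.

    # Hypothetical test harness; saliency_map is a stand-in, not Twitter's real API.
    import numpy as np
    from PIL import Image

    def saliency_map(image: np.ndarray) -> np.ndarray:
        """Trivial stand-in: treat brightness as 'saliency'. Replace with the real model."""
        return image.mean(axis=2)

    def max_saliency(path: str) -> float:
        """Score an image by the highest saliency value the model assigns to it."""
        image = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        return float(saliency_map(image).max())

    # Rank generated variants of the same base face (e.g. made to look older,
    # younger, heavier, or thinner) by how strongly the model responds to each.
    variants = ["face_older.png", "face_younger.png", "face_thinner.png"]
    ranked = sorted(variants, key=max_saliency, reverse=True)
    print("Preferred by the model, most to least:", ranked)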

“Basically, the thinner, younger, and more feminine an image is, the more it will be favored,” says Patrick Hall, chief scientist at BNH, an artificial intelligence consulting company. Hall was one of the four judges of the competition.

Another judge, Ariel Herbert-Voss, a security researcher at OpenAI, says the biases the participants found reflect the biases of the people who provided the data used to train the models. But the entries, Herbert-Voss adds, show how a thorough analysis of an algorithm could help product teams root out problems with their AI models. “It’s much easier to fix that if someone just says, ‘Hey, this is bad.’”

The algorithmic bias bounty challenge, held last week at Defcon, a computer security conference in Las Vegas, suggests that letting outside researchers probe algorithms for misbehavior could help companies fix problems before they cause real harm.

Just as some companies, including Twitter, encourage experts to hunt for security bugs in their code by offering rewards for specific exploits, some AI experts believe that companies should give outsiders access to the algorithms and data they use in order to spot problems.

“It’s really exciting to see this idea being explored, and I’m sure we’ll see more of it,” says Amit Elazari, director of global cybersecurity policy at Intel and a lecturer at UC Berkeley, who has suggested using a bug-bounty approach to root out AI bias. She says the search for bias in artificial intelligence “can benefit from empowering the crowd.”

In September, a Canadian student drew attention to the way Twitter’s algorithm was cropping photos. The algorithm is designed to zero in on faces, as well as other areas of interest such as text, animals, or objects. But in pictures showing several people, it often favored white faces and women. The Twittersphere soon found other examples of the crop exhibiting racial and gender bias.
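Conceptually, a saliency-based cropper of this kind assigns an importance score to every pixel and then keeps a window centered on the highest-scoring region. The following Python sketch illustrates that general idea only; it is not Twitter’s implementation, and the saliency scores are assumed to come from a separately trained model.

    # Illustrative only: crop a fixed-size window centered on the saliency peak.
    import numpy as np

    def crop_around_peak(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
        """Cut a crop_h x crop_w window out of `image`, centered on the most salient pixel."""
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
        left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
        return image[top:top + crop_h, left:left + crop_w]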

For last week’s bounty competition, Twitter gave participants the code for its image-cropping algorithm and offered prizes to teams that demonstrated other harmful behavior.

Other entrants found additional biases. One showed that the algorithm is biased against people with gray hair. Another found that it favors Latin script over Arabic script, giving it a Western-centric bias.

Hall of BNH says he believes other companies will follow Twitter’s approach. “I think there is some hope that this will take off,” he says, “because of upcoming regulation and because the number of AI bias incidents is increasing.”

In recent years, much of the enthusiasm surrounding artificial intelligence has been tempered by examples of how easily algorithms can encode biases. Commercial facial recognition algorithms have been shown to discriminate by race and gender, image-processing code has been found to reproduce sexist ideas, and a program that assesses the likelihood of reoffending has been shown to be biased against Black defendants.

The problem has proven difficult to solve. Pinning down fairness is not easy, and some algorithms, such as those used to analyze medical X-rays, can internalize racial biases in ways that humans cannot easily spot.

“One of the biggest problems we face – one that every company and organization faces – when we think about identifying bias in our models or in our systems is how do we scale this?” says Rumman Chowdhury, director of Twitter’s ML Ethics, Transparency, and Accountability group.




