[quote]The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners.
The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After [url=http://beauty.ai/]Beauty.AI[/url] launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.
But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.
Out of [url=http://winners2.beauty.ai/#win]44 winners[/url], nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.
The ensuing [url=http://motherboard.vice.com/read/why-an-ai-judged-beauty-contest-picked-nearly-all-white-winners]controversy [/url]has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
When Microsoft released the “millennial” [url=https://www.theguardian.com/technology/2016/mar/24/microsoft-scrambles-limit-pr-damage-over-abusive-ai-bot-tay]chatbot named Tay[/url] in March, it quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated [url=https://www.theguardian.com/technology/2016/may/12/facebook-trending-news-leaked-documents-editor-guidelines]human editors[/url] who had curated “trending” news stories last month, the algorithm immediately promoted [url=https://www.theguardian.com/technology/2016/aug/29/facebook-fires-trending-topics-team-algorithm]fake and vulgar stories on news feeds[/url], including one article about a man masturbating with a chicken sandwich.
While the seemingly racist beauty pageant has prompted jokes and mockery, computer science experts and social justice advocates say that in other industries and arenas, the growing use of prejudiced AI systems is no laughing matter. In some cases, it can have devastating consequences for people of color.
Beauty.AI – which was created by a “deep learning” group called Youth Laboratories and supported by Microsoft – relied on large datasets of photos to build an algorithm that assessed beauty. While there are a number of reasons why the algorithm favored white people, the main problem was that the data the project used to establish standards of attractiveness did not include enough minorities, said Alex Zhavoronkov, Beauty.AI’s chief science officer.
Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.
“If you have not that many people of color within the dataset, then you might actually have biased results,” said Zhavoronkov, who said he was surprised by the winners. “When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.”
The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.
The Beauty.AI results offer “the perfect illustration of the problem”, said Bernard Harcourt, a Columbia University professor of law and political science who has studied “predictive policing”, which has increasingly relied on machines. “The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.”
The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.
[url=https://www.theguardian.com/us-news/2016/aug/31/predictive-policing-civil-rights-coalition-aclu]Civil liberty groups[/url] have recently raised concerns that [url=https://www.theguardian.com/commentisfree/2016/jun/26/algorithms-racial-bias-offenders-florida]computer-based law enforcement forecasting tools[/url] – which use data to predict where future crimes will occur – rely on flawed statistics and can exacerbate racially biased and harmful policing practices.
“It’s polluted data producing polluted results,” said Malkia Cyril, executive director of the Center for Media Justice.
A [url=https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing]ProPublica investigation[/url] earlier this year found that software used to predict future criminals is biased against black people, which can lead to harsher sentencing.
“That’s truly a matter of somebody’s life is at stake,” said Sorelle Friedler, a professor of computer science at Haverford College.
A major problem, Friedler said, is that minority groups by nature are often underrepresented in datasets, which means algorithms can reach inaccurate conclusions for those populations and the creators won’t detect it. For example, she said, an algorithm that was biased against Native Americans could be considered a success given that they are only 2% of the population.
“You could have a 98% accuracy rate. You would think you have done a great job on the algorithm.”
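To put rough numbers on Friedler’s point, here is a minimal sketch with purely hypothetical figures (not data from any system mentioned in this article): a model that fails every member of a group making up 2% of the population can still report an impressive overall score.
[code]
# Minimal sketch, hypothetical numbers: a model that is right on every
# majority-group example and wrong on every minority-group example
# still posts a high headline accuracy.

def headline_accuracy(population=10_000, minority_share=0.02):
    minority = int(population * minority_share)   # 200 people
    majority = population - minority              # 9,800 people

    correct = majority                # right on all majority cases, wrong on the rest
    overall = correct / population
    minority_only = 0 / minority      # accuracy measured on the minority group alone
    return overall, minority_only

overall, minority_only = headline_accuracy()
print(f"overall accuracy: {overall:.0%}")                      # 98% -- looks like a success
print(f"accuracy on the minority group: {minority_only:.0%}")  # 0% -- hidden by the headline number
[/code]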
Friedler said there are proactive ways algorithms can be adjusted to correct for biases, whether by improving the input data or by implementing filters to ensure that people of different races receive equal treatment.
[url=http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0]Prejudiced AI programs[/url] aren’t limited to the criminal justice system. One [url=http://www.cmu.edu/news/stories/archives/2015/july/online-ads-research.html]study [/url]determined that significantly fewer women than men were shown online ads for high-paying jobs. Last year, Google’s photo app was found to have [url=https://www.theguardian.com/technology/2015/jul/01/google-sorry-racist-auto-tag-photo-app]labeled black people as gorillas[/url].
Cyril noted that algorithms are ultimately very limited in how they can help correct societal inequalities. “We’re overly relying on technology and algorithms and machine learning when we should be looking at institutional changes.”
Zhavoronkov said that when Beauty.AI launches another contest round this fall, he expects the algorithm will have a number of changes designed to weed out discriminatory results. “We will try to correct it.”
But the reality, he added, is that robots may not be the best judges of physical appearance: “I was more surprised about how the algorithm chose the most beautiful people. Out of a very large number, they chose people who I may not have selected myself.”[/quote]
https://www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people
-
Computers don't make mistakes
-
Racist robots o.o?
-
I think the bigger issue is it didn't pick any good looking people
-
Most of the people selected aren't even attractive to me. :/
-
Sooooo... What happens when Round 2 comes around, and the AI picks another white person? Are they just going to keep tweaking the AI until it picks a minority...?
-
Even the machines! Come on people!
-
We all have our preferences, robots too
-
I'm not into black women either, what's all the fuss
-
Before you know it, BLM is going to protest about machine brutality and robotic profiling.
-
Making a big deal out of nothing.
-
So I guess robots don't have that jungle fever..
-
The AI just knows what it likes. And that's vanilla
-
Silly robot and not lying