Information technology appears to give well-calculated answers, free of human judgment. But did you know that the machines running this software often spit out results with a biased twist, and we might not even notice it? Women and people of color in particular feel the negative and often invisible impact that technologies like facial recognition, search engines, or automated hiring software have on them. The prejudiced mindset of our analog world slowly sneaks into the digital sphere and creates biases that oppress marginalized groups.

Hey there! Are you coming outside to smash the patriarchy? 👊

12:30

A female voice assistant wave is coming.

Voice applications are having a bigger impact on our lives than ever before. By 2023, the usage of voice assistants is expected to grow by 35 %. Even though voice assistants like Amazon’s Alexa, Google’s Assistant, Microsoft’s Cortana, and Apple’s Siri are becoming more popular, they all contribute to a major problem: they’re female. Gender cues like a digital assistant’s voice, name, and speech let the user immediately identify its gender. Meanwhile, in the United States, 94.6 % of human administrative assistants are already female. Aren’t virtual voice assistants a great opportunity to break our stereotypical gender roles? Here is why female voice assistants can be quite problematic:

  • "Voice assistants are the digital maidservants of our time," says sound researcher Holger Schulz from the University of Copenhagen.

  • Positioning AI as an expert and an authority lends itself to a male persona like IBM’s Watson assistant.

  • "As children grow up in dealing with Alexa and Co, this can have an impact on a society's understanding of gender roles," explains Miriam Meckel.

  • Women with accents are also discriminated against, because voice assistants have trouble understanding their commands.

  • Q, the first genderless voice, was created to end gender bias in AI assistants by breaking down the gender binary.

So unfortunately all these female voice assistants won’t help to break our stereotypical gender roles? 🤷‍♀️

12:44

Where are all the female job candidates?

The job market is also unfairly influenced by algorithms. In most countries, women don’t have the same access to education, capital, networks, and high-paying or high-profile roles as men. It’s not uncommon that women are even blocked by legal barriers from pursuing a profession. It very much remains a world run by men.

Companies increasingly use hiring and recruiting software driven by artificial intelligence, also known as AI, to select applicants. These programs are the beginning of automating the human resource department’s processes and might eventually replace traditional HR management. However, this autonomous hiring software discriminates against applicants whose resumes break ranks, in two ways.

Direct AI discrimination occurs when an algorithm makes decisions based on sensitive or protected attributes which correspond to an applicant’s personal information, like gender, race or sexual orientation.

Indirect AI discrimination is more common and much harder to prevent, because it occurs as a byproduct of non-sensitive attributes. These attributes don’t fall into the categories listed above; the discriminatory bias instead arises when machines are trained on datasets that, for historical reasons, are too homogeneous.
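To make the distinction concrete, here is a minimal, purely illustrative sketch (none of the feature names or numbers come from a real system): a hiring model is trained without the gender column, yet a harmless-looking proxy feature that happens to correlate with gender lets the historical bias in the labels slip back in.

```python
# Minimal sketch of indirect discrimination via a proxy feature.
# All features and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden sensitive attribute: 1 = woman, 0 = man (never shown to the model).
gender = rng.integers(0, 2, n)

# A seemingly "neutral" proxy that is historically correlated with gender,
# e.g. membership in a club attended mostly by women.
proxy = (rng.random(n) < np.where(gender == 1, 0.8, 0.1)).astype(float)

# A genuinely job-relevant feature, identical on average for both groups.
skill = rng.normal(0, 1, n)

# Historical hiring labels reflect past human bias: equally skilled women
# were hired less often, so the recorded outcomes are already skewed.
p_hire = 1 / (1 + np.exp(-(skill - 1.5 * gender)))
hired = (rng.random(n) < p_hire).astype(int)

# "Fair on paper": the model never sees the gender column...
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# ...yet it learns a negative weight on the proxy, reproducing the old bias.
print("weights [skill, proxy]:", model.coef_[0])
scores = model.predict_proba(X)[:, 1]
print("mean predicted hire probability, men:  ", scores[gender == 0].mean())
print("mean predicted hire probability, women:", scores[gender == 1].mean())
```

Dropping the sensitive column is therefore not enough; the seemingly neutral features that carry the old bias have to be audited as well.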

In 2014, the online shopping platform Amazon wanted to solve their problem of manually ranking potential job candidates. Their solution was a new tool, powered by machine learning and artificial intelligence. The tool compared words and phrases of submitted resumes with past resumes, as well as the resumes of current Amazon employees. Each job applicant got a score based on detected similarities. However, the tool did something unexpected: because most IT positions at Amazon are held by men, it taught itself to penalize female applicants who applied for an IT position. Three years later, the tool was abandoned.
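The mechanism described above, scoring new resumes by their similarity to resumes the company liked in the past, can be sketched in a few lines. This toy example is not Amazon’s actual tool; the resumes and phrases are invented to show how a term that never appears in the (mostly male) historical data drags an applicant’s score down.

```python
# Toy sketch of similarity-based resume scoring, not Amazon's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Past resumes of hired employees: overwhelmingly from men in this toy example.
past_hires = [
    "software engineer java backend captain chess club",
    "machine learning python research assistant",
    "software engineer c++ distributed systems",
]

# Two new applicants with near-identical qualifications.
applicants = [
    "software engineer python chess club captain",
    "software engineer python women's chess club captain",
]

vectorizer = TfidfVectorizer().fit(past_hires + applicants)
past_matrix = vectorizer.transform(past_hires)
app_matrix = vectorizer.transform(applicants)

# Each applicant's score = average similarity to the historical hires.
scores = cosine_similarity(app_matrix, past_matrix).mean(axis=1)
for resume, score in zip(applicants, scores):
    print(f"{score:.3f}  {resume}")

# The second resume scores lower only because the word "women's" never
# appears in the historical data the tool learned from.
```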

Moreover, black women are oppressed due to multiple overlapping social and political factors. A major factor in the black-white pay gap is occupational sorting: the clustering of demographic groups into certain jobs and fields. Women tend to be clustered into lower-paying jobs more than men, and black people tend to be clustered into lower-paying jobs relative to white people. In the U.S., the most popular jobs for white women pay almost twice as much as the most popular jobs for black women.

Data shows that education and job experience are not enough for women of color to gain better career opportunities. Even with the highest levels of education, they are still more likely to be in administrative roles rather than senior leadership positions compared to people holding a similar degree. Women of color are also paid significantly less than men of color and more frequently report frustrations with inadequate salaries.

Such a calculated world! One buzzword can decide if you’re offered an interview?

12:52

Don’t tell me what I have to be, Mr. Search engine.

Compared to humans, computers should be the better decision makers. They are designed to avoid human biases and faulty logic, and they have no emotions that could influence their decisions. But have you ever thought about who actually built these machines and wrote their software? Humans! Developing “decision-making software” can be quite complex, and it comes down to sophisticated math equations that are run in a so-called algorithm. These algorithms process a lot of data and calculate the most reasonable answer, like search results, insurance premiums, or credit ratings.

Cathy O’Neil, author of the book Weapons of Math Destruction, worked as a mathematician and developed decision-making software herself. She realized that "[…] software often encodes poisonous prejudices, learning from past records just how to be unfair." So, the tricky part is that these algorithms incorporate the prejudices and misconceptions of their creators and learn from human behavior. They don’t actually avoid human biases and faulty logic; instead, they make millions of unfair decisions. In a nutshell, algorithms are harmful because they tend to penalize the less privileged and benefit the most privileged.

Now, the same principle applies to the algorithms of big search engines: they learn from our searching behavior on the Internet. All search results are more or less a mirror of our modern society searching and clicking online. As Safiya Umoja Noble states in her book Algorithms of Oppression, "[…] dominant groups are able to classify and organize the representation of others, all the while neutralizing and naturalizing the agency behind such representations."

On this note, underrepresented, marginalized, and oppressed groups can’t represent themselves properly on the Internet. They don’t have the manpower or even Internet access to feed the algorithms and influence the search results. Therefore, their online representation is created by a biased group that dominates the World Wide Web.

A Google image search for a specific profession is an easy way to show how these search algorithms shape our perception. Among the first 200 results for the keyword ‘professor’ that showed a single adult person, 93 % depicted male professors and only 7 % female professors. By comparison, 24.1 % of professors in Germany in 2017 were female.

It’s so sad to see how search engines define the way our society thinks about gender roles …

13:00

Why is the facial recognition not recognizing me?

Doesn't everybody have the same opportunities in our western society? Actually, no! Just take a look at facial recognition software. Even though the application of facial recognition is growing fast and becoming handy for unlocking our smartphones or helping to detect criminals, more and more cases of discrimination are coming to light. Uber, a transportation network company, uses facial recognition as a security feature to identify its drivers. Yet, its "Real-Time ID Check" cannot identify people who are undergoing a gender transition. There are multiple cases of transgender Uber drivers whose accounts got deactivated because the software didn't recognize them. Furthermore, facial recognition is reliable at identifying white people but much worse at recognizing black people – especially black women. This caused a public outcry in 2015, when Google's image-recognition system labeled an African-American woman as a gorilla.

Joy Buolamwini, a computer scientist and the founder of the Algorithmic Justice League, experienced firsthand how some facial recognition software wasn’t able to recognize her dark-skinned face until she put on a white mask. The fact that those systems are trained on datasets of predominantly light-skinned men made her realize the impact of exclusion.

In her research, she uncovered large gender and racial biases in AI systems sold by tech giants – which contradicts the general understanding that machines are neutral. In experiments where facial recognition software had to classify the gender of a face, all companies performed substantially better on male faces than on female faces. The companies she evaluated had error rates of no more than 1 % for white men. For black women, the errors soared up to 35 %. Some AI systems even failed to correctly recognize prominent faces like Oprah Winfrey, Michelle Obama, and Serena Williams.

Buolamwini found one government dataset of faces, which was used to train facial recognition software. This data set contained 75 % men and 80 % lighter-skinned individuals and less than 5 % women of color. This underrepresentation of women and people of color in technology and their absence in big pools of data is also known as the “pale male data problem”. It leads to the creation of technology that is optimized for a small and privileged group and misrepresents and mistreats many people in our society.
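The remedy her audits point to is simple to state: don’t report a single overall accuracy, but break the error rate down by intersectional subgroup. Here is a minimal sketch with made-up numbers (not her actual data) showing how an overall figure can hide exactly the kind of gap described above.

```python
# Minimal sketch of a disaggregated audit; the rows below are invented.
import pandas as pd

# Hypothetical audit results: one row per test image.
results = pd.DataFrame({
    "skin_type": ["lighter"] * 4 + ["darker"] * 4,
    "gender":    ["male", "male", "female", "female"] * 2,
    "correct":   [True, True, True, False, True, False, False, False],
})

# Error rate per subgroup; a single overall average would hide the disparity.
error_by_group = (
    results.groupby(["skin_type", "gender"])["correct"]
           .apply(lambda s: 1 - s.mean())
           .rename("error_rate")
)
print(error_by_group)
print("overall error rate:", 1 - results["correct"].mean())
```

Run against a real benchmark, the same few lines would surface the 1 % versus 35 % gap that a single headline number conceals.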

"The main message is to check all systems that analyze human faces for any kind of bias. If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free."

– Joy Buolamwini

What can we change?

🤯 That just blew my mind!! It’s crazy how these tech giants can do whatever they want.

13:13

"The problem with programming is not that the Computer isn’t logical – the computer is terribly logical, relentlessly literal-minded. Computer are supposed to be like brains, but in fact they are idiots, because they take everything you say at face value.“

– Ellen Ullman, Life in Code

Credits
A project by Ariane Kaiser, Diana Gert, Ekaterina Oreshnikova and Lina Wassong. Supervised by Prof. Franziska Morlok and Prof. Dr. Marian Dörk.


An FH Potsdam project
In this seminar, we have been working on visual data essays at the intersection of editorial design and information visualization. Interdisciplinary teams of European Media Studies and Design students developed essays to communicate how different aspects of social and political discrimination overlap with gender, also known as intersectionality.

Want to see more projects? Visit the FHP feminist scrollytelling page.


Sources
A female voice assistant wave is coming.
Bovermann, P. (March 14, 2019). Halt den Mund, Alexa! Retrieved on June 29, 2019, from Sueddeutsche Zeitung
Whitwell, J. (January 29, 2019). Why are they all women? Retrieved on June 29, 2019, from unbabel
Lever, E. (April 26, 2018). I was a Human Siri. Retrieved on June 29, 2019, from NY Intelligencer
dpa (February 01, 2019). Warum sind Sprachassistentinnen weiblich? Retrieved on June 29, 2019, from Zeit
Meet Q The First Genderless Voice. Retrieved on June 29, 2019, from Genderless Voice
Harwell, D. (July 19, 2018). The Accent Gap. Retrieved on June 29, 2019, from Washington Post
Schnoebelen, T. (July 11, 2016). The gender of artificial intelligence. Retrieved on June 29, 2019, from Figure Eight

Where are all the female job candidates?
Najberg, A. (July 10, 2017). Women’s Conference Highlights Need Efforts to Erase Inequality. Retrieved on June 29, 2019, from Alizila
Greenfield, R. (August 07, 2018). Black Women’s Top Jobs Pay Half What White Women’s Do: What happens when the racial pay gap meets the gender gap. Retrieved on June 29, 2019, from Bloomberg
Lewanczik, N. (October 15, 2018). Kann man Bewerbungen über KI richtig evaluieren? Amazons Projekt lässt zweifeln. Retrieved on June 29, 2019, from Online Marketing
Race to Lead: Women of Color in the Nonprofit Sector (February 06, 2019). Retrieved on June 29, 2019, from AFP Global
Rosenbaum, E. (May 30, 2018). Silicon Valley is stumped: AI cannot always remove bias from hiring. Retrieved on June 29, 2019, from CNBC
van Kampen, J. (February 5, 2019). Employers’ Use of Artificial Intelligence to Screen Applicants Can Raise Discrimination Alarms. Retrieved on June 29, 2019, from NC Employment Attorneys
Westfall, B. (April 10, 2019). Recruiters Beware: AI can discriminate. Retrieved on June 29, 2019, from Capterra

Don’t tell me what I have to be, Mr. Search engine
O’Neil, C. (2016). Weapons of Math Destruction. Chapter 7, page 103. Crown Books.
Noble, S. (February 2018). Algorithms of Oppression. Chapter 2, page 86. NYU Press.
Rudnicka, J. (September 2018). Frauenanteil in der Professorenschaft in Deutschland im Jahr 2017 nach Bundesländern. Retrieved on June 30, 2019, from Statista

Why is the facial recognition not recognizing me?
Samuel, S. (April 19, 2019). Some AI just shouldn’t exist. Retrieved on June 29, 2019, from Vox
Buolamwini, J. (February 07, 2019). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It. Retrieved on June 29, 2019, from Time
Kleinman, Z. (February 04, 2019). Amazon: Facial recognition bias claims are ‘misleading’. Retrieved on June 29, 2019, from BBC
Horwitz, J., Quartz (June 2018). Accuracy rate for gender identification, by sex and skin color. Retrieved on June 29, 2019, from The Atlas

Images
Black woman with white mask, inspired by Joy Buolamwini.
Oprah Winfrey identification, inspired by an experiment of Joy Buolamwini.
Buolamwini, J. (February 07, 2019). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It. Retrieved on June 29, 2019, from Time