Voice applications are having a bigger impact on our lives than ever before. By 2023, the usage of voice assistants is expected to grow by 35 %. Even though voice assistants like Amazon's Alexa, Google's Assistant, Microsoft's Cortana, and Apple's Siri are becoming more popular, they all contribute to a major problem: they're female. Gender cues such as a digital assistant's voice, name, and way of speaking allow the user to immediately identify its gender. However, in the United States 94.6 % of human administrative assistants are already female. Aren't virtual voice assistants a great opportunity to break our stereotypical gender roles? Click the play button and Bella will explain why female voice assistants can be quite problematic.
"Voice assistants are the digital maidservants of our time," says sound researcher Holger Schulz from the University of Copenhagen.
Positioning AI as an expert and an authority lends itself to a male persona like IBM's Watson assistant.
"As children grow up in dealing with Alexa and Co, this can have an impact on a society's understanding of gender roles," explains Miriam Meckel.
Voice assistants also discriminate against women with accents, because the software has trouble understanding their commands.
Q, the first genderless voice, was created to end gender bias in AI assistants by breaking down the gender binary.
The job market is also unfairly influenced by algorithms. In most countries, women don't have the same access to education, capital, networks, and high-paying or high-profile roles as men. It's not uncommon that women are even blocked by legal barriers from pursuing a profession. It very much remains a world run by men.
Companies increasingly use hiring and recruiting software driven by artificial intelligence (AI) to select applicants. These programs are the beginning of automating the human resource department's processes and might eventually replace traditional HR management. However, this autonomous hiring software discriminates against applicants whose resumes deviate from the norm, in two ways.
Direct AI discrimination occurs when an algorithm makes decisions based on sensitive or protected attributes in an applicant's personal information, such as gender, race, or sexual orientation.
Indirect AI discrimination is more common and much harder to prevent, because it occurs as a byproduct of non-sensitive attributes. These attributes don't fall into the categories listed above; the discriminating bias emerges when the software is trained on datasets that are, for historical reasons, too homogeneous.
In 2014, the online shopping platform Amazon wanted to solve its problem of manually ranking potential job candidates. Its solution was a new tool powered by machine learning and artificial intelligence. The tool compared words and phrases of submitted resumes with past resumes, as well as the resumes of current Amazon employees. Each job applicant got a score based on detected similarities. However, the tool did something unexpected: because most IT positions at Amazon are held by men, it taught itself to penalize female applicants who applied for an IT position. Three years later, the tool was abandoned.
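How Amazon's abandoned tool worked internally has not been published. The mechanism of indirect discrimination can still be illustrated with a minimal, purely hypothetical sketch: a toy resume scorer is trained on invented historical hiring decisions, gender is never an input feature, and the model nevertheless learns to penalize a word that merely correlates with past rejections.

```python
# Hypothetical sketch of indirect AI discrimination. All "historical" data
# below is invented; gender itself is never a feature, yet the model learns
# to penalize a token that correlates with it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "java developer chess club captain",             # hired
    "python engineer rugby team lead",               # hired
    "c++ programmer robotics club member",           # hired
    "java developer women's chess club captain",     # rejected
    "python engineer women's coding club founder",   # rejected
    "c++ programmer women's robotics club member",   # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)        # bag-of-words features
model = LogisticRegression().fit(X, hired)   # "scoring" model trained on past decisions

# The learned weight for the token "women" comes out clearly negative,
# even though no protected attribute was ever part of the input.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10s}: {weight:+.2f}")
```

The point of the sketch is not that Amazon's system worked exactly this way, but that any model trained on homogeneous historical decisions can turn a harmless-looking word into a proxy for gender.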
Moreover, black women are oppressed along multiple social and political dimensions. A major factor in the black-white gap is occupational sorting, the clustering of demographic groups into certain jobs and fields. Women tend to be clustered into lower-paying jobs more than men, and black people tend to be clustered into lower-paying jobs relative to white people. In the U.S., the most popular jobs for white women pay almost twice as much as the most popular jobs for black women.
Data shows that education and job experience are not enough for women of color to gain better career opportunities. Even with the highest levels of education, they are still more likely to be in administrative roles rather than senior leadership positions when compared to people holding a similar degree. Women of color are also paid significantly less than men of color and more frequently report frustration with inadequate salaries.
Compared to humans, computers should be the better decision makers. They are designed to avoid human biases and faulty logic, and they have no emotions that could influence their decisions. But have you ever thought about who actually built these machines and wrote their software? Humans! Developing "decision-making software" can be quite complex, and it comes down to sophisticated mathematical models that are run as algorithms. These algorithms process a lot of data and calculate the most reasonable answer, like search results, insurance premiums, or credit ratings.
Cathy O'Neil, author of the book Weapons of Math Destruction, worked as a mathematician and developed decision-making software herself. She realized that "[…] software often encodes poisonous prejudices, learning from past records just how to be unfair." The tricky part is that these algorithms incorporate the prejudices and misconceptions of their creators and learn from human behavior. They therefore do not actually avoid human biases and faulty logic, and they make millions of unfair decisions. In a nutshell, algorithms are harmful because they tend to penalize the less privileged and benefit the most privileged.
The same principle applies to the algorithms of big search engines: they learn from our searching behavior on the Internet. Search results are more or less a mirror of our modern society searching and clicking online. As Safiya Umoja Noble states in her book Algorithms of Oppression, "[…] dominant groups are able to classify and organize the representation of others, all the while neutralizing and naturalizing the agency behind such representations."
On this note, underrepresented, marginalized, and oppressed groups can't represent themselves properly on the Internet. They don't have the manpower or even Internet access to feed the algorithms and influence the search results. Therefore, their online representation is created by a biased group that dominates the World Wide Web.
A Google search for a specific profession is an easy way to show how these search algorithms shape our perception. Among the first 200 images of a Google image search for the keyword "professor" that showed a single adult person, 93 % depicted male professors and only 7 % female professors. In 2017, however, 24.1 % of professors in Germany were female.
Doesn't everybody have the same opportunities in our western society? Actually, no! Just take a look at facial recognition software. Even though the application of facial recognition is growing fast and becoming handy for unlocking our smartphones or helping to detect criminals, more and more cases of discrimination are leaking to the public. Uber, a transportation network company, uses facial recognition as a security feature to identify its drivers. Yet its "Real-Time ID Check" cannot identify people who are undergoing a gender transition. There are multiple cases of transgender Uber drivers whose accounts got deactivated because the software didn't recognize them. Furthermore, facial recognition is reliable at identifying white people but much worse at recognizing black people, especially black women. This caused public outrage in 2015, when Google's image-recognition system labeled an African-American woman as a gorilla.
Joy Buolamwini, a computer scientist and the founder of the Algorithmic Justice League, experienced first-hand how some facial recognition software wasn't able to detect her dark-skinned face until she put on a white mask. The fact that those systems are trained on datasets of predominantly light-skinned men made her realize the impact of this exclusion.
In her research, she uncovered large gender and racial biases in AI systems sold by tech giants, which contradicts the general understanding that machines are neutral. In experiments where the facial recognition software had to classify the gender of a face, all evaluated companies performed substantially better on male faces than on female faces. The companies she evaluated had error rates of no more than 1 % for white men; for black women, the errors soared up to 35 %. Some AI systems even failed to correctly recognize prominent faces like Oprah Winfrey, Michelle Obama, and Serena Williams.
Buolamwini found one government dataset of faces that was used to train facial recognition software. The dataset contained 75 % men, 80 % lighter-skinned individuals, and less than 5 % women of color. This underrepresentation of women and people of color in technology and their absence in big pools of data is also known as the "pale male data" problem. It leads to technology that is optimized for a small and privileged group and that misrepresents and mistreats many people in our society.
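The evaluation idea behind these findings can be sketched in a few lines: instead of reporting a single overall accuracy, the error rate of a gender classifier is computed separately for each gender and skin-type subgroup, which is what exposes the gap described above. The predictions and group sizes below are invented for illustration and do not reproduce Buolamwini's actual benchmark.

```python
# Minimal sketch of disaggregated evaluation: one headline accuracy number
# can hide large error gaps between demographic subgroups.
import pandas as pd

# Hypothetical evaluation set: true gender and skin type of each face,
# plus the gender label predicted by some (fictional) commercial classifier.
data = pd.DataFrame({
    "true_gender": ["male"] * 8 + ["female"] * 8,
    "skin_type":   (["lighter"] * 4 + ["darker"] * 4) * 2,
    "predicted":   ["male"] * 8          # all men classified correctly
                 + ["female"] * 4        # lighter-skinned women classified correctly
                 + ["male"] * 3          # three darker-skinned women misclassified
                 + ["female"],           # one darker-skinned woman classified correctly
})

data["error"] = data["true_gender"] != data["predicted"]

# A single overall number looks tolerable ...
print("overall error rate:", data["error"].mean())          # 0.1875

# ... while the disaggregated view shows where the failures concentrate.
print(data.groupby(["true_gender", "skin_type"])["error"].mean())
```

The real Gender Shades audit used a purpose-built benchmark of parliamentarians' faces annotated with Fitzpatrick skin types; the sketch only shows why disaggregated error rates, rather than overall accuracy, reveal this kind of bias.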
"The main message is to check all systems that analyze human faces for any kind of bias. If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are also completely bias free."
– Joy Buolamwini
"The problem with programming is not that the Computer isnât logical â the computer is terribly logical, relentlessly literal-minded. Computer are supposed to be like brains, but in fact they are idiots, because they take everything you say at face value.â
– Ellen Ullman, Life in Code
Credits
A project by Ariane Kaiser, Diana Gert, Ekaterina Oreshnikova and Lina Wassong. Supervised by Prof. Franziska Morlok and Prof. Dr. Marian Dörk.
An FH Potsdam project
In this seminar, we have been working on visual data essays at the intersection of editorial design and information visualization. Interdisciplinary teams of European Media Studies and Design students developed essays to communicate how different aspects of social and political discrimination overlap with gender, also known as intersectionality.
Want to see more projects? Visit the FHP feminist scrollytelling page.
Sources
A female voice assistant wave is coming.
Bovermann, P. (March 14, 2019). Halt den Mund, Alexa! Retrieved on June 29, 2019, from Sueddeutsche Zeitung
Whitwell, J. (January 29, 2019). Why are they all women? Retrieved on June 29, 2019, from unbabel
Lever, E. (April 26, 2018). I was a Human Siri. Retrieved on June 29, 2019, from NY Intelligencer
dpa (February 01, 2019). Warum sind Sprachassistentinnen weiblich? Retrieved on June 29, 2019, from Zeit
Meet Q, The First Genderless Voice. Retrieved on June 29, 2019, from Genderless Voice
Harwell, D. (July 19, 2018). The Accent Gap. Retrieved on June 29, 2019, from Washington Post
Schnoebelen, T. (July 11, 2016). The gender of artificial intelligence. Retrieved on June 29, 2019, from Figure Eight
Where are all the females in Hiring applications?
Najberg, A. (July 10, 2017). Women's Conference Highlights Need Efforts to Erase Inequality. Retrieved on June 29, 2019, from Alizila
Race to Lead: Women of Color in the Nonprofit Sector (February 06, 2019). Retrieved on June 29, 2019, from AFP Global
Rosenbaum, E. (May 30, 2018). Silicon Valley is stumped: AI cannot always remove bias from hiring. Retrieved on June 29, 2019, from CNBC
van Kampen, J. (February 5, 2019). Employersâ Use of Artificial Intelligence to Screen Applicants Can Raise Discrimination Alarms. Retrieved on June 29, 2019, from NC Employment Attorneys
Westfall, B. (April 10, 2019). Recruiters Beware: AI can discriminate. Retrieved on June 29, 2019, from Capterra
Greenfield, R. (August 07, 2018). Black Women's Top Jobs Pay Half What White Women's Do: What happens when the racial pay gap meets the gender gap. Retrieved on June 29, 2019, from Bloomberg
Lewanczik, N. (October 15, 2018). Kann man Bewerbungen über KI richtig evaluieren? Amazons Projekt lässt zweifeln. Retrieved on June 29, 2019, from Online Marketing
Don't tell me what I have to be, Mr. Search engine
O'Neil, C. (2016). Weapons of Math Destruction. Chapter 7, page 103. Crown Books.
Noble, S. (February 2018). Algorithms of Oppression. Chapter 2, page 86. NYU Press.
Rudnicka, J. (September 2018). Frauenanteil in der Professorenschaft in Deutschland im Jahr 2017 nach Bundesländern. Retrieved on June 30, 2019, from Statista
Why is the facial recognition not recognizing me?
Samuel, S. (April 19, 2019). Some AI just shouldn't exist. Retrieved on June 29, 2019, from VOX
Buolamwini, J. (February 07, 2019). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here's How to Solve It. Retrieved on June 29, 2019, from Time
Kleinman, Z. (February 04, 2019). Amazon: Facial recognition bias claims are 'misleading'. Retrieved on June 29, 2019, from BBC
Horwitz, J., Quartz (June 2018). Accuracy rate for gender identification, by sex and skin color. Retrieved on June 29, 2019, from The Atlas
Images
Black Woman with white mask inspired by Joy Buolamwini
Oprah Winfrey Identification inspired by an Experiment of Joy Buolamwini
Buolamwini, J. (February 07, 2019). Artificial Intelligence Has a Problem With Gender and Racial Bias. Here's How to Solve It. Retrieved on June 29, 2019, from Time