How AI Image Generators Make Bias Worse

Buzzfeed recently published a now-deleted article on what AI thinks Barbies from different countries around the world would look like.
The results contained extreme forms of representational bias, including colourist and racist depictions, a failure mode that AI image generators are especially prone to.
With AI image generators like Midjourney, Stable Diffusion, and DALL-E gaining huge popularity, it's important that we stay vigilant about the forms of bias these technologies can fuel.
* * * *
This video was inspired by an LIS undergraduate student's end-of-first-year project, 'Beyond the Hype: Understanding Bias in AI and its Far-Reaching Consequences' by Ana Howard.
Presentation Link: • Ana x The Conduit - Be...
In the third term of each year, students select a complex problem they personally feel passionate about. They then apply the skills they've learnt in interdisciplinary thinking, including qualitative and quantitative research methods, in order to tackle their topic and unearth original insights.
At LIS - The London Interdisciplinary School, we believe that solutions to the world's most complex problems won't come from a single specialism. We need to bring together knowledge and expertise from across the arts, sciences and humanities.
If you're interested in our unique interdisciplinary approach to higher education, explore our degree offerings today:
Bachelor of Arts and Science (BASc) Degree - www.lis.ac.uk/undergraduate-d...
Master's of Arts and Science (MASc) Degree - www.lis.ac.uk/graduate
* * * *
Video Chapters:
0:00 Intro
0:46 Bias in Job Representation
1:55 Barbie Bias
2:24 Generative Adversarial Networks
3:09 Negative Feedback Loops
4:08 How do we stop bias from getting worse?
6:20 Collingridge Dilemma
7:22 Outro
References:
1) Europol paper on Deepfakes and Generative AI - www.europol.europa.eu/publica...
2) ‘Humans Are Biased. Generative AI Is Even Worse’ - Bloomberg Technology article by Leonardo Nicoletti and Dina Bass - www.bloomberg.com/graphics/20...
3) UTK Face Dataset by Zhang, Song & Qi - susanqq.github.io/UTKFace/
4) ‘Turing Lecture: Data science or data humanities?’ by Melissa Terras - • Turing Lecture: Data s...
5) ‘Corporate Accountability’ by Lucy Suchman - robotfutures.wordpress.com/20...
6) ‘Principles alone cannot guarantee ethical AI’ by Brent Mittelstadt - www.nature.com/articles/s4225...
7) ‘Ethics from Within - Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy’ by Olya Kudina and Peter-Paul Verbeek - journals.sagepub.com/doi/10.1...
8) ‘Joy Buolamwini: Examining Racial and Gender Bias in Facial Analysis Software’ by Barbican Centre - artsandculture.google.com/sto...
Further Reading & Watching:
’Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence’ by Kate Crawford
‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’ by Cathy O’Neil
’Coded Bias’ by Dr. Joy Buolamwini

Comments: 100

  • @DavidBlatner · 11 months ago

    The topic is fascinating and important, and the images (and especially the AI-generated videos with lip-synched audio) are stunning.

  • @SankofaSongs · 10 months ago

    Thank you for this very important post. Given that 'seeing is believing' for many, developing critical skills for creating, curating, and viewing content is truly essential.

  • @causeitso · 6 months ago

    Isn't it also a bias to assume that the janitor or the social worker professions are inferior to doctors and engineers?

  • @vladoportos · 11 months ago

    Was Midjourney trained on US-only images? How about worldwide statistics? Looks like mixing up the terms "reality" and "fairness"... you get all-male top-CEO pictures because that's what you asked it to do... if you want 50/50 you need to ask for it with the correct prompt.

  • @originalwhig · 11 months ago

    Ask yourself the opposite question: what might unbiased images look like? Bet you'll find that really difficult but, even if you can come up with a description, you won't be able to find images that "everyone agrees with". The problem is that the word "biased" suggests there is some kind of neutral, objective reality that we can all agree on - but we can't. Social issues are not mathematics. All that generative AI does is reflect the world and our experiences back to us. It's not "bias" at all.

  • @gedr7664 · 11 months ago

    but there is a mathematical foundation in the statistics that were shown before, at least with regard to jobs and gender ratios. On the other points I'm not sure if data exists

  • @gedr7664 · 11 months ago

    of course this would be biased towards the US

  • @engerim · 11 months ago

    all these biases and social norms exist even without AI. So why regulate it? It's called recency bias..

  • @BrettCooper4702 · 10 months ago

    The data set is biased.

  • @camrodam · 10 months ago

    I think you entirely missed the section on representational bias.. which is exactly the issue you claim can't be fixed.

  • @user-ue8li7ni4b · 5 months ago

    What editing software did you use when editing this video?

  • @jshap31 · 11 months ago

    The feedback loop point is really interesting

  • @FPA4 · 10 months ago

    The clip at around 6:50 of the politician asking if the TikTok app accesses the home WiFi appears to be used in this video to demonstrate a politician not understanding the technology, while in effect what he was asking was if the TikTok app was mapping the user's home WiFi network and sending the data back to China. One of the reasons someone might wish to do this would be, at some time in the future, to infect via the app a vulnerable router which has had its internet-facing management disabled as a security measure. Compromising the router from the device side is relatively easy and would add one more node to a home-router based botnet. Botnets of tens of thousands of home routers, all connected to the network, can be used to knock out websites via DDoS attacks and frequently are. I found the video fascinating until that point, which sorta ruined the overall impact for me after that...

  • @eiRuNLiMiteD · 11 months ago

    why did buzz feed delete the barbie article? AI terrifies me.

  • @rodnee2340 · 8 months ago

    Because it is a waste of time even for them! AI sucks, but there are much worse things it is doing. Like destroying the careers of artists and musicians.

  • @BrettCooper4702 · 10 months ago

    Gender and skin colour can be defined in the text prompt. Using minimal text prompts does show a bias, but the user can correct that with good prompt engineering.

  • @LetterSignedBy51SpiesWasA-Coup · 10 months ago

    It's not really a bias. AI is showing you the most common result, and if you want something different from reality, that's your bias; you are welcome to introduce it with parameters.

  • @Razumen · 10 months ago

    @@LetterSignedBy51SpiesWasA-Coup AI doesn't reflect reality, it reflects its training data. It will ALWAYS be biased, no matter how much people like this complain about it.

  • @armondtanz · 7 months ago

    @@Razumen So if you were to depict a festival concert being built, how would you depict the roadies and light fitters and the people who assemble the scaffolding?

  • @Razumen · 7 months ago

    @@armondtanz Try to come up with a question that's relevant, please.

  • @armondtanz · 7 months ago

    @@Razumen 100% relevant. Realists like me would say old rocker types, males, big build etc... Gen Z ("I'm offended by everything" types) would argue and call me a bigot... Literally 100% legit. Stop gatekeeping people's points???

  • @JeffreyHamlin · 10 months ago

    Pandora's box has been opened. Image datasets are but one problem in bias; consider insurance, where GLMs are being used in risk assessment, medicine, etc. These models are in use around the world and being trained on new data all the time. How can we possibly put the genie back in the bottle?
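    The proxy-variable risk in GLM-style scoring that this comment gestures at can be sketched with synthetic numbers. Everything below is invented for illustration: the two groups, the 90% group-to-postcode correlation, and the "fitted" GLM weights are assumptions, not data from any real insurer.

    ```python
    import random
    random.seed(0)

    # Toy illustration of proxy bias: the risk score never sees the protected
    # attribute, but a correlated feature (postcode) reintroduces it.
    def make_person():
        group = random.choice(["A", "B"])
        if random.random() < 0.9:
            postcode = 1 if group == "B" else 0  # postcode tracks group 90% of the time
        else:
            postcode = random.randint(0, 1)      # otherwise uninformative
        return group, postcode

    def risk_score(postcode):
        # linear predictor of a hypothetical fitted GLM; weights are made up
        return 0.2 + 0.5 * postcode

    people = [make_person() for _ in range(10_000)]
    means = {}
    for g in ("A", "B"):
        scores = [risk_score(pc) for grp, pc in people if grp == g]
        means[g] = round(sum(scores) / len(scores), 3)

    print(means)  # group B is scored higher on average, though 'group' is never used
    ```

    This is why simply deleting the sensitive column from a dataset does not, on its own, "put the genie back in the bottle".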

  • @lovely-shrubbery8578 · 11 months ago

    Yeah, it's a fantastic idea to artificially modify datasets for political reasons when using AI. That couldn't go wrong at all.

  • @batsy3 · 11 months ago

    no, that's dumb, the point is to make the data sets the same as real life

  • @greenockscatman · 11 months ago

    Datasets are just someone cropping a bunch of images to 512 x 512 and writing a note on each image saying what it's a image of. It's not really feasible for governments to regulate that.

  • @armondtanz · 7 months ago

    @@batsy3 that was a sarcastic post???

  • @brackcycle9056 · 10 months ago

    There is bias in this too... you portray being a CEO & being a criminal as being different.

  • @MikhailKutuzov-wx2gy · 4 months ago

    Okay, yes to all the contents in this video. Generative AI does produce stereotypical outputs, and programmers should and do treat those biases in datasets all the time (with mixed results). We understand that AI cannot be trustworthy if it learns biased associations or propagates social injustices. But not all algorithmic bias originates from training data. There's a misconception that all AI does is "hold up" a mirror to society, but the AI experts who actually model algorithmic bias look at bias sources from across the entire AI learning process. This ranges from intrinsic tendencies in the neural net (like temporal biases) to how users sometimes attribute "objectivity" to an algorithmic output. I think the problem with the "mirror" idea is that it gives AI enthusiasts an excuse to say, "oh, AI is not biased, we are." Or even, "aha, there was no bias. We just live in an unfair world." Sometimes that's true, other times not. Bias is a relative concept. You can train an AI to reflect your prejudices about the world, or you can "naturalize" your prejudices by assuming a stereotypical output is an accurate reflection of reality. It may not be. Some things to pay attention to are what data is included in the model, how that data is labeled, and what reinforcement protocols are used to train your product. My argument, just as somebody who works with generative AI every day and teaches AI ethics, is that there is no magic escape route that lets either users or programmers avoid really reflecting on how AI works and how it is biased, or avoid being honest about what their expectations for AI are. You should not look to AI for an accurate description of the world (that's what other academic scholarship is for) because AI models have to be selective about their data pipeline to deliver the particular functionality desired.

  • @Exegesis66 · 2 months ago

    Def. need to define "bias" in all discussion of this. The issue of representational bias has to do with whether AI should show the world as it is, or as it could be, right? If my daughter asks Dall-E to produce a picture of a CEO and sees only men, that reinforces the idea that this career path is not for women. Surely within the four images it offers one can be a woman. The same with astronaut, doctor, lawyer, but also bricklayers and construction workers. What they will do eventually is build in diversity as the default and then you can search more specifically in your prompt after that.

  • @user-uj1ub2zr9z · 11 months ago

    Wow this has really made me think. Great vid

  • @JS-vn9zj · 11 months ago

    Fantastic use of AI multimodal tools to make the case and show how bias manifests and spreads through use of AI. Please keep creating and sharing these exceptional learning tools.

  • @jasonchatto · 17 days ago

    The only bias I see is in the narrative. AI just reflects reality.

  • @makaila8860 · 11 months ago

    yea..ai is scaring me

  • @poisonapple146 · 5 months ago

    Everyone who uses the internet should take the time to understand feedback loops. That’s exactly what you get when you spend a lot of time on apps that customize content based on past viewing habits etc. When I notice that I’m constantly getting the same type of content I make a point to search for totally new topics to switch things up.
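    The feedback loop described above can be simulated with toy numbers. The assumptions are loud ones: the amplification strength of 1.2 and the 50/50 mix of original and scraped AI output are invented for illustration, not measured from any real model.

    ```python
    # Toy simulation of a representational feedback loop: a generative model
    # slightly exaggerates the majority class in its training data, its outputs
    # are scraped back into the next training set, and the skew compounds.

    def amplify(p_majority, strength=1.2):
        """Exaggerate an imbalance by raising the odds to a power > 1."""
        odds = p_majority / (1 - p_majority)
        new_odds = odds ** strength
        return new_odds / (1 + new_odds)

    p = 0.6  # e.g. fraction of images tagged 'CEO' that show men, initial dataset
    history = [p]
    for _ in range(5):
        generated = amplify(p)         # what the model tends to output
        p = 0.5 * p + 0.5 * generated  # next dataset: half original, half AI output
        history.append(p)

    print([round(x, 3) for x in history])  # the proportion drifts upward each round
    ```

    Any starting imbalance above 50% grows monotonically under these assumptions, which is the "negative feedback loop" mechanism the video's chapter refers to.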

  • @africaart · 10 months ago

    you can prompt it to change the race or gender of your result

  • @izzyworks8789 · 11 months ago

    Ignoring those who wish to pander, AI bias will ultimately revolve around the grey zone where one's frame of reference is part of the facts; we can't model our reality, only approximate it. So I'd use that as an argument that this bias is a free-market problem to be solved. We need to protect against monopoly and censorship of ideas. We need to support open source and, ideally, regulate the technology as a utility, not a luxury service. In a perfect world, there would be models built by government branches, specialized NGOs, for-profit private entities, artists, public companies, etc. Let the effectiveness of said data be the real measure; garbage in, garbage out will do the weeding.

  • @Itsprez93 · 9 months ago

    I don't usually comment on ads that appear on videos I watch, but this one triggered me. The quote at the start, that humans are biased but generative AI is even worse? It should read the other way round: AI is the result of humans, so the problems will always lie with humans. The dataset put into the algorithm would have to be regulated, as you said in the video, because preconceptions and biased views sound like where the problem starts.

  • @ImprovementGang · 11 months ago

    This NEEDS to be discussed! Stories by nature are stereotypical, so it's NO surprise that images, films, and other media forms have a dilated perspective of the world. Where the data comes from matters too. If the database came from India, I'd bet the images would be dramatically different. Also, who is to say what is fair representation? I am Latino and have always loved rap, and I want to listen to GREAT rap artists NO matter their ancestry. It's about HOW we identify. Identifying with a group because of skin color/ancestry is just TOO simple. I'd say we should start to align with principles, ideas, and values RATHER than your ancestry. Is that such a bad idea? Or is it too simple as well?

  • @armondtanz · 7 months ago

    In woke community, thats white suprem talk!

  • @anthonyprice1743 · 7 months ago

    Just keep capitalizing non proper nouns. It'll get picked up 😂

  • @spookymulder945 · 11 months ago

    What? Central and South America have A LOT of lighter-skinned people because many Italians, Spanish, Germans, French and other Europeans migrated there. We are not all short and brown. Lol

  • @JojoZarrapastroza · 11 months ago

    Good for you!

  • @lexaviles1378 · 11 months ago

    As someone who uses AI and loves what it’s been doing for me. This is 💯 important to discuss.

  • @cristianymiguel6432 · 10 months ago

    Next time try prompts on Leila Lopes (Miss Universe 2011) as African Barbie.

  • @LetterSignedBy51SpiesWasA-Coup · 10 months ago

    It's not really bias. AI is showing you the most common result, and if you want something different from reality, that's your bias; you are welcome to introduce it by refining your request, such as "black scientist" or "Asian inmate."

  • @thetranstan · 10 months ago

    the “most common result” is drawn from available datasets, not straight from real world numbers. one of the arguments here is that there is no such thing as a neutral or ‘objective’ database, so any data used by AI generators is already skewed by bias.

  • @smule77 · 11 months ago

    AI depicts more or less what's real - and not some utopia where everybody can be anything. That's just not how it is. Instead of going on about why AI is biased and "bad" (when it's not really), it would be much wiser to tell people - from a very young age - that they don't have to take stereotypical depictions of society too much to heart and follow what they want to do and be who they choose to be. But one should also be honest that most people don't end up where they dreamt themselves to be - most people will have to make some compromises in their lives. If you're smart enough to study medicine no one is going to stop you because you're female and doctors "are usually male". Your chances depend a lot more on individual circumstances like class or ability and determination than on "harmful stereotypes" which really only harm those who allow themselves to get influenced by it. That's the devil that should be fought, not the stupid AI pictures.

  • @Razumen · 10 months ago

    "AI depicts more or less what's real" Wrong, it depicts what it's trained on. These models are not AI, and have no ability to distinguish what's "real", much less any sort of idea what "real" means.

  • @ardoren5442 · 10 months ago

    @@Razumen What PRECISELY leads you to believe that the datasets don't accurately mirror the real world? Is it based on your own biased perspective of how you wish the world's numbers to be? If you're not familiar with the dataset's composition and its creation process, how can you definitively assert that it exhibits an undesirable bias and doesn't genuinely portray reality as it is?

  • @Razumen · 10 months ago

    @@ardoren5442 All datasets are biased, especially when we're talking about things like photographs, which have to be taken and collected by someone, who will be affected by their own biases, whether they are part of themselves, or external biases imposed on them by their environment. A better question is, how can YOU know they do represent reality accurately? If that's what they claim to do, and you think that's what they do, you should be able to confirm this somehow.

  • @SergyMilitaryRankings · 7 months ago

    @@Razumen in America most high-paying jobs are held by men, most white-collar jobs are held by white people. These are not OfFeNsIvE or rAcIsT, it's just reality

  • @greenockscatman · 11 months ago

    Well the solution to this problem would be to make more datasets of underrepresented folks and train AI with them. Funnily enough, what might be missed out in the "skin tone" discussion is that the AI datasets you have in, let's say, Stable Diffusion models aren't actually biased towards overrepresenting what you might think of as European features; instead they tend to be skewed towards East Asian features.

  • @Fres-no · 11 months ago

    Food for thought...great vid!

  • @petem3883 · 10 months ago

    An AI portraying the real world accurately means that the AI is well designed.

  • @thetranstan · 10 months ago

    the AI portrays *representations* of the real world based on datasets that are in turn also representations of the real world. available datasets are not one-for-one statistics for real world numbers. a representation of a representation often distorts the original - the video specifically talks about this ‘feedback loop’.

  • @Gauldoth06 · 10 months ago

    @@thetranstan "feedback loop" where did you study and what do you do for a living? A "feedback loop" is a separate problem from what people call "AI bias". "available datasets are not one-for-one statistics" again, this is not a problem with the data but with our world. If you really think that the average cleaner or fast-food worker is white then you need a reality check. Our world sucks. What will you do about it? Fight for equal rights or pretend that the problem is not happening by forcing statisticians to produce actually biased data? 🤡 You are not helping.

  • @ardoren5442 · 10 months ago

    @@thetranstan You seem to be suggesting that these representations might not be accurate. But what if they actually are? What if the datasets truly reflect the real-world numbers as they are? In that case, criticizing the representation would simply mean disagreeing with the factual state of affairs. For instance, if there are more men working as bricklayers and more women working as caregivers worldwide, and you dispute this representation, it doesn't necessarily imply that the dataset is biased. AI doesn't take into account your personal view of reality or the specific outcomes you desire from the representations. However, you can certainly customize your prompt to instruct the AI on the exact representation you're interested in. This can be done by introducing your own bias through the way you frame your input.

  • @anniecberry · 11 months ago

    Love it!!! Great topic

  • @jasonchatto · 17 days ago

    AI just represents reality, it is not bias. What YOU WANT TO DO, is make it bias. FACT.

  • @RobertLoud-ft4gk · 10 months ago

    I wonder if "AI" will fight against "AI". Is that possible?

  • @inongekhabele · 9 months ago

    So... we should train AI to depict the utopian world we want, instead of the factual data of the world as it is. Is that not deception?

  • @tareaslizeth4376 · 9 months ago

    Thank you for this useful information.

  • @feagaifaalavaau392 · 11 months ago

    Im getting INTERFACE vibes rn.

  • @user-fp8ov6gc3m · 11 months ago

    Woah!

  • @gigicollins3498 · 11 months ago

    Skynet

  • @extremekiller1205 · 3 months ago

    Having to give an example of ‘93% of prisoners are men’ shows how woke it is to represent all jobs equally by gender. I wish a future where it is not needed to give examples like that to counter-argue woke ideology.

  • @marthakatharina3491 · 9 months ago

    AI is just holding a mirror to us. It's time to face our own biases.

  • @j7ndominica051 · 1 month ago

    Woke people want to have their cake and eat it too, eh? If you want an even split among the CEOs, then you also have to have women among the plumbers, construction workers and other dirty, taxing jobs, and among the prisoners. You can't pick and choose. It's up to the artist to show the aspects he wants. With AI, he can try to give a prompt that combines less common attributes. How would you feel if there was a law requiring that you switch your talking CEO mid-programme to a more standard one for fair representation? The AIs are already heavily limited for NSFW and deepfakes.

  • @sierralvx · 9 months ago

    I don't see why this is so alarming. The AI is not to blame for these biases, but the existing stereotypes and prejudices that already exist in culture and are shared on the internet. That's all the AI is drawing from, so of course it would make these images if requested to. To treat the software as biased itself, like it has agency or ill intent, isn't fair since it's only a generator, not a creator. To put it another way, it's a mirror of humanity, both good and bad. Stop using AI image generators altogether and you avoid this.

  • @cortext_io · 9 months ago

    Why have government regulation, on this specific issue? Create the option to select "Make the AI generated images I see representative of [Reality, Population, Birth Rates, What my leaders want for me, What'll make me feel good]"... You're saying "It's good that THAT is true for THEM but not for THEM OVER THERE"... that is a fault in the truth telling of this video

  • @MorganBarbarian · 7 months ago

    Based ai

  • @DanODNC1 · 10 months ago

    The gun wasn't included by a prompt? Really? I smell bullshit.

  • @matulopez5347 · 10 months ago

    Kek

  • @Edgeye · 9 months ago

    They are not biased. Most people from Latin American countries (indigenous and original) were not black. This is not biased, you are biased; this is not racist, you are. You are racist for pushing on people and persisting the belief that for something to be equal, to be fair, to be racially accepted, it must include black people entirely. No, it must not. We don't go into Africa saying that there should be more white people there and that they're racist just for their world views and traditions, do we? So why should white people have to face this constantly exercised lack of respect? Not everything needs to be about black people, not everything needs to be about white people. Just because an AI showed Latin American Barbie dolls as LATIN AMERICAN and not African does not mean it is racist. ALSO, WHY SHOULD IT BE AFRICAN WHEN THE DOLLS ARE LATIN AMERICAN?

  • @GQ2593 · 11 months ago

    Reality isn't egalitarian. AI understands this, woke academics not so much.

  • @thetranstan · 10 months ago

    AI only understands the available datasets, which any academic knows is not the same thing as real world numbers or “reality”

  • @petem3883 · 10 months ago

    Seems like the problem here is that you have a woke bias.

  • @armondtanz · 7 months ago

    This sums up woke BS. 'We want a 50/50 in the good stuff, but not a 50/50 in the bad stuff.' Slow handclap; you're saying the quiet parts out loud...

  • @filicefilice · 8 months ago

    Save Ukraine!

  • @SergyMilitaryRankings · 7 months ago

    Why do you want to save Nazi

  • @weelewism8442 · 8 months ago

    first world problems much? 😂

  • @404TVfr · 10 months ago

    Cringe.

  • @SergyMilitaryRankings · 7 months ago

    Who cares ?

  • @SergyMilitaryRankings · 7 months ago

    So you're upset at statistical reality lmao

  • @Razumen · 10 months ago

    Yes, because a black coat jacket with no other distinguishing details is SOOOO reminiscent of a Nazi SS uniform. 🙄 This video really reaches in its attempts to drum up outrage.🥱

  • @suburbanyobbo9412 · 8 months ago

    What the conclusions illustrate first and foremost is the bias of the author.