Can You Tell When A Video Is Fake?

Myles breaks down why deepfakes can cause so much damage and talks to Jabril, the host of YouTube channel @Jabrils, about how to spot them.
SUBSCRIBE so you never miss a video!
bit.ly/3tNKzhV
And follow us on Instagram and Twitter!
/ abovethenoisepbs
/ atn_pbs
Today’s internet meme culture thrives on Photoshopping images for comic effect. But there is a newer form of image manipulation using AI technology to create hyper-realistic alterations to video. These so-called "deepfakes" can be very difficult to detect or debunk.
**What are deepfakes?**
Deepfakes are videos that have been manipulated using AI technology. This tech can scale, rotate or splice videos and images together to alter videos so that they can be very different from the original and tough to spot as fakes.
**How do deepfakes spread?**
As AI tools become cheaper and more accessible, deepfakes are getting easier to create and harder to detect. Social media platforms like YouTube, Facebook and Twitter help to spread them, partly because of the ease and speed of sharing.
**How do deepfakes contribute to misinformation online?**
According to the Pew Research Center, 85% of teens use YouTube, and two-thirds of all Americans get at least some of their news from social media platforms. Right now, these companies are doing little to stop the spread of deepfakes on their platforms. High-profile figures like politicians and celebrities who have a ton of photos and videos on the internet are the most common targets for deepfakes, due to the huge volume of data available to manipulate. While they are often used for parody or satire, deepfakes can be used for more sinister purposes. Imagine what could happen if somebody created a fake video of a leader inciting violence or falsely accusing another politician. Experts say deepfakes could become a serious threat to our security and democracy in the near future.
**If I’m not famous, what kind of threat do deepfakes pose to me?**
Women are the most common victims of deepfakes. Usually, someone will splice the face of a woman they know into a pornographic or sexually provocative video to humiliate her. While good technical skills are required to create really convincing fakes, there are plenty of how-tos and other resources out there to get people started.
**How can we stop the spread of deepfakes?**
People and organizations are developing detection algorithms to spot deepfakes, but these methods usually lag behind the technology used to create them. Often, you can spot a deepfake by looking closely at eye movements and facial expressions. If a video looks a little suspicious, try to find another version of it online using a reverse image search. Also, the YouTube DataViewer shows you when a video was uploaded and provides thumbnails for reverse image searching. But the best way to stop the spread of deepfakes is to slow down your impulse to share, and to demand that social media companies do more to combat them.
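An editor's aside on how that reverse-image-search tip works under the hood: tools like this typically reduce each image to a short perceptual fingerprint and compare fingerprints, so a re-uploaded or lightly doctored frame still matches the original. Below is a minimal, purely illustrative Python sketch of one common technique (a "difference hash"); the toy image grids and sizes are made up for the example and real use would decode actual video frames.

```python
# Illustrative sketch of a perceptual "difference hash" (dHash):
# reduce an image to 1 bit per adjacent-pixel comparison, then compare
# hashes by Hamming distance. Near-identical images give near-identical
# hashes; unrelated images differ in many bits.
# Images here are toy 9-wide x 8-tall grayscale grids (lists of lists).

def dhash(pixels):
    """One bit per left/right neighbor comparison in each row."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
# A lightly altered copy (e.g. recompressed) hashes almost the same.
copy = [[min(255, p + 3) for p in row] for row in original]
# A structurally different image hashes very differently.
other = [[(255 - r * c) % 256 for c in range(9)] for r in range(8)]

print(hamming(dhash(original), dhash(copy)))   # small distance: likely a match
print(hamming(dhash(original), dhash(other)))  # large distance: different image
```

A small Hamming distance between hashes suggests the suspicious frame is a copy of a known original, which is the core idea behind matching a video frame against earlier uploads.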
SOURCES:
“This PSA About Fake News From Barack Obama Is Not What It Appears”
www.buzzfeednews.com/article/...
“Fake-porn videos are being weaponized to harass and humiliate women”
www.washingtonpost.com/techno...
“Does This Photograph Show President Bush Reading a Book Upside-Down?”
www.snopes.com/fact-check/bus...
YouTube DataViewer
citizenevidence.amnestyusa.org/
“Social Media Use in 2018”
www.pewinternet.org/2018/03/0...
“What to Watch for in the Coming Wave of 'Deep Fake' Videos”
gijn.org/2018/05/28/what-to-w...
Co-produced with Data & Society Research Institute. @datasociety
Special thanks to Data & Society Researcher Britt Paris
About KQED
KQED serves the people of Northern California with a public-supported alternative to commercial media. An NPR and PBS member station based in San Francisco, KQED is home to one of the most listened-to public radio stations in the nation, one of the highest-rated public television services, and an award-winning education program helping students and educators thrive in 21st-century classrooms. A trusted news source, leader, and innovator in interactive technology, KQED takes people of all ages on journeys of exploration - exposing them to new people, places, and ideas.
Funding for KQED’s education services is provided by the Corporation for Public Broadcasting, the Koret Foundation, the William and Flora Hewlett Foundation, the AT&T Foundation, the Crescent Porter Hale Foundation, the Silver Giving Foundation, Campaign 21 donors, and members of KQED.
#deepfakes #misinformation

Comments: 90

  • @AboveTheNoise · 5 years ago

    Hi all! Thanks for watching this video! We just caught a typo. In the Pew Research that we cite at 1:17, we incorrectly state that 73% of Americans use Facebook. The actual percentage is 69. Our apologies for the typo!

  • @5x3x1x5 · 4 years ago

    Above The Noise nice

  • @acejohnstonmafiawars · 4 years ago

    Now it is 73%...

  • @crashfan11 · 3 years ago

    Ahem... Nice

  • @lifenoggin · 5 years ago

    It'd be pretty difficult to recreate me

  • @AboveTheNoise · 5 years ago

    How can you fake perfection, Blocko? AI has met its match for sure.

  • @user-vn7ce5ig1z · 5 years ago

    I don't know; there's plenty of Blocko head-shots to use from all the videos, including from different angles…

  • @euph0rya672 · 4 years ago

    - can’t wait for a blocko family pies video

  • @Ceelvain · 5 years ago

    The amount of data required to train an AI is decreasing continuously. I've seen a recent result able to produce videos from a single image. It could make the Mona Lisa talk (both funny and creepy). The solution to revenge porn (be it fake or real) is to acknowledge that everyone has a sex life, and technology is part of it. People send nudes, make porn and watch porn. That's life. Private... life. People must learn not to judge others based on their private life. That's just common sense.

  • @AboveTheNoise · 5 years ago

    Thanks for watching and sharing. I will say that part of the problem many people have with "revenge porn" is that it means one person making a decision to make something private very public, without the consent of everyone involved. It's up to each of us to decide if that is fair or not on a personal ethics level, but it is not legal.

  • @Ceelvain · 5 years ago

    @@AboveTheNoise It's definitely neither moral nor fair. But if everybody were at ease with their own sexuality and that of others, revenge porn would not even be "a thing". By removing the consequence sought by the ex-bf, there would no longer be any motivation for it. And as a bonus, people would just be happier overall.

  • @AboveTheNoise · 5 years ago

    @@Ceelvain maybe in some cases... but I'm pretty sure many porn deepfakes are made by strangers who add the faces of women they either don't know or never actually had an erotic experience with to pornographic images of other people. Hard to imagine anyone ever feeling cool with that.

  • @bluji1250 · 4 years ago

    Jabril and the Crash Course video on algorithmic bias sent me here! Even though I was aware of the concept of deepfakes, the part about how to recognize them was new to me - and sadly likely to be quite useful in the future. Thanks!

  • @AboveTheNoise · 4 years ago

    Bluji thanks for coming over to check us out! We love Jabril and Crash Course. Hope you like our channel and subscribe!

  • @CD-gz1tr · 4 years ago

    Diana from Physics Girl sent me towards your channel. Really enjoying your content.

  • @AboveTheNoise · 4 years ago

    Thanks for taking Diana’s advice! We love Physics Girl! And glad you are enjoying our channel.

  • @cestlavegan5793 · 5 years ago

    I think this is just the beginning of a very complicated future. At this moment in history deepfakes are generally crude enough to be detected, but it's easy to imagine a time not too far ahead where the lines between real and fake will be far too blurry to differentiate the two. Flawless deepfakes could potentially be generated in realtime. For example, a hacker / scam artist might FaceTime you pretending to be a loved one, rendering sensitive information very vulnerable. Seems like audio and video will be worthless in terms of evidence for anything at that point. What can we do about it? No clue. But I believe truth and knowledge must always be pursued and protected if our species is to advance and evolve.

  • @AboveTheNoise · 5 years ago

    Those concerns are what drove us to make the video and bring the conversation into classrooms with young people. This is their future -- and we can't rely on technology alone to provide solutions. Critical thinking -- especially about the intentions behind the video message -- is so important.

  • @vvskiitlesvv · 4 years ago

    A big issue is that we are seeing it a lot, especially now. I saw one this morning of Adam Schiff from a news station.

  • @vvskiitlesvv · 4 years ago

    The biggest giveaways were the lack of body movement below the neck. The pixels didn't match, the lighting was very slightly off, and if you slow it way down to frame by frame you can point out when things (like ears, nose, jaw, and mouth) go a little wonky and don't line up.

  • @ToneyCrimson · 4 years ago

    YES! Me avoiding social media is paying off!

  • @thomasr.jackson2940 · 5 years ago

    Sharing on social media is essentially publishing. Don’t publish stuff you don’t know is real.

  • @AboveTheNoise · 5 years ago

    Good advice!

  • @CybershamanX · 5 years ago

    Thank you for talking about this. We need to do our best to get ahead of the coming onslaught of fake videos. Right now, the one thing in our favor is that you can't fake the voice, but make no mistake, fake computer voices are not far behind. Sadly, even with everyone talking about this now, there is still going to be a large segment of the population that is going to be fooled by this. I'm not sure what we're going to do about that. I guess we'll just have to make sure we have the knowledge and the know-how to both spot and then refute deepfake videos. And I think places like YouTube and Facebook are working on systems that will detect fake videos, but they likely have a long way to go before they are reliable. Hoo, boy, it's going to be a wild ride, man. Thanks again. I love your channel! Criminally undersubscribed. But, they will come! So, hang in there! :)

  • @mysticstrikeforce5957 · 2 years ago

    I was typing this in because for some reason people keep saying, even when a real video is out, that it's all fake just because someone is recording the situation. Now I know why they keep saying it.

  • @Aishaa_aa · 4 years ago

    But what can social media companies really do to stop this?

  • @zhubajie6940 · 5 years ago

    Jabril's awesome. See his YouTube channel.

  • @AboveTheNoise · 5 years ago

    Zhu Bajie we are big Jabril fans.

  • @ductuslupus87 · 4 years ago

    Jabril sent me from Crash Course.

  • @AboveTheNoise · 4 years ago

    Welcome!!! Glad you came to check us out!

  • @csgoclipz3737 · 2 years ago

    Jabril sent me

  • @AboveTheNoise · 2 years ago

    Jabril! We love him. Glad you checked us out!

  • @gigglysamentz2021 · 5 years ago

    4:34 for how to spot them. Thanks AtN ♥

  • @gigglysamentz2021 · 5 years ago

    0:54 for the comparison of real vs phony ;') Creepy... BUT COOL ! Liked eue That's what I shamefully was hoping for...

  • @grayovercast · 2 years ago

    You can also tell by the lighting. They never get that right.

  • @A-Viking · 5 years ago

    Phew, I guess all the time resisting the urge to sign up for Facebook and other "theft of one's digital life" programs is finally going to have been worth it :D

  • @Asteroid_Jam · 5 years ago

    Luckily I do not have my face on many photos

  • @user-vn7ce5ig1z · 5 years ago

    They called us recluses and weirdos for not spewing everything on social-media sites, but now who's laughing? 😀

  • @AboveTheNoise · 5 years ago

    @@user-vn7ce5ig1z You all sound lucky. But we need to think about the collective culture, don't you think? Even if YOU are not directly at risk, there are a LOT of people who are. Our goal is just to get more young people aware and thinking about this.

  • @SupLuiKir · 5 years ago

    You can't. Your only choice now is to disbelieve everything you see or hear.

  • @AboveTheNoise · 5 years ago

    Or at least think more critically before you believe or disbelieve. We like to think you have the choice to pause, analyze and reflect before sharing a video or image on social media.

  • @SupLuiKir · 5 years ago

    @@AboveTheNoise You gave the current methods to detect deep fakes, and you mentioned that the tech creating deep fakes and detecting deep fakes will be in an arms race, but you failed to follow that logic to its conclusion, that eventually it'll become completely impossible for humans to detect deep fakes at all. Once we get there, we'll have no choice but to disbelieve everything. And programs that detect deep fakes won't likely be trustable, anyways. If it's a black box, then it certainly won't be able to be trusted, since whoever's behind it can manipulate the results to declare inconvenient truths as fake or pass fake news as truth. Theoretically, deep fakes can become perfect at their mimicry, at which point there would be no errors for a deep fake detector program to find. Finally, it'll no longer be a matter of trust. There will come a point where the only option is to disbelieve everything wholesale.

  • @AboveTheNoise · 5 years ago

    @@SupLuiKir That may be the inevitable conclusion to this story -- but we really do believe that there is a difference between losing faith in all representations of reality and developing a healthy skepticism/critical eye.

  • @SupLuiKir · 5 years ago

    @@AboveTheNoise losing faith in "representations" of reality, in other words, losing faith in video evidence, is not the same thing as losing faith in reality itself. I made allusions to absurd philosophical stances that purport that reality isn't real because I thought it was funny to do so, considering the topic. Excuse me.

  • @AboveTheNoise · 5 years ago

    @@SupLuiKir fair enough! We really do appreciate your comments!

  • @AlessandroRodriguez · 5 years ago

    5:22 "...Journalists finding multiple sources...." You made me laugh so hard with that one....

  • @user-vn7ce5ig1z · 5 years ago

    Yeah, Myles' naïveté is adorable, stuck in the previous decade when journalism still had integrity. I envy him that. :-\

  • @AboveTheNoise · 5 years ago

    @@user-vn7ce5ig1z wish you could be a fly on the wall during our editorial meetings, when Myles and the show writers discuss, debate and analyze a pretty impressively diverse range of sources for every episode. We go to pretty extreme lengths to analyze different perspectives and include them in our videos (as long as they rely on solid, scientific evidence).

  • @harikirankante883 · 3 years ago

    Gabriel sent me here =)

  • @jd3092003 · 5 years ago

    I can tell the fake ones... it's not perfect yet... the texture and lighting aren't perfect yet... you can still see quite a bit of pixelation... but it will get a whole lot better...

  • @AboveTheNoise · 5 years ago

    It's true...there are still some tell-tale signs of deepfakes right now...but the tech is evolving quickly!

  • @user-vn7ce5ig1z · 5 years ago

    Plus, it depends on the generator, some are more equal than others.

  • @TheTwick · 4 years ago

    Jabril sent me over [my AI wants me to replace Jabril with Gabriel! I don’t think he’d like that.] ;-) I know like and subscribe (and ring that bell)

  • @AboveTheNoise · 4 years ago

    Welcome!! Thanks for checking us out!

  • @ZOOTSUITBEATNICK1 · 5 years ago

    imo There used to be a tv game show called "Who Do You Trust?". The correct answer is nobody that you don't know well. imo

  • @AboveTheNoise · 5 years ago

    It does seem like today's media culture requires a healthy dose of skepticism.

  • @NotHPotter · 5 years ago

    Honestly, I don't think this has ever not been the case. At some point, though, society seems to have collectively decided that "I read it on the internet so it must be true" became the new standard.

  • @ZOOTSUITBEATNICK1 · 5 years ago

    @@AboveTheNoise Seems that way to me. Has for a long time.

  • @ZOOTSUITBEATNICK1 · 5 years ago

    @@NotHPotter No big argument from me.

  • @TSquared2001 · 3 years ago

    Interesting

  • @nixdorfbrazil · 5 years ago

    Already old. You can make videos out of a single portrait now, like the Mona Lisa and others that are quite convincing. They are making videos of historical figures and, you guessed it....

  • @AboveTheNoise · 5 years ago

    Yeah, we know there is no way our video will stay current about the state of deepfakes tech for long! Crazy how fast it's getting more sophisticated.

  • @pvsrod · 1 year ago

    To be fair, I don't believe the Mona Lisa qualifies as a deepfake, since the only point of reference is one painting, which many people believe is not even of a real person but a product of Da Vinci's imagination. So how can it be convincing based on just one painting from over 500 years ago? The same applies to many historical figures; it actually seems easier to base it on just a few images, since you can randomly generate the details you don't have, and there is no way of verifying that the generated AI would actually fool someone who knew that person. Deepfakes in real time, now that is going to become a problem.

  • @codertommy6883 · 4 years ago

    jabril sent me

  • @AboveTheNoise · 4 years ago

    Glad to hear it! Welcome to our channel!

  • @oafkad · 5 years ago

    This is something I actually want to make a video about sometime. I was thinking about it recently and in my opinion deep fakes, and fake news, are really only as scary as the people they are used on. You might think "Don't you mean the people making them?" But no. If you are not looking for a win they have no real influence on you. If your life isn't predicated on the need to be right, the need to show your "enemies" just how virtuous you are, these sort of tools don't really work. Because you end up taking time to learn about something, confirming whether or not data is accurate. That time spent not knee jerking is what cripples these tools. Lies and propaganda work when the people being given them want them to be true.

  • @AboveTheNoise · 5 years ago

    You raise an interesting point. Thanks for watching!

  • @oafkad · 5 years ago

    @@AboveTheNoise Always. Love the show!

  • @AboveTheNoise · 5 years ago

    @@oafkad Thank you!

  • @khatunatabatadze5661 · 3 months ago

    No to big fakes!

  • @dileeloo506 · 4 years ago

    Ehh, I was watching that live and that book was upside down 🙄 BS video this is..

  • @legitimatelucy · 11 months ago

    This guy is a deep fake 😂

  • @thomasstanford9451 · 5 years ago

    Yawn

  • @user-eu6tk2vn1v · 6 months ago

    Jabril sent me