Faking it – clear and present danger

It’s easy to fake. You might even give it a go. It will be used to change hearts, minds, lives and liberty – and not for the better. Deepfake video is here – and it’s here to stay.

Last week’s widely shared ‘deepfake’ video of Mark Zuckerberg made global mainstream media headlines. It followed instances in which others had words put into their mouths and misleading images of them circulated across the web. The Zuckerberg video was, according to one of its creators, the artist Bill Posters, made as an artwork and released on his bill_posters_uk Instagram account as part of a series of AI-generated video works. It was created, he explained, for Spectre, ‘an immersive exploration of the digital influence industry, technology and democracy’. After the uproar, the video was flagged as ‘misinformation’ by Facebook and Instagram, a label that de-prioritises it in newsfeeds and searches. The irony is that the Brandalism Project, run by the video’s creators, sets out to question the power held by the tech giants and the influence of the technologies and data that shape our understanding of the world around us.

It is hard enough for people to discern truth from lies when it comes to the written word, especially when lies are regularly propagated as truth by some. It is doubly hard for people to comprehend that the training data used to develop machine learning models is itself frequently riddled with bias that spills into live AI applications. Ready access to the technology that creates deepfake videos means discernment will be messy, perhaps even impossible, when we are faced with the many faces of fake. Unlike the Brandalism Project artworks, deepfake videos will not be neatly labelled as ‘creations’ and helpfully marked as fictitious. They will be made by all manner of people, driven by all manner of intent. The words spoken by those we see in the frame (or the way in which those words are delivered) may contradict our knowledge of the person involved. But there they’ll be, up close and personal, spouting words that perhaps incite hatred, confirm bias, undermine communities or organisations and, ultimately, destroy any remaining trust individuals and their communities have in the systems that are supposed to serve them.

And what will happen if the person on screen really is dripping vitriol and hatred in a genuine, unaltered video? Will their position simply be dismissed as ‘faked’ later on? Elections are on the horizon in many places next year and, as things stand, it is unlikely that 2020 will produce clarity of vision for anyone. Instead we will have to wade through mucky waters as deepfake videos flood societal consciousness. It is a serious and deeply disturbing concern.

This use of technology also presents a whole new challenge for the public relations and communication practitioners involved in reputation guardianship. I question whether the majority of those working in this area are even aware of the dangers the technology poses, not just to their organisations but to the people and communities they serve. Samsung technology revealed last month can produce deepfake videos of the Mona Lisa – or of your profile picture or, indeed, any still image. Given the predilection for shaming and outrage that has taken hold on the web over the last three years, anyone could be at risk from a stalker, troll or disgruntled critic. If we thought identity theft was a problem today, how will we cope when we see ourselves animated and voicing opinions that are the antithesis of our values and beliefs?

If we can’t believe what we see – or worse, if we unquestioningly decide to believe what we see – trust is dead. Not just in the media context in which the untruths and fakes are served to us, but in society itself. An uncivil war sparked by make-believe and manipulation, fuelled by the power-hungry to the detriment of most.

My question to those charged with guarding reputations is simple – what’s your plan? What are you doing now, today, to meet the very specific challenge posed by those who have developed this artificial intelligence capability? What are the professional bodies for public relations and communications doing to address this clear and present danger? How are our ethical codes being updated to ensure good, transparent behaviours? What are our university courses and professional development sessions doing to equip future practitioners? Looking at current offerings around the world, I’d suggest nobody (in any field) is doing enough – and playing catch-up isn’t an option.

Photo by Christian Gertenbach on Unsplash