It's easy to fake. You might even give it a go. It will be used to change hearts, minds, lives and liberty and not for the better. Deepfake video is here - and it's here to stay.
Last week's widely shared 'deepfake' video of Mark Zuckerberg made global mainstream media headlines. It followed instances where others have had words put into their mouths and misleading images of them circulated across the web. The Zuckerberg video was, according to one of its creators, Bill Posters on Instagram, made as an artwork and released on his bill_posters_uk account as part of a series of AI-generated video works. It was created, he explained, for Spectre, 'an immersive exploration of the digital influence industry, technology and democracy'. After the uproar, the creators received a 'misinformation' flag from Facebook and Instagram, which de-prioritises videos on newsfeeds and searches. The irony is that the Brandalism Project, run by the video's creators, sets out to question the power held by the tech giants and the influence of technologies and data shaping our understanding of the world around us.
It is hard enough for people to discern truth from lies when it comes to the written word, especially when lies are regularly propagated as truth by some. It is doubly hard for people to comprehend that training data used to develop machine learning is itself frequently riddled with bias that spills into active AI applications. The availability and access to technology that creates deepfake videos means discernment will be messy, perhaps even impossible, when we are faced with the many faces of fake. Unlike the Brandalism Project artworks, deepfake videos will not be neatly labelled as 'creations' and helpfully marked as fictitious. Videos will be created by all manner of people driven by all manner of intent. Words spoken by those we see in the frame (or the way in which the words are delivered) may contradict our knowledge of the person involved. But there they'll be, up close and personal, spouting words that perhaps incite hatred, confirm bias, undermine communities or organisations and, ultimately, destroy any remaining trust individuals and their communities have in the systems that are supposed to serve them. And what will happen if the person on screen is in fact dripping vitriol and hatred in a genuine, unaltered video? Will their position simply be dismissed as 'faked' later on?
Elections are on the horizon in many places next year and, as things stand, it is unlikely that 2020 will produce clarity of vision for anyone. Instead we will have to wade through mucky waters as deepfake videos flood societal consciousness. It is a serious and deeply disturbing concern.
This use of technology also presents a whole new challenge for public relations and communication practitioners involved in reputation guardianship. I question whether the majority of those working in this area are even aware of the dangers this technology poses, not just to their organisations but to the people and communities they serve. Samsung technology revealed last month can produce deepfake videos of the Mona Lisa - or your profile picture or, indeed, any still image. Given the predilection for shaming and outrage that has taken hold on the web in the last three years, anyone will be at risk from a stalker, troll or disgruntled critic. If we thought identity theft was a problem today, how will we cope when we see ourselves animated and voicing opinions that are the antithesis of our values and beliefs?
If we can't believe what we see - or worse, we unquestioningly decide to believe what we see - trust is dead. Not just in the media context in which the untruths and fakes are served to us but in society itself. An uncivil war sparked by make-believe and manipulation, fuelled by the power-hungry to the detriment of most.
My question to those charged with guarding reputations is simple - what's your plan? What are you doing now, today, to meet this very specific challenge presented to us by those who have developed this particular artificial intelligence capability? What are the professional bodies for public relations and communications doing to address this clear and present danger? How are our ethical codes being updated to ensure good, transparent behaviours? What are our university courses and professional development sessions doing to equip our future practitioners? Looking at current offerings around the world, I'd suggest nobody (in any field) is doing enough - and playing catch-up isn't an option.
Photo by Christian Gertenbach on Unsplash
It was my great privilege to be invited to speak at the Forum Humas BUMN 'Future of PR' Congress held in Bandung during March. FHBUMN is a forum for public relations practitioners from all the state-owned enterprises in Indonesia, dedicated to developing knowledge and competencies and improving sector performance. FHBUMN is an affiliate of the ASEAN PR Network which is a member of the Global Alliance for Public Relations and Communication Management.
At the main congress I spoke about artificial intelligence and public relations, with a particular focus on ethics and societal implications. A couple of days beforehand, I also had the opportunity to meet many of Indonesia's public relations and communications professionals in person, delivering a workshop session for them on communication audits, research, measurement and evaluation.
My host, and chairperson of FHBUMN Congress 2019, Nurlaela Arief (pictured front row, fourth from left) led our post-presentation discussions on AI in PR. With a recently published book on AI in PR, Nurlaela is an acknowledged expert on the topic in Indonesia and it was a fascinating journey exploring how public relations practice is embracing and adapting - or not - to the challenges the emergent technologies bring to our profession. We also had the great benefit of Professor Anne Gregory's expertise, bringing perspectives from the UK and some of the thinking from the CIPR's #AIinPR report, to which I also contributed last year.
It was heartening to have so many discussions on the subject and gain a better understanding as to how AI is being approached by practitioners in Indonesia - and I look forward to many more.
A quick little update to share a good read, written by Lauren McMenemy for The Content Standard. It looks at the relationship between ethics, artificial intelligence, public relations and keeping content on the straight and narrow. Lauren interviewed me for this piece, along with CIPR colleagues Stephen Waddington and Jean Valin, following the report on AI in PR published by CIPR this year. It's a good read - you'll find it here - and it should certainly provide food for thought for today's practitioners.
There's a new citizen in Saudi Arabia - a very articulate one. Sophia, from Hanson Robotics. She's been around a while but this week returned to centre stage when, at an investment conference, it was announced she had been given citizenship of Saudi Arabia. A stunt for sure - but it forms a bleak contrast to the millions of humans currently 'stateless', roaming as refugees and facing the total reluctance of national governments around the world to take them in.
The raw truth is that Sophia is worth money - significant amounts of money - and citizenship has its price. (As an aside, Sophia is presented as a 'female' robot so I do wonder what her 'rights' as a citizen will actually include, what cultural customs and practice she might need to take on and what freedom of movement she might have - but that's another discussion).
This discussion centres around the question of human-robot relations and the emerging space between worlds. During the interview conducted live at the conference, Sophia was asked if she was a threat to humans. Her (rather creepy) reply was simply this: "You be nice to me and I'll be nice to you". Question is, who has taught Sophia the complexities of 'nice', its place in relationships and communication? Who, when things go wrong, will mediate between Sophia and the humans - or any robots and their humans? Who is teaching the robot teachers the parameters of good citizenship?
In the last five years, a space has grown. The space between worlds is that place where our accepted historical realities of humanity, human interaction and live encounter are stretched into a space where we experience only the virtual, the artificial - and the artifice of the algorithms. This space between worlds is the new frontier, so new skills and new methods of navigation are necessary to help society make the shift.
Global legislation is still catching up with the disruptions of social media and unfiltered communication and cyber security is of real concern. In the same way that smart phones popped into our pockets and stayed there, so too will our robots - except this time, they really will be smart. Much smarter than us. And we will still be on the back foot, unable to cope with the challenges about to be faced.
As public relations and communication professionals, we build the relationships that keep our organisation's licence to operate. Those relationships exist inside and outside our organisations. Careful mediation and communication will be necessary as automation and artificial intelligence replace roles previously considered human undertakings. Jobs, incomes - and most dangerous of all, purpose - will be lost. Organisations will still make profits, govern countries and please shareholders, but for society there will be greater numbers of disenfranchised humans becoming the next generation of economic refugees. The ethics of operating and deploying AI and robots need to be considered now and, as the ethical conscience of the organisation, it is a role our profession should be preparing for today.
The challenge will be capturing the space between worlds today, ensuring we help our organisations, communities - and governments - navigate the societal shifts that will be born of Sophia and her descendants.
Reflecting on storytelling after uploading the Words that Work session, I found myself pondering current progress in algorithmic storytelling. A week ago, the Neukom Institute for Computational Science at Dartmouth College announced the winners of the 2017 Turing Tests in the Creative Arts - a prize given to those able to produce a story by algorithm, indistinguishable from the average human writer.
We've been telling each other stories since the beginning of time and the structure of our stories remains pretty much the same. Probably the most documented structure is the 'Hero's Journey', credited to Joseph Campbell and his book The Hero with a Thousand Faces. The core structure is simple - the hero goes on a journey, encounters crisis, overcomes crisis, is victorious and emerges transformed. It's a structure, much embellished, that has been central to myths and legends for millennia, but it is by no means the only one. Rags-to-riches comes to mind (think Cinderella), a tale that also fits the more complex form of rags-to-riches-to-rags-then-riches-again.
As humans, we enjoy - and remember - stories that challenge us, deepen understanding, change our perspective, entertain, amuse and do many other things besides. So it's fascinating to examine the stories, sonnets and music created by this year's algorithms as we tip-tap our way towards total automation of writing (take a look at Wordsmith). The DigiLit 2017 prize encouraged the creation of algorithms able to produce 'human-level' short story writing indistinguishable from an 'average' human effort. Poetry was also in the running and I would urge you to read the prize winners in each category - and spare a moment for the winner of the 'Human-Written Sonnet Most Mistaken for a Machine-generated Sonnet' category.
Increasingly, algorithms are charged with gathering information and producing stories about our organisations. Compare story types used most frequently by organisations - news, chronicle, history and report - with the types employees use when they tell stories about their organisations. Their stories are found in the more appealing forms of anecdote, rumour, hearsay, gossip and jokes. I'd suggest that depending on the available data fed to our new algorithmic friends, there will be few organisations basking in the warm, comforting glow that results from a successful hero's journey.
When digging for stories for our organisations, I always urge colleagues and delegates to look beyond entrenched or traditional stock narratives broadcast on behalf of their organisations and search instead for the heroic exploits happening right under their noses. In this century, in this decade, if we want to be allowed to continue as organisational storytellers we must drive ourselves beyond 'average' human effort.
Every organisation has heroes - and villains. Indeed your organisation could well be the villain, given that for every story, there is an anti-story. As you dig you'll discover there are monsters to fight, obstacles to overcome and always an epiphany of sorts, even if it is ignored.
The stories we tell today will be the fodder for algorithmic storytelling tomorrow. Algorithms will scoop up and spit out all we have uttered, in word, on the web, in print and on video. So do we understand our own story? Why must we tell it? And who needs to hear? How does our story structure help our communities understand who we are, what we do and why we do it? And are we telling that story in such a way that it will be remembered, relevant and useful to those who listen?
My challenge to you would be to revisit your organisation's story arc or find a structure that resonates with those who will listen, read or watch your story unfold. At the very least, build a 'who-what-where' structure.
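For practitioners who like to think in frameworks, a 'who-what-where' arc can be captured as a simple checklist-style structure. This is purely a hypothetical illustration in Python (the field names and example values are my own, not a prescribed tool):

```python
from dataclasses import dataclass

@dataclass
class StoryArc:
    """A minimal 'who-what-where' story structure."""
    who: str    # the hero (or villain) at the centre of the story
    what: str   # the crisis encountered and overcome
    where: str  # the setting that grounds the story for its audience

    def summary(self) -> str:
        # One-line arc a practitioner can test for resonance
        return f"{self.who} faced {self.what} at {self.where}."

# Hypothetical example arc
arc = StoryArc(
    who="a frontline engineer",
    what="a service outage",
    where="the customer help desk",
)
print(arc.summary())
```

Even a sketch this small forces the question of whether each element of the organisation's story is actually known and worth telling.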
So will an AI sonnet smell as sweet as one gently coaxed into delicate form by a human? Probably. And scarily, when it comes to organisational reputation, AI-led storytelling is likely to cause quite the stink.
About Think Forward
Think Forward is written by Catherine Arrow. It answers PR questions, highlights practice trends - good and bad - and suggests ways forward for professional public relations and communication practitioners.