It was fine while artificial intelligence was just driving a few cars around or sorting out your spam folder. It’s been useful having a grammar tool that prompts us to correct errors and typos during a busy day. Picture correction, summaries, transcriptions – just great. But then came the rise of deep fakes, language models were declared ‘too powerful’ to let loose, and the stealthy shift into a brave new world began.
And here we are today - positively drowning in artificial intelligence this, artificial intelligence that. Surely it’s not that big a deal? Well, it is. For all of us, in every aspect of our lives. And while the ramifications of generative artificial intelligence will be felt by everyone, my focus today is the second possible future for public relations – and this future is powered by AI.
We’ve been discussing the potential impact of generative artificial intelligence for nearly a decade now - I’ve lost count of the number of webinars, presentations and training sessions I’ve run on the topic since 2013 - but it is only since November last year, when OpenAI released ChatGPT, that practitioners started to really consider the implications of this new technological union - for practice, for organisations and for society. For better and for worse.
Despite the explosion of interest, coping with the speed of change and the constant consequences of each new application is hard. This week has seen the release of ChatGPT’s Code Interpreter, which puts the data analysis we were talking about yesterday into the ‘really easy-peasy’ basket. On the back of that comes today’s news that billionaire Elon Musk has launched xAI so - it’s been reported - he can ‘understand the true nature of the universe’.
Anyway - I digress. What does all this mean specifically for public relations? For future practice? Essentially, it’s a game changer - but not necessarily a nice one. What we’ve seen since ChatGPT was released into the wild is the greatest shift since the advent of the internet. The main task ahead for practitioners is helping our organisations navigate this shift, minimise division, guard against misinformation, and train and guide policy, people and prompts. Asking the right questions - an existing superpower - becomes more important than ever.
Practitioners need to understand what they are dealing with - the difference between AI types and models and their application. There’s a good book by Field Cady - The Data Science Handbook - which I’d recommend, along with his ‘Data Science: The Executive Summary – A Technical Book for Non-Technical Professionals’, which you might find useful. You can find more information here - https://field-cady.github.io/homepage/
Interestingly, there is common ground between data professionals and public relations professionals - both disciplines set out to frame the problem by asking the right questions and it is this common ground where I believe the AI powered public relations future lies. There is no going back from this (as long as the electricity stays on) as artificial intelligence is embedded in our lives – and practice – from this point on. This particular future - as they say – is now.
Last night I was interviewed about the impact of AI on public relations and found myself frustrated because the questions were centred on task replacement, and reflected an approach to practice that aligns public relations solely with content generation. If that approach to practice prevails then, quite frankly, the role is redundant. Although everything generated by AI needs to be checked, reviewed and the source declared, there will be organisations out there that will replace their ‘content creators’ without a second thought. These organisations won’t consider the long-term impact on their reputation or the disengagement and splintered relationships that will follow. Generative AI can produce every type of ‘content’ you can think of - probably faster than you can think it - so if the approach to public relations is centred on outputs, on ‘sending stuff out’, there is no future practice to consider.
If, on the other hand, the relationship is at the heart of practice then generative AI becomes a tool that assists with analysis, insights, planning, crisis and issues management. We frame the problem, ask the right questions, develop and implement strategies that improve relationship and societal outcomes.
We must remember that you can’t trust generative AI. I’ve nicknamed ChatGPT ‘Dobby’, after the house-elf in the Harry Potter books. Every day Dobby produces inaccurate and fictional information which, when questioned for accuracy, is followed by long and profuse apologies with elaborate back stories for its fictions and hallucinations. Often, because it is working on probability, it appears to be something of a people pleaser - which we know is not a good operational model.
As a society, global or local, we can't function if we don't trust each other, and the ways in which those bonds of trust are formed have shifted and changed. Trust is central to the work we do - building and sustaining the relationships to maintain our licence to operate - and without trust, leadership is virtually impossible. Trust connects us, binds us together and allows us to move forward while an absence of trust leaves a vacuum filled by fear and suspicion. As people, we do better together than we do apart but without trust, we become less able, less likely to connect.
How does generative AI undermine trust? Or affect reputations? On the positive side, we have task speed and a ‘massive brain’ at our disposal but on the negative - well, here’s Dobby’s – sorry, ChatGPT’s - explanation: “If not used properly, the use of ChatGPT can raise concerns about the authenticity and credibility of the information provided by organisations. If an organisation's use of ChatGPT becomes known, it may raise questions about whether the content is truly original or if it's just generated by a model. This could lead to mistrust and scepticism among customers and other stakeholders, which can negatively impact the organisation's reputation. Additionally, if organisations use the model to generate inappropriate or offensive content, it could lead to negative publicity and legal trouble. It is important for professionals to understand the capabilities and limitations of large language models like this, and how to use them responsibly.”
There you have it - the onus is on us to develop our skills and expertise if we are to remain relevant. But there's more - much more. It is imperative that we consider the ethics of application, what that means for our stakeholders, our communities of interest and for society - so roll up your sleeves and get to grips with algorithmic ethics.
AI is only as good as the data it's trained on - accurate or inaccurate, biased or unbiased, well informed or misinformed, that's what we will be served. The greatest danger to our role within organisations is being regarded only as tactical implementers rather than as the strategists and relationship builders necessary to maintain the licence to operate. The greatest dangers to society include misinformation, deep fakes, fabrication, fractured social cohesion and the digital divide becoming a chasm.
Rolling out language models without ethical consideration is irresponsible, and the potential for great harm to communities, individuals and organisations is significant. The digital environment is now a place of great contradictions where we can - as always - have amazing or terrible things, depending on the choices we make, the values we hold, and the underlying intent.
I've always said the greatest competency for a practitioner is courage - the courage to speak out, to listen, to advocate, to evaluate, and - pardon the cliche - to speak truth to power. Perhaps we can add to that spotting untruths in power. If that role is neglected or ignored, then we really will be replaced by ChatGPT and its successors faster than Dobby could iron his hands.
As organisations seemingly move into a new golden age of purpose (having forgotten all about it for a few decades) and attempt to align their purpose and values as we enter this new era, perhaps a good starting point would be to clean up their act online and be mindful of good outcomes as they undergo technological transformation. Ensure that they use data responsibly and redraw digital terms of engagement for benefit and without dark patterns. Enabling trust online isn't about shiny new tech, artistic artificial intelligence or a great serve from search. It is, as it's always been, about the way we behave, the choices we make, and a genuine desire to build a fair and equitable society. Enabling trust online and offline is up to us. That’s our space in an AI powered future and, as we’ve observed, that future is now.
Hope to see you tomorrow for the third future – and it’s all about you.
It’s been a tragic start to the year. In just a few weeks we’ve seen devastating natural disasters around the world, killing thousands and affecting millions of people. Crisis has rolled into crisis and here in New Zealand we’ve just weathered our third major climate event since the year began with Cyclone Gabrielle taking lives and destroying homes and businesses. We’ve also seen some very different styles and approaches to leadership communication and this prompted one of the sections in this month’s PR Horizons update which looks at some of the highs and lows of crisis communication witnessed through each event and includes some tips for dealing with difficult leaders.
While crises have dominated collective thinking and focus - and rightly so - there have been other events on the horizon, including rapid developments in artificial intelligence, particularly with regard to search and two of the internet giants going head to head. The tussle for control between Microsoft and Google is discussed, along with implications of evolving AI for practitioners plus we touch on the business of trust, misinformation and rumour.
For a short time, the February briefing is free access for subscribers and you can take a look here.
I hope it gives you some food for thought and a chance to reflect on some of the issues and horizons we need to watch this month.
Heading into Christmas, there was an early 'gift' from OpenAI this month with the release of a new model, ChatGPT. You can read their blog post here - it gives you the detail on how it works, what it does and how it can be used. It is extraordinarily clever and a powerful tool that can be used in a myriad of ways - but, as my motto remains - just because we can, doesn't mean we should. I've covered the ethics of AI and its application many times over the years and as the capability of tools such as ChatGPT increases, so too do the ethical challenges. The question is, will ChatGPT be on Santa's naughty or nice list this Christmas? Will people opt for the lazy route of generating content (which will mean a world where algorithms chat to each other but we don't) or will practitioners realise they are at a point of evolution for the profession where they can be eliminated by the tactical or elevated by wise counsel? The jury will be out on that for a while - meantime, Happy Christmas. ChatGPT had this to say on the season when I asked it to describe and explain Christmas (which it did in less than a second):
"Christmas is a holiday that commemorates the birth of Jesus Christ. It is typically celebrated on December 25th by Christians around the world and is considered to be one of the most important holidays in the Christian faith. The holiday is often marked by religious services, the exchange of gifts, and the decoration of homes and churches with festive decorations, such as Christmas trees, lights, and mistletoe. Many people also celebrate Christmas by preparing special meals and gathering with friends and family. The holiday has also been adopted by many non-religious people as a cultural celebration, and is often associated with secular traditions such as Santa Claus, gift-giving, and the singing of Christmas carols."
However you celebrate - enjoy the season.
About Think Forward
Think Forward is written by Catherine Arrow. It answers PR questions, highlights practice trends - good and bad - and suggests ways forward for professional public relations and communication practitioners.