AI and the tightrope of truth
In this month's PR Horizons I explore AI and the tightrope of truth that organisations have started to walk in the last few weeks. We'll look at how political parties have begun their dance of deception using generative AI, and at coming legislation designed to rein in some of the effects and impacts of artificial intelligence. I touch on this month's announcements from the major players as they race to get ahead, and I've also got some questions for you to consider concerning your own and your organisation's AI journey.
Everything and everyone is not quite as they seem this month - one day the Titans are warning how dangerous it all is, the next they're parading their latest shiny new toy. One of the major problems is the pace of change, and the inability of us poor old humans to keep up with the implications of generative AI and other machine learning applications as they are released into the wild. The great content churn is well underway and we're seeing more frequent examples of misinformation disguised as productivity - AI applications make stuff up and it's shared in volume. The companies behind it all are heroes one day and villains the next, and this month we saw the emergence of models trained on the Dark Web. You can access the full briefing here and find out more about the ways in which political parties have already begun to use generative AI to produce images that mislead.
We're in a bad romance
This month's PR Horizons looks at the tortured romance between AI and humans, confirms why, in the words of New Zealand's former Police Commissioner Mike Bush, 'minutes matter' when crisis hits, and finally, why you need to be thinking about sharpening up your foresight to develop the situational intelligence we'll need for what lies ahead. April's AI developments have had me humming Lady Gaga's 'Bad Romance' pretty much the whole month. Things have got tangled and twisty, and the complexity, the number of ethical monsters emerging from the AI closet, and the bad behaviour and misuse have continued to increase at pace. Yet the global infatuation has strengthened to the point of obsession. The month started with a report from the US and Europe on steps towards regulating the AI-sphere. Europe has had legislation in the pipeline for the last couple of years, but lawmakers have been at loggerheads as to how to adapt, change or amend the proposals to stem the flood of generative AI and its application. You can access the briefing here - the format is podcast with pictures - and I'd recommend checking out the tips for improving your foresight.
AI Genie is out of the bottle
The PR Horizons March briefing looks at what's next now the AI Genie is out of the bottle and into mainstream use, how great measurement and evaluation can support public sector communicators (and everyone in public relations and communication), and explores the most powerful question you can ask - and why you should be asking it all the time. You can access the briefing here.
How to use AI in PR and use it well
Join me - and newly created colleague Alison - to learn how to use AI in PR and how to use it well. You can register here - https://bit.ly/How-to-useAIinPR. Look forward to seeing you then.
It’s been a tragic start to the year. In just a few weeks we’ve seen devastating natural disasters around the world, killing thousands and affecting millions of people. Crisis has rolled into crisis and here in New Zealand we’ve just weathered our third major climate event since the year began with Cyclone Gabrielle taking lives and destroying homes and businesses. We’ve also seen some very different styles and approaches to leadership communication and this prompted one of the sections in this month’s PR Horizons update which looks at some of the highs and lows of crisis communication witnessed through each event and includes some tips for dealing with difficult leaders.
While crises have dominated collective thinking and focus - and rightly so - there have been other events on the horizon, including rapid developments in artificial intelligence, particularly with regard to search and two of the internet giants going head to head. The tussle for control between Microsoft and Google is discussed, along with implications of evolving AI for practitioners plus we touch on the business of trust, misinformation and rumour.
For a short time, the February briefing is free access for subscribers and you can take a look here.
I hope it gives you some food for thought and a chance to reflect on some of the issues and horizons we need to watch this month.
PR Horizons - what's new and next?
Major practice shifts have arrived and PR Horizons, the first in our 2023 briefing series and available from today, is designed to give you the insights you need into what's ahead for public relations and communication.
Join us - free - to explore smart, powerful language processing tools, including ChatGPT from OpenAI, as we face the biggest changes in decades. The session includes insights into the challenge of AI, addresses the ethical and social considerations ahead, and explains why AI has come for your job - and what to do about it.
You can access the course here - https://bit.ly/PRHorizons23. If you would also like to be a founding member of our new learning community - https://prknowledgehub.podia.com/pr-knowledge-hub-community - send an email to learn-at-prknowledgehub-dot-com and let us know why you would like an invitation to join.
cyclone of change blows in 2023
Well the start of 2023 has certainly been a blast here in New Zealand with Cyclone Hale wreaking havoc around the coast and adding to the woes of what has been billed the worst summer in nearly twenty years. As I write, people on the East Coast are still without power, the road network needs massive reinstatement and it has been - literally - a washout of a summer.
Personally, this local example of climate change has moved me off the beach - which is where I had hoped to be - and into a cosy chair for my brief holiday break, distracted with some reading (once we got power and services back on and dealt with the waterlogging).
Professionally, most conversations have been concerned with ChatGPT, its power, ability and general cleverness. The big 'miss' has been the conversation we need to have around ethical application. That's the first chat on the agenda. The second is the impact it will have on our profession. There's no doubt that it has come for our jobs - but only if public relations and communication professionals fail to elevate their undertaking from the tactical to the strategic, and only if organisations fail to realise that the automation of content may look easy and save them a few bucks (for now) but, without context or direction, will lead to their demise, huge reputational damage and a disconnection from their stakeholders and communities.
I've spent the last week or so whipping up a professional development session that I hope will help practitioners understand not just what is on the horizon but what is knocking at the door ready to eat their lunch. I'll post the links here next week when (hopefully) it will be ready to go, and I'm making it free access because I believe it is essential to encourage practitioners to actively consider and understand what lies ahead for their organisations and themselves. Meanwhile, I suggest you read this year's edition of the World Economic Forum's Global Risk Report (gloomy but essential New Year reading) and determine how you'll be making a difference in the months ahead - and what you need to learn to help you make that difference.
ChatGPT - naughty or nice?
Heading into Christmas, there was an early 'gift' from OpenAI this month with the release of a new model, ChatGPT. You can read their blog post here and it gives you the detail on how it works, what it does and how it can be used. It is extraordinarily clever and a powerful tool that can be used in a myriad of ways - but, as my motto remains - just because we can, doesn't mean we should. I've covered the ethics of AI and its application many times over the years and as the capability of tools such as ChatGPT increases, so too do the ethical challenges. Question is, will ChatGPT be on Santa's naughty or nice list this Christmas? Will people opt for the lazy route of generating content (which will mean a world where algorithms chat to each other but we don't) or will practitioners realise they are at a point of evolution for the profession where they can be eliminated by the tactical or elevated by wise counsel? Jury will be out on that for a while - meantime, Happy Christmas - ChatGPT had this to say on the season when I asked it to describe and explain Christmas (which it did in less than a second):
"Christmas is a holiday that commemorates the birth of Jesus Christ. It is typically celebrated on December 25th by Christians around the world and is considered to be one of the most important holidays in the Christian faith. The holiday is often marked by religious services, the exchange of gifts, and the decoration of homes and churches with festive decorations, such as Christmas trees, lights, and mistletoe. Many people also celebrate Christmas by preparing special meals and gathering with friends and family. The holiday has also been adopted by many non-religious people as a cultural celebration, and is often associated with secular traditions such as Santa Claus, gift-giving, and the singing of Christmas carols."
However you celebrate - enjoy the season.
Enabling trust online
It was my privilege this month to speak at the International Public Relations Summit in Bali and I thought it might be useful to share some of my presentation here - particularly the thoughts on DALL-E, the reliability of data and the bias inherent in AI. Text and some of the images below - I hope it gives you some food for thought.
Kia ora, good morning everyone and thank you Ibu Elizabeth for your kind introduction. I would like to take this opportunity to thank you for all the work you have done to elevate the profession over many years, particularly through this event. I am very grateful to be here and to have the opportunity to speak with you today.
Before I begin I would like to offer my condolences for the devastating loss of life caused by the earthquake this week. My heart goes out to all those who have lost loved ones, to those still searching for friends and family and those involved with the rescue and recovery work. It is pertinent that we are going to be discussing trust this morning as the enormous effort to help those so deeply affected by this disaster highlights that we can still trust the goodness, bravery and care of our fellow humans.
As a society - global or local - we can’t function if we don’t trust each other, and the ways in which those bonds of trust are formed have shifted and changed. As we know, trust is central to the work we do, which is building and sustaining the relationships we need to maintain our licence to operate - the permission we are given to do the things we do. Trust is one of the agreed and measurable components of our relationships - the others being satisfaction, commitment, loyalty and mutuality - and I add reputation, as a reputation, good or bad, will either help or hinder the start of a relationship - or hasten its demise.
So I thought a really good place to start would be with some sheep. As you may know, I am coming to you from New Zealand where we have our fair share of sheep, scenery and sparkling seas. These sheep I’m going to introduce you to are not any old sheep - because they pose the question ‘are they trustworthy sheep’? You see, they were created by DALL-E - an artificial intelligence system that creates art and images from text. I simply told the AI engine what I wanted to see and it presented me with options. Pretty cute, eh - a couple of sheep enjoying the sea view underneath a rainbow. Where’s the harm? The next ‘image’ I requested was of public relations professionals at work - and I got these.
But my final request was for public relations leaders at work - and look what DALL-E served me. Images based on the data provided and loaded with bias.
The pandemic accelerated digital transformation in organisations, bringing much automation and reliance on data. Learning engines have been deployed across sectors - everything from DALL-E-style image generation to recruitment - but with deployment comes bias, burrowing its way into systems and data, because the AI we get is only as good as the data on which it is based. If that data is preloaded with bias, the actions undertaken as a result will not be trustworthy - or accurate. This proves something of a quandary for our leaders, as they will be guiding us based on inaccurate data capable of creating stereotypes with the potential for societal harm.
Back in 2002 I had high hopes for digital engagement - blogs were the primary form of online connection (along with forums, message boards and other tech now considered archaic). The embryonic online world removed the filters and barriers to communication and allowed us to connect directly to our communities, our customers and stakeholders. Leap forward to 2012, when Elizabeth began this series of events, and leaders had realised the power of digital engagement, with authenticity and trustworthiness online approaching their peak. We moved beyond direct communication, binding our lives to the cloud, using it to meet many of our needs and wants. But, from 2015, the souring of tone, the bullying behaviour and the exploitation of others began to accelerate. We saw the trolls come out in force, we saw bad actors manipulate data for profit and gain, we saw political candidates launch an onslaught of hateful speech, unleash the curse of misinformation and open the doors wide to conspiracy theorists. Some have deliberately exploited the loopholes and shortcomings of social media, reducing trust to rubble and inspiring their followers to take to the streets - the digital environment is now a place of great contradictions where we are capable of amazing or terrible things, depending on the choices we make and the values we hold. Digital transformation accelerated in 2020 with the arrival of COVID-19 - a Deloitte CEO study reported that 77% of them had pushed forward digital initiatives as a result of the pandemic. We became used to connecting like this and had to learn to trust the strength of our connections - both literal and figurative.
Today, as people seek to connect with trustworthy information, organisations and people, the online environment is once more fragmenting - this time into closed communities of interest, run on platforms like Substack or community platforms like Guild. Research into trust is regularly undertaken - indeed Adrian has taken us through some of Edelman’s trust findings this morning - and they are not alone. Ongoing monitoring of trust by researchers at Our World in Data demonstrates the correlation between levels of interpersonal trust - how much we trust each other - and other areas of trust, such as in government and media. The levels we saw in 2020 are probably very different in 2022, but one constant result is that countries with high levels of interpersonal trust were also more stable and safer, and civil discourse was still - well, civil.
Sadly, online engagement has undermined societal trust significantly in the last decade, and this distrust has been thrown into sharp relief in the last two years as we have navigated the pandemic. The ‘infodemic’ that resulted from COVID-19 and its management created deep cracks in social cohesion, even in those countries with previously high levels of civil discourse, resulting in unrest and violence. Ongoing developments on the many social networks have worsened the situation - the Elon Musk takeover of Twitter does nothing to increase hope for change, and the Facebook controversies concerning privacy, accuracy, bias, hate speech and other topics fan the flames of division.
So what is our part in all this? As public relations and communication professionals we have an ethical duty of care to ensure that data used by our organisation is clean, accurate and unbiased. We have a duty of care to equip our leaders with the tools they need to speak and act honestly and transparently in the digital environment and we have a duty of care to help our organisations, our stakeholders and our communities of interest to navigate the turbulent waters that await them each time they are online - which for most people, is most of their time.
This may seem a little bleak when my topic is enabling trust in the digital environment but we have to understand where trust is being disabled in order for us to create or encourage circumstances where trust can take root and thrive.
The people who operate the platforms and networks we use have a great responsibility, but we have traded our privacy - and in some cases civility - for access to the platforms. As has been said so often, the product peddled by the networks is us. But the new, evolved product isn’t simply ‘us’, it is our attention, it is our emotion, it is our state of mind, gripped and manipulated by algorithms that seem to know us better than we know ourselves. Algorithms that are tweaked and tinkered with so they serve us controversy that prompts us to engage, allowing our attention to be gobbled up by advertisers. Given this known manipulation, should our organisations follow the lead of those who ran a Facebook advertising boycott? Withdrawing ad revenues may not seem too great an action for these commercial companies, but it certainly has a significant impact on reputation and stock prices - and, hopefully, encourages a moderation of operations.
As organisations seemingly move into a new ‘golden age’ of purpose - having forgotten all about it for a few decades - and attempt to align their purpose and values, perhaps a good starting point from which to demonstrate this renewed dedication is to clean up their act online and stop using technology for dubious purposes. For example, there’s a row here this morning concerning a supermarket chain that has deployed facial recognition technology in a bid to combat shoplifters, but there has been a lack of transparency about that deployment, the use of data and the long-term consequences of the data acquisition - and quite rightly they have been called out for this behaviour.
Instead of capturing faces how about building trust through communities instead - we have the tech to do this. How about we use data responsibly and redraw our digital terms of engagement? Because enabling trust online isn’t about shiny new tech, artistic artificial intelligence or great reviews on Google - it is, as it has always been, about the way we behave, the choices we make and the genuine desire to build a fair and equitable society. Enabling trust online is up to us - let’s discuss how we can make a start today.
political horror show haunts UK
What a horror show. Astounding to watch the chaos unfold in the UK in recent weeks and the ruinous effect that total incompetence has had on the population. Millions will be forced into hardship yet 'the government' does little besides prop up its tentative hold on power and stroke the egos of the Westminster power cliques. You really couldn't make it up.
About Think Forward
Think Forward is written by Catherine Arrow. It answers PR questions, highlights practice trends - good and bad - and suggests ways forward for professional public relations and communication practitioners.