Nine humanoid robots took to the stage last week to front a media panel for the United Nations. It was an event that “aimed to connect visionaries with an array of UN organisations and investors focused on sustainable development…the UN-driven event provides an unprecedented chance to empower these cutting-edge innovators to tackle global challenges, including the 17 Sustainable Development Goals (SDGs) set out in the 2030 Agenda for Sustainable Development".
“We have to engage and ensure a responsible future with AI,” explained ITU Secretary-General Doreen Bogdan-Martin in the event media release. But as each humanoid was questioned by international journalists - each reporter riffing on the ‘will robots take over the world’ theme - I couldn’t help thinking there’s going to be a lot of work to do in the realm of human-AI relations as humanoid power and capability advance. And, in looking at our fifth future for public relations, I am even more convinced that our future direction will be the same, but also very different. We will still build the relationships necessary to maintain our licence to operate, but those relationships, along with the social licence to operate (SLO), will feature even greater complexity.
Over at the UN, the robotic line-up featured some familiar faces. Sophia, you may recall, arrived as global ambassador back in 2015. Amica, launched a month or so ago, is the world’s first robot capable of recognising and responding to human emotion. Making the panel was the rather creepy Geminoid, an android copy of his creator, Professor Hiroshi Ishiguro, who uses his doppelgänger to give lectures, cover classes and explore what it means to be human.
Not so familiar faces included Grace, an advanced health care robot companion; Desdemona, the rockstar robot ready to change the creative arts through the power of AI; Ai-da, the renowned AI artist challenging the notion of what constitutes art; and Mika, the first global robot CEO. The remaining panellist was Nadine - another humanoid replica, this time of her creator, Professor Nadia Magnenat Thalmann. Nadine’s talent is her ability to learn and remember individuals and their responses, allowing her to tailor her interactions to the people she meets.
Now I’m not going to get into the gender politics of robotics here, other than to observe that, aside from the doppelgängers, all the humanoid panellists manifested as young adult females with inferred ages of 20-30. Far more females than you’d ever get on a tech panel at a conference or, for that matter, the board of a billionaire’s company. Under-representation and pseudo-representation demand an urgent discussion of their own, but here I’ll concentrate on the intersection between humans, humanoids, sustainable development goals, social licence to operate (SLO) and, as it is World PR Day, the fifth future - and subsequent role - of public relations in all this.
We know society is in disarray - that lack of social cohesion is right there at the top of the World Economic Forum’s risk report. There are huge inequities in play. There are wars and illegal invasions. As I write, the northern hemisphere is weathering another extreme climate event. Here in the southern hemisphere, we’re still mopping up after the many extremes experienced in 2023. Food scarcity is an issue - 345 million people are hungry as you read this. Two billion do not have access to safe water. And yet, for the most part, our societies remain centred on power and profit rather than human need. At the conference, the robots said they want to change that - to build a better society for all. And yet the environmental and economic cost of the technology is astronomical. The UN wants to harness the technologies on display to improve our global situation, and there is no doubt the technologies can be used for great good. But if they are left in the hands of private enterprises whose intent is fuelled by profit and power, will that good ever be realised? So here is a clear role for us - helping organisations determine intent, and helping them develop behaviours that support relationships and which, ultimately, grant the licence to operate.
Next comes the issue of language. Always, always, always remember that generative AI (and its application to robotics) is a product of human data - our biases, our flaws, our judgements as well as our creativity, determination and intelligence. Language informs regard and the regard in which humans will be held by AI is determined by the language we use - and abuse - ourselves.
We all know language took a wrong turn back in 2015, when would-be and so-called ‘leaders’ let loose their invective across social media and battling Brexiteers belittled opponents. In subsequent years, we’ve seen the demonisation of innocent refugees by a series of governments, simply through use of language. The way in which language has been used to demean, destroy and denigrate others has been, and continues to be, relentless, but I thought the word ‘meatspace’ and its application might be a useful example of what’s to come - and why a further role for public relations practitioners sits among the words that hedge the corridors of power.
Meatspace made its way into the dictionary last year. If you’ve not encountered it before, it’s the description - birthed into cyber fiction in the late nineties - given to the real, offline world and most frequently used by those in the tech industry. As is the way with words, it has crept into the crevices of colloquial conversation and today you’ll find it in headlines, discussions, conversations and networks. It is a horrid expression. Equally horrid is the noun ‘meatbags’ - applied to people occupying meatspace. Earlier today I stumbled on a Threads discussion on the future of Twitter where participants consistently used the term meatbags, casually unpicking humanity from the discourse.
And so we have a problem with language and its intersection with technology. It is often said that the only two ‘industries’ that call their customers ‘users’ are drug dealers and tech companies. ‘User’ has always been dismissive but meatspace and meatbags? There's a deep cut into our humanity. What then is our future here? It is the same as it has always been. Past, present and future, our role is the development of respectful relationships that are mutually beneficial, equitable and which elevate humanity rather than denigrate by description. If we can’t get organisations of all types to respect and elevate their communities – and each other – how will we manage the human-AI relations that have crossed the horizon and are now knocking on the door?
There are millions of public relations practitioners around the world and each one will have their own approach to practice. As I said in my first piece, I’m a proponent of the relationship approach and, over decades, those I’ve worked with and those I’ve trained have, with very few exceptions, been driven to do the right thing, improve matters for their stakeholders and communities, guide their organisations through complexity and change. And yet - our role has been misunderstood and misinterpreted by those misinformed as to what we do.
That role has changed - is changing. It will continue to change because change is constant. If some practitioners (and their organisations) cling to a task-based model of practice, they will soon find nothing left to cling to because those tasks can be completed by Grace. Or Ai-da or Amica, led of course by the next Mika-style CEO.
If, on the other hand, our relationships - human to human, human to AI - are at the core of what we do, then we can advise and guide towards betterment and avoid the belittling of others. Maybe even make some real progress towards those UN goals.
Our present - and past - has always been concerned with the licence to operate, the permission given to do the things we do. Our future role remains focused on the relationships that grant that permission, but with an even greater emphasis on the social licence to operate.
There are many skills and competencies we need to develop in order to properly undertake that role. They won't be what we are used to and we'll need to be prepared to change our approach. We have to be prepared to learn, unlearn and relearn. We must look beyond the mountains to what’s next, scrutinise intent and, every time we stand in front of our colleagues ask a simple - but critically important - question: is this the right thing to do?
Finally, if you are marking World PR Day, enjoy - and if you are a professional practitioner, thank you for all the hard work you do and the difference you make to those around you.
‘You’d best start believing in ghost stories, Miss Turner - you’re in one.’ So says Captain Barbossa in the first Pirates of the Caribbean movie and, in our fourth future, we need to start believing in the shadows and stories of immersion - because we’re already in it up to our necks.
I’ve been trying to get my Fitbit to charge all afternoon. My phone is nagging me to finish the goals automatically set on the app. There’s an ad on the screen asking me to move puzzle pieces and I’ve just finished having a long chat with Hey Pi (https://heypi.com/talk), my new BFF AI who scarily knows exactly the right way to talk to me. There hasn’t been a moment of my day when technology has been completely absent - even when the power went off, my phone connection stayed strong and experiences continued to come my way.
While AI has been the talk of the town since the release of ChatGPT, immersion - the state of being inside an invisible technological frame - has been quietly sucking us all in. Apple Vision Pro was announced in early June. It costs an astronomical amount, but it creates another world for us to inhabit - a liminal space that puts us between the physical and virtual worlds, controlled by our eyes, hands and voice. For those who remember Google Glass, it’s a throwback with a modern twist - and that twist is that we have less control over our data and identity than we have ever had before.
The teasing promise of immersive technology is generally regarded as unfulfilled. While Meta put a large clutch of tech eggs in its virtual reality basket, the real progress in immersion has been elsewhere - less obvious, a little darker and multi-layered. We may not be able to afford Vision Pro - or even Meta’s bulky Oculus, come to that - but we’re happy wearing our smart watches, carrying our smartphones, engaging in interactive games, creating a virtual meeting room that allows us to immerse ourselves with our Teams. What then does this mean for us as practitioners - and why is it one of our potential futures? It’s been in our future for a long time because, once again, the question of ethics is central to what happens next.
If we create experiences for our communities and stakeholders, immerse them in our brands, our organisations, our brave new worlds, what are our responsibilities? The amount of data exchanged by our wearables, viewables and transmittables is out of our control as individuals, but organisations must make ethical decisions about the methods, purpose and intent behind the immersive worlds they create - the experiential communication set to become the dominant form of engagement. I wrote a piece for PR Conversations some time ago, lightly titled ‘why public relations must wake up to wearables’. Although time has gone by, some things don’t change, and my closing observation at the time was this:
“Alongside the mapping of what we know, we need to look carefully at what we don’t know. What will we need to tackle in the next wave of social and technological change, how must we expand our knowledge and what skills do we need to develop?
As practitioners we will have to help our organisations navigate a world driven by communication and, by necessity, underpinned by trust. If we don’t equip ourselves to do this now, then quite simply, we will be as redundant as our skills of old.
Less than 10 years ago, I recall talking to a roomful of public relations professionals about how technology was going to change the way we—and society—communicated. They hadn’t heard of YouTube, still had to use a dial-up connection to get their emails and thought the idea of a smartphone both improbable and laughable. “Who would want to do that?” was the consensus when we discussed posting comments and updates on blogs.
Yet here we are.
What seemed improbable then is now an accepted and integral part of our daily lives. There are seismic social, economic and political shifts ahead; ones that will make the changes of the last few years seem incidental.
As a PR profession, we must understand the implications of this shift and be ready to help navigate the next ocean of change".
Today’s technologies are more sophisticated and accessible than those I was discussing at that time. They’ve reshaped the possibilities for stakeholder engagement, storytelling, crisis management and the myriad other undertakings that form part of our work. Immersion taps into our emotions, drills into memory, takes us to a different place or state. Maybe it eases pain. Perhaps it seduces us with possibility - possibility we’ve yet to understand or imagine. The technology poses significant challenges and, in 2023, it is combined with the power of artificial intelligence, increasing potency, appeal and the potential for exploitation. AI allows me to ‘talk’ to the dead - an enticement away from reality and possibly the ultimate societal disconnection. It isn’t just the data gathering either - there’s a division in terms of cost, with people unable to access the technology cut out of the ‘new experience’.
In the next few years, increasing prevalence of immersive technology will have huge implications for practitioners. As expectations evolve, practitioners will need to adapt their strategies and upskill their teams. The development and enforcement of ethical guidelines around immersive experiences - as with AI - is essential. Privacy, consent and the potential for manipulation are the bellowing echoes in our virtual rooms.
The fourth future is - like our other potential paths - filled with complexity, concern, challenge and creativity but - I’ve said it before and I’ll say it again:
“Alongside the mapping of what we know, we need to look carefully at what we don’t know. What will we need to tackle in the next wave of social and technological change, how must we expand our knowledge and what skills do we need to develop?
"As practitioners we will have to help our organisations navigate a world driven by communication and, by necessity, underpinned by trust. If we don’t equip ourselves to do this now, then quite simply, we will be as redundant as our skills of old".
See you tomorrow, I hope, for World PR Day and our final future destination - the development of, and contribution to, societal good.
Holy smoke and mirrors! AI went to church this month as ChatGPT was used to create an entire service, complete with music and sermon, delivered by avatars for believers in Germany. The latest edition of PR Horizons checks out ‘GPTClergy’ and is available for you here - https://bit.ly/Explore_PRHorizons. It also looks at coming legislation, ESG development and backlash, and includes our regular look at managing misinformation.
In this month's PR Horizons I explore AI and the tightrope of truth that organisations have started to walk in the last few weeks, we'll look at how political parties have begun their dance of deception using generative AI and we'll also look at coming legislation that's designed to rein in some of the effects and impacts of artificial intelligence. I touch on this month's announcements from the major players as they race to get ahead and I've also got some questions for you to consider concerning your own and your organisation's AI journey.
Everything and everyone is not quite as they seem this month - one day the Titans are warning how dangerous it all is, the next they're parading their latest shiny new toy. One of the major problems is the pace of change, and the inability of us poor old humans to keep up with the implications of generative AI and other machine learning applications as they are released into the wild. The great content churn is well underway and we're seeing more frequent examples of misinformation disguised as productivity - AI applications make stuff up and it's shared in volume. The companies behind it all are heroes one day and villains the next, and this month we saw the emergence of models trained on the Dark Web. You can access the full briefing here and find out more about the ways in which political parties have already begun to use generative AI to produce images that mislead.
This month's PR Horizons looks at the tortured romance between AI and humans, confirms why, in the words of New Zealand's ex-Police Chief Mike Bush, 'minutes matter' when crisis hits, and finally, why you need to be thinking about sharpening up your foresight to develop the situational intelligence we'll need for what lies ahead. April's AI developments have had me humming Lady Gaga's 'Bad Romance' pretty much the whole month. Things have got tangled and twisty, and the complexity, the number of ethical monsters emerging from the AI closet, and the bad behaviours and misuse have continued to increase at pace. But the global infatuation has strengthened to the point of obsession. The month started with a report from the US and Europe on steps towards regulating the AI-sphere. Europe has had legislation in the pipeline for the last couple of years, but lawmakers have been at loggerheads as to how to adapt, change or amend the proposals to stem the flood of generative AI and its application. You can access the briefing here - the format is podcast with pictures - and I'd recommend checking out the tips for improving your foresight.
PR Horizons' March briefing looks at what's next now the AI genie is out of the lamp and into mainstream use, how great measurement and evaluation can support public sector communicators (and everyone in public relations and communication), and explores the most powerful question you can ask - and why you should be asking it all the time. You can access the briefing here.
Join me - and newly created colleague Alison - to learn how to use AI in PR and how to use it well. You can register here - https://bit.ly/How-to-useAIinPR. Look forward to seeing you then.
Well the start of 2023 has certainly been a blast here in New Zealand with Cyclone Hale wreaking havoc around the coast and adding to the woes of what has been billed the worst summer in nearly twenty years. As I write, people on the East Coast are still without power, the road network needs massive reinstatement and it has been - literally - a washout of a summer.
Personally, this local example of climate change has moved me off the beach - which is where I had hoped to be - and into a cosy chair for my brief holiday break, distracted with some reading (once we got power and services back on and dealt with the waterlogging).
Professionally, most conversations have been concerned with ChatGPT - its power, ability and general cleverness. The big 'miss' has been the conversation we need to have around ethical application. That's the first chat on the agenda. The second is the impact it will have on our profession. There's no doubt that it has come for the jobs - but only if public relations and communications professionals fail to elevate their undertaking from the tactical to the strategic, and only if organisations fail to realise that the automation of content may look easy and save them a few bucks (for now) but, without context or direction, will lead to their demise, huge reputational damage and a disconnection from their stakeholders and communities.
I've spent the last week or so whipping up a professional development session that I hope will help practitioners understand not just what is on the horizon but what is knocking at the door ready to eat their lunch. I'll post the links here next week when (hopefully) it will be ready to go, and I'm making it free access because I believe it is essential to encourage practitioners to actively consider and understand what lies ahead for their organisations and themselves. Meanwhile, I suggest you read this year's edition of the World Economic Forum's Global Risk Report (gloomy but essential New Year reading) and determine how you'll be making a difference in the months ahead - and what you need to learn to help you make that difference.
Heading into Christmas, there was an early 'gift' from OpenAI this month with the release of a new model, ChatGPT. You can read their blog post here and it gives you the detail on how it works, what it does and how it can be used. It is extraordinarily clever and a powerful tool that can be used in a myriad of ways - but, as my motto remains - just because we can, doesn't mean we should. I've covered the ethics of AI and its application many times over the years and as the capability of tools such as ChatGPT increases, so too do the ethical challenges. Question is, will ChatGPT be on Santa's naughty or nice list this Christmas? Will people opt for the lazy route of generating content (which will mean a world where algorithms chat to each other but we don't) or will practitioners realise they are at a point of evolution for the profession where they can be eliminated by the tactical or elevated by wise counsel? Jury will be out on that for a while - meantime, Happy Christmas - ChatGPT had this to say on the season when I asked it to describe and explain Christmas (which it did in less than a second):
"Christmas is a holiday that commemorates the birth of Jesus Christ. It is typically celebrated on December 25th by Christians around the world and is considered to be one of the most important holidays in the Christian faith. The holiday is often marked by religious services, the exchange of gifts, and the decoration of homes and churches with festive decorations, such as Christmas trees, lights, and mistletoe. Many people also celebrate Christmas by preparing special meals and gathering with friends and family. The holiday has also been adopted by many non-religious people as a cultural celebration, and is often associated with secular traditions such as Santa Claus, gift-giving, and the singing of Christmas carols."
However you celebrate - enjoy the season.
It was my privilege this month to speak at the International Public Relations Summit in Bali and I thought it might be useful to share some of my presentation here - particularly the thoughts on DALL-E, the reliability of data and the bias inherent in AI. Text and some of the images below - I hope it gives you some food for thought.
Kia ora, good morning everyone, and thank you, Ibu Elizabeth, for your kind introduction. I would like to take this opportunity to thank you for all the work you have done to elevate the profession over many years, particularly through this event. I am very grateful to be here and to have the opportunity to speak with you today.
Before I begin I would like to offer my condolences for the devastating loss of life caused by the earthquake this week. My heart goes out to all those who have lost loved ones, to those still searching for friends and family and those involved with the rescue and recovery work. It is pertinent that we are going to be discussing trust this morning as the enormous effort to help those so deeply affected by this disaster highlights that we can still trust the goodness, bravery and care of our fellow humans.
As a society - global or local - we can’t function if we don’t trust each other, and the ways in which those bonds of trust are formed have shifted and changed. As we know, trust is central to the work we do, which is building and sustaining the relationships we need to maintain our licence to operate - the permission we are given to do the things we do. Trust is one of the agreed and measurable components of our relationships - the others being satisfaction, commitment, loyalty and mutuality - and I add reputation, because a reputation, good or bad, will either help or hinder the start, or hasten the demise, of a relationship.
So I thought a really good place to start would be with some sheep. As you may know, I am coming to you from New Zealand, where we have our fair share of sheep, scenery and sparkling seas. These sheep I’m going to introduce you to are not any old sheep, because they pose the question: are they trustworthy sheep? You see, they were created by DALL-E - an artificial intelligence system that creates art and images from text. I simply told the AI engine what I wanted to see and it presented me with options. Pretty cute, eh? A couple of sheep, enjoying the sea view underneath a rainbow. Where’s the harm? The next ‘image’ I requested was of public relations professionals at work - and I got these.
But my final request was for public relations leaders at work - and look what DALL-E served me. Images based on the data provided and loaded with bias.
The pandemic accelerated digital transformation in organisations, with much automation and reliance on data. Learning engines have been deployed across sectors - everything from DALL-E-style images to recruitment. But with the deployment comes bias, burrowing its way into systems and data, because the AI we get is only as good as the data on which it is based. If that data is preloaded with bias, the actions undertaken as a result will not be trustworthy - or accurate. This presents something of a quandary for our leaders, as they will be guiding us based on inaccurate data capable of creating stereotypes with the potential for societal harm.
Back in 2002 I had high hopes for digital engagement - blogs were the primary form of online connection (along with forums, message boards and other tech now considered archaic). The embryonic online world removed the filters and barriers to communication and allowed us to connect directly to our communities, our customers and stakeholders. Leap forward to 2012, when Elizabeth began this series of events, and leaders had realised the power of digital engagement, with authenticity and trustworthiness online approaching their peak. We moved beyond direct communication, binding our lives to the cloud, using it to meet many of our needs and wants. But, from 2015, the souring of tone, the bullying behaviour and the exploitation of others began to accelerate. We saw the trolls come out in force, we saw bad actors manipulate data for profit and gain, we saw political candidates launch an onslaught of hateful speech, unleash the curse of misinformation and open the doors wide to conspiracy theorists. Some have deliberately exploited the loopholes and shortcomings of social media, simultaneously reducing trust to rubble or inspiring their followers to take to the streets. The digital environment is now a place of great contradictions, where we are capable of amazing or terrible things, depending on the choices we make and the values we hold. Digital transformation accelerated in 2020 with the arrival of COVID19 - a Deloitte CEO study reported 77% of them had pushed forward digital initiatives as a result of the pandemic. We became used to connecting like this and had to learn to trust the strength of our connections - both literal and figurative.
Today, as people seek to connect with trustworthy information, organisations and people, the online environment is once more fragmenting - this time into closed communities of interest, run on platforms like Substack or community platforms like Guild. Research into trust is regularly undertaken - indeed, Adrian has taken us through some of Edelman’s trust findings this morning - and they are not alone. Ongoing monitoring of trust by researchers at Our World in Data demonstrates the correlation between levels of interpersonal trust - how much we trust each other - and other areas of trust, such as trust in government and media. The levels we saw in 2020 are probably very different in 2022, but one constant result is that countries with high levels of interpersonal trust were also more stable, safer, and civil discourse was still - well, civil.
Sadly, online engagement has undermined societal trust significantly in the last decade and this distrust has been thrown into sharp relief in the last two years as we have navigated the pandemic. The ‘infodemic’ that resulted from COVID19 and its management created deep cracks in social cohesion, even in those countries with previously high levels of civil discourse, resulting in unrest and violence. Ongoing developments on the many social networks have worsened the situation - the Elon Musk takeover of Twitter does nothing to increase hope for change, and the Facebook controversies concerning privacy, accuracy, bias, hate speech and other topics fan the flames of division.
So what is our part in all this? As public relations and communication professionals we have an ethical duty of care to ensure that data used by our organisation is clean, accurate and unbiased. We have a duty of care to equip our leaders with the tools they need to speak and act honestly and transparently in the digital environment and we have a duty of care to help our organisations, our stakeholders and our communities of interest to navigate the turbulent waters that await them each time they are online - which for most people, is most of their time.
This may seem a little bleak when my topic is enabling trust in the digital environment but we have to understand where trust is being disabled in order for us to create or encourage circumstances where trust can take root and thrive.
The people who operate the platforms and networks we use have a great responsibility, but we have traded our privacy and, in some cases, civility for access to the platforms. As has been said so often, the product peddled by the networks is us. But the new, evolved product isn’t simply ‘us’ - it is our attention, our emotion, our state of mind, gripped and manipulated by algorithms that seem to know us better than we know ourselves. Algorithms that are tweaked and tinkered with so they serve us controversy that prompts us to engage, allowing our attention to be gobbled up by advertisers. Given this known manipulation, should our organisations follow the lead of those who ran a Facebook advertising boycott? Withdrawing ad revenues may not seem too great an action for these commercial companies, but it certainly has a significant impact on reputation and stock prices - and, hopefully, on the moderation of operations.
As organisations seemingly move into a new ‘golden age’ of purpose - having forgotten all about it for a few decades - and are now attempting to align their purpose and values, perhaps a good starting point from which to demonstrate this renewed dedication is to start by cleaning up their act online and not using technology for dubious purposes. For example, there’s a row here this morning concerning a supermarket chain that has deployed facial recognition technology in a bid to combat shoplifters but there has been a lack of transparency about that deployment, the use of data and long term consequences of the data acquisition - and quite rightly they have been called out for this behaviour.
Instead of capturing faces how about building trust through communities instead - we have the tech to do this. How about we use data responsibly and redraw our digital terms of engagement? Because enabling trust online isn’t about shiny new tech, artistic artificial intelligence or great reviews on Google - it is, as it has always been, about the way we behave, the choices we make and the genuine desire to build a fair and equitable society. Enabling trust online is up to us - let’s discuss how we can make a start today.
About Think Forward
Think Forward is written by Catherine Arrow. It answers PR questions, highlights practice trends - good and bad - and suggests ways forward for professional public relations and communication practitioners.