I have a love-hate relationship with artificial intelligence (AI).
On one hand, I find it quite useful for brainstorming relevant ideas, templating and saving me valuable time that I can then redirect to more strategic work. On the other, I find its occasional wild tendencies amusing at best…and potentially dangerous at worst, especially when the output it generates is taken at face value, without some level of responsible oversight or fact-checking by the person prompting it.
While we’re living in fascinating times, I do believe it would be a little reckless to ‘hop on the AI hype train’ without first acknowledging a few of its limitations. So, let’s take ChatGPT as a well-known example: it can’t reliably cite its sources (its function is bounded to recognizing and reproducing patterns in its training text), and it trips over simple tasks like counting backwards. And let’s be honest, while the output it generates sure does come across as confident…its factual accuracy sometimes leaves a lot to be desired.
Why?
The accuracy of ChatGPT’s output (or any AI’s) depends on the accuracy of the data it was trained on – and, by extension, on the people who assemble and curate those datasets. The tool itself is NOT concerned with, nor does it understand, truth, and it is therefore prone to AI hallucination. If human biases are present in existing and emerging datasets, we can expect those biases to carry over into AI-generated output, no matter how specific, high-quality and intentional a prompter’s input might be.
When disinformation masquerades as plausible ‘fact’ – there’s a problem.
Never mind the out-of-control robots we’ve seen in sci-fi films. The risk of real bad actors weaponizing the technology to spread falsehoods – perhaps in the form of propaganda production, election meddling and identity theft through convincing deepfakes – is plenty scary, and the potential harm falls hardest on people and communities who are unaware and ill-equipped.
This is one reason why I’m a believer in at least some degree of enforceable governance, oversight and rules for responsible use: to mitigate the risk of unintended consequences, yes, but also to ensure we always retain command and control of the evolving technology, and not the other way around. We are already seeing regulatory conversations and steps taking shape in Europe, as governments and regulatory bodies around the world scramble to figure out the new ball game.
Given how easily accessible the evolving technology is to anyone, anywhere, at any time in the world…we can expect to find ourselves on a riveting rollercoaster ride over the long term, as more and more people learn the art and skill of prompt engineering to better steer AI tools to do their bidding. Think, for instance, of the GPT-4 API – access to the model that powers ChatGPT, which millions of developers have requested – and how it could be commoditized and commercialized as businesses bring it in-house to build and train their own AI programs.
While this could certainly fuel discussions around ownership and copyright, we can also expect the resulting gains in productivity and innovation to alter organizational workflows, shift business models and change entire industries…fast. When one leads, the rest follow, and the race to ‘beat the market’ is on.
Given this, I do believe AI is here to stay. It will continue to get smarter, sharper and more capable as it learns at breakneck speed. However, I also believe that AI is, and will only ever be, as good (or as bad) as the actor using it.
Consider two extremes:
On one extreme – Responsible use could save human lives, such as predictive AI accurately detecting cancer at an earlier, more treatable stage, as seen here in this recent study, where AI-supported screening found 20 per cent more cancers than standard review by physicians who did not use AI.
On the other – Irresponsible use, or perhaps AI itself, might mark the end of humanity, a possible worst-case scenario discussed earlier this year by Geoffrey Hinton, the ‘Godfather of AI’.
I suspect reality will fall somewhere in between these two extremes. Regardless of how the dust settles, AI is not capable of human feeling – and dare I say – never will be. It might be able to mimic it, and perhaps even behave conversationally with near human-like competence. But that is where I believe (and hope) the line is drawn, contingent on a minimum threshold of enforceable governance, oversight and responsible use.
We must also remember that while AI is powerful and capable, human understanding and judgement, contextual nuance in communication, critical thinking, industry wisdom and preferred terminology won’t just vanish into thin air. These human elements are here to stay, too. And I foresee that work which consistently demands these more ‘human’ elements to produce business outcomes will be at lower risk of displacement or obsolescence than work which is more functional or baseline in nature.
This insight should help us, as communications professionals, rest easier – but not easy – because you won’t be replaced by AI itself…but you could be replaced by the person who knows how to use AI to its fullest capabilities.
With disruption of any kind, there will always be initial fear, uncertainty, resistance or even backlash. We’ve seen it before with the emergence of the Internet, Photoshop and social media, and now with AI. But in the midst of chaos, there is also opportunity. And as history tends to repeat itself, I think we would be wise to seize this opportunity while we still can, regardless of my love-hate relationship.
For ourselves, for our communities and for our profession at large.
Interested in finding ways to use AI in your organization? Drop us a line at hello@apexpr.com.