
There’s a scene in Blade Runner, Ridley Scott’s 1982 classic film, where Deckard, Harrison Ford’s character, puts a series of probing questions to the sultry heroine, Rachael, to establish whether she is human or a “replicant”, an AI in a manufactured human body. After an extended interrogation, Rachael eventually slips up when she fails to bristle at the mention of “an entrée of boiled dog.”

That scene and the questions it raises have been on my mind a lot lately. Like many of us, I’ve been conducting my own personal Turing tests on ChatGPT, the phenomenal chatbot released by OpenAI at the end of November 2022, to probe just how indistinguishable its intelligence is from our own. So popular is this parlour game, played across dinner tables, water coolers and social networks, that ChatGPT achieved a record for the quickest adoption of a consumer product in history, reaching a million users in just five days.

The buzz has been particularly shrill in my field of marketing, with practitioners expressing wonder and trepidation in equal measure at the tool’s uncanny ability to do things previously considered the preserve of seasoned marketers and advertising professionals — from writing blog posts and ad copy to generating images and videos that appear not only credible but (heaven forbid!) creative. A boon for productivity and, at the same time, an existential threat to entire categories of marketing and advertising jobs.

Far less kerfuffle greeted the release, shortly after ChatGPT, of GPTZero, a technology which is arguably just as relevant to an AI-infused future. The brainchild of Princeton University student Edward Tian, GPTZero detects whether a text is generated by AI. I’ve found it to be quite accurate, despite its purportedly having been developed over just one weekend, and I suspect that Tian and others developing similar apps will attract ready funding for improvements.

After all, there are several reasons why it’s important to know the provenance of the words, images and videos that mediate our academic, professional and social lives — some obvious, others less so. But the most fundamental reason why this distinction matters goes to the very heart of what it means to be human.

Public school ban

The immediate problem that Tian claims his app will help solve is that of students handing in work generated by AI and claiming it as their own. So real is this risk that (wrongly, in my view) New York City’s public schools recently banned ChatGPT from their networks and student devices. They defended the move on the grounds that, unlike wholesale copying of Wikipedia articles, for example, AI-generated work can pass a plagiarism test with flying colours.

That’s because the AI synthesises inputs from multiple texts, just as humans draw on prior research to advance their own ideas. The critical difference, of course, is that the machine doesn’t know what these texts mean. It simply puts together copy (or imagery, video or voice) in a style, or a variety of styles, that it has been exposed to in its training: a process that Gary Marcus, emeritus professor of psychology and neural science at NYU, disparagingly calls “pastiche”.

For anyone in the business of communication — a category which arguably extends to most knowledge workers on the planet — the increasing prevalence of this type of pastiche should raise alarm bells. AI-generated posts are already starting to permeate blogs and social networks with content that is clearly derivative.

Projecting this forward, one foresees a sort of “Hallmarkisation” of the media. Hardly surprising, then, that Google penalises sites whose content is predominantly generated by AI, even as it continues to use AI to improve its own search engine results.

There’s also the thorny question of whether compensation is due to the original creators of the intellectual property that AI is trained on. A few copyright claims are already in the wings, including lawsuits by Getty Images and several independent artists alleging that Stability AI and Midjourney, creators of AI art generators, used copyrighted photographs and artworks without consent to train products that are now monetising permutations of those self-same images.

While new generative AI tools, like Google’s much-anticipated Bard, will at least attempt to credit the sources of their training data, this doesn’t address the question of what happens to the quality of human discourse when an inordinate proportion of what we read, see and hear is generated in whole or in part by machines.

More sobering still is the likelihood that advanced AI tools will increasingly be used to produce frighteningly credible propaganda at scale. Consider, for example, the hypothetical prospect of a Maga version of ChatGPT, trained on reams of alt-right and QAnon social media posts, or a Russian-designed bot claiming to be neutral, but primed on the opinion pages of Pravda.

The persuasive power of such tools will only grow alongside improvements in deep fakes — AI-generated simulations of real people saying and doing things that they never actually said or did. Even if critical-thinking people aren’t taken in by these shenanigans, there remains the problem of the “liar’s dividend”: simply put, once the fakes are convincing enough, how can I believe anything at all?

On a more prosaic note, I could envisage a situation where I’m in a virtual meeting with someone busy and important, only to find out later that I was actually conversing with her avatar — exquisitely trained to mimic not only her appearance, voice and gestures, but also her values, beliefs and opinions. Attractive as it may be to imagine sending my avatar off to dull meetings while I get on with the more interesting parts of my day, I suspect the person on the other end of the call might take a dimmer view, even if the outcome is exactly the same as if I’d attended in person.

The fact is, leaving aside all the practical reasons why detecting AI is important, there is something more primal at play here; something which compels us to confront the question of what is uniquely valuable about the product of human feeling and ingenuity. After all, as Yuval Harari reminds us in his book, Homo Deus, it’s hubris to believe that, by virtue of our consciousness, we can do anything that machines cannot, or that they won’t eventually do it better and more efficiently. In Harari’s view, “consciousness is the biologically useless by-product of certain brain processes.”

But this doesn’t mean human consciousness is meaningless. On the contrary, it is only by virtue of consciousness that anything at all can be deemed meaningful.

This brings us back to Blade Runner. Rachael having left the room, her creator turns smugly to Deckard and boasts that it took more than a hundred questions to determine that Rachael is a replicant. “She doesn’t know she’s a replicant,” Deckard retorts indignantly. “How can it not know what it is?”


And herein lies the critical difference between Rachael and ChatGPT. Ridley Scott imputes consciousness to Rachael, just as we must if we are to feel any empathy for her or those of her kind. In the movie’s dramatic climax, the villain, Roy, also a replicant, is on the verge of killing Deckard when he pauses.
 
“I’ve seen things you people wouldn’t believe,” he says. “Attack ships on fire, off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser gate…”

As he continues, Deckard’s expression changes from fear to compassion. The hardened blade runner, a hunter of replicants, comes to realise that Roy has not only seen these things but experienced them just as deeply as a human would, and that Roy’s death, and the loss of these experiences, “like tears in the rain,” should be mourned rather than welcomed.


As marketers, in the business of forging human emotional connections, we’d do well to remember that it matters whether the words we read, the films we watch and the art we appreciate are the product of sentient human beings who care about and understand them, or of algorithms. Even if the end product is identical. The same goes for advertising, content marketing and any other form of brand communication.

This isn’t to say marketers shouldn’t use AI. AI tools already play a critical role in augmenting the skills of marketing professionals and will only become more powerful as machine learning advances at a dizzying pace.

Indeed, I’d wager with confidence that, while AI might not replace the marketing function, marketers and agencies that use AI will very quickly replace those that don’t. But equally, we will have to be transparent and honest about how we’re using AI, and wary of the ethical and brand risks we run should we use it irresponsibly. We need to care deeply about these things. Because ChatGPT doesn’t.

This article first appeared in Daily Maverick.