Not many conversations these days are able to avoid the inevitable mention of AI (and yes, it is getting a little boring). We talk about it as a labour-saving device, a way to structure a supervision essay or to debug a stubborn piece of code. We have become comfortable with the idea of delegating our intellectual heavy lifting to a machine, and to be honest I don’t think this phenomenon is worthy of any more articles than it has already received.

But this delegation sometimes creeps out of the library and into our pockets, infecting the way we engage with the world’s most harrowing tragedies. And this is where our reliance on AI starts being lazy not just in a superficial sense, but in a deeply immoral one too.

A new kind of social media trend has recently emerged: the AI-generated ‘activism’ post. Its viral debut came in the form of the rows of tents in the ‘All eyes on Rafah’ post, and its prominence has extended to activism for Sudan, Congo, Papua, and, most recently, Iran. Suddenly, my Instagram story has become a gallery of machine-made imagery. And it is becoming horrifyingly clear that we are now more than capable of avoiding the raw, uncomfortable reality of human suffering, all whilst actually receiving praise for sharing a polished, AI-optimised version of it.

To be clear, the problem isn’t really that these posts get shared around as trends. In a digital age, trends are the circulatory system of information; they are how awareness is built and how our generation signals what matters. The problem is that it reduces activism on real-world issues to just that – trends, and nothing more. It allows the user to check the box of being a good person without ever having to look at a real photo of a casualty or read a difficult dispatch from a reporter. This is the lowest, most pathetic level of virtue signalling, and the laziness that comes with it is just as bad as explicit moral failings.

“If the eyes on Rafah were only looking at an AI-generated camp that doesn’t exist, they were (quite literally) never really looking at Rafah at all”

When we share an AI-generated graphic of a warzone – with its perfect symmetry and cotton-candy clouds – we are choosing a version of the event that is palatable for our aesthetic feeds. This is not admirable; it’s deeply disrespectful and distasteful.

The ‘All Eyes on Rafah’ trend, for example, was successful in one sense – it reached the feeds of millions. But a serious question remains: how many of those millions had a genuine understanding of, and any credible sympathy for, what they were posting?

At the time of the post’s popularity, social media experts told the BBC that the post received such impressive traction largely due to “the AI-generated nature of the image, the simplicity of the slogan, the ease at which Instagram users can share the post in just a couple of clicks, and its uptake by celebrities.” Of course, almost everything is as easy to post as this – so why this one specifically? Why do I take issue with this AI-generated post in particular?

Well, deferring to the BBC once more, “notably absent [from the AI-generated image] are pictures of dead bodies, blood, shots of real people, names or distressing scenes.”

Ah, bingo – that is why it’s so easy to post it. You can do the moral work without carrying the emotional weight of it. Rings a bell, right? It’s almost like writing that supervision essay without having to do the reading or the thinking for it. The latter is the superficial intellectual flaw we have as students; the former is moral corruption. Because, if we’re being brutally honest, if the eyes on Rafah were only looking at an AI-generated camp that doesn’t exist, they were (quite literally) never really looking at Rafah at all.

“We are already being told that our jobs, our art, and our essays can be handled by soulless machines. If we concede our moral voices to them too, what is left?”

You may very well have your own differing views on whether we should be posting these things at all. Yet, at the very least, we must come to a mutual agreement about what not to post. By using these machine-made templates instead of sharing actual information, aid links, or boots-on-the-ground reporting, we facilitate the erasure of the very people we claim to support.

The people of Palestine, Iran, Congo, Sudan, and wherever else the world’s atrocities reside, have spent decades asking the world to see them. When we respond by posting a computer’s hallucination of their struggle, we are effectively saying that their actual reality is too messy, too graphic, or too inconvenient for our social media presence.


We are already being told that our jobs, our art, and our essays can be handled by soulless machines. If we concede our moral voices to them too, what is left? Morality is a muscle; it requires the discomfort of empathy and the effort of discernment. When we see a tragedy, the moral response should be a heavy one. It should involve the difficulty of looking at things we wish weren’t true and the labour of finding ways to help. When we replace that process with a ‘one-tap’ AI template, that muscle atrophies.

This doesn’t necessarily mean we should stop using social media to spread awareness, but it does mean we should be suspicious of any activism that feels ‘too easy’. If a post about a genocide looks like it belongs in a Pixar movie, it probably shouldn’t be your primary way of showing solidarity.

AI can write your essays and summarise your reading lists; it can even generate a beautiful, symmetrical image of a tragedy. But it cannot feel the weight of that tragedy for you. It cannot replace the human act of witness. In the end, if we want to make meaningful change to the world, we have to be willing to look at it as it actually is – not as an algorithm thinks we’d like it to be.