The age of terrible AI-generated charts is upon us
Bad charts generated by AI are proliferating on LinkedIn, touching on social, political, and business issues. There are clear paths by which this phenomenon could lead to a permanent erosion of civil liberties. State intervention is likely to be ineffective and to come with immense trade-offs. This is a call for LinkedIn content producers and consumers to act responsibly in the age of AI.
Two broad categories of bad charts have stood out to me in recent times:
- Charts that contain no outright factual errors but cite no sources anywhere
- Charts that contain blatant factual errors (e.g., figures that don’t add up)
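The second category is often mechanically detectable. As a purely illustrative sketch (the function name and the figures below are invented for this example, not drawn from any real chart), a sanity check for "figures that don’t add up" can be as simple as:

```python
def figures_add_up(segments, stated_total, tolerance=0.01):
    """Return True if a chart's segment values sum to its stated total."""
    return abs(sum(segments) - stated_total) <= tolerance

# A hypothetical pie chart claiming shares of 100% whose slices
# actually sum to 112 - exactly the kind of error seen in the wild:
slices = [42, 35, 20, 15]
print(figures_add_up(slices, 100))  # prints False: the slices sum to 112
```

Trivial as it is, even this level of checking would catch many of the charts in question before they are posted.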
I have seen influential business leaders (including at least one data leader) disseminating these.
Inevitably, someone in the comments questions the rigor of the data. In a few instances the authors have defended themselves by admitting the chart was produced by an LLM while claiming the data was “directionally correct”, or some such nonsense, stopping just short of quipping “trust me bro”. In one instance the author went so far as to claim that they didn’t even need to be rigorous on LinkedIn.
In many of these cases, the chart is ideologically loaded in one direction or another. This fuels polarization, as people debate heatedly from a blatantly false starting point. But there’s a more insidious, corrosive effect than pushing this or that ideological bias (which on its own would be fine in a democratic society), one that operates on a more metaphysical level: bad charts slowly erode our capacity to agree on ground truth. The concept of truth itself degrades.
I was not too surprised to see the emergence of the “post-truth” politician - politics has rarely been based on reason or universals, and the whole concept of “post-truth” is probably bogus anyway. Nor am I surprised that data can be manipulated in a mass society: the mainstream media have been doing it for ages, and entire books have been written on the subject.
But at least these phenomena were largely centralized, contained both temporally and spatially. It was easier to switch off by turning off the TV or throwing the paper in the trash. Now, however, the phenomenon is completely decentralized, even capillary: anyone with a Claude or ChatGPT subscription and a Twitter or LinkedIn account can become a bullshit producer.
As a consumer, it is becoming increasingly difficult to insulate oneself from this. Even while minimizing social media use, exposure remains high. LinkedIn is the only social network I use, and only occasionally, yet I feel constantly exposed to blatantly bad charts. I try to be charitable about occasional lapses, but I ruthlessly block or mute repeat offenders. Even so, it’s all becoming an endless game of whack-a-mole.
I do not think there will be a good political solution to this. The cats are out of the bag and they’re all over the place. State-driven solutions will be largely ineffective and will likely require potentially unacceptable trade-offs, ranging from colossal economic costs to the erosion of civil liberties.
This only leaves a few humble propositions, for business and opinion leaders producing content on the one hand, and for consumers on the other.
If you’re a business leader, please be rigorous in how you treat data - for your reputation and for the sake of our civil liberties, including your own. The ask here is not even that you stop using AI to generate charts: just use AI more rigorously to produce better content, for example by asking a second or third LLM to review the output of the first.
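One way the “second LLM as reviewer” idea could be wired up is a simple generate-then-review loop. Everything below is a sketch: `ask_llm` is a hypothetical stand-in for whatever chat-completion API you actually use (it is stubbed with canned replies so the snippet runs), and the prompts are illustrative only.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: in practice, call your provider's API here.
    # The canned replies below just make the example self-contained.
    if "review" in prompt.lower():
        return "ISSUES: no source cited for the revenue figures."
    return "Chart spec: bar chart of revenue by year (source: none)."

def generate_and_review(topic: str) -> dict:
    # First model drafts the chart; second call reviews it for the
    # failure modes discussed above: errors, sums, missing sources.
    draft = ask_llm(f"Generate a chart about {topic}.")
    review = ask_llm(
        "Review this chart for factual errors, figures that don't add up, "
        f"and missing sources:\n{draft}"
    )
    return {"draft": draft, "review": review,
            "approved": "ISSUES" not in review}

result = generate_and_review("company revenue")
print(result["approved"])  # prints False: the reviewer flagged a missing source
```

The point is not this particular prompt wording but the workflow: nothing goes out until a second pass has looked for exactly the errors people keep getting called out for.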
And here’s another idea: you may not even need charts to make a persuasive case. Some of the world’s most influential thinkers and leaders were not rigorous empiricists, and yet they managed to have a universal impact that has lasted for centuries, up until our days. Rousseau is a classic thinker who comes to mind in this sense. As for political leaders, you can take your pick.
Persuasion and influence are achieved through emotion rather than charts. AI may be making charts very easy to produce, but they still take some time and tinkering. Why not invest that time in making your message more emotionally appealing? The end result will be the same, but you won’t risk being called out for posting bad data.
And if you’re a consumer of content, my only recommendation is this: be skeptical out there. The age of mass-distributed, shiny-looking visualizations based on fake data is upon us. We too, as consumers of content, are responsible for exercising critical judgement and calling out misleading visualizations.