AIs are not just machines. They are silicon wordsmiths with endless patience and flawless grammar. They can generate clean prose at scale and on demand: LinkedIn posts, scientific papers, company press releases. Let us delve into why that might — or might not — be a problem.
The advent of ChatGPT in November 2022 prompted a wave of research seeking to identify “tells” in text generated by large language models. Several appear in the first paragraph of this editorial (which was composed by a human, since FT policy prohibits journalists from using AI to write): long dashes; the rhythm of three; “X with Y and Z” descriptors.
Barron’s recently searched for one common AI figure of speech — the “it’s not this, it’s that” phrasing used above. Its analysis of company documents, including regulatory filings and earnings statements, found “an intense ramp-up” of the distinctive structure in 2024, suggestive of widespread LLM deployment. Other studies have detected a jump in the frequency of certain words in scientific papers: “underscore”, “garnered”, “intricate”, and “delve”.
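The approach these studies take can be sketched simply: scan a corpus for the construction with a regular expression and track how often it appears per thousand words, so a year-over-year rise stands out. The pattern and rate metric below are illustrative assumptions in a minimal Python sketch, not Barron’s actual methodology:

```python
import re

# Illustrative pattern (an assumption, not Barron's): a clause of the form
# "not just/only/merely X. It's / They are Y". It matches the opening line
# of this editorial, for example.
PATTERN = re.compile(
    r"\bnot (?:just|only|merely)\b"   # the negated first half
    r"[^.!?]{0,60}[.,;]\s*"           # a short clause, then a break
    r"(?:it[’']s|it is|they are|they[’']re)",  # the affirmative pivot
    re.IGNORECASE,
)

def tell_rate(docs: list[str]) -> float:
    """Occurrences of the construction per 1,000 words across a corpus."""
    hits = sum(len(PATTERN.findall(d)) for d in docs)
    words = sum(len(d.split()) for d in docs)
    return 1000 * hits / max(words, 1)

docs = [
    "AIs are not just machines. They are silicon wordsmiths.",
    "Revenue grew because demand was strong.",
]
print(round(tell_rate(docs), 1))  # one hit in fifteen words
```

A real study would, of course, need a far broader set of patterns and a baseline from pre-2022 documents; the point here is only that the detection is a counting exercise, not deep linguistics.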