What the New York Times Gets Right (and Wrong) About AI Writing

A New York Times article, Why Does A.I. Write Like … That?, grabbed my attention because it identifies many of my frustrations in using AI to help me write.

It also supports an argument in my article, Solving Professors’ Dilemmas about Prohibiting or Promoting Student AI Use, that faculty – not to mention lawyers’ supervisors and clients as well as judges – often recognize AI-generated writing and resent it.

So, naturally, I asked ChatGPT to analyze the NYT article, identify accurate and misleading claims, and then write a blog post about it.  Here’s what it produced.

ChatGPT’s Critique of NYT’s Critique of AI Writing

The New York Times recently published a lively essay arguing that AI writing has become a pervasive, recognizable, and increasingly irritating presence in our daily lives.  The author paints a world where AI-generated prose lurks everywhere – in novels, news stories, restaurant menus and social media – all speaking in a voice he describes as overwrought, formulaic and strangely haunted.  The piece is witty and memorable.  It also raises an important question: is the author right?

This post offers a more grounded look at what the NYT piece gets right, where it goes too far and how we might think more clearly about the evolving relationship between human and AI writing.

What the NYT Article Claims

Although the essay is written in a humorous and exaggerated tone, its core claims can be summarized fairly:

  1. AI writing has a distinctive and easily recognizable voice. The author argues that AI-generated prose contains reliable stylistic giveaways:  excessive abstraction, breathy metaphors, overuse of certain words, synthetic earnestness and a tendency toward theatrical sentence structures.
  2. Humans are becoming increasingly skilled at spotting AI writing. According to the article, readers react almost instinctively to phrases that “feel” machine-generated.
  3. AI overuses rhetorical tropes. The article highlights familiar examples including triplets, the “It’s not X, it’s Y” construction and words such as tapestry, intricate, delve, whisper, liminal and echo.
  4. AI writing lacks authenticity because the model has no lived experience. The argument is that, without sensory grounding, AI leans on vague emotional references and conceptual fog.
  5. People resent writing that feels artificial. The claim is that audiences react with irritation or distrust when they believe a machine is speaking to them.
  6. AI-generated style is influencing human writing. Exposure, according to the author, is causing humans to pick up AI-like phrasing, often unconsciously.

These claims are presented with great confidence – and occasional theatrical flair – but confidence is not the same as accuracy.

Where the NYT Article Is on Solid Ground

The article identifies several genuine patterns worth acknowledging.

  1. Many readers can indeed sense when writing feels “AI-like.”

People often notice generic prose that sounds smooth but empty.  Certain patterns – structural or linguistic – do stand out, and early AI tools produced them frequently.

  2. Some stylistic quirks really were common in earlier AI models.

Words such as delve, intricate and tapestry appeared more often than in typical human writing.  So did upbeat abstractions, overly tidy sentences and a tone that tried a bit too hard to be uplifting.

  3. Readers sometimes react poorly to writing that feels automated.

Even without proof, suspicion alone can trigger a negative response.  People want to hear a human voice, particularly in writing that requires judgment or care.

  4. Humans do sometimes imitate AI constructions.

Language spreads socially.  If people encounter certain phrases repeatedly – whether from AI, journalists, corporations or social media – some of it will seep into everyday speech.

These are fair and reasonable observations.  They are also more limited than the article suggests.

Where the NYT Article Overreaches

The piece works as a polemic, but several claims do not hold up under closer scrutiny.

  1. AI writing is not monolithic.

The essay describes a single, uniform AI voice.  That may have been a reasonable caricature of earlier generations, but modern systems generate a wide range of styles depending on the prompt, constraints, and user guidance.  Treating AI as if it has one fixed voice oversimplifies what is now a very diverse set of tools and outputs.

  2. Humans are not nearly as reliable at detecting AI writing as the article implies.

People frequently misidentify human writing as AI-generated, especially when the writing is generic, hurried or overly polished.  Detection is a guess, not a diagnostic.

  3. The absence of lived experience does not disable metaphor.

The article argues that AI lacks sensory grounding and therefore cannot produce meaningful imagery.  AI does lack lived experience, but it can generate sensory metaphors because metaphors are linguistic patterns, not autobiographical recollections.  Plenty of effective human metaphors come from imagination rather than direct experience.

  4. The examples used to prove AI’s deficiencies are cherry-picked.

The author highlights dramatic or extreme outputs for comedic effect.  These are entertaining, but they are not typical.  Drawing broad conclusions from unusual samples overstates the case.

  5. Corporate writing did not suddenly become artificial because of AI.

Much of the “AI-sounding” language in public communications long predates generative tools.  Corporations have been producing overly polished, impersonal prose for decades.  Not every stiff paragraph is a machine’s fault.

A More Nuanced Perspective

The NYT essay is right that AI writing can sometimes be bland, repetitive or overly sentimental.  It is also right that readers can react negatively when they believe a machine has spoken for a human.

But the real story is more complicated.

AI-generated writing varies widely.  Some of it is generic; some of it is crisp.  Some of it sounds like early ChatGPT; some of it can be instructed to mimic a specific style, tone or genre with surprising accuracy.  Humans are sometimes good at spotting AI, and sometimes wildly off.  And as with any new technology, our relationship with AI writing will continue to shift.

The more useful question is not whether AI writing has a universal voice.  The better question is whether a piece of writing – human or machine – reflects judgment, clarity and purpose.  Those qualities matter more than the identity of the author.

If we keep that in mind, we can appreciate the NYT essay for what it is: a witty snapshot of a moment in the culture, not a final verdict on the future of writing.

Take a look.


Lande’s Critique of ChatGPT’s Critique Etc. Etc.

I have often asked ChatGPT – or RPS Coach, my bot built on ChatGPT – to write about AI and analyze criticisms of it.  It usually produces serious and plausible critiques of AI.  For example, I asked it to help draft a forthcoming piece, Bummed Out About AI?  What Are You Gonna Do About It?, and it generated a very extensive list of valid criticisms.  I have been surprised that these analyses generally have seemed candid, neither defensive nor inclined to minimize the problems.

I think that the preceding blog post – and this chat that produced it – generally seems plausible, though I have a few bones to pick with its analysis.

I think it minimizes the problems.  Of course, AI is not monolithic and doesn’t always produce any particular pattern of responses.  But, based on my experience, the problems are more frequent than it implies.  And darn hard to resist.  I created instructions to override some of ChatGPT’s biases – like using the cursed em dash and buzzwords like “align” – but it often stubbornly refuses to follow the instructions.

It also suggests that the problems are mostly a thing of the past and have largely been corrected.  Not so.  For example, it frequently uses triplets as in the phrase at the end of its draft blog post – “reflects judgment, clarity and purpose.”  I am constantly battling the “It’s not X, it’s Y” construction.  And lots more.

So What Should We Do About It?

It’s tempting to reject AI completely once we identify its problems.  That would be throwing out the baby with the bath water.

Nothing is perfect.  If we demanded perfection, we couldn’t deal with any humans or technology – or really do anything.  Instead, we learn to distinguish what works properly from things that don’t.  We take advantage of the former and avoid the latter.

That’s what we should do with AI.

AI will become increasingly integrated into our lives.  The real question is whether we will use it responsibly.

AI Tools Can Be Fabulous Writing Coaches

AI has been an incredibly valuable writing coach, helping me draft and edit.  I give it detailed instructions to produce outlines and text.

It’s helpful for producing first drafts.  Sometimes it produces terrific stuff.  Other times, junk.  When it produces junk, I describe the problems and tell it to try again.  Sometimes several times.  Most of the time, the responses reflect my intentions and often extend them in directions I wouldn’t have thought of on my own.

The key thing is my careful editing of its output, as I describe in my classic blog post, Writing with a Bot:  I’m Pretty Sure I Wrote Most of This.  For important documents, I repeat the process multiple times, reviewing the structure and language to make sure that it reflects my ideas and voice.

In my experience, no human is anywhere near as good an editor as AI.  The value comes from the back-and-forth chats that produce a final draft, not from simply using the first draft.

The Real Problem:  People Skipping the Editing

AI is so widely available that lots of people are using it, and it’s becoming deeply integrated into our lives.  It’s tempting to have AI produce a draft and leave it at that.

The problem isn’t that AI produces writing – it’s that people often use it without careful editing. The solution, in my view, is not to avoid AI, but to teach people to use it properly.

In legal education, that could mean helping 2Ls and 3Ls learn how to produce and evaluate AI-generated output and then revise it appropriately.

I suggested good writing techniques in Using AI to Improve Your Writing (Without Losing Your Voice).

Just Saying No Is Tempting – But Unlikely to Work

“Just say no” is not likely to be an effective policy, as I described in another of my classic posts, What Do AI and Sex Have in Common?  We should deal with the world as it is, not the one we might wish for.  Failing to educate people – about sex or AI – may actually increase the risks of irresponsible behavior.

Publishers are developing various policies about how authors may or may not use AI and what disclosures they must provide.  I think that disclosure policies probably make sense, at least in the short term.  Presumably, these policies will evolve as we get more experience.

What do you think about all this?
