MIT Writing Professor Catches Students Using AI, Finds Unexpected Teaching Moment

The telltale signs appeared in the opening paragraphs. The prose was too polished for undergraduates, the story arcs too neat, every character arriving pre-formed, every metaphor a borrowed echo. When Micah Nathan, who has taught fiction writing at MIT since 2017, read two student submissions at the start of last semester, he knew immediately what had happened: artificial intelligence had written them.

He didn't need detection software. The emptiness was palpable. "I knew within their opening paragraphs that both stories were written by AI," Nathan recalls. "The prose was too polished, the arcs too tidy, every character prepackaged, every metaphor a pastiche without context."

Rather than punish the students or invoke institutional policy, Nathan halted the workshop and made a direct confession: he knew what they had done. He assured them they weren't in trouble, partly because MIT's AI guidelines were still in flux, and partly because Nathan recognized something in their choice that felt uncomfortably human. "Had AI been available during my undergrad years, would I have resisted its help? Of course not," he told the class.

What followed was neither lecture nor condemnation. Instead, the students began to speak. One confessed through tears that she used AI because she was terrified of appearing stupid, of being criticized for weak writing. She described a creeping dependency: first checking grammar, then accepting line edits, then structural suggestions, then a complete rewrite. "She said she loved writing stories and hated having used AI," Nathan recalls. "But she couldn't stop herself."

The second student offered a different admission. He had an idea but had never written a short story before. He didn't know where to start. When Nathan asked why he hadn't reached out for help, the student simply shrugged.

Then other hands rose. One student challenged the logic: if the stories were based on their ideas, why was AI authorship wrong? Another asked how using an AI tool differed from hiring a human editor. A third pointed out the irony that MIT itself pioneered AI research in 1959, so shouldn't the institution embrace the technology's labor-saving potential?

These questions, reasonable on their surface, pointed toward something deeper. Nathan shifted the conversation away from rules and toward the purpose of writing itself.

"Writing, I told them, isn't supposed to be easy," he explains. "Writing isn't just the production of sentences. It's the training of endurance by way of sustained attention. It's a way of learning what one thinks by attempting to say it."

This distinction matters. An LLM can generate the appearance of that learning process. It cannot produce the actual transformation that occurs during the struggle to translate thought into language. The value of a workshop, Nathan argued, lies not only in the finished text but in the visible evidence of how it came to be. When AI finishes the author's thinking, that evidence vanishes. The workshop loses its subject: an actual person wrestling with an actual problem on the page.

Nathan draws a parallel to George Orwell's 1946 essay on book reviewing, in which Orwell describes critics inventing responses to books they haven't genuinely engaged with. "Mindless manufacture of responses erodes judgment, and standards collapse," Orwell wrote. When AI-generated fiction fills a workshop, everyone becomes the kind of reviewer Orwell warned against: performing the shape of feedback without having truly engaged with another person's thinking.

The irony is sharp. AI was supposed to automate rote tasks. Instead, it's turning the act of creation into something rote: feeding an idea into a machine and accepting whatever emerges as finished work. The friction that once revealed a writer's process, the struggle itself, has been eliminated.

Since that night, something unexpected has happened in Nathan's workshops. Students talk more openly about frustration, about moments when a draft resists the author's intentions. They discuss why their thinking matters, why the struggle to find words isn't failure but growth. The conversation has shifted from how to write faster to what happens in the space where words haven't yet arrived.

Nathan's new policy is explicit: no AI-generated work. He wants their words, their voice, their visible thinking on the page. Not because machines are morally wrong, but because the workshop only functions when a human author occupies the room, someone whose struggle can be witnessed and learned from.

The danger, as Nathan sees it, isn't that AI will replace writers. It's that students will become so accustomed to bypassing the friction that once revealed their own process that they forget what it felt like to think through language. The workshop becomes a sanctuary for that lost skill, a place where everything on the page belongs to an actual person, and what is not yet on the page remains their work to do.

Author James Rodriguez: "The real lesson here isn't about policing technology, it's about defending the messy, inefficient, deeply human act of learning to write."