Artificial intelligence is steadily embedding itself into writing workflows, and its impact is now being felt even in highly specialized academic fields. While AI tools promise efficiency and speed, they are also reshaping the nature of intellectual labor in ways that many scholars did not anticipate.
A recent viral post by a PhD scholar, Marshall Steinbaum, has brought this shift into sharp focus, sparking debate about whether research roles are quietly evolving into something far more mechanical.
PhD economist says his job is now only to remove em-dashes
Steinbaum, who earned his doctorate from the University of Chicago, shared on X that a surprising portion of his current workload involves editing AI-generated content. Rather than focusing solely on economic analysis or academic writing, he described spending time refining machine-produced text to make it appear more natural and less recognizably artificial.
In a widely circulated post, he remarked that his primary task now involves removing em-dashes from outputs generated by AI tools such as Claude, so that the writing does not immediately appear machine-made. The comment resonated with a wide audience, many of whom noted the irony of altering standard elements of good writing simply to avoid triggering assumptions of AI use.
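The cleanup Steinbaum describes is mechanical enough to script. As a purely illustrative sketch (not his actual workflow), a few lines of Python can swap em-dashes for plainer punctuation:

```python
def soften_em_dashes(text: str) -> str:
    """Replace em-dashes with comma-spacing so a draft reads less
    recognizably machine-made. Illustrative only; not Steinbaum's
    actual process."""
    # Handle the spaced variant first (" \u2014 "), then bare em-dashes.
    text = text.replace(" \u2014 ", ", ")
    return text.replace("\u2014", ", ")

draft = "The results\u2014though preliminary\u2014suggest a trend."
print(soften_em_dashes(draft))
# The results, though preliminary, suggest a trend.
```

Of course, as the post's commenters noted, automating the removal of a standard punctuation mark only underscores the irony: the edit exists to disguise provenance, not to improve the prose.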
The discussion also highlighted a broader challenge. For now, AI hallucinations remain a significant issue, with models occasionally producing inaccurate or entirely fabricated information. This makes human oversight indispensable. Scholars are not only correcting stylistic quirks but also verifying facts, ensuring logical consistency, and filtering out misleading content.
In effect, AI may generate the first draft. However, experts are still required to validate and refine the output before it can be used in any serious academic or professional context.
Can AI take away a scholar’s job?
The conversation extended beyond economics. Professionals from other disciplines, including engineering, reported similar experiences of editing AI-generated reports, presentations, and corporate documents. This suggests that the trend is not isolated but indicative of a wider transformation across knowledge-based industries.
At its core, the debate raises important questions about the future of research work. AI tools are undeniably accelerating the production of written material, enabling faster drafting and idea generation.
However, the outputs often reflect underlying biases, templated structures, and occasional inaccuracies. This places scholars in a new role, not just as creators of knowledge but as curators and editors of machine-assisted content.
Rather than replacing researchers outright, AI appears to be redefining their responsibilities. The intellectual challenge is shifting from generating original text to ensuring quality, authenticity, and reliability. While this may streamline certain aspects of academic work, it also risks reducing complex scholarly tasks to routine editing processes.
For now, the human element remains critical. Until AI systems can consistently produce accurate, nuanced, and context-aware content, researchers will continue to play a central role in shaping and validating knowledge. The viral post serves as a reminder that the AI revolution in academia is not just about automation, but about adaptation.