As professors in the field of writing studies, we understand why many colleagues are calling for resistance to GenAI. Their concerns about learning, agency, and equity are real and pressing. Leaders in this movement contend that radically restructuring our discipline around GenAI is both premature and misaligned with core values. Professors Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes (2024) argue:
“Radically revising and restructuring our discipline around GenAI is premature at this time, given the uncertainty and instability of the future of these technologies. Instead of centering a technology that is misaligned with so many of our disciplinary values, we can choose to opt out of active use of GenAI technologies until these products are better aligned with our values…” (para. 2).
Learning and cognition. First, critics worry about AI’s effects on learning. Cognitive and literacy researchers note that when students offload composing to machines, they risk losing the intellectual struggle that writing demands—discovering what one wants to say, listening to inner speech, and grappling with ambiguity (Moxley, 2025). Neuroscientific and cognitive studies suggest that reliance on AI can dampen the brain’s engagement with composing and reduce the stylistic diversity of student writing (Wu, Yang, Li, Huang, & Sun, 2023; Kosmyna et al., 2025). Related psychological research underscores why such risks matter: people naturally avoid effortful thinking, which is often experienced as unpleasant (David, Vassena, & Bijleveld, 2024). If writing already feels cognitively taxing, GenAI offers students an appealing shortcut.
Denial. These anxieties about learning are often paired with claims that generative AI lacks real legitimacy. Critics emphasize that large language models (LLMs) are statistical systems designed to predict the next word, not entities capable of comprehension. Bender, Gebru, McMillan-Major, and Shmitchell (2021) famously described these systems as “stochastic parrots,” mimicking patterns from training data without true understanding. Some extend this denial to argue that GenAI is unlikely to improve. In her 2025 CCCC Chair’s Address, Sano-Franchini (2025) highlighted hallucinations, fabricated citations, and reduced linguistic diversity as endemic failures, concluding: “Some believe that there will be a time when ChatGPT will get these things right. I don’t believe it” (p. 4).
Anger. Other resistance rhetoric is marked by frustration at the ways AI undermines disciplinary values. Faculty worry that plagiarism detection tools are unreliable, prone to false positives, and likely to turn teachers into “police” rather than mentors (MLA–CCCC Task Force, 2024). Some propose a return to handwritten, in-class essays as a safeguard, yet this suggestion seems misaligned with the workplace writing practices students will need in 2025 and beyond. Eaton (2025) documents the wide variation in institutional AI policies, which often leaves instructors without clear guidance. This policy vacuum fuels anger: faculty feel abandoned, asked to manage an unstable technology with few supports while still protecting academic integrity.
Depression. Finally, many educators and administrators express despair about the future of their profession. Faculty fear that GenAI will atrophy critical thinking skills, stunt intellectual development, and reduce young people’s capacity to participate as engaged members of society (Sano-Franchini, McIntyre, & Fernandes, 2024). Administrators echo this concern. In a national survey, higher education executives reported feeling “ill-prepared” to manage AI’s impacts on teaching and learning, describing the technology as a disruptive force that could destabilize entire academic programs (Watson & Rainie, 2025). These institutional voices reinforce a sense of loss: the worry that the core mission of education itself may be diminished.
Equity, labor, and environment. Critics also foreground systemic risks. Sano-Franchini (2025) argued that large-scale AI projects are “inherently aligned with white supremacist and eugenic ideologies” (p. 6), building on Gebru and Torres’s work on extractive data practices and the erasure of global English varieties. Scholars warn that training and deploying LLMs consume vast amounts of energy, generating carbon emissions exceeding those of some nation-states (Bender et al., 2021). Others emphasize that automation threatens to displace skilled knowledge workers, while profits flow disproportionately to a handful of corporations (Tomlinson, Jaffe, Wang, Counts, & Suri, 2025).
Yet workplace research complicates this picture. Surveys show that workers are not waiting for institutional approval. McKinsey’s Superagency in the Workplace study found that employees are three times more likely than their executives estimate to be using GenAI for a third or more of their daily tasks (Mayer, Yee, Chui, & Roberts, 2025). Similarly, Microsoft and LinkedIn’s Work Trend Index report (2024) documents that GenAI adoption has reached near-universal levels among knowledge workers. These findings suggest that resistance alone may not prepare students for the professional realities they will face.
These critiques deserve careful attention. Change is difficult, and disciplinary resistance is grounded in legitimate anxieties about values, learning, equity, labor, and sustainability. Yet given the rapid pace of adoption—already near universal among students (Digital Education Council, 2024; Eaton, 2023, 2025; Freeman, 2025) and knowledge workers (Microsoft & LinkedIn, 2024)—and the trillions of dollars being invested in datacenter expansion and AI infrastructure, resistance is neither realistic nor sustainable. The central challenge for writing studies is not how to refuse AI, but how to guide students to engage it critically—preserving their agency, integrity, and intellectual growth.
References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

David, L., Vassena, E., & Bijleveld, E. (2024). The unpleasantness of thinking: A meta-analytic review of the association between mental effort and negative affect. Psychological Bulletin. Advance online publication. https://www.apa.org/pubs/journals/releases/bul-bul0000443.pdf

Digital Education Council. (2024). What students want: Key results from DEC Global AI Student Survey 2024. https://www.digitaleducationcouncil.com/post/what-students-want-key-results-from-dec-global-ai-student-survey-2024

Eaton, L. (2025). Syllabi policies for AI generative tools. Google Docs.

Eaton, S. E. (2023). Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. International Journal for Educational Integrity, 19, Article 23. https://doi.org/10.1007/s40979-023-00144-1

Freeman, J. (2025, February 25). Generative AI and the student experience: Survey findings from over 1,000 UK university students. Higher Education Policy Institute. https://www.hepi.ac.uk/wp-content/uploads/2025/02/HEPI-Kortext-Student-Generative-AI-Survey-2025.pdf

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing tasks. arXiv. https://arxiv.org/abs/2506.08872

Mayer, H., Yee, L., Chui, M., & Roberts, R. (2025). Superagency in the workplace: Empowering people to unlock AI’s full potential. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

Microsoft, & LinkedIn. (2024, May 8). 2024 Work Trend Index annual report: AI at work is here. Now comes the hard part. Microsoft. https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part

Moxley, J. M. (2025, July 7). AI challenges higher education to put more emphasis on writing, not less. In Have chatbots killed the student essay? Times Higher Education. https://www.timeshighereducation.com/depth/have-chatbots-killed-student-essay

Sano-Franchini, J. (2025, April 10). Timely, (un)disciplinary, and solutions-oriented: Remembering and enacting abundance in these times when we just have to keep going [Conference address transcript]. Conference on College Composition and Communication. https://docs.google.com/document/d/1d-LaO7oMoWFBcXgjoyylD0FRqrB1jQZMq9NttPfZOKY/edit?usp=sharing

Sano-Franchini, J., McIntyre, M., & Fernandes, M. (2024). Refusing GenAI in writing studies: A quickstart guide. Refusing Generative AI in Writing Studies. https://refusinggenai.wordpress.com

Tomlinson, K., Jaffe, S., Wang, W., Counts, S., & Suri, S. (2025). Working with AI: Measuring the occupational implications of generative AI. arXiv. https://arxiv.org/abs/2507.07935

Watson, C. E., & Rainie, L. (2025, January). Leading through disruption: Higher education executives assess AI’s impacts on teaching and learning. American Association of Colleges and Universities & Elon University’s Imagining the Digital Future Center. https://www.aacu.org/research/leading-through-disruption

Wu, Z., Yang, Y., Li, Z., Huang, Y., & Sun, M. (2023). Analyzing the impact of AI tools on student study habits and academic performance. arXiv. https://arxiv.org/abs/2412.02166