What critical literacy strategies can we develop to empower students, faculty, and the American people to be masters of their technology and not its unthinking servants?

In response to the rise of AI in education, the Pew Research Center believes the major question confronting the humanities is “What is the future of human agency?” (Anderson and Rainie 2023).

Interestingly, 56% of the 540 “technology innovators, developers, business and policy leaders, researchers, academics and activists” surveyed by Pew believe AI will limit human agency, expression, and creativity.

Clearly, the transformative work of Sam Altman and his team at OpenAI, as well as Jensen Huang and his team at NVIDIA, challenges conventional critical literacies — especially the ability of humans to have agency, to achieve their goals when thinking/composing as opposed to downloading thoughts and language from GAI tools and representing them as their own work/thinking.

From middle school to graduate school, teachers report that students are using GAI tools to cheat on their exams, homework assignments, and papers. A 2023 survey of 1,600 students across 600 institutions conducted by Turnitin found that 46% of students had used AI for coursework (Coffey 2023). A second 2023 survey, of 1,000 undergraduate and graduate students conducted by BestColleges, found that 56% had used AI to complete coursework (Nam 2023).

Teachers find themselves in a somewhat powerless position in response to the avalanche of AI-authored works their students are submitting as their own writing. No AI detection software can capture the dialog students have had with AI to research a topic, refine their thinking, or compose a text (OpenAI 2024). That said, some teachers endeavor to police student work by having them use a tool such as Revision History (a Chrome extension), Draftback, or Txtreplay. These tools help teachers track students’ composing processes, to see whether their work exhibits the normal features of human writing: small changes, words and sentences moved around, and bits deleted, as opposed to wholesale changes and huge chunks of standard written English dropped into the draft. Other faculty resist this additional labor, pointing out that students can trick these systems by manually typing in parts of GAI-authored texts.

Beyond these academic-integrity concerns, researchers have found that GAI tools undermine self-expression and creativity when students use them to short-circuit learning processes (Biermann 2022). Such tools deprive “learners of the opportunity to develop academic writing skills and the cognitive, linguistic, and socioemotional competencies that they could gain through employing and experiencing authentic processes of academic writing” (Yeo 2023).

In response to the rise of AI, some faculty have rejected any use of GAI tools for writing or coursework, arguing that such use is unethical because it rests on the greatest intellectual property theft of all time. Here they note that companies like OpenAI vacuumed up the world’s texts — books, articles, websites, and other publicly available content, much of it copyrighted — to build the large datasets on which their models are trained. The New York Times, for instance, alleges in its lawsuit against OpenAI that the company’s bots slipped behind its paywall and copied millions of Times articles without permission.

In contrast, other faculty have openly embraced AI usage. Since 2023, Lance Eaton has been collecting faculty members’ AI-policy statements. Eaton’s corpus of 121 policy statements from faculty across disciplines and institutions demonstrates enormous disparities in faculty members’ acceptance of AI in student work: some faculty permit AI solely for invention, research, or revision; others reject AI usage altogether. STEM and business faculty tend to permit AI usage, whereas humanities faculty are more likely to reject it, considering it academically dishonest. Across disciplines, faculty are unsure how students should cite AI-authored works: some require every AI-generated sentence to be cited; others permit students to write a general note about their dialogs with chatbots; still others call for students to estimate AI’s contribution as a percentage of the work. Citation styles such as MLA and APA call for writers to cite all work drafted by AI. In practice, however, should students be expected to cite a lengthy discussion with a chatbot, carried on for days, even weeks, that helped them develop their thinking when they didn’t copy/paste any text? Should writers be expected to cite AI if they’ve asked it to revise/edit their work without changing their ideas?

Technorhetoricians assure us this is not a new story (Dobrin 2002). Writing, itself a technology, is in constant evolution. As a species, we are always looking for new ways to express ourselves, and we are quick to adopt technologies, processes, and writing spaces/media that make it easier to accomplish the tasks we want to accomplish (Baron 2009).

Yet even if this is not a new story, even if it’s just another evolution in literacy, it matters a great deal because the affordances and constraints of today’s hardware (e.g., the Blackwell GPU) and software (e.g., ChatGPT-4) are altering the human experience, remediating our ways of researching and solving problems, interpreting information, and thinking/composing/creating.

Consider, for example, how the printing press threatened the authority of the Catholic Church: it introduced counternarratives. It facilitated the spread of ideas, hermeneutics, textual methods, empirical research methods — and the overall emergence of the conversation of humankind (Bolter 2001).

In other words, GAI tools matter because they change how we think, teach, and communicate. They create new methods of composing. They will alter our sense of what is possible and challenge our traditional notions of academic integrity, copyright, and intellectual property. In summary, the emergence of AI signifies not just “a paradigm shift” in the Kuhnian sense but an “epochal shift” — a monumental transformation in the fabric of ethics, authority, intellectual property, and agency.

Submission Guidelines

At Writing Commons, we would like to serve as a resource for teachers who are trying to figure out how to teach students to use AI ethically. We welcome articles that investigate the ways generative artificial intelligence (GAI) tools impinge on academic integrity, authorship, learning, interpretation/reading, language, composing, and agency. We are open to multiple research methods and media. Possible topics to pursue:

  1. Is it possible for students to use generative artificial intelligence (GAI) tools throughout the writing process in ways that maintain academic integrity and empower students, faculty, and the American people to be “masters of their technology and not its unthinking servants” (U.S. Congress)?
  2. Is the use of GAI tools (in)appropriate in some stages of the writing process but not others: prewriting, inventing, drafting, collaborating, researching, planning, organizing, designing, rereading, revising, editing, proofreading, sharing, or publishing?

References

Anderson, J., & Rainie, L. (2023, February 24). The future of human agency. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2023/02/24/the-future-of-human-agency/
Biermann, O. C. (2022). Writers want AI collaborators to respect their personal values and writing strategies: A human-centered perspective on AI co-writing (Doctoral dissertation). University of British Columbia, Vancouver, Canada. https://doi.org/10.14288/1.0420422
Eaton, L. (2024). Syllabi policies for AI generative tools. Google Docs.
U.S. Congress. (2010). U.S. Code Title 20, Education, Subchapter I: National Foundation on the Arts and the Humanities. https://www.govinfo.gov/content/pkg/USCODE-2010-title20/html/USCODE-2010-title20-chap26-subchapI.htm

Yeo, M. A. (2023). Academic integrity in the age of artificial intelligence (AI) authoring apps. TESOL Journal. https://doi.org/10.1002/tesj.716
