A new methods paper by faculty in the School of Writing, Literature, and Film and the School of Communication provides guidance to technical writing instructors on how to appropriately incorporate AI into the curriculum.
By Colin Bowyer, Communications Manager - November 27, 2024
The purpose of technical writing is to convey complex information in a clear, concise, and accessible manner. A trained technical writer is traditionally concerned with structure and accuracy in order to enhance understanding. The use of generative artificial intelligence (GAI) large language models (LLMs) has increased in both professional and classroom technical writing settings, requiring instructors and administrators to decide how much, or how little, GAI students should be permitted to use. Two starkly different approaches from college administrators, prohibitive and critical, have arisen across collegiate writing classrooms, leading to varying results as well as dynamic shifts in instructors’ relationships with students.
In a new interdisciplinary collaboration, associate professors Ehren Helmut Pflugfelder of the School of Writing, Literature, and Film and Joshua Reeves of the School of Communication have proposed a novel, nuanced approach to addressing students’ use of GAI in technical writing. The “CARE” framework (critical, authorial, rhetorical, and educational) emphasizes ethical and contextual AI use while avoiding a one-size-fits-all prohibition on AI in the technical writing classroom. Their article, titled “Surveillance Work in (and Teaching) Technical Writing with AI,” appeared in a special issue of the Journal of Technical Writing and Communication.
In the last few years, the use of GAI LLM chatbots has proliferated, impacting instructors, administrators, and students at all levels of education. Responding to this rapid, widespread adoption, many colleges and universities sought out tools such as Turnitin to address what was seen as a widespread plagiarism concern. Unfortunately, GAI detection tools have not proven especially reliable.
For instructors and administrators, a common approach to academic dishonesty is to increase the surveillance of student conduct, including visual student supervision, standardized testing, audiovisual classroom monitoring, and online test proctoring.
“Surveillance and incorporating plagiarism detection may be the best way to identify where student writers are utilizing generative AI,” said Reeves, “but the bigger and more interesting question is, ‘what does that mean for writing instructors? How does that change the dynamic between instructors and students?’”
Instead of becoming a “surveillance agent” and taking a prohibitive approach, Pflugfelder and Reeves encourage instructors to integrate GAI LLMs in the technical writing classroom, while also encouraging critical reflection on the roles that automated text generation and prompt engineering may play in their future careers.
“These new technologies can either be banned outright,” explained Pflugfelder, “which requires instructors to police any trace of their use, or these new platforms can be embraced as part of a pragmatic strategy to turn students into ethical and responsible users of the technology.”
The “CARE” framework (critical, authorial, rhetorical, and educational) provides general principles on which instructors can reflect in order to determine a suitable path. CARE emphasizes the promise of GAI while cautioning instructors against letting the technology redefine their relationships with students.
“GAI has the potential to significantly change technical writing instruction and work,” said Pflugfelder. “That does not necessarily mean, however, that we have to let it change our work in such a way that plagiarism anxiety and surveillance work comprise even more of our labor. We want students to be successful in every writing situation and to think critically about using GAI to their advantage.”