Professors Try ‘Restrained AI’ Approach to Help Teach Writing – EdSurge News

When ChatGPT emerged a year and a half ago, many professors immediately worried that their students would use it as a substitute for doing their own written assignments — that they’d click a button on a chatbot instead of doing the thinking involved in responding to an essay prompt themselves.

But two English professors at Carnegie Mellon University had a different first reaction: They saw in this new technology a way to show students how to improve their writing skills.

To be clear, these professors — Suguru Ishizaki and David Kaufer — did also worry that generative AI tools could easily be abused by students. And it’s still a concern.

They had an idea, though: with the right set of guardrails, the technology could become a new kind of teaching tool, one that helps students get more of their ideas into their assignments and spend less time laboring over sentence construction.

“When everyone else was afraid that AI was going to hijack writing from students,” remembers Kaufer, “we said, ‘Well, if we can restrain AI, then AI can reduce many of the remedial tasks of writing that keep students from really [looking] to see what’s going on with their writing.’”

The professors call their approach “restrained generative AI,” and they’ve already built a prototype software tool to try it in classrooms — called myScribe — that is being piloted in 10 courses at the university this semester.

Kaufer and Ishizaki were uniquely positioned. They have been building tools together to help teach writing for decades. A previous system they built, DocuScope, uses algorithms to spot patterns in student writing and visually show those patterns to students.

A key feature of their new tool is called “Notes to Prose,” which can take loose bullet points or stray thoughts typed by a student and turn them into sentences or draft paragraphs, thanks to an interface to ChatGPT.

“A bottleneck of writing is sentence generation — getting ideas into sentences,” Ishizaki says. “That is a big task. That part is really costly in terms of cognitive load.”

In other words, especially for beginning writers, it’s difficult to both think of new ideas and keep in mind all the rules of crafting a sentence at the same time, just as it’s difficult for a beginning driver to keep track of both the road surroundings and the mechanics of driving.

“We thought, ‘Can we really lighten that load with generative AI?’” he says.

Kaufer adds that novice writers often shift too early in the writing process from jotting down fragments of ideas to carefully crafting sentences, only to delete those sentences later because the ideas may not fit into their final argument or essay.

“They start really polishing way too early,” Kaufer says. “And so what we’re trying to do is with AI, now you have a tool to rapidly prototype your language when you are prototyping the quality of your thinking.”

He says the concept is based on writing research from the 1980s that shows that experienced writers spend about 80 percent of their early writing time thinking about whole-text plans and organization and not about sentences.

Taming the Chatbot

Building their “notes to prose” feature took some doing, the professors say.

In their early experiments with ChatGPT, when they put in a few fragments and asked it to make sentences, “what we found is it starts to add a lot of new ideas into the text,” says Ishizaki. In other words, the tool tended to go even further in completing an essay by adding in other information from its vast stores of training data.

“So we just came up with a really lengthy set of prompts to make sure that there are no new ideas or new concepts,” Ishizaki adds.

The technique differs from other attempts to focus AI for education in that the only source the myScribe bot draws from is the student’s notes, rather than a wider dataset.
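The professors have not published their prompts, so the sketch below is only a hypothetical illustration of the general “restrained AI” idea they describe: a system prompt that instructs a chat model to draft prose from the student’s notes and nothing else. The function name, prompt wording, model name, and temperature setting are all assumptions, not details of myScribe.

```python
# Hypothetical sketch of a "restrained" notes-to-prose call.
# myScribe's actual prompts and architecture are not public; this only
# illustrates constraining a chat model to the student's own notes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You convert a student's rough notes into a single draft paragraph. "
    "Use only the ideas present in the notes. Do not add new facts, "
    "examples, claims, or conclusions. If a note is ambiguous, keep the "
    "ambiguity rather than inventing a specific detail."
)

def notes_to_prose(notes: list[str]) -> str:
    """Turn bullet-point notes into connected prose without new ideas."""
    joined = "\n".join(f"- {n}" for n in notes)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0.2,      # low temperature to discourage embellishment
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Notes:\n{joined}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(notes_to_prose([
        "essay argues school start times should be later",
        "teens need 8-10 hours of sleep",
        "early starts linked to lower grades",
    ]))
```

Whether instructions like these fully prevent the model from adding material is exactly the open question raised later in the piece; the point of the sketch is only that the student’s notes, not a wider dataset, are the model’s source text.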

Stacie Rohrbach, an associate professor and director of graduate studies in the School of Design at Carnegie Mellon, sees potential in tools like those her colleagues created.

“We’ve long encouraged students to always do a robust outline and say, ‘What are you trying to say in each sentence?’” she says, and she hopes that “restrained AI” approaches could help that effort.

And she says she already sees student writers misuse ChatGPT and therefore believes some restraint is needed.

“This is the first year that I saw lots of AI-generated text,” she says. “And the ideas get lost. The sentences are framed correctly, but it ends up being gibberish.”

John Warner, an author and education consultant who is writing a book about AI and writing, says he wondered whether the myScribe tool would be able to fully prevent “hallucinations” by the AI chatbot, or instances where tools insert erroneous information.

“The folks that I talk to think that that’s probably not possible,” he says. “Hallucination is a feature of how large language models work. The large language model is absent judgment. You may not be able to get away from it making something up. Because what does it know?”

Kaufer says that their tests so far have been working. In an email follow-up interview he wrote: “It’s important to note that ‘notes to prose’ operates within the confines of a paragraph unit. This means that if it were to exceed the boundaries of the notes (or ‘hallucinate’, as you put it), it would be readily apparent and easy to identify. The worry about AI hallucinating would expand if we were talking about larger discourse units.”

Ishizaki, though, acknowledged that it may not be possible to completely eliminate AI hallucinations in their tool. “But we are hoping that we can restrain or guide AI enough to minimize ‘hallucinations’ or inaccurate or unintended information so that writers can correct them during the review/revision process.”

He described their tool as a “vision” for how they hope the technology will develop, not just a one-off system. “We are setting the goal toward where writing technology should progress,” he says. “In other words, the concept of notes to prose is integral to our vision of the future of writing.”

Even as a vision, though, Warner says he has different dreams for the future of writing.

One tech writer, he says, recently noted that ChatGPT is like having 1,000 interns.

“On one hand, ‘Awesome,’” Warner says. “On the other hand, 1,000 interns are going to make a lot of mistakes. Interns early on cost you more time than they save, but the goal is over time that person [needs] less and less supervision, they learn.” But with AI, he says, “the oversight doesn’t necessarily improve the underlying product.”

In that way, he argues, AI chatbots end up being “a very powerful tool that requires enormous human oversight.”

And he argues that turning notes into text is, in fact, an essential human part of writing that should be preserved.

“A lot of these tools want to make a process efficient that has no need to be efficient,” he says. “A huge thing happens when I go from my notes to a draft. It’s not just a translation — that these are my ideas and I want them on a page. It’s more like — these are my ideas, and my ideas take shape while I’m writing.”

Kaufer is sympathetic to that argument. “The point is, AI is here to stay and it’s not going to disappear,” he says. “There’s going to be a battle over how it’s going to be used. We’re fighting for responsible uses.”
