Man, This (AI Text) Sucks

I have toyed around with OpenAI’s various public trials, knowing full well they closely track everything entered into their prompts, but still, I couldn’t help myself. I looked at their new release, ChatGPT, and offered it this prompt:

Write an 800-word blogpost about the implications of using AI-generated text in university teaching

A lot happened. Let me post what it put out, and I’ll add some thoughts afterward:

The use of AI-generated text in university teaching has the potential to revolutionize the way that students learn and engage with course material. With the use of natural language generation (NLG) technology, educators can quickly and easily create custom-tailored course materials that are designed to meet the specific needs of their students.

One of the key advantages of using AI-generated text in university teaching is that it allows educators to create materials that are tailored to the specific needs of their students. By leveraging machine learning algorithms, NLG technology can analyze a student’s past performance and learning style, and generate materials that are designed to support their unique learning needs. This can help to ensure that students receive personalized instruction that is specifically designed to help them succeed in their coursework.

Another advantage of using AI-generated text in university teaching is that it can help to save time and resources for educators. By automating the creation of course materials, NLG technology can help educators to reduce the time and effort that they need to spend on creating and organizing their course materials. This can free up more time for educators to focus on other important tasks, such as providing personalized instruction and support to their students.

In addition to saving time and resources, using AI-generated text in university teaching can also help to improve the overall quality of course materials. By using machine learning algorithms to analyze large amounts of data, NLG technology can generate materials that are more accurate, comprehensive, and engaging than those created by humans alone. This can help to ensure that students receive high-quality instruction that is designed to support their learning and help them succeed in their coursework.

Another potential benefit of using AI-generated text in university teaching is that it can help to increase student engagement with course materials. By providing students with materials that are tailored to their individual learning needs, NLG technology can help to create a more personalized and engaging learning experience. This can help to keep students motivated and engaged, which can in turn lead to better learning outcomes.

Despite these potential benefits, there are also some potential drawbacks to using AI-generated text in university teaching. One of the main concerns is that, as with any technology, there is the potential for errors and biases to be introduced into the materials that are generated. To address this concern, it will be important for educators to carefully review and evaluate the materials that are generated by NLG technology, and to make any necessary corrections or adjustments to ensure that they are accurate and unbiased.

Another potential concern is that the use of AI-generated text in university teaching could lead to a reduction in the quality of education. Some critics argue that the use of technology to automate the creation of course materials could lead to a decrease in the level of personalization and individualized attention that students receive from their educators. To address this concern, it will be important for educators to carefully balance the use of technology with personal interaction and support, in order to ensure that students continue to receive high-quality instruction.

Despite these potential drawbacks, the use of AI-generated text in university teaching has the potential to greatly improve the way that students learn and engage with course material. By leveraging the

So, a couple of things to take away here:

  1. Despite the unnervingly conversational tone, this system is very bad at tailoring its responses to a specific length based on contextual cues. For example, my prompt at the top of this post should have been interpreted as a call to finish the post within 800 words, not to write until hitting the 800-word mark and stop without any conclusion.
  2. This model, unsurprisingly, is heavily biased in favor of itself, and pretty bad at identifying downsides of its own usage. For example, I would consider the primary drawback of AI text generation in education to be not errors and biases, or a “decrease in personalization,” but rather that it ends up shortcutting students’ ability to synthesize and think critically about the topic they’re writing about.
  3. While the prompt was about teaching, the obvious context includes students, yet this answer doesn’t even hint at students as the level of analysis, only the teacher. Surely your students not learning to write, think, and organize because AI does it for them is also a teaching issue?
  4. This model is really good at generating fairly meaningless filler language, about on par with a graduating high school senior or a first-semester college student. It also lacks many of the embellishments I see in a lot of student writing, some of which work and some of which don’t. Its absence of them is glaring: if a first-year university student turned this in, I’d immediately notice the unusual diction.
  5. There is unnecessary repetition in this answer – it finds several ways of explaining how AI text can make education materials more customized per student. I wonder how much of this is intentional in the design, and how much it is pulling from outside sources. Inexperienced writers often do this as well, restating their argument multiple times without adding new information, so I wonder if that’s a stylistic choice.
  6. I have caught people using the original GPT-3 API to generate text for assignments. You can kind of tell, though not always, with that generation of the technology. This is much closer to human text, errors and all. I am not a Luddite; I think it’s fine to build these systems. What concerns me is that, just like with Photoshop, the technology is racing ahead of our capacity to establish norms about its use. Even more than the “ethical AI” nonsense, which is just tech-boy echo-chamber navel-gazing anyway, we need to work on AI norms that govern appropriate and inappropriate uses.
  7. Lastly, these damn developers need to include a checksum tool so that people who do care about plagiarism (they clearly do not) can at least see whether it was used to cheat on an assignment.
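To make the “checksum tool” idea concrete, here is a minimal sketch of one way a provider could support it: log a normalized hash of every completion it serves, and expose a lookup so an instructor can check whether a submitted passage matches something the system generated. All names here (`fingerprint`, `GenerationLog`) are hypothetical, not any real OpenAI API:

```python
import hashlib
import re

def fingerprint(text: str) -> str:
    """Hash of the text, normalized so trivial edits (case changes,
    whitespace reflow) don't change the fingerprint."""
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class GenerationLog:
    """Hypothetical provider-side store of fingerprints of generated text."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def record(self, text: str) -> None:
        # Called by the provider each time the model emits a completion.
        self._seen.add(fingerprint(text))

    def was_generated(self, text: str) -> bool:
        # Called by an instructor checking a submitted passage.
        return fingerprint(text) in self._seen

log = GenerationLog()
log.record("The use of AI-generated text in university teaching...")
print(log.was_generated("the use of  AI-generated text in university teaching..."))
# True: same text modulo case and spacing
print(log.was_generated("An essay the student actually wrote."))
# False: never generated
```

Note the obvious limitation: exact-match hashing is defeated by any light paraphrase, which is part of why detection is a genuinely hard problem rather than a feature the developers simply forgot.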