Rethinking the Use of Online Journals and Forums: The Urgent Need for Review
Are discussion forums and journal assignments, used as teaching assessments, the first major examples of generative AI's disruptive force? I am wrestling with this question as I teach my winter intersession class that, ironically, is about online behaviors and values.
You might like: Introducing AI Scenario Playquests
I plan to approach this article as another AI Scenario Playquest. This activity has helped me clarify my thinking about generative AI and how it relates to a specific learning activity in my own classroom.
Valuing Writing Assignments
Now, let's take a short tangent before I begin my AI Scenario Playquest. For over a decade, I have used blogging to clarify my thoughts on education, technology, creativity, and parenting. I take my understanding of a topic and express it through words. I form unique connections, consider new ideas, and formulate knowledge that applies to aspects of my real-world environment. I can articulate my thoughts on paper far better than I can articulate them verbally.
This act of writing represents a learning process; if we evaluated my blog articles purely as final products, perhaps focusing on writing technique, composition, factual accuracy, and so on, we would miss that process. However, from what I express in an article, you can judge my knowledge and my capacity to apply that understanding to real-world situations. Under these conditions, then, writing serves as a strategy for assessing a student's learning of the course material.
There are two reasons why I value journal assignments in my online courses. First, a prompt can help students reflect on the material. Second, writing a response can engage critical thinking and build real-world connections. Consider an assigned reading on centralized vs. decentralized systems, followed by an example such as Apple's App Store or an open-source product. A student who has grasped these concepts might then write about how they connect to the freedom (or limitations) they experience when contributing mods to their preferred game or gaming platform.
Research supports this position, so long as the learning task is designed correctly and accompanied by feedback. Studies have shown that self-explanation prompts are an effective learning strategy: students challenged to explain information learn more than students who are not prompted to explain. When done successfully, a prompt can target a specific learning objective and encourage students to make real-world connections that improve comprehension and help with knowledge transfer. Furthermore, facilitating asynchronous peer interactions through forum discussions can enhance the experience.
A Short AI Scenario Playquest
Let's engage in a short AI Scenario Playquest to explore my concerns further. So often in education, we focus on an end product, and as outlined above, that isn't bad teaching; end products can serve as formative and summative assessments and as an effective learning technique that targets self-explanation.
When teaching online, I suspect many of us continue to share prompts in response to material and evaluate the product produced by this activity. In this scenario, if a student used generative AI to create their journal response, or to provide content for them to reword, they will likely deliver on the assigned prompt. We may also determine that they have met the learning objectives because they have addressed the touchpoints of the reading, and therefore they will receive a positive grade.
I also suspect student submissions will have fewer grammatical errors; we'll see prompts addressed more fully, and potentially more unique connections, if students engineer their prompts to include instructions like "include in the response how this relates to streaming services, Roblox, etc."
However, if the response was produced by or in collaboration with generative AI, then, theoretically, the student didn't have to engage with the reading, and they may submit something that doesn't represent their understanding of the concepts explained in the material. And while some may give credit to their prompt engineering (which I think is a total fad at this time), they still didn't explore the connections; they merely proposed them.
Now, let’s expand further; in this scenario, an instructor may use generative AI to create the prompt (which I know is being heavily promoted at the moment), and when we fast forward a few months, we might see generative AI tools used to evaluate student responses to this prompt.
Therefore, could we see the following scenario play out if left unabated?
- Instructor A makes discussion prompts using generative AI
- Student B uses generative AI to answer discussion prompts
- Instructor A uses an AI tool to evaluate AI-generated responses
Did Student B engage in the reading? Did Instructor A evaluate Student B’s knowledge of the material, or did they grade based on delivering a satisfactory product?
I'm smiling as I write this because I confess to participating in this very situation on LinkedIn. I was asked to comment on an AI-generated post that scored 99% on an AI checker, and I then explored using generative AI to write my comment. Is this our future? How do we feel about this?
Future AI Alternatives
At this point, some readers might be saying to themselves, "Duh, that's why we need to change our approach to instruction," and that is exactly the article's point. I understand that, but what are we changing, and what evidence-based learning strategy are we using as we design or adopt an alternative?
I want to come full circle and return to my process of writing this article: if we allow the use of generative AI to help students produce better products (e.g., a better journal assignment), are we significantly changing how students experience learning as they engage in the process of producing that product? We don't know! There is no research exploring that question, but it's important to consider as we re-evaluate the impact of generative AI on everyday learning activities such as forum discussions and online journal assignments.
We know the benefits of facilitating social interactions to support learning; exposing students to posts shared by peers and asking them to reflect and comment can help expand existing knowledge. We also know the value of effective feedback and the benefits of self-explanation when prompts are designed effectively.
My initial response is to promote using Flip (video) to facilitate online discussions, which I've continued to do since COVID-19, and I see it as a potential alternative. However, it doesn't fully address the concerns shared in this article if students still use generative AI to outline their discussion points.
So, as I close this AI Scenario Playquest and think about how generative AI might enhance our use of online discussions and journal responses, I foresee a future chatbot that could replace the read-post-comment routine of this popular online activity.
Instead, students interact with the material and then respond to a prompt generated by a chatbot that the teacher has customized to focus the discussion on the learning objectives. The chatbot then continues to challenge the students' thinking by assigning new prompts in response to what it thinks each student understands and, if necessary, asks them to return to a specific aspect of the reading or review a summary to assist their understanding.
Finally, the generative AI tool can personalize the discussion and facilitate connections based on the unique interests and reading level of the student before providing the teacher with information that the instructor can use to evaluate the student’s learning.
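As a thought experiment, the workflow I'm imagining might be sketched as follows. This is a minimal, hypothetical illustration: the class name, the keyword-matching "assessment," and the report format are all my own placeholders, and a real implementation would call an actual generative AI API rather than the stub heuristic shown here.

```python
# Hypothetical sketch of a teacher-customized discussion chatbot.
# The "assessment" below is a naive keyword stub standing in for a
# real generative AI judgment of student understanding.

class DiscussionTutor:
    """Guides a student through prompts tied to teacher-set learning objectives."""

    def __init__(self, objectives):
        self.objectives = objectives   # set by the teacher, not the tool
        self.covered = set()           # objectives the student has addressed so far
        self.transcript = []           # record kept for the instructor

    def next_prompt(self, response):
        """Log the student's response and return a follow-up prompt."""
        # Stub: treat an objective as "covered" if the response mentions it.
        hits = [o for o in self.objectives if o.lower() in response.lower()]
        self.covered.update(hits)
        self.transcript.append({"response": response, "covered": hits})

        missing = [o for o in self.objectives if o not in self.covered]
        if missing:
            # Redirect the student back to the reading for unaddressed objectives.
            return f"Please return to the reading and address: {', '.join(missing)}"
        # All objectives touched: push toward a personal, real-world connection.
        return "Great - now connect these ideas to an example from your own experience."

    def instructor_report(self):
        """Summary the teacher can use to evaluate learning, not just the product."""
        return {
            "turns": len(self.transcript),
            "objectives_covered": sorted(self.covered),
        }
```

For example, a tutor built with `DiscussionTutor(["centralization", "open source"])` would keep redirecting a student to the reading until both objectives appear in their responses, then hand the teacher a per-student summary. The point of the design is that the instructor stays in the loop: the teacher defines the objectives and receives the evidence, while the chatbot only mediates the back-and-forth.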
In closing, I want to highlight how this idea emerged from the act of writing this article, and I hope we can continue to see writing as a viable way to facilitate self-explanation at a distance. I just need to find a way to help my students appreciate that the process is, at times, more important than the product.