Understanding the Role of AI in Upholding Academic Integrity

As the use of AI-based writing assistance tools becomes increasingly prevalent in educational settings, educators (particularly at the secondary and college levels) are struggling with the challenge of maintaining academic integrity while embracing these technological advancements. A recent incident involving a student who used an AI-based proofreading tool for a paper submission has sparked a debate on the fairness and ethics of penalizing students for using AI tools, especially in the absence of clear guidelines or policies.

The Situation

In a course, a student submitted a paper revised using an AI-based proofreading tool. The instructor detected signs typical of such tools, like unique watermarks or hidden text layers (I am not sure which tools actually do this, so I would love to know), and considered this a violation of academic guidelines. Consequently, the student received a failing grade and has chosen to appeal. This incident raises several important questions about the role of AI in education and the need for clear policies and guidelines.

Discussion Points

Here are some ideas I have been working through, as this is not the first time something like this has happened in an educational setting, and it will not be the last.

AI Policy and Fairness

If a course, campus, or department does not have an AI policy, is it fair to fail a student for using AI tools? Furthermore, if we as professionals have not taught students how to use these tools in accordance with the objective of the task, should we fail a student for using them? I will take it further: if we are using AI to create the lessons and tasks and then don’t allow students to leverage AI tools, is that fair?

Guidelines for AI Usage

What should be the guidelines for using AI writing assistance in academic work? How can educators define the boundaries for AI tool usage?

Clarity and Communication

How can educators clearly communicate their expectations regarding the use of AI in coursework? What role does transparency play in helping students understand these boundaries?

Course-Specific Policies

Should the policy on using AI assistance vary depending on the course type (e.g., writing intensive vs. other subjects)? How can educators balance the educational benefits of AI with the learning objectives of different courses? I know there are resources to help us navigate answers, like A.J. Juliani’s suggestions, crowdsourced college policies, and Ditch That Textbook’s “What’s Cheating?”

Identifying AI Use

What are the ethical considerations and practical aspects of detecting AI assistance in student submissions? What should be the appropriate response upon discovering such use?

As Always, Read the Research

There is another paper out today highlighting the limitations of AI-writing detectors, along with research on the issues these detectors raise for non-native speakers, and a post arguing that chasing cheating is a distraction.

As stated in one of my favorite reads of the moment, “Shift Writing into the Classroom with UDL and Blended Learning” by Catlin Tucker and Katie Novak:

Technology is not the problem; traditional teaching practices are the problem.

And this statement:

The emergence of this AI technology is another force beyond the classroom that’s shining a spotlight on the shortcomings and limitations of the traditional approach to teaching.

Handling Appeals

If we are going to fail students, have we prepared and planned for when a student faces penalties for using AI tools? What should the appeal process look like? How can fairness and educational value be ensured in the resolution process?

I am going to assume most have not thought this through to this point, as many still have not even used an LLM. (And if you don’t know what an LLM is, then perhaps failing a student is the least of your concerns right now.)

Teaching Academic Integrity

How can this scenario be used to educate students about academic integrity in the context of digital resources? What are the key lessons about honesty, originality, and the responsible use of technology?

Let’s be honest, this is a tale as old as time. We freaked out about the end of thinking with:

  • students gaining access to the internet
  • students getting 1:1 devices
  • students using Wikipedia
  • students using YouTube

AI’s Future in Education

How might the evolving capabilities of AI tools shape educational practices and policies regarding academic integrity in the future?

Again, many of the issues being framed as AI issues are not AI issues; they are human issues. They are academic issues where perhaps our practices and assessments focus on the wrong things.

Insights from the Teaching AI Toolkit

I also wanted to practice using The Teaching AI Toolkit to see if it offers valuable insights on:

  • promoting AI literacy
  • exploring opportunities and addressing risks
  • advancing academic integrity with AI
  • maintaining student and teacher agency
  • regular auditing and policy review
  • responsible use of AI tools
  • compliance with existing policies
  • involving various stakeholders in the development of guidance and policies

However, the big question remains: How do we implement these recommendations when professional development time is limited? How do we engage in this work to prevent issues like the one described above from happening?

The Role of Traditional Teaching Practices

While it’s important to address the use of AI in education, we must also consider the role of traditional teaching practices. As stated in “Shift Writing into the Classroom with UDL and Blended Learning” by Catlin Tucker and Katie Novak, the emergence of AI technology is highlighting the shortcomings and limitations of traditional teaching approaches[1].

Your Thoughts

As we all navigate the complexities of integrating AI into education, your thoughts and experiences are invaluable. How do you think we should address the issues raised in this post?

How can we ensure fairness and uphold academic integrity while leveraging the benefits of AI? Please share your thoughts in the comments below.

Additional Resources I Explored
