(Image credit: Author using Claude (Anthropic) - see full prompt below)
“If you’ve lost a leg, you do not need Grammarly.”
The title of this blog is deliberately provocative and deliberately obvious. Nobody would argue that a hearing aid constitutes a breach of academic integrity, would they? Nobody would suggest a student using a wheelchair to access campus is gaining an unfair advantage over a student who might walk onto campus, would they? And yet, in 2026, we are actively investigating students for using text-to-speech software, spelling correction tools, and AI-assisted writing aids that are, in many cases, specifically recommended to them through their Disabled Students’ Allowance (DSA). We are, in effect, policing the very accommodations we prescribed. Has the idea of a level playing field really disappeared into an AI-corrosive quagmire?
I am not here to argue that academic integrity does not matter; it does, profoundly. I am here to suggest that how we protect it matters just as much. Right now, the tools we use to detect academic misconduct, human judgement included, may be disproportionately harming the very students who most need our support. That is not integrity. That is inequity dressed up as enforcement.
Figure 1. The relationship between Trust & Fairness, Integrity of Knowledge, and Social Responsibility in academic integrity; prevention, policy and penalty sit at the intersection of all three (Oldham, 2025).
Academic integrity has always rested on three interconnected values: trust, fairness, and a commitment to honest scholarship. The Venn diagram above (Fig. 1) captures this well: prevention, not just penalty, sits at the heart of that framework, alongside policies that serve all students equitably. When an institution’s detection system treats a student using a DSA-funded screen reader with the same suspicion it applies to a student who purchased an essay, it has abandoned fairness entirely.
AI detection tools, whether part of Turnitin or independent applications, were developed without considering the unique needs of disabled or neurodivergent writers. Their algorithms are trained on patterns of “typical” writing: consistent sentence variation, natural vocabulary diversity, a certain rhythm of expression. Students whose writing deviates from those norms due to dyslexia, autism, ADHD, processing disorders, or the legitimate use of assistive technology could be disproportionately flagged as suspicious, not because their work is dishonest, but because their writing is different (Shipley, 2026). Recent reporting from the University of York documents exactly this: neurodivergent students who used academic language carefully, the very practice we in academia encourage, found their work flagged by AI detection systems as “too polished” or “too uniform.” As the York Students’ Union Academic Officer commented, it is deeply upsetting that a student’s genuine attempt to use academic-sounding language is working against them (Shipley, 2026).
The core problem: AI detection tools rely on markers such as “burstiness”, the variation in sentence length and structure, and vocabulary predictability. Students who are neurodivergent, or who use assistive writing tools as part of their DSA support, frequently present writing patterns that deviate from these “typical” norms, generating false positives. The harm falls most heavily on those who already face the greatest barriers (Eaton, 2022; Eaton, 2025).
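To make that mechanism concrete, here is a minimal, illustrative sketch of one burstiness proxy: the coefficient of variation of sentence lengths. Real detectors use far more sophisticated statistical models, but the intuition is the same, and the function and example texts below are invented purely for illustration:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, in words.

    Low values mean uniform sentence lengths, the pattern some
    detectors treat as "machine-like", however human the author.
    """
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Evenly shaped prose, as an assistive tool or a careful academic
# writer might produce it, scores low ("uniform").
uniform = ("The study used a survey. The sample had forty students. "
           "The results showed a clear effect. The effect was large.")

# Varied prose scores higher ("bursty").
varied = ("We surveyed forty students. Why? Because earlier work, for "
          "all its breadth, had never asked them directly. The answers "
          "surprised us.")

print(f"uniform prose: {burstiness(uniform):.2f}")  # ~0.16
print(f"varied prose:  {burstiness(varied):.2f}")   # ~0.90
```

Note what this toy measure cannot see: the provenance of the ideas. A student who writes evenly shaped sentences, whether by disposition, training, or assistive software, scores exactly like machine output.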
This is not a theoretical concern. A growing body of empirical research confirms that AI detection tools produce significantly higher false positive rates for neurodivergent students, non-native English speakers, and students who rely on language assistance tools, including tools that are specifically funded and recommended by institutions to support disabled students. If technology cannot get this right, why do humans believe they can?
What does the Office of the Independent Adjudicator (OIA, the ombuds service for higher education students in England and Wales) say? “There is no need for a person to establish a medically diagnosed cause for their impairment. What it is important to consider is the effect of the impairment, not the cause” (OIA, n.d.).
Kofinas, Tsay and Pike (2025) found, in a study of human markers, that false positives occurred when markers incorrectly identified original, unaltered student submissions as AI-influenced, “implying that students who are not cheating have been penalised on suspicion alone.” Where that suspicion is triggered by a detection tool’s flag rather than by pedagogical judgement, the risk of injustice deepens further.
A detailed critical review of AI detection found that “AI detectors disproportionately target non-native English writers” and that “neurodiverse students are more likely to be falsely flagged for AI-generated writing”, with documented biases against a range of linguistic patterns and dialects (Hirsch, 2024). Another analysis by Guan and Han (2025) found that only 30.8% of human-written essays were correctly identified as human-authored, while 68.8% were misidentified as having been written by AI. By contrast, when fed AI-generated content, ChatGPT correctly identified over 80% of it as its own output (Bhattacharjee and Liu, 2024). These are not peripheral cases. They are systematic failures.
Research on the use of AI by students with disabilities underscores the complexity here. Many students in this group have legitimate, institution-sanctioned reasons to use AI-assisted writing tools, including those funded through DSA, and their use of these tools is often indistinguishable from unsanctioned use when viewed through the lens of text similarity alone (Zhao, Cox and Chen, 2025). The key concern reported by these students themselves was not that they were trying to cheat, but that they were afraid of being wrongly accused of using tools they had been actively encouraged to use (Zhao, Cox and Chen, 2025). Open University students surveyed about their use of AI, a mixed group with and without access to software recommended via DSA, reported precisely the same concern. And we know that many students can access AI software regardless, whether Copilot, made available to all students, or Read&Write, bought by some students and provided to others via DSA (Oldham, 2025a; Oldham, 2025b).
Survey respondents reported that generative AI can function as effective assistive technology for students with disabilities, but that the most helpful tools are often not institutionally approved. This creates fear of academic misconduct allegations, discouraging use and potentially worsening inequities in higher education, and underscores the need for targeted AI literacy training (Freeman, 2024; Zhao, Cox and Chen, 2025; Zhao, Cox and Cai, 2024).
Sarah Eaton’s work is worth repeating here: “There can be no integrity without equity” (Eaton, 2022). Academic integrity frameworks that apply identical surveillance mechanisms to all students, regardless of their starting position, their disability, or their institutionally sanctioned support, are not upholding integrity. They are reinforcing the very inequalities that widening participation agendas set out to dismantle.
Pagaling, Eaton and McDermott’s (2022) report, Academic Integrity: Considerations for Accessibility, Equity, and Inclusion, identified a significant gap in the field: the intersection of disability and academic integrity had been entirely neglected. Students with disabilities, the report found, may face disadvantages in academic integrity reporting, communication, and process, disadvantages that are compounded rather than remedied when detection tools are applied without consideration of accessibility.
Figure 2. The Educate, Enable, Expect framework for managing spiralling misconduct and AI referrals (Oldham, 2025b). The answer to rising case numbers is not more surveillance; it is better education, delivered earlier.
The Educate, Enable, Expect model shown above offers a response to spiralling misconduct cases that starts from a fundamentally different premise: that most students want to do the right thing, and that our job as educators is to make sure they know how.
For students with disabilities, this model is not just preferable; it is essential. Educating students about what academic integrity means in practice, including how their DSA tools interact with assessment expectations, removes the ambiguity that leads to accidental breaches. Enabling them through formative opportunities to practise and receive feedback before the high-stakes moment means they arrive at final submission with confidence rather than anxiety. And expecting them to take ownership once they have been genuinely taught and genuinely supported is fair in a way that surveillance without education never can be.
Oldham (2025b) has argued that this represents a “blue ocean” approach to academic integrity, moving away from the contested, punitive “red ocean” of detection and penalty towards an uncontested space where integrity is normalised and student-owned. For disabled students navigating already complex accommodation landscapes, this reframing is not just pedagogically sound; it is morally necessary.
This is not a call to lower standards. Disabled students can achieve high standards of original academic work, and most are doing exactly that. This is a call to ensure that the systems we use to uphold those standards do not punish students for the ways their minds and bodies work.
Specifically, what is needed:
If we want integrity, real integrity, the kind built on trust and fairness and social responsibility, we must build it for everyone. It is time we designed our systems accordingly.
Bhattacharjee, A. and Liu, H. (2024). Fighting fire with fire: can ChatGPT detect AI-generated text? ACM SIGKDD Explorations Newsletter. https://doi.org/10.1145/3655103.3655106
Eaton, S.E. (2022). New priorities for academic integrity: equity, diversity, inclusion, decolonization and Indigenization. International Journal for Educational Integrity, 18(10). https://doi.org/10.1007/s40979-022-00105-0
Eaton, S.E. (2025). Neurodiversity and academic integrity: Toward epistemic plurality in a postplagiarism era. Teaching in Higher Education. https://doi.org/10.1080/13562517.2025.2583456
Freeman, J. (2024). Provide or punish? Students’ views on generative AI in higher education. Higher Education Policy Institute (HEPI). https://www.hepi.ac.uk/wp-content/uploads/2024/01/HEPI-Policy-Note-51.pdf
Guan, Q. and Han, Y. (2025). From AI to authorship: Exploring the use of LLM detection tools for calling on “originality” of students in academic environments. Innovations in Education and Teaching International, 62(5), 1514–1528. https://doi.org/10.1080/14703297.2025.2511062
Hirsch, K. (2024). AI detectors: an ethical minefield. Center for Innovative Teaching and Learning, Northern Illinois University. https://citl.news.niu.edu/2024/12/12/ai-detectors-an-ethical-minefield/
Kofinas, A.K., Tsay, C.H.-H. and Pike, D. (2025). The impact of generative AI on academic integrity of authentic assessments within a higher education context. British Journal of Educational Technology, 56, 2522–2549. https://doi.org/10.1111/bjet.13585
Office of the Independent Adjudicator (OIA) (n.d.). Good Practice Framework: Supporting Disabled Students, what does the law say? https://www.oiahe.org.uk/resources-and-publications/good-practice-framework/supporting-disabled-students/what-does-the-law-say/
Oldham, C. (2025a). Reframing Turnitin: From plagiarism detector to formative tool for academic writing and integrity. European Journal of Open Education and E-learning Studies, 10(3). https://doi.org/10.46827/ejoe.v10i3.6210
Oldham, C. (2025b). Blue ocean integrity: High expectations within structured learning opportunities. European Journal of Open Education and E-learning Studies, 10(3). https://doi.org/10.46827/ejoe.v10i3.6208
Pagaling, R., Eaton, S.E. and McDermott, B. (2022). Academic Integrity: Considerations for Accessibility, Equity, and Inclusion. University of Calgary. https://files.eric.ed.gov/fulltext/ED618700.pdf
Shipley, A. (2026). Neurodivergent students are being falsely accused of using AI. Nouse (University of York). https://nouse.co.uk/articles/2026/03/16/neurodivergent-students-are-being-falsely-accused-of-using-ai
UK Government (n.d.). Disabled Students’ Allowance (DSA). https://www.gov.uk/disabled-students-allowance-dsa
Zhao, X., Cox, A. and Cai, L. (2024). ChatGPT and the digitisation of writing. Humanities and Social Sciences Communications, 11, 482. https://doi.org/10.1057/s41599-024-02904-x
Zhao, X., Cox, A. and Chen, X. (2025). The use of generative AI by students with disabilities in higher education. The Internet and Higher Education, 66, 101014. https://doi.org/10.1016/j.iheduc.2025.101014
Image prompt: Image generated by Claude (Anthropic) based on the full content of this blog. Two students sit at the same desk taking the same exam. On the left, a wheelchair user with a visible hearing aid works on a paper exam and is met with quiet acceptance, a green tick, the word “accommodation”. On the right, a student with no visible disability works on a laptop running assistive software (the highlighted line and small speaker icon indicate read-aloud or screen-reader use) and is met with an amber question mark, the word “suspicion” and a “FLAGGED?” sticky note clipped to the corner of their screen. Same exam, same effort, same focused expression on each face, only the means of access differs, and only the response to that means differs. The strapline beneath asks readers to confront the central provocation of the blog: a level playing field is not an unfair advantage.
Chelle Oldham is University Academic Integrity Co-Lead at The Open University, UK.
Thank you for being a member of ICAI. Not a member of ICAI yet? Check out the benefits of membership and consider joining us by visiting our membership page. Be part of something great!