AI Concerns for the Neurodivergent

Artificial Intelligence, or AI, is developing faster than it can be regulated. Generative AI can create essays and other written work and has become a controversial topic in higher education.

Colleges and universities generally prohibit cheating and plagiarism in their honor codes. Part of an education is doing the work, including the writing, as a way to learn.

Generative AI programs, such as ChatGPT, pose a threat to traditional academic standards and expectations. One issue is that it isn't easy to determine what is and is not AI-generated content. Detection software has proven unreliable, and human readers are also inconsistent at identifying AI-created writing.

Individuals who are neurodivergent are especially likely to be wrongly accused of using AI in their work. These accusations aren't harmless. Students can face disciplinary action, even though multiple studies and AI companies recommend against using AI detection tools as a basis for discipline.

If you've been wrongly accused of using AI in coursework, you could face steep disciplinary action, including suspension and the possible loss of scholarships. A lack of understanding about AI, what it can do, and how quickly the technology is changing can make defending yourself against accusations challenging.

Neurodivergent students are already more likely to drop out of college before completing their degrees. The challenges with AI may only worsen the completion rate for students with autism and similar disabilities.

The Education Law Team at the Lento Law Firm works with students who have disabilities as well as students facing disciplinary action. We're uniquely positioned to help undergraduate and graduate students. Call us at 888-535-3686 or fill out an online form.

What is Generative AI?

In 2023, a Purdue University professor was accused of being an AI bot. Her response? “It's not an AI. I'm just autistic.”

The professor, Rua Williams, is well positioned to talk about false AI accusations. A professor of user experience and design, Williams researches the intersection of science and technology studies with critical disability studies.

She's unusual among college faculty and administrators, many of whom lack the knowledge and experience needed to understand AI and how it generates text. By contrast, a Texas A&M University-Commerce professor accused his entire class of using AI on a final assignment. Administrators later clarified that no student would fail or be barred from graduation.

For Williams, the accusations are a funny story and help her explain the problems with AI detection software. She points out that the way AI detection software analyzes writing often penalizes the way neurodivergent individuals write and speak.

For students, as the Texas A&M Commerce story shows, being accused of using AI isn't just a funny story. They face potential disciplinary action as well as a loss of reputation and trust with professors.

In 2023, a Central Methodist University student faced significant academic penalties after an AI detection tool wrongly flagged her work as AI-generated. The student, who has been diagnosed with autism, did manage to clear her name and avoid disciplinary action.

Inaccurate Technology

Colleges and universities understandably want to dissuade students from using ChatGPT and similar services to write their papers. Purdue was one of several colleges and universities that released guidance about AI policies in course syllabi. These initial guidelines often encouraged the use of AI detection software (Purdue has since updated its guidelines, including a warning against the use of AI detection software).

It didn't take long, however, for colleges to discover that AI detection software is unreliable. One AI tool flagged the U.S. Constitution as potentially AI-generated. OpenAI, which operates ChatGPT, discontinued its own AI detection tool because of its low accuracy.

False positives occur when AI tools flag human-written content as AI-created. One study found a false positive rate of around one percent, while a Washington Post test found a rate of around 50 percent.

While one percent may not sound like much, consider Purdue's in-state rival, Indiana University. IU has an undergraduate population of 90,000 students. Assuming each student writes at least one paper a semester, roughly 900 students could be wrongly flagged for using AI on papers or other assignments.
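The arithmetic above can be sketched in a few lines of Python. The function name is ours, and the enrollment figure and false-positive rates come from the sources cited above; this is illustrative arithmetic only, not a claim about any particular detection tool.

```python
def expected_false_flags(num_papers: int, false_positive_rate: float) -> int:
    """Expected number of human-written papers wrongly flagged as AI."""
    return round(num_papers * false_positive_rate)

papers_per_semester = 90_000  # one paper per IU undergraduate per semester

print(expected_false_flags(papers_per_semester, 0.01))  # 1% rate -> 900
print(expected_false_flags(papers_per_semester, 0.50))  # 50% rate -> 45000
```

At the Washington Post's measured rate, the same student body would see tens of thousands of wrongly flagged papers per semester, which is why the one-percent figure understates the stakes.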

Neurodivergent Students and False Positives

Anecdotal stories have long suggested that students who are neurodivergent are more likely to be accused of using AI to write papers. A Stanford University study confirmed that students who are neurodivergent, or for whom English isn't their first language, are more likely than their classmates to have their writing flagged as AI-generated.

Students who are neurodivergent are more likely to be flagged as using AI tools due to linguistic or behavioral differences related to their disability. The lack of clear policies can also negatively impact neurodivergent students, who often struggle with social cues and do better with concrete, cut-and-dried guidance.

According to a University of Pennsylvania study, it's easier to fool AI detection tools with variations in spelling, symbol usage, and spacing. A member of the New York State Bar AI Task Force pointed out that she was able to fool AI detectors 80 to 90 percent of the time simply by including “cheeky” in a prompt.

To avoid accusations of AI-created work, some students are turning to humanizers, tech tools that edit writing to make it sound more human. In other words, to avoid being accused of using one technology, students are using another technology.

Disciplinary Action

Even with no shortage of sources and studies pointing out the inaccuracy of AI detection, colleges and universities continue to subject students to disciplinary action. The consequences can be steep.

A University of North Georgia student was accused of cheating using AI. The student had used Grammarly, which UNG listed as a resource for students, to edit her paper. She received a zero on the assignment and ended up going through the school's disciplinary and appeals process for the next six months.

The student would eventually be put on academic probation. She also lost her scholarship because the zero on the assignment tanked her GPA.

This is just one example of how AI detection tools and accusations of using AI can have serious repercussions for students. It also highlights a particular problem for neurodivergent students: unclear or confusing policies can undermine them, since they often miss implicit instructions.

The confusion over AI tools calls into question their usefulness, especially in situations that allege student misconduct. The University of Pennsylvania study even recommended that colleges and universities not consider results from AI detection tools for disciplinary action.

AI Policies at Universities

One of the challenges with AI is that, because the technology keeps changing, school policies are either outdated or regularly updated. This can make it difficult for students to know what is and is not allowed.

For neurodivergent students who struggle with change or require clear, explicit guidance, constant revisions to AI policies can put a burden on their education. As demonstrated by the University of North Georgia story, especially with big universities, different departments or offices may have different guidelines or recommendations for navigating AI issues.

Shifting guidelines can be confusing, but static requirements can also be detrimental to student education. In 2022, it made sense for universities to encourage the use of AI detection tools. By 2024, policies that suggested the use of AI detection tools, especially when it might lead to disciplinary action, were out of date.

Compared to K-12 schools, colleges and universities generally have more leeway in setting their policies.

In Fall 2024, one report estimated that approximately 20 percent of universities have banned AI. Critics of these policies say outright bans aren't effective and fail to acknowledge that AI is likely here to stay.

Some schools have refrained from campus-wide policies. The University of Washington leaves decisions on AI use to professors. The university recommends that students consult each course's syllabus to learn what is and isn't allowed.

Rutgers University has adopted a more nuanced approach. The school has an AI@Rutgers steering committee and working groups to help guide and shape the university's policies and guidance. During the 2024-2025 school year, the university's policy was that AI was only allowed with explicit permission from a course instructor and could not violate any of the school's academic integrity policies.

This wide array of policies, even within a university, can make it challenging for students to determine what is and is not allowed. For students who are neurodivergent, the lack of clear guidance can be detrimental to their education.

Return of the Blue Books

One way some professors are preventing students from using AI is by returning to handwritten, in-class assignments. Professors are opting for blue books to ensure students are building needed skills rather than relying on AI.

Blue books also make it easier for professors to grade. Without having to worry about how to check if a student used AI tools, professors can focus on grading the essays.

Returning to handwritten assignments, however, poses unique issues to neurodivergent individuals. The blue book solution highlights one of the challenges for neurodivergent students. It solves one problem (professors incorrectly thinking they're using AI) but creates new problems (they may struggle with the alternative format).

Some of the challenges these handwritten assignments pose to individuals with autism or similar disabilities include:

  • Challenges with fine motor skills
  • Issues with sensory processing
  • Difficulties with executive function

In handwriting, autism often manifests as inconsistent letter size, irregular spacing, fluctuating pressure, and overly particular letter formation.

The wide range of symptoms and manifestations of neurodivergence means that some students may not struggle with handwriting, while others may find it creates a significant block in their learning process. Some students may appreciate how a handwritten essay helps them focus. Others may find it detrimental to their ability to do their best work.

The Education Law Team at the Lento Law Firm can help students work with professors and school staff to find accommodations.

Alternatives to AI Detection Tools

When facing disciplinary action, one question students may want to ask is how the school concluded they used AI. When even a program's creators advise against relying on it, no college or university should base a student's future on its results.

MIT Sloan recommends not using AI detection tools at all. The business school instead released its own guidelines on how to navigate AI in the classroom and on campus. Neurodivergent students would benefit from MIT's recommendations, which encourage clear guidelines and inclusive teaching.

The University of California at Santa Barbara Writing Center has a policy with specific sections dedicated to guidance for students and teachers. The center highlights ways students should – and should not – use AI as part of coursework.

One suggestion from UCSB is to always identify when and how AI was used in work, which may be viewed as similar to citing a source. By acknowledging the use of AI, a student is no longer claiming that section as their own creation, and it may reduce the likelihood of being charged with plagiarism.

AI Accommodations and Alternatives

These types of guides – available for free online, meaning any college or university can adapt them – can be beneficial for neurodivergent students. Students can use them in meetings about accommodations and modifications. They can also use them in disciplinary hearings as a way to highlight where a school's AI policy is out of date or disadvantages students.

For college and university students with disabilities, one of the challenges is the minimal requirements for what schools must do to support students with disabilities. Many students with disabilities report inadequate support once they enter college.

A secondary problem is that the majority of students with disabilities fail to inform their school of their disability. By one account, only one-third of students with disabilities use their college or university's disability services.

Failing to inform a college or university of one's disability can be especially challenging with topics like AI. With constantly changing guidelines and technology, students who are neurodivergent may struggle with this never-ending string of changes. Their writing is more likely to be flagged, but if they haven't informed their school of their disability, it can be challenging to explain why they never requested accommodations yet believe their disability affected their work.

Even if students do report their disability and receive modifications, the newness of AI means there's little support available. Professors or disability services offices may not factor in how AI may affect disability accommodations.

The result is that students who are neurodivergent face a steep road when accused of misusing AI in the classroom or on assignments. School staff may say their disability is irrelevant to accusations of misconduct.

AI and Academic Misconduct

College and graduate students are expected to do their own work. Most academic honor codes and codes of conduct ban ghostwriting, plagiarism, and hiring someone else to complete one's work.

AI-generated work may be considered plagiarism or technology-assisted ghostwriting. Rather than paying someone to complete an assignment, a student is using a program to do the work.

For any student, disproving the use of AI tools on an assignment can be challenging. Neurodivergent students face the additional hurdle of having to explain the parallels between AI content and the writing style of some neurodivergent individuals.

Ideally, students should keep track of their work – the old “show your work” adage from math class. A record of their work can serve as proof that they completed it on their own.

With hindsight, it can be easy to say what a student should have done. Saving a previous draft would be beneficial, but especially when it takes a student longer to complete an assignment, such measures may not always occur to them.

The Education Law Team at the Lento Law Firm supports our clients in building a case. We help our clients find evidence to push back against the allegations against them.

Accommodations During Disciplinary Action

Having a disability doesn't exempt a student from the disciplinary process. Students can, however, request modifications and accommodations during the disciplinary process.

It's a student's responsibility to request modifications. Colleges and universities have an obligation to provide reasonable accommodations that neither advantage nor disadvantage a student with disabilities compared to their nondisabled peers.

That an allegation of misconduct involves technology doesn't change the disciplinary process or a student's rights during that process. It may require a student to ensure those involved in the process understand the technology and its limitations.

Unlike in other areas of academia, professors and administrators are learning about AI at the same time as their students. Even well-meaning staff may struggle to keep up with the technology and how to use it (and not use it) in the classroom.

This creates a double-barreled issue for students who are neurodivergent. They need to make sure professors and other staff understand not only the current technology but also how that technology may affect students with disabilities.

The Education Law Team at the Lento Law Firm works with our clients to navigate these issues. From advocating for support to ensuring disciplinary committees have the information they need to make informed decisions, we assist our clients throughout the disciplinary process.

The Disciplinary Process

Each college or university sets its policy for investigating and deciding on accusations of student misconduct.

In general, colleges and universities tend to use similar steps:

  • Accusation of misconduct
  • Investigation
  • Hearing
  • Decision

Students shouldn't assume that a school investigation will uncover proof of their innocence. Students have the most to lose during an investigation and should be proactive in supplying evidence and other information that supports them.

Hearings, while often conducted in a similar manner to a courtroom trial, aren't the same. For one, college and graduate students have fewer legal protections in place. Colleges and universities may or may not allow attorneys to be present during a hearing.

Even if an attorney cannot be present, they can still play a crucial role in preparing a student for the hearing. They can help explain the process, build a strong case, and ensure a student has someone advocating for them throughout the process.

Accusations of academic misconduct, such as misusing AI, can carry serious consequences. Disciplinary action can include:

  • Probation
  • Suspension
  • Expulsion
  • Loss of campus housing
  • Loss of a scholarship

It can also carry intangible losses, such as damage to a student's reputation and relationships. It may limit their ability to pursue certain internships or other opportunities.

Protect Your Education

Any new technology can disrupt society and education. Once upon a time, calculators were viewed as the destruction of math. Some have even drawn parallels between the use of calculators in school and the use of AI in education. If properly utilized, both can have a place in learning that benefits students.

For students who are neurodivergent, however, the promise of a future benefit matters little if they're accused of violating school policies or codes of conduct. They're more likely to face accusations of misusing AI, and too often, the outcome of any disciplinary action comes down to how well school staff and administrators understand a new technology and its shortcomings. Disciplinary boards may be unaware that the writing of neurodivergent individuals can be mistaken for AI-generated text.

Disciplinary action can have serious, long-lasting consequences for students. Accusations of academic misconduct can damage a student's reputation and affect their future opportunities. Neurodivergent students, already less likely to complete college, may drop out rather than face false accusations.

If you're facing allegations of academic dishonesty or plagiarism related to AI, the Education Law Team at the Lento Law Firm can help. We assist undergraduate and graduate students nationwide with issues related to disabilities and discipline.

Our team focuses on education-centered solutions. We stay on top of changes to the law and technology. If you're facing disciplinary action related to AI, call us at 888-535-3686 or fill out an online form.

Contact Us Today!

If you or your student are facing disciplinary action or another negative academic sanction and are feeling uncertain and anxious about what the future may hold, contact the Lento Law Firm today and let us help secure your academic career.

This website was created only for general information purposes. It is not intended to be construed as legal advice for any situation. Only a direct consultation with a licensed Pennsylvania, New Jersey, and New York attorney can provide you with formal legal counsel based on the unique details surrounding your situation. The pages on this website may contain links and contact information for third party organizations - the Lento Law Firm does not necessarily endorse these organizations nor the materials contained on their website. In Pennsylvania, Attorney Joseph D. Lento represents clients throughout Pennsylvania's 67 counties, including, but not limited to Philadelphia, Allegheny, Berks, Bucks, Carbon, Chester, Dauphin, Delaware, Lancaster, Lehigh, Monroe, Montgomery, Northampton, Schuylkill, and York County. In New Jersey, attorney Joseph D. Lento represents clients throughout New Jersey's 21 counties: Atlantic, Bergen, Burlington, Camden, Cape May, Cumberland, Essex, Gloucester, Hudson, Hunterdon, Mercer, Middlesex, Monmouth, Morris, Ocean, Passaic, Salem, Somerset, Sussex, Union, and Warren County, In New York, Attorney Joseph D. Lento represents clients throughout New York's 62 counties. Outside of Pennsylvania, New Jersey, and New York, unless attorney Joseph D. Lento is admitted pro hac vice if needed, his assistance may not constitute legal advice or the practice of law. The decision to hire an attorney in Philadelphia, the Pennsylvania counties, New Jersey, New York, or nationwide should not be made solely on the strength of an advertisement. We invite you to contact the Lento Law Firm directly to inquire about our specific qualifications and experience. Communicating with the Lento Law Firm by email, phone, or fax does not create an attorney-client relationship. The Lento Law Firm will serve as your official legal counsel upon a formal agreement from both parties. 
Any information sent to the Lento Law Firm before an attorney-client relationship is made is done on a non-confidential basis.
