Not long ago, artificial intelligence (AI) was largely the stuff of science fiction. Today, it touches almost everyone's life, and students are no exception. Some of the changes AI has brought to education are positive, such as giving students access to powerful research tools. Unfortunately, not all of them are. Some have infringed on student rights and led to unfair academic and disciplinary actions.
Some students are being accused of cheating based on AI detection tools. Others are experiencing privacy violations or facing unwarranted disciplinary action because of how their schools use AI. If this happens to you or your child, contact the Lento Law Firm Education Law Team as soon as you can. Schedule a consultation by calling 888-535-3686 or by filling out our online contact form.
Academic Due Process Concerns
There's little disagreement that cheating with AI is still cheating. The difficult part is figuring out which uses of AI amount to cheating.
For example, imagine a teacher saying, “Don't use any type of AI to help write the essay,” but the school's handbook or honor code is silent on the use of AI in the classroom. If a student uses ChatGPT to brainstorm ideas or Grammarly to edit the completed paper, did the student cheat even though the AI didn't write the paper for them?
It's hard to say, which helps explain why at least one lawsuit has already been filed over a similar situation. In that case, a student alleged they were punished for using an AI tool to help write a paper even though the school's guidelines and rules on AI use were unclear.
Algorithmic Bias
Some teachers now rely on detection software to identify cheating, especially on written assignments. These tools are designed to flag student work that may be the product of plagiarism or improper use of AI.
Using AI to catch AI seems like a logical proposition. The problem is that the AI detectors are only as good as their programming, and their programming isn't very good.
A 2023 study examined seven AI detectors to see how well they could identify AI-generated content among essays written by U.S.-born writers and by writers whose native language was not English. When analyzing the essays from U.S.-born writers, the detectors were highly accurate.
However, all seven detectors incorrectly flagged 19% of the essays written by non-native English speakers as AI-generated. Additionally, 97% of those essays were improperly flagged as AI-generated by at least one of the seven detectors.
In other words, the AI detectors were heavily biased against writers for whom English is a second or third language. This could lead to minority students being falsely accused of cheating at far higher rates than their peers. Even if they survive the student disciplinary process, their academic reputations could be damaged beyond repair.
Privacy Concerns
Even when an AI learning tool is functioning properly, it could be violating a student's privacy rights. The Family Educational Rights and Privacy Act (FERPA) gives parents the right to control access to their child's education records, including personally identifiable student information. This means a teacher's or school's use of an AI grading tool or learning management system could result in a FERPA violation if the tool collects and stores student data on third-party servers without parental consent.
Talk to an AI Education Lawyer
If your school or your child's school is using AI, it's understandable to be concerned. In worst-case scenarios, cheating accusations could destroy a student's academic career, denying them a rightfully earned opportunity to attend a particular school, enter a certain profession, or receive much-deserved financial aid. Prevent this from happening by contacting the Education Law Team from the Lento Law Firm. You can use our online contact form or call 888-535-3686.