As a college or university student, you are very likely to have programs, departments, and professors use AI detection software to analyze your exam conduct and answers, as well as the essays and other academic work you submit for credit. When professors and programs prohibit students from using AI tools on exams and in academic work, they generally mean it and will use AI detection software to enforce the prohibition. Students whom AI detection software flags as AI users and cheaters can face disciplinary charges, school suspension, and program expulsion. However, AI detection software and its careless use can introduce significant bias against certain students and student populations.
Read below how our attorneys can help you fight that bias and preserve your higher education against AI detection software disciplinary charges. Retain the Lento Law Firm's premier Student Defense Team for your best disciplinary outcome to AI detection software charges. Call 888.535.3686 or use our contact form now for the highly qualified attorney defense you need to fight bias.
Student AI Use in Higher Education
You are likely familiar with the AI revolution going on in higher education. AI use among college and university students is not just a wave. It's a tsunami. One campus chronicle reports survey results showing that about 90% of students in higher education programs use AI tools to help with studies. Another campus chronicle reports other survey results showing that more than half of students use AI tools every week, and a quarter of students use AI tools every day. AI tools saturate college and university student studies. Students are using AI tools for just about everything relating to studies, from spelling and grammar checks and information searches, to transcription, summarizing, paraphrasing, and beyond, to voice, image, and text alteration, improvement, and generation. Popular AI apps for students in higher education include:
- Gemini (Bard), ALEKS, Doubtnut, and Cymath for AI-driven math and science tutoring and step-by-step problem solving;
- DALL-E, Animoto AI, Beautiful AI, and Canva Magic Write for AI image generation and AI-designed presentation materials;
- ChatGPT, Essaybot, HyperWrite, Ink AI, Jasper, Jenni AI, DeepL Translator, and Anyword AI for essay text generation and improvement;
- Grammarly, Language Tool, and Hemingway Editor for grammar and style improvement;
- Chegg, CourseHero, Edgenuity AI, and Apex Learning AI for adaptive online coursework;
- Brainly AI, Socratic, and Edmodo AI for AI crowdsourced homework assistance;
- Animoto, Elai.io, and DeepBrain AI for AI-generated video;
- CiteThisForMe, Mendeley, and EasyBib for AI citation generation;
- ClickUp AI, Mem AI, Sonix AI, and Glasp AI for AI note taking;
- Consensus AI for AI extraction of research conclusions;
- Fireflies.ai for AI summarizing of class discussions;
- QuillBot for AI paraphrasing and summarizing of text;
- Kollegio AI for AI-generated college application content.
AI Cheating in Higher Education
AI isn't just aiding college and university students by enhancing their studies. AI is also doing college and university work for students, detracting from their studies. Professors may reasonably expect and outright require students to take and revise their own notes, attend class to make their own summaries of classroom discussions, research in libraries and academic databases rather than the open internet, write, edit, and improve their own essays, solve their own problems while displaying their own step-by-step work, and take their own exams without unauthorized assistance. Yet you can see from the above list of AI apps that students commonly use that the temptation to use AI may continue past the point where professors, programs, departments, and schools outright prohibit it. Let our attorneys help you defend against your school's disciplinary charges alleging unauthorized AI use. We know how colleges and universities err and inject discriminatory biases when flagging students and student work for unauthorized AI use.
Institutional AI Use in Higher Education
Professors, programs, departments, and technology offices are also using AI in higher education, not just here and there but everywhere. Some individual professors resist learning and deploying AI tools, relying on traditional forms of instruction and work management. However, other professors are early adopters, and more departments and institutions are training and supporting professors in AI use, including AI detection software, even requiring its use or running the detection software on behalf of reluctant professors. Other college and university personnel have adopted AI software to assist with nearly every imaginable higher education function. Higher education programs use the following AI monitoring, grading, proctoring, plagiarism, admissions, enrollment, and AI detection programs, among many others:
- Gaggle, GoGuardian, Hapara Highlights, and Bark for Schools to monitor online activity for endangering behavior;
- ClassDojo AI, Presence AI, Smowl, and LiveSchool AI for AI-powered monitoring of student classroom behavior and online engagement;
- GPTZero and Copyleaks AI Detector for detecting AI-generated content and verifying originality;
- E-HallPass AI and Lightspeed Systems AI for AI tracking of student movements;
- Proctorio, Proctortrack AI, ProctorU AI, Respondus Monitor, Examity AI, Honorlock, and ExamSoft ExamMonitor for AI online exam proctoring;
- PowerSchool AI for AI-driven student performance analysis;
- Securly AI for AI monitoring of the safety of school devices;
- AdmissionPros, EDlumina Admissions, Element 451, Fullthrottle.ai, Salesforce Education Cloud, Embark, and SurveyMonkey for AI student admissions and enrollment management;
- SafeAssign AI, Originality.ai, and Sapling AI Detector for AI content identification and plagiarism detection.
AI Detection Software Errors
AI detection software makes mistakes. AI is, after all, a product of human invention. AI systems project an illusion of certitude, objectivity, and perfection. But AI systems are anything but perfect. AI systems can have deep flaws, hidden by their machine workings and excused because of their convenience. Higher education professors and disciplinary officials may know well that their AI systems aren't particularly reliable but may use them anyway for any number of reasons having to do with cost and time constraints, convenience, laziness, priorities, or other excuses and pressures. AI detection software can commit the following errors, among many others:
- AI detection software can flag original student work as if produced by AI;
- AI detection software may indicate machine proofreading, editing, and improvement of student work when no machine use occurred;
- AI detection software may indicate student collaboration when no collaboration occurred;
- AI detection software may flag exam attendance as involving an impostor substitute when none was involved;
- AI proctoring software may flag student eye movements as suspicious when they are innocent and ordinary to the particular student;
- AI proctoring software may indicate computer or other device misuse during the exam when no misuse occurred;
- AI proctoring software may indicate incorrect start and finish times or dates for an online exam taken on a proper date and finished in the allowed time;
- AI safety and security software may flag student items and movements as endangering when perfectly safe;
- AI online monitoring software may flag student communications as bullying when they are instead joking or otherwise innocent.
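The scale of these errors follows from simple base-rate arithmetic: even a detector with a low false positive rate, run against every submission on a campus, wrongly flags a large absolute number of honest students, and a higher error rate for one student group compounds the harm. A minimal sketch of that arithmetic, using hypothetical rates and enrollment figures not drawn from any vendor's published accuracy claims:

```python
# Illustrative base-rate arithmetic only; all rates and counts are hypothetical.

def expected_false_flags(honest_submissions: int, false_positive_rate: float) -> float:
    """Expected number of honest submissions wrongly flagged as AI-generated."""
    return honest_submissions * false_positive_rate

# Suppose a campus runs 20,000 honest essays through a detector with a
# hypothetical 2% false positive rate: roughly 400 honest essays get flagged.
campus_wide = expected_false_flags(20_000, 0.02)
print(f"Campus-wide wrongful flags: ~{campus_wide:.0f}")

# If the false positive rate runs three times higher for one group of 2,000
# students (hypothetical), that group absorbs wrongful flags at triple the rate.
group_only = expected_false_flags(2_000, 0.06)
print(f"Wrongful flags within the group: ~{group_only:.0f}")
```

The point of the arithmetic is that a flag is a statistical guess, not proof: at realistic campus volumes, even a seemingly accurate detector generates hundreds of false accusations, concentrated on whichever students the detector misreads most often.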
Bias in AI Detection Software
AI detection software and other AI systems don't just make random mistakes. They can also introduce patterned bias and prejudice, unlawfully discriminating against individual students and student groups with protected characteristics. Humans exhibit biases. Humans designed AI detection software and systems. It is thus logical and natural to conclude that those AI detection systems will preserve, harbor, and exhibit the biases their designers held. AI detection software bias and prejudice can arise in any of the following ways, among several others:
- misconstruing peculiar student physical appearance as impaired, threatening, endangering, or otherwise suspicious;
- misconstruing peculiar physical conditions, such as replacement joints or artificial limbs, as endangering or otherwise suspicious;
- misconstruing student disabilities, such as limps, hunching, altered speech or speech patterns, and frozen eye movements or other unusual demeanor, as inattentive, distracting, disruptive, or otherwise suspicious;
- misconstruing peculiar student demeanor, such as eye avoidance or repetitive motion related to autism, or exaggerated emotion related to Tourette syndrome, as endangering, disruptive, or otherwise suspicious;
- misconstruing student disability supports, such as wheelchairs, walkers, canes, and breathing apparatus, as endangering, inappropriate, or otherwise suspicious;
- misconstruing peculiar student behavior, such as frequent standing, sitting, and shifting, rocking back and forth, peculiar head positions, and loud or low voicing, as disruptive, disengaged, furtive, or otherwise suspicious;
- misconstruing peculiar student dress, such as head coverings, face coverings, and robes, as inappropriate, offensive, endangering, or otherwise suspicious;
- misconstruing peculiar student items, such as jewelry, icons, and religious swords or knives, as inappropriate, endangering, or otherwise suspicious;
- misconstruing peculiar student communication forms, such as profanity, Ebonics, or other vernacular forms, as abusive, offensive, threatening, intimidating, inappropriate, or bullying.
Bias in AI Detection Use
The problem is not just that AI detection software can introduce bias into disciplinary charges and sanctions. AI detection software can also foster and facilitate biases that disciplinary officials already hold. Bias can, in other words, arise out of the improper use of unbiased AI software and systems. AI detection software accelerates and enhances human activity. It is thus natural and logical to conclude that AI detection software will accelerate and enhance pre-existing human biases, while introducing the AI software's own built-in biases.
AI detection software enhances pre-existing institutional biases in two main ways. First, AI detection software vastly increases the overall number of flagged students and flagged student behaviors. Second, AI detection software flags peculiar rather than ordinary behaviors, when peculiar customs, attributes, and behaviors are the identifying mark of individual minority students and discrete minority student groups. AI detection software can thus enhance the pre-existing bias of college and university officials in the following ways, among other ways.
AI Online Monitoring System Bias
AI online monitoring systems alert school officials to a continual stream of potential misconduct instances. Disciplinary officials, who have no time to monitor online communications continuously themselves, would have entirely missed that stream of flagged communications without the AI detection software. The AI detection software also flags non-customary communications peculiar to minority students and minority student groups. That group-specific flagging means that more minority students face disciplinary investigation, charges, and discipline over their online communications. When disciplinary officials harbor their own biases against those minority students and groups, the AI monitoring software produces the minority student fodder to feed those biases, producing an ugly bias cycle.
AI Proctoring Software Bias
AI proctoring software alerts proctors and, in turn, school disciplinary officials to many more potentially suspicious exam activities. Schools deploy AI proctoring tools because a human proctor cannot continuously observe the whole exam room or the full spectrum of online exams. AI proctoring tools thus cause disciplinary officials to review the actions of many more students who exhibit anomalous behavior during exams, when it is often precisely their non-majority habits and mannerisms that make minority students and groups appear anomalous. Once again, the AI proctoring software produces minority students as fodder to feed the pre-existing biases of school disciplinary officials, aggravating the discrimination.
AI Classroom Monitoring Software Bias
AI classroom monitoring and engagement software alerts professors and, in turn, school disciplinary officials to many more instances of potentially endangering, disruptive, or otherwise inappropriate classroom activities. Again, that's why schools use AI classroom monitoring software, because of the professor's natural inability to continuously monitor the whole classroom. With many more flagged behaviors, especially of minority students and groups, school disciplinary officials naturally target, charge, and discipline greater numbers of minority students. If those school officials already harbor biases, the AI classroom monitoring software feeds those biases.
AI Composition Detection Software Bias
AI composition detection software can also feed the pre-existing biases of school officials. Minority students can exhibit different composition styles when using vernacular forms or peculiar cultural colloquialisms, or when academically underprepared to conform their compositions to majority norms. They may accordingly show greater divergence between their natural composition style and the style they learn and adopt in their higher education studies. AI composition detection software may thus incorrectly flag that divergence as if it involved misuse of AI content creation, grammar, and composition tools, when to the contrary the minority student was simply exhibiting a natural distinction between customary behavior and newly learned behavior. AI composition detection software flags greater numbers of minority students, whom disciplinary officials then charge and discipline in greater numbers, aggravated at times by their own biases.
Individuals and Groups Affected by AI Bias
The above discussion should already suggest to you the protected minority groups that AI detection software can inappropriately flag, producing fodder feeding the disciplinary systems and the biases of school officials. Those groups can include the following, among others:
- racial, ethnic, and national origin groups exhibiting peculiar cultural norms involving dress, demeanor, and language;
- religious groups exhibiting peculiar language, dress, and behavior, exhibiting peculiar symbols, and carrying peculiar items;
- groups with minority sexual orientation or gender identities, exhibiting peculiar dress, demeanor, language, and behavior;
- groups of students with disabilities, exhibiting peculiar conditions, behaviors, language, and demeanor, carrying or using peculiar supports, and needing peculiar accommodations and services;
- military member and veteran groups, exhibiting peculiar dress, language, and demeanor, and having peculiar attendance, interpersonal address, personal posture, and movement;
- groups of students with different marital status, children, and family structure, exhibiting peculiar dress, language, demeanor, and behavior, and having peculiar attendance patterns, responses, and schedules;
- groups of students with different educational histories and profiles, such as working students and adult and senior learners, exhibiting different study patterns, habits, practices, and schedules, and peculiar dress, language, behavior, and demeanor;
- groups of students with different employment status and experience, exhibiting peculiar language, demeanor, responses, study schedules, and behaviors.
Defending Biased AI Detection Software Charges
Our attorneys know how to take on, defend, and defeat disciplinary charges involving AI detection software bias. Your AI detection software charges do not mean that your college or university has already determined that you committed a conduct code violation. Disciplinary charges instead constitute the school's allegation that you broke AI use rules and prohibitions. School disciplinary officials may have based the allegation against you on their own guesses and speculation, or on the conjecture of complaining witnesses who were themselves mistaken, had no factual foundation for their assertions, or were acting out of their own bias, retaliatory motives, or conflicts of interest. Beyond exposing unlawful and unfair bias, our attorneys may be able to defend and defeat your AI detection software charges on any of the following grounds, among others:
- you engaged in no impermissible use of AI tools, making the charges false and erroneous, as our forensic computer consultants can prove from their analysis of the relevant electronic files and records;
- you were not among the other students whom the school has properly accused of AI misuse based on AI detection software, and the school has misidentified you as involved in group AI use charges;
- you used AI tools but only according to your professor's permission, department policies, or the instruction and advice of teaching assistants, advisors, or other responsible school officials;
- the school's AI detection software flagged your innocent conduct because of peculiarities associated with your legally protected individual or group characteristics;
- the school's pursuit of AI detection software charges against you would constitute unlawful discrimination in violation of Title VI of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, or other federal and state anti-discrimination laws;
- your use of AI tools, although a technical violation of your professor's instructions or department policies, was inadvertent, unintentional, accidental, and de minimis;
- you stopped, disclosed, and corrected your mistaken use of AI tools as soon as learning of the AI prohibition, and you engaged in no attempt to conceal your mistaken use of AI;
- your use of AI tools resulted from a one-time, extraordinary, exigent event involving your severe illness or injury, medication reaction, or family tragedy, since resolved, excusing your aberrant behavior in this one instance;
- you have a clean discipline record and strong academic record, making this one instance of a potential AI violation an anomaly that the school should excuse in the context of your authentic and reliable apology and assurances;
- you have already completed remedial education and training regarding permissible and impermissible AI use, and made other amends through school or community service;
- punitive sanctions would serve no purpose and would instead frustrate the school's educational mission while inflicting undue educational and career harm.
Unauthorized AI Use Disciplinary Codes
Colleges and universities maintain student honor codes, academic integrity policies, and codes of conduct regulating student behavior, including the use of AI tools. See, for example, the Ohio State University's Code of Student Conduct expressly prohibiting “[u]nauthorized use of generative artificial intelligence systems or similar technologies to complete academic activities.” Your college or university likely has a similar student conduct code prohibiting academic misconduct and dishonesty, whether it mentions AI misuse specifically or not. Violation of your school's student code of conduct may result in a range of disciplinary sanctions from probation and loss of credit or privileges all the way up to school suspension and expulsion. Beware the potentially severe impact of disciplinary charges.
Unauthorized AI Use Disciplinary Procedures
Our attorneys should have substantial procedural opportunities to invoke the above defenses and assert other defenses on your behalf, for your best outcome to biased AI detection software charges. If you attend a public college or university, your school must provide you with constitutional due process before leveling disciplinary charges that could result in suspension or expulsion; private institutions must generally honor the protective procedures their own published policies promise. See, for example, the above Ohio State University Code of Student Conduct offering elaborate protective procedures. Our attorneys can invoke your school's procedures to attempt early voluntary resolution, present evidence at the formal hearing, and appeal adverse decisions. We may also be able to obtain court review and reversal or negotiate alternative special relief through your school's general counsel.
Premier Defense of AI Detection Software Charges
If you face biased disciplinary charges resulting from your school's use of AI detection software, you can do no better than to promptly retain the Lento Law Firm's premier Student Defense Team to help you defend and defeat those charges. Our highly qualified attorneys have helped hundreds of students nationwide in successfully defending AI misuse and other disciplinary charges. Call 888.535.3686 or use our contact form now for our skilled and experienced representation.