The advent of AI technology is already revolutionizing the world, and we're only seeing the tip of the iceberg. Everyone from Hollywood actors to factory line workers seems to have an opinion, so it should come as no surprise that AI has become a major issue of controversy on college campuses as well.
We're not going to debate the relative merits and dangers of AI. The fact is, it exists, and it's only going to grow more sophisticated and more pervasive. We're far more interested in how it has disrupted business as usual for both students and faculty. AI has raised crucial questions about what constitutes learning, what counts as cheating, and just how much monitoring is reasonable. As long as those questions remain unanswered, AI is going to be a source of conflict.
When you find yourself on the wrong side of this conflict, you can't afford to try to handle the situation yourself. The issues are simply too complex, and navigating a university's judicial system is never easy, even in the best of cases. You need a Lento Law Firm attorney in your corner. Our Student Defense Team was founded to protect student rights. We're experienced when it comes to challenging university authority, and we understand the central issues at stake when it comes to how technologies like Grammarly, Perplexity, Turnitin, and GitHub are being used on campus.
To find out how we can help in your case, contact us today at 888-535-3686 or take a few minutes now to fill out our online questionnaire.
The Root of the AI Problem
Let's start by looking at the big picture.
Schools—faculty and administrators—are dedicated gatekeepers. It's their job to ensure that anyone who graduates from their storied institutions is fully prepared for the jobs that await them. They teach students the time-honored principles of their disciplines and, through rigorous testing, ensure students have fully absorbed those principles.
The role of technology is to push boundaries, break rules, and smash precedents. Technology doesn't care how we've always done it. In the name of making our lives "better" or "more convenient," technology ignores tradition.
Schools sometimes fail to recognize just how useful a new technology can be; engineers and corporate CEOs don't always stop to consider what uses a student might come up with for their new technologies. When these two forces come into contact, conflict is inevitable.
AI in particular promises to answer virtually any question we choose to ask. Not just "Who won the World Series in 1969?" but "How exactly did the New York Mets manage to win the World Series in 1969?" Not just "What are some sources I could use to write about homelessness?" but "What would a paper on homelessness actually look like?"
To some extent, the very essence of AI technology would seem at odds with "education," at least as we have known it for the past several centuries. With AI, we don't need to know the answers. We don't need to learn the skills. Viewed from a certain perspective, AI would seem to undermine education altogether. Is it any wonder your English professor is stressed about it?
What Is Cheating, Exactly?
There are dozens of complicated issues related to AI technology on campus. One of the biggest, at least for students, is whether or not the use of certain programs constitutes cheating. So let's talk about cheating.
Cheating, or what's officially known on college campuses as "academic misconduct," falls primarily into two categories.
- Cheating: The word "cheating" actually refers specifically to the act of using an unauthorized resource to complete your work. That unauthorized resource could be your textbook during a closed-book exam. It could be an advanced copy of the test you obtained through nefarious means. It could be another person. If information comes from something other than your brain and that something hasn't been approved, it's cheating.
- Plagiarism: The attempt to pass another person's work or ideas off as your own. Though often associated with text, it is possible to plagiarize art, music, and even computer code.
Simple enough, right? Only these definitions don't necessarily solve the problem of AI.
The trouble is, one person's "tool" is another person's "unauthorized resource." Once upon a time, slide rules were the new technology, and using them was considered cheating. In an age of calculators, though, does anyone actually need to know how to use a slide rule, or is that simply outdated technology? Most of us don't know how car engines work, and if Google has its way, one day we may not know how to drive cars either.
At what point do we stop requiring high schoolers to take driver's ed?
In the context of AI, is using Grammarly to prevent spelling errors in a freshman comp paper going to handicap you? Or is it the next step forward in human evolution?
Different professors answer that question differently. Worse still, the answers can sometimes be so contradictory that it's impossible to know what to do. One instructor tells you to post your computer science projects on GitHub because they can attract prospective employers. Another tells you never to post those projects because other students might copy your code instead of doing their own work.
What Can Happen?
Making the wrong choice when confronted with these complicated questions can sometimes carry an enormous cost.
Any time you decide to cheat, with AI or otherwise, you have to assume you might be caught. Even in the age of AI, instructors have methods of detection. They can compare your work across the semester to see whether you've made any significant, unexplained improvements. They can ask you to explain your work orally to see if you really know what your work says you know.
Increasingly, schools are also turning to their own AI programs as a means of limiting cheating. Programs like Turnitin claim to identify plagiarism, for instance. Often, though, these programs come with their own set of problems. They aren't foolproof, and when instructors treat them as though they are, innocent students can wind up accused of academic misconduct.
If you're found responsible for an offense, you could face anything from a warning to outright dismissal. Even a warning could have long-term effects on your academic and professional career. If a cheating accusation makes it into your permanent record, you could lose scholarships, graduate school opportunities, and even job offers.
How Do You Protect Yourself?
There are ways to minimize the chance of an allegation.
- First, keep in mind the purpose behind each piece of technology you use. Perplexity, for example, was designed as a next-generation search engine, a tool for finding sources. These days, though, it can also write a paper using those sources if you ask it to. Don't ask it to.
- Check any work an AI does for you. AI still makes plenty of mistakes, and you don't want one of them to cost you. As a bonus, checking the AI's work ensures you'll actually know the subject if you're asked to talk about it.
- Don't be afraid to ask an instructor whether a given use of AI is acceptable. School and syllabus policies don't always keep up with the evolution of technology, and they can never cover every possibility. Plus, your instructor will take your question as a sign of your honesty. That's never a bad thing.
Of course, sometimes avoiding cheating doesn't prevent an allegation. Misunderstandings happen. Honest mistakes happen. False allegations, including from so-called smart programs, are happening more and more. When they do, make sure you have a Lento Law Firm attorney on your side. We can make sure your side of the story gets told. We can demand that your school afford you every due process right to which you're entitled. We can ensure you get the best possible resolution to your case.
If you've been accused of cheating or you're dealing with some other technology issue on campus, don't wait. Contact the Lento Law Firm at 888-535-3686 or use our online form to tell us about your case.