Get on the AI Train or Get Left Behind

Published on 27 October 2025 at 09:34

But Research Your Destination - There Could Be Lasting Implications 

OPINION by Staff Editor | Somerset-Pulaski Advocate

Oxford University just announced it’s giving every student and staff member access to ChatGPT Edu, a secure education-specific version of the AI tool. They’re calling it a step toward “digital transformation,” and they’re right. This is the direction higher education is heading. And fast.

Here’s the truth:

If teaching institutions and students here in the U.S. do not get on board with this technology, they’re going to be left behind. Many universities already allow ethical use of AI, but that word, “ethical,” is still fuzzy. No one seems to agree on where the lines are (sort of like other issues in our community).

But that truth really applies everywhere — in work, school, and everyday problem-solving. AI can help shape and sharpen our ideas, but only if we know how to guide it, not let it guide us. 


AI should serve as a catalyst for critical thought, not a replacement. Asking smart questions is a skill, but what really matters is what we do with the answers. 


Ethical Crossroads

A few weeks ago, I ran into a dilemma that sums up the point about using AI wisely. I was in a scholarly setting, reading a discussion board post that cited an academic journal article that turned out to be fake: wrong DOI, nonexistent journal, made-up authors. Everything but the title was a hallucination. I found the actual article, and the post’s point was mostly valid, but now I was stuck.

Do I call it out publicly? Message the writer? Tell the moderator? In the end, I said nothing — and that didn’t feel "right."

Looking back, I don’t believe that was the ethical decision, but the person wasn’t my student, and the moderation wasn’t my responsibility.

I made the deliberate choice to stay out of it and move on to another topic.

That moment showed me just how murky AI ethics really are. Since COVID and the rise of these tools, graduate enrollment has exploded. Some students who may not be ready for that level of academic work are now using AI to fill the gaps — and that has long-term consequences.

This topic is surfacing in professional settings more and more, and the fear is that some in the emerging workforce are earning degrees in a world where research and writing can be outsourced to algorithms.


Prediction: Detection - Flagged but Not Foolproof

AI detection tools are already out there, and they’re far from perfect. For instance, when I tested a piece I had written long before AI tools were available, the system flagged it as more than 40% AI-generated. That experience made one thing clear: the ground beneath us is shifting, and trust in these systems, on all sides, is not as solid as it seems.

I think most of us would agree that AI guardrails are needed for the sake of integrity, liability and even safety. 

Right now, most policymakers are afraid to hold writers, researchers, and students accountable for questionable work, even work that can be conclusively shown to involve misuse of AI. At least that was the feedback I received as part of an AI detection study group in 2024, which used the example above as a pre-AI test sample. In fact, a study around the same time in the UK found almost 7,000 proven cases of cheating with AI. That early research indicated that at least 5 out of every 1,000 students were cheating with AI tools, a rate nearly four times higher than the previous year’s. That number has almost certainly risen since.
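
For a sense of scale, here is the simple arithmetic implied by those figures, using the rounded numbers reported above; the exact rates in the underlying study may differ.

    # Back-of-the-envelope arithmetic from the figures cited above.
    current_rate = 5 / 1000              # roughly 5 proven cases per 1,000 students
    previous_rate = current_rate / 4     # "nearly four times" lower the year before
    print(f"previous year: about {previous_rate * 1000:.1f} per 1,000 students")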

The mere fact that these studies are being done, and that the technology is being developed and refined, points to coming changes in AI rules, regulations, and policy.

 

Here’s what I predict will happen in the very near future:  

Institutions will deploy broad-scale systems that crawl published works: the dissertations, journal articles, and academic theses being submitted and published now. Those systems will search for text shaped by flawed AI output, hallucinations, or wholesale machine generation (if that is within the realm of possibility), and they will claim to identify, with high certainty, when a work has been compromised by AI misuse. Universities that use AI detection today know the tools are flawed. Eventually, though, those flaws will be largely engineered out and new, stricter policies developed.
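
To picture what that could look like, here is a purely hypothetical sketch of such a pipeline. Nothing in it reflects a real institutional system: detect_ai_likelihood is a stand-in for whatever detection model a repository might license, and the 0.8 threshold is an arbitrary assumption.

    from dataclasses import dataclass

    @dataclass
    class Finding:
        document_id: str
        ai_likelihood: float    # 0.0 to 1.0, as scored by the (hypothetical) detector

    def detect_ai_likelihood(text: str) -> float:
        # Placeholder only: a real system would call a trained detection model here.
        return 0.0

    def review_archive(archive: dict[str, str], threshold: float = 0.8) -> list[Finding]:
        # Walk an archive of {document_id: full_text} and flag works that score
        # above the threshold for possible AI misuse.
        findings = []
        for doc_id, text in archive.items():
            score = detect_ai_likelihood(text)
            if score >= threshold:
                findings.append(Finding(doc_id, score))
        return findings

Even a trivial loop like this makes the policy problem visible: everything depends on how trustworthy that one scoring function is, and on what a flag triggers afterward.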

Then what happens? Degrees could be called into question, professional reputations destroyed, careers altered.

Potential impacts

To be specific:

  • Imagine a doctoral dissertation flagged years later because it incorporated AI-generated citations or paraphrased portions (beyond the threshold) that an AI detector now deems “likely machine-assisted” or "machine-generated."

  • Imagine the headline: “University Revokes Degree After AI Detector Finds 42% Machine-Generated Writing.”

  • It won’t just be about students—it will be about faculty, researchers, institutions. 

  • Even if one deletes a paper, uploads to institutional repositories or databases may persist. “Deleted” doesn’t mean “gone.” If the tool crawls archives, internet caches, institutional dump files—it can resurrect old work in unexpected ways. Being aware of future implications for your work is imperative! 

Implications for everyone:

  • If our institutions lack the resources or policy infrastructure to handle contested AI-detection findings, our students, researchers, and publishers are vulnerable.

  • If the tools we adopt are poorly calibrated or biased, we risk penalizing honest students (faculty, researchers, professionals) while missing real issues.

  • We need to remember that detection is only one side of the coin—due process, clarity, policy, and education are all critical.

  • And we must ask: do we want to build a system of suspicion, or one of support and preparedness? With the explosion of enrollment in higher ed, building trust is as important as enforcing standards. And we need to realize that sometimes more is not better.

These are questions and concerns students, parents, and professionals need to be thinking about now, before they misuse AI to generate most of their work, especially work that is never validated by a human.


Humans vs. Large Language Models - You know it when you read it

Many of us who dove headfirst into this new technology can spot an AI-generated piece from a thousand miles away. All of us have experimented with it; anyone who says otherwise is not being completely honest.

Speaking of honesty, even seasoned AI enthusiasts are not perfect judges of what is machine-generated and what is not. My discussion board experience proved that. I genuinely believed the post I read was the writer’s own work, and it was in my field of expertise. Maybe it was authentic, apart from the fabricated source; there is no way I know of to be sure.

Still, sometimes it’s painfully obvious when something was written by AI, especially when it’s driven by a weak or confusing prompt. On the flip side, if you’re skilled at crafting prompts, you’re less likely to get caught — and that’s not necessarily a bad thing. Great prompt-writing is a form of critical thinking.

If you can guide the AI with precision, interpret what it returns, and refine it through your own analysis, that’s what I consider an ethical embrace of the technology.

What are the signs that something is AI-generated?

Large language models (LLMs) are trained on massive amounts of text drawn from vast collections of written material. They don’t memorize information; they learn patterns, structures, and relationships between words and ideas. When you type a prompt, the model doesn’t understand it in the human sense; it predicts what comes next, using probabilities based on all that data. In short, AI doesn’t think. It calculates, predicting what we want to know based on our prompts.
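
To make that concrete, here is a minimal sketch of next-token prediction. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, which is not the system Oxford is deploying; the prompt is just an example.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The students submitted their essays to the"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits          # a score for every word in the vocabulary

    probs = torch.softmax(logits[0, -1], dim=-1) # probabilities for the next word only

    # The model is not "thinking" about essays; it ranks candidate next words
    # by probabilities learned from its training text.
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([idx.item()]):>12}  {p.item():.3f}")

Every word a chatbot “writes” comes from repeating that ranking step, one token at a time.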

 

Here are a few telltale traits of AI-generated writing (a rough sketch of how a couple of these signals could be measured follows the list):

  • Stylistic Uniformity:
    AI-generated writing often sounds too even. It lacks the natural rhythm of human writing, where sentences vary in tone, pacing, and emphasis.

  • Over-Polished or Over-Generalized:
    It tends to sound polished but vague, using filler transitions (“Furthermore,” “In conclusion,” “It is important to note”) and avoiding strong, distinctive opinions or lived experience. This is where it gets hard to distinguish AI-generated text from expert academic writing. In fact, tools like Grammarly will flag almost any scholarly piece as containing a very high percentage of AI-generated content; they have not yet been trained to make that distinction.

 

  • Repetition of Themes:
    LLMs are pattern-based, so they often echo the same ideas in slightly different words — creating what feels like “circular” reasoning. 
  • Absence of Genuine Voice:
    Humans write with imperfections — a unique cadence, phrasing, or emotional nuance. AI often misses that personal fingerprint.

  • Fact Precision vs. Depth:
    AI responses can sound authoritative but often falter under close scrutiny. Humans, conversely, tend to hedge, qualify, and add context and nuance.
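
For illustration only, here is a rough sketch of how two of those signals, sentence-length uniformity and reliance on stock transitions, could be measured. It is a toy heuristic, not how Grammarly, Turnitin, or any commercial detector actually works; the filler list and the sample text are my own assumptions.

    import re
    from statistics import mean, pstdev

    FILLERS = ["furthermore", "in conclusion", "it is important to note",
               "moreover", "in sum"]

    def uniformity_signals(text: str) -> dict:
        # Split into rough sentences and measure how much their lengths vary.
        sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
        lengths = [len(s.split()) for s in sentences]
        lowered = text.lower()
        return {
            # Low spread in sentence length is one marker of prose that sounds "too even".
            "sentence_length_spread": pstdev(lengths) if len(lengths) > 1 else 0.0,
            "avg_sentence_length": mean(lengths) if lengths else 0.0,
            # Stock transition phrases per 1,000 words.
            "filler_rate": 1000 * sum(lowered.count(f) for f in FILLERS)
                           / max(len(text.split()), 1),
        }

    sample = ("Furthermore, the results are important. It is important to note "
              "that the findings matter. In conclusion, further research is needed.")
    print(uniformity_signals(sample))

Real detectors rely on trained language models rather than hand-written rules like these, which is exactly why, as noted above, they can still misfire on polished human writing.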

Conclusion (or as AI likes to say, "In sum")

I’ve always said, get on the AI train or get left behind. But that doesn’t mean jumping on without a map or without thinking about your future. For AI users, it means weighing the possible implications of misuse now. For institutions and professionals, it means building the right policies, teaching responsible use and critical thinking with AI, and making sure integrity doesn’t get lost in convenience. AI should be a tool that aids thought, not a substitute for it. Writing great prompts to spark creative ideas is part of learning, but that process must be followed by thoughtful analysis, synthesis, and work that is at least three-quarters one’s own. Or at least that is my stance.

 

Oxford is setting the pace by making AI broadly available. Is this new platform fully baked? Only time will tell, but I’m willing to guess it will set new trends in AI and academia globally, and I’m excited to see the results.