Happy 3rd Birthday, ChatGPT: Turning a Wicked Problem into a Needed Disruption in Education

On November 30, 2022, OpenAI released a research preview of ChatGPT. In three years, it has gone from curiosity to infrastructure, and in education it now looks a lot like what Rittel and Webber called a wicked problem—a complex, shifting challenge with no clear definition, no final solution, and deeply entangled stakeholders. (Sympoetic)

Schools can’t simply ban it, can’t fully control it, and can’t responsibly ignore it. That is precisely why this disruption is needed: it is forcing the system to confront long-standing questions about assessment, authorship, workload, and what it really means for students to “do their own work” in an AI-rich world.



Rejection vs. Adoption: A Three-Year Timeline

Year 1 (2022–2023): Cheating Panic and Bans

The first year was defined by rejection.

  • Within weeks of launch, major districts including New York City blocked ChatGPT on school networks over fears of plagiarism and “negative impacts on student learning.”(The Guardian)
  • Other large systems (Los Angeles, Seattle, and more) followed with restrictions or bans, and institutions rushed to adopt AI-detection tools, treating ChatGPT primarily as a threat to academic integrity rather than a learning tool.(Yellow Scene Magazine)

At the same time, early policy and ethics work was already warning that this was bigger than cheating. UNESCO’s 2023 Guidance for Generative AI in Education and Research framed generative AI as a powerful but risky accelerant, calling for rights-based, human-centered integration, national policies, and AI literacy—not just prohibition.(UNESCO)

Year 2 (2023–2024): Reversal and the Arms Race

By mid-2023, it was obvious bans were leaky and inequitable.

  • New York City reversed its ban in May 2023, shifting from “block it” to “figure out how to use it well,” and positioning itself to help shape AI policy for teaching and learning.(Forbes)
  • Surveys in 2024 found that 86% of students were already using AI tools in their studies, with ChatGPT the most common.(Campus Technology)

The narrative moved into an arms race:

  • Universities were urged to “stress-test” all assessments as AI use surged (92% of UK students using genAI; 88% using it for assessments).(The Guardian)
  • Systematic reviews began to appear, mapping both the opportunities and challenges of ChatGPT in education and higher education.(ScienceDirect)

Assignments were redesigned to be “AI-resistant” or “AI-proof,” while policy bodies like UNESCO and others pushed for AI literacy and clear norms around acceptable AI collaboration rather than simple bans.(UNESCO)

Year 3 (2024–2025): Uneasy Coexistence and Role Reversal

By late 2025, education had entered an era of uneasy coexistence.

  • Detection tools remained unreliable and prone to false positives, especially for multilingual students, pushing institutions toward policy and pedagogy rather than technical “gotcha” solutions.(MDPI)
  • A new trend emerged: students began pushing back when institutions or lecturers over-relied on AI for teaching and feedback, arguing they were paying tuition for human instruction—not for content “they could have asked ChatGPT” to generate.(The Guardian)

Adoption is now “slow and steady,” with many systems acting like local scientists: running pilots, collecting evidence, and iterating rather than issuing sweeping, once-and-for-all rules.(Education Week)

OpenAI’s release of ChatGPT for Teachers, a free, FERPA-aligned workspace for verified U.S. K–12 educators through June 2027, marks a symbolic role reversal: the product is now explicitly aimed at teachers as primary users, supporting lesson planning, differentiation, and communication, rather than being framed solely as a student shortcut.(Axios)


What ChatGPT Has Given Education (The “Good”)

1. Learning Performance and Higher-Order Thinking

Multiple syntheses now converge on a key point: used well, ChatGPT can meaningfully improve learning.

  • A 2025 meta-analysis of 51 studies found that ChatGPT use produced a large positive effect on learning performance and a moderate positive effect on higher-order thinking, aggregating work across disciplines and contexts.(Nature)
  • A 2024 systematic review similarly concluded that ChatGPT tends to improve academic performance and higher-order thinking, while reducing mental effort, when integrated thoughtfully into classroom practice.

Abdallah et al.’s 2025 systematic review on ChatGPT in higher education, spanning learning, wellbeing, and collaboration, reported generally positive effects on learning and engagement when AI is embedded in structured learning designs rather than used ad hoc.(ResearchGate)

Taken together, early evidence suggests that, under guided conditions, ChatGPT can behave less like a cheat code and more like a learning partner.

2. 24/7 Tutoring and Accessible Explanation

Studies and policy analyses repeatedly highlight ChatGPT’s value as an always-available explainer:

  • Students use it to break down difficult concepts, generate examples, and get formative feedback in ways that feel personalized and immediate.(ScienceDirect)
  • UNESCO’s guidance explicitly notes the potential of generative AI to support personalized tutoring and remediation—especially for learners who lack access to private tutoring or small classes—if embedded in responsible, human-centered ecosystems.(UNESCO)

This is one of the core reasons students have adopted AI so quickly; it fills gaps that the current system often leaves open.

3. Time Back for Teachers

For educators, the promise is blunt: time.

  • Surveys and reviews find that teachers commonly use ChatGPT-style tools for lesson planning, drafting materials, and differentiating content across reading levels and languages.(Axios)
  • Systematic reviews of ChatGPT in education list teacher workload reduction and resource generation as primary “opportunities,” while stressing the need for professional judgment in curation and adaptation.(MDPI)

When used thoughtfully, generative AI can give back hours otherwise spent formatting handouts, drafting routine emails, or rewriting the same rubric yet again—freeing educators to focus on relational and high-value work.


What ChatGPT Has Taken or Put at Risk (The “Bad”)

The same three years have surfaced equally serious concerns.

1. Cognitive Engagement and “Cognitive Debt”

A 2025 MIT Media Lab study, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing, followed participants over several months as they wrote essays under different conditions (ChatGPT, search engine, no tool), tracking EEG data and performance.(ResearchGate)

The findings, echoed in media summaries:

  • Participants relying on ChatGPT showed lower brain engagement and consistent underperformance at neural, linguistic, and behavioral levels compared to those writing without AI or using search alone.(MIT News)
  • Over time, many became more passive, defaulting to copy-and-paste behavior and struggling to recall or re-create their own work when the tool was removed.(TIME)

The study is a preprint and has methodological limits, but it crystallizes a real worry: heavy reliance on AI for complex tasks may create “cognitive debt”—short-term convenience at the expense of long-term critical-thinking practice.

2. Self-Control, Wellbeing, and Dependency

Emerging work on psychological outcomes paints a mixed picture:

  • Some studies suggest that structured ChatGPT use can support academic performance and overall wellbeing by reducing stress and providing timely support.(ResearchGate)
  • At the same time, Besalti’s 2025 study on ChatGPT usage, self-control, and academic wellbeing found that higher ChatGPT use was significantly associated with lower self-control and lower academic wellbeing, with self-control partially mediating that relationship.

Review articles on ChatGPT in education note concerns about dependency, ethical use, and even “abuse” of the tool, especially when students treat it as a default driver of their work rather than a support.(MDPI)

In other words, AI can either scaffold students or hollow out their agency, depending on how it is framed and governed.

3. The Validity of At-Home Assessment

Generative AI has forced a reevaluation of take-home essays and exams as trustworthy measures of individual ability:

  • Reports from regulators and sector bodies urge universities to “stress-test” assessments and, in some cases, recommend oral exams, in-person defenses, or detailed AI-use declarations as guardrails.(The Guardian)
  • Concrete policy shifts are appearing: for example, a top high school in New York City recently scrapped summer take-home essays in favor of in-class handwritten writing to curb AI-assisted cheating.(New York Post)

The traditional “write it at home and upload a PDF” model is no longer a stable proxy for learning. ChatGPT has effectively devalued certain assessment formats, pushing institutions back toward in-person, oral, and process-visible modes of evaluation.


Why This Is a Wicked Problem, Not Just a Tool Choice

All of this—gains in performance, risks to cognition and wellbeing, breakdown of legacy assessments—fits the classic pattern of a wicked problem:

  • The problem keeps redefining itself as the technology and usage evolve.
  • Stakeholders (students, faculty, administrators, vendors, policymakers) have competing frames and incentives.
  • Every intervention (ban, detection, official platform, AI literacy push) changes the system and reveals new side-effects.(Wikipedia)

UNESCO’s guidance essentially treats generative AI in education as this kind of systemic challenge, arguing that countries must simultaneously tackle policy, capacity building, infrastructure, and ethics, rather than betting on any single technical fix.(UNESCO)

Systematic reviews echo that sense of entanglement: ChatGPT is found to enhance performance, alter effort, shape wellbeing, and reconfigure academic practices all at once. There is no single axis on which its impact can be labeled “good” or “bad.”(Teacher Task Force)

That is what makes this disruption both wicked and necessary. It forces education systems to re-articulate:

  • What kinds of thinking we actually value,
  • What counts as legitimate collaboration (human–AI and human–human), and
  • How we design environments that protect cognitive development and leverage new tools.

Turning Disruption into Design Work

So what do we do with a wicked problem on the chatbot’s third birthday?

Three design moves are emerging from research and practice.

1. Move from “Answer Engine” to “Thinking Partner”

Studies and policy documents alike argue that educators should reframe tools like ChatGPT away from “truth machines” and toward thinking partners:

  • Use them to generate drafts, options, and counter-arguments—not final answers.(MDPI)
  • Teach students that the learning lives in the Prompt → Probe → Prove loop:
    • Prompt the model with a well-framed question or task,
    • Probe the output by challenging, comparing, and checking it,
    • Prove their own thinking by synthesizing or extending it into work that is meaningfully theirs.

This directly addresses the “Google mindset” (ask once, click once, move on) by making the conversation with the model the object of literacy and assessment.

2. Make Process Visible (Not Just Product)

To counter both cheating and cognitive offloading, more instructors are experimenting with assignments that require:

  • A slice of the AI transcript,
  • Student annotations explaining what they accepted, changed, or rejected and why, and
  • In-class writing, oral defenses, or project-based work tied to that process.

This aligns with recommendations from integrity and assessment bodies and is consistent with calls from systematic reviews to focus on learning processes and collaboration patterns, not just final grades.(ResearchGate)

3. Use Official Tools to Support, Not Replace, Teaching

The launch of ChatGPT for Teachers and similar institutional platforms creates an opportunity and a warning:

  • Opportunity: give educators secure, policy-aligned tools to plan, differentiate, and communicate more effectively.(Axios)
  • Warning: if institutions lean on AI to plug structural gaps (understaffing, unpaid prep time, ballooning class sizes), they risk the scenario students are already protesting—courses that feel like they were “taught by ChatGPT” with human presence as an afterthought.(The Guardian)

Treating AI as a design partner rather than a cheap substitute keeps the focus on what humans do best: relationship, judgment, care, and the messy work of mentoring real learners.


Closing the Loop: A Wicked and Needed Disruption

Three years in, ChatGPT has:

  • Exposed fragile parts of the education system—especially assessment, workload, and authorship.
  • Provided genuine benefits in learning performance, access to explanation, and teacher efficiency when used with intention.
  • Raised credible concerns about cognitive engagement, self-control, and the validity of certain assessment formats.

That’s exactly what a wicked problem looks like: no clean fixes, just better and worse ways of living with it.

The opportunity now is to treat generative AI not as a temporary headache or a magic fix, but as a forcing function:

  • to redesign assessments around thinking, not typing,
  • to build AI literacy as a baseline competency for students and staff, and
  • to keep humans in the loop in ways that protect minds while embracing new capabilities.

If education can do that slow, steady design work, then ChatGPT’s third birthday won’t just mark the rise of a disruptive tool. It will mark the beginning of a more honest, more intentional conversation about what learning is for in an AI-saturated world.

Author

  • Kori Ashton

    Kori Ashton is a digital strategist, educator, and founder of Texans for AI. She is currently a doctoral student working in Learning Design & Technology at Johns Hopkins University School of Education. Kori brings over 25 years of experience in digital marketing and instructional design. She teaches AI integration for business and education, helping professionals harness emerging tech for real-world impact.
