White, State University of New York at New Paltz

Assessment in the Age of AI

Grading and Assessment: The Chronicle of Higher Education

The PDF above contains a special chapter on assessment in the age of AI. Although The Chronicle published it in June 2024, the material remains relevant.


The Wicked Problem of AI and Assessment

(Sept. 2025) This article argues that generative AI poses not a solvable technical challenge but a “wicked problem” for assessment in higher education. Drawing on interviews with 20 faculty members at an Australian university, the authors show that AI complicates assessment design in ways that resist definitive solutions: problems are framed differently across contexts, solutions carry trade-offs, and outcomes cannot be reliably tested. Teachers report feeling burdened by uncertainty, institutional pressure, and the risk of unintended consequences for students and universities. 

The authors propose shifting from solutionism to adaptive strategies. They outline three “permissions” institutions should support: the permission to compromise (acknowledging unavoidable trade-offs), to diverge (accepting context-specific approaches), and to iterate (treating assessment as ongoing adaptation). Rather than chasing the “right” answer, universities should build cultures that tolerate uncertainty, value professional judgment, and encourage evolving, locally relevant assessment practices in the age of AI. 

Pull Quote: Universities that continue to chase the elusive ‘right answer’ to AI in assessment will exhaust their educators while failing their students. Those that embrace the wicked nature of this problem can build cultures that support thoughtful professional judgment rather than punish imperfect solutions. 


Talk Is Cheap: Why Structural Assessment Changes Are Needed for a Time of GenAI

(May 2025) This article critiques current university responses to generative AI in assessment, such as traffic light models, AI assessment scales, and student declarations. The authors argue that these approaches are “discursive”—they rely on communicating rules to students but lack mechanisms to ensure compliance. Such strategies, they contend, create an “enforcement illusion” by appearing to regulate AI use without structurally preventing or integrating it. 

The authors introduce a conceptual distinction between discursive and structural assessment changes. Structural changes alter the mechanics of assessment tasks themselves (for example, supervised in-class writing, process-focused evaluation, or linked assessment chains) so that validity is preserved regardless of student choices. They conclude that only structural redesign, not rule-based guidance, can protect assessment validity and institutional credibility in an era where AI use is undetectable and widespread. 


The End of Assessment as We Know It: GenAI, Inequality and the Future of Knowing 

“The End of Assessment as We Know It,” by Mike Perkins and Jasper Roe, appears in UNESCO’s volume “AI and the future of education: Disruptions, dilemmas and directions” and sets out how GenAI destabilizes long-standing assessment practices in higher education. The authors argue that rapidly advancing, multimodal GenAI makes many take-home tasks trivial to complete and makes that AI use difficult to detect, eroding assumptions about originality and about what current assessments actually measure. Retreating to tightly invigilated exams is no solution, since AI-enabled wearables and emerging brain-computer interfaces compromise the security of in-person exams and push us toward a “post-plagiarism” reality. They warn that the collapse of assessment will be uneven, with digitally advantaged systems facing different futures than digitally marginalized ones, raising equity concerns alongside questions of validity and reliability. For college educators, the piece signals a need to redefine what counts as credible evidence of learning and to design assessments that resist easy automation while still supporting authentic demonstration of knowledge and capability.


The Ends of Tests: Possibilities for Transformative Assessment and Learning with Generative AI

“The ends of tests: Possibilities for transformative assessment and learning with generative AI,” by Bill Cope, Mary Kalantzis, and Akash Kumar Saini, appears in UNESCO’s anthology “AI and the future of education: Disruptions, dilemmas and directions.” They argue that conventional tests such as multiple-choice exams and timed essays no longer serve learning well in a GenAI context, and they call for AI-integrated, formative assessment that prioritizes understanding over recall. They propose replacing superficial summative routines with dynamic, dialogic, and multimodal evaluations embedded in everyday work, using AI to personalize feedback within each learner’s “zone of proximal knowledge.”

The chapter frames GenAI not as a tool to automate old testing regimes but as a medium for epistemic justice and pedagogical renewal that can help address structural inequities. 

For college educators, the takeaway is to redesign assessments around process, iteration, and authentic knowledge production while aligning with UNESCO’s broader push for equity, human agency, and ethics in AI-mediated education.  

To read the entire UNESCO anthology (Fall 2025): AI and the Future of Education