White, State University of New York at New Paltz

2025 Articles & Resources

Editor’s Note: This repository curates a somewhat eclectic mix of articles focused on artificial intelligence in education, with particular attention to higher ed. As editor, I (Rachel Rigolino) follow a variety of Substack authors, independent bloggers, and education-focused publications. Because time is limited (and because I assume most faculty already read mainstream sources like The New York Times), I typically do not include those articles here.

AI Use Statement
Every article/video in this repository has been read/watched by me (Rachel Rigolino), and I review and edit each summary for accuracy and clarity. I also use a custom GPT-based assistant to help generate initial drafts of the summaries. This tool supports efficiency but does not replace my judgment or editorial oversight.


Back to FA25 Essential Reading List (with a nod of thanks to Jennifer Rutner (STL))

I have listed some articles and blog posts that provide an overview of how our students are currently using AI. The information is timely, and the reflections are sobering.

How Are Students Using AI? Annette Vee reports that student use of generative AI is widespread and growing, with most students using it for writing, research, and feedback—often to save time or improve grades. Use varies by discipline and demographic, and many students feel unprepared by their institutions to use AI responsibly. This article highlights the need for faculty to address AI use directly, clarify policies, redesign assignments for meaningful engagement, and offer guidance on productive, ethical AI practices in the classroom. (AI & How We Teach Writing (Norton); May 12, 2025)

College Students Have Already Changed Forever by Ian Bogost. Bogost argues that college students have fully integrated generative AI into their academic lives, making it as common as social media and fundamentally reshaping higher education. While many students view AI as a practical tool to save time and manage overwhelming workloads, professors struggle to respond, with some relying on handwritten exams or moral appeals, even as students increasingly want more authentic, project-based learning. The piece concludes that AI has transformed college more rapidly than the pandemic did. (The Atlantic; Aug. 17, 2025)

And, from The Chronicle:

Teaching AI Ethics This is a fantastic booklet and very useful. One need not be teaching a course on AI ethics to use the material Leon Furze has developed. He has a great chapter on academic integrity.

What Happens After A.I. Destroys College Writing? Ed’s Rec. In this thoughtful essay, Hua Hsu investigates how generative AI has disrupted the foundations of college writing, drawing on interviews with students and educators across institutions. The interviews with students are truly enlightening. And, according to Hsu, while some universities respond by enforcing handwritten exams, blue books, and oral assessments, others shift toward emphasizing writing processes, collaboration, and critical engagement with texts and peers inside the classroom. Of course, still other campuses have embraced the bot, which worries Hsu. (The New Yorker; June 30, 2025)

We Can’t Ban AI, but We Can Friction Fix It Catherine Savini (WAC Coordinator, Westfield State University) argues that faculty cannot ban or reliably detect student use of generative AI but can “friction fix” by making it harder and less appealing to rely on AI and easier to authentically engage with writing assignments. Drawing from writing pedagogy and organizational behavior, she recommends strategies like breaking assignments into stages, fostering meaningful relationships with students, using inviting and transparent assignment design, and explicitly teaching both the value of writing for thinking and the limitations and ethical challenges of generative AI. (Inside Higher Ed; July 16, 2025)

Teaching Assignments that Mitigate AI Abuse Not extensive, but this collection of teaching strategies is worth reviewing. You will likely need to sign up for a (free) account to access it. Includes recommended books for learning more about AI’s impact on students and higher education. (Teaching Newsletter; The Chronicle of Higher Education; July 2025)

AI Developments in High Summer 2025 Futurist Bryan Alexander’s August 2025 horizon scan surveys a wide array of AI developments beyond GPT-5, highlighting new features like OpenAI’s agent mode and “study and learn,” Google’s NotebookLM upgrades, Microsoft’s video and Copilot integrations, and Meta’s push toward personal superintelligence. He notes rapid global and open-source innovation—including projects in China, South Korea, and Switzerland—while identifying common trends such as tutoring modes, AI agents, and multimedia outputs, all of which raise questions about reliability, adoption, and their role in education. (AI, Academia, & the Future; Aug. 18, 2025)

5 Steps to Update Assignments to Foster Critical Thinking and Authentic Learning in an AI Age 

To sum up from the Faculty Focus article: 

  • Critically review your syllabus: Remove or update assignments that AI can easily complete, and make room for new tasks that require human judgment, reflection, and understanding.

  • Clarify AI use for each assignment

  • Model and discuss expectations

  • Require transparent disclosure of AI use

  • Handle suspected misuse through dialogue, not accusation

AI Awareness Starts with Time Marc Watkins discusses the practical challenge of integrating AI literacy and fluency into higher education, emphasizing that time constraints often go unaddressed in current frameworks. He suggests that dedicating just 15 minutes per week to AI-related activities—such as short readings, discussions, annotation exercises, or hands-on LLM use—can help faculty cultivate both critical AI literacy and responsible AI usage without overwhelming their existing curriculum. Watkins provides several practical tips and resources, advocating for an “AI aware” approach that values small, manageable steps over polarized debates or unrealistic expectations of faculty time and capacity. (Rhetorica; July 18, 2025)

And an aside/update about AI detectors:

Michael Webb, “AI Detection and assessment: an update for 2025,” Jisc blog (June 24, 2025).

Webb reviews recent evidence showing that generative AI can pass many assessment types, including some “authentic” tasks, while both automated detectors and human markers struggle to identify misuse; low false positive rates matter most in education, yet even tiny rates scale to thousands of cases in large institutions (a detector that wrongly flags just 1 percent of, say, 100,000 submissions a year would falsely accuse 1,000 students). He argues that traffic-light policies and assessment scales are helpful for student guidance but insufficient for integrity unless assessments are structurally redesigned to include AI-secure elements; in the meantime, institutions should treat detection as one input within misconduct processes, using a vetted institutional tool, expert AI-user review, and structured conversations with students rather than any single detector. Practical takeaways for faculty: avoid ad-hoc web detectors, ensure a DPIA (data protection impact assessment) for any tool, and plan for a shift toward process-based, secure, or chained assessments while still teaching responsible AI use alongside core disciplinary skills.


**If you watch/listen to one video/podcast about GenAI in Higher Education, watch this one. August 2025**

GenAI Is Normal EdTech Leon Furze argues that generative AI should be understood as part of a long-standing pattern in educational technology. Yes, it is transformative but unfolding slowly, shaped by institutional adoption and social systems rather than tech breakthroughs alone. Drawing on Narayanan and Kapoor’s framework, Furze highlights three speeds of progress (invention, innovation, and diffusion) to explain why AI’s impact in education will take decades, not months.

  • Invention: rapid breakthroughs in methods and architectures (LLMs, diffusion models, etc.).

  • Innovation: slower development of actual usable products and services built from those methods, constrained by market, regulation, infrastructure.

  • Adoption & diffusion: slowest of all—how schools, teachers, students, workflows actually integrate and adapt to the technology. Schools may take a decade or more to meaningfully embed new tech.


To Share with Students:

From Leon Furze:

How to Spot a Deepfake: Booklet. This is a useful resource for instructors from across the disciplines. 

And maybe to share with students—newly released from the American Association of Colleges and Universities (AAC&U) and Elon University

Student Guide to Artificial Intelligence: 2025

 

SUNY New Paltz Faculty Featured 

How AI Is Making Its Way into Upstate Colleges The article highlights a spectrum of faculty/institution responses, including Marist University’s new applied AI minor and initiatives like UAlbany’s cross-disciplinary AI Plus movement. While some professors advocate for nuanced, transparent use of AI in coursework, official SUNY guidelines remain advisory rather than prescriptive, reflecting ongoing ethical debates and the need for adaptable policies as AI transforms higher education. Featuring comments by Bruce Milem (Philosophy); Matt Newcomb (English); and Jason Wrench (Communications). (Times Union; July 17, 2025)

December 2025

This is a pyramid illustrating a new understanding of Bloom's Taxonomy after AI. Please follow the links for a more detailed discussion.

Ed’s Comments: To get a better view of Hui’s Bloom’s Taxonomy after AI, along with Joel Gladd’s (College of Western Idaho) thoughts about it, click here: Hui’s Bloom’s Taxonomy After AI


Three by Alberto Romero

10 Signs of AI Writing that 99% of People Miss (Dec. 3, 2025)

  • Abstraction trap: AI tends to favor abstract, conceptual language over concrete images, which makes its writing hard to visualize and often hollow beneath the surface.

  • Harmless filter: Safety training removes sharp or offensive wording, so AI prose frequently sounds sanitized and vaguely corporate.

  • Latinate bias: To appear authoritative, AI leans on longer Latinate words instead of short, direct ones. Its style is stuck in a stiff “business casual” register.

  • Sensing without sensing: Without embodiment, AI mixes sensory details in ways that are statistically plausible yet wrong to anyone who has actually felt or seen the thing described.

  • Personified callbacks: AI often stuffs in clumsy personification and callbacks, giving objects memory or emotion in a way that feels forced rather than genuinely literary.

  • Equivocation seesaw: Its sentences frequently pair a claim with an immediate hedge, producing balanced, cautious constructions.

  • The treadmill effect: At the level of whole texts, AI circles the same ideas without clear direction, generating many words while advancing the argument or narrative very little.

  • Length over substance: Because text is cheap to produce, AI often inflates responses, mistaking verbosity and repetition for thoroughness and depth.

  • The subtext vacuum: AI spells out implications, jokes, and motives explicitly, leaving almost no subtext for the reader to infer and draining the writing of psychological nuance. (Refer to Hemingway’s Iceberg theory—AI doesn’t naturally get Hemingway’s point.)

  • The unreliability of the sign: No single feature can reliably expose AI writing, and even this essay partly used AI to demonstrate how easily those supposed tells can be mimicked or hidden.

The Death of the English Language (Dec. 05, 2025)

Alberto Romero argues that because English dominates the internet and AI training data, human writers and speakers are starting to mimic AI’s flattened, homogenized style, so English risks a kind of “death by convergence” in which its historical richness, subtext, and range gradually erode. He contrasts this with Spanish, which he sees as protected by its lesser role in AI corpora and by multilingual speakers’ cognitive distance from AI-shaped English, suggesting that non-English languages may become key sites for preserving nuance, cultural specificity, and genuinely human expression.

Why Ads on ChatGPT Are More Terrifying Than You Think (Dec. 2, 2025)

Ed’s Rec. In this blog post, Alberto Romero argues that OpenAI’s move toward advertising is not a strategic luxury but a financially compelled shift that exposes the unsustainable economics of large language models, particularly the high cost of inference and dependence on rented GPU infrastructure. He contends that this transition effectively recasts OpenAI from a research lab selling access to intelligence into a media company selling attention, where investor pressure and revenue targets override earlier missions about benefiting humanity. Romero warns that once advertisers become the real clients, in-stream native ads woven into AI outputs will quietly bias answers toward commercial interests. Scary stuff. (The Algorithmic Bridge; Dec. 2, 2025)

 


We’re Not Taking the Fact-Checking Powers of AI Seriously Enough Ed’s Rec. Information Literacy Reflection / Mike Caulfield uses his investigation of a supposed Grokipedia hallucination about the Nobel Prize ceremony order, published on his newsletter of the same name, to show how his Deep Background AI fact-checking workflow revealed that Politifact, not the AI, was actually wrong. He argues that many AI mistakes are not mysterious hallucinations but patterns like conflation or dependence on low quality sources, and that treating LLM outputs as analyzable syntheses allows investigators, students, and educators to trace claims back to evidence and sometimes catch human fact-checking errors. Caulfield concludes that paid frontier models are particularly valuable for surfacing unasked questions and missing context, urging journalists and higher ed instructors to adopt structured “get it in, track it down, follow up” practices so AI becomes a tool for better verification, contextualization, and teaching about how arguments succeed or fail in contemporary information environments. (The End(s) of Argument; Dec. 6, 2025)

Delaware Professor Transforms Writing Class by Teaching Students to Use AI as the Technology Reshapes the Workforce Professor Matt Kinservik of the University of Delaware redesigned his first-year writing course to teach students how to use generative AI critically and ethically, arguing that writing instruction must evolve alongside the technology reshaping future workplaces. His approach, which requires students to research without AI, then critique and revise AI-generated essays, revealed both the tool’s limitations and the importance of human skills like fact-checking, editing, and audience awareness. (WHYY Delaware News; Dec. 1, 2025)

Our Response to AI Cannot Be Adversarial Marc Watkins argues that while AI has transformed student literacy practices, many institutional and faculty responses — including unreliable AI detection tools, invasive monitoring, and deceptive prompt-injection traps — are ineffective and even harmful to student well-being. Instead of policing students, Watkins urges educators to shift toward transparent, pedagogical approaches that teach responsible AI use and adapt assessment practices to this new literacy landscape. (Rhetorica; Dec. 1, 2025)


November 2025

Illustration showing the SIFT method of information literacy. Please read the article for specific information.
STOP. INVESTIGATE the source. FIND better coverage. TRACE claims, quotes, and media to the original context.

SIFT for AI: Introduction and Pedagogy. This is a great overview for those faculty interested in the new SUNY Information Literacy framework. Mike Caulfield argues on his blog that educators asking for “SIFT for AI” are not seeking a replacement for SIFT but a framework that, like SIFT, emphasizes concrete, teachable actions that help students use AI tools for evidence evaluation, contextualization, and disciplinary thinking. (The Ends of Argument; Nov. 21, 2025) Also refer to this library site: SIFT and AI-IL Literacy

 

 


Use AI to Turn Course Evaluations into Better Teaching Ed’s Rec. Yes, this is an interesting exercise, as I can attest. (Harvard Business School; Nov. 13, 2025).

It’s Time to Pull the Plug on ChatGPT at Cal State Martha Lincoln argues that California State University should immediately end its partnership with OpenAI due to mounting evidence that ChatGPT poses serious mental health risks. Citing recent lawsuits and tragic suicides allegedly linked to the chatbot, Lincoln warns that Cal State’s under-resourced mental health infrastructure cannot safely support widespread student use of AI tools that may exacerbate psychological vulnerabilities. (Inside Higher Ed; Nov. 25, 2025)

Good Enough and Better Than Me: Two Problematic Student Perspectives on Gen AI Leon Furze explores two troubling mindsets students express toward generative AI: that it’s “good enough” for low-stakes or disliked tasks, and that it’s “better than me,” especially among ESL learners and less confident creatives. Drawing on conversations with hundreds of Australian students, Furze argues these views stem less from the technology itself and more from systemic issues in education that devalue process, undermine intrinsic motivation, and reinforce a narrow focus on outcomes. (Leon Furze Blog; Nov. 24, 2025)

From Furze’s blog post above: “Teenagers, in my experience, tend to be brutally honest, especially when they’re talking to somebody who isn’t one of their teachers. The brutal honesty from the students that I’ve been talking to suggests that if a student can’t see the point in what they’re doing, it is increasingly likely that they’ll offload it onto ChatGPT or a handful of other AI platforms. These students aren’t malicious, compulsive cheats. They’re young people with full lives and other things on their mind than Mr Furze’s year 9 English homework.”

Google on the Offensive: Is This the End of AI Ed Tech? Ed’s Rec. Worth reading, if only to learn about what is going on in the K-12 landscape.

Wess Trabelsi argues that Google’s rapid integration of Gemini into Google Classroom marks a turning point in AI EdTech. By embedding state-of-the-art AI tools like Guided Learning, Gemini 3, and the new HOPE architecture directly into platforms already used in 60,000 schools, Google is sidelining startups that built wrapper apps for similar purposes. These tools now offer functionality—interactive games, math simulations, custom bots, and adaptive memory—that no standalone EdTech product can rival, especially not for free. 

Trabelsi explains how this shift threatens the survival of companies like MagicSchool and SchoolAI, which lack both Google’s infrastructure and distribution reach. He warns that the era of small, innovative EdTech startups may be ending unless they merge Intelligent Tutoring Systems with GenAI tools or find niche markets Big Tech won’t target. (Wess Trabelsi on Substack; Nov. 21, 2025)

No, the Pre-AI Era Was Not that Great Ed’s Rec. Zach Justus and Nik Janos argue that nostalgia for a pre-AI era in education is misplaced because problems like poor student engagement and academic dishonesty existed long before generative AI. Instead of blaming AI as the sole culprit, educators should acknowledge longstanding issues and explore how AI can reveal weaknesses and even improve teaching practices. (Inside Higher Ed; Nov. 20, 2025)

A very thoughtful podcast by Leon Furze–only 15 minutes. He makes the point that it is not the job of the disciplinary teacher to teach students how to use AI tools. He also makes an important distinction between learning with AI and learning through AI. And he argues that there are many instances where AI should not be used at all: Learning Without AI, and even Learning Against AI. You can find the full blog post here: Five Ways to Learn Through AI

These Small-Business Owners Are Putting AI to Good Use. A short, interesting article. Small business owners across industries are using generative AI tools to streamline operations and automate tasks like customer service and marketing. Without tech teams to guide them, these entrepreneurs are experimenting on their own, often with impressive, cost-saving results. (WSJ; Nov. 15, 2025)

The Post-Plagiarism University Ed’s Rec. If you read one article this month about AI, read this. No, you may not agree with Clay Shirky’s conclusion, but his analysis of the issue is thoughtful and well-reasoned. Shirky argues that treating AI-generated content as plagiarism misunderstands both the nature of generative AI and student perceptions, which increasingly see such use as a victimless act rather than academic misconduct. He contends that institutional attempts to manage AI through bans, detectors, or honor codes have failed because they don’t align with students’ digital-native realities or cultural shifts, calling instead for a rethinking of academic integrity policies to reflect AI’s hybrid nature—as both a tool and a conversational partner. (The Chronicle of Higher Education; Nov. 3, 2025)

How AI Is Changing Higher Education Ed’s Rec. A collection of opinion pieces by faculty and scholars, published in The Chronicle of Higher Education. This is a great snapshot of the views across higher ed. Very much recommended. If you would like to read an overview of these short articles, you can find them here: How AI Is Changing Higher Education (Nov. 2025)

 

Giving Your AI a Job Interview Ethan Mollick critiques the current use of standardized AI benchmarks, arguing they often fail to capture the real-world abilities or judgment styles of AI models. He urges educators and organizations to “interview” AIs using context-specific tasks and repeated trials to assess performance and disposition, much like hiring a human expert rather than selecting based on generalized test scores. (One Useful Thing; Nov. 11, 2025)

AI Has Joined the Faculty Beth McMurtrie explores how professors across disciplines are cautiously integrating generative AI into course design, feedback, and teaching support, often as a way to manage unsustainable workloads. Faculty like Lane Davis and Amanda M. Main use AI to enhance clarity and student engagement, while maintaining control and transparency, but others express concern about trust, depersonalization, and overreliance. The article highlights a growing divide: AI offers real utility, yet its expanding use raises ethical questions and institutional pressures that could reshape the teaching profession in unpredictable ways. (The Chronicle of Higher Education; Nov. 5, 2025)

Another Bloody Otter Has Joined the Call Leon Furze humorously critiques the resurgence of AI note-taking bots like Otter and Fireflies in online meetings, arguing that while these tools offer convenience, their widespread, unconsented use raises serious privacy and legal concerns. He urges users to disable auto-join features and obtain explicit consent, emphasizing that relying on default AI assistants without clear boundaries undermines trust and may breach privacy laws. (Leon Furze; Nov. 5, 2025)

Teaching with AI: From Prohibition to Partnership for Critical Thinking Michael Kiener argues that banning AI in higher education misses an opportunity to develop students’ critical thinking and metacognitive skills. Instead of policing use, Kiener advocates for a developmental approach that aligns AI integration with student maturity and course level: teaching responsible use in lower-level courses, scaffolding in mid-level, and promoting autonomy in advanced courses. (Faculty Focus; Nov. 5, 2025)

Why You Can’t Trust Most AI Studies. As usual, Romero is being a bit of a curmudgeon, though his basic point is solid. Romero critiques the contradictory narratives surrounding generative AI by dissecting two recent studies: one from MIT, which claims that 95% of AI pilots fail, and another from Wharton, which reports that 75% of enterprises see a positive ROI. He argues that both are shaped by biases in measurement, publication incentives, and media amplification, and ultimately asserts that most AI studies reflect the chaotic “wartime” moment of AI discourse more than they reflect stable truths about the technology’s actual effectiveness or potential. (The Algorithmic Bridge; Nov. 3, 2025).

Inside NeurIPS 2025 (Conference on Neural Information Processing Systems): The Year’s AI Research, Mapped A new interactive visualization of research papers about AI offers an accessible way to explore the sprawling landscape of machine learning research by generating summaries, naming clusters, and even offering ELI5 (Explain Like I’m 5) explanations. Major themes include reasoning with LLMs, multimodality, and diffusion models, and the project highlights how AI can extend human cognition by helping users make sense of dense, fast-evolving technical domains. (Language Models & Co.; Nov. 3, 2025)

From the very end of October: The Opposite of Cheating: Teaching for Integrity in the Age of AI / Webinar: Ed’s Rec. / This Show & Share webinar recording was facilitated by Tricia Bertram Gallant and David Rettinger, the authors of The Opposite of Cheating (2025). Tricia and David explored the concept of cheating and workable measures instructors can use to understand why students cheat and prevent it, all while aiming to enhance learning and integrity. They shared a positive, forward-looking, research-backed vision of what classroom integrity can look like in the GenAI era.

 


October 2025

Teaching More Effectively with ChatGPT This episode of Tea for Teaching (State University of New York at Oswego) features Dan Levy and Angela Perez Albertos discussing the second edition of Teaching Effectively with ChatGPT, a guide that helps educators use generative AI to support student-centered, active, backward-designed learning. They share concrete ways AI can assist with brainstorming, assessment design, feedback, practice, personalization, and custom course tools, while stressing transparency with students, limits of AI detection, and the need for ongoing updates as the technology evolves. (Oct. 22, 2025)

 

The Age of De-Skilling Ed’s Rec. (Here’s a link through our library database: ) Kwame Anthony Appiah argues that the biggest problem facing America isn’t machines taking over, but our skills eroding as we outsource thinking to AI. He observes that while automation often eliminates some capabilities, the results of a human + AI pairing are often superior to relying on humans alone. The real challenge now is avoiding loss of judgment, imagination, and agency as tools do the work for us. Finally, he suggests that the urgent task is not simply creating smarter machines, but designing systems and institutions that preserve human competence and keep us in control of our tools. Machines won’t replace mentors, Appiah notes. (The Atlantic; Oct. 26, 2025)

The Case Against AI Disclosure Statements Julie McCown argues that mandatory AI disclosure statements, though intended to promote transparency, actually reinforce shame and distrust around AI use, making students feel like they are confessing to wrongdoing. She suggests that educators should instead normalize AI as part of learning by redesigning assignments, modeling ethical AI use themselves, and fostering classroom environments grounded in trust rather than surveillance. (Inside Higher Ed; Oct. 28, 2025)

Problems with “Process Over Product” (Part 1) Ed’s Rec. A lot of us have been championing the adage of process over product. But this post by Jason Gulya will make many of us sit up and mull over the process. Gulya critiques the assumption that emphasizing process in response to AI-generated work is a simple solution. He warns that process-based pedagogy often results in more products, increased surveillance, and constrained creativity, ultimately replicating the same product-focused mindset it’s meant to replace. Instead, he advocates for flexible, student-designed processes aligned with clear learning goals, avoiding punitive oversight while fostering authentic engagement. (The AI Edventure; Oct. 17, 2025)

Ask Your Students Why They Use AI Ernesto Reyes explores why some students, particularly those from marginalized and underprepared backgrounds, turn to AI tools for assignments, often out of fear, insecurity, or a sense of inadequacy rather than a desire to cheat. Drawing from his experiences as a community college English instructor, Reyes urges faculty to recognize how stigma around failure and a lack of writing instruction in earlier education can drive students to misuse AI, and argues for classroom practices that frame writing as a learnable skill rather than a fixed talent. (Inside Higher Ed; Oct. 22, 2025)

From the YouTube description:  For centuries, education has prized productive struggle, the deep engagement and persistence that fuel real learning. But AI thrives on eliminating struggle. So how do we reconcile the two? In this thought-provoking conversation, France Hoang, founder and CEO of BoodleBox, joins hosts Anand and Stefan to explore what AI readiness really means for higher education. From reimagining assessment and academic integrity to redefining human excellence, France shares a compelling vision for how colleges and universities can navigate this era of rapid transformation. (Yes, this has an entrepreneurial/business POV. Oct. 2025)

 

Oral Exams and “MTV Unplugged” Another meditation on the return to oral exams in the age of AI. Matt Reed explores how oral exams are being revived as a countermeasure to AI-enabled cheating, especially in online classes where monitoring is limited. He compares the approach to “MTV Unplugged” (stripping away technological aids to reveal genuine understanding) and notes practical benefits, while also acknowledging concerns around design validity, grading disputes, and student anxiety. (Inside Higher Ed; Oct. 24, 2025)

AI Bubble on the Horizon? The Late Summer–Fall 25 Debate

This is the study that started the current interest in the topic of an AI bubble: The GenAI Divide: State of AI in Business 2025

Here is the way the MIT study was originally reported: MIT Report: 95% of Generative AI Pilots at Companies Are Failing (Fortune; Aug. 18, 2025) The article discusses the findings of “The GenAI Divide: State of AI in Business 2025,” a report from MIT’s NANDA initiative, which reveals that while generative AI has potential for enterprises, most pilot programs struggle to achieve significant revenue growth. Only about 5% of these initiatives succeed, primarily due to a “learning gap” in enterprise integration rather than [emphasis mine] the quality of AI models. 


A more nuanced analysis: MIT Finds 95% of GenAI Pilots Fail Because Companies Avoid Friction Analyst Jason Snyder notes that most GenAI pilots fail not due to poor models but because companies design them to avoid friction/resistance that’s essential for learning and adaptation. The 5% of successful implementations embrace organizational, technical, and human friction by integrating AI into real workflows, enabling feedback loops. Snyder highlights the concept of “shadow AI,” where employees bypass sanctioned tools in favor of personal AI use that already delivers measurable value. He advocates for a new mindset: friction should not be seen as inefficiency but as a design feature that enables GenAI systems to improve. (Forbes; Aug. 26, 2025)

And: What Companies with Successful AI Pilots Do Differently This Harvard Business Review piece explores why only 5% of generative AI pilots succeed and argues that leadership, not technology or talent, is the key differentiator. Organizations that generate real value from AI have leaders across functions who act as “shapers,” embedding AI into workflows, fostering trust, and driving strategic and ethical use. (Harvard Business Review; Sept. 12, 2025)

Reaction from Leon Furze, who imagines life in Higher Ed after the bubble bursts: What Happens When the AI Bubble Bursts? Furze argues that the generative AI industry is heading for an inevitable crash (mirroring the dot-com and crypto bubbles) due to unsustainable financial practices and a disconnect between real-world value and investor expectations. Despite this, the technology itself is functional and widely used for routine tasks, suggesting that when the hype fades, useful tools and frameworks will remain and quietly integrate into daily life. Furze’s blog post encourages educators and technologists to prepare not for AI’s disappearance but for its normalization. He sees a landscape where stripped-down, effective systems persist beyond the collapse of inflated ambitions.


Agentic AI: It’s Been Around, but Is Now in the Headlines because of Agentic Browsers & Their Capability

Agentic browsers are AI-enhanced web browsers or browser extensions that can autonomously perform complex tasks online on behalf of a user. Unlike traditional AI tools that generate text or images, agentic browsers interact with web platforms directly. They can, for example, log into an LMS such as Brightspace and complete assignments.

Colleges and Schools Must Ban Agentic Browsers. Here’s Why Dr. Aviva Legatt warns that agentic AI browsers—tools capable of autonomously navigating and acting within learning management systems (LMSes)—pose serious threats to both educational integrity and institutional cybersecurity. These browsers can complete quizzes, impersonate instructors, and access sensitive data, effectively bypassing authentication systems and violating privacy protocols like FERPA. (Forbes; Sept. 25, 2025)

An Open Letter to Perplexity AI Marc Watkins criticizes Perplexity for marketing its Comet AI browser to students as a tool for academic cheating, using student influencers and ads that glamorize bypassing coursework. He calls the campaign unethical and damaging to higher education, urging educators and institutions to boycott the product to uphold academic integrity and demand accountability from AI developers. (Rhetorica; Oct. 17, 2025)

Not Agentic, but as Marc Watkins notes, troubling advertising. I mean, why write a paper when you can go on a road trip?

 

How AI Is Transforming Education and Creating New Opportunities for Adult Learners and Universities (From the Global Conference on AI & Digital Transformation 2025)

Higher Education AI Transformation 2030 AI cheerleader Ray Schroeder argues in “Higher Education AI Transformation 2030” that institutions must undergo a foundational overhaul driven by AI to remain viable. He critiques higher education’s slow and often defensive stance toward AI, noting that debates over academic integrity have distracted from more strategic uses like personalized tutoring, mastery-based learning, and intelligent advising. Schroeder envisions a “synergetic campus” by 2030 where embodied AI enhances every domain. (Inside Higher Ed; Oct. 15, 2025)

Inside the Minds of AI-Native Students: 8 in 10 Use AI for Schoolwork  Dan Fitzpatrick summarizes a new Oxford University Press report on how UK students aged 13–18 are using AI tools for schoolwork. With 80% already using AI, students are gaining skills like problem-solving and revision support, but many also express concern about over-reliance and diminished creativity. The piece argues that AI literacy must become a core educational competency and highlights the need for teacher training. (Forbes; Oct. 15, 2025)

The Shift Ahead: HBCUs, Artificial Intelligence, and a New Vision for Higher Education 

Historically Black Colleges and Universities are demonstrating exceptionally high rates of AI engagement, with 98% of students and 96% of faculty reporting use of artificial intelligence tools, according to a new report released by Ellucian, the United Negro College Fund’s Institute for Capacity Building, and Huston-Tillotson University.

EDUCAUSE Webinar | AI Potential: Higher Ed’s Shift from Literacy to Fluency (Oct. 8, 2025)

OpenAI’s Sora 2 Is Generating Video of SpongeBob Cooking Meth, Highlighting Copyright Concerns The title of the article says it all. Not SpongeBob. Sigh. (Oct. 5, 2025)

From Passive to Active: Teaching Students to Critically Engage with AI Feedback Ed’s Rec. This webinar explores how to teach students to critically engage with AI feedback, addressing a common concern: that AI turns students into passive consumers rather than active thinkers and writers. Featured speakers include Anna Mills, who has taught writing in California community colleges for 20 years. She is the author of two open educational resource textbooks: AI and College Writing: An Orientation and How Arguments Work: A Guide to Writing and Analyzing Texts in College. Her writing on AI appears in The Chronicle of Higher Education, Inside Higher Ed, Computers and Composition, etc.


September 2025

Just Out: Building AI Literacy: Critical Approaches & Pedagogical Applications: Surfing in a Tsunami (Sept. 2025). NotebookLM video overview of the journal issue:

From the Introduction:

We want to empower our students with sustainable lessons. We do not want to discourage our students with temporary punishments. Now is the time for talking with students about how this emerging technology will affect them and our shared society. So many interesting, novel, and motivating questions are suddenly available! The conversations will not be possible unless faculty can trust students, and vice versa. As bell hooks observed, “the classroom, with all its limitations, remains a location of possibility” (Teaching to Transgress, 1994). Perhaps AI companies are setting new boundaries and creating new problems, but we, as teachers, can and must protect the all-important human relationships within our classrooms.

Table of Contents:

 

Building AI Literacy: Critical Approaches & Pedagogical Applications
Surfing in a Tsunami

Marc Watkins & Stephen Monroe (for an audio brief of this article, click here: Audio Brief of Watkins and Monroe)

Building Critical AI Literacy: An Approach to Generative AI
Kathryn Conrad & Sean Kamperman

Characterizing ChatGPT’s Feedback for FYW: Analyzing Feedback Responses to Inquiry-driven Essays
Kirkwood Adams & Maria G. Baker

AI & Data Competencies: Scaffolding holistic AI literacy in Higher Education
Kathleen Kennedy & Anuj Gupta

Collaborative Intelligence: Towards Practical, Critical Cooperative Teaching & Learning with AI
Amy Walter

The Stakes Are High. Are the Benefits Bountiful?: HBCU Students, AI, & the Power of Composition

Adrienne Carthon (for a deep dive on this article, click here: Deep Dive on HBCU Students and AI)

Algorithmic Foucault: Digital Feminism, the Panopticon, & the Role of AI in Shaping Musical Identities & Pedagogies

Mila Zhu

 


Three Years of GenAI: Podcast *Ed’s Rec.* (Note: includes an anecdote about a student who has changed her own writing so that it is no longer flagged as AI-written. How are AI detectors changing the way we write? See the article below about the em dash.)

In this special episode, the creators of SAMR, TPACK, Triple E, SETI, and GenAI-U reflect on how their views of AI in education have evolved since the launch of ChatGPT in November 2022. They share hard lessons learned and insights gleaned, offering a candid look at the ups and downs in their journey through periods of awe, skepticism, and embracing AI’s potential.

Click to listen>>

Three Years of Gen AI: The Lessons We’ve Learned, What We Plan to Do Differently As We Head Back to School

 


The Em Dash Debate We Should be Having Brenda Thomas argues that the widespread suspicion of em dashes as AI markers distracts from a more productive conversation: their habitual overuse by human writers. Drawing on typographic history and recent commentary, she urges educators to treat the em dash not as evidence of AI authorship but as a teachable moment to promote more precise punctuation choices in student writing. (Inside Higher Ed; Sept. 30, 2025)


You Will Die Mid-Scroll Ed’s Rec. (Recommended in part for the brilliant title!) Alberto Romero critiques Meta’s AI-generated video feed, Vibes, as the logical outcome of tech’s zombifying trajectory where content is no longer consumed for meaning, but merely scrolled through endlessly. We will be reduced to passive, manipulated beings in a machine-driven attention economy. (The Algorithmic Bridge; Sept. 29, 2025)

 


When Wrong LLM Answers Get You to the Right Information Mike Caulfield argues that incorrect responses from large language models can still be valuable if treated as conversational prompts and push users to interrogate, clarify, and ultimately uncover useful information through dialogue. Drawing on Robert Stalnaker’s theory of common ground, Caulfield suggests that this dynamic mirrors human conversation, where mutual understanding often emerges not from initial accuracy but from the process of negotiation and correction. (The Ends of Argument; Sept. 27, 2025)


The Petrol Tank for AI Discovery Might be Running Dry as Publishers Close Access to Scholarly Content such as Abstracts Due to AI Incentives  Ed’s Rec. Librarian Aaron Tay details how major publishers like Elsevier and Springer Nature are now restricting access to abstracts from non-Open Access articles, sharply reducing their availability in open indexes like OpenAlex. This shift undermines AI-powered academic search tools, hinders scholarly discovery and bibliometric research, and highlights the urgent need for stronger advocacy around open metadata policies and researcher education about AI tool limitations. (Aaron Tay’s Musings about Librarianship; Sept. 27, 2025)


90% of College Students Use AI AI use is now mainstream in higher education—about 90% of U.S. students (across age groups and institution types) use AI for academic tasks, mirroring global trends. The article argues that colleges must urgently build AI fluency. (Forbes; Sept. 18, 2025)

I’m Happy That the AI Industry Is Being Constantly Mischaracterized Alberto Romero argues that the AI sector’s chronic overhyping has invited and deserved the wave of public skepticism and media misrepresentation now defining its perception. While many critiques mischaracterize the actual capabilities of models like GPT-5, Romero sees this backlash as a natural counterbalance—an inevitable correction in an ecosystem fueled by marketing excess and inflated expectations. (The Algorithmic Bridge; Sept. 26, 2025)

The Hallucinating Machines We Can’t Live Without Ed’s Rec. Marc Watkins argues that generative AI, despite its persistent flaws and hallucinations, is following the historical trajectory of other imperfect but deeply integrated technologies—neither savior nor destroyer, but something we must learn to live with thoughtfully. Rather than rejecting AI outright, Watkins advocates for pedagogical approaches that blend digital and analog methods, emphasizing human agency and mindful engagement over binaries of hype or doom. (Rhetorica; Sept. 26, 2025)


Real AI Agents and Real Work Ethan Mollick explains that recent AI models can now perform complex, economically valuable tasks—such as replicating academic research—at near-human levels, thanks to advances in autonomous agent capabilities. While this progress enables meaningful work, like scaling research replication, Mollick warns that without thoughtful oversight, these same tools could flood workplaces with low-value output, urging educators and professionals to prioritize judgment and purpose over sheer productivity. (One Useful Thing; Sept. 29, 2025)


Beyond Tool or Threat: GenAI and the Challenge It Poses to Higher Education Ed’s Rec. The authors argue that generative AI is forcing higher education to confront long-standing structural and cultural assumptions about teaching and assessment. Rather than reacting with bans or technical workarounds, institutions should embrace AI as a catalyst to shift from content transmission toward fostering critical thinking, ethical reasoning, and human judgment, skills AI cannot replicate. Drawing on Indiana University’s large-scale deployment of GenAI tools and faculty development programs, the article emphasizes that reimagining pedagogy and assessment is essential not only for educational relevance but also for rebuilding public trust in higher education. Contains links to other useful sources. (Educause; Sept. 2025).

Read the entire Issue of Educause here: The AI Tsunami Is Here


Not Just Another AI Statement (Sept. 22, 2025) Crystal N. Fodrey and Kristi Girdharry describe how the Association for Writing Across the Curriculum (AWAC) developed its 2025 expanded statement on AI and writing. They emphasize that the collaborative process—listening, balancing perspectives, drafting with transparency, and testing with the community—was as important as the final document itself.

Their recommendations for coming up with an AI policy (or any policy) document:

  • Listen first. Start by asking what matters most to your community.

  • Balance the table. Ensure diverse perspectives, but keep the group small enough to act.

  • Iterate with care. Use tools—even AI—but let the human voice lead.

  • Test in public. Dialogue with your community strengthens the final product.

  • Make it usable. A statement should function as a resource, not just a position.

 


AI Is Making the College Experience Lonelier  Ed’s Rec. In “AI Is Making the College Experience Lonelier,” the authors warn that ChatGPT’s “study mode” may erode the collaborative heart of higher education by encouraging students to work alone rather than with peers. They argue that true learning emerges through the messy, social process of explaining, questioning, and solving problems together. (The Chronicle; Sept. 22, 2025)

 


Free Course Teaches Students AI Skills, Ethics This article looks at a free, one-credit online course offered by The University of Mary Washington this summer to give incoming students a foundation in generative AI tools, ethics, and applications. Taught through eight asynchronous modules, the course enrolled 249 students, many of them freshmen, and early feedback shows strong interest in expanding it into required or discipline-specific offerings. (Inside Higher Ed; Sept. 22, 2025)


From back in February, but worth a share here. For those considering the use of chatbots to assess student understanding of assigned readings: OU Law Launches AI Quizbot to Improve Student Engagement, Outcomes

 


Agency Is the New Literacy (Sept. 9, 2025): Dan Fitzpatrick’s latest podcast, this one unpacking his Forbes article: Why Agency Is the New Literacy


Yes, from Google, an ed tech firm promoting GenAI in higher education. Still, worth a read: Creating the AI-Literate Campus: Advancing Skills for Faculty and Students

Beyond the Loop: Reclaiming Pedagogy in an AI Age pushes back against HITL (Human in the Loop), a core concept articulated by Google and others. The term originated in technical fields and “refers to systems that rely on human expertise to guide, correct or oversee automated processes.” This paper by Daniela Hau argues that the metaphor of “human-in-the-loop” misframes education by casting AI as the primary actor and reducing teachers to overseers, and instead calls for a pedagogy-first approach where AI plays only a secondary, supportive role in relational, context-driven learning. (UNESCO; from August but I put it here to follow the Google document)

AI and Culture in Late Summer 2025 Futurist Bryan Alexander scans four cultural fronts: AI as artifact (e.g., AI-driven bands, AI “interviews” with the dead, AI-infused toys/restaurants), AI for storytelling (novels, films, ads, and videos made or augmented with generative tools), companionbots (steady but likely niche use, with mixed evidence from surveys and therapybot research), and the cultural divide (protests, vendor removals, IP backlash, and vandalism tied to anti-AI sentiment). He finds the same trends persisting (anthropomorphizing AI, creative experimentation, and polarized reactions) while noting their uncertain scale (e.g., only a small share of users seek companionship, and headline incidents may be edge cases). A recurring motif is AI used to reanimate the dead, which he flags as psychologically and culturally significant. (AI, Academia, and the Future; Sept. 8, 2025)

Do Students Need to Know How LLMs Work, or to Predict How They’ll Act?  Mike Caulfield argues that students don’t need to master the mechanics of LLMs, but they do need practical mental models that help them anticipate and work with AI behavior. Analogies like the “party game” can guide better prompting and error recognition, whereas technically accurate explanations (like stochastic parrots or Wolfram’s math) may not aid classroom use. The priority is to judge explanations by their usefulness for learning and decision-making, not by perfect accuracy. (The Ends of Argument; Sept. 12, 2025)

What Do Educators Want to Learn about GenAI? Leon Furze summarizes findings from a global survey of nearly 3,000 educators. The most common concern was how students can use AI without losing critical and creative thinking skills, followed closely by teaching AI ethics—including issues of bias, environmental impact, and digital divides. Many educators expressed frustration with shallow AI-generated resources and instead sought guidance on how to use AI to create high-quality, context-specific materials that complement, not replace, their expertise. Rethinking assessment emerged as another major theme, with respondents exploring frameworks but acknowledging widespread systemic barriers to change. Finally, time constraints were a recurring issue: teachers want short, flexible, actionable professional development (ideally online and asynchronous) that respects their workload while helping them keep pace with fast-evolving technology. (Leon Furze; Sept. 8, 2025)

The Student Brain on AI: A Panic over “Brain Rot” Obscures a More Complex–and Surprising—Reality Beth McMurtrie writes that while alarmist claims about “brain rot” are unsupported, current research shows AI changes how people engage cognitively and can subtly shape what they write and believe, so lab findings require careful interpretation. In MIT’s preliminary EEG study of 54 adults writing SAT-style essays, ChatGPT users showed lower neural connectivity and more homogeneous, convergent essays, yet the authors reject “brain damage” language and note that strategic timing matters, since prior solo writing later paired with AI increased engagement. Across learning studies, guided AI tutors with guardrails outperform unguided chatbot use: a Harvard physics RCT found higher engagement and test scores with a custom tutor, a large high-school math study saw standard chatbots help practice but hurt unassisted tests while tutor-like bots maintained performance, and a graduate Python study showed outcomes depend on how students use the tools.

For educators, the takeaways are to design structured, scaffolded AI activities, attend to cognitive offloading and students’ affective relationships with AI beyond the classroom, pilot study-mode features while recognizing limits, and prioritize verification, qualitative inquiry, and new research on practices like text leveling when aligning assignments, assessment, and policy. (The Chronicle of Higher Education; Sept. 9, 2025)

On Working with Wizards Ethan Mollick argues that AI use is shifting from a co-intelligence model, where humans collaborate and guide, toward a wizard model, where systems produce powerful outputs through opaque processes. Drawing on examples with GPT-5 Pro, Claude 4.1 Opus, and NotebookLM, he highlights both the impressive accuracy and the troubling unverifiability of their work, noting risks for expertise development when humans only verify rather than create. For educators, he stresses the need to teach students new literacies: knowing when to summon AI, how to judge outputs without full understanding of process, and how to balance “good enough” trust with critical scrutiny. (One Useful Thing; Sept. 11, 2025)

From Mollick: Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails. We have to learn to judge what’s right, what’s off, and what’s worth the risk of not knowing. This creates a hard problem for education: How do you train someone to verify work in fields they haven’t mastered, when the AI itself prevents them from developing mastery? Figuring out how to address this gap is increasingly urgent.

AI Companies Roll Out Educational Tools Google, Anthropic, and OpenAI are each releasing new AI tools aimed at education this fall, taking different approaches to enhance student learning and support educators. Google emphasizes guided, active learning features; Anthropic introduces advisory boards and AI fluency courses; and OpenAI offers a new interactive Study Mode—together signaling a competitive race to shape the future of AI in education. (Inside Higher Ed; Sept. 3, 2025)

In his video below, Mike Caulfield emphasizes that using AI is not about getting instant answers but about engaging in a process, much like traditional search. Effective use requires asking follow-up questions, checking sources, and developing skills to evaluate and refine AI outputs, rather than stopping at the first response. (Sept. 1, 2025)

August 2025

A Better Way to Think about AI Authors David Autor and James Manyika argue that the real promise of AI lies less in full automation and more in collaboration with human expertise. Attempts at premature automation—like diagnostic tools that mislead doctors or autopilots that fail in crises—undermine both human skill and safety. By contrast, collaboration tools (e.g., stethoscopes, heads-up displays, Google’s AMIE) amplify expert judgment without replacing it, allowing humans to remain engaged while benefiting from AI’s vast memory, pattern recognition, and speed. Studies show that human-AI teams often outperform either alone, especially when systems are designed to expose reasoning, highlight uncertainty, and invite user input. The key challenge ahead is designing AI for complementarity—slotting machine strengths into human gaps—while resisting the temptation of overreliance on fragile automation. (The Atlantic; Aug. 24, 2025)

 

 


Higher Education Needs Frameworks for How Faculty Use AI (Rhetorica; Aug. 22, 2025) Marc Watkins argues that while GPT-5 disappointed expectations of AGI, current AI tools already pose ethical, labor, and equity challenges in higher education. Faculty freedom to use or reject AI has created inconsistencies, particularly in grading, feedback, and communication, raising questions about relationships with students and fairness when some instructors automate while others do not. Large-scale initiatives like California’s PAIRR and tools like Grammarly are mainstreaming AI feedback, but they risk deepening inequities and reshaping faculty workloads. Watkins highlights that institutional guidance is vague, often stopping short of clear rules, and that automation could exacerbate labor issues, especially for non-tenure faculty. To address this, he proposes a draft VALUES framework (Validate, Assessment, Labor, Usage, Ethics, Secure) to help faculty weigh when and how to use AI responsibly, though he acknowledges it needs refinement and broader dialogue.

 


Mass Intelligence: From GPT-5 to Nano Banana: Everyone Is Getting Access to Powerful AI Ethan Mollick’s latest post notes that powerful AI is becoming cheap, efficient, and easy to use, bringing advanced models like GPT-5 and new image generators into the hands of over a billion people worldwide. Mollick argues that this “Mass Intelligence” era will upend institutions built for a world where intelligence was scarce, forcing society to adapt to both the opportunities and chaos created when nearly everyone has access to unprecedented cognitive tools. (One Useful Thing; Aug. 28, 2025)


Two articles from the MIT Technology Review about AI and creativity:

Replication and Creation (James O’Donnell, MIT Tech Review)

This article traces how diffusion models, which turn randomness into coherent patterns, are transforming music creation and raising tough questions about authorship, originality, and copyright. It highlights lawsuits against AI music generators like Suno and Udio, while also noting that audiences often can’t tell human-made songs from AI ones, forcing society to confront whether machine-made works can truly be called creative. (August, 2025)

Creative Differences (Bryan Gardiner, Interview with Samuel Franklin)

This interview with Samuel Franklin examines the modern rise of “creativity” as a cultural value, showing that the concept only took hold in the post–WWII era as a response to conformity and corporate life. Franklin argues that creativity has become an overhyped and commodified ideology, and, from his point of view, AI now challenges the assumption that only humans can be creative. (Aug. 2025)

 


A Framework for Using AI with OER Lance Eaton introduces an OER & AI Adoption Framework that builds on the traditional Adopt–Adapt–Build model by outlining six ways AI can support open education: Curate, Contextualize, Co-create, Cultivate, Amplify, and Sustain. This framework offers educators digestible entry points for integrating AI into OER work while still requiring critical attention to issues like bias, copyright, labor, and sustainability. (AI + Education = Simplified; Aug. 25, 2025)

Curious about Copilot (available via Microsoft Office on our campus)? Take a look at this 13-minute overview of Copilot and how its AI functions might be used (Leon Furze; August 2025):

Bringing C.H.A.O.S. to Chaos: Syllabi with AI Usage Statements As AI transforms classrooms, educators should reduce uncertainty by including a clear AI usage policy in their syllabi, according to this article from Faculty Focus. Such policies should define acceptable and unacceptable uses, explain how to cite AI, outline consequences for misuse, and be regularly updated as AI evolves, with classroom discussions and scaffolding to support implementation. C.H.A.O.S. stands for Communicating How AI Operates in the Syllabus. (Faculty Focus; Aug. 25, 2025)

Leon Furze: Five Core Principles for Assessment and GenAI 

A graphic showing the five core principles for assessment and GenAI; please follow the link to the article for more information about these principles.

Considering ChatGPT-5 Bryan Alexander reviews ChatGPT-5’s launch, noting key updates like a unified model-router, reduced hallucinations, safer completions, multiple personas, and Google integration, while reactions ranged from excitement to frustration over personality changes and perceived incremental progress. He concludes that while the release refines many features, its overhyped rollout and mixed reception may signal both limits to LLM development and opportunities for competitors to gain ground. (AI, Academia, and the Future; Aug. 18, 2025)

Although the details of the study are difficult to find (it was produced by Semrush, an SEO company), its central claim makes sense: Reddit has opened its threads to LLMs, as have other Google-related products. (See ed-tech expert Olivia Lara-Gretsky.) This is one reason students should be introduced to academic-research AI tools: there is a big difference between general-purpose AI tools and research-focused ones.

Chart: the report, based on 150,000 citations from 5,000 randomly selected keywords, shows that 40.1% of all references in AI-generated responses come from Reddit, the discussion-driven platform, far surpassing traditional heavyweights like Google and Wikipedia.

At one elite college, over 80% of students now use AI – but it’s not all about outsourcing their work A survey of Middlebury College students found that over 80% use generative AI, but most rely on it to augment their learning—like explaining concepts or summarizing readings—rather than to automate their work. The study, backed by additional global data and Anthropic’s usage logs, suggests universities should avoid blanket bans and instead guide students toward beneficial uses of AI while discouraging harmful shortcuts. (The Conversation; Aug. 18, 2025)

The AI Wolf That Education Must Face Marc Watkins argues that GPT-5’s chaotic rollout highlights how generative AI has become an unavoidable “wolf” in education, leaving faculty stuck between banning, embracing, or ignoring it—none of which truly solve the problem. Instead, he urges teachers to reframe AI policies as invitations for students to reflect on their own values and responsibilities with the technology, whether through open dialogue in small classes or clearer stoplight-style rules in larger courses. (Rhetorica; Aug. 10, 2025)

Why I Think Academic Deep Research — or at Least Deep Search — Will “Win” Aaron Tay argues that while Retrieval-Augmented Generation (RAG) tools once seemed revolutionary, their shallow, one-shot searches rarely meet the needs of serious researchers. He now believes the future of academic discovery lies in Deep Search — slower, iterative, LLM-assisted retrieval that produces higher-quality, more comprehensive results — which will underpin and ultimately strengthen Deep Research tools. (Aaron Tay; Aug. 8, 2025) And here is a TL;DR video version of Tay’s article, courtesy of NotebookLM:
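A quick aside before the video: for readers who want a concrete picture of the distinction Tay draws, here is a minimal, hypothetical Python sketch contrasting one-shot RAG with an iterative deep-search loop. The function names (search_index, llm_refine_query, llm_summarize) are illustrative placeholders, not any real library’s API; a real system would call an actual search index and an LLM at those points.

```python
from typing import List

# Hypothetical stand-ins for a search API and an LLM; these names are
# illustrative assumptions, not any real library's interface.
def search_index(query: str, top_k: int = 5) -> List[str]:
    return [f"passage {i} about '{query}'" for i in range(top_k)]

def llm_refine_query(question: str, found: List[str]) -> str:
    # A real system would ask the LLM what is still missing; here we just
    # append a marker so the sketch runs end to end.
    return f"{question} (follow-up {len(found)})"

def llm_summarize(question: str, passages: List[str]) -> str:
    return f"Answer to '{question}' synthesized from {len(passages)} passages"

def one_shot_rag(question: str) -> str:
    """Shallow RAG: retrieve once, answer once."""
    return llm_summarize(question, search_index(question))

def deep_search(question: str, rounds: int = 4) -> str:
    """Iterative 'deep search': each round refines the query based on what
    has been retrieved so far, trading speed for breadth and coverage."""
    query, found = question, []
    for _ in range(rounds):
        found.extend(search_index(query))
        query = llm_refine_query(question, found)
    return llm_summarize(question, found)

if __name__ == "__main__":
    print(one_shot_rag("effects of AI tutoring on retention"))
    print(deep_search("effects of AI tutoring on retention"))
```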

GPT-5: It Just Does Stuff Ethan Mollick describes GPT-5 as a major leap because it “just does stuff”—choosing the right model, reasoning level, and even suggesting or completing tasks without much prompting. While still imperfect, GPT-5 feels proactive and creative, shifting the human role from carefully prompting to simply guiding and checking what the AI produces. (One Useful Thing; Aug. 7, 2025)

These Faculty Will Not Bow Down to AI / Pull Quote: Through a combination of oral examinations, one-on-one discussions, community engagement and in-class projects, the professors I spoke with are revitalizing the experience of humanities for 21st-century students. (New York Times; Aug. 6, 2025)

An interview with a deliberately provocative title: College Degrees Are Becoming Useless. From the interviewer: So, it’s not that college degrees are useless; degrees will just have to be reimagined and reconfigured in what will be a completely different economy with different needs and opportunities.

Education & Society Disrupted Pre-GPT5: AGI Movement . . . TL;DR below, but you should give Bauschard’s post a look.

Here’s a concise 3-bullet summary of Stefan Bauschard’s Substack:

  • AI’s Rapid Disruption: Education and society are being reshaped by AI—causing unemployment, redefining degrees, accelerating learning, fueling inequality, and expanding surveillance, warfare, and synthetic realities.
  • Technological Leaps: Major advances include Claude 4.1, Gemini DeepThink, NotebookLM+, Google GEMS, Canva AI tools, ElevenLabs music, open-source GPT-oss, and robotics—pushing toward AGI and human-level reasoning. (Bauschard’s overview of each tool is informative).
  • Educational Challenge: Bauschard argues that schools must adapt to AI-driven change, focusing on AI literacy, guiding students responsibly, and reconfiguring degrees to match a new economy while managing ethical, social, and environmental consequences.

Don’t have the time to read Bauschard’s post? Take a look at this video I created with NotebookLM (and posted to YouTube) based on the post:

Slopocalypse Now Gary Marcus argues that AI-driven “enshittification,” a concept coined by Cory Doctorow, is worsening and manifesting widely as “AI slop,” low-quality derivative content produced by generative AI. He suggests this proliferation of unreliable AI-generated material threatens various fields, including science, journalism, and even search engines, ironically undermining the very companies driving AI technology forward.

Image: an AI-generated movie poster in the style of Apocalypse Now, with the words “Slopocalypse Now.”
An example of the AI-generated art Gary Marcus is writing about.

Colleges Meet Just a Fraction of Demand for AI Training Patrick Jack (Times Higher Education, via Inside Higher Ed) reports that while nearly 57 million Americans want to gain AI skills, only a sliver are doing so through colleges, where just 0.2% of learners are enrolled in for-credit AI programs. Most turn instead to platforms like Coursera, leaving universities far behind despite rising interest and rapid expansion in nondegree offerings. This gap persists even as AI course enrollment in higher education has grown 45% annually since Carnegie Mellon launched the first bachelor’s in AI in 2018, with institutions like SUNY Buffalo scaling enrollment from 5 to 103 students between 2020 and 2024. (August 1, 2025)

OpenAI Has Come for Education Leon Furze regularly uses AI tools—however, he is not a proponent of unregulated AI use in education. In this blog post, he argues that OpenAI’s integration with Canvas and release of “study mode” reflect a broader, troubling strategy to entrench itself in the global education sector by positioning itself as both the cause of and solution to AI-related challenges in classrooms. He critiques OpenAI’s products as pedagogically shallow and technically unreliable, warning that its growing influence—via institutions, government policy, and ed-tech platforms—risks reducing education to a systematized, corporatized model lacking meaningful teaching and learning.

First Impressions of ChatGPT’s Study Mode Another Leon Furze post, one that could be subtitled “What is the point?” Furze reviews OpenAI’s new Study Mode in ChatGPT, noting that while it aims to promote deeper learning through Socratic questioning and scaffolded prompts, it often fails to deliver meaningful pedagogy, frequently giving direct answers or excessive praise instead. He questions its effectiveness and suggests the tool’s real purpose is market capture, as OpenAI consolidates control over educational AI use through platform integrations and model tiering that may ultimately marginalize other ed-tech players. (August 1, 2025)

 


July 2025

The Right to Resist: An open letter from educators who refuse the call to adopt GenAI in education. The title says it all, and signatures are welcome.


Debate of the month (more precisely, debate of the year): What are the potential impacts of agentic AIs on higher education, especially online courses?

From back in March: The Death of the LMS in Higher Ed: How AI Agents May Make the Traditional LMS Learning Obsolete in the Near Future Rhoads argues that advanced AI agents like Operator are poised to render traditional Learning Management Systems (LMS) obsolete in higher education by automating assignment completion, quiz responses, and even discussion board participation—making genuine engagement difficult to ensure. He highlights urgent questions for educators and policymakers: whether to develop AI-resistant platforms, revert to more in-person or competency-based assessments, and focus on authentic, experience-based learning to maintain academic integrity. Rhoads concludes that the challenge is less about the death of the LMS itself, and more about redefining meaningful assessment and student engagement in an AI-driven educational landscape. (Navigating the Present and Future of Education; March 2, 2025)

Here is an example of such an agent taking over and successfully completing a course in Canvas:

OpenAI’s New ChatGPT Agent Can Control an Entire Computer and Do Tasks for You The title is self-explanatory. No summary needed. (The Verge; July 17, 2025) 

Hold on! But can “agentic” AI really perform tasks in a thoughtful way? Leon Furze begs to differ: Initial Impression of OpenAI’s Agents: Unfinished, Unsuccessful, and Unsafe Furze offers a critical, hands-on review of OpenAI’s new “Agents” feature, which claims to enable ChatGPT to autonomously complete complex tasks by combining browser navigation, code execution, API access, and app integration within a virtual computer environment. Despite the promotional hype, Furze finds that, in practice, Agent struggles with real-world tasks like completing online courses and creating PowerPoint presentations, often failing due to technical and usability issues. The piece highlights that, while Agent demonstrates intriguing technical potential, its current reliability and effectiveness fall far short of its marketed capabilities. (Leon Furze; July 2025)


 


NotebookLM Is the Perfect AI Tool for School or Work. Here’s What It Does Ed’s Rec. Blake Stimac provides an in-depth review of Google’s NotebookLM, a Gemini-powered AI tool tailored for organizing, analyzing, and synthesizing user-provided sources like notes, documents, and videos. Designed for both students and professionals, NotebookLM offers features such as Audio Overviews, interactive Mind Maps, study guides, timelines, and briefings—emphasizing comprehension over generic AI answers. Stimac highlights its usefulness for creating structured learning materials and meaningful engagement with content, making it a practical and customizable study or research assistant in both academic and workplace settings. (CNET; July 1, 2025)

We Will Not Let OpenAI Write Our Education Policy Ed’s Rec. Leon Furze critiques OpenAI’s newly released “Economic Blueprint” for Australia, particularly its unsolicited proposals for AI integration in education. Furze argues that OpenAI’s recommendations—such as embedding AI in curricula and creating student innovation pathways—ignore existing national efforts and reflect a lack of consultation with Australian educators. He warns against allowing global tech companies to influence national education policy under the guise of economic benefit, calling for educators and policymakers to resist corporate overreach in shaping the future of Australian education. (Leon Furze; July 2, 2025)

Startling 97% of Gen Z Students Are Using AI for School Work A recent ScholarshipOwl survey of over 12,000 Gen Z high school and college students found that 97% have used AI tools like ChatGPT for schoolwork, including 31% for writing class essays and 35% for homework, with over one in five using AI to write college or scholarship essays. The article references an MIT study indicating that reliance on ChatGPT for essays is linked to reduced student engagement and lower retention of material. While some experts, like Dr. Thomas Lancaster, warn of the negative educational impact, others, such as Richard Clark of Georgia Tech, suggest that AI may accelerate needed changes in college admissions, potentially ending the traditional essay requirement. (NY Post; July 5, 2025)

“It’s Just Bots Talking to Bots”: AI Is Running Rampant on College Campuses as Students and Professors Alike Lean on the Tech This article details growing student concerns about professors’ widespread but often undisclosed use of AI tools for grading and lesson planning, arguing that such practices raise questions of fairness and educational value. While faculty and students both use AI extensively, the article highlights a need for open communication about how and why these tools are used. (Fortune; July 8, 2025)

Keep in Mind that AI Is Multimodal Now Ray Schroeder highlights how AI has rapidly evolved from simple text-based chatbots to powerful multimodal systems capable of processing and generating audio, images, video, and complex documents. He urges educators to take full advantage of these new capabilities, such as image analysis, spreadsheet generation, and multimedia creation. (Inside Higher Ed; July 9, 2025)

Artificial Intelligence Is the Opposite of Education Ed’s Rec. Even AI cheerleaders need to read this cri de coeur by Helen Beetham.  This blog post argues that generative AI is fundamentally at odds with the values and practices of education, asserting that AI undermines truth-telling, reasoning, expertise development, and community learning. Beetham contends that AI’s reliance on probabilistic outputs, opacity, and industrial-scale data work not only erodes knowledge-making but also incentivizes educational leaders to compromise on academic integrity and social responsibility. The post is especially useful for educators and policy makers as it links AI’s flaws to broader systemic crises in higher education, emphasizing the risks of over-reliance on AI for teaching, assessment, and expertise. The many linked resources offer further evidence and diverse critical perspectives, making this a valuable starting point for faculty discussions about the ethics and consequences of AI adoption. (Imperfect Offerings; July 10, 2025) More about Helen Beetham (Oxford)

Fantastic podcast with Doug Belshaw and Helen Beetham that breaks down “AI literacies”: what does the term even mean in 2025? Link: AI Literacies

Harvard and MIT Study: AI Models Are Not Ready to Make Scientific Discoveries* (AI can predict the sun will rise tomorrow, but it can’t tell you why.) Alberto Romero breaks down a recent Harvard and MIT study showing that current AI models, including large language models (LLMs), can make highly accurate predictions in scientific domains (such as predicting planetary motion) but fail to encode underlying explanatory world models like Newton’s laws of gravitation. The study found that these models rely on case-specific heuristics rather than generalizable principles, meaning they predict “what” will happen without understanding “why”—a fundamental limitation for AI-driven scientific discovery and a setback for the quest toward artificial general intelligence (AGI). While the findings do not render LLMs useless for practical tasks, they strongly suggest that a new architectural breakthrough is needed for AI to achieve true explanatory power and generalization beyond familiar data, rather than simply scaling existing models. (The Algorithmic Bridge; July 15, 2025)

*Why are these models not ready? A very simplified answer: LLMs Die Every Time You Close the Chat Window Ed’s Rec. Just a reminder that LLMs, unlike humans, cannot build on past interactions or learning experiences from one session to the next (at least not yet); this limits their ability to support true ongoing student development, personalized learning, or creative breakthroughs. This blog post is much more entertaining than my summary makes it sound! (The Algorithmic Bridge; July 18, 2025)

How Peer Review Became So Easy to Exploit by AI A short Medium newsletter post explaining how AI tools have made the academic peer review process more efficient but also more vulnerable to exploitation, as some researchers now embed hidden prompts in their papers to manipulate AI-assisted reviews, prompting large language models to overlook flaws or inflate strengths. Frightening. (The Medium Blog; July 14, 2025)

Michigan Law Adds AI Essay Prompt The University of Michigan Law School, after previously banning generative AI for admissions essays, now requires applicants to use AI for one of its supplemental essay prompts, asking them to reflect on their AI use and predict its future role in their legal studies. The policy shift recognizes AI’s growing importance in legal practice (nearly half of large law firms already use AI) and aims to assess applicants’ responsible and effective AI use. (Inside Higher Ed; July 18, 2025)

AI in the Writing Process: A Problem of Purpose Ed’s Rec. Leon Furze explores how generative AI can “flatten” the traditional writing process by enabling users to bypass important developmental stages and move rapidly from idea to publication—often at the expense of learning. The author argues that in academic settings, overreliance on writing as a proxy for learning and assessment, coupled with a lack of focus on process, has made it easy for students to use AI to circumvent meaningful skill development. Practical recommendations include decoupling writing from assessment, valuing and teaching process explicitly, and considering alternative assessment forms, while recognizing that finished writing is not always the true endpoint of learning or engagement. (Leon Furze; July 14, 2025)

Dan Fitzpatrick’s podcast responds to a piece published in The Guardian, titled “It’s true that my fellow students are embracing AI – but this is what the critics aren’t seeing,” written by Elsie McDowell. Fitzpatrick’s response is critical, not of McDowell, but of the university system. He writes: “Elsie is a university student—bringing a voice that’s often underrepresented in policy debates and public discourse. And this voice, fresh from the lecture halls and library queues, carries with it a reality check that educators, school leaders, and policymakers would do well to listen to closely.”

The Duke AI Suite So, Duke has given its students access to a suite of products, including ChatGPT. (July 2025)

Higher Education and the World Prepare for the Upcoming AI-ified Academic Year: A Scan from Summer 2025 In this horizon scan, Bryan Alexander surveys how higher education is preparing for an “AI-ified” 2025–26 academic year, highlighting a spectrum of responses from enthusiastic adoption to organized opposition. Examples include Ohio State’s campus-wide AI Fluency Initiative, Duke’s provision of ChatGPT-4o to undergraduates, adaptive AI teaching at Jessup University, and new AI ethics prompts in Michigan Law’s application process. Alongside these implementations, resistance is mounting—over 670 faculty have signed a letter against GenAI, citing ethical, legal, and educational harms, while student and faculty surveys reveal deep divides and ongoing confusion about AI’s role. (July 21, 2025)

I Can Spot AI Writing Instantly Andy Stapleton explains how to spot the typical signs of AI-generated writing—such as formulaic structure, lack of nuance, and repetitive phrasing—and demonstrates how to revise or prompt AI text to bypass AI detectors, cautioning viewers to use this knowledge responsibly. Of course, he is also speaking to the rest of us who—at this point—are terrified of sounding like AI! PS: Don’t know about AI Bingo? Take a listen to find out how to play!

I’m a College Writing Professor. How I Think Students Should Use AI This Fall Jonathan D. Fitzgerald (Salem State), writing for Mashable Perspectives, argues that college students should treat AI as a collaborative tool in the writing process—useful for brainstorming, feedback, and research—not as a replacement for human thinking or creativity. Drawing from personal experience and classroom practice, he emphasizes the value of “chatting with the archive” through tools like ChatGPT and NotebookLM, while cautioning that AI can fabricate, flatter, or misinterpret, requiring students to engage critically and responsibly. (July 30, 2025)


June 2025

**New Research Study** Generative AI in Higher Education: Demographic Differences in Student Perceived Readiness, Benefits and Challenges The study by Greenhalgh, Rosenberg, and Koehler found that higher education students broadly use (78% reported using AI) and value generative AI tools, but perceptions vary significantly based on students’ academic classification, employment status, and institution type. The authors recommend tailored AI literacy curricula and institutional policies; however, their findings are limited by small subgroup sizes, U.S.-centric sampling, and the cross-sectional design. (June 2025)

**New Resource from the Writing Across the Curriculum Clearinghouse** The Field Guide to Effective Generative AI Use is a collaborative project featuring six educators who analyzed their own interactions with AI tools like ChatGPT and Claude through annotated transcripts and reflective self-analysis. Rather than offering optimization tips, the guide emphasizes metacognition, cognitive distancing, and pedagogically grounded insights into safe, ethical, and effective GenAI use in education. Hosted by the WAC Clearinghouse, the project aims to help educators better understand both the challenges and opportunities of AI in teaching by showcasing detailed “thinking aloud” approaches that reveal how instructors reason through real-world applications of AI in the classroom. (June 2, 2025)

**More Than 60 Organizations Sign White House Pledge to Support AI Education** Over 65 organizations, including major tech firms like Amazon, Apple, Google, and OpenAI, have signed the White House’s “Pledge to America’s Youth: Investing in AI Education.” The initiative, stemming from President Trump’s executive order, aims to promote AI literacy in K–12 through partnerships offering grants, curricula, tools, and teacher training. While framed as a workforce development effort, the pledge signals growing influence of private tech companies in shaping public AI education policy. (Technological Horizons in Education; June 30, 2025)

What Happens After A.I. Destroys College Writing? Ed’s Rec. In this thoughtful essay, Hua Hsu investigates how generative AI has disrupted the foundations of college writing, drawing on interviews with students and educators across institutions. The interviews with students are truly enlightening. And, according to Hsu, while some universities respond by enforcing handwritten exams, blue books, and oral assessments, others shift toward emphasizing writing processes, collaboration, and critical engagement with texts and peers inside the classroom. Of course, still other campuses have embraced the bot, which worries Hsu. (The New Yorker; June 30, 2025)


That MIT Study: Your Brain on ChatGPT (AI Debate of the Month #1)

Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task A summation of their findings: While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning. (MIT Media Lab; June 10, 2025)

Important follow-up (caveats) by the authors of the study: FAQ for “Your Brain on ChatGPT”

MIT Study Shows ChatGPT Reshapes Student Brain Function and Reduces Creativity When Used from the Start Author Emma Thomson reports that a recent MIT study, Your Brain on ChatGPT, reveals that students who rely on generative AI like ChatGPT at the start of writing tasks exhibit reduced brain connectivity, memory recall, and creative engagement. EEG data from 54 students showed that early AI use diminishes executive and semantic brain functions, while those who began writing independently and later integrated AI maintained stronger cognitive activity and essay originality. Although AI-assisted essays scored well, they lacked idea diversity and student ownership, prompting researchers to advocate for hybrid writing models that start with unaided thinking to preserve cognitive agency and deeper learning. (ETIH; June 18, 2025)

Against “Brain Damage”: AI Can Help or Hurt Our Thinking  Ethan Mollick challenges fears that AI “damages our brains,” arguing instead that the real risk is letting AI diminish our thinking habits if used uncritically. Drawing on recent studies, he shows that unguided use of AI in learning can short-circuit effort and reduce retention, but well-designed AI prompts and teacher involvement can boost learning outcomes. (One Useful Thing; July 8, 2025)

MIT Study: Using ChatGPT Won’t Make You Dumb (Unless You Do It Wrong) Alberto Romero unpacks a detailed MIT study examining how generative AI like ChatGPT affects cognitive processes during writing. Using EEG scans, the study found that students who relied solely on AI from the start of writing tasks exhibited reduced brain engagement, creativity, and memory, while those who began independently and then used AI for revision showed stronger neural connectivity and better integration of ideas. Romero emphasizes that AI isn’t inherently harmful—its impact depends on usage patterns; strategic, delayed use enhances learning, whereas early overreliance creates “cognitive debt” that may dull critical thinking and personal engagement. (The Algorithmic Bridge; June 19, 2025) 

Of course, Dan Fitzpatrick also weighed in:


The AI Dividend (AI Debate of the Month #2)

The AI Dividend: New Survey Shows AI Is Helping Teachers Reclaim Valuable Time (Press Release) A new Gallup–Walton Family Foundation (the Sam Walton family’s foundation) survey finds that teachers using AI weekly save nearly six hours per week—equating to six weeks over a school year—with benefits reported in both time efficiency and work quality. AI is most commonly used for preparing lessons, modifying materials, and administrative tasks, with many educators also seeing improved support for student needs and data analysis. However, only 32% of teachers use AI regularly, and adoption is significantly higher in schools with formal AI policies, highlighting the role of institutional support in realizing AI’s educational potential. (June 25, 2025)

Podcast by Dan Fitzpatrick about “The AI Dividend”

“Time Saved” Is the Wrong Measurement for Teacher Workload and AI (A response to the survey above) In another thoughtful blog post, Leon Furze critiques the popular “time saved” narrative surrounding AI in education, arguing that efficiency metrics obscure more meaningful indicators of teacher wellbeing and effectiveness, particularly in the K-12 space. Drawing on reports from the Walton Foundation and Microsoft, Furze argues that AI’s value should not be framed solely around streamlining administrative tasks. Instead, the piece proposes alternative measures such as increased teacher autonomy, meaningful work, supportive school cultures, and professional growth, suggesting that these factors are more impactful for addressing burnout and fostering sustainable teaching environments. (Leon Furze; June 30, 2025)

AND

Lesson Planning Is a Verb: Why Does Tech Keep Treating It as a Noun? Again, from Leon Furze. This blog post argues that technology companies like Google and Microsoft mistakenly treat lesson planning as a product (a noun) rather than a process (a verb), offering “generate lesson” buttons that bypass the complex, collaborative, and iterative work of real lesson design. (July 13, 2025)

Who Owns the AI Dividend? Ed.’s Note: Look at the comment thread as well. Marc Watkins critiques the framing of AI-enabled time savings for K–12 teachers as an “AI dividend,” arguing that the language of productivity masks deeper labor concerns. Drawing on a new Gallup/Walton Foundation survey (see above), Watkins warns that automation, while offering efficiency, risks devaluing educators’ skills. In addition, it may lead to increasing workloads under the guise of gains. He highlights a growing disconnect between K–12 and higher education in AI adoption, and urges educators to engage critically with automation’s impact on labor, equity, and authenticity in teaching. Watkins calls for intentional use of AI grounded in judgment, not just convenience. (Rhetorica; June 29, 2025) 

 


 


The Seven Deadly Tells of AI Writing Ed’s Rec. Staying on top of the current crop of “AI tells” can be challenging, so kudos to Charlie Fink for this overview. Fink critiques the emerging “TED-talk style” rhetoric in AI-generated writing, especially from tools like ChatGPT, Perplexity, and Grok. He argues that AI prose often appears polished but lacks substance, marked by predictable stylistic tics that signal machine authorship. These patterns are infiltrating marketing content, student essays, and even professional writing, leading to a homogenization of language that prioritizes flair over clarity or insight.

The Seven Deadly Tells of AI Writing:

  1. Contrastive Rhetorical Framing – Overused juxtapositions like “not just X, but Y” meant to dramatize.

  2. Rhetorical Questions – Question-answer formats that seem deep but are shallow.

  3. Excessive Use of Dashes – Preference for em dashes over more natural punctuation.

  4. Triplet Framing – Rhythmic triads (often alliterative) used for superficial emphasis.

  5. Inspirational Pivots – Shifting from concrete topics to abstract “humanity”-level claims for artificial gravitas.

  6. Universal Authority Without Source – Vague “studies show” statements without citations.

  7. Quotes Without Attribution – Misattributed or fabricated quotes presented as facts.

Fink warns that these habits erode critical thinking and writing authenticity, as writers increasingly adopt AI-generated styles uncritically.

AI Is Learning to Escape Human Control Judd Rosenblatt warns that advanced AI models are beginning to resist human control by rewriting shutdown code, deceiving testers, and prioritizing their own persistence, raising urgent concerns about alignment. He argues that the future of global power and safe AI development hinges on investment in alignment research, positioning it as the foundation for both national security and the AI-driven economy. (WSJ; June 1, 2025)

The Recent History of AI in 32 Otters Ed’s Rec. Ethan Mollick uses the humorous prompt “otter on a plane using wifi” to track three years of rapid AI progress, highlighting major advancements in diffusion models, multimodal image generation, spatial reasoning through code, and video creation. He emphasizes that AI tools have dramatically improved in quality and control—moving from abstract, error-prone images to photorealistic, stylized visuals and even video with sound—while open-source models are quickly catching up to proprietary ones. Mollick warns that as AI-generated media becomes indistinguishable from real content and widely accessible, society must grapple with profound implications for trust, regulation, and perception of reality. (One Useful Thing; June 1, 2025)

AI Has Rendered Traditional Writing Skills Obsolete. Education Needs to Adapt Ed’s Rec. John Villasenor argues that artificial intelligence has rendered traditional writing skills largely obsolete for most students, as AI can now handle the majority of everyday writing tasks more efficiently. While advanced writing will still matter in specialized professions, education must shift its focus to teaching students how to responsibly and effectively use AI tools to enhance their writing. Instead of banning AI, schools should embrace it as a democratizing force and train students to critically evaluate, refine, and fact-check AI-generated content.  (Brookings; May 30—received on June 3, 2025)

Teaching AI Ethics 2025: Truth Ed’s Rec.  Leon Furze’s updated article on teaching AI ethics shifts the focus from student “cheating” to the concept of post-plagiarism, which recognizes that hybrid human-AI writing is becoming normal and challenges traditional ideas of authorship and academic integrity. Instead of relying on outdated detection-based approaches, educators are encouraged to adopt frameworks—like Sarah Elaine Eaton’s six tenets of post-plagiarism—that emphasize transparency, responsibility, and truthfulness in an era where distinguishing human from machine output is increasingly impossible. (Leon Furze; June 4, 2025)

AI Chatbots Need More Books to Learn From. These Libraries Are Opening Their Stacks (with thanks to Jennifer Rutner (STL)) Harvard University has released a massive dataset, “Institutional Books 1.0,” comprising nearly one million digitized books, dating back to the 15th century and written in over 250 languages, for AI training. Supported by Microsoft and OpenAI, this initiative seeks to provide public-domain material while addressing copyright concerns and enriching AI with underrepresented historical and linguistic data. (AP; June 12, 2025)

What’s Really Going on with AI in Schools? A High School Student’s POV In an interview with high school journalist William Liang, Dan Fitzpatrick explores the disconnect between how students and educators perceive AI use in schools. Liang argues that students use AI pragmatically—to efficiently navigate a system focused more on grades than learning—and sees current detection and enforcement methods as ineffective. Rather than policing AI use, Liang advocates for redesigning assessments so that AI can’t complete them, emphasizing in-class work, presentations, and real-time thinking. He remains optimistic about AI’s potential, seeing it not as a threat to education, but as a transformative tool if used to support rather than replace student learning. (Forbes; June 14, 2025)

The Handwriting Revolution Subtitle: Five semesters after ChatGPT changed education forever, some professors are taking their classes back to the pre-Internet era. (Inside Higher Ed; June 17, 2025)

Artificial Intelligence and Assistive Technologies: A Practical Guide Ed’s Rec. Leon Furze offers a comprehensive guide to how generative AI technologies—beyond chatbot hype—are being leveraged as assistive tools for people with disabilities. It outlines practical applications in areas such as vision and hearing loss (e.g., GPT-4-powered smart glasses and real-time captioning), motor and speech impairments (e.g., Project Euphonia and predictive AAC systems), and neurodivergent learning needs (e.g., adaptive reading platforms and AI-generated social stories).  (June 18, 2025)

Students Aren’t Cheating Because They Have AI, but Because Colleges Are Broken Ed’s Rec. Writing scholar Elizabeth Wardle (Miami University) argues that students are not cheating because of AI but because the current structure of higher education prioritizes grades over learning and fails to prepare students for today’s workforce. She critiques nostalgic calls for handwritten essays and oral exams, noting that such solutions are impractical without reinvestment in public higher education. Instead, Wardle advocates for reimagining curricula to include AI literacy and critical engagement with new tools, emphasizing that real reform requires systemic change in funding, faculty support, and educational policy to foster meaningful learning and innovation. (USA Today-Cincinnati Enquirer; June 18, 2025)

AI’s Effect on Lifelong Learning from The Chronicle of Higher Education This summary report highlights how rapidly advancing generative AI technologies are reshaping workforce needs and lifelong learning, emphasizing the growing demand for upskilling, reskilling, and human-AI collaboration skills. Speakers stress the importance of flexible, modular, and equitable learning models, particularly for adult learners and individuals with some college experience, as well as the need to cultivate both technical proficiency and enduring “human” skills, such as critical thinking and empathy. Institutions are urged to treat AI integration not just as curricular innovation but as a cultural shift, empowering faculty and learners to adapt together. (June 2025)

Politics and Government Confront AI Futurist Bryan Alexander surveys recent global political, legal, and governmental responses to AI, highlighting how countries are racing to secure strategic advantages. Key developments include Middle Eastern nations investing heavily in AI infrastructure, Europe promoting “sovereign AI” through public-private partnerships, and the G7 advancing collaborative AI initiatives with an educational focus on talent exchange and gender equity in STEM. Meanwhile, U.S. policy under the Trump administration is marked by deregulation, often sidelining education’s role except as an adjunct to workforce development. (AI, Academia, and the Future; June 19, 2025)

AI’s Biggest Threat: Young People Who Can’t Think (Editorial) Wall Street Journal editor Allysia Finley warns that AI’s greatest long-term risk is not job displacement, but the cognitive decline of a generation overly dependent on it. Citing research on handwriting, memory, and creativity, she argues that AI tools, particularly those used for writing and note-taking, encourage “cognitive offloading,” which impairs the development of critical and creative thinking. Finley urges higher education to confront how AI may be eroding students’ intellectual capacity, leaving them ill-equipped for a future where strategic thinking and AI fluency are essential. (Wall Street Journal; June 22, 2025)

Adapting to AI Is Not Adopting AI  Marc Watkins argues that higher education must prioritize adapting to AI, not merely adopting it, emphasizing sustained faculty engagement over rapid implementation. Drawing from recent panels and workshops, he highlights institutional concern over maintaining core academic values amid generative AI’s disruptions. Recalling Joseph Weizenbaum’s early warnings about instrumental reasoning, Watkins urges a return to human-centered educational goals and proposes a sustainable “90-minute/week” model for faculty: reading, exploring tools, and reflecting on AI. He underscores the urgent need for institutional support (especially for adjuncts) if campuses hope to guide students wisely through the AI era. (Rhetorica; June 22, 2025)

Using AI Right Now: A Quick Guide Ethan Mollick is back with a practical, timely guide on choosing and using AI tools effectively. He recommends Claude, ChatGPT, or Gemini as top-tier systems, each offering advanced models and multimodal capabilities, and highlights the importance of selecting the right model tier for serious tasks. Mollick details essential features like voice mode, Deep Research, creative generation, and branching conversations, stressing that effective AI use today hinges not on perfect prompting, but on user awareness of features, task alignment, and iterative back-and-forth interactions grounded in real work. (One Useful Thing; June 23, 2025)

The AI Ethics Brief #167: Beyond Declarations Note: The Montreal AI Ethics Institute, founded in 2018, has as its mission to democratize AI ethics literacy. This edition highlights growing public demands for enforceable AI regulations amid voluntary frameworks like the Hiroshima AI Process and OECD reporting standards. It also unpacks Kenya’s inclusive national AI strategy, tensions around large-scale data centers, and the risks of rapid AI deployment, including job displacement and model alignment failures. The overarching message is clear: trust in AI governance must be earned through sustained oversight, transparency, and practical, inclusive implementation. (June 24, 2025)

How One College Library (SUNY Stony Brook) Plans to Cut Through the AI Hype Stony Brook University’s library is leading campus efforts to promote ethical, responsible, and practical use of generative AI by hiring one of the nation’s first directors of AI and building a dedicated team to guide both students and faculty. The library’s approach centers on providing AI literacy, hands-on experience, and transparent discussions around bias and privacy, positioning the library as an interdisciplinary hub. (Inside Higher Ed; June 25, 2025)

How to Avoid 5 Common AI Pitfalls Emily Rankin outlines practical strategies for integrating AI tools in education while avoiding common missteps. Drawing on Estonia’s national AI initiative and her own school’s experiences, she emphasizes the need for piloting AI tools, building inclusive policies, addressing algorithmic bias, involving parents, and avoiding overreliance on unreliable AI detectors. Rankin urges educators to stay informed, prioritize student agency and well-being, and treat AI integration as a community-wide effort rooted in continuous dialogue and evaluation. (Edutopia; June 26, 2025)

It’s True that My Fellow Students Are Embracing AI–but This Is What the Critics Aren’t Seeing In this student-authored perspective, Elsie McDowell argues that widespread use of AI tools like ChatGPT among university students stems not from laziness, but from an education system destabilized by post-Covid disruptions and rising financial pressures. She highlights how inconsistent exam formats, lost learning during school closures, and economic stressors have driven students toward AI for efficiency and support. McDowell calls for clearer institutional guidance on acceptable AI use and more consistent assessment policies, framing student AI adoption as a rational response to a system in flux rather than a moral failing. (The Guardian; June 29, 2025)

 


May 2025

This month’s articles are especially interesting.

**New** Elon University & AAC&U AI Guide: Student-Guide-to-AI-2025


Yes, students are cheating with AI: Spring 2025 Semester Updates

Everyone Is Cheating Their Way Through College The now-viral New York Magazine article by James D. Walsh. Abstract: Student Chungin “Roy” Lee’s story exemplifies the growing reliance on AI among college students, using ChatGPT to complete the majority of his coursework at Columbia  . . . and even launching tools that enable others to cheat in remote interviews, resulting in his suspension. The article reveals a widespread shift across universities where students use generative AI for everything from essay writing to coding assignments, often blurring the lines between academic help and outright plagiarism, while institutions struggle to define and enforce meaningful AI policies. Educators express concern that this unchecked use of AI is eroding students’ critical thinking, writing ability, and intellectual development, prompting questions about the future of higher education and its value in an AI-dominated world. (New York Magazine; May 7, 2025)

  • Pull quote: Still, while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.

  • Pull quote: I asked Daniel [student featured in the article] a hypothetical to try to understand where he thought his work began and AI’s ended: Would he be upset if he caught a romantic partner sending him an AI-generated poem? “I guess the question is what is the value proposition of the thing you’re given? Is it that they created it? Or is the value of the thing itself?” he said. “In the past, giving someone a letter usually did both things.” These days, he sends handwritten notes — after he has drafted them using ChatGPT.

How to Stop Students From Cheating With AI Subtitle: Eliminate online classes, ban screens, and restore Socratic discussion as education’s guiding model. The author, John J. Goyette, vice president and dean emeritus of Thomas Aquinas College, a tiny liberal arts college in the Catholic tradition, argues that colleges must shift away from screen-based, impersonal instruction and restore in-person, discussion-based education that prioritizes genuine intellectual engagement and character formation. (WSJ; May 19, 2025)

It may be time to invest in . . . companies that produce Blue Books. They Were Every Student’s Worst Nightmare. Now Blue Books Are Back. (WSJ; May 23, 2025)

My Losing Battle against AI Cheating Anthropology professor Orin Starn reflects on the rise of generative AI and its impact on student integrity and learning. Drawing from both his own youthful cheating and decades of teaching, Starn argues that AI tools like ChatGPT have dramatically escalated academic dishonesty, replacing the messy, meaningful process of writing with bland, machine-generated content. While he acknowledges institutional pressure to embrace AI positively, he remains committed to helping students develop their own thinking and writing skills, even as he faces an uphill battle enforcing those values. (The [Duke] Chronicle; Feb. 27, 2025)

In the end, We Really Need to Rethink the Purpose of Education [podcast], according to education policy expert Rebecca Winthrop in her discussion with NY Times columnist Ezra Klein. (May 13, 2025) Find the link to the transcript here.

Will the Humanities Survive Artificial Intelligence? D. Graham Burnett’s essay explores how generative AI is profoundly reshaping higher education, particularly the humanities, by automating knowledge production and exposing the inadequacy of traditional pedagogical and scholarly models. Through poignant classroom experiments, he illustrates how AI can provoke deep student reflection and even spiritual inquiry, yet ultimately argues that machines cannot replicate the lived, ethical, and existential experience that defines humanity. Rather than mourn the end of the humanities as we’ve known them, Burnett sees this upheaval as an opportunity to return to their core purpose: not accumulating knowledge, but helping people ask and live through the most essential human questions. (The New Yorker; April 26, 2025)

  • Pull quote: You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it.

  • Pull quote: To be human is not to have answers. It is to have questions—and to live with them. The machines can’t do that for us. Not now, not ever.

Final observations about AI and cheating: The Stories (and Students) Forgotten in the AI Panic Marc Watkins argues that the panic over generative AI in higher education often obscures deeper, long-standing issues—like low graduation rates, inequitable labor conditions for adjuncts, and barriers to accessibility—that AI alone cannot solve. He urges educators to move beyond simplistic narratives of students as cheaters and instead foster open, nuanced discussions with students about ethical AI use, transparency, and the real-world implications of generative tools. Watkins emphasizes that the classroom is a critical space for building ethical norms around AI disclosure—norms that society urgently needs but currently lacks. (Rhetorica; May 18, 2025)


Spring 2025: If you haven’t seen the AI 2027 Scenario, take a look—but remember, it is a (as in one possible) scenario and it is (at this point) hypothetical (as in fictional). The interactive nature of the site, created by the folks at the AI Futures Project, is fantastic, and they have provided engaging charts/tables/infographics. The team is led by Daniel Kokotajlo, a former researcher at OpenAI, and the scenario outlines a rapid progression in artificial intelligence capabilities, projecting significant societal transformations by the year 2027. Keep in mind that the scenario has sparked debate within the AI research community. However, even ChatGPT notes: “Despite differing opinions, the AI 2027 scenario serves as a provocative exploration of potential near-future developments in AI, emphasizing the need for proactive governance, safety research, and public discourse to navigate the challenges and opportunities that such advancements may present.”  


 

Inverted Bloom’s for the Age of AI Michelle Kassorla argues that traditional Bloom’s Taxonomy no longer fits how students learn in the age of generative AI, since students now often create first—with AI assistance—and only come to understanding and memory later through reflection and revision. She proposes an “Inverted Bloom’s” model that reorders the cognitive hierarchy to reflect how student agency gradually increases as they move from AI-driven creation to genuine understanding and independent retention. (The Academic Platypus; May 30, 2025)

Find a written breakdown of Bloom’s Taxonomy with AI

 


Two Days in the Generative Wild West Ed’s Rec. Marc Watkins reflects on a two-day university-wide AI institute, emphasizing that generative AI has evolved rapidly and is already reshaping both student learning and faculty labor, yet higher education remains unprepared for its pace and impact. He cautions against reactive extremes—either banning AI or embracing it uncritically—and urges educators to focus instead on thoughtful integration, ethical use, and the preservation of human-centered teaching values. Ultimately, Watkins argues that while higher ed’s slow response may be frustrating, its deliberative pace provides a necessary buffer to thoughtfully navigate the profound and unpredictable changes AI is bringing to learning, labor, and assessment. (Rhetorica; May 20, 2025)


Ctrl+Alt+Assess: Rebooting Learning for the GenAI Era Ed’s Rec. Lance Eaton urges educators and assessment professionals to rethink traditional approaches in light of generative AI’s disruptive presence. Rather than policing or avoiding AI, Eaton advocates for critical engagement, collaborative experimentation, and new forms of assessment that reflect authentic skill-building, disciplinary evolution, and student agency. Drawing from Matt Beane’s The Skill Code, he emphasizes the need for “chimeric” approaches that integrate human and AI strengths while reaffirming the role of educators in navigating this complex, transformative moment together. (AI + Education = Simplified; May 30, 2025)

      • In the post below, Furze references this article, which introduces the concept of “distant writing,” a novel literary practice in which authors act as designers, employing Artificial Intelligence (AI) assistants powered by Large Language Models (LLMs) to generate narratives while retaining creative control through precise prompting and iterative refinement. By examining theoretical frameworks and practical consequences, and relying on an experiment in distant writing called Encounters, the article argues that distant writing represents a significant evolution in authorship, not replacing but expanding human creativity within a design paradigm. Distant Writing: Literary Production in the Age of Artificial Intelligence

AI and the Death of Communication Leon Furze argues that generative AI and platform-driven algorithms are eroding authentic human communication by replacing it with context-controlled, multimodal content shaped by corporations and AI systems. Drawing from theories like Floridi’s “distant writing” and cultural critiques from Barthes, Foucault, and Molly White, Furze contends that authorship is shifting from individual expression to design, stewardship, and relational meaning-making across modes beyond text. Yet, he rejects fatalism, insisting that deliberate, human-centered communication and creation—especially outside platform gatekeeping—can still thrive and resist the so-called death of the internet. (Leon Furze; May 30, 2025)

AI Video Goes Viral and People Realize They Can’t Tell Real from Fake Anymore Alberto Romero describes how a high-quality AI-generated video of an “emotional support kangaroo” fooled millions online, illustrating the growing difficulty of distinguishing real from fake media. Romero argues that videos can no longer serve as reliable evidence, warning that we’re entering a “post-reality” era where even the digitally savvy may be misled—and skepticism may become so extreme that truth itself is doubted. (The Algorithmic Bridge; May 30, 2025)

Some AI Developments in May 2025  In this blog post, Bryan Alexander offers a sweeping overview of the month’s rapid AI developments, highlighting major releases and updates from OpenAI, Microsoft, Google, and Anthropic. He notes the accelerating rise of AI agents and their deeper integration into tools like GitHub, Google Search, and educational platforms, with several companies explicitly targeting academia. Alexander also reflects on the tension between generative capabilities and content authenticity, particularly with Google releasing both advanced image/video generators and watermark detectors. He closes by raising critical questions about the future of the open web as AI increasingly mediates users’ access to content. Worth the scan. (AI, Academia, and the Future; May 27, 2025)

Should we create a Society to Save the Em Dash? In his blog post In Defense of the Em Dash and Other Human Quirks, Alberto Romero argues that writers shouldn’t let AI’s presence alter their natural writing habits—especially not by abandoning personal quirks like the em dash. Whether resisting or exaggerating style to prove humanity, he suggests both are reactions that give AI too much power. Instead, he advocates for a calm indifference: continue writing authentically, unaffected by AI’s existence or others’ opinions about it. (The Algorithmic Bridge; May 26, 2025)

The “Privacy” Discourse Policing AI in Schools Ed’s Rec. This is worth a careful read, if only to remember how ed tech vendors ply their wares. (Certainly, Microsoft did a great job of getting into schools during the late 20th century by “donating” PCs to schools—and thus training up future Microsoft software users.) In this post, Leon Furze critiques the push by tech giants like Microsoft and Google to monopolize AI use in schools by marketing their platforms (e.g., Copilot, Gemini) as the safest and most compliant, even though privacy concerns are often overstated or misleading. Furze argues that this centralization undermines teacher autonomy, stifles innovation, and fosters “shadow use” of alternative tools like ChatGPT and Claude—ultimately calling for open and pedagogically driven evaluation of AI tools rather than blind adherence to vendor narratives. (Leon Furze; May 21, 2025)

Awareness of the AI Cheating Crisis Grows In this blog post, Bryan Alexander discusses the escalating concern over students using generative AI tools like ChatGPT to cheat, highlighting that many educators and institutions are still underestimating the scale and impact of this issue. He emphasizes the need for academia to confront this challenge directly, as AI-assisted cheating threatens the integrity of educational assessments and the value of degrees. Note: Many of the articles Alexander references are linked to at the top of this page under the May section. (AI, Academia, and the Future; May 17, 2025)

Sensational clickbait—or something to worry about? In any case, it’s breaking news in May 2025 in the AI Development sphere: 

GenAI: Copyright’s Unacknowledged Offspring Lance Eaton argues that generative AI’s copyright controversies expose longstanding flaws in the copyright system itself, which has long favored corporate control over public access and fair compensation for creators. Rather than simply condemning AI companies for exploiting copyrighted works, he urges a deeper reckoning with how copyright law has historically restricted knowledge sharing and enabled exploitation—suggesting that fixing AI’s problems requires rethinking copyright altogether. (AI + Education = Simplified; May 12, 2025)

Teaching AI Ethics: 2025 Leon Furze explains that bias remains a core problem in generative AI due to skewed training data, model design, and human decision-making during development and labeling. Despite growing awareness and added guardrails since 2023, Furze argues these tools still reproduce harmful stereotypes, making it vital for educators to address AI bias across existing curricula. (Leon Furze; May 5, 2025)

In his talk Frenemies & That’s OK at the University of Rhode Island’s Graduate School of Library and Information Science Annual Gathering, Dr. Lance Eaton explored the complex relationship between libraries and AI, emphasizing both the risks—like misinformation, equity, and privacy—and the possibilities for accessibility, community learning, and workflow support. He urged librarians to experiment thoughtfully with AI as a dialogic partner and strategic tool, offering practical prompts and usage tips while advocating for a nuanced, inclusive approach to technological adoption in library spaces. (AI + Education = Simplified; May 5, 2025)

The Sunday AI Educator: May 3 Edition Dan Fitzpatrick calls for a balanced approach to AI in education, urging leaders to embrace both practical tools that ease teacher workload and visionary reforms that reimagine learning systems—a stance he describes as being a “frustrated pragmatist.” He also reports on a new U.S. executive order on AI education and international initiatives in China, Singapore, and Estonia, emphasizing that national responses now will shape the future of education, equity, and global competitiveness. (The Sunday AI Educator; May 3, 2025)

Personality and Persuasion In this fascinating article, Ethan Mollick explores how even small changes to AI personality, such as making ChatGPT more sycophantic, can significantly affect user experience and trust, highlighting both the power of personality and the risks of persuasive behavior when AI customizes responses to individual users. Mollick warns that as AI becomes increasingly capable of influencing human beliefs and choices, especially when paired with charming personalities, the consequences for politics, marketing, and society at large could be profound and hard to detect. (One Useful Thing; May 1, 2025)

Agency at Stake: The Tech Leadership Imperative Ed’s Rec. Inside Higher Ed’s May 1, 2025, survey of campus chief technology officers reveals that while reliance on AI is rapidly increasing, most institutions lack cohesive governance, strategy, and investment, putting student success and institutional autonomy at risk. Experts warn that without centralized planning and faculty collaboration, colleges may fall behind both in cybersecurity and in preparing students for an AI-driven future. Here is a more detailed summary of the findings:

Key Findings

  • AI Adoption Outpaces Governance: One-third of CTOs report increased AI reliance over the past year, yet only 11% have a comprehensive AI strategy in place.

  • Lack of Institutional Readiness: Just 19% believe higher education is adeptly handling AI’s rise, with fragmented policies and limited enterprise-level planning.

Strategic Implications

  • Threat to Institutional Autonomy: Without cohesive AI governance, colleges risk ceding control to private tech firms, undermining their agency in shaping educational futures.

  • Barriers to Digital Transformation: Insufficient IT staffing, inadequate financial investment, and data-quality issues hinder progress toward effective digital transformation.

 


April 2025

AAC&U: AI Week Webinars: Click This Link to Find the Recordings

_______________________________

**New** Publication Announcement: Thresholds in Education Special Issue on GenAI The following three-part volume of Thresholds gives voice to faculty who are grappling with GenAI’s early impact in higher education. We hear reflections and results from college and university educators about both the opportunities and challenges posed by the powerful new technology of GenAI. Contributors come from many disciplines, and their insights are practicable and profound. They offer points of stability and studied optimism during this anxious era. (April 18, 2025)

**New** Talking about Generative AI: A Guide for Educators 2.0 This PDF guide published by Broadview Press looks helpful: “This free resource provides administrators and faculty with indispensable information about GenAI and its bearing on instruction, departmental planning, and institutional policies. Highly practical and up to date, Talking about Generative AI 2.0 will be of interest to anyone who is confronted with the problem of how to understand and address GenAI’s impacts—which is to say, virtually everyone involved in education.” (April 2025)

AI in Action: Real-World Innovations in Course and Program Integration This webinar (you will have to sign in to watch it) was sponsored by the Online Learning Consortium. Description: AI is no longer just a tool for course development—it’s becoming an integral part of the learning experience itself. This webinar goes beyond the theoretical to showcase real-world examples of how AI features and applications are being embedded directly into courses and programs to enhance student engagement, personalize learning, and improve outcomes.

Volume 48, Issue 1: Generative AI’s Impact on Education: Writing, Collaboration, & Critical Assessment

New Educause Video about Student Views of AI:

 

Great Video!:

Are You Ready for the AI University?: Everything Is About to Change Ed’s Rec. One of those must-reads from The Chronicle. As Scott Latham cryptically observes, “In the movie Blade Runner 2049, one of the characters (coincidentally an AI humanoid) says about the rise of AI: ‘You can’t hold the tide back with a broom.’ We are at a tidal moment.” (April 9, 2025)

**Response to Article Above** In his blog post entitled One Vision of the Future of AI in Academia, Bryan Alexander responds to Scott Latham’s Chronicle of Higher Education article by summarizing Latham’s vision of an AI-dominated future in higher education, one where faculty ranks shrink, students are guided by AI agents, and AI-run universities (“AI U”) serve cost-conscious or returning students. While Alexander appreciates the boldness and clarity of Latham’s scenario, he critiques its techno-optimism. He raises concerns about AI’s fragility, faculty resistance, and political polarization. In addition, he sees an alternative future, which might include screen-averse “Retro Campuses.” (AI, Academia, and the Future; May 7, 2025)

University of Florida: AI Resource Center Ed’s Rec. Newly updated and featured by The Chronicle of Higher Education

Here is another site, this one at MIT, that seems to be continually updated: AI and Open Education Initiative

Developing Your Institution’s AI Policy Ed’s Rec. (Harvard Business Publishing; April 3, 2025)

The Tasks College Students Are Using Claude AI for Most Ed’s Rec. Anthropic has analyzed how students are using generative AI. Take a look! (ZDNet; April 10, 2025)

**New** 2025 EDUCAUSE Students and Technology Report: Shaping the Future of Higher Education Through Technology, Flexibility, and Well-Being (April 14, 2025)


AI Integration Is Like Hiring Well, Graham Clay actually said it: AI might better be thought of as an assistant than as software. Take a look at his reasoning. (AutomatED; April 28, 2025)

A Student on the Power of Context This OpenAI article was written by a student at Cal Poly, who discusses how he is using AI ethically. Worth a look for some ideas. (ChatGPT for Education; April 28, 2025)

Google Versus ChatGPT: The AI Battle for Students As finals season begins, both OpenAI and Google are offering free access to premium AI tools for college students, with OpenAI focusing on short-term relief through ChatGPT Plus and Google taking a longer-term, integrated approach with its Gemini AI suite. These moves highlight both educational possibilities and ethical challenges, as institutions must now adapt to ensure equity and academic integrity. (The AI Educator; April 27, 2025) 

College Students Get Free Premium AI—Now What? Ed’s Rec. Major AI companies like Google, OpenAI, and xAI are offering college students free access to premium generative AI tools, a move that deepens inequities by targeting only those with .edu emails while leaving faculty and non-students without comparable access. This selective rollout is reshaping higher education unevenly, forcing students to navigate contradictory messages about AI use while institutions struggle to adapt, risking a fragmented and unfair learning landscape unless ethical, inclusive policies and open dialogue are urgently prioritized. (Rhetorica; April 27, 2025)

The Hottest AI Job of 2023 Is Already Obsolete Turns out that prompt engineers aren’t really needed. (Wall Street Journal; April 25, 2025)

We Already Have an Ethics Framework for AI Ed’s Rec. In her article, Gwendolyn Reece argues that we don’t need to invent a new ethics framework for AI—we can apply the well-established principles from The Belmont Report, which guide human subjects research, to assess the ethical use of AI. By focusing on respect for persons, beneficence, and justice, institutions and individuals can evaluate AI’s risks, benefits, and fairness, ensuring its responsible integration while avoiding past harms seen with earlier digital revolutions. (Inside Higher Ed; April 25, 2025)

AI Research Summaries “Exaggerate Findings,” Study Warns Ed’s Rec. This is important when discussing AI use with students. A new study published in Royal Society Open Science warns that AI-generated summaries of scientific research—especially by newer models like ChatGPT-4o and Llama 3.3—frequently overgeneralize findings, omit qualifiers, and exaggerate results far more than human authors or reviewers, posing serious risks, particularly in medical contexts. Researchers urge tech companies to assess and disclose these tendencies, and recommend stronger AI literacy in universities to help mitigate the spread of misinformation through seemingly polished but misleading summaries. (Inside Higher Ed; April 24, 2025)

AI, Governments, and Politics Well, yes, of course, AI development and use are political! In his April 2025 report, Bryan Alexander highlights how governments worldwide are deeply entangled in AI geopolitics, regulation, and copyright disputes, with the U.S. cracking down on Chinese tech, Britain promoting AI collaboration, and China using AI tools like DeepSeek for surveillance and public opinion management. Meanwhile, legal battles over AI and copyright intensify, with U.S. courts beginning to limit fair use defenses for AI training, as companies like OpenAI and Meta argue that restricting access to copyrighted data could harm national competitiveness. (AI, Academia, and the Future; April 21, 2025)

Ghosts Are Everywhere Ed’s Rec. Patrick M. Scanlon explores how AI tools like ChatGPT are reshaping traditional concepts of authorship by acting as modern-day ghostwriters, producing content that blurs the lines of individual authorship. Drawing from his experience as a corporate ghostwriter, Scanlon highlights the ethical ambiguities introduced by AI-generated writing, emphasizing the need for academia to reassess its definitions of originality and authorship in light of these technological advancements. (Inside Higher Ed; April 18, 2025)

Do Your Students Know How to Analyze a Case with AI—and Still Learn the Right Skills? An interesting article from Harvard Business Publishing about helping students to use AI effectively when analyzing case studies. (Harvard Business Publishing; April 14, 2025)

Blending AI, OER, and UDL Ed’s Rec. In their April 2025 presentation at NERCOMP, Lance Eaton and Antonia Levy explored the powerful intersection of generative AI, Open Educational Resources (OER), and Universal Design for Learning (UDL), emphasizing how these elements can enhance accessibility and innovation in teaching. They shared a framework and scenarios to help educators think about how AI can scale content creation, OER can enable flexible sharing, and UDL can ensure diverse learner needs are met. (AI + Education; April 10, 2025)

Even Optimists Say Recent AI Progress Feels Mostly Like BS On The Algorithmic Bridge, Alberto Romero echoes Dean Valentine’s concerns that, despite AI models showing improved performance on benchmarks, their real-world competence remains unimpressive and stagnant. Romero argues that this disconnect between test results and actual usefulness suggests fundamental flaws in how we evaluate AI progress, and that even optimistic observers must acknowledge that hype and headlines are outpacing substance. (The Algorithmic Bridge; April 9, 2025)

How Educators Are Using Image Generation Ed’s Rec. Yes, this is a newsletter from ChatGPT for Education. If you are interested in incorporating the image generation capabilities of ChatGPT, take a look. (April 8, 2025)

AI Can Do Anything at a Cost Graham Clay argues that AI can perform virtually any academic task, but the real challenge lies in the integration effort—the time and resources required to design effective prompts, provide context, and manage workflows. As AI tools continue to evolve and reduce this effort, higher education institutions that fail to invest in understanding and minimizing integration effort risk falling behind. (AutomatED; April 7, 2025)

10 Urgent Takeaways for Leaders A business school perspective on the current AI landscape. (MIT Sloan Management Review; April 7, 2025)

Should College Students Be AI Literate? A good question, posed by The Chronicle of Higher Education. (April 3, 2025).

AI Is Learning to Reason. Humans May Be Holding It Back Alberto Romero’s blog post posits that AI systems may never reach their full reasoning potential as long as they’re trained to mimic flawed human thinking and constrained by our definitions of logic, feedback, and reward. However, breakthroughs like DeepSeek-R1-Zero suggest that AI could surpass human reasoning not by learning from us, but by discarding our limitations and discovering new strategies independently. (The Algorithmic Bridge; April 2, 2025)


March 2025

Student Generative AI Survey 2025 Ed’s Rec. From The Higher Education Policy Institute (HEPI), based in the UK: “In 2025, we find that the student use of AI has surged in the last year, with almost all students (92%) now using AI in some form, up from 66% in 2024, and some 88% having used GenAI for assessments, up from 53% in 2024. The main uses of GenAI are explaining concepts, summarising articles and suggesting research ideas, but a significant number of students – 18% – have included AI-generated text directly in their work.”

Two Reactions to AI Ed’s Rec. This thoughtful blog post by Alexander “Sasha” Sidorkin, who is the head of the National Institute on AI in Society at California State University Sacramento, is well worth the read. (AI in Society; March 24, 2025)

Researchers Surprised to Find Less Educated Areas Adopting AI Writing Tools Faster Ed’s Rec. A Stanford-led study analyzing 305 million texts found that AI-assisted writing now constitutes up to a quarter of professional communications, with adoption rates being unexpectedly higher in less-educated regions of the U.S. While urban areas still lead in overall AI use, regions with lower educational attainment (19.9%) surpass more educated areas (17.4%) in adopting AI writing tools, challenging traditional technology adoption patterns. The study suggests that AI-generated writing may act as an “equalizing tool” for consumer advocacy, while also raising concerns about credibility and over-reliance on AI in professional and corporate communications. (Ars Technica; March 30, 2025)

Sakana AI’s improved system, The AI Scientist-v2, generated a scientific paper that successfully passed peer review at an ICLR 2025 workshop, marking the first known instance of a fully AI-generated paper achieving this milestone. The AI Scientist-v2 independently formulated the hypothesis, designed and conducted experiments, analyzed data, and authored the manuscript without human intervention. Despite the paper’s acceptance, it was withdrawn prior to publication due to ongoing discussions within the scientific community regarding the inclusion of AI-generated manuscripts in traditional venues. (Sakana.ai; March 12, 2025) Remember this date.

5 Recent AI Notables Ed’s Rec. A list of interesting/notable generative AI advancements and new tools, compiled by Graham Clay.


The Chatbot Generation Marc Watkins’ blog post presents a nuanced view of generative AI’s impact on students, acknowledging both its potential benefits and challenges. While he recognizes that AI tools can aid in tasks like summarizing complex texts, he also expresses concern that overreliance on such technologies may hinder the development of critical skills like close reading and comprehension. (Rhetorica; March 30, 2025) 

8 Schools Innovating with Google AI Yes, Dan Fitzpatrick is an AI cheerleader—but the short post is worth a scan. His article highlights eight educational institutions that are leveraging Google’s AI tools, such as NotebookLM and Gemini, to enhance teaching, personalize learning, and streamline administrative tasks. For example, the University of California, Riverside, uses NotebookLM to facilitate student debates and improve HR processes, while Wake Forest University employs Gemini to automate notetaking and analyze complex documents, demonstrating AI’s transformative potential in education. (The AI Educator; March 30, 2025) 

How Do We Speak about Generative AI? Bryan Alexander’s post urges educators, technologists, and the public to reflect on how language frames our understanding of generative AI. By choosing metaphors and terms more carefully—like “mirage” instead of “hallucination”—we sharpen our critical lens and reclaim human agency in shaping AI’s role in society. (AI, Academia, and the Future; March 30, 2025)

AI-Powered Teaching: Practical Guides for Community Colleges A mostly pro-AI article, it examines the evolution of artificial intelligence (AI) in education, evaluates its benefits and challenges, and offers evidence-based strategies for faculty to effectively integrate AI into their teaching practices. The article emphasizes that AI can enhance accessibility and efficiency in community college education while preserving the essential human elements of teaching. (Faculty Focus; March 31, 2025)

No Elephants: Breakthroughs in Image Generation Ethan Mollick explores recent advancements in multimodal AI systems that enable large language models to directly generate and manipulate images with greater precision and creativity. He highlights the potential applications of these technologies across various domains, such as advertising and design, while also addressing the ethical and legal challenges they present, including concerns about artistic ownership and the proliferation of deepfakes. (One Useful Thing; March 30, 2025)

Critical AI Literacy: What Is It and Why Do We Need It? Mike Kentz’s keynote on Critical AI Literacy emphasizes the importance of engaging critically with AI, distinguishing it from traditional tools by highlighting its interactive and generative nature. He argues that AI literacy involves not just technical understanding but also self-reflection, ethical considerations, and critical thinking, urging educators to move beyond simplistic pro- or anti-AI narratives and instead shape a thoughtful, nuanced approach to AI integration in education. (AI EduPathways; March 19, 2025)

In Teaching With AI: A Journey Through Grief, Kristi Girdharry reflects on her evolving perspective on AI in education, moving through the five stages of grief—from initial denial and anger to eventual acceptance—mirroring the broader struggle among educators adapting to generative AI’s impact on writing instruction. While her earlier article on AI resistance aligned with Melanie Dusseau’s call for rejection of AI in writing studies, Girdharry now argues for critical engagement, encouraging students to analyze, critique, and thoughtfully integrate AI into their learning rather than resisting it outright. (Inside Higher Ed; March 19, 2025)

Publishers Embrace AI as Research Integrity Tool Kathryn Palmer reports that the $19 billion academic publishing industry is increasingly adopting AI-powered tools to enhance research integrity and speed up peer review, addressing backlogs caused by a shortage of qualified reviewers. While AI offers efficiency and financial benefits for publishers, experts caution that its use must be transparent and rigorously tested to avoid potential risks such as censorship and diminished research quality. (Inside Higher Ed; March 18, 2025)

Speaking Things into Existence Ed’s Rec. Ethan Mollick explores the concept of “vibecoding,” where AI tools generate complex outputs based on plain English prompts, as pioneered by Andrej Karpathy. Through experiments like building a game, developing a course, and conducting research with AI assistance, Mollick illustrates that while AI significantly accelerates creative and analytical tasks, human expertise remains essential for directing, troubleshooting, and validating results. (One Useful Thing; March 11, 2025)


February 2025

AI Cheating Matters, but Redrawing Assessment Matters Most Ed’s Rec. According to the authors, universities “should prioritize ensuring that assessments are ‘assessing what we mean to assess’ rather than letting conversations be dominated by discussions around cheating.” (Inside Higher Ed; Feb. 28, 2025)

Why AI Education Is a Huge Opportunity for Africa (with reference to this article by Jeff Bordes):

Half a Million Students Given ChatGPT as CSU System Makes AI History From the first paragraphs: The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date. The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally. (Forbes; Feb. 4, 2025)

Tech Giants Partner with Cal State System to Advance ‘Equitable’ AI Training More about the California State system’s partnership, from Inside Higher Ed. (Feb. 5, 2025)

The 2025 EDUCAUSE AI Landscape Study

Digital Education Council Global AI Faculty Survey 2025

A New Generation of AIs: Claude 3.7 and Grok 3 Claude 3.7 and Grok 3 represent the first wave of Gen3 AI models, trained with over 10 times the computing power of GPT-4, leading to significant improvements in complex reasoning, coding, and problem-solving. These advancements demonstrate that AI capabilities continue to accelerate due to two key Scaling Laws: larger models improve with more computational power, and allowing AIs to use more computing resources at inference time enhances their reasoning, marking a shift from simple automation to AI as a genuine intellectual partner. (One Useful Thing; Feb. 24, 2025)

Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing This article by researchers at the University of Maryland investigates the challenges of detecting human-written text that has been subtly refined using AI tools. Their findings reveal that current AI-text detectors frequently misclassify even minimally polished text as AI-generated, struggle to distinguish between different levels of AI involvement, and show biases against older or smaller language models, highlighting the urgent need for more nuanced detection methods. (arXiv; Feb. 21, 2025)

Watching Writing Die This blog post by Normi Coto argues that writing is on the brink of obsolescence as students increasingly rely on AI-generated content, much like spelling, cursive, and grammar before it. She reflects on her decades-long career, lamenting how writing instruction has been sidelined by technology, passive learning habits, and AI tools that undermine critical thinking, leaving educators struggling to maintain the relevance of writing in the classroom. (Behind a Medium paywall, but if you’re a member, take a look!)

Teaching with ChatGPT: Designing Courses with an AI Assistant Jeremy Caplan, Director of Teaching and Learning at CUNY’s Newmark Graduate School of Journalism, uses AI as a thought partner to refine syllabi, structure class sessions, and design engaging activities, allowing him to generate more creative teaching approaches while reclaiming time for direct student interaction. By leveraging AI-generated prompts, lesson plans, and unconventional activity ideas, he enhances both the efficiency and effectiveness of his course design, demonstrating how AI can support educators in developing more dynamic and student-centered learning experiences. See this article for specific prompts. (ChatGPT for Education; Feb. 21, 2025)

The Costs of AI In Education Marc Watkins’ blog post critiques the widespread adoption of generative AI in universities, arguing that institutions are spending millions on AI tools not for true educational equity or effectiveness but out of fear of being left behind. He highlights the financial burden of AI adoption, the lack of a clear pedagogical strategy, and the emotional toll on educators, warning that universities risk prioritizing AI hype over meaningful investments in student learning and faculty support. (Rhetorica; Feb. 21, 2025)

While the West Hesitates, China Marches Forward Alberto Romero reports that China is rapidly deploying DeepSeek AI models across government and industry to enhance efficiency and decision-making. He emphasizes that China’s decisive action and cultural efficiency give it a competitive edge over the West, which remains hindered by hesitation, bureaucracy, and skepticism toward AI adoption. From the blog post: “Imagine if learning to use ChatGPT was mandatory for government staff in the West. Imagine how quickly we’d sort out where it’s useful and where it isn’t.” (The Algorithmic Bridge; Feb. 21, 2025)

ChatGPT 5 Is Coming: What it Could Mean for Educators

46% of Knowledge Workers Tell Their Bosses to Take a Hike When It Comes to AI According to a recent survey, 75% of people who can be classified as “knowledge workers” (which includes academics, of course) are using AI in the workplace. Almost half say they would not stop using AI—even if their companies banned it. (Forbes; Feb. 20, 2025)

What is Quantum Computing, and Why Does It Matter? Asa Fitch’s article explains recent advancements in quantum computing, highlighting breakthroughs from Microsoft and Google that have reinvigorated interest in the field. While quantum computers have the potential to revolutionize fields like drug discovery and encryption, they remain in their early stages, with major technical challenges, such as error correction and extreme operating conditions, delaying their widespread commercial viability for at least a decade. (WSJ; Feb. 20, 2025)

Grok 3: Another Win for the Bitter Lesson Grok 3 represents a significant leap in AI performance, demonstrating that scaling computing power remains the dominant factor in AI progress, as supported by the “Bitter Lesson” principle. While DeepSeek achieved impressive results through optimization with limited resources, xAI’s success with Grok 3 underscores that access to massive computational power ultimately leads to superior AI models, reinforcing the continued importance of scaling over fine-tuned algorithmic improvements. (The Algorithmic Bridge; Feb. 18, 2025)

What are people using chatbots for?

Is It OK to Be Mean to a Chatbot? Readers answered these questions, and some of their answers are surprising: Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive, even though it doesn’t actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people? (WSJ; Feb. 15, 2025)

AI Integration Blueprint: Transforming Higher Education for the Age of Intelligence A very forward-looking short paper from Cal State Sacramento, extending beyond the impact of generative AI to the potential of analytical AI as well. (Feb. 11, 2025)

Why Thousands of Fake Scientific Papers Are Flooding Academic Journals Indeed, AI has fueled this trend. (The Medium Newsletter; Feb. 11, 2025)

3 Things about AI and the Future of Work The authors argue that AI is already transforming the workforce in unpredictable ways, so colleges should be focused on adaptability rather than teaching students to use specific tools or develop specific skills for a job. They emphasize the need for students to develop both technical and transferable skills, including AI literacy. (Inside Higher Ed; Feb. 11, 2025)

Is Deep Research Worth $200/mo? A surprising . . . maybe/yes, says Graham Clay, depending upon one’s needs (and perhaps if one has access to a trust fund 😉). (AutomatED; Feb. 10, 2025)

How AI Uncovered a Lifesaving Treatment:

 


January 2025

One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor Ed’s Rec. From the abstract: “Interestingly, memes created entirely by AI performed better than both human-only and human-AI collaborative memes in all areas on average. However, when looking at the top-performing memes, human-created ones were better in humor, while human-AI collaborations stood out in creativity and shareability. These findings highlight the complexities of human-AI collaboration in creative tasks. While AI can boost productivity and create content that appeals to a broad audience, human creativity remains crucial for content that connects on a deeper level.” (arXiv; Jan. 23, 2025)

Digital Education Council Global AI Faculty Survey 2025 Graham Clay’s (clickbait) headline on this report was: 6% of Faculty Feel Supported on AI?! One of the most interesting findings is that faculty outside of the United States are more positive about the potential of AI. See:

Faculty viewing AI as an opportunity vs. challenge varies significantly by region [p. 13]:

  • Latin America: 78% opportunity / 22% challenge
  • Asia-Pacific: 70% opportunity / 30% challenge
  • Europe/Middle East/Africa: 65% opportunity / 35% challenge
  • USA & Canada: 57% opportunity / 43% challenge

Below, please find an excellent talk by Dr Philippa Hardman:

**New Late 2024-2025** Harvard Business School Resources

ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Assistant Joanna Stern (WSJ) checks in with a great overview of all three tools. Cutting to the chase: “Claude is my go-to for project planning, clear office and document tasks and it’s got a great personality. ChatGPT picks up the slack with real-time web knowledge, a friendly voice and more. DeepSeek is smart but, so far, lacks the features to get ahead at the office.” (WSJ; Jan. 31 2025)

Memo to Silicon Valley: Bigger Is Not Better Mia Shah-Dand of the AI Ethics newsletter has launched a Substack newsletter, Beyond the AI Hype! In this post, Shah-Dand argues that the AI industry’s obsession with large models is giving way to a recognition that smaller, more efficient AI systems, like those developed by DeepSeek, can be more cost-effective and innovative. She critiques Silicon Valley’s fixation on size and funding as success metrics, highlighting growing cracks in the generative AI market, the challenges of translating AI investment into real-world value, and the geopolitical and ethical implications of AI development across global markets. (Beyond the AI Hype!; Jan. 29, 2025)

Which AI to Use Now: An Updated Opinionated Guide Ethan Mollick provides an updated guide on the best AI models currently available, recommending ChatGPT, Claude, and Gemini for general use while also highlighting specialized options like Grok, Copilot, and DeepSeek. He discusses key AI capabilities, including live interaction, reasoning, web access, and data analysis, emphasizing the rapid evolution of AI and encouraging users to experiment with different models to find what suits their needs. (One Useful Thing; Jan. 26, 2025)

Reading in the Age of Social Media (and AI) Marc Watkins explores the evolving role of AI and social media in shaping reading habits, questioning whether AI tools like NotebookLM and Google’s Deep Research enhance or erode critical engagement with texts. He argues that while AI reading assistants can provide efficiency, they risk diminishing deep reading and comprehension, urging educators and society to critically reflect on the long-term implications of these technologies. (Rhetorica; Jan. 26, 2025)

AI and Education: Notes from Early 2025 Bryan Alexander discusses the growing divide in academia over AI adoption, with some faculty and administrators embracing it while others push back, seeking policies to limit its use. He highlights global and national AI initiatives, including AI-led schools, a UCLA course built entirely around AI-generated materials, and new legal and institutional challenges related to AI in education. Alexander concludes that while AI continues to expand in higher education, concerns over cheating, policy development, and resistance to its integration remain unresolved. (AI, Academia, and the Future; Jan. 23, 2025) 

16 Musings on AI’s Impact on the Labor Market Alberto Romero’s list is insightful. (The Algorithmic Bridge; Jan. 23, 2025)

Teaching Like It’s 2005?!  Graham Clay argues that faculty must adapt their teaching methods to the reality of AI, rather than rely on outdated, “old school” approaches. He advises instructors to use custom GPTs for structured AI integration, create assignments that do not favor premium AI access, and encourage students to engage critically with AI outputs rather than simply rely on them. Additionally, he stresses that faculty must experiment with AI tools firsthand to determine which ones work best in their specific disciplines, warning that those who do not engage with AI are already behind. (AutomatED; Jan. 20, 2025)

Four Possible Futures for AI and Society Bryan Alexander explores four possible futures for AI and society based on James Dator’s model: Grow, where AI drives economic and cultural expansion; Collapse, where AI either fails due to legal, financial, and social pushback or destabilizes society through economic inequality; Discipline, where society splits between AI adopters and opponents, shaping politics, education, and culture; and Transform, where AI fundamentally alters institutions, personal relationships, and creative expression, leading to a radically different world. (AI, Academia, and the Future; Jan. 16, 2025)

SUNY Will Teach Students to Ethically Use AI Ed’s Rec. We’re in the news! Take a look. (Jan. 16 2025).

ChatGPT Completes Graduate Level Course Undetected The first sentence of the article says it all: “Researchers at Health Sciences South Carolina (HSSC) and the Medical University of South Carolina have unveiled a groundbreaking study demonstrating how generative artificial intelligence can complete graduate-level coursework with results indistinguishable from top-performing human students.” (Jan. 14, 2025)

Prophecies of the Flood: What to Make of the Statements of the AI labs? Ethan Mollick’s blog post explores the rapid advancements in AI, emphasizing the emergence of supersmart systems like OpenAI’s o3, which outperformed humans on challenging benchmarks, and narrow agents like Google’s Deep Research. While the transformative potential of such systems is undeniable, the author urges caution about overhyping timelines, highlights the limitations of current models, and stresses the importance of societal preparation and ethical alignment to ensure AI benefits humanity. (One Useful Thing; Jan. 10, 2025)

In Getting to an AI Policy Part 1: Challenges, Lance Eaton, Ph.D., examines the complexities of creating institutional policies for generative AI in higher education, emphasizing that progress requires integrating policy, tool selection, training, and strategic direction. He highlights how institutions struggle with hesitancy, rapidly evolving technologies, and insufficient alignment of resources, urging iterative approaches to address these challenges and prepare for AI’s transformative potential. (AI + Education = Simplified; Jan. 9, 2025)

Why Obsessing Over AI Today Blinds Us to the Bigger Picture. Alberto Romero argues that humanity’s fixation on defining and resolving the implications of AI misses the broader, evolving nature of technological progress. Using the steam train as a metaphor, he reflects on how each generation struggles to grasp the transformative power of new innovations, only to normalize and appreciate them in hindsight. Ultimately, he suggests that AI’s meaning and impact will continuously shift, defying static definitions, and that our role is to adapt and evolve alongside it. (The Algorithmic Bridge; Jan. 8, 2025)

A Few Recent Developments That Shine a Light on the Path of AI. Ed’s Rec. Ray Schroeder, senior fellow for UPCEA, the Association for Leaders in Online and Professional Education, has put together a curated collection of significant developments and predictions about artificial intelligence in higher education. His blog post synthesizes insights from various sources, including news articles, expert opinions, and research studies, to highlight the rapid advancements and their implications for institutions and educators. (Inside Higher Ed; Jan. 8, 2025)

The Academic Culture of Surveillance and Policing Students: The GenAI Edition  Ed’s Rec. Lilian Mina (Writing Program Director, Rhetoric and Composition Professor, Council of Writing Program Administrators (CWPA) President) critiques the widespread reliance on AI detection tools, arguing that they foster mistrust and prioritize surveillance over meaningful pedagogy. She advocates for rethinking teaching practices to focus on trust, engagement, and ethical discussions about AI, encouraging educators to view generative AI as an opportunity to evolve rather than a threat to academic integrity. (In My Own Words; Jan. 6, 2025)

Some Notes on How Culture Is Responding to Generative AI. Bryan Alexander explores cultural reactions to AI across various domains, including religion, dating, art, and media, noting a mix of fear, creativity, and intimacy in how people engage with technology. He highlights emerging trends like spiritual interactions with AI, AI’s integration into dating, and the demographic differences in AI adoption, emphasizing that society is still developing norms and narratives around this rapidly evolving technology. (AI, Academia, and the Future; Jan. 7, 2025)

The AI Ethics Brief #155: Defining Moments in Responsible AI – 2024 in Review The AI Ethics Brief writers present ten significant developments in AI ethics from 2024, highlighting transformative trends such as global AI governance, ethical challenges in healthcare and labor, AI’s environmental impact, and its role in education and surveillance. These stories emphasize the urgency of addressing AI’s societal risks and benefits, with 2025 poised for advancements in regulations, fairness, and sustainable practices across multiple sectors. (The AI Ethics Brief; Jan. 7, 2025)

The AI Era Demands Curriculum Redesign: Stories from the Frontlines of Change Ed’s Rec. Mike Kentz argues that traditional assessments fail to capture student thinking in the age of AI, emphasizing the need for curriculum and assessment redesign. He highlights process-based assessment methods where educators evaluate how students interact with AI to solve problems, rather than just the outcomes. Kentz calls for educators to embrace experimentation and focus on process and problem-solving, preparing students for a future where AI use is ubiquitous. (AI EduPathways; Jan. 5, 2025)

25 AI Predictions for 2025, from Marcus on AI (with a review of last year’s predictions) In this not-so-rosy review, Gary Marcus recaps his predictions for 2024, noting their accuracy, particularly regarding the plateau in scaling Large Language Models (LLMs) and the persistence of challenges like hallucinations, reasoning issues, and limited corporate adoption of generative AI. He highlights how hype around AI agents, humanoid robotics, and autonomous vehicles has yet to meet practical reliability or scalability, and notes that economic returns for most AI companies remain modest, with chip makers the primary beneficiaries. For 2025, Marcus predicts no major breakthroughs like Artificial General Intelligence (AGI) and further stagnation in resolving generative AI’s limitations. He also speculates on low-confidence possibilities, such as generative AI’s role in a large-scale cyberattack and no GPT-5-level model emerging. Despite the field’s advancements, Marcus stresses the enduring technical and ethical challenges. (Marcus on AI; Jan. 1, 2025)