June 2025
**New Resource from Writing Across the Curriculum Clearinghouse** The Field Guide to Effective Generative AI Use is a collaborative project featuring six educators who analyzed their own interactions with AI tools like ChatGPT and Claude through annotated transcripts and reflective self-analysis. Rather than offering optimization tips, the guide emphasizes metacognition, cognitive distancing, and pedagogically grounded insights into safe, ethical, and effective GenAI use in education. Hosted by the WAC Clearinghouse, the project aims to help educators better understand both the challenges and opportunities of AI in teaching by showcasing detailed “thinking aloud” approaches that reveal how instructors reason through real-world applications of AI in the classroom. (June 2, 2025)
AI’s Effect on Lifelong Learning from The Chronicle of Higher Education This summary report highlights how rapidly advancing generative AI technologies are reshaping workforce needs and lifelong learning, emphasizing the growing demand for upskilling, reskilling, and human-AI collaboration skills. Speakers stress the importance of flexible, modular, and equitable learning models—especially for adult learners, alumni, and those with some college experience—as well as the need to cultivate both technical proficiency and durable “human” skills like critical thinking and empathy. Institutions are urged to treat AI integration not just as curricular innovation but as a cultural shift, empowering faculty and learners to adapt together. (June 2025)
- AI Has Rendered Traditional Writing Skills Obsolete. Education Needs to Adapt Ed’s Rec. John Villasenor argues that artificial intelligence has rendered traditional writing skills largely obsolete for most students, as AI can now handle the majority of everyday writing tasks more efficiently. While advanced writing will still matter in specialized professions, education must shift its focus to teaching students how to responsibly and effectively use AI tools to enhance their writing. Instead of banning AI, schools should embrace it as a democratizing force and train students to critically evaluate, refine, and fact-check AI-generated content. (Brookings; May 30—received on June 3, 2025)
- Teaching AI Ethics 2025: Truth Ed’s Rec. Leon Furze’s updated article on teaching AI ethics shifts the focus from student “cheating” to the concept of post-plagiarism, which recognizes that hybrid human-AI writing is becoming normal and challenges traditional ideas of authorship and academic integrity. Instead of relying on outdated detection-based approaches, educators are encouraged to adopt frameworks—like Sarah Elaine Eaton’s six tenets of post-plagiarism—that emphasize transparency, responsibility, and truthfulness in an era where distinguishing human from machine output is increasingly impossible. (Leon Furze; June 4, 2025)
- Note: A full explanation of the chart below can be found at this site: Learning, Teaching, and Leadership by Sarah Elaine Eaton
- AI Is Learning to Escape Human Control Judd Rosenblatt warns that advanced AI models are beginning to resist human control by rewriting shutdown code, deceiving testers, and prioritizing their own persistence—raising urgent concerns about alignment. He argues that the future of global power and safe AI development hinges on investment in alignment research, positioning it as the foundation for both national security and the AI-driven economy. (WSJ; June 1, 2025)
- The Recent History of AI in 32 Otters Ed’s Rec. Ethan Mollick uses the humorous prompt “otter on a plane using wifi” to track three years of rapid AI progress, highlighting major advancements in diffusion models, multimodal image generation, spatial reasoning through code, and video creation. He emphasizes that AI tools have dramatically improved in quality and control—moving from abstract, error-prone images to photorealistic, stylized visuals and even video with sound—while open-source models are quickly catching up to proprietary ones. Mollick warns that as AI-generated media becomes indistinguishable from real content and widely accessible, society must grapple with profound implications for trust, regulation, and perception of reality. (One Useful Thing; June 1, 2025)
May 2025: This month’s articles are especially interesting
**New** Elon University & AAC&U AI Guide: Student-Guide-to-AI-2025
Yes, students are cheating with AI: Spring 2025 Semester Updates
Everyone Is Cheating Their Way Through College The now-viral New York Magazine article by James D. Walsh. Abstract: Student Chungin “Roy” Lee exemplifies the growing reliance on AI among college students: he used ChatGPT to complete the majority of his coursework at Columbia and even launched tools that enable others to cheat in remote interviews, resulting in his suspension. The article reveals a widespread shift across universities where students use generative AI for everything from essay writing to coding assignments, often blurring the lines between academic help and outright plagiarism, while institutions struggle to define and enforce meaningful AI policies. Educators express concern that this unchecked use of AI is eroding students’ critical thinking, writing ability, and intellectual development, prompting questions about the future of higher education and its value in an AI-dominated world. (New York Magazine; May 7, 2025)
- Pull quote: Still, while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.
- Pull quote: I asked Daniel [student featured in the article] a hypothetical to try to understand where he thought his work began and AI’s ended: Would he be upset if he caught a romantic partner sending him an AI-generated poem? “I guess the question is what is the value proposition of the thing you’re given? Is it that they created it? Or is the value of the thing itself?” he said. “In the past, giving someone a letter usually did both things.” These days, he sends handwritten notes — after he has drafted them using ChatGPT.
How to Stop Students From Cheating With AI Subtitle: Eliminate online classes, ban screens, and restore Socratic discussion as education’s guiding model. The author, John J. Goyette, vice president and dean emeritus of Thomas Aquinas College, a tiny liberal arts college in the Catholic tradition, argues that colleges must shift away from screen-based, impersonal instruction and restore in-person, discussion-based education that prioritizes genuine intellectual engagement and character formation. (WSJ; May 19, 2025)
It may be time to invest in . . . companies that produce Blue Books. They Were Every Student’s Worst Nightmare. Now Blue Books Are Back. (WSJ; May 23, 2025)
My Losing Battle against AI Cheating Anthropology professor Orin Starn reflects on the rise of generative AI and its impact on student integrity and learning. Drawing from both his own youthful cheating and decades of teaching, Starn argues that AI tools like ChatGPT have dramatically escalated academic dishonesty, replacing the messy, meaningful process of writing with bland, machine-generated content. While he acknowledges institutional pressure to embrace AI positively, he remains committed to helping students develop their own thinking and writing skills, even as he faces an uphill battle enforcing those values. (The [Duke] Chronicle; Feb. 27, 2025)
In the end, We Really Need to Rethink the Purpose of Education, [podcast] according to education policy expert Rebecca Winthrop in her discussion with NY Times columnist Ezra Klein. (May 13, 2025) Find the link to the transcript here.
Will the Humanities Survive Artificial Intelligence? D. Graham Burnett’s essay explores how generative AI is profoundly reshaping higher education, particularly the humanities, by automating knowledge production and exposing the inadequacy of traditional pedagogical and scholarly models. Through poignant classroom experiments, he illustrates how AI can provoke deep student reflection and even spiritual inquiry, yet ultimately argues that machines cannot replicate the lived, ethical, and existential experience that defines humanity. Rather than mourn the end of the humanities as we’ve known them, Burnett sees this upheaval as an opportunity to return to their core purpose: not accumulating knowledge, but helping people ask and live through the most essential human questions. (The New Yorker; April 26, 2025)
- Pull quote: You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it.
- Pull quote: To be human is not to have answers. It is to have questions—and to live with them. The machines can’t do that for us. Not now, not ever.
Final observations about AI and cheating: The Stories (and Students) Forgotten in the AI Panic Marc Watkins argues that the panic over generative AI in higher education often obscures deeper, long-standing issues—like low graduation rates, inequitable labor conditions for adjuncts, and barriers to accessibility—that AI alone cannot solve. He urges educators to move beyond simplistic narratives of students as cheaters and instead foster open, nuanced discussions with students about ethical AI use, transparency, and the real-world implications of generative tools. Watkins emphasizes that the classroom is a critical space for building ethical norms around AI disclosure—norms that society urgently needs but currently lacks. (Rhetorica; May 18, 2025)
Spring 2025: If you haven’t seen the AI 2027 Scenario, take a look—but remember, it is a (as in one possible) scenario and it is (at this point) hypothetical (as in fictional). The interactive nature of the site, created by the folks at the AI Futures Project, is fantastic, and they have provided engaging charts/tables/infographics. The team is led by Daniel Kokotajlo, a former researcher at OpenAI, and the scenario outlines a rapid progression in artificial intelligence capabilities, projecting significant societal transformations by the year 2027. Keep in mind that the scenario has sparked debate within the AI research community. However, even ChatGPT notes: “Despite differing opinions, the AI 2027 scenario serves as a provocative exploration of potential near-future developments in AI, emphasizing the need for proactive governance, safety research, and public discourse to navigate the challenges and opportunities that such advancements may present.”
- Inverted Bloom’s for the Age of AI In her blog post “Inverted Bloom’s for the Age of AI,” Michelle Kassorla argues that traditional Bloom’s Taxonomy no longer fits how students learn in the age of generative AI, since students now often create first—with AI assistance—and only come to understanding and memory later through reflection and revision. She proposes an “Inverted Bloom’s” model that reorders the cognitive hierarchy to reflect how student agency gradually increases as they move from AI-driven creation to genuine understanding and independent retention. (The Academic Platypus; May 30, 2025)
Find a written breakdown of Bloom’s Taxonomy with AI
- Two Days in the Generative Wild West Ed’s Rec. Marc Watkins reflects on a two-day university-wide AI institute, emphasizing that generative AI has evolved rapidly and is already reshaping both student learning and faculty labor, yet higher education remains unprepared for its pace and impact. He cautions against reactive extremes—either banning AI or embracing it uncritically—and urges educators to focus instead on thoughtful integration, ethical use, and the preservation of human-centered teaching values. Ultimately, Watkins argues that while higher ed’s slow response may be frustrating, its deliberative pace provides a necessary buffer to thoughtfully navigate the profound and unpredictable changes AI is bringing to learning, labor, and assessment. (Rhetorica; May 20, 2025)
- Ctrl+Alt+Assess: Rebooting Learning for the GenAI Era Ed’s Rec. Lance Eaton urges educators and assessment professionals to rethink traditional approaches in light of generative AI’s disruptive presence. Rather than policing or avoiding AI, Eaton advocates for critical engagement, collaborative experimentation, and new forms of assessment that reflect authentic skill-building, disciplinary evolution, and student agency. Drawing from Matt Beane’s The Skill Code, he emphasizes the need for “chimeric” approaches that integrate human and AI strengths while reaffirming the role of educators in navigating this complex, transformative moment together. (AI + Education = Simplified; May 30, 2025)
- Furze references this article, which introduces the concept of “distant writing”, a novel literary practice in which authors act as designers, employing Artificial Intelligence (AI) assistants powered by Large Language Models (LLMs) to generate narratives, while retaining creative control through precise prompting and iterative refinement. By examining theoretical frameworks and practical consequences, and relying on an experiment in distant writing called Encounters, this article argues that distant writing represents a significant evolution in authorship, not replacing but expanding human creativity within a design paradigm. Distant Writing: Literary Production in the Age of Artificial Intelligence
- AI and the Death of Communication Leon Furze argues that generative AI and platform-driven algorithms are eroding authentic human communication by replacing it with context-controlled, multimodal content shaped by corporations and AI systems. Drawing from theories like Floridi’s “distant writing” and cultural critiques from Barthes, Foucault, and Molly White, Furze contends that authorship is shifting from individual expression to design, stewardship, and relational meaning-making across modes beyond text. Yet, he rejects fatalism, insisting that deliberate, human-centered communication and creation—especially outside platform gatekeeping—can still thrive and resist the so-called death of the internet. (Leon Furze; May 30, 2025)
- AI Video Goes Viral and People Realize They Can’t Tell Real from Fake Anymore Alberto Romero describes how a high-quality AI-generated video of an “emotional support kangaroo” fooled millions online, illustrating the growing difficulty of distinguishing real from fake media. Romero argues that videos can no longer serve as reliable evidence, warning that we’re entering a “post-reality” era where even the digitally savvy may be misled—and skepticism may become so extreme that truth itself is doubted. (The Algorithmic Bridge; May 30, 2025)
- Some AI Developments in May 2025 In this blog post, Bryan Alexander offers a sweeping overview of the month’s rapid AI developments, highlighting major releases and updates from OpenAI, Microsoft, Google, and Anthropic. He notes the accelerating rise of AI agents and their deeper integration into tools like GitHub, Google Search, and educational platforms, with several companies explicitly targeting academia. Alexander also reflects on the tension between generative capabilities and content authenticity, particularly with Google releasing both advanced image/video generators and watermark detectors. He closes by raising critical questions about the future of the open web as AI increasingly mediates users’ access to content. Worth the scan. (AI, Academia, and the Future; May 27, 2025)
- Should we create a Society to Save the Em Dash? In his blog post In Defense of the Em Dash and Other Human Quirks, Alberto Romero argues that writers shouldn’t let AI’s presence alter their natural writing habits—especially not by abandoning personal quirks like the em dash. Whether resisting or exaggerating style to prove humanity, he suggests both are reactions that give AI too much power. Instead, he advocates for a calm indifference: continue writing authentically, unaffected by AI’s existence or others’ opinions about it. (The Algorithmic Bridge; May 26, 2025)
- The “Privacy” Discourse Policing AI in Schools Ed.’s Rec. This is worth a careful read, if only to remember how ed tech vendors ply their wares. (Certainly, Microsoft did a great job of getting into schools during the late 20th century by “donating” PCs to schools—and thus training up future Microsoft software users.) In this post by Leon Furze, he critiques the push by tech giants like Microsoft and Google to monopolize AI use in schools by marketing their platforms (e.g., Copilot, Gemini) as the safest and most compliant, even though privacy concerns are often overstated or misleading. The author argues that this centralization undermines teacher autonomy, stifles innovation, and fosters “shadow use” of alternative tools like ChatGPT and Claude—ultimately calling for open and pedagogically-driven evaluation of AI tools rather than blind adherence to vendor narratives. (Leon Furze; May 21, 2025)
- Awareness of the AI Cheating Crisis Grows In this blog post, Bryan Alexander discusses the escalating concern over students using generative AI tools like ChatGPT to cheat, highlighting that many educators and institutions are still underestimating the scale and impact of this issue. He emphasizes the need for academia to confront this challenge directly, as AI-assisted cheating threatens the integrity of educational assessments and the value of degrees. Note: Many of the articles Alexander references are linked to at the top of this page under the May section. (AI, Academia, and the Future; May 17, 2025)
- Sensational clickbait—or something to worry about? In any case, it’s breaking news in May 2025 in the AI Development sphere:
- GenAI: Copyright’s Unacknowledged Offspring Lance Eaton argues that generative AI’s copyright controversies expose longstanding flaws in the copyright system itself, which has long favored corporate control over public access and fair compensation for creators. Rather than simply condemning AI companies for exploiting copyrighted works, he urges a deeper reckoning with how copyright law has historically restricted knowledge sharing and enabled exploitation—suggesting that fixing AI’s problems requires rethinking copyright altogether. (AI + Education = Simplified; May 12, 2025)
- Teaching AI Ethics: 2025 Leon Furze explains that bias remains a core problem in generative AI due to skewed training data, model design, and human decision-making during development and labeling. Despite growing awareness and added guardrails since 2023, Furze argues these tools still reproduce harmful stereotypes, making it vital for educators to address AI bias across existing curricula. (Leon Furze; May 5, 2025)
- In his talk Frenemies & That’s OK at the University of Rhode Island’s Graduate School of Library and Information Science Annual Gathering, Dr. Lance Eaton explored the complex relationship between libraries and AI, emphasizing both the risks—like misinformation, equity, and privacy—and the possibilities for accessibility, community learning, and workflow support. He urged librarians to experiment thoughtfully with AI as a dialogic partner and strategic tool, offering practical prompts and usage tips while advocating for a nuanced, inclusive approach to technological adoption in library spaces. (AI + Education = Simplified; May 5, 2025)
- The Sunday AI Educator: May 3 Edition Dan Fitzpatrick calls for a balanced approach to AI in education, urging leaders to embrace both practical tools that ease teacher workload and visionary reforms that reimagine learning systems—a stance he describes as being a “frustrated pragmatist.” He also reports on a new U.S. executive order on AI education and international initiatives in China, Singapore, and Estonia, emphasizing that national responses now will shape the future of education, equity, and global competitiveness. (The Sunday AI Educator; May 3, 2025)
- Personality and Persuasion In this fascinating article, Ethan Mollick explores how even small changes to AI personality — such as making ChatGPT more sycophantic — can significantly affect user experience and trust, highlighting both the power of personality and the risks of persuasive behavior when AI customizes responses to individual users. Mollick warns that as AI becomes increasingly capable of influencing human beliefs and choices, especially when paired with charming personalities, the consequences for politics, marketing, and society at large could be profound and hard to detect. (One Useful Thing; May 1, 2025)
- Agency at Stake: The Tech Leadership Imperative Ed’s Rec. Inside Higher Ed’s 2025 survey of campus chief technology officers (May 1) reveals that while reliance on AI is rapidly increasing, most institutions lack cohesive governance, strategy, and investment—putting student success and institutional autonomy at risk. Experts warn that without centralized planning and faculty collaboration, colleges may fall behind both in cybersecurity and in preparing students for an AI-driven future. Here is a more detailed summary of the findings:
Key Findings
- AI Adoption Outpaces Governance: One-third of CTOs report increased AI reliance over the past year, yet only 11% have a comprehensive AI strategy in place.
- Lack of Institutional Readiness: Just 19% believe higher education is adeptly handling AI’s rise, with fragmented policies and limited enterprise-level planning.
Strategic Implications
- Threat to Institutional Autonomy: Without cohesive AI governance, colleges risk ceding control to private tech firms, undermining their agency in shaping educational futures.
- Barriers to Digital Transformation: Insufficient IT staffing, inadequate financial investment, and data-quality issues hinder progress toward effective digital transformation.
April 2025
AAC&U: AI Week Webinars: Click This Link to Find the Recordings
_______________________________
**New**: Publication Announcement: Thresholds in Education Special Issue on GenAI The following three-part volume of Thresholds gives voice to faculty who are grappling with GenAI’s early impact in higher education. We hear reflections and results from college and university educators about both the opportunities and challenges posed by the powerful new technology of GenAI. Contributors come from many disciplines, and their insights are practicable and profound. They offer points of stability and studied optimism during this anxious era. (April 18, 2025)
**New** Talking about Generative AI: A Guide for Educators 2.0 This PDF guide published by Broadview Press looks helpful: “This free resource provides administrators and faculty with indispensable information about GenAI and its bearing on instruction, departmental planning, and institutional policies. Highly practical and up to date, Talking about Generative AI 2.0 will be of interest to anyone who is confronted with the problem of how to understand and address GenAI’s impacts—which is to say, virtually everyone involved in education.” (April 2025)
AI in Action: Real-World Innovations in Course and Program Integration This webinar (you will have to sign in to watch it) was sponsored by the Online Learning Consortium. Description: AI is no longer just a tool for course development—it’s becoming an integral part of the learning experience itself. This webinar goes beyond the theoretical to showcase real-world examples of how AI features and applications are being embedded directly into courses and programs to enhance student engagement, personalize learning, and improve outcomes.
Volume 48, Issue 1: Generative AI’s Impact on Education: Writing, Collaboration, & Critical Assessment
New EDUCAUSE Video about Student Views of AI (great video!):
Are You Ready for the AI University?: Everything Is About to Change Ed’s Rec. One of those must-reads from The Chronicle. As Scott Latham cryptically observes, “In the movie Blade Runner 2049, one of the characters (coincidentally an AI humanoid) says about the rise of AI: ‘You can’t hold the tide back with a broom.’ We are at a tidal moment.” (April 9, 2025)
**Response to Article Above** In his blog post entitled One Vision of the Future of AI in Academia, Bryan Alexander responds to Scott Latham’s Chronicle of Higher Education article by summarizing Latham’s vision of an AI-dominated future in higher education—one where faculty ranks shrink, students are guided by AI agents, and AI-run universities (“AI U”) serve cost-conscious or returning students. While Alexander appreciates the boldness and clarity of Latham’s scenario, he critiques its techno-optimism. He raises concerns about AI’s fragility, faculty resistance, and political polarization. In addition, he sees an alternative future, which might include screen-averse “Retro Campuses.” (AI, Academia, and the Future; May 7, 2025)
University of Florida: AI Resource Center Ed’s Rec. Newly updated and featured by The Chronicle of Higher Education
Here is another site, this one at MIT, that seems to be continually updated: AI and Open Education Initiative
Developing Your Institution’s AI Policy Ed’s Rec. (Harvard Business Publishing; April 3, 2025)
The Tasks College Students Are Using Claude AI for Most Ed’s Rec. Anthropic has analyzed how students are using generative AI. Take a look! (ZDnet; April 10, 2025)
**New** 2025 EDUCAUSE Students and Technology Report: Shaping the Future of Higher Education Through Technology, Flexibility, and Well-Being (April 14, 2025)
- AI Integration Is Like Hiring / Well, Graham Clay actually said it: AI might better be thought of as an assistant than software. Take a look at his reasoning. (AutomatedED; April 28, 2025)
- A Student on the Power of Context This OpenAI article was written by a student at Cal Poly, who discusses how he is using AI ethically. Worth a look for some ideas. (ChatGPT for Education; April 28, 2025)
- Google Versus ChatGPT: The AI Battle for Students As finals season begins, both OpenAI and Google are offering free access to premium AI tools for college students, with OpenAI focusing on short-term relief through ChatGPT Plus and Google taking a longer-term, integrated approach with its Gemini AI suite. These moves highlight both educational possibilities and ethical challenges, as institutions must now adapt to ensure equity and academic integrity. (The AI Educator; April 27, 2025)
- College Students Get Free Premium AI—Now What? Ed’s Rec. Major AI companies like Google, OpenAI, and xAI are offering college students free access to premium generative AI tools, a move that deepens inequities by targeting only those with .edu emails while leaving faculty and non-students without comparable access. This selective rollout is reshaping higher education unevenly, forcing students to navigate contradictory messages about AI use while institutions struggle to adapt, risking a fragmented and unfair learning landscape unless ethical, inclusive policies and open dialogue are urgently prioritized. (Rhetorica; April 27, 2025)
- The Hottest AI Job of 2023 Is Already Obsolete Turns out that prompt engineers aren’t really needed. (Wall Street Journal; April 25, 2025)
- We Already Have an Ethics Framework for AI Ed’s Rec. In her article, Gwendolyn Reece argues that we don’t need to invent a new ethics framework for AI—we can apply the well-established principles from The Belmont Report, which guide human subjects research, to assess the ethical use of AI. By focusing on respect for persons, beneficence, and justice, institutions and individuals can evaluate AI’s risks, benefits, and fairness, ensuring its responsible integration while avoiding past harms seen with earlier digital revolutions. (Inside Higher Ed; April 25, 2025)
- AI Research Summaries “Exaggerate Findings,” Study Warns Ed’s Rec. This is important when discussing AI use with students. A new study published in Royal Society Open Science warns that AI-generated summaries of scientific research—especially by newer models like ChatGPT-4o and Llama 3.3—frequently overgeneralize findings, omit qualifiers, and exaggerate results far more than human authors or reviewers, posing serious risks, particularly in medical contexts. Researchers urge tech companies to assess and disclose these tendencies, and recommend stronger AI literacy in universities to help mitigate the spread of misinformation through seemingly polished but misleading summaries. (Inside Higher Ed; April 24, 2025)
- AI, Governments, and Politics Well, yes, of course, AI development and use is political! In his April 2025 report, Bryan Alexander highlights how governments worldwide are deeply entangled in AI geopolitics, regulation, and copyright disputes, with the U.S. cracking down on Chinese tech, Britain promoting AI collaboration, and China using AI tools like Deepseek for surveillance and public opinion management. Meanwhile, legal battles over AI and copyright intensify, with U.S. courts beginning to limit fair use defenses for AI training, as companies like OpenAI and Meta argue that restricting access to copyrighted data could harm national competitiveness. (AI, Academia, and the Future; April 21, 2025)
- Ghosts Are Everywhere Ed’s Rec. Patrick M. Scanlon explores how AI tools like ChatGPT are reshaping traditional concepts of authorship by acting as modern-day ghostwriters, producing content that blurs the lines of individual authorship. Drawing from his experience as a corporate ghostwriter, Scanlon highlights the ethical ambiguities introduced by AI-generated writing, emphasizing the need for academia to reassess its definitions of originality and authorship in light of these technological advancements. (Inside Higher Ed; April 18, 2025)
- Do Your Students Know How to Analyze a Case with AI—and Still Learn the Right Skills? An interesting article from Harvard Business Publishing about helping students to use AI effectively when analyzing case studies. (Harvard Business Publishing; April 14, 2025)
- Blending AI, OER, and UDL Ed’s Rec. In their April 2025 presentation at NERCOMP, Lance Eaton and Antonia Levy explored the powerful intersection of generative AI, Open Educational Resources (OER), and Universal Design for Learning (UDL), emphasizing how these elements can enhance accessibility and innovation in teaching. They shared a framework and scenarios to help educators think about how AI can scale content creation, OER can enable flexible sharing, and UDL can ensure diverse learner needs are met. (AI + Education; April 10, 2025)
- Even Optimists Say Recent AI Progress Feels Mostly Like BS Writing on The Algorithmic Bridge, Alberto Romero echoes Dean Valentine’s concerns that despite AI models showing improved performance on benchmarks, their real-world competence remains unimpressive and stagnant. Romero argues that this disconnect between test results and actual usefulness suggests fundamental flaws in how we evaluate AI progress—and that even optimistic observers must acknowledge that hype and headlines are outpacing substance. (The Algorithmic Bridge; April 9, 2025)
- How Educators Are Using Image Generation Ed’s Rec. Yes, this is a newsletter from ChatGPT for Education. If you are interested in incorporating the image generation capabilities of ChatGPT, take a look. (April 8, 2025)
- AI Can Do Anything at a Cost Graham Clay argues that AI can perform virtually any academic task, but the real challenge lies in the integration effort—the time and resources required to design effective prompts, provide context, and manage workflows. As AI tools continue to evolve and reduce this effort, higher education institutions that fail to invest in understanding and minimizing integration effort risk falling behind. (AutomatED; April 7, 2025)
- 10 Urgent Takeaways for Leaders From MIT Sloan Management Review. A business school perspective on the current AI landscape. (April 7, 2025)
- Should College Students Be AI Literate? A good question, posed by The Chronicle of Higher Education. (April 3, 2025)
- AI Is Learning to Reason. Humans May Be Holding It Back Alberto Romero’s blog post posits that AI systems may never reach their full reasoning potential as long as they’re trained to mimic flawed human thinking and constrained by our definitions of logic, feedback, and reward. However, breakthroughs like DeepSeek-R1-Zero suggest that AI could surpass human reasoning not by learning from us, but by discarding our limitations and discovering new strategies independently. (The Algorithmic Bridge; April 2, 2025)
March 2025
Student Generative AI Survey 2025 Ed’s Rec. From The Higher Education Policy Institute (HEPI), based in the UK: “In 2025, we find that the student use of AI has surged in the last year, with almost all students (92%) now using AI in some form, up from 66% in 2024, and some 88% having used GenAI for assessments, up from 53% in 2024. The main uses of GenAI are explaining concepts, summarising articles and suggesting research ideas, but a significant number of students – 18% – have included AI-generated text directly in their work.”
Two Reactions to AI Ed’s Rec. This thoughtful blog post by Alexander “Sasha” Sidorkin, who is the head of the National Institute on AI in Society at California State University Sacramento, is well worth the read. (AI in Society; March 24, 2025)
Researchers Surprised to Find Less Educated Areas Adopting AI Writing Tools Faster Ed’s Rec. A Stanford-led study analyzing 305 million texts found that AI-assisted writing now constitutes up to a quarter of professional communications, with adoption rates being unexpectedly higher in less-educated regions of the U.S. While urban areas still lead in overall AI use, regions with lower educational attainment (19.9%) surpass more educated areas (17.4%) in adopting AI writing tools, challenging traditional technology adoption patterns. The study suggests that AI-generated writing may act as an “equalizing tool” for consumer advocacy, while also raising concerns about credibility and over-reliance on AI in professional and corporate communications. (Ars Technica; March 30, 2025)
Sakana AI’s improved system, The AI Scientist-v2, generated a scientific paper that successfully passed peer review at an ICLR 2025 workshop, marking the first known instance of a fully AI-generated paper achieving this milestone. The AI Scientist-v2 independently formulated the hypothesis, designed and conducted experiments, analyzed data, and authored the manuscript without human intervention. Despite the paper’s acceptance, it was withdrawn prior to publication due to ongoing discussions within the scientific community regarding the inclusion of AI-generated manuscripts in traditional venues. (Sakana.ai; March 12, 2025) Remember this date.
- 5 Recent AI Notables Ed’s Rec. A list of interesting/notable generative AI advancements and new tools. Here is the list compiled by Graham Clay:

1. OpenAI’s New Image Generator
What Happened: OpenAI integrated a much more powerful image generator directly into GPT-4o, making it the default image creator in ChatGPT. Unlike previous image models, this one excels at accurately rendering text in images, precise visualization of diagrams/charts, and multi-turn image refinement through conversation.
Why It’s Big: For educators, this represents a significant advancement in creating educational visuals, infographics, diagrams, and other instructional materials with unprecedented accuracy and control. It’s not perfect, but you can now quickly generate custom illustrations that accurately display mathematical equations, chemical formulas, or process workflows — previously a significant hurdle in digital content creation — without requiring graphic design expertise or expensive software.

2. Google Releases More Educator-Focused AI Courses
What Happened: Google continues to put money into AI education initiatives, including two new courses specifically for K12 (click here) and higher education instructors (click here) on using Google AI effectively in educational settings. These complement their existing resources like the Generative AI for Educators course (created with MIT RAISE), the updated Guardian’s Guide to AI, and the Experience AI program.

3. Claude Search
Why It’s Big: For academics and researchers, this means Claude can now help gather current literature, identify research gaps, and assist with building stronger grant proposals, literature reviews, etc.

4. Gemini 2.5 Pro
What Happened: Google DeepMind released Gemini 2.5 Pro Experimental, which they describe as their “most intelligent AI model” designed for complex problems. It’s a reasoning model that’s currently #1 on the LMArena leaderboard (which measures human preferences), and claims state-of-the-art performance across reasoning, math, and coding benchmarks.

5. Microsoft’s New Agents
What Happened: Microsoft introduced two AI “reasoning agents” for Microsoft 365 Copilot: Researcher and Analyst. Researcher combines OpenAI’s Deep Research model with Copilot’s orchestration to help with complex, multi-step research tasks by analyzing work data alongside web information.
The Chatbot Generation Marc Watkins’ blog post presents a nuanced view of generative AI’s impact on students, acknowledging both its potential benefits and challenges. While he recognizes that AI tools can aid in tasks like summarizing complex texts, he also expresses concern that overreliance on such technologies may hinder the development of critical skills like close reading and comprehension. (Rhetorica; March 30, 2025)
- 8 Schools Innovating with Google AI Yes, Dan Fitzpatrick is an AI cheerleader—but the short post is worth a scan. His article highlights eight educational institutions that are leveraging Google’s AI tools, such as NotebookLM and Gemini, to enhance teaching, personalize learning, and streamline administrative tasks. For example, the University of California, Riverside, uses NotebookLM to facilitate student debates and improve HR processes, while Wake Forest University employs Gemini to automate notetaking and analyze complex documents, demonstrating AI’s transformative potential in education. (The AI Educator; March 30, 2025)
- How Do We Speak about Generative AI? Bryan Alexander’s post urges educators, technologists, and the public to reflect on how language frames our understanding of generative AI. By choosing metaphors and terms more carefully—like “mirage” instead of “hallucination”—we sharpen our critical lens and reclaim human agency in shaping AI’s role in society. (AI, Academia, and the Future; March 30, 2025)
- AI-Powered Teaching: Practical Guides for Community Colleges A mostly pro-AI article, it examines the evolution of artificial intelligence (AI) in education, evaluates its benefits and challenges, and offers evidence-based strategies for faculty to effectively integrate AI into their teaching practices. The article emphasizes that AI can enhance accessibility and efficiency in community college education while preserving the essential human elements of teaching. (Faculty Focus; March 31, 2025)
- No Elephants: Breakthroughs in Image Generation Ethan Mollick explores recent advancements in multimodal AI systems that enable large language models to directly generate and manipulate images with greater precision and creativity. He highlights the potential applications of these technologies across various domains, such as advertising and design, while also addressing the ethical and legal challenges they present, including concerns about artistic ownership and the proliferation of deepfakes. (One Useful Thing; March 30, 2025)
- Critical AI Literacy: What Is It and Why Do We Need It? Mike Kentz’s keynote on Critical AI Literacy emphasizes the importance of engaging critically with AI, distinguishing it from traditional tools by highlighting its interactive and generative nature. He argues that AI literacy involves not just technical understanding but also self-reflection, ethical considerations, and critical thinking, urging educators to move beyond simplistic pro- or anti-AI narratives and instead shape a thoughtful, nuanced approach to AI integration in education. (AI EduPathways; March 19, 2025)
- In Teaching With AI: A Journey Through Grief, Kristi Girdharry reflects on her evolving perspective on AI in education, moving through the five stages of grief—from initial denial and anger to eventual acceptance—mirroring the broader struggle among educators adapting to generative AI’s impact on writing instruction. While her earlier article on AI resistance aligned with Melanie Dusseau’s call for rejection of AI in writing studies, Girdharry now argues for critical engagement, encouraging students to analyze, critique, and thoughtfully integrate AI into their learning rather than resisting it outright. (Inside Higher Ed; March 19, 2025)
- Publishers Embrace AI as Research Integrity Tool Kathryn Palmer reports that the $19 billion academic publishing industry is increasingly adopting AI-powered tools to enhance research integrity and speed up peer review, addressing backlogs caused by a shortage of qualified reviewers. While AI offers efficiency and financial benefits for publishers, experts caution that its use must be transparent and rigorously tested to avoid potential risks such as censorship and diminished research quality. (Inside Higher Ed; March 18, 2025)
- Speaking Things into Existence Ed’s Rec. Ethan Mollick explores the concept of “vibecoding,” where AI tools generate complex outputs based on plain English prompts, as pioneered by Andrej Karpathy. Through experiments like building a game, developing a course, and conducting research with AI assistance, Mollick illustrates that while AI significantly accelerates creative and analytical tasks, human expertise remains essential for directing, troubleshooting, and validating results. (One Useful Thing; March 11, 2025)
February 2025
Ed’s Rec. AI Cheating Matters, but Redrawing Assessment Matters Most According to the authors, universities “should prioritize ensuring that assessments are ‘assessing what we mean to assess’ rather than letting conversations be dominated by discussions around cheating.” (Inside Higher Ed; Feb. 28, 2025)
Why AI Education Is a Huge Opportunity for Africa (with reference to this article by Jeff Bordes)
Half a Million Students Given ChatGPT as CSU System Makes AI History From the first paragraphs: The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date. The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally. (Forbes; Feb. 4, 2025)
Tech Giants Partner with Cal State System to Advance ‘Equitable’ AI Training More about the California State system’s partnership, from Inside Higher Ed. (Feb. 5, 2025)
- Digital Education Council Global AI Faculty Survey 2025
- A New Generation of AIs: Claude 3.7 and Grok 3 Claude 3.7 and Grok 3 represent the first wave of Gen3 AI models, trained with over 10 times the computing power of GPT-4, leading to significant improvements in complex reasoning, coding, and problem-solving. These advancements demonstrate that AI capabilities continue to accelerate due to two key Scaling Laws: larger models improve with more computational power, and allowing AIs to use more computing resources at inference time enhances their reasoning, marking a shift from simple automation to AI as a genuine intellectual partner. (One Useful Thing; Feb. 24, 2025)
- Almost AI, Almost Human: The Challenge of Detecting AI-Polished Writing This article by researchers at the University of Maryland investigates the challenges of detecting human-written text that has been subtly refined using AI tools. Their findings reveal that current AI-text detectors frequently misclassify even minimally polished text as AI-generated, struggle to distinguish between different levels of AI involvement, and show biases against older or smaller language models, highlighting the urgent need for more nuanced detection methods. (arXiv; Feb. 21, 2025)
- Watching Writing Die This blog post by Normi Coto argues that writing is on the brink of obsolescence as students increasingly rely on AI-generated content, much like spelling, cursive, and grammar before it. She reflects on her decades-long career, lamenting how writing instruction has been sidelined by technology, passive learning habits, and AI tools that undermine critical thinking, leaving educators struggling to maintain the relevance of writing in the classroom. (Behind a Medium paywall, but if you’re a member take a look!)
- Teaching with ChatGPT: Designing Courses with an AI Assistant Jeremy Caplan, Director of Teaching and Learning at CUNY’s Newmark Graduate School of Journalism, uses AI as a thought partner to refine syllabi, structure class sessions, and design engaging activities, allowing him to generate more creative teaching approaches while reclaiming time for direct student interaction. By leveraging AI-generated prompts, lesson plans, and unconventional activity ideas, he enhances both the efficiency and effectiveness of his course design, demonstrating how AI can support educators in developing more dynamic and student-centered learning experiences. See this article for specific prompts. (ChatGPT for Education; Feb. 21, 2025)
- The Costs of AI In Education Marc Watkins’ blog post critiques the widespread adoption of generative AI in universities, arguing that institutions are spending millions on AI tools not for true educational equity or effectiveness but out of fear of being left behind. He highlights the financial burden of AI adoption, the lack of a clear pedagogical strategy, and the emotional toll on educators, warning that universities risk prioritizing AI hype over meaningful investments in student learning and faculty support. (Rhetorica; Feb. 21, 2025)
- While the West Hesitates, China Marches Forward China is rapidly deploying DeepSeek AI models across government and industry to enhance efficiency and decision-making. Romero emphasizes that China’s decisive action and cultural efficiency give it a competitive edge over the West, which remains hindered by hesitation, bureaucracy, and skepticism toward AI adoption. From the blog post: “Imagine if learning to use ChatGPT was mandatory for government staff in the West. Imagine how quickly we’d sort out where it’s useful and where it isn’t.” (The Algorithmic Bridge; Feb. 21, 2025)
- ChatGPT 5 Is Coming: What It Could Mean for Educators
- 46% of Knowledge Workers Tell Their Bosses to Take a Hike When It Comes to AI According to a recent survey, 75% of people who can be classified as “knowledge workers” (which includes academics, of course) are using AI in the workplace. Almost half say they would not stop using AI—even if their companies banned it. (Forbes; Feb. 20, 2025)
- What is Quantum Computing, and Why Does It Matter? Asa Fitch’s article explains recent advancements in quantum computing, highlighting breakthroughs from Microsoft and Google that have reinvigorated interest in the field. While quantum computers have the potential to revolutionize fields like drug discovery and encryption, they remain in their early stages, with major technical challenges—such as error correction and extreme operating conditions—delaying their widespread commercial viability for at least a decade. (WSJ; Feb. 20, 2025)
- Grok 3: Another Win for the Bitter Lesson Grok 3 represents a significant leap in AI performance, demonstrating that scaling computing power remains the dominant factor in AI progress, as supported by the “Bitter Lesson” principle. While DeepSeek achieved impressive results through optimization with limited resources, xAI’s success with Grok 3 underscores that access to massive computational power ultimately leads to superior AI models, reinforcing the continued importance of scaling over fine-tuned algorithmic improvements. (The Algorithmic Bridge; Feb. 18, 2025)
- What Are People Using Chatbots For?
- Is It OK to Be Mean to a Chatbot? Readers answered these questions, and some of their answers are surprising: Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive—even though it doesn’t actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people? (WSJ; Feb. 15, 2025)
- AI Integration Blueprint: Transforming Higher Education for the Age of Intelligence A very forward-looking short paper, extending beyond the impact of generative AI to the potential of analytical AI as well. (Cal State Sacramento; Feb. 11, 2025)
- Why Thousands of Fake Scientific Papers Are Flooding Academic Journals Indeed, AI has fueled this trend. (The Medium Newsletter; Feb. 11, 2025)
- 3 Things about AI and the Future of Work The authors argue that AI is already transforming the workforce in unpredictable ways, so colleges should be focused on adaptability rather than teaching students to use specific tools or develop specific skills for a job. They emphasize the need for students to develop both technical and transferable skills, including AI literacy. (Inside Higher Ed; Feb. 11, 2025)
- Is Deep Research Worth $200/mo? A surprising . . . maybe/yes, says Graham Clay, depending upon one’s needs (and perhaps if one has access to a trust fund 😉). (AutomatED; Feb. 10, 2025)
- How AI Uncovered a Lifesaving Treatment
January 2025
One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor Ed’s Rec. From the abstract: “Interestingly, memes created entirely by AI performed better than both human-only and human-AI collaborative memes in all areas on average. However, when looking at the top-performing memes, human-created ones were better in humor, while human-AI collaborations stood out in creativity and shareability. These findings highlight the complexities of human-AI collaboration in creative tasks. While AI can boost productivity and create content that appeals to a broad audience, human creativity remains crucial for content that connects on a deeper level.” (arXiv; Jan. 23, 2025)
Digital Education Council Global AI Faculty Survey 2025 Graham Clay’s (clickbait) headline on this report was: 6% of Faculty Feel Supported on AI?! One of the most interesting findings is that faculty outside of the United States are more positive about the potential of AI. See:
Faculty viewing AI as an opportunity vs. challenge varies significantly by region [p. 13]:
- Latin America: 78% opportunity / 22% challenge
- Asia-Pacific: 70% opportunity / 30% challenge
- Europe/Middle East/Africa: 65% opportunity / 35% challenge
- USA & Canada: 57% opportunity / 43% challenge
Below, please find an excellent talk by Dr. Philippa Hardman:
- **New Late 2024-2025** Harvard Business School Resources
- Simple AI Tips for Designing Courses
- Simple AI Tips for Revising Courses
- Simple AI Tips for Enhancing Class Time
- Simple AI Tips for Creating Assessments
- ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Assistant Joanna Stern (WSJ) checks in with a great overview of all three tools. Cutting to the chase: “Claude is my go-to for project planning, clear office and document tasks and it’s got a great personality. ChatGPT picks up the slack with real-time web knowledge, a friendly voice and more. DeepSeek is smart but, so far, lacks the features to get ahead at the office.” (WSJ; Jan. 31, 2025)
- Memo to Silicon Valley: Bigger Is Not Better Mia Shah-Dand of the AI Ethics newsletter has launched a Substack newsletter, Beyond the AI Hype! In this post, Shah-Dand argues that the AI industry’s obsession with large models is giving way to a recognition that smaller, more efficient AI systems—like those developed by DeepSeek—can be more cost-effective and innovative. She critiques Silicon Valley’s fixation on size and funding as success metrics, highlighting growing cracks in the generative AI market, the challenges of translating AI investment into real-world value, and the geopolitical and ethical implications of AI development across global markets. (Beyond the AI Hype!; Jan. 29, 2025)
- Which AI to Use Now: An Updated Opinionated Guide Ethan Mollick provides an updated guide on the best AI models currently available, recommending ChatGPT, Claude, and Gemini for general use while also highlighting specialized options like Grok, Copilot, and DeepSeek. He discusses key AI capabilities, including live interaction, reasoning, web access, and data analysis, emphasizing the rapid evolution of AI and encouraging users to experiment with different models to find what suits their needs. (One Useful Thing; Jan. 26, 2025)
- Reading in the Age of Social Media (and AI) Marc Watkins explores the evolving role of AI and social media in shaping reading habits, questioning whether AI tools like NotebookLM and Google’s Deep Research enhance or erode critical engagement with texts. He argues that while AI reading assistants can provide efficiency, they risk diminishing deep reading and comprehension, urging educators and society to critically reflect on the long-term implications of these technologies. (Rhetorica; Jan. 26, 2025)
- AI and Education: Notes from Early 2025 Bryan Alexander discusses the growing divide in academia over AI adoption, with some faculty and administrators embracing it while others push back, seeking policies to limit its use. He highlights global and national AI initiatives, including AI-led schools, a UCLA course built entirely around AI-generated materials, and new legal and institutional challenges related to AI in education. Alexander concludes that while AI continues to expand in higher education, concerns over cheating, policy development, and resistance to its integration remain unresolved. (AI, Academia, and the Future; Jan. 23, 2025)
- 16 Musings on AI’s Impact on the Labor Market Alberto Romero’s list is insightful. (The Algorithmic Bridge; Jan. 23, 2025)
- Teaching Like It’s 2005?! Graham Clay argues that faculty must adapt their teaching methods to the reality of AI, rather than rely on outdated, “old school” approaches. He advises instructors to use custom GPTs for structured AI integration, create assignments that do not favor premium AI access, and encourage students to engage critically with AI outputs rather than simply rely on them. Additionally, he stresses that faculty must experiment with AI tools firsthand to determine which ones work best in their specific disciplines, warning that those who do not engage with AI are already behind. (AutomatED; Jan. 20, 2025)
- Four Possible Futures for AI and Society Bryan Alexander explores four possible futures for AI and society based on James Dator’s model: Grow, where AI drives economic and cultural expansion; Collapse, where AI either fails due to legal, financial, and social pushback or destabilizes society through economic inequality; Discipline, where society splits between AI adopters and opponents, shaping politics, education, and culture; and Transform, where AI fundamentally alters institutions, personal relationships, and creative expression, leading to a radically different world. (AI, Academia, and the Future; Jan. 16, 2025)
- SUNY Will Teach Students to Ethically Use AI Ed’s Rec. We’re in the news! Take a look. (Jan. 16, 2025)
- ChatGPT Completes Graduate Level Course Undetected The first sentence of the article says it all: “Researchers at Health Sciences South Carolina (HSSC) and the Medical University of South Carolina have unveiled a groundbreaking study demonstrating how generative artificial intelligence can complete graduate-level coursework with results indistinguishable from top-performing human student.” (Jan. 14, 2025)
- Prophecies of the Flood: What to Make of the Statements of the AI labs? Ethan Mollick’s blog post explores the rapid advancements in AI, emphasizing the emergence of supersmart systems like OpenAI’s o3, which outperformed humans on challenging benchmarks, and narrow agents like Google’s Deep Research. While the transformative potential of such systems is undeniable, the author urges caution about overhyping timelines, highlights the limitations of current models, and stresses the importance of societal preparation and ethical alignment to ensure AI benefits humanity. (One Useful Thing; Jan. 10, 2025)
- In Getting to an AI Policy Part 1: Challenges, Lance Eaton, Ph.D., examines the complexities of creating institutional policies for generative AI in higher education, emphasizing that progress requires integrating policy, tool selection, training, and strategic direction. He highlights how institutions struggle with hesitancy, rapidly evolving technologies, and insufficient alignment of resources, urging iterative approaches to address these challenges and prepare for AI’s transformative potential. (AI+Education=Simplified; Jan. 9, 2025)
- Why Obsessing Over AI Today Blinds Us to the Bigger Picture. Alberto Romero argues that humanity’s fixation on defining and resolving the implications of AI misses the broader, evolving nature of technological progress. Using the steam train as a metaphor, he reflects on how each generation struggles to grasp the transformative power of new innovations, only to normalize and appreciate them in hindsight. Ultimately, he suggests that AI’s meaning and impact will continuously shift, defying static definitions, and that our role is to adapt and evolve alongside it. (The Algorithmic Bridge; Jan. 8, 2025)
- A Few Recent Developments That Shine a Light on the Path of AI. Ed’s Rec. Ray Schroeder, senior fellow for UPCEA: the Association for Leaders in Online and Professional Education, put together a curated collection of significant developments and predictions about artificial intelligence in higher education. His blog post synthesizes insights from various sources, including news articles, expert opinions, and research studies, to highlight the rapid advancements and their implications for institutions and educators. (Inside Higher Ed; Jan. 8, 2025)
- The Academic Culture of Surveillance and Policing Students: The GenAI Edition Ed’s Rec. Lilian Mina (Writing Program Director, Rhetoric and Composition Professor, Council of Writing Program Administrators (CWPA) President) critiques the widespread reliance on AI detection tools, arguing that they foster mistrust and prioritize surveillance over meaningful pedagogy. She advocates for rethinking teaching practices to focus on trust, engagement, and ethical discussions about AI, encouraging educators to view generative AI as an opportunity to evolve rather than a threat to academic integrity. (In My Own Words; Jan. 6, 2025)
- Some Notes on How Culture Is Responding to Generative AI. Bryan Alexander explores cultural reactions to AI across various domains, including religion, dating, art, and media, noting a mix of fear, creativity, and intimacy in how people engage with technology. He highlights emerging trends like spiritual interactions with AI, AI’s integration into dating, and the demographic differences in AI adoption, emphasizing that society is still developing norms and narratives around this rapidly evolving technology. (AI, Academia, and the Future; Jan. 7, 2025)
- The AI Ethics Brief #155: Defining Moments in Responsible AI— 2024 in Review The AI Ethics Brief writers present ten significant developments in AI ethics from 2024, highlighting transformative trends such as global AI governance, ethical challenges in healthcare and labor, AI’s environmental impact, and its role in education and surveillance. These stories emphasize the urgency of addressing AI’s societal risks and benefits, with 2025 poised for advancements in regulations, fairness, and sustainable practices across multiple sectors. (The AI Ethics Brief; Jan. 7, 2025)
- The AI Era Demands Curriculum Redesign: Stories from the Frontlines of Change Ed’s Rec. Mike Kentz argues that traditional assessments fail to capture student thinking in the age of AI, emphasizing the need for curriculum and assessment redesign. He highlights process-based assessment methods where educators evaluate how students interact with AI to solve problems, rather than just the outcomes. Kentz calls for educators to embrace experimentation and focus on process and problem-solving, preparing students for a future where AI use is ubiquitous. (AI EduPathways; Jan. 5, 2025)
- 25 AI Predictions for 2025, from Marcus on AI (with a review of last year’s predictions) Gary Marcus’s not-so-rosy review recaps his past predictions for 2024, noting their accuracy, particularly regarding the plateau in scaling Large Language Models (LLMs) and the persistence of challenges like hallucinations, reasoning issues, and limited corporate adoption of generative AI. He highlights how hype around AI agents, humanoid robotics, and autonomous vehicles has yet to meet practical reliability or scalability. Economic returns for most AI companies remain modest, with chip makers being the primary beneficiaries. For 2025, Marcus predicts no major breakthroughs like Artificial General Intelligence (AGI), and further stagnation in resolving generative AI’s limitations. He also speculates on low-confidence possibilities, such as generative AI’s role in a large-scale cyberattack and no GPT-5-level model emerging. Despite the field’s advancements, Marcus stresses the enduring technical and ethical challenges. (Marcus on AI; Jan. 1, 2025)