February 2025
- AI Cheating Matters, but Redrawing Assessment Matters Most Ed’s Rec. According to the authors, universities “should prioritize ensuring that assessments are ‘assessing what we mean to assess’ rather than letting conversations be dominated by discussions around cheating.” (Inside Higher Ed; Feb. 28, 2025)
- Half a Million Students Given ChatGPT as CSU System Makes AI History From the first paragraphs: The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date. The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally. (Forbes; Feb. 4, 2025)
- Tech Giants Partner with Cal State System to Advance ‘Equitable’ AI Training More about the California State system’s partnership, from Inside Higher Ed. (Feb. 5, 2025)
- Digital Education Council Global AI Faculty Survey 2025
- A New Generation of AIs: Claude 3.7 and Grok 3 Claude 3.7 and Grok 3 represent the first wave of Gen3 AI models, trained with over 10 times the computing power of GPT-4, leading to significant improvements in complex reasoning, coding, and problem-solving. These advancements demonstrate that AI capabilities continue to accelerate due to two key Scaling Laws: larger models improve with more computational power, and allowing AIs to use more computing resources at inference time enhances their reasoning, marking a shift from simple automation to AI as a genuine intellectual partner. (One Useful Thing; Feb. 24, 2025)
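To make those two laws concrete, here is a minimal mathematical sketch. The power-law forms below are the standard approximations from the scaling-law literature; the symbols and constants are illustrative assumptions, not figures from Mollick’s post:

```latex
% Training-time scaling (illustrative): model loss L falls as a power law
% in total training compute C; L_inf is an irreducible floor and alpha is
% a small empirical exponent. A 10x increase in compute therefore buys a
% modest but predictable improvement.
L(C) \approx L_{\infty} + \left(\frac{C_0}{C}\right)^{\alpha}

% Inference-time scaling (illustrative): task performance P improves with
% the compute c spent at inference (e.g., longer reasoning chains), again
% with diminishing returns governed by an exponent beta.
P(c) \approx P_{\max} - b \, c^{-\beta}
```

On this picture, Gen3 models gain on both axes at once: more training compute lowers the first curve’s floor, and more deliberate reasoning at inference time climbs the second.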
- Watching Writing Die This blog post by Normi Coto argues that writing is on the brink of obsolescence as students increasingly rely on AI-generated content, much like spelling, cursive, and grammar before it. She reflects on her decades-long career, lamenting how writing instruction has been sidelined by technology, passive learning habits, and AI tools that undermine critical thinking, leaving educators struggling to maintain the relevance of writing in the classroom. (Behind a Medium paywall, but if you’re a member take a look!)
- Teaching with ChatGPT: Designing Courses with an AI Assistant Jeremy Caplan, Director of Teaching and Learning at CUNY’s Newmark Graduate School of Journalism, uses AI as a thought partner to refine syllabi, structure class sessions, and design engaging activities, allowing him to generate more creative teaching approaches while reclaiming time for direct student interaction. By leveraging AI-generated prompts, lesson plans, and unconventional activity ideas, he enhances both the efficiency and effectiveness of his course design, demonstrating how AI can support educators in developing more dynamic and student-centered learning experiences. See this article for specific prompts. (ChatGPT for Education; Feb. 21, 2025)
- The Costs of AI In Education Marc Watkins’ blog post critiques the widespread adoption of generative AI in universities, arguing that institutions are spending millions on AI tools not for true educational equity or effectiveness but out of fear of being left behind. He highlights the financial burden of AI adoption, the lack of a clear pedagogical strategy, and the emotional toll on educators, warning that universities risk prioritizing AI hype over meaningful investments in student learning and faculty support. (Rhetorica; Feb. 21, 2025)
- While the West Hesitates, China Marches Forward China is rapidly deploying DeepSeek AI models across government and industry to enhance efficiency and decision-making. Alberto Romero emphasizes that China’s decisive action and cultural efficiency give it a competitive edge over the West, which remains hindered by hesitation, bureaucracy, and skepticism toward AI adoption. From the blog post: “Imagine if learning to use ChatGPT was mandatory for government staff in the West. Imagine how quickly we’d sort out where it’s useful and where it isn’t.” (The Algorithmic Bridge; Feb. 21, 2025)
- ChatGPT 5 Is Coming: What it Could Mean for Educators
- 46% of Knowledge Workers Tell Their Bosses to Take a Hike When It Comes to AI According to a recent survey, 75% of people who can be classified as “knowledge workers” (which includes academics, of course) are using AI in the workplace. Almost half say they would not stop using AI—even if their companies banned it. (Forbes; Feb. 20, 2025)
- What is Quantum Computing, and Why Does It Matter? Asa Fitch’s article explains recent advancements in quantum computing, highlighting breakthroughs from Microsoft and Google that have reinvigorated interest in the field. While quantum computers have the potential to revolutionize fields like drug discovery and encryption, they remain in their early stages, with major technical challenges, such as error correction and extreme operating conditions, delaying their widespread commercial viability for at least a decade. (WSJ; Feb. 20, 2025)
- Grok 3: Another Win for the Bitter Lesson Grok 3 represents a significant leap in AI performance, demonstrating that scaling computing power remains the dominant factor in AI progress, as supported by the “Bitter Lesson” principle. While DeepSeek achieved impressive results through optimization with limited resources, xAI’s success with Grok 3 underscores that access to massive computational power ultimately leads to superior AI models, reinforcing the continued importance of scaling over fine-tuned algorithmic improvements. (The Algorithmic Bridge; Feb. 18, 2025)
- What are people using chatbots for?
- Is It OK to Be Mean to a Chatbot? Readers answered these questions, and some of their answers are surprising: Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive, even though it doesn’t actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people? (WSJ; Feb. 15, 2025)
- AI Integration Blueprint: Transforming Higher Education for the Age of Intelligence A very forward-looking short paper, extending beyond the impact of generative AI to the potential of analytical AI as well. (Cal State Sacramento; Feb. 11, 2025)
- Why Thousands of Fake Scientific Papers Are Flooding Academic Journals Indeed, AI has fueled this trend. (The Medium Newsletter; Feb. 11, 2025)
- 3 Things about AI and the Future of Work The authors argue that AI is already transforming the workforce in unpredictable ways, so colleges should be focused on adaptability rather than teaching students to use specific tools or develop specific skills for a job. They emphasize the need for students to develop both technical and transferable skills, including AI literacy. (Inside Higher Ed; Feb. 11, 2025)
- Is Deep Research Worth $200/mo? A surprising . . . maybe/yes, says Graham Clay, depending upon one’s needs (and perhaps if one has access to a trust fund 😉). (AutomatED; Feb. 10, 2025)
- How AI Uncovered a Lifesaving Treatment
January 2025
Digital Education Council Global AI Faculty Survey 2025 Graham Clay’s (clickbait) headline on this report was: “6% of Faculty Feel Supported on AI?!” One of the most interesting findings is that faculty outside the United States are more positive about the potential of AI. See:
Faculty viewing AI as an opportunity vs. challenge varies significantly by region [p. 13]:
- Latin America: 78% opportunity / 22% challenge
- Asia-Pacific: 70% opportunity / 30% challenge
- Europe/Middle East/Africa: 65% opportunity / 35% challenge
- USA & Canada: 57% opportunity / 43% challenge
Below, please find an excellent talk by Dr. Philippa Hardman:
- **New Late 2024-2025** Harvard Business School Resources
- Simple AI Tips for Designing Courses
- Simple AI Tips for Revising Courses
- Simple AI Tips for Enhancing Class Time
- Simple AI Tips for Creating Assessments
- ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Assistant Joanna Stern (WSJ) checks in with a great overview of all three tools. Cutting to the chase: “Claude is my go-to for project planning, clear office and document tasks and it’s got a great personality. ChatGPT picks up the slack with real-time web knowledge, a friendly voice and more. DeepSeek is smart but, so far, lacks the features to get ahead at the office.” (WSJ; Jan. 31, 2025)
- Memo to Silicon Valley: Bigger Is Not Better Mia Shah-Dand at the AI Ethics newsletter has launched a Substack newsletter, Beyond the AI Hype! In this post, Shah-Dand argues that the AI industry’s obsession with large models is giving way to a recognition that smaller, more efficient AI systems, like those developed by DeepSeek, can be more cost-effective and innovative. She critiques Silicon Valley’s fixation on size and funding as success metrics, highlighting growing cracks in the generative AI market, the challenges of translating AI investment into real-world value, and the geopolitical and ethical implications of AI development across global markets. (Beyond the AI Hype!; Jan. 29, 2025)
- Which AI to Use Now: An Updated Opinionated Guide Ethan Mollick provides an updated guide on the best AI models currently available, recommending ChatGPT, Claude, and Gemini for general use while also highlighting specialized options like Grok, Copilot, and DeepSeek. He discusses key AI capabilities, including live interaction, reasoning, web access, and data analysis, emphasizing the rapid evolution of AI and encouraging users to experiment with different models to find what suits their needs. (One Useful Thing; Jan. 26, 2025)
- Reading in the Age of Social Media (and AI) Marc Watkins explores the evolving role of AI and social media in shaping reading habits, questioning whether AI tools like NotebookLM and Google’s Deep Research enhance or erode critical engagement with texts. He argues that while AI reading assistants can provide efficiency, they risk diminishing deep reading and comprehension, urging educators and society to critically reflect on the long-term implications of these technologies. (Rhetorica; Jan. 26, 2025)
- AI and Education: Notes from Early 2025 Bryan Alexander discusses the growing divide in academia over AI adoption, with some faculty and administrators embracing it while others push back, seeking policies to limit its use. He highlights global and national AI initiatives, including AI-led schools, a UCLA course built entirely around AI-generated materials, and new legal and institutional challenges related to AI in education. Alexander concludes that while AI continues to expand in higher education, concerns over cheating, policy development, and resistance to its integration remain unresolved. (AI, Academia, and the Future; Jan. 23, 2025)
- 16 Musings on AI’s Impact on the Labor Market Alberto Romero’s list is insightful. (The Algorithmic Bridge; Jan. 23, 2025)
- Teaching Like It’s 2005?! Graham Clay argues that faculty must adapt their teaching methods to the reality of AI, rather than rely on outdated, “old school” approaches. He advises instructors to use custom GPTs for structured AI integration, create assignments that do not favor premium AI access, and encourage students to engage critically with AI outputs rather than simply rely on them. Additionally, he stresses that faculty must experiment with AI tools firsthand to determine which ones work best in their specific disciplines, warning that those who do not engage with AI are already behind. (AutomatED; Jan. 20, 2025)
- Four Possible Futures for AI and Society Bryan Alexander explores four possible futures for AI and society based on James Dator’s model: Grow, where AI drives economic and cultural expansion; Collapse, where AI either fails due to legal, financial, and social pushback or destabilizes society through economic inequality; Discipline, where society splits between AI adopters and opponents, shaping politics, education, and culture; and Transform, where AI fundamentally alters institutions, personal relationships, and creative expression, leading to a radically different world. (AI, Academia, and the Future; Jan. 16, 2025)
- SUNY Will Teach Students to Ethically Use AI Ed’s Rec. We’re in the news! Take a look. (Jan. 16, 2025)
- ChatGPT Completes Graduate Level Course Undetected The first sentence of the article says it all: “Researchers at Health Sciences South Carolina (HSSC) and the Medical University of South Carolina have unveiled a groundbreaking study demonstrating how generative artificial intelligence can complete graduate-level coursework with results indistinguishable from top-performing human students.” (Jan. 14, 2025)
- Prophecies of the Flood: What to Make of the Statements of the AI labs? Ethan Mollick’s blog post explores the rapid advancements in AI, emphasizing the emergence of supersmart systems like OpenAI’s o3, which outperformed humans on challenging benchmarks, and narrow agents like Google’s Deep Research. While the transformative potential of such systems is undeniable, the author urges caution about overhyping timelines, highlights the limitations of current models, and stresses the importance of societal preparation and ethical alignment to ensure AI benefits humanity. (One Useful Thing; Jan. 10, 2025)
- In Getting to an AI Policy Part 1: Challenges, Lance Eaton, Ph.D., examines the complexities of creating institutional policies for generative AI in higher education, emphasizing that progress requires integrating policy, tool selection, training, and strategic direction. He highlights how institutions struggle with hesitancy, rapidly evolving technologies, and insufficient alignment of resources, urging iterative approaches to address these challenges and prepare for AI’s transformative potential. (AI+Education=Simplified; Jan. 9, 2025)
- Why Obsessing Over AI Today Blinds Us to the Bigger Picture. Alberto Romero argues that humanity’s fixation on defining and resolving the implications of AI misses the broader, evolving nature of technological progress. Using the steam train as a metaphor, he reflects on how each generation struggles to grasp the transformative power of new innovations, only to normalize and appreciate them in hindsight. Ultimately, he suggests that AI’s meaning and impact will continuously shift, defying static definitions, and that our role is to adapt and evolve alongside it. (The Algorithmic Bridge; Jan. 8, 2025)
- A Few Recent Developments That Shine a Light on the Path of AI. Ed’s Rec. Ray Schroeder, senior fellow for UPCEA, the Association for Leaders in Online and Professional Education, put together a curated collection of significant developments and predictions about artificial intelligence in higher education. His blog post synthesizes insights from various sources, including news articles, expert opinions, and research studies, to highlight the rapid advancements and their implications for institutions and educators. (Inside Higher Ed; Jan. 8, 2025)
- The Academic Culture of Surveillance and Policing Students: The GenAI Edition Ed’s Rec. Lilian Mina (Writing Program Director, Rhetoric and Composition Professor, Council of Writing Program Administrators (CWPA) President) critiques the widespread reliance on AI detection tools, arguing that they foster mistrust and prioritize surveillance over meaningful pedagogy. She advocates for rethinking teaching practices to focus on trust, engagement, and ethical discussions about AI, encouraging educators to view generative AI as an opportunity to evolve rather than a threat to academic integrity. (In My Own Words; Jan. 6, 2025)
- Some Notes on How Culture Is Responding to Generative AI. Bryan Alexander explores cultural reactions to AI across various domains, including religion, dating, art, and media, noting a mix of fear, creativity, and intimacy in how people engage with technology. He highlights emerging trends like spiritual interactions with AI, AI’s integration into dating, and the demographic differences in AI adoption, emphasizing that society is still developing norms and narratives around this rapidly evolving technology. (AI, Academia, and the Future; Jan. 7, 2025)
- The AI Ethics Brief #155: Defining Moments in Responsible AI—2024 in Review The AI Ethics Brief writers present ten significant developments in AI ethics from 2024, highlighting transformative trends such as global AI governance, ethical challenges in healthcare and labor, AI’s environmental impact, and its role in education and surveillance. These stories emphasize the urgency of addressing AI’s societal risks and benefits, with 2025 poised for advancements in regulations, fairness, and sustainable practices across multiple sectors. (The AI Ethics Brief; Jan. 7, 2025)
- The AI Era Demands Curriculum Redesign: Stories from the Frontlines of Change Ed’s Rec. Mike Kentz argues that traditional assessments fail to capture student thinking in the age of AI, emphasizing the need for curriculum and assessment redesign. He highlights process-based assessment methods where educators evaluate how students interact with AI to solve problems, rather than just the outcomes. Kentz calls for educators to embrace experimentation and focus on process and problem-solving, preparing students for a future where AI use is ubiquitous. (AI EduPathways; Jan. 5, 2025)
- 25 AI Predictions for 2025, from Marcus on AI (with a review of last year’s predictions) Gary Marcus’s not-so-rosy review recaps his predictions for 2024, noting their accuracy, particularly regarding the plateau in scaling Large Language Models (LLMs) and the persistence of challenges like hallucinations, reasoning issues, and limited corporate adoption of generative AI. He highlights how hype around AI agents, humanoid robotics, and autonomous vehicles has yet to meet practical reliability or scalability. Economic returns for most AI companies remain modest, with chip makers being the primary beneficiaries. For 2025, Marcus predicts no major breakthroughs like Artificial General Intelligence (AGI), and further stagnation in resolving generative AI’s limitations. He also speculates on low-confidence possibilities, such as generative AI’s role in a large-scale cyberattack and no GPT-5-level model emerging. Despite the field’s advancements, Marcus stresses the enduring technical and ethical challenges. (Marcus on AI; Jan. 1, 2025)