February 2025
Half a Million Students Given ChatGPT as CSU System Makes AI History From the first paragraphs: The California State University system has partnered with OpenAI to launch the largest deployment of AI in higher education to date. The CSU system, which serves nearly 500,000 students across 23 campuses, has announced plans to integrate ChatGPT Edu, an education-focused version of OpenAI’s chatbot, into its curriculum and operations. The rollout, which includes tens of thousands of faculty and staff, represents the most significant AI deployment within a single educational institution globally. (Forbes; Feb. 4, 2025)
Tech Giants Partner with Cal State System to Advance ‘Equitable’ AI Training More about the California State system’s partnership, from Inside Higher Ed. (Feb. 5, 2025)
January 2025
Digital Education Council Global AI Faculty Survey 2025 Graham Clay’s (clickbait) headline on this report was: 6% of Faculty Feel Supported on AI?! One of the most interesting findings is that faculty outside of the United States are more positive about the potential of AI. See:
Faculty viewing AI as an opportunity vs. challenge varies significantly by region [p. 13]:
- Latin America: 78% opportunity / 22% challenge
- Asia-Pacific: 70% opportunity / 30% challenge
- Europe/Middle East/Africa: 65% opportunity / 35% challenge
- USA & Canada: 57% opportunity / 43% challenge
Below, please find an excellent talk by Dr. Philippa Hardman:
- **New Late 2024-2025** Harvard Business School Resources
- Simple AI Tips for Designing Courses
- Simple AI Tips for Revising Courses
- Simple AI Tips for Enhancing Class Time
- Simple AI Tips for Creating Assessments
- ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Assistant Joanna Stern (WSJ) checks in with a great overview of all three tools. Cutting to the chase: “Claude is my go-to for project planning, clear office and document tasks and it’s got a great personality. ChatGPT picks up the slack with real-time web knowledge, a friendly voice and more. DeepSeek is smart but, so far, lacks the features to get ahead at the office.” (WSJ; Jan. 31, 2025)
- Memo to Silicon Valley: Bigger Is Not Better Mia Shah-Dand at the AI Ethics newsletter has launched a Substack newsletter, Beyond the AI Hype! In this post, Shah-Dand argues that the AI industry’s obsession with large models is giving way to a recognition that smaller, more efficient AI systems—like those developed by DeepSeek—can be more cost-effective and innovative. She critiques Silicon Valley’s fixation on size and funding as success metrics, highlighting growing cracks in the generative AI market, the challenges of translating AI investment into real-world value, and the geopolitical and ethical implications of AI development across global markets. (Beyond the AI Hype!; Jan. 29, 2025)
- Which AI to Use Now: An Updated Opinionated Guide Ethan Mollick provides an updated guide on the best AI models currently available, recommending ChatGPT, Claude, and Gemini for general use while also highlighting specialized options like Grok, Copilot, and DeepSeek. He discusses key AI capabilities, including live interaction, reasoning, web access, and data analysis, emphasizing the rapid evolution of AI and encouraging users to experiment with different models to find what suits their needs. (One Useful Thing; Jan. 26, 2025)
- Reading in the Age of Social Media (and AI) Marc Watkins explores the evolving role of AI and social media in shaping reading habits, questioning whether AI tools like NotebookLM and Google’s Deep Research enhance or erode critical engagement with texts. He argues that while AI reading assistants can provide efficiency, they risk diminishing deep reading and comprehension, urging educators and society to critically reflect on the long-term implications of these technologies. (Rhetorica; Jan. 26, 2025)
- AI and Education: Notes from Early 2025 Bryan Alexander discusses the growing divide in academia over AI adoption, with some faculty and administrators embracing it while others push back, seeking policies to limit its use. He highlights global and national AI initiatives, including AI-led schools, a UCLA course built entirely around AI-generated materials, and new legal and institutional challenges related to AI in education. Alexander concludes that while AI continues to expand in higher education, concerns over cheating, policy development, and resistance to its integration remain unresolved. (AI, Academia, and the Future; Jan. 23, 2025)
- 16 Musings on AI’s Impact on the Labor Market Alberto Romero’s list is insightful. (The Algorithmic Bridge; Jan. 23, 2025)
- Teaching Like It’s 2005?! Graham Clay argues that faculty must adapt their teaching methods to the reality of AI, rather than rely on outdated, “old school” approaches. He advises instructors to use custom GPTs for structured AI integration, create assignments that do not favor premium AI access, and encourage students to engage critically with AI outputs rather than simply rely on them. Additionally, he stresses that faculty must experiment with AI tools firsthand to determine which ones work best in their specific disciplines, warning that those who do not engage with AI are already behind. (AutomatED; Jan. 20, 2025)
- Four Possible Futures for AI and Society Bryan Alexander explores four possible futures for AI and society based on James Dator’s model: Grow, where AI drives economic and cultural expansion; Collapse, where AI either fails due to legal, financial, and social pushback or destabilizes society through economic inequality; Discipline, where society splits between AI adopters and opponents, shaping politics, education, and culture; and Transform, where AI fundamentally alters institutions, personal relationships, and creative expression, leading to a radically different world. (AI, Academia, and the Future; Jan. 16, 2025)
- SUNY Will Teach Students to Ethically Use AI Ed’s Rec. We’re in the news! Take a look. (Jan. 16, 2025)
- ChatGPT Completes Graduate Level Course Undetected The first sentence of the article says it all: “Researchers at Health Sciences South Carolina (HSSC) and the Medical University of South Carolina have unveiled a groundbreaking study demonstrating how generative artificial intelligence can complete graduate-level coursework with results indistinguishable from top-performing human students.” (Jan. 14, 2025)
- Prophecies of the Flood: What to Make of the Statements of the AI labs? Ethan Mollick’s blog post explores the rapid advancements in AI, emphasizing the emergence of supersmart systems like OpenAI’s o3, which outperformed humans on challenging benchmarks, and narrow agents like Google’s Deep Research. While the transformative potential of such systems is undeniable, the author urges caution about overhyping timelines, highlights the limitations of current models, and stresses the importance of societal preparation and ethical alignment to ensure AI benefits humanity. (One Useful Thing; Jan. 10, 2025)
- In Getting to an AI Policy Part 1: Challenges, Lance Eaton, Ph.D., examines the complexities of creating institutional policies for generative AI in higher education, emphasizing that progress requires integrating policy, tool selection, training, and strategic direction. He highlights how institutions struggle with hesitancy, rapidly evolving technologies, and insufficient alignment of resources, urging iterative approaches to address these challenges and prepare for AI’s transformative potential. (AI+Education=Simplified; Jan. 9, 2025)
- Why Obsessing Over AI Today Blinds Us to the Bigger Picture. Alberto Romero argues that humanity’s fixation on defining and resolving the implications of AI misses the broader, evolving nature of technological progress. Using the steam train as a metaphor, he reflects on how each generation struggles to grasp the transformative power of new innovations, only to normalize and appreciate them in hindsight. Ultimately, he suggests that AI’s meaning and impact will continuously shift, defying static definitions, and that our role is to adapt and evolve alongside it. (The Algorithmic Bridge; Jan. 8, 2025)
- A Few Recent Developments That Shine a Light on the Path of AI. Ed’s Rec. Ray Schroeder, senior fellow for UPCEA: the Association for Leaders in Online and Professional Education, put together a curated collection of significant developments and predictions about artificial intelligence in higher education. His blog post synthesizes insights from various sources, including news articles, expert opinions, and research studies, to highlight the rapid advancements and their implications for institutions and educators. (Inside Higher Ed; Jan. 8, 2025)
- The Academic Culture of Surveillance and Policing Students: The GenAI Edition Ed’s Rec. Lilian Mina (Writing Program Director, Rhetoric and Composition Professor, Council of Writing Program Administrators (CWPA) President) critiques the widespread reliance on AI detection tools, arguing that they foster mistrust and prioritize surveillance over meaningful pedagogy. She advocates for rethinking teaching practices to focus on trust, engagement, and ethical discussions about AI, encouraging educators to view generative AI as an opportunity to evolve rather than a threat to academic integrity. (In My Own Words; Jan. 6, 2025)
- Some Notes on How Culture Is Responding to Generative AI. Bryan Alexander explores cultural reactions to AI across various domains, including religion, dating, art, and media, noting a mix of fear, creativity, and intimacy in how people engage with technology. He highlights emerging trends like spiritual interactions with AI, AI’s integration into dating, and the demographic differences in AI adoption, emphasizing that society is still developing norms and narratives around this rapidly evolving technology. (AI, Academia, and the Future; Jan. 7, 2025)
- The AI Ethics Brief #155: Defining Moments in Responsible AI— 2024 in Review The AI Ethics Brief writers present ten significant developments in AI ethics from 2024, highlighting transformative trends such as global AI governance, ethical challenges in healthcare and labor, AI’s environmental impact, and its role in education and surveillance. These stories emphasize the urgency of addressing AI’s societal risks and benefits, with 2025 poised for advancements in regulations, fairness, and sustainable practices across multiple sectors. (The AI Ethics Brief; Jan. 7, 2025)
- The AI Era Demands Curriculum Redesign: Stories from the Frontlines of Change Ed’s Rec. Mike Kentz argues that traditional assessments fail to capture student thinking in the age of AI, emphasizing the need for curriculum and assessment redesign. He highlights process-based assessment methods where educators evaluate how students interact with AI to solve problems, rather than just the outcomes. Kentz calls for educators to embrace experimentation and focus on process and problem-solving, preparing students for a future where AI use is ubiquitous. (AI EduPathways; Jan. 5, 2025)
- 25 AI Predictions for 2025, from Marcus on AI (with a review of last year’s predictions) In his not-so-rosy review, Gary Marcus recaps his predictions for 2024, noting their accuracy, particularly regarding the plateau in scaling Large Language Models (LLMs) and the persistence of challenges like hallucinations, reasoning issues, and limited corporate adoption of generative AI. He highlights how hype around AI agents, humanoid robotics, and autonomous vehicles has yet to meet practical reliability or scalability. Economic returns for most AI companies remain modest, with chip makers being the primary beneficiaries. For 2025, Marcus predicts no major breakthroughs like Artificial General Intelligence (AGI), and further stagnation in resolving generative AI’s limitations. He also speculates on low-confidence possibilities, such as generative AI’s role in a large-scale cyberattack and no GPT-5-level model emerging. Despite the field’s advancements, Marcus stresses the enduring technical and ethical challenges. (Marcus on AI; Jan. 1, 2025)