January 2026

Who to Follow for Staying Current with AI (another list)
Everything Educators Need to Know about GenAI in 2026 (A pretty bold claim, but Leon Furze backs it up with links to numerous resources.)
This comprehensive 2026 overview of generative AI in education explains how large multimodal models work, how educators and students are using them, and what the persistent concerns and policy responses look like, with useful linked resources throughout for deeper exploration. The post examines GenAI’s capabilities and limitations, surveys the most widely adopted tools, and highlights educator and student worries (often beyond mere “cheating”). Furze recommends practical steps for policy, assessment design, and classroom integration, linking to related how‑to posts and frameworks that expand on each major point. Recommended for the links alone.
Lured by AI, Colleges Are Sleepwalking Into Irrelevance Janet Kraynak (Columbia) critiques universities for uncritically embracing generative AI under the guise of inevitability, arguing this rhetoric serves corporate interests by stifling debate and accelerating normalization. She warns that by integrating AI without addressing its ideological underpinnings, educational institutions risk becoming complicit in their own obsolescence, outsourcing intellectual labor while undermining cognitive development, ethical scrutiny, and social cohesion. Kraynak calls on universities to resist passive adaptation and instead reclaim their role as critical gatekeepers. (The Chronicle; Jan. 21, 2026)
How Higher Ed Can Adjust to the AI-Answer Economy Ed’s Rec. Librarian Leo Lo argues that generative AI is transforming information literacy, shifting the student experience from active inquiry to passive consumption of machine-generated summaries. He introduces a practical “4A Model”—Ask, Assemble, Audit, Adapt—to help students critically engage with AI tools and reclaim the interpretive work AI now automates. Libraries, he contends, are best positioned to lead this shift, offering both the trust and infrastructure needed to build AI literacy as a core academic skill.
- Ask. Teach students to move past simple information retrieval. A good prompt doesn’t just ask what but how and why: “Explain this claim from two perspectives,” or “Show me what scholars might disagree about here.” This trains students to see prompting as a form of inquiry design.
- Assemble. Let the AI draft a summary or argument, but require students to document how they prompted it and why they accepted or rejected specific elements. The goal is to make visible the invisible choices that shape every synthesis.
- Audit. Students then critique the output—fact-checking, cross-referencing with library databases, and asking the same AI to identify its likely gaps or biases (“What perspectives might this summary overlook?”). In doing so, they practice both verification and meta-questioning without needing another platform.
- Adapt. Finally, students revise or extend the resulting product using their own disciplinary knowledge, local data or lived experience. The focus shifts from consuming the AI’s answer to teaching through modification—how human insight transforms machine fluency into understanding.
(Inside Higher Ed; Jan. 21, 2026)
Teaching AI Ethics 2026: Human Labor Ed’s Rec. Another great blog post from Leon Furze. This updated post argues that today’s AI systems rely on a vast, hidden workforce, including data labelers, content moderators, RLHF raters, and “ghostwriters” of synthetic dialogue. These people often work for very low pay and, in the case of moderation, can be psychologically harmed by repeated exposure to violent and abusive content. Furze closes by offering classroom-ready ways to teach these issues across disciplines (English, geography, law, business, psychology, computer science, TOK, history, and civics) without needing a standalone “AI literacy” course. (Jan. 2026)
Survey: Faculty Say AI Is Impactful—but Not In a Good Way. A joint survey by the American Association of Colleges and Universities and Elon University reveals deep faculty skepticism toward generative AI in higher education. While nearly all respondents believe AI will significantly impact teaching, 90 percent fear it will erode critical thinking, and 95 percent predict student overreliance on AI tools. Faculty also report feeling unprepared to integrate AI: 68 percent say their institutions haven’t provided sufficient support, and a similar number believe graduates lack both workplace AI skills and ethical awareness. Despite this caution, some see potential—61 percent expect AI to personalize learning, though only 20 percent view AI’s overall impact on students’ careers as positive. (Inside Higher Ed; Jan. 21, 2026)
How AI Is Exploding Our Illusions of Rigor Ed’s Rec. Szymon Machajewski revisits Craig E. Nelson’s idea of “dysfunctional illusions of rigor” to argue that generative AI is not undermining academic integrity but revealing outdated pedagogical assumptions. He critiques rigid beliefs about writing, assessment, and fairness (such as equating struggle with learning or memorization with depth) and shows how AI can foster metacognition, iterative revision, and equity when integrated thoughtfully. The article urges faculty to shift from punitive mindsets to adaptive, process-oriented models that reflect the evolving demands of both academia and the workplace. (Inside Higher Ed; Jan. 15, 2026)
Data Shows AI “Disconnect” in Higher Ed Workforce AI use is now widespread among higher-ed employees (94%), but institutional guidance lags: only 54% know their school’s AI policies, many use non-institutional tools for work, and confidence remains uneven, creating risks for privacy, security, and governance. Respondents express cautious optimism, seeing pragmatic benefits (automation and administrative relief) alongside major worries (misinformation, data misuse, skill loss, and job impacts), yet only 13% of institutions are rigorously measuring AI’s return on investment. (Inside Higher Ed; Jan. 15, 2026)
AI Backlash as a Regulatory Tool In this Substack post, Luiza Jarovsky argues that the viral trend of using Grok on X to “undress” women and girls without consent shows how public backlash can act as a de facto regulatory tool when formal enforcement lags. The global outcry, she writes, pressured xAI/X to add technical guardrails, and backlash more broadly sets new norms, speeds future enforcement, and pushes lawmakers toward clearer AI rules. (Luiza’s Newsletter; Jan. 15, 2026)
Leading the Human Side of AI: A Conversation about Uncertainty, Relationships and Governance Lance and Christine Eaton argue that the biggest barrier to AI progress in higher ed isn’t technical capability but human strain: fractured trust, fatigue, and identity and workload pressures, all made worse by uneven readiness and opaque decision-making. They call for leadership that names uncertainty, builds shared understanding through credible governance (inclusive but time-bounded), and models transparent AI use so institutions can move from scattered pilots and quiet workarounds to coherent, trustworthy adoption. (AI + Education = Simplified; Jan. 13, 2026)
I’m an AI Power User. It Has No Place in the Classroom. Geoff Watkinson argues that while generative AI significantly improves his efficiency in a professional writing role, it undermines the developmental goals of first-year writing instruction. Drawing from his dual roles in tech and teaching, Watkinson makes the case for AI-free classrooms focused on analog writing practices to help students build the cognitive habits and reflective thinking necessary for meaningful learning and self-expression. (The Chronicle of Higher Education, Jan. 9, 2026)
Keeping Ourselves in the Loop: 6 Human Centered Activities for the Age of AI Carter Moulton advocates for a human-centered AI pedagogy that emphasizes collaboration, ethical reflection and community-building rather than individual efficiency. Drawing from his “Analog Inspiration” framework, he outlines six classroom activities that integrate AI while reinforcing interpersonal connection and metacognitive learning. Moulton argues that by embedding students and educators more deeply into the AI loop, higher ed can resist corporate narratives of personalization and productivity, instead fostering meaningful, values-driven educational experiences. (Inside Higher Ed; Jan. 8, 2026)
The Rise of the Agentic AI University in 2026 Ray Schroeder argues that in 2026 universities will shift from using chatbots as “tools” to deploying agentic AI as institution-wide “colleagues” that can plan and carry out multi-step workflows—pushing campuses from scattered pilots toward governed, automated systems across the student lifecycle and back-office operations. He warns that while these efficiencies could boost agility and student support, they also raise urgent workforce and instructional questions (job redesign, job loss, and the likelihood that AI-led instruction will expand from noncredit into mainstream teaching). (Inside Higher Ed; Jan. 7, 2026)
Claude Code and What Comes Next Ethan Mollick describes how Claude Code can take a single high-level request, then work autonomously for over an hour to generate and deploy a functioning product. Mollick claims this shows a recent leap driven by stronger self-correction plus an “agentic harness” (tools, skills, subagents, and protocols) that lets AI plan and execute tasks. He argues these systems are currently packaged for programmers, but the real takeaway is that similar harnesses will soon bring sustained, end-to-end AI “doing” to many knowledge-work tasks—along with new risks from giving AIs direct access to files, browsers, and execution. (One Useful Thing; Jan. 7, 2026)
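To make the “agentic harness” idea concrete, here is a minimal, hypothetical Python sketch of such a loop: the model is repeatedly asked for its next action, the named tool runs, and the result is fed back so the model can see mistakes and self-correct. This is not Claude Code’s actual architecture; call_model and the toy tools are placeholders invented purely for illustration.

```python
# Illustrative only: a toy "agentic harness" loop in the spirit Mollick describes,
# where a model plans, calls tools, sees the results, and self-corrects until done.
# This is NOT how Claude Code is implemented; call_model is a stand-in you would
# replace with a real LLM API call, and the two tools are deliberately trivial.

from typing import Callable, Dict

def call_model(transcript: str) -> str:
    """Stand-in for an LLM call. It finishes immediately so the sketch runs as-is."""
    return "FINISH: (a real model would choose the next tool call from the transcript)"

# The "tools" half of the harness: named actions the model is allowed to request.
TOOLS: Dict[str, Callable[[str], str]] = {
    "read_file": lambda path: open(path, encoding="utf-8").read(),
    "note": lambda text: f"noted: {text}",
}

def run_agent(task: str, max_steps: int = 20) -> str:
    """Loop: ask the model for its next action, run it, and feed the result back in."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = call_model(transcript)
        if reply.startswith("FINISH:"):           # the model declares the task complete
            return reply[len("FINISH:"):].strip()
        tool_name, _, arg = reply.partition(":")  # expected format "tool:argument"
        result = TOOLS.get(tool_name, lambda a: "error: unknown tool")(arg)
        # Feeding results back is where self-correction lives: on the next pass the
        # model sees what worked, what failed, and can try a different action.
        transcript += f"{reply}\n-> {result}\n"
    return "stopped: step limit reached"

if __name__ == "__main__":
    print(run_agent("summarize the January 2026 newsletter"))
```

Real harnesses layer the “skills, subagents, and protocols” Mollick mentions on top of a loop like this, plus permissions to touch files, browsers, and code execution, which is exactly where the new risks he flags come in.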

