2024 Articles & Resources

May 2024

Rethinking Assessment for Generative AI 

  • AI Has Changed Learning, Why Aren’t We Regulating It? According to Marc Watkins’s sobering analysis, educators need to be prepared for the changes AI tools like GPT-4o bring to the learning process. Existing regulations, such as FERPA, COPPA, IDEA, CIPA, and Section 504, must be considered for AI tools in education. (Rhetorics; May 14, 2024)
  • GenAI Strategy: Attack Your Assessment Ed’s Rec Leon Furze, as usual, is on target, this time with this call to revisit faculty approaches to assessment. The central theme is the necessity for educators, regardless of their field, to critically evaluate and, if necessary, revise their assessment strategies. This involves using GenAI tools to simulate student responses to current assessment tasks, thereby revealing vulnerabilities. The author provides practical steps for educators to engage with AI technology during faculty meetings and use it to test the robustness of their assessment tasks. (Leon Furze; May 13, 2024)
  • Experts Predict Major AI Impacts in New Report The Educause report, which includes the opinions of higher ed and tech experts, highlights the potential transformation AI can bring to teaching, learning, student support, and institutional management. It emphasizes the importance of preparing educators and institutions for this AI-driven shift in education to maximize its benefits. (Inside Higher Ed; May 13, 2024)
  • The Good, the Bad, & the Unknown of AI In a keynote at UMass, Lance Eaton focused on both the challenges and successes of generative AI. Two important points: AI can assist in navigating tedious tasks and projects, minimizing energy drain, and AI’s fallibility can heighten awareness and scrutiny of the information received. (AI + Education = Simplified; May 13, 2024)
  • Harnessing the Power of Generative AI: A Call to Action for Educators Ripsimé K. Bledsoe (Texas A&M) argues that if educators steer the integration of AI with intention and purpose, it can reach its potential as an immense and versatile student success tool. She offers six areas on which to focus efforts. (Inside Higher Ed; May 10, 2024)
  • No One Is Talking about AI’s Impact on Reading AI assistants can now summarize and query PDFs (this editor uses them), highlighting the potential impact on reading skills. While AI assistants can help many readers, including neurodiverse and second-language learners, these tools may wind up hindering students’ ability to critically engage with texts and form nuanced conclusions. Watkins suggests active reading assignments, social annotation tools, and group discussions can encourage students to focus on the act of reading and not rely solely on AI assistants. (Rhetorics; May 3, 2024)
  • Ditch the Detectors: Six Ways to Rethink Assessment for Generative Artificial Intelligence Ed’s Rec Leon Furze again writes a prescient article, this time about assessment. (Leon Furze; May 3, 2024)
  • GenAI Strategy for Faculty Leaders Ed’s Rec. Leon Furze offers a blueprint for developing AI guidelines. A great resource. (Leon Furze; May 1, 2024)
  • New AI Guidelines Aim to Help Research Libraries The Association of Research Libraries announced a set of seven guiding principles for university librarians to follow in light of rising generative AI use. (Inside Higher Ed; May 1, 2024)

April 2024

AI Detection in Education Is a Dead End PhD candidate Leon Furze’s blog post is a well-written exploration—and explanation—of AI detection software and its impact on students. For those who don’t want to wade through the study below, this is a friendly interpretation of some of the findings. (April 9, 2024)

**New Study** GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education From the article’s abstract: The results demonstrate that the detectors’ already low accuracy rates (39.5%) show major reductions in accuracy (17.4%) when faced with manipulated content, with some techniques proving more effective than others in evading detection. The accuracy limitations and the potential for false accusations demonstrate that these tools cannot currently be recommended for determining whether violations of academic integrity have occurred, underscoring the challenges educators face in maintaining inclusive and fair assessment practices. However, they may have a role in supporting student learning and maintaining academic integrity when used in a non-punitive manner.


March 2024

Report: The Advantages that AI Brings to Higher Ed (Link to report included) A report highlights AI’s potential to enhance higher education through student support and data analysis, emphasizing the importance of equitable access and culturally aware design to prevent a new digital divide and ensure HBCUs and MSIs benefit without falling behind. (Diverse Issues in Higher Education; March 13, 2024)


February 2024

2024 Educause AI Landscape Study Ed’s Rec. This is an important study, looking at a number of areas, including strategic planning. (Educause; Feb. 2024)

  • AI & The Copyright & Plagiarism Dilemma As Lance Eaton notes, potential lawsuits against AI companies may lead to a rethinking of copyright in the digital age. A very thoughtful parsing of legal terms, plagiarism, and “transformative use.” (AI + Education = Simplified; Feb. 16, 2024)
  • How AI Has Begun Changing University Roles, Responsibilities A survey by Educause found that more faculty members and university leaders are starting to work with artificial intelligence in their roles. A lack of formalized training in AI was observed, with only 56 percent of universities training faculty members and even lower percentages for staff and students. (Inside Higher Ed; Feb. 13, 2024).
  • AI: The Unseen Ally in Mastering Deep Work Ed’s Rec Srinivas Rao makes a wonderful observation about how AI can enhance our capacity for deep work by helping to master complex things quickly and work at high levels of depth. Well worth the read. (Medium; Feb. 9, 2024)

A short 15-minute video by ed technologist Lance Eaton about how faculty and instructional designers can approach the use of generative AI:

  • Google’s Gemini Advanced: Tasting Notes and Implications Ethan Mollick does not provide a detailed review of Gemini but makes several broad statements about its capabilities. Gemini Advanced shares similarities with GPT-4 but also has its own strengths and weaknesses and provides insight into the future of AI development and the emergence of advanced AI models. Gemini Advanced signifies the start of a wave of AI development rather than the end. It suggests the potential for future AI agents to function as powerful assistants. (One Useful Thing; Feb. 8, 2024)
  • AI Content Vs a Top 1% Writer (Dan Martin from AI Monks; behind paywall on Medium, but you should be able to see the opening paragraphs). Here is a good summary of Martin’s findings about ChatGPT at this moment in time:

    Comparison of AI-generated writing and human-written content highlights limitations and emphasizes the need for human creativity and originality in content creation:

    • AI’s Limitations in Writing:
      • AI-generated writing lacks readability and quality, and is incapable of producing new ideas and insights without heavy prompting.
      • AI writing tools like ChatGPT simply replicate what’s already out there, using different phrasing to give the illusion of being creative.
    • Use of ChatGPT for Idea Generation:
      • ChatGPT can assist in brainstorming and suggesting themes and ideas based on the user’s inputs.
      • It can also help flesh out rough drafts and provide structure, making it a valuable tool for generating content ideas.
    • Overcoming Writer’s Block with ChatGPT:
      • When struggling with writer’s block, users can input a rough outline or bullet points into ChatGPT to kickstart creativity and get the writing process moving again.
      • This demonstrates the potential of ChatGPT as a creative ally rather than a lazy shortcut.
    • ChatGPT’s Role in Content Quality Enhancement:
      • ChatGPT can also be used to proofread content, check for grammatical errors, and suggest improvements in readability, thereby enhancing content quality.
      • Users should ensure to balance ChatGPT’s outputs with their unique voice and style and verify information for ethical and quality considerations.
    • AI Content vs. Human Writing:
      • The comparison between AI-generated writing and human-written content highlights the limitations of AI in terms of context understanding, accuracy, and genuine creativity.
      • It emphasizes the need for human creativity and originality in content creation despite the assistance of AI tools like ChatGPT.
    • Differences in Writing Styles:
      • AI-generated content can be identified by specific words and phrases it overuses, such as ‘ever-evolving landscape,’ ‘harness,’ ‘delves,’ and an overuse of semi-colons.
      • Human writing exhibits perplexity and burstiness, characteristics that AI struggles to replicate, leading to more robotic-sounding content.
  • The AI Revolution in Higher Ed Even keeping in mind that Grammarly helped produce this booklet, it still offers some useful data and interesting ideas. (Feb. 2024)

  • Is AI Just a Tool for Lazy People? Short answer: No. Mark Herschberg’s (MIT) conclusion is that generative AI is actually being leveraged effectively by highly engaged professionals. (Medium; Feb. 7, 2024)
  • Wisdom Skills Are Hard to Teach—AI Can Help The author makes the case that experiential learning through AI-powered games can address the shortage of extended on-the-job experience, offering the potential for unlocking big-picture cognition. (Inside Higher Ed; Feb. 7, 2024)
  • Generative AI, Bullshit as a Service As Alberto Romero points out, AI is being used for dishonest and malicious purposes, from generating disinformation to creating spam. While these uses are disturbing, Romero argues that despite dire warnings of catastrophic outcomes, AI is primarily used to create “BS.” A philosophical-lite treatise, worth a read. (The Algorithmic Bridge; Feb. 6, 2024) 
  • Pinging the scanner, early February 2024 Bryan Alexander takes a look at recent AI and tech updates from Google, Amazon and Microsoft. Yes, Co-Pilot will be ubiquitous. Also, programmers are designing “hostile AI architecture” in the hopes of addressing copyright infringement issues. Below, you will find the OG Rufus, the Welsh Corgi after which Amazon programmers have named their chatbot shopping assistant. (Bryan’s Substack; Feb. 6, 2024)

    Photograph of a Welsh Corgi
    The Original Rufus

Education Week: Spotlight on AI Ed’s Rec This compilation of articles about generative AI in the K-12 space is very helpful. (Education Week; Feb. 2024)

  • 7 Questions College Leaders Should Ask about AI Presidents and others should be developing strategies to ensure their institutions are positioned to respond to the opportunities and risks, writes David Weil (Brandeis). (Inside Higher Ed; Feb. 1, 2024)

January 2024

  • What Can Be Done in 59 Seconds: An Opportunity (and a Crisis) Ed’s Rec. Mollick reflects on how generative AI has proven to be a powerful productivity booster, with evidence for its effectiveness growing over the past 10 months. The wide release of Microsoft’s Copilot for Office and OpenAI’s GPTs has made AI use much easier and more normalized. (One Useful Thing; Jan. 31, 2024)
  • The Biggest AI Risk in 2024 May be behind a paywall, but I will provide a summary. Thomas Smith traces several big issues: data privacy (1 in 10 medical providers use ChatGPT, which means patient data is likely being compromised), copyright issues, and, of course, hallucinations. However, Smith sees the biggest risk of generative AI as . . . pretending it doesn’t exist and not learning how to use it ethically. His focus is primarily on business, but this observation is worth considering: “Avoiding grappling with AI challenges is itself a decision.” (The Generator; Jan. 26, 2024)
  • Embracing AI in English Composition Ed’s Rec. From the abstract: A mixed-method study conducted in Fall 2023 across three sections, including one English Composition I and two English Composition II courses, provides insightful revelations. The study, comprising 28 student respondents, delved into the impact of AI tools through surveys, analysis of writing artifacts, and a best practices guide developed by an honors student. (International Journal of Changes in Education; Jan. 22, 2024)

  • Last Year’s AI Views Revisited Ed’s Rec. Another great read, this time by Lance Eaton. In terms of higher education, Eaton stresses the importance of faculty fully understanding the technology and shaping its use in the classroom to mitigate emerging problems. Even if you are not all that interested in generative AI, this article is worth the read. (AI + Education = Simplified; Jan. 24, 2024)
  • ChatGPT Can’t Teach Writing: Automated Syntax Generation Is Not Teaching Ed’s Rec John Warner steps in to fire back at OpenAI’s partnership with Arizona State. (Inside Higher Ed; Jan. 22, 2024)
  • What Happens When a Court Cuts Down ChatGPT? Ed’s Rec. Not an idle question posed by futurist Bryan Alexander. (Bryan’s Substack; Jan. 21, 2024)
  • ChatGPT Goes to College Bret Kinsella muses over the ways OpenAI’s partnership with Arizona State will benefit both parties. (Synthedia; Jan. 20, 2024)
  • OpenAI Announces First Partnership with a University According to the article, “Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.” (CNBC; Jan. 18, 2024)
  • AI Dominates Davos It’s that time of year! (CNBC; Jan. 17, 2024)
  • AI Writing Is a Race to the Bottom by Alberto Romero, The Algorithmic Bridge Romero’s article discusses how AI writing tools, while offering convenience and efficiency, create a competitive environment that forces human writers to use these tools, ultimately sacrificing the uniqueness of human writing to Moloch, the system of relentless competition. (Jan. 17, 2024)
  • The Lazy Tyranny of the Wait Calculation by Ethan Mollick, One Useful Thing. Mollick introduces the concept of a “Wait Calculation” in the context of AI development, where waiting for advancements in AI technology before starting a project can sometimes be more beneficial than immediate action, highlighting the rapid pace of AI development, its potential to impact various fields, and the need to consider the timeline of AI progress in long-term decision-making. (Jan. 16, 2024)
  • Who Is ChatGPT? by Dean Pratt, AI Mind A fascinating—or really creepy, depending upon your POV—article in which the author explores a philosophical conversation with an AI entity named Bard, discussing a potential future where AI technology becomes a co-creator and catalyst for experiences blending the real and dreamlike, as well as the importance of empathy, optimism, and interconnectedness in the interaction between humans and AI. (Jan. 14, 2024)
  • Creating a Useful GPT? Maybe . . . Lance Eaton has been experimenting with creating customized GPTs. The article explains how one can go about doing it and discusses their limits as well as their promises for the future. (AI + Education = Simplified; Jan. 8, 2024)

**Important OER Resource from Oct. 2023: TextGenEd: Teaching with Text Generation Technologies** Edited by Vee et al., WAC Clearinghouse. At the cusp of this moment defined by AI, TextGenEd collects early experiments in pedagogy with generative text technology, including but not limited to AI. The fully open-access and peer-reviewed collection features 34 undergraduate-level assignments to support students’ AI literacy, rhetorical and ethical engagements, creative exploration, and professional writing with text gen technology, along with an introduction to guide instructors’ understanding and their selection of what to emphasize in their courses. (Oct. 2023, but I have placed the book here.)

Book Launch of TextGenEd: Teaching with Text Generation Technologies:

  • Signs and Portents: Some Hints about What the Next Year in AI Looks Like by Ethan Mollick, One Useful Thing Ed’s Rec The article discusses the accelerated development of artificial intelligence (AI) and its impact on various aspects of society, emphasizing the need for proactive measures to navigate the challenges and opportunities presented by AI. Mollick highlights AI’s impact on work, its ability to alter the truth through deepfakes and manipulated media, and its effectiveness in education. (Jan. 6, 2024)
  • How Will AI Disrupt Higher Education in 2024? By Ray Schroeder, Inside Higher Ed The article discusses the significant impact of generative AI on higher education, highlighting its potential to provide personalized learning experiences, assist faculty, and enhance course outcomes, while also addressing concerns about the emergence of artificial general intelligence (AGI) and its potential implications for education. (Jan. 6, 2024)
  • Gender Bias in AI-Generated Images: A Comprehensive Study by Caroline Arnold in Generative AI on Medium (paywall). Arnold shares her findings about gender bias in the Midjourney generative AI algorithm when generating images of people for various job titles, highlighting that the AI model often fails to generate female characters in images, especially for professions where women are underrepresented. While the article is behind a paywall, you can probably find other articles on this topic. (Jan. 4, 2024)
  • AI and Teaching College Writing A Future Trends forum discussion, again with Bryan Alexander (Jan. 4, 2024)

  • The NYT vs OpenAI Is Not Just a Legal Battle by Alberto Romero, The Algorithmic Bridge This article explores the New York Times (NYT) lawsuit against OpenAI, focusing on the deeper disagreement regarding the relationship between morality and progress in the context of AI, suggesting that while pro-AI arguments emphasize the potential benefits of technology, there should be a more balanced consideration of its impact on society and creators’ rights. (Jan. 3, 2024)
  • Empowering Prisoners and Reducing Recidivism with ChatGPT by Krstafer Pinkerton, AI Advances. Note: The article in its entirety is behind a Members Only paywall, but perhaps you can find Pinkerton’s musings elsewhere. The article explores the potential use of AI language model ChatGPT in prisoner rehabilitation to reduce recidivism rates, emphasizing personalized learning experiences, a safe environment, and ethical considerations, while also highlighting future developments and calling for collective action to responsibly harness AI’s potential in this context. (Jan. 2, 2024)
  • Envisioning a New Wave of AI on Campus with Bryan Alexander and Brent Anders This was a fun scenario exercise, in which participants were asked to imagine a future with AI avatars as instructors. (Jan. 1, 2024)