2024 Articles & Resources


Optimizing AI in Higher Education: SUNY FACT² Guide, Second Edition Created by SUNY faculty and Ed Tech professionals. Very, very useful! (Summer 2024)

AI Policy Toolkit: Transparency Checklist Leon Furze provides school admins and faculty with a useful checklist covering the use of AI. Although the checklist is aimed at K-12 school officials, it is useful for higher ed as well. From the blog post: By following this checklist, schools can help ensure they’re aligned with the Framework’s transparency principle, and more broadly with best-practice in implementing new technologies. It’s not just about ticking boxes, though. True transparency requires an ongoing commitment to open communication and a willingness to engage with all members of the school community. (July 2024)

Developing a Model for AI across the Curriculum: Transforming the Higher Education Landscape Via Innovation in AI Literacy OK, not a 2024 article, but one that is very insightful. From the abstract: This position paper describes one possible path to address potential gaps in AI education and integrate AI across the curriculum at a traditional research university. The University of Florida (UF) is infusing AI across the curriculum and developing opportunities for student engagement within identified areas of AI literacy regardless of student discipline. (Computers and Education: Artificial Intelligence; Vol. 4, 2023)


The Chronicle of Higher Education Archive of Ed Tech Articles

Inside Higher Ed Archive of AI Articles

NYTimes Archive of AI Articles

WSJ Archive of AI Articles and Videos


July 2024

  • Enhancing Learning through AI and Human Educators The author discusses the benefits of integrating AI with human educators, arguing that AI excels in personalization, accessibility, and scalability, while human educators provide emotional support, motivation, and creative instruction. The final recommendation is for a hybrid model leveraging both AI and human strengths. (eSchool News; July 15, 2024).
  • Calling BS on the AI Education Future John Warner argues that embracing AI in education prioritizes productivity over the learning process, potentially undermining the fundamental aspects of teaching and learning. He critiques the mindset of venture capitalists who promote AI integration, suggesting it overlooks the importance of human interaction and the educational journey. (Inside Higher Ed; July 10, 2024)
  • Animated AI TAs Come to Morehouse Morehouse College is introducing animated, AI-powered teaching assistants in five classrooms this fall. These AI avatars, trained from professors’ lectures and course materials, will provide students with 24/7 access to course-related information, enhancing the learning experience without replacing human faculty. (Inside Higher Ed; July 9, 2024)
  • How Will the Rise of AI In the Workplace Impact Liberal Arts Education? Experts predict that skills like critical thinking and creativity will be more coveted as artificial intelligence replaces some technical jobs. As advanced AI tools become more prevalent in businesses, the demand for liberal arts majors is expected to rise due to their ability to address ethical implications and manage complex human interactions that AI cannot handle. College leaders need to adapt to these workforce changes, emphasizing the unique skills liberal arts education provides, such as ethical reasoning, creative problem-solving, and the ability to synthesize information. (Higher Ed Dive; July 8, 2024)
  • Renowned Tech Analyst Urges Higher Ed Leadership in AI Tech Tech analyst and venture capitalist Mary Meeker pushes for universities to partner with businesses and adopt AI—and quickly. (Inside Higher Ed; July 8, 2024)
  • The Myth of the AI First Draft Ed’s Rec. Everyone is a writer now. Leon Furze argues that the high value placed on writing in education can be detrimental, especially with the rise of generative AI, which is often used to create “AI first drafts.” This approach can undermine the development of genuine writing skills and critical thinking, as it bypasses the process of idea formation and personal expression that is essential to meaningful writing. (Leon Furze)
  • When AI Triggers Our Impostor Syndrome Marc Watkins observes that many in academia struggle with impostor syndrome, feeling unqualified and like they don’t belong. Embracing impostor syndrome can serve as a catalyst for empathy, curiosity, and humility in educational conversations. Modeling openness and courage is crucial in engaging students in discussions about emerging technologies like AI. (Rhetorica; July 5, 2024)
  • How I Use ChatGPT as an Intuition Engine An interesting article about using ChatGPT, along with Google, to enhance the search experience. Romero says that ChatGPT’s strength lies in providing likely answers to imprecise queries, offering a broad range of responses, which people can then use to craft a more precise query for Google. (The Algorithmic Bridge; July 5, 2024)
  • Gradually, then Suddenly: Upon the Threshold Ed’s Rec. Mollick argues that small improvements have been leading to BIG changes (see images below—the prompts used in 2022 and 2023 are the same: “fashion photoshoot inspired by Van Gogh”). However, he also points out that technological change happens gradually . . . until certain thresholds of capability are passed. (One Useful Thing; July 4, 2024)
    GenAI generated images from 2022 and 2023
  • What Does It Mean for Students to be AI-Ready? Ed’s Rec. Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner. (Inside Higher Ed/Times Higher Ed; July 4, 2024) Image below from the article, showing people’s responses to AI:

Find this chart and a description in the article itself.

  • Why Writing Needs Good Friction Ed’s Rec. If you are interested in GenAI, higher education, writing, and pedagogy, take a look at this wonderful article by Leon Furze. He argues that in education, the push to reduce friction with tools like generative AI for writing can undermine the learning process, as struggling with initial drafts and overcoming challenges fosters deeper understanding and creativity. He provides links to articles/blogs by others, noting that his post is a “contribution to that much larger conversation about friction that seems to have progressed in the past few months around the implications of artificial intelligence and writing.” Furze provides his readers with a wonderful overview of what people who teach writing are thinking about GenAI. (Leon Furze; July 3, 2024)
  • AI Reshapes Higher Ed and Society at Large by 2035 The writer argues that those of us in higher ed need to prepare “for the deep societywide changes that will take place in the next five to ten years.” As generative AI and autonomous agents become more prevalent, they will increasingly take on roles traditionally held by humans, prompting institutions to rethink their missions and approaches to education in a landscape where many jobs may be automated. (Inside Higher Ed; July 3, 2024)
  • No One Knows How AI Works Alberto Romero says, “Don’t believe me? Ask ChatGPT.” As many researchers (and bloggers) have pointed out, neural networks, the driving force behind AI, remain enigmatic “black boxes” that defy human understanding despite their widespread use in various applications. (The Algorithmic Bridge; July 3, 2024)
  • Anatomy of an AI Essay The writer covers some “tells” instructors can use to identify AI-generated essays. Editor’s note: But be careful—some of these are characteristic of undergraduate writing! (Inside Higher Ed; July 2, 2024)
  • AI Economics and What It All Might Mean for the Future Bryan Alexander’s article covers emerging trends in the area of AI + business. (Bryan’s Substack; July 1, 2024)
  • How Higher Ed Can Adapt to the Challenges of AI (sign into our library database to access article) Ed’s Rec. Joseph E. Aoun’s article discusses the transformative impact of artificial intelligence (AI) on human experience and emphasizes the crucial role of higher education in preparing students for an AI-driven world. He argues that universities must go beyond merely teaching technical skills and should focus on fostering a comprehensive understanding of AI’s implications across various aspects of life, including the physical, cognitive, and social selves. Aoun proposes an updated educational framework, “Humanics 2.0,” which integrates foundational AI knowledge, experiential learning, and lifelong learning to help students navigate and thrive in a rapidly evolving digital landscape. (The Chronicle; July 1, 2024)
  • Google Studied Gen Z. What They Found Is Alarming Ed’s Rec. Here is a pull quote: “Within a week of actual research, we just threw out the term information literacy,” says Yasmin Green, Jigsaw’s CEO. Gen Zers, it turns out, are “not on a linear journey to evaluate the veracity of anything.” Instead, they’re engaged in what the researchers call “information sensibility” — a “socially informed” practice that relies on “folk heuristics of credibility.” In other words, Gen Zers know the difference between rock-solid news and AI-generated memes. They just don’t care. (Business Insider; July 1, 2024)

June 2024

AI Plagiarism: Part I: Lance Eaton evaluated three recent scholarly articles about students and GenAI. Here are the main findings of the three articles:

Wecks, J. O., Voshaar, J., Plate, B. J., & Zimmermann, J. (2024). Generative AI Usage and Academic Performance. arXiv preprint arXiv:2404.19699. Main Finding: The study finds that students using GenAI tools, such as ChatGPT, score on average 6.71 points lower (out of 100) than non-users. This is statistically significant and indicates a notable negative impact on academic performance. The negative effect is more pronounced among students with high learning potential, suggesting that GenAI usage might impede their learning progress.

Zhang, M., & Yang, X. (2024). Google or ChatGPT: Who is the Better Helper for University Students. arXiv preprint arXiv:2405.00341. Main Finding: The study compared the effectiveness of ChatGPT and Google in assisting university students with academic tasks. Preference for ChatGPT: 51.7% of students preferred using ChatGPT for academic help-seeking, while 48.3% preferred Google. This indicates a slight preference for ChatGPT among the students surveyed.

Luo, J. (2024). How does GenAI affect trust in teacher-student relationships? Insights from students’ assessment experiences. Teaching in Higher Education, 1–16. Main Findings:

  • Erosion of Trust: The rise of GenAI has led to increased suspicion and a perceived erosion of trust between students and teachers. Students fear being wrongly accused of cheating due to AI-mediated work.
  • Transparency Issues: There is a lack of “two-way transparency” where students must declare their AI use and submit chat records, but teachers’ grading processes remain opaque. This creates a power imbalance and reinforces top-down surveillance.
  • Risk Aversion: To avoid accusations of cheating, some students avoid using AI tools entirely, even for permissible tasks like grammar checking, due to ambiguous guidelines on AI use.
  • Personal Connection: The lack of personal connection between students and teachers exacerbates distrust. Large class sizes and limited interactions prevent the development of individualized trust.

AI Plagiarism: Part II: In a follow-up post, Eaton looks at how conversations with students might be handled. He begins, however, by noting: “I’m focusing this on traditional assignments. I’m also less and less and less a fan of these assignments.” Take a look at what Lance Eaton has to say about these conversations: When Students Use AI (June 20, 2024)

AI Plagiarism: Part III: Ed’s Rec. This is a great article about how to approach conversations with your students about GenAI. Lance Eaton goes into some depth about topics you might discuss. (June 26, 2024)

How to Use AI to Create Role-Play Scenarios for Your Students Ed’s Rec. A really useful article by Ethan and Lilach Mollick. Role-play exercises offer a unique educational tool for students, allowing them to explore and practice skills in a low-risk environment—and GenAI makes the process of creating tailored role-playing opportunities easier. (Harvard Business School; June 2024)

How to Humanize AI Content (12 Easy Steps) Ed’s Rec. This would be a good piece to share with students. Jalli’s (a blogger trying to monetize his advice about . . . blogging!) main points are true of any piece of writing, AI-generated or not. Helping students to identify where GenAI fails as a writer will help them understand what “makes” a good piece of writing. (Artturi Jalli; June 7, 2024)

Writing as “Passing” Ed’s Rec. A compelling piece by Helen Beetham noting that students have long felt excluded from academic discourse (pre-GenAI). Now, students must navigate a new landscape, where they can be accused of over-relying on GenAI. Beetham looks at ways instructors can encourage students to develop as writers. Well worth a look! (Imperfect Offerings; June 25, 2024)

A New Digital Divide: Student AI Use Surges, Leaving Faculty Behind Ed’s Rec. The title of the article says it all. (Inside Higher Ed; June 25, 2024)


  • Culture and Generative AI: An Update. Ed’s Rec. This piece by Bryan Alexander gives a snapshot of where we (Note: people in the US, mainly) are in terms of public perception and use of GenAI. There are some very interesting findings and conjectures! For instance, the divide over GenAI use in various fields is likely leading to many professionals using it . . . but being afraid to admit to its use for fear of being publicly shamed. (Bryan’s Substack; June 26, 2024)

  • Building Florida’s First AI Degree Program Miami Dade College just announced a new BS in applied AI. Listen to this Podcast, with Antonio Delgado, VP of Innovation and Technology Partnerships at Miami Dade. (Campus Technology; June 24, 2024)
  • British Academics Despair as ChatGPT-Written Essays Swamp Grading Season Pull quote: ‘It’s not a machine for cheating; it’s a machine for producing crap,’ says one professor infuriated by the rise of bland essays. (Inside Higher Ed; June 21, 2024)
  • AI Doesn’t Kill Jobs? Tell that to Freelancers On the one hand, freelancers are losing income due to GenAI; on the other, the initial appeal of AI for companies is beginning to fade as CEOs realize the inferior quality of the content, making freelancers with strong writing/editing/revising skills valuable. (WSJ; June 21, 2024) A graph showing the impact of GenAI by task type can be found in the article.
  • Can GenAI Be Used to Support “Accidental” Asynchronous Learners in HyFlex Courses? An interesting article by HyFlex champion Brian Beatty. To learn more about Hyflex—and GenAI—peruse the article. (Hyflex Learning Community; June 21, 2024)
  • Forget Cheating. Here’s the Real Question about AI in Schools. What are some of the questions? How are we thinking about redesigning learning for an AI world? What does that mean for skills for students? What does that mean for skills for teachers? What does teaching and learning need to look like in the AI world? (Education Week; June 21, 2024)
  • Racist, Robotic, and Random: More Thoughts on Generative AI Grading  Leon Furze argues against using generative AI for grading, citing concerns over bias, lack of transparency, and the potential to widen economic divides in education. He acknowledges the appeal of AI’s efficiency and consistency but emphasizes that AI cannot replace the nuanced judgment of human educators and could lead to the deprofessionalization of teaching. (Leon Furze; June 20, 2024)
  • Latent Expertise: Everyone Is in R&D Ethan Mollick makes a great point about the fact that to optimize the use of AI systems, experts need to share their knowledge to enhance the overall learning experience. Educators and ed tech experts have been engaged in various projects around the world. We will need to engage in this R&D—and share our findings—to make GenAI implementation meaningful. (One Useful Thing; June 20, 2024)
  • California Bill Would Prevent AI Replacement of Community College Faculty Well, that’s good news! (Inside Higher Ed; June 20, 2024)
  • New UNESCO Report Warns that GenAI Threatens Holocaust Memory Ed’s Rec. This is a chilling report, and of course, one may read it as applying to other historical events. According to UNESCO, generative AI poses a threat to the accurate memory of the Holocaust by potentially spreading disinformation and fabricated content. The report urges the urgent implementation of ethical AI principles to prevent distortion of historical facts and ensure that future generations receive accurate education about the Holocaust. (UNESCO; June 18, 2024).
  • Inside Barnard’s Pyramid Approach to AI Literacy Barnard College has introduced a pyramid approach to AI literacy, starting with basic understanding and gradually progressing to advanced AI applications, aiming to equip students and faculty with comprehensive AI skills while ensuring ethical considerations. (Inside Higher Ed; June 11, 2024) And here is another article from Educause about the AI Pyramid: A Framework for AI Literacy
  • AI Literacy Pyramid explained in article

    An AI Boost for Academic Advising Advances in AI mean that colleges can offer tailored support to students. The article discusses the way in which AI-powered tools are helping academic advisors. (Inside Higher Ed; June 18, 2024)

  • AI Plagiarism: Part 1: Plagiarism Detectors (They’re Not Our Friends) Generative AI plagiarism checkers have significant flaws and can produce false positives, particularly for non-native English speakers. Lance Eaton argues that encouraging faculty to have open conversations with students about their work (instead of just relying on plagiarism checkers) helps in understanding their processes and insights. (AI + Education = Simplified; June 17, 2024)
  • AI with Dr. Jason Wrench, SUNY New Paltz (…and others)! (June 17, 2024)


  • A great overview of GenAI for non-techies. (AI Search; June 16, 2024)

  • How AI Can Catch Up with Pedagogy (Not the other way around!) The article responds to criticism that pedagogy needs to catch up with advances in GenAI. As Furze points out, a lot of the information that chatbots provide to educators about pedagogy is outdated. Furze highlights the limitations of AI in understanding complex classroom dynamics and the necessity for technology developers to engage with educators to create meaningful and effective AI tools for education. (Leon Furze; June 12, 2024)
  • Doing Faculty Consultations Leon Furze reports on recent webinars he has been doing with college faculty. While Furze acknowledges concerns about generative AI, he calls for reframing the focus beyond ‘cheating’ aspects. (June 11, 2024)

  • Chatbots STILL aren’t the future of AI in education… so what is? Leon Furze argues that chatbots are not the future of AI in education due to their limitations. He believes that more advanced, multimodal AI systems, integrating technologies like image recognition and voice capabilities, offer greater potential. (June 11, 2024)

  • Generative AI as Simulation Runner  Bryan Alexander’s article discusses how generative AI can be used to create realistic simulations for educational purposes. These simulations can model complex systems and scenarios, providing interactive and immersive learning experiences. (Bryan’s Substack; June 10, 2024)
  • I’m a Computer Programmer and Wrote Professionally about ChatGPT for 1.5 Years — Here’s How Pro Writers Can Be Okay This article (June 9, 2024) by R. Paulo Delgado is behind a Medium paywall, so here are two pull quotes that sum up his main points:

    • GenAI often provides inaccurate information and is lousy at math—always keep in mind that GenAI is not actually intelligent.
    • ChatGPT has its uses for non-writers—just like Canva and CapCut have their uses for non-designers.
  • Doing Stuff with AI: Opinionated Mid-Year Edition Ethan Mollick walks his readers through what he has been doing with GenAI. He explores the evolving capabilities and playful uses of AI, emphasizing practical applications alongside fun, interactive experiences. In addition, Mollick highlights advancements in Large Language Models, their integration with tools for creating music, images, and code, and their potential for transforming educational and professional tasks. (One Useful Thing; June 6, 2024)
  • Living in the Generative Age Bryan Alexander argues that we are entering a generative age where AI tools create content on the fly, shifting from the traditional publication model that has defined information dissemination from 1400 to 2022. This transition changes user habits from searching for content to directly asking questions and receiving instant responses, raising discussions about the implications and potential short lifespan of this generative era. (Bryan’s Substack; June 1, 2024)
  • Information Age vs Generation Age Technologies for Learning Although David Wiley shared this to his blog back in April, Bryan Alexander references Wiley’s post in his article (above), so I wanted to keep them together. According to Wiley, the shift from the Information Age to the Generation Age in technology-mediated learning involves moving from distributing perfect copies of existing resources to using generative AI to create new, dynamic content in response to specific queries, fundamentally altering pedagogical approaches and supporting infrastructure. (Improving Learning; April 29, 2024)

May 2024

GPT-4o (The “o” is for “omni”)

**Excellent Resource**: Rethinking Assessment for Generative AI Leon Furze advocates moving away from high-stakes, written assessments such as essays and tests, which are susceptible to generative AI “cheating.” He brings UNESCO guidelines into his comprehensive guide. (Leon Furze; May 2024)

Trends Snapshot: The Emerging Role of Gen AI in Academic Research (The Chronicle)

Introducing ChatGPT Edu OpenAI’s page about “an affordable offering for universities to responsibly bring AI to campus.”


  • New ChatGPT Version Aiming at Higher Ed OpenAI introduced ChatGPT Edu, a new AI toolset for higher education institutions focused on tutoring, writing grant applications, and résumé reviews, with privacy safeguards and input from universities like ASU, University of Pennsylvania, and Oxford. This new development is prompting both cautious optimism and worries. (Inside Higher Ed; May 31, 2024)

  • Research Insights #9: Student-Focused Studies, Part 6 Lance Eaton regularly reviews the content of research articles and then summarizes their findings. In this post, he looks at three recent scholarly journal articles. His main findings are: 1) ChatGPT provides immediate answers, saving students time compared to traditional search engines like Google; and 2) ChatGPT aids in overcoming ‘brain freeze’ by providing inspiration for starting assignments. However, there are a number of challenges, including the often-noted point that institutions may need to reconsider assessment methods to counteract academic misconduct facilitated by ChatGPT. (AI + Education = Simplified; May 29, 2024)
  • Don’t Use GenAI to Grade Student Work In this post, Leon Furze argues against using Generative AI (GenAI) for grading student work, highlighting issues of inconsistency, bias, and lack of true understanding. He contends that while AI can provide superficial improvements, it cannot replace the nuanced, empathetic judgment of human educators, and its use may exacerbate existing inequities in education. (Leon Furze; May 27, 2024)
  • The Great AI Challenge: We Test Five Top Bots on Useful, Everyday Skills In a detailed comparison of five top AI chatbots—OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, Perplexity, and Anthropic’s Claude—each was evaluated for their performance in various real-life tasks. The results varied across categories, with Perplexity emerging as the overall winner for its concise and accurate responses, while ChatGPT excelled in health advice and cooking but lagged behind in creative writing. Despite rapid advancements and upgrades, no single chatbot consistently outperformed the others in all areas, showcasing their unique strengths and weaknesses. (WSJ; May 25, 2024)
  • A Provocation for Generating AI Alternatives Helen Beetham discusses alternatives to current AI development, emphasizing human-centric and democratic approaches. She shares insights from a workshop on generating AI alternatives, focusing on collaborative and creative methods to reframe AI’s role in society. What follows is a brief video, in which Beetham highlights concerns about AI models memorizing copyrighted materials, which impacts cultural assessments and echoes dystopian warnings about the loss of originality and freedom.
  • Most Researchers Use AI-Powered Tools Despite Distrust Despite widespread distrust of AI companies, over 75% of researchers use AI tools in their work, primarily for tasks like discovering, editing, and summarizing research. Ed’s note: Perhaps an alternate approach—as Suzanne Massie once taught Ronald Reagan—doveryai, no proveryai (доверяй, но проверяй), “Trust but Verify.” (Inside Higher Ed; May 24, 2024)
  • Treat AI News Like a River, Not a Bucket  Medium writer Alberto Romero notes that it is challenging to sift through all the AI information that is being churned out by bloggers (like himself), news outlets, industry PR feed, and scholars. The solution? Adopting a mindset of treating the ‘to read’ pile like a river instead of a bucket and plucking choice items from the ‘river’ rather than feeling compelled to empty an unmanageable ‘bucket’. Point noted. (Medium; May 24, 2024)
  • How Two Professors Harnessed GenAI to Teach Students to Be Better Writers Two professors at Carnegie Mellon University developed a tool called myScribe, leveraging “restrained generative AI” to help students improve their writing by converting notes into prose, thus reducing the cognitive load of sentence generation and allowing students to focus more on their ideas and overall structure. This approach aims to prevent misuse and “hallucinations” of AI by restricting the AI’s output to the student’s own notes, enhancing the writing process without compromising the quality of thought or expression. (Fast Company; May 23, 2024)
  • How Generative AI Tools Assist with Lesson Planning Pull quote from a K-12 instructor: “My colleagues and I utilize MagicSchool’s suggested 80/20 approach of using artificial intelligence to help with designing a lesson. AI does the bulk of the initial work, which we review for bias and accuracy. Then we step in and take care of the rest, which amounts to about 20 percent of the task.” This article’s main focus is on MagicSchool.ai. (Edutopia; May 22, 2024) 
  • Are AI Tutors the Answer to Lingering Learning Loss? As some K-12 schools embrace chatbot tutors, others are still weighing the pros and cons. AI tutoring tools, such as Khan Academy’s Khanmigo powered by OpenAI’s GPT-4, offer significant advantages over human tutors, including lower costs, which support equitable access to education. As these tools continue to improve, they may present a promising solution for addressing lingering learning loss. (Ed Tech: Focus on K-12; May 22, 2024)
  • Colleges Bootstrap Their Way to AI Literacy Goldie Blumenstyk discusses how colleges are creatively building their AI expertise without needing large budgets or extensive computer-science faculties. She cites five different institutions: Metropolitan State University of Denver, Randolph College, Hudson County Community College, Marshall University, and Camden County College. (The Edge, Chronicle of Higher Ed; May 22, 2024)
  • The AI-Augmented Nonteaching Academic in Higher Ed The article looks at the increasing role of AI in non-teaching aspects of higher education, such as administration, research, and student support services. It highlights the potential benefits, like efficiency and cost savings, but also raises concerns about job displacement and the need for educators to adapt to these changes. Overall, it emphasizes the importance of leveraging AI responsibly to enhance rather than replace human involvement in academia. (Inside Higher Ed; May 22, 2024)
  • Empowering Student Learning: Navigating AI in the College Classroom  Through student surveys, it was found that students prefer to use AI to improve their work rather than replace their own efforts. The article suggests strategies for educators to guide students in responsibly using AI, balancing AI use with critical thinking, and fostering collaboration. (Faculty Focus; May 22, 2024)
  • AI Grading Is Already as ‘Good as an Overburdened’ Teacher, but Researchers Say It Needs More Work The article discusses early research indicating that AI systems like ChatGPT are nearly as effective as overburdened teachers at grading essays but warns that these AI tools need further improvement before being used for high-stakes grading. (The Hechinger Report; May 20, 2024)
  • AI’s New Conversation Skills Eyed for Education The new version of ChatGPT, GPT-4o, features enhanced human-like verbal communication. This upgrade allows for real-time conversations, emotion simulation, and improved language translation, making it a potential tool for personalized learning and tutoring. Educators see opportunities for tailored instruction, mock interviews, and deeper student engagement. (Inside Higher Ed; May 17, 2024)
  • GPTs for Scholars: Enablers of Shoddy Research? Despite resolving issues of fake citations, GPTs could enable shoddy research by allowing researchers to cite without proper engagement with the literature. The authors emphasize the need for further assessment, guidelines, and training to ensure the responsible use of these AI tools. (Inside Higher Ed; May 16, 2024)
  • Students Pitted against ChatGPT to Improve Writing New online courses at the University of Nevada, Reno, are designed to educate future educators about the limitations and potential of artificial intelligence (AI) through competitive writing tasks. Students in two courses are required to compete against ChatGPT in writing assignments, emphasizing the creative and intellectual strengths humans bring over AI tools. (Inside Higher Ed; May 15, 2024)
  • AI Has Changed Learning, Why Aren’t We Regulating It? According to Marc Watkins’s sobering analysis, educators need to be prepared for the changes AI tools like GPT-4o bring to the learning process. Existing regulations, such as FERPA, COPPA, IDEA, CIPA, and Section 504, must be considered for AI tools in education. (Rhetorica; May 14, 2024)
  • OpenAI GPT-4o: The New Best AI Model in the World. Like in the Movies. For Free. Behind a Medium paywall. Romero notes that “GPT-4o stands out as a multimodal end-to-end model, enabling processing of text, audio, voice, video, and images simultaneously, showcasing capabilities comparable to those seen in movies.” (Alberto Romero; May 13, 2024)
  • GenAI Strategy: Attack Your Assessment Ed’s Rec Leon Furze, as usual, is on target, this time with this call to revisit faculty approaches to assessment. The central theme is the necessity for educators, regardless of their field, to critically evaluate and, if necessary, revise their assessment strategies. This involves using GenAI tools to simulate student responses to current assessment tasks, thereby revealing vulnerabilities. The author provides practical steps for educators to engage with AI technology during faculty meetings and use it to test the robustness of their assessment tasks. (Leon Furze; May 13, 2024)
  • Experts Predict Major AI Impacts in New Report The Educause report, which includes the opinions of higher ed and tech experts, highlights the potential transformation AI can bring to teaching, learning, student support, and institutional management. It emphasizes the importance of preparing educators and institutions for this AI-driven shift in education to maximize its benefits. (Inside Higher Ed; May 13, 2024)
  • The Good, the Bad, & the Unknown of AI In a keynote at UMass, Lance Eaton focused on both the challenges and successes of generative AI. Two important points: AI can assist in navigating tedious tasks and projects, minimizing energy drain, and AI’s fallibility can heighten awareness and scrutiny of the information received. (AI + Education = Simplified; May 13, 2024)
  • Harnessing the Power of Generative AI: A Call to Action for Educators Ripsimé K. Bledsoe (Texas A&M) argues that if educators steer the integration of AI with intention and purpose, it can reach its potential as an immense and versatile student success tool. She offers six areas on which to focus efforts. (Inside Higher Ed; May 10, 2024)
  • No One Is Talking about AI’s Impact on Reading AI assistants can now summarize and query PDFs (this editor uses it), highlighting the potential impact on reading skills. While AI assistants can help many readers, including neurodiverse and second-language learners, these tools may wind up hindering students’ ability to critically engage with texts and form nuanced conclusions. Watkins suggests active reading assignments, social annotation tools, and group discussions can encourage students to focus on the act of reading and not rely solely on AI assistants. (Rhetorics; May 3, 2024)
  • Ditch the Detectors: Six Ways to Rethink Assessment for Generative Artificial Intelligence Ed’s Rec Leon Furze again writes a prescient article, this time about assessment. (Leon Furze; May 3, 2024)
  • GenAI Strategy for Faculty Leaders Ed’s Rec. Leon Furze offers a blueprint for developing AI guidelines. A great resource. (Leon Furze; May 1, 2024)
  • New AI Guidelines Aim to Help Research Libraries  The Association of Research Libraries announced a set of seven guiding principles for university librarians to follow in light of rising generative AI use. (Inside Higher Ed; May 1, 2024)

April 2024

AI Detection in Education Is a Dead End Researcher and PhD candidate Leon Furze’s blog post is a well-written exploration and explanation of AI detection software and its impact on students. For those who don’t want to wade through the study below, this is a friendly interpretation of some of its findings. (April 9, 2024)

**New Study** GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education From the article’s abstract: The results demonstrate that the detectors’ already low accuracy rates (39.5%) show major reductions in accuracy (17.4%) when faced with manipulated content, with some techniques proving more effective than others in evading detection. The accuracy limitations and the potential for false accusations demonstrate that these tools cannot currently be recommended for determining whether violations of academic integrity have occurred, underscoring the challenges educators face in maintaining inclusive and fair assessment practices. However, they may have a role in supporting student learning and maintaining academic integrity when used in a non-punitive manner.

The AI-Writing Paradox Ed’s Rec. Blogger/Educator Debra Lawal’s article discusses recent cases in which well-regarded professional writers have admitted to using GenAI for various tasks, and asks her readers to consider not only the usefulness of AI tools such as Grammarly, but also ways in which we can develop a distinct voice in the age of AI. This blog post would be a great one to assign to students. (Debra Lawal; April 12, 2024)


March 2024

Report: The Advantages that AI Brings to Higher Ed (Link to report included) A report highlights AI’s potential to enhance higher education through student support and data analysis, emphasizing the importance of equitable access and culturally aware design to prevent a new digital divide and ensure HBCUs and MSIs benefit without falling behind. (Diverse Issues in Higher Education; March 13, 2024)


February 2024

2024 Educause AI Landscape Study Ed’s Rec. This is an important study, looking at a number of areas, including strategic planning. (Educause: Feb. 2024)

  • AI & The Copyright & Plagiarism Dilemma As Lance Eaton notes, potential lawsuits against AI companies may lead to a rethinking of copyright in the digital age. A very thoughtful parsing of legal terms, plagiarism, and “transformative use.” (AI + Education = Simplified; Feb. 16, 2024)
  • How AI Has Begun Changing University Roles, Responsibilities A survey by Educause found that more faculty members and university leaders are starting to work with artificial intelligence in their roles. A lack of formalized training in AI was observed, with only 56 percent of universities training faculty members and even lower percentages for staff and students. (Inside Higher Ed; Feb. 13, 2024).
  • AI: The Unseen Ally in Mastering Deep Work Ed’s Rec Srinivas Rao makes a wonderful observation about how AI can enhance our capacity for deep work by helping us master complex things quickly and work at high levels of depth. Well worth the read. (Medium; Feb. 9, 2024)

A short 15-minute video by ed technologist Lance Eaton about how faculty and instructional designers can approach the use of generative AI:

  • Google’s Gemini Advanced: Tasting Notes and Implications Ethan Mollick does not provide a detailed review of Gemini but makes several broad statements about its capabilities. Gemini Advanced shares similarities with GPT-4 but also has its own strengths and weaknesses and provides insight into the future of AI development and the emergence of advanced AI models. Gemini Advanced signifies the start of a wave of AI development rather than the end. It suggests the potential for future AI agents to function as powerful assistants. (One Useful Thing; Feb. 8, 2024)
  • AI Content Vs a Top 1% Writer (Dan Martin from AI Monks; behind paywall on Medium, but you should be able to see the opening paragraphs). Here is a good summary of Martin’s findings about ChatGPT at this moment in time:

    Comparison of AI-generated writing and human-written content highlights limitations and emphasizes the need for human creativity and originality in content creation:

    • AI’s Limitations in Writing:
      • AI-generated writing lacks readability and quality, and is incapable of producing new ideas and insights without heavy prompting.
      • AI writing tools like ChatGPT simply replicate what’s already out there, using different phrasing to give the illusion of being creative.
    • Use of ChatGPT for Idea Generation:
      • ChatGPT can assist in brainstorming and suggesting themes and ideas based on the user’s inputs.
      • It can also help flesh out rough drafts and provide structure, making it a valuable tool for generating content ideas.
    • Overcoming Writer’s Block with ChatGPT:
      • When struggling with writer’s block, users can input a rough outline or bullet points into ChatGPT to kickstart creativity and get the writing process moving again.
      • This demonstrates the potential of ChatGPT as a creative ally rather than a lazy shortcut.
    • ChatGPT’s Role in Content Quality Enhancement:
      • ChatGPT can also be used to proofread content, check for grammatical errors, and suggest improvements in readability, thereby enhancing content quality.
      • Users should ensure to balance ChatGPT’s outputs with their unique voice and style and verify information for ethical and quality considerations.
    • AI Content vs. Human Writing:
      • The comparison between AI-generated writing and human-written content highlights the limitations of AI in terms of context understanding, accuracy, and genuine creativity.
      • It emphasizes the need for human creativity and originality in content creation despite the assistance of AI tools like ChatGPT.
    • Differences in Writing Styles:
      • AI-generated content can be identified by specific words and phrases it overuses, such as ‘ever-evolving landscape,’ ‘harness,’ ‘delves,’ and an overuse of semi-colons.
      • Human writing exhibits perplexity and burstiness, characteristics that AI struggles to replicate, leading to more robotic-sounding content.
  • The AI Revolution in Higher Ed Even keeping in mind that Grammarly helped produce this booklet, it still provides some useful data and interesting ideas. (Feb. 2024)

  • Is AI Just a Tool for Lazy People? Short answer: No. Mark Herschberg’s (MIT) conclusion is that generative AI is actually being leveraged effectively by highly engaged professionals. (Medium; Feb. 7, 2024)
  • Wisdom Skills Are Hard to Teach—AI Can Help The author makes the case that experiential learning through AI-powered games can address the shortage of extended on-the-job experience, offering the potential to unlock big-picture cognition. (Inside Higher Ed; Feb. 7, 2024)
  • Generative AI, Bullshit as a Service As Alberto Romero points out, AI is being used for dishonest and malicious purposes, from generating disinformation to creating spam. While these uses are disturbing, Romero argues that despite dire warnings of catastrophic outcomes, AI is primarily used to create “BS.” A philosophical-lite treatise, worth a read. (The Algorithmic Bridge; Feb. 6, 2024) 
  • Pinging the scanner, early February 2024 Bryan Alexander takes a look at recent AI and tech updates from Google, Amazon and Microsoft. Yes, Co-Pilot will be ubiquitous. Also, programmers are designing “hostile AI architecture” in the hopes of addressing copyright infringement issues. Below, you will find the OG Rufus, the Welsh Corgi after which Amazon programmers have named their chatbot shopping assistant. (Bryan’s Substack; Feb. 6, 2024)

    Photograph of a Welsh Corgi
    The Original Rufus

Education Week: Spotlight on AI Ed’s Rec This compilation of articles about generative AI in the k-12 space is very helpful. (Education Week; Feb. 2024)

  • 7 Questions College Leaders Should Ask about AI Presidents and others should be developing strategies to ensure their institutions are positioned to respond to the opportunities and risks, writes David Weil (Brandeis). (Inside Higher Ed; Feb. 1, 2024)

January 2024

  • What Can Be Done in 59 Seconds: An Opportunity (and a Crisis) Ed’s Rec. Mollick reflects on how generative AI has proven to be a powerful productivity booster, with evidence for its effectiveness growing over the past 10 months. The wide release of Microsoft’s Copilot for Office and OpenAI’s GPTs has made AI use much easier and more normalized. (One Useful Thing; Jan. 31, 2024)
  • The Biggest AI Risk in 2024 May be behind a paywall, but I will provide a summary. Thomas Smith traces several big issues: data privacy (1 in 10 medical providers use ChatGPT, which means patient data is likely being compromised), copyright issues, and, of course, hallucinations. However, Smith sees the biggest risk of generative AI as . . . pretending it doesn’t exist and not learning how to use it ethically. His focus is primarily on business, but this observation is worth considering: “Avoiding grappling with AI challenges is itself a decision.” (The Generator; Jan. 26, 2024)
  • Embracing AI in English Composition Ed’s Rec. From the abstract: A mixed-method study conducted in Fall 2023 across three sections, including one English Composition I and two English Composition II courses, provides insightful revelations. The study, comprising 28 student respondents, delved into the impact of AI tools through surveys, analysis of writing artifacts, and a best practices guide developed by an honors student. (International Journal of Changes in Education; Jan. 22, 2024)

  • Last Year’s AI Views Revisited Ed’s Rec. Another great read, this time by Lance Eaton. In terms of higher education, Eaton emphasizes the importance of faculty fully understanding the technology and shaping its use in the classroom to mitigate emerging problems. Even if you are not all that interested in generative AI, this article is worth the read. (AI + Education = Simplified; Jan. 24, 2024)
  • ChatGPT Can’t Teach Writing: Automated Syntax Generation Is Not Teaching Ed’s Rec John Warner steps in to fire back at OpenAI’s partnership with Arizona State. (Inside Higher Ed; Jan. 22, 2024)
  • What Happens When a Court Cuts Down ChatGPT? Ed’s Rec. Not an idle question posed by futurist Bryan Alexander. (Bryan’s Substack; Jan. 21, 2024)
  • ChatGPT Goes to College Bret Kinsella muses over the ways OpenAI’s partnership with Arizona State will benefit both parties. (Synthedia; Jan. 20, 2024)
  • OpenAI Announces First Partnership with a University According to the article, “Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.” (CNBC; Jan. 18, 2024)
  • AI Dominates Davos It’s that time of year! (CNBC; Jan. 17, 2024)
  • AI Writing Is a Race to the Bottom by Alberto Romero, The Algorithmic Bridge Romero’s article discusses how AI writing tools, while offering convenience and efficiency, create a competitive environment that forces human writers to use these tools, ultimately sacrificing the uniqueness of human writing to Moloch, the system of relentless competition. (Jan. 17, 2024)
  • The Lazy Tyranny of the Wait Calculation by Ethan Mollick, One Useful Thing. Mollick introduces the concept of a “Wait Calculation” in the context of AI development, where waiting for advancements in AI technology before starting a project can sometimes be more beneficial than immediate action, highlighting the rapid pace of AI development, its potential to impact various fields, and the need to consider the timeline of AI progress in long-term decision-making. (Jan. 16, 2024)
  • Who Is ChatGPT? by Dean Pratt, AI Mind A fascinating—or really creepy, depending upon your POV—article in which the author explores a philosophical conversation with an AI entity named Bard, discussing the potential future where AI technology becomes a co-creator and catalyst for experiences blending the real and dreamlike, as well as the importance of empathy, optimism, and interconnectedness in the interaction between humans and AI. (Jan. 14, 2024)
  • Creating a Useful GPT? Maybe . . . Lance Eaton has been experimenting with creating customized GPTs. The article explains how one can go about doing it and discusses their limits as well as their promises for the future. (AI + Education = Simplified, Jan. 8, 2024)

**Important OER Resource from Oct. 2023: TextGenEd: Teaching with Text Generation Technologies** Edited by Vee et al., WAC Clearinghouse. At the cusp of this moment defined by AI, TextGenEd collects early experiments in pedagogy with generative text technology, including but not limited to AI. The fully open-access, peer-reviewed collection features 34 undergraduate-level assignments to support students’ AI literacy, rhetorical and ethical engagements, creative exploration, and professional writing with text gen technology, along with an Introduction to guide instructors’ understanding and their selection of what to emphasize in their courses. (Oct. 2023, but listed here.)

Book Launch of TextGenEd: Teaching with Text Generation Technologies:

  • Signs and Portents: Some Hints about What the Next Year in AI Looks Like by Ethan Mollick, One Useful Thing Ed’s Rec The article discusses the accelerated development of artificial intelligence (AI) and its impact on various aspects of society, emphasizing the need for proactive measures to navigate the challenges and opportunities presented by AI. Mollick highlights AI’s impact on work, its ability to alter the truth through deepfakes and manipulated media, and its effectiveness in education. (Jan. 6, 2024)
  • How Will AI Disrupt Higher Education in 2024? By Ray Schroeder, Inside Higher Ed The article discusses the significant impact of generative AI on higher education, highlighting its potential to provide personalized learning experiences, assist faculty, and enhance course outcomes, while also addressing concerns about the emergence of artificial general intelligence (AGI) and its potential implications for education. (Jan. 6, 2024)
  • Gender Bias in AI-Generated Images: A Comprehensive Study by Caroline Arnold in Generative AI on Medium (paywall). Arnold shares her findings about gender bias in the Midjourney generative AI algorithm when generating images of people for various job titles, highlighting that the AI model often fails to generate female characters in images, especially for professions where women are underrepresented. While the article is behind a paywall, you can probably find other articles on this topic. (Jan. 4, 2024)
  • AI and Teaching College Writing A Future Trends forum discussion, again with Bryan Alexander (Jan. 4, 2024)

  • The NYT vs OpenAI Is Not Just a Legal Battle by Alberto Romero, The Algorithmic Bridge This article explores the New York Times (NYT) lawsuit against OpenAI, focusing on the deeper disagreement regarding the relationship between morality and progress in the context of AI, suggesting that while pro-AI arguments emphasize the potential benefits of technology, there should be a more balanced consideration of its impact on society and creators’ rights. (Jan. 3, 2024)
  • Empowering Prisoners and Reducing Recidivism with ChatGPT by Krstafer Pinkerton, AI Advances. Note: The article in its entirety is behind a Members Only paywall, but perhaps you can find Pinkerton’s musings elsewhere. The article explores the potential use of AI language model ChatGPT in prisoner rehabilitation to reduce recidivism rates, emphasizing personalized learning experiences, a safe environment, and ethical considerations, while also highlighting future developments and calling for collective action to responsibly harness AI’s potential in this context. (Jan. 2, 2024)
  • Envisioning a New Wave of AI on Campus with Bryan Alexander and Brent Anders This was a fun scenario exercise, in which participants were asked to imagine a future, with AI avatars as instructors. (Jan. 1, 2024)