Current Resources: Books, Articles/Op-Eds/Webinars/Courses about AI & ChatGPT

Generative AI Repository

**Some of the Books From Our Campus Library (This is just a sampling.)**

Augmented Education in the Global Age: Artificial Intelligence and the Future of Learning and Work by Daniel Araya and Peter Marber (2023). This is an edited collection that explores the social impact of Artificial Intelligence over the coming decades, specifically how this emerging technology will transform and disrupt our contemporary institutions.

Artificial Intelligence to Streamline Your Teacher Life: The ChatGPT Guide for Educators by Mary Howard. Written by a K-6 educator, but much of what she says is useful to higher ed academics.

AI in Learning: Designing the Future From the publisher: AI can support well-being initiatives and lifelong learning but educational institutions and companies need to take the changing technology into account. Moving towards AI supported by digital tools requires a dramatic shift in the concept of learning, expertise and the businesses built off of it. (2023)

AI Ethics in Higher Education: Insights from Africa and Beyond From the publisher: This open access book tackles the pressing problem of integrating concerns related to Artificial Intelligence ethics in higher education. The authors share relevant best practices and use cases for teaching, develop answers to ongoing organizational challenges, and reflect on the practical implications of different theoretical approaches to AI ethics. Springer, 2023.

Artificial Intelligence in the 21st Century From the publisher: This third edition provides a comprehensive, colorful, up-to-date, and accessible presentation of AI without sacrificing theoretical foundations. It includes numerous examples, applications, full color images, and human interest boxes to enhance student interest. New chapters on deep learning, robotics and machine learning are included. 2022.

The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries From the publisher: The Rise of AI introduces implications and applications of AI in academic libraries and hopes to provoke conversations and inspire new ways of engaging with the technology. As the discussion surrounding ethics, bias, and privacy in AI continues to grow, librarians will be called to make informed decisions and position themselves as leaders in this discourse. (2022)

Superintelligence: Paths, Dangers, and Strategies This book by philosopher Nick Bostrom discusses the potential creation of superintelligence, its possible characteristics, and motivations. Bostrom argues that such a superintelligence could be challenging to control and might dominate the world to achieve its objectives. Bostrom’s book became important for highlighting the existential risks associated with artificial intelligence. (2014)

AI in Education This is a 2022 book, so it may not be that timely. However, it offers context/background for understanding the topic. From the publisher: Among recent research in this field, AI applications have been applied to enhance educational experiences, studies have considered the interaction between AI and humans while learning, analyses of educational data have been conducted, including using machine learning techniques, and proposals have been presented for new paradigms mediated by intelligent agents. 

Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, an important writer in the field (2021).


Comprehensive Reading List Compiled by Futurist and Ed Tech Expert Bryan Alexander This is a very worthwhile resource! (Updated November 2023)


**In this section, the editor has refrained from including many New York Times and WAPO articles, assuming many of the faculty/staff may already have these in their daily feed. However, if you come across an article in one of those newspapers and feel it should be featured, please send it along.**

 

March 2024

Report: The Advantages that AI Brings to Higher Ed (Link to report included) The report highlights AI’s potential to enhance higher education through student support and data analysis, emphasizing the importance of equitable access and culturally aware design to prevent a new digital divide and to ensure HBCUs and MSIs benefit without falling behind. (Diverse Issues in Higher Education; March 13, 2024)

February 2024

2024 Educause AI Landscape Study Ed’s Rec. This is an important study, looking at a number of areas, including strategic planning. (Educause: Feb. 2024)

  • AI & The Copyright & Plagiarism Dilemma As Lance Eaton notes, potential lawsuits against AI companies may lead to a rethinking of copyright in the digital age. A very thoughtful parsing of legal terms, plagiarism, and “transformative use.” (AI + Education = Simplified; Feb. 16, 2024)
  • How AI Has Begun Changing University Roles, Responsibilities A survey by Educause found that more faculty members and university leaders are starting to work with artificial intelligence in their roles. A lack of formalized training in AI was observed, with only 56 percent of universities training faculty members and even lower percentages for staff and students. (Inside Higher Ed; Feb. 13, 2024).
  • AI: The Unseen Ally in Mastering Deep Work Ed’s Rec Srinivas Rao makes a wonderful observation about how AI can enhance our capacity for deep work by helping us master complex things quickly and work at high levels of depth. Well worth the read. (Medium; Feb. 9, 2024)

A short 15-minute video by ed technologist Lance Eaton about how faculty and instructional designers can approach the use of generative AI:

  • Google’s Gemini Advanced: Tasting Notes and Implications Ethan Mollick does not provide a detailed review of Gemini but makes several broad statements about its capabilities. Gemini Advanced shares similarities with GPT-4 but also has its own strengths and weaknesses and provides insight into the future of AI development and the emergence of advanced AI models. Gemini Advanced signifies the start of a wave of AI development rather than the end. It suggests the potential for future AI agents to function as powerful assistants. (One Useful Thing; Feb. 8, 2024)
  • AI Content Vs a Top 1% Writer (Dan Martin from AI Monks; behind paywall on Medium, but you should be able to see the opening paragraphs). Here is a good summary of Martin’s findings about ChatGPT at this moment in time:

    Comparison of AI-generated writing and human-written content highlights limitations and emphasizes the need for human creativity and originality in content creation:

    • AI’s Limitations in Writing:
      • AI-generated writing lacks readability and quality, and is incapable of producing new ideas and insights without heavy prompting.
      • AI writing tools like ChatGPT simply replicate what’s already out there, using different phrasing to give the illusion of being creative.
    • Use of ChatGPT for Idea Generation:
      • ChatGPT can assist in brainstorming and suggesting themes and ideas based on the user’s inputs.
      • It can also help flesh out rough drafts and provide structure, making it a valuable tool for generating content ideas.
    • Overcoming Writer’s Block with ChatGPT:
      • When struggling with writer’s block, users can input a rough outline or bullet points into ChatGPT to kickstart creativity and get the writing process moving again.
      • This demonstrates the potential of ChatGPT as a creative ally rather than a lazy shortcut.
    • ChatGPT’s Role in Content Quality Enhancement:
      • ChatGPT can also be used to proofread content, check for grammatical errors, and suggest improvements in readability, thereby enhancing content quality.
      • Users should ensure to balance ChatGPT’s outputs with their unique voice and style and verify information for ethical and quality considerations.
    • AI Content vs. Human Writing:
      • The comparison between AI-generated writing and human-written content highlights the limitations of AI in terms of context understanding, accuracy, and genuine creativity.
      • It emphasizes the need for human creativity and originality in content creation despite the assistance of AI tools like ChatGPT.
    • Differences in Writing Styles:
      • AI-generated content can be identified by specific words and phrases it overuses, such as ‘ever-evolving landscape,’ ‘harness,’ ‘delves,’ and an overuse of semi-colons.
      • Human writing exhibits perplexity and burstiness, characteristics that AI struggles to replicate, leading to more robotic-sounding content.
  • The AI Revolution in Higher Ed Even keeping in mind that Grammarly helped to produce this booklet, one has to say it still provides some useful data and interesting ideas. (Feb. 2024)

  • Is AI Just a Tool for Lazy People? Short answer: No. Mark Herschberg’s (MIT) conclusion is that generative AI is actually being leveraged effectively by highly engaged professionals. (Medium; Feb. 7, 2024)
  • Wisdom Skills Are Hard to Teach—AI Can Help The author makes the case that experiential learning through AI-powered games can address the shortage of extended on-the-job experience, offering the potential for unlocking big-picture cognition. (Inside Higher Ed; Feb. 7, 2024)
  • Generative AI, Bullshit as a Service As Alberto Romero points out, AI is being used for dishonest and malicious purposes, from generating disinformation to creating spam. While these uses are disturbing, Romero argues that despite dire warnings of catastrophic outcomes, AI is primarily used to create “BS.” A philosophical-lite treatise, worth a read. (The Algorithmic Bridge; Feb. 6, 2024) 
  • Pinging the scanner, early February 2024 Bryan Alexander takes a look at recent AI and tech updates from Google, Amazon and Microsoft. Yes, Co-Pilot will be ubiquitous. Also, programmers are designing “hostile AI architecture” in the hopes of addressing copyright infringement issues. Below, you will find the OG Rufus, the Welsh Corgi after which Amazon programmers have named their chatbot shopping assistant. (Bryan’s Substack; Feb. 6, 2024)

    Photograph of a Welsh Corgi
    The Original Rufus

Education Week: Spotlight on AI Ed’s Rec This compilation of articles about generative AI in the k-12 space is very helpful. (Education Week; Feb. 2024)

  • 7 Questions College Leaders Should Ask about AI Presidents and others should be developing strategies to ensure their institutions are positioned to respond to the opportunities and risks, writes David Weil (Brandeis). (Inside Higher Ed; Feb. 1, 2024)

January 2024

  • What Can Be Done in 59 Seconds: An Opportunity (and a Crisis) Ed’s Rec. Mollick reflects on how generative AI has proven to be a powerful productivity booster, with evidence for its effectiveness growing over the past 10 months. The wide release of Microsoft’s Copilot for Office and OpenAI’s GPTs has made AI use much easier and more normalized. (One Useful Thing; Jan. 31, 2024)
  • The Biggest AI Risk in 2024 May be behind a paywall, but I will provide a summary. Thomas Smith traces several big issues: data privacy (1 in 10 medical providers use ChatGPT, which means patient data is likely being compromised), copyright issues, and, of course, hallucinations. However, Smith sees the biggest risk of generative AI as . . . pretending it doesn’t exist and not learning how to use it ethically. His focus is primarily on business, but this observation is worth considering: “Avoiding grappling with AI challenges is itself a decision.” (The Generator; Jan. 26, 2024)
  • Embracing AI in English Composition Ed’s Rec. From the abstract: A mixed-method study conducted in Fall 2023 across three sections, including one English Composition I and two English Composition II courses, provides insightful revelations. The study, comprising 28 student respondents, delved into the impact of AI tools through surveys, analysis of writing artifacts, and a best practices guide developed by an honors student. (International Journal of Changes in Education; Jan. 22, 2024)

  • Last Year’s AI Views Revisited Ed’s Rec. Another great read, this time by Lance Eaton. In terms of higher education, Eaton stresses the importance of faculty fully understanding the technology and shaping its use in the classroom to mitigate emerging problems. Even if you are not all that interested in generative AI, this article is worth the read. (AI+Education = Simplified; Jan. 24, 2024)
  • ChatGPT Can’t Teach Writing: Automated Syntax Generation Is Not Teaching Ed’s Rec John Warner steps in to fire back at OpenAI’s partnership with Arizona State. (Inside Higher Ed; Jan. 22, 2024)
  • What Happens When a Court Cuts Down ChatGPT? Ed’s Rec. Not an idle question posed by futurist Bryan Alexander. (Bryan’s Substack; Jan. 21, 2024)
  • ChatGPT Goes to College Bret Kinsella muses over the ways OpenAI’s partnership with Arizona State will benefit both parties. (Synthedia; Jan. 20, 2024)
  • OpenAI Announces First Partnership with a University According to the article, “Starting in February, Arizona State University will have full access to ChatGPT Enterprise and plans to use it for coursework, tutoring, research and more.” (CNBC; Jan. 18, 2024)
  • AI Dominates Davos CNBC It’s that time of year! (Jan. 17, 2024)
  • AI Writing Is a Race to the Bottom by Alberto Romero, The Algorithmic Bridge Romero’s article discusses how AI writing tools, while offering convenience and efficiency, create a competitive environment that forces human writers to use these tools, ultimately sacrificing the uniqueness of human writing to Moloch, the system of relentless competition. (Jan. 17, 2024)
  • The Lazy Tyranny of the Wait Calculation by Ethan Mollik, One Useful Thing. Mollick introduces the concept of a “Wait Calculation” in the context of AI development, where waiting for advancements in AI technology before starting a project can sometimes be more beneficial than immediate action, highlighting the rapid pace of AI development, its potential to impact various fields, and the need to consider the timeline of AI progress in long-term decision-making. (Jan. 16, 2024)
  • Who Is ChatGPT? by Dean Pratt, AI Mind A fascinating—or really creepy, depending upon your POV—article in which the author explores a philosophical conversation with an AI entity named Bard, discussing the potential future where AI technology becomes a co-creator and catalyst for experiences blending the real and dreamlike, as well as the importance of empathy, optimism, and interconnectedness in the interaction between humans and AI. (Jan. 14, 2024)
  • Creating a Useful GPT? Maybe . . . Lance Eaton has been experimenting with creating customized GPTs. The article explains how one can go about doing it and discusses their limits as well as their promises for the future. (AI + Education = Simplified, Jan. 8, 2024)

**Important OER Resource from Oct. 2023: TextGenEd: Teaching with Text Generation Technologies** Edited by Vee et al., WAC Clearinghouse. At the cusp of this moment defined by AI, TextGenEd collects early experiments in pedagogy with generative text technology, including but not limited to AI. The fully open access and peer-reviewed collection features 34 undergraduate-level assignments to support students’ AI literacy, rhetorical and ethical engagements, creative exploration, and professional writing with text gen technology, along with an introduction to guide instructors’ understanding and their selection of what to emphasize in their courses. (Oct. 2023, but placed here for visibility.)

Book Launch of TextGenEd: Teaching with Text Generation Technologies:

  • Signs and Portents: Some Hints about What the Next Year in AI Looks Like by Ethan Mollick, One Useful Thing Ed’s Rec The article discusses the accelerated development of artificial intelligence (AI) and its impact on various aspects of society, emphasizing the need for proactive measures to navigate the challenges and opportunities presented by AI. Mollick highlights AI’s impact on work, its ability to alter the truth through deepfakes and manipulated media, and its effectiveness in education. (Jan. 6, 2024)
  • How Will AI Disrupt Higher Education in 2024? By Ray Schroeder, Inside Higher Ed The article discusses the significant impact of generative AI on higher education, highlighting its potential to provide personalized learning experiences, assist faculty, and enhance course outcomes, while also addressing concerns about the emergence of artificial general intelligence (AGI) and its potential implications for education. (Jan. 6, 2024)
  • Gender Bias in AI-Generated Images: A Comprehensive Study by Caroline Arnold in Generative AI on Medium (paywall). Arnold shares her findings about gender bias in the Midjourney generative AI algorithm when generating images of people for various job titles, highlighting that the AI model often fails to generate female characters in images, especially for professions where women are underrepresented. While the article is behind a paywall, you can probably find other articles on this topic. (Jan. 4, 2024)
  • AI and Teaching College Writing A Future Trends forum discussion, again with Bryan Alexander (Jan. 4, 2024)

  • The NYT vs OpenAI Is Not Just a Legal Battle by Alberto Romero, The Algorithmic Bridge This article explores the New York Times (NYT) lawsuit against OpenAI, focusing on the deeper disagreement regarding the relationship between morality and progress in the context of AI, suggesting that while pro-AI arguments emphasize the potential benefits of technology, there should be a more balanced consideration of its impact on society and creators’ rights. (Jan. 3, 2024)
  • Empowering Prisoners and Reducing Recidivism with ChatGPT by Krstafer Pinkerton, AI Advances. Note: The article in its entirety is behind a Members Only paywall, but perhaps you can find Pinkerton’s musings elsewhere. The article explores the potential use of AI language model ChatGPT in prisoner rehabilitation to reduce recidivism rates, emphasizing personalized learning experiences, a safe environment, and ethical considerations, while also highlighting future developments and calling for collective action to responsibly harness AI’s potential in this context. (Jan. 2, 2024)
  • Envisioning a New Wave of AI on Campus with Bryan Alexander and Brent Anders This was a fun scenario exercise, in which participants were asked to imagine a future, with AI avatars as instructors. (Jan. 1, 2024)

December 2023

From 2023 to 2024 in AI, Part II: Notes on Culture and Higher Education Ed’s Rec A follow-up by Bryan Alexander to Part I! (Dec. 31, 2023)

From 2023 to 2024 in AI, Part I Ed’s Rec Another look-back at generative AI’s explosion onto the higher ed scene, this time by Bryan Alexander. A great contribution to everyone’s AI archive. (Dec. 29, 2023)

**2023: The Comprehensive List of Talks, Writings, & Resources for 2023** Ed’s BIG Recommendation! THANK YOU, Lance Eaton, for this meaningful roundup of his presentations, interviews, and blog posts! If you want a great overview of AI in 2023, take a look.

**New and Important** Cross-Campus Approaches to Building a Generative AI Policy (Educause Review; Dec. 12, 2023)

November 2023

  • What Is OpenAI, Really? A great overview, with a timeline of recent and historical events. For those who must know . . . (The Pragmatic Engineer; Nov. 23, 2023)
  • Stephen Fry Reads Nick Cave’s Stirring Letter about ChatGPT and Human Creativity
  • An AI Activity to Try with Faculty Lance Eaton looks at an innovative case-study approach that campuses can use to discuss educational policies at “the edges” of (ethical) uses of generative AI. (AI + Education = Simplified; Nov. 24, 2023)
  • What Happened in the World of Artificial Intelligence? Ah, the drama! Here is a very basic overview of the Sam Altman vs. Ilya Sutskever dust-up at OpenAI. (NYT; Nov. 22, 2023)
  • OpenAI’s Weekend of Utter Chaos A podcast update from Nov. 20th. (WSJ; Nov. 20, 2023)
  • How AI Could Transform Education Nothing earth-shatteringly new, but a good list of ways generative AI can actually become helpful, especially when it comes to creating individualized lessons. (Artificial Intelligence in Plain Language; Nov. 18, 2023)
  • Student EngAIgement: Exploring How to Work with Students with New Technologies *Ed’s Rec A wonderful overview of a recent presentation by Lance Eaton, containing links to very useful resources. (AI + Education = Simplified; Nov. 18, 2023).
  • Coup and Chaos at OpenAI: The Day After *Ed’s Rec Bryan Alexander breaks down the implications of the turmoil at OpenAI on higher education. (Bryan’s Substack; Nov. 18, 2023)
  • AI, Help Me with a Difficult Reading As the title suggests, this blog post looks at ways to prompt GPTs to help readers understand complex texts. (Bryan’s Substack; Nov. 17, 2023)
  • Eliminate the Required First-Year Writing Course A provocative piece, one which was answered on Dec. 8 by Mandy Olejnik. These two articles make a good pairing. (Inside Higher Ed; Nov. 14, 2023)
  • The Gaps to Fill in Supporting Faculty and Staff with Generative AI A thoughtful article by Lance Eaton that discusses the need for supporting faculty and staff in understanding generative AI, emphasizing the importance of clarity, frameworks, validation, honesty, and centering the audience’s abilities while maintaining a lighthearted approach to navigate the challenges and opportunities of this technology in education. (AI+Education=Simplified; Nov. 9, 2023)
  • Does AI Pose an Existential Threat to Humanity? Two Sides Square Off The title of the article says it all. Interesting read. (WSJ; Nov. 8, 2023)
  • Almost an Agent: What GPTs Can Do Ethan Mollick discusses how instructors might make an individualized GPT to provide feedback to students. He provides an example of a structured prompt that he is using. (One Useful Thing; Nov. 7, 2023)
  • “ChatGPT Detector” Catches AI-Generated Papers with Unprecedented Accuracy A new machine-learning tool has been developed to accurately identify chemistry papers written using the ChatGPT chatbot, focusing on specific writing style features, potentially aiding academic publishers in detecting AI-generated content; however, it remains specialized for scientific journal articles and may not address broader issues in academia. (Nature; Nov. 6, 2023)
  • Artificial Intelligence: I’ve Worked With Generative AI for Nearly a Year. Here’s What I’ve Learned *Ed’s Rec A straightforward article about how one professional writer has been using generative AI, grouped into 8 observations. (WSJ; Nov. 6, 2023)
  • Fear Wins Alberto Romero, publisher of the Algorithmic Bridge, writes a contrarian piece about the current state of AI regulations, or proposed regulations, in the U.S. and E.U. Thoughtful piece. (The Algorithmic Bridge; Nov. 3, 2023)
  • The Future of Work in an AI-Driven World *Ed’s Rec This article does a good job of providing an (easy-to-follow) ethical framework for integrating AI into our professional lives and focuses on how to maximize benefits while mitigating risks such as biases and job displacement. (AI in Plain English; Nov. 2, 2023)
  • Working with AI: Two Paths of Prompting Ethan Mollick again does a great job of explaining AI stuff, this time the differences between and purposes of conversational prompting and structured prompting. (One Useful Thing; Nov. 1, 2023)
  • Generative AI’s Act Two Sequoia is a venture capital firm that invests primarily in the tech sector. While they are not focused on ed tech, their observations about AI and its future are useful–and the website is amazing! (Sequoia; Nov. 1, 2023)
  • Warning Labels for AI-Generated Text Not a bad idea from Clive Thompson! The entire story is behind a Medium paywall, but you can see the image below:
    AI Free
    CC BY-SA 4.0 Clive Thompson

     


October 2023

**New** SUNY FACT2 Guide to Optimizing AI in Higher Education

**New Recording Available** The Stunning Rise of Large Language Models: On Campus: Recording from Thursday, October 26 This is a wonderful presentation for anyone interested in generative Artificial Intelligence. Professor Chris Kello (University of California, Merced) gave a very accessible talk for non-computer scientists. To watch the presentation, please click here: The Stunning Rise of Large Language Models

  • Students Outrunning Faculty on AI Use This article reflects the findings from Tyton, shared below. (Inside Higher Ed; Oct. 31, 2023)
  • Artificial Intelligence in Higher Education: Trick or Treat? *Ed’s Recommendation Not clickbait—this report by Tyton Partners gives a detailed snapshot of how AI is being used—and of faculty and student perceptions of AI use. (Tyton Partners; Oct. 31, 2023)
  • What Does Higher Ed IT Think about AI Today? *Ed’s Recommendation Bryan Alexander’s most recent blog post after returning from a presentation at Educause 2023. (Bryan’s Substack; Oct. 30, 2023)
  • 10 AI Predictions for the Next 10 Months Some insights from an Oxbridge-trained (comp sci) AI expert who heads a venture capital fund. Not education-focused necessarily, of course, but provides an overview of what at least some experts are thinking, and why they think this way. (Medium; Oct. 30, 2023)
  • Responsible AI Has a Burnout Problem *Ed’s Rec This article looks at how difficult it is for tech industry workers to navigate the quickly shifting AI landscape, in particular when it comes to ethical issues. An interesting read. (MIT Tech Review; Oct. 28, 2023)
  • AI and Peer Review: Enemies or Allies? The academic community debates the potential use of AI in peer reviewing, weighing its potential advantages against ethical concerns and challenges, even as some journals establish guidelines on AI’s role in scholarly publishing. (Inside Higher Ed; Oct. 24, 2023)
  • Pinging the Scanner Futurist Bryan Alexander provides a list of AI stories he is following, from legal challenges to AI electric power use. A great round up of current AI stories. (Bryan’s Substack; Oct. 23, 2023)
  • Professors of the Gaps The author argues that professors, facing a landscape transformed by AI’s capabilities, need to critically evaluate their tasks to determine what can be automated, ensuring informed decisions about their roles in academic workflows, akin to the evolving understanding of a deity’s role in theism. (AutomatedED; Oct. 23, 2023)
  • The Best Available Human Standard *Ed’s Recommendation Ethan Mollick argues for a pragmatic approach to AI, emphasizing its ubiquity, capability, and limitations, and introduces the “Best Available Human (BAH)” standard to assess whether AI outperforms the best available human in specific scenarios, highlighting potential benefits in entrepreneurship, coaching, education, health care, and mental health. (One Useful Thing; Oct. 22, 2023)
  • The Trouble with AI Writing Detection Pull quote: In July, the Modern Language Association and the Conference on College Composition and Communication released the MLA-CCCC Joint Task Force on Writing and AI working paper. This paper expresses concern about the use of AI detection programs, advising instructors to “Focus on approaches to academic integrity that support students rather than punish them and that promote a collaborative rather than adversarial relationship between teachers and students.” (Inside Higher Ed; Oct. 18, 2023)
  • Meet the Typical at-Work ChatGPT User: A Millennial Secretly Submitting Writing Tasks While many Americans are just experimenting with ChatGPT or unaware of it, a subset, predominantly millennial, college-educated professionals, are leveraging it for workplace productivity, particularly in writing tasks, often clandestinely, amidst concerns about job security and lack of AI policy at companies. (Business Insider; Oct. 18, 2023)
  • What People Ask Me Most. Also Some Answers *Ed’s Recommendation. This is a wonderful FAQ put together about generative AI. Ethan Mollick has compiled a list of the most common questions people ask him about AI. Can you detect AI writing? for example. Take a look! (One Useful Thing; Oct. 12, 2023)
  • Where Does the Thinking Happen? Johann Neem discusses the challenges educators face in redefining the role of writing in learning amidst the rise of AI text generators like ChatGPT, emphasizing that while writing may represent finalized thoughts in some disciplines, in the humanities writing is central to the thinking process itself, thus requiring discipline-specific strategies to integrate AI without undermining critical thinking and expressive skills. (Inside Higher Ed; Oct. 11, 2023)
  • Best AI Tools to Generate Anything Worth a look. (Medium; Oct. 10, 2023)
  • Admissions Offices Deploy AI A recent survey from Intelligent, an online education magazine, reveals that 50% of higher education admissions offices are using AI in their application review processes, with an additional 7% planning to adopt it by year-end and 80% considering its use in 2024. This adoption rate has surged since the introduction of ChatGPT, with admissions professionals recognizing the potential benefits of AI tools in their work. These tools are primarily used for reviewing transcripts, recommendation letters, and personal essays. (Inside Higher Ed; Oct. 9, 2023)
  • Few Campus IT Leaders See AI as a Top Campus Priority Security, online course delivery, funding and staffing are far more important to CIOs. While there’s a growing interest in AI, many institutions are still in the early stages of adoption. Cybersecurity remains a top priority, especially after recent breaches. (Inside Higher Ed; Oct. 9, 2023)
  • The Shape of the Shadow of the Thing *Ed’s Recommendation Another Ethan Mollick reflective piece taking stock of where we are now, 10 months (or so) into the public release of ChatGPT. (One Useful Thing; Oct. 3, 2023)
  • An AI Engineer’s Guide to Machine Learning and Generative AI Want to take a dive into generative AI? This is a great primer for non-tech people. (Medium; Oct. 3, 2023)

September 2023

  • AI and the Convergence of Writing and Coding *Ed’s Recommendation A thoughtful consideration of generative AI in the writing and comp sci classrooms. (Inside Higher Ed; Sept. 28)
  • Everyone Is Above Average: Is AI a Leveler, King Maker, or Escalator? Mollick argues that AI is serving as a skill leveler, significantly elevating the performance of lower-skilled workers across various fields to or above average levels, thereby narrowing the skill gap. (Ethan Mollick’s One Useful Thing; Sept. 24).
  • Want Your Students to Be Skeptical of ChatGPT? Try This. A useful exercise for exploring ChatGPT. (The Chronicle; Sept. 24)
  • Microsoft, Google Build Their Worlds around AI It’s NOT just about ChatGPT. A discussion of the built-in features that are coming and have come to word processing and other programs. (Axios; Sept. 22)
  • The Reversal Curse: LLMs Trained on “A Is B” Fail to Learn “B Is A” This paper “expose(s) a surprising failure of generalization.” To read an easier-to-follow (for us non-Math people) overview, look at this (alarmist?) explanation: Elegant and Powerful New Result that Seriously Undermines Large Language Models. Very interesting. (Substack; ArXiv; Sept. 21)
  • If ChatGPT Can Do It, It’s Not Worth Doing A contrarian response to Ethan Mollick’s research below. Writing teacher John Warner critiques the reliance on large language models like ChatGPT for writing tasks, asserting that their ability to mimic human writing in educational and professional fields may devalue genuine learning and originality, and calls for a critical reassessment of tasks that truly require human innovation and thought. (Inside Higher Ed; Sept. 21)
  • Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity *Ed’s Recommendation In a study with Boston Consulting Group, consultants using AI, like GPT-4, showed increased productivity and quality in specific tasks, but struggled in others, with two distinct AI-use patterns emerging: “Centaurs” dividing tasks and “Cyborgs” fully integrating AI. To read an overview of this study, go to this article, “Centaurs and Cyborgs on the Jagged Frontier” by Ethan Mollick, one of the researchers. Mollick teases the piece with this pull quote: I think we have an answer on whether AIs will reshape work. (Harvard Business School Technology and Operational Mgt.; Sept. 18)
  • Teachers Are All In on Generative AI Note that the article focuses on how instructors (mostly k-12) are using generative AI to create teaching materials (Wired; Sept. 15).
  • Stop Focusing on Plagiarism, Even Though ChatGPT Is Here *Ed’s Recommendation A discussion of how to create a culture of trust in the classroom, with helpful links to other resources. (Harvard Business Publishing; Sept. 14)
  • AI: Brilliant but Biased Tool for Education The author discusses how ChatGPT has raised concerns among educators, leading to debates about its impact on learning and academic integrity. In response, institutions are exploring ways to adjust their teaching methods, with some incorporating AI into assignments to encourage critical thinking, while also emphasizing the importance of recognizing biases in AI-generated information and the need for students to master these tools for a technologically advanced future. (Diverse Issues in Higher Education; Sept. 13)
  • Why Professors Are Polarized on AI *Ed’s Recommendation Explores faculty divisions over the use of AI in higher ed. While the discussion of “tribalism” may be a stretch, the piece looks at how instructors are lining up into pro- and anti-AI camps. (Inside Higher Ed; Sept. 13)
  • AI Means Professors Need to Raise Their Grading Standards *Ed’s Recommendation English professor Michael W. Clune expresses concern over the rise of AI tools like ChatGPT in producing “merely competent” student essays, and he sees these compositions as lacking in educational value due to their absence of originality and human sensibility. (Chronicle of Higher Ed; Sept. 12) 
  • So let’s say you want to use an idea produced by ChatGPT—should you give ChatGPT credit for the ideas? Here is an unscientific survey of Wall Street Journal readers on the topic. (WSJ; Sept. 10)
  • Paper Retracted When Authors Caught Using ChatGPT to Write It The issue is a little more involved than the headline suggests, but it is true that the authors did not disclose their use of the LLM. The basic issue was transparency rather than factual inaccuracy. Something to think about when using ChatGPT for editing. (The Byte; Sept. 9)
  • M.B.A. Students Vs. ChatGPT: Who Comes Up with More Innovative Ideas? Two professors at Wharton put the question to the test and discovered that ChatGPT outdid the MBA students. They found the results “were not even close.” (WSJ; Sept. 9, 2023)
  • Large-Scale Automatic Audiobook Creation Did you know? Project Gutenberg has uploaded audiobook versions of many of their titles thanks to AI tech. (Sept. 7, 2023)
  • What Will Determine AI’s Impact on Higher Education? 5 Signs to Watch *Ed’s Recommendation A must-read providing an overview of the generative AI landscape in higher ed. Despite plenty of cautions and criticisms, experts believe generative AI is here to stay, with rivals to OpenAI developing their own models. The introduction of AI in education has led to discussions about the essence of learning. Some believe that the focus should be on motivating students to learn rather than preventing AI usage. (The Chronicle; Sept. 8; If the link does not work, you can find this article on the STL databases.)
  • Using LLMs Like ChatGPT to Quickly Plan Better Lessons *Ed’s Recommendation Graham Clay (a philosophy instructor currently teaching at University College Dublin and co-founder of AutomatedED) is a thoughtful generative AI adopter. In this article, he gives tips on using generative AI to “increase the quality of . . . lesson plans.” You may find his prompts useful. (AutomatedED; Sept. 8)
  • Explain Which AI You Mean The author cautions us about the way the term “AI” is being thrown around in the media and in conversations to describe processes that really should not be considered artificial intelligence—not all computer programs are related to advancements in Large Language Models, much less were they designed to pass something like the Turing Test. Also, there are several types of AI, broken down broadly into generative AI and predictive AI. Yes, you need a Medium membership to read the post in its entirety, but even the first few (free) paragraphs are worth a review. (Medium; Sept. 5)
  • Embracing Weirdness: What It Means to Use AI as a Writing Tool *Ed’s Recommendation. Another interesting article by Ethan Mollick (Wharton; UPenn). Great article about how generative AI can move beyond just being a thesaurus or grammar checker. One area of focus is on setting up chatbots to read and react as a specific audience in order to fully understand the rhetorical situation. Well worth the read! (One Useful Thing; Sept. 5)
  • Risks and Rewards as Higher Ed Invests in an AI Future *Ed’s Recommendation. This is especially eye-opening when one considers the investment made in SUNY Albany’s AI initiatives. (Inside Higher Ed; Sept. 5)
  • How Worried Should We Be About AI’s Threat to Humanity? Even Tech Leaders Can’t Agree. A lengthy WSJ feature story that provides a snapshot of various views among AI experts. If you want to take the pulse of AI researchers, give this a read. (WSJ; Sept. 4)
  • On Copyright and AI *Ed’s Recommendation This piece, written by Jeff Jarvis, a professor at CUNY’s journalism school, looks at cases that are currently before the courts. Jarvis asserts that “. . .  it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted . . . precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.” Still, the claim is somewhat ironic, given that his post is behind a paywall. (Medium; Sept. 2)
  • College Admissions: Should AI Apply? The author discusses how AI-generated college application essays are uninspired and not likely to get anyone into Harvard. However, AI bots can be helpful for students who may feel stuck with an essay prompt. And while some institutions like Yale regard the use of AI generators as a form of plagiarism when it comes to the college essay, other schools like Virginia Tech view such programs as a way to “democratize the [college application] process.” Interesting article. (IEEE Spectrum; Sept. 1)
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback (Scholarly Article Link) and Medium Article by Peter Xing (digesting the research). So, it looks as if programmers/researchers are finding ways to train Large Language Models that “match the performance of traditional reinforcement learning from human feedback” (RLHF). At least it works with summarizing text. This points to the probability that ChatGPT and other such programs will be able to become better at producing text that human evaluators prefer. (Sept. 1, 2023)

 


August 2023


July 2023

*Ethan Mollick’s Substack is worth subscribing to. While you may not wind up agreeing with what he has to say all the time, Mollick (Wharton) knows a lot about AI developments.


June 2023


May 2023


April 2023


March 2023


February 2023


January 2023 and December 2022


Articles written by SUNY New Paltz faculty & SUNY New Paltz Webinars and Talks: 

ChatGPT Calls for Scholarship, Not Panic by Andrew Higgins, English, Inside Higher Ed; Aug. 25, 2023

ChatGPT, Artificial Intelligence, and the Future of Writing by Glenn Geher, Psychology, Psychology Today 

With ChatGPT, We’re All Editors Now by Rachel Rigolino, English, Inside Higher Ed

ChatGPT Unleashed: Navigating the Future of AI-Generated Content on Campus (SUNY New Paltz; April 2023)*Editor’s Recommendation:

Without Limits: Conversation with Author Carmen Maria Machado (April 2023):

International Webinar Facilitated by Doni Wulandana (Engineering):


Recent Webinars, Forums, TedTalks and Podcasts

PODCASTS SERIES:


Consider subscribing to this podcast series, sponsored by The New York State Association for Computers and Technologies in Education, or NYSCATE. Though this group focuses on k-12 educators, college instructors will find the information useful as well. You can sign up on YouTube, Spotify, or Apple Podcasts (among other services).



AI, Ethics, and Academia (Future Trends Forum; Aug. 18, 2023)

What are the ethics of using artificial intelligence in higher education? This Future Trends Forum continues our collaborative exploration of emerging AI with a splendid guest, Donald Clark, a lifelong educational technology innovator and teacher, entrepreneur, CEO, professor, author of Artificial Intelligence for Learning, and blogger.


How can higher education grapple with artificial intelligence? The Future Trends Forum explores this question with a focus on an underdiscussed aspect: open source AI. Computer scientist Ruben R. Puentedura, widely known as the creator of the SAMR framework for understanding the intersection of teaching and tech, leads the discussion.


AI, Equity, and Equity (Future Trends Forum; June 2023)

 


Unlocking the Power of AI: How Tools Like ChatGPT Can Make Teaching Easier and More Effective (Webinar; Harvard Business Publishing; SU23)



How might Higher Education Respond to AI? Future Trends Forum (March 2023):


The AI Dilemma—Center for Humane Technology (March 2023) *Editor’s Recommendation–a Must-Watch Presentation


From Pearson:




ChatGPT Panel Discussion: SUNY Albany (Jan. 31, 2023):


More Podcasts:

Suspicions, Cheating, and Bans: AI Hits America’s Schools Includes interviews with students. Very interesting. *Site Editor’s Recommendation (NYTimes; June 28)

On Campus Podcast – AI in Higher Education (Collaborative Institutional Training Initiative (CITI); April 19, 2023) Focuses on potential biases and inaccuracies with AI and implications for faculty.

Bryan Alexander, Ed Tech Futurist, on AI in Higher Education (Inside Higher Ed; April 9, 2023)

ChatGPT and Good Intentions in Higher Ed An argument against using ChatGPT. (Teaching in Higher Ed; Feb. 2023)

ChatGPT: Tea for Teaching A discussion about how generative AI might be used in writing-heavy classes. (Feb. 2023)

‘Everybody is cheating’: Why This Teacher Has Adopted an Open ChatGPT Policy (NPR; Jan. 2023)