2023 Articles & Resources

Articles written by SUNY New Paltz faculty, along with SUNY New Paltz webinars and talks: 

ChatGPT Calls for Scholarship, Not Panic by Andrew Higgins, English, Inside Higher Ed; Aug. 25, 2023

ChatGPT, Artificial Intelligence, and the Future of Writing by Glenn Geher, Psychology, Psychology Today 

With ChatGPT, We’re All Editors Now by Rachel Rigolino, English, Inside Higher Ed

Teach Talks: Session 39: ChatGPT in Social Science Writing (Indonesian lecturers, facilitated by Doni Wulandana, Engineering)


SUNY New Paltz Conversation: ChatGPT Unleashed: Navigating the Future of AI-Generated Content on Campus (April 2023)

December 2023

From 2023 to 2024 in AI, Part II: Notes on Culture and Higher Education *Ed’s Rec A follow-up by Bryan Alexander to Part I! (Dec. 31, 2023)

From 2023 to 2024 in AI, Part I *Ed’s Rec Another look back at generative AI’s explosion onto the higher ed scene, this time by Bryan Alexander. A great contribution to everyone’s AI archive. (Dec. 29, 2023)

**2023: The Comprehensive List of Talks, Writings, & Resources for 2023** Ed’s BIG Recommendation! THANK YOU Lance Eaton for this meaningful roundup of his presentations, interviews, and blog posts! If you want a great overview of AI in 2023, take a look.

**New and Important** Cross-Campus Approaches to Building a Generative AI Policy (Educause Review; Dec. 12, 2023)

November 2023

  • What Is OpenAI, Really? A great overview, with a timeline of recent and historical events. For those who must know . . . (The Pragmatic Engineer; Nov. 23, 2023)
  • Stephen Fry Reads Nick Cave’s Stirring Letter about ChatGPT and Human Creativity
  • An AI Activity to Try with Faculty Lance Eaton looks at an innovative case-study approach that campuses can use to discuss educational policies at “the edges” of (ethical) uses of generative AI. (AI + Education = Simplified; Nov. 24, 2023)
  • What Happened in the World of Artificial Intelligence? Ah, the drama! Here is a very basic overview of the Sam Altman vs. Ilya Sutskever dust-up at OpenAI. (NYT; Nov. 22, 2023)
  • OpenAI’s Weekend of Utter Chaos A podcast update from Nov. 20th. (WSJ; Nov. 20, 2023)
  • How AI Could Transform Education Nothing earth-shatteringly new, but a good list of ways generative AI can actually become helpful, especially when it comes to creating individualized lessons. (Artificial Intelligence in Plain Language; Nov. 18, 2023)
  • Student EngAIgement: Exploring How to Work with Students with New Technologies *Ed’s Rec A wonderful overview of a recent presentation by Lance Eaton, containing links to very useful resources. (AI + Education = Simplified; Nov. 18, 2023).
  • Coup and Chaos at Open AI: The Day After *Ed’s Rec Bryan Alexander breaks down the implications of the turmoil at OpenAI on higher education. (Bryan’s Substack; Nov. 18, 2023)
  • AI, Help Me with a Difficult Reading As the title suggests, this blog post looks at ways to prompt GPTs to help readers understand complex texts. (Bryan’s Substack; Nov. 17, 2023)
  • Eliminate the Required First-Year Writing Course A provocative piece, one which was answered on Dec. 8 by Mandy Olejnik. These two articles make a good pairing. (Inside Higher Ed; Nov. 14, 2023)
  • The Gaps to Fill in Supporting Faculty and Staff with Generative AI A thoughtful article by Lance Eaton on the need to support faculty and staff in understanding generative AI, emphasizing clarity, frameworks, validation, honesty, and centering the audience’s abilities, all while maintaining a lighthearted approach to the challenges and opportunities of this technology in education. (AI + Education = Simplified; Nov. 9, 2023)
  • Does AI Pose an Existential Threat to Humanity? Two Sides Square Off The title of the article says it all. Interesting read. (WSJ; Nov. 8, 2023)
  • Almost an Agent: What GPTs Can Do Ethan Mollick discusses how instructors might make an individualized GPT to provide feedback to students. He provides an example of a structured prompt that he is using. (One Useful Thing; Nov. 7, 2023)
  • “ChatGPT Detector” Catches AI-Generated Papers with Unprecedented Accuracy A new machine-learning tool has been developed to accurately identify chemistry papers written using the ChatGPT chatbot, focusing on specific writing style features, potentially aiding academic publishers in detecting AI-generated content; however, it remains specialized for scientific journal articles and may not address broader issues in academia. (Nature; Nov. 6, 2023)
  • Artificial Intelligence: I’ve Worked with Generative AI for Nearly a Year. Here’s What I’ve Learned *Ed’s Rec A straightforward article about how one professional writer has been using generative AI, grouped into 8 observations. (WSJ; Nov. 6, 2023)
  • Fear Wins Alberto Romero, publisher of the Algorithmic Bridge, writes a contrarian piece about the current state of AI regulations, or proposed regulations, in the U.S. and E.U. Thoughtful piece. (The Algorithmic Bridge; Nov. 3, 2023)
  • The Future of Work in an AI-Driven World *Ed’s Rec This article does a good job of providing an (easy-to-follow) ethical framework for integrating AI into our professional lives, focusing on how to maximize benefits while mitigating risks such as bias and job displacement. (AI in Plain English; Nov. 2, 2023)
  • Working with AI: Two Paths of Prompting Ethan Mollick again does a great job of explaining AI stuff, this time the differences between and purposes of conversational prompting and structured prompting. (One Useful Thing; Nov. 1, 2023)
  • Generative AI’s Act Two Sequoia is a venture capital firm that invests primarily in the tech sector. While they are not focused on ed tech, their observations about AI and its future are useful–and the website is amazing! (Sequoia; Nov. 1, 2023)
  • Warning Labels for AI-Generated Text Not a bad idea from Clive Thompson! The entire story is behind a Medium paywall, but the accompanying “AI Free” warning-label image (CC BY-SA 4.0 Clive Thompson) gives the idea.


October 2023

**New** SUNY FACT2 Guide to Optimizing AI in Higher Education

**New Recording Available** The Stunning Rise of Large Language Models: On Campus: Recording from Thursday, October 26 This is a wonderful presentation for anyone interested in generative artificial intelligence. Professor Chris Kello (University of California, Merced) gave a very accessible talk for non-computer scientists. To watch the presentation, please click here: The Stunning Rise of Large Language Models

  • Students Outrunning Faculty on AI Use This article reflects the findings from the Tyton Partners report shared below. (Inside Higher Ed; Oct. 31, 2023)
  • Artificial Intelligence in Higher Education: Trick or Treat? *Ed’s Recommendation Not clickbait—this report by Tyton Partners gives a detailed snapshot of how AI is being used—and of faculty and student perceptions of AI use. (Tyton Partners; Oct. 31, 2023)
  • What Does Higher Ed IT Think about AI Today? *Ed’s Recommendation Bryan Alexander’s most recent blog post after returning from a presentation at Educause 2023. (Bryan’s Substack; Oct. 30, 2023)
  • 10 AI Predictions for the Next 10 Months Some insights from an Oxbridge-trained (comp sci) AI expert who heads a venture capital fund. Not education-focused, of course, but provides an overview of what at least some experts are thinking—and why they think this way. (Medium; Oct. 30, 2023)
  • Responsible AI Has a Burnout Problem *Ed’s Rec This article looks at how difficult it is for tech industry workers to navigate the quickly shifting AI landscape, particularly when it comes to ethical issues. An interesting read. (MIT Tech Review; Oct. 28, 2023)
  • AI and Peer Review: Enemies or Allies? The academic community debates the potential use of AI in peer reviewing, weighing its potential advantages against ethical concerns and challenges, even as some journals establish guidelines on AI’s role in scholarly publishing. (Inside Higher Ed; Oct. 24, 2023)
  • Pinging the Scanner Futurist Bryan Alexander provides a list of AI stories he is following, from legal challenges to AI electric power use. A great round up of current AI stories. (Bryan’s Substack; Oct. 23, 2023)
  • Professors of the Gaps The author argues that professors, facing a landscape transformed by AI’s capabilities, need to critically evaluate their tasks to determine what can be automated, ensuring informed decisions about their roles in academic workflows, akin to the evolving understanding of a deity’s role in theism. (AutomatedED; Oct. 23, 2023)
  • The Best Available Human Standard *Ed’s Recommendation Ethan Mollick argues for a pragmatic approach to AI, emphasizing its ubiquity, capability, and limitations, and introduces the “Best Available Human (BAH)” standard to assess whether AI outperforms the best available human in specific scenarios, highlighting potential benefits in entrepreneurship, coaching, education, health care, and mental health. (One Useful Thing; Oct. 22, 2023)
  • The Trouble with AI Writing Detection Pull quote: In July, the Modern Language Association and the Conference on College Composition and Communication released the MLA-CCCC Joint Task Force on Writing and AI working paper. This paper expresses concern about the use of AI detection programs, advising instructors to “Focus on approaches to academic integrity that support students rather than punish them and that promote a collaborative rather than adversarial relationship between teachers and students.” (Inside Higher Ed; Oct. 18, 2023)
  • Meet the Typical at-Work ChatGPT User: A Millennial Secretly Submitting Writing Tasks While many Americans are just experimenting with ChatGPT or unaware of it, a subset, predominantly millennial, college-educated professionals, are leveraging it for workplace productivity, particularly in writing tasks, often clandestinely, amidst concerns about job security and lack of AI policy at companies. (Business Insider; Oct. 18, 2023)
  • What People Ask Me Most. Also Some Answers *Ed’s Recommendation. This is a wonderful FAQ put together about generative AI. Ethan Mollick has compiled a list of the most common questions people ask him about AI. Can you detect AI writing? for example. Take a look! (One Useful Thing; Oct. 12, 2023)
  • Where Does the Thinking Happen? Johann Neem discusses the challenges educators face in redefining the role of writing in learning amidst the rise of AI text generators like ChatGPT, emphasizing that while writing may represent finalized thoughts in some disciplines, in the humanities writing is central to the thinking process itself, thus requiring discipline-specific strategies to integrate AI without undermining critical thinking and expressive skills. (Inside Higher Ed; Oct. 11, 2023)
  • Best AI Tools to Generate Anything Worth a look. (Medium; Oct. 10, 2023)
  • Admissions Offices Deploy AI A recent survey from Intelligent, an online education magazine, reveals that 50% of higher education admissions offices are using AI in their application review processes, with an additional 7% planning to adopt it by year-end and 80% considering its use in 2024. This adoption rate has surged since the introduction of ChatGPT, with admissions professionals recognizing the potential benefits of AI tools in their work. These tools are primarily used for reviewing transcripts, recommendation letters, and personal essays. (Inside Higher Ed; Oct. 9, 2023)
  • Few Campus IT Leaders See AI as a Top Campus Priority Security, online course delivery, funding and staffing are far more important to CIOs. While there’s a growing interest in AI, many institutions are still in the early stages of adoption. Cybersecurity remains a top priority, especially after recent breaches. (Inside Higher Ed; Oct. 9, 2023)
  • The Shape of the Shadow of the Thing *Ed’s Recommendation Another Ethan Mollick reflective piece taking stock of where we are now, 10 months (or so) into the public release of ChatGPT. (One Useful Thing; Oct. 3, 2023)
  • An AI Engineer’s Guide to Machine Learning and Generative AI Want to take a dive into generative AI? This is a great primer for non-tech people. (Medium; Oct. 3, 2023)

September 2023

  • AI and the Convergence of Writing and Coding *Ed’s Recommendation A thoughtful consideration of generative AI in the writing and comp sci classrooms. (Inside Higher Ed; Sept. 28)
  • Everyone Is Above Average: Is AI a Leveler, King Maker, or Escalator? Mollick argues that AI is serving as a skill leveler, significantly elevating the performance of lower-skilled workers across various fields to or above average levels, thereby narrowing the skill gap. (Ethan Mollick’s One Useful Thing; Sept. 24).
  • Want Your Students to Be Skeptical of ChatGPT? Try This. A useful exercise for exploring ChatGPT. (The Chronicle; Sept. 24)
  • Microsoft, Google Build Their Worlds around AI It’s NOT just about ChatGPT. A discussion of the built-in features that have come, and are coming, to word processing and other programs. (Axios; Sept. 22)
  • The Reversal Curse: LLMs Trained on “A Is B” Fail to Learn “B Is A” This paper “expose(s) a surprising failure of generalization.” For an easier-to-follow (for us non-math people) overview, see this (alarmist?) explanation, Elegant and Powerful New Result That Seriously Undermines Large Language Models. Very interesting. (Substack; ArXiv; Sept. 21)
  • If ChatGPT Can Do It, It’s Not Worth Doing A contrarian response to Ethan Mollick’s research below. Writing teacher John Warner critiques the reliance on large language models like ChatGPT for writing tasks, asserting that their ability to mimic human writing in educational and professional fields may devalue genuine learning and originality, and calls for a critical reassessment of tasks that truly require human innovation and thought. (Inside Higher Ed; Sept. 21)
  • Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity *Ed’s Recommendation In a study with Boston Consulting Group, consultants using AI, like GPT-4, showed increased productivity and quality in specific tasks, but struggled in others, with two distinct AI-use patterns emerging: “Centaurs” dividing tasks and “Cyborgs” fully integrating AI. To read an overview of this study, go to this article, “Centaurs and Cyborgs on the Jagged Frontier” by Ethan Mollick, one of the researchers. Mollick teases the piece with this pull quote: I think we have an answer on whether AIs will reshape work. (Harvard Business School Technology and Operational Mgt.; Sept. 18)
  • Teachers Are All In on Generative AI Note that the article focuses on how instructors (mostly k-12) are using generative AI to create teaching materials (Wired; Sept. 15).
  • Stop Focusing on Plagiarism, Even Though ChatGPT Is Here *Ed’s Recommendation A discussion of how to create a culture of trust in the classroom, with helpful links to other resources. (Harvard Business Publishing; Sept. 14)
  • AI: Brilliant but Biased Tool for Education The author discusses how ChatGPT has raised concerns among educators, leading to debates about its impact on learning and academic integrity. In response, institutions are exploring ways to adjust their teaching methods, with some incorporating AI into assignments to encourage critical thinking, while also emphasizing the importance of recognizing biases in AI-generated information and the need for students to master these tools for a technologically advanced future. (Diverse Issues in Higher Education; Sept. 13)
  • Why Professors Are Polarized on AI *Ed’s Recommendation Explores faculty divisions over the use of AI in higher ed. While the discussion of “tribalism” may be a stretch, the piece looks at how instructors are lining up into pro- and anti-AI camps. (Inside Higher Ed; Sept. 13)
  • AI Means Professors Need to Raise Their Grading Standards *Ed’s Recommendation English professor Michael W. Clune expresses concern over the rise of AI tools like ChatGPT in producing “merely competent” student essays, and he sees these compositions as lacking in educational value due to their absence of originality and human sensibility. (Chronicle of Higher Ed; Sept. 12) 
  • So let’s say you want to use an idea produced by ChatGPT—should you give ChatGPT credit for the ideas? Here is an unscientific survey of Wall Street Journal readers on the topic. (WSJ; Sept. 10)
  • Paper Retracted When Authors Caught Using ChatGPT to Write It The issue is a little more involved than the headline suggests, but it is true that the authors did not disclose their use of the LLM. The basic issue was transparency rather than any piece of incorrect information. Something to think about when using ChatGPT for editing. (The Byte; Sept. 9)
  • M.B.A. Students Vs. ChatGPT: Who Comes Up with More Innovative Ideas? Two professors at Wharton put the question to the test and discovered that ChatGPT outdid the MBA students. They found the results “were not even close.” (WSJ; Sept. 9, 2023)
  • Large-Scale Automatic Audiobook Creation Did you know? Project Gutenberg has uploaded audiobook versions of many of their titles thanks to AI tech. (Sept. 7, 2023)
  • What Will Determine AI’s Impact on Higher Education? 5 Signs to Watch *Ed’s Recommendation A must-read providing an overview of the generative AI landscape in higher ed. Despite plenty of cautions and criticisms, experts believe generative AI is here to stay, with rivals to OpenAI developing their own models. The introduction of AI in education has led to discussions about the essence of learning. Some believe that the focus should be on motivating students to learn rather than preventing AI usage. (The Chronicle; Sept. 8; if the link does not work, you can find this article on the STL databases.)
  • Using LLMs Like ChatGPT to Quickly Plan Better Lessons *Ed’s Recommendation Graham Clay (a philosophy instructor currently teaching at University College Dublin and co-founder of AutomatedED) is a thoughtful generative AI adopter. In this article, he gives tips on using generative AI to “increase the quality of . . . lesson plans.” You may find his prompts useful. (AutomatedED; Sept. 8)
  • Explain Which AI You Mean The author cautions us about the way the term “AI” is being thrown around in the media and in conversations to describe processes that really should not be considered artificial intelligence—not all computer programs are related to advancements in Large Language Models, much less were they designed to pass something like the Turing Test. Also, there are several types of AI, broken down broadly into generative AI and predictive AI. Yes, you need a Medium membership to read the post in its entirety, but even the first few (free) paragraphs are worth a review. (Medium; Sept. 5)
  • Embracing Weirdness: What It Means to Use AI as a Writing Tool *Ed’s Recommendation. Another interesting article by Ethan Mollick (Wharton; UPenn). Great article about how generative AI can move beyond just being a thesaurus or grammar checker. One area of focus is on setting up chat bots to read and react as a specific audience in order to fully understand the rhetorical situation. Well worth the read!   (One Useful Thing; Sept. 5)
  • Risks and Rewards as Higher Ed Invests in an AI Future *Ed’s Recommendation. This is especially eye-opening when one considers the investment made in SUNY Albany’s AI initiatives. (Inside Higher Ed; Sept. 5)
  • How Worried Should We Be About AI’s Threat to Humanity? Even Tech Leaders Can’t Agree A lengthy feature story by the WSJ that provides a snapshot of various views among AI experts. If you want to take the pulse of AI researchers, give this a read. (WSJ; Sept. 4)
  • On Copyright and AI *Ed’s Recommendation This piece, written by Jeff Jarvis, a professor at CUNY’s journalism school, looks at cases that are currently before the courts. Jarvis asserts that “. . . it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted . . . precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.” Still, the claim is somewhat ironic, given that his post is behind a paywall. (Medium; Sept. 2)
  • College Admissions: Should AI Apply? The author discusses how AI-generated college application essays are uninspired and not likely to get anyone into Harvard. However, AI bots can be helpful for students who may feel stuck with an essay prompt. And while some institutions like Yale regard the use of AI generators as a form of plagiarism when it comes to the college essay, other schools like Virginia Tech view such programs as a way to “democratize the [college application] process.” Interesting article. (IEEE Spectrum; Sept. 1)
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback (Scholarly Article Link) and Medium Article by Peter Xing (digesting the research). It looks as if researchers are finding ways to train large language models that “match the performance of traditional reinforcement learning from human feedback” (RLHF), at least for summarizing text. This suggests that ChatGPT and other such programs will become better at producing text that human evaluators prefer. (Sept. 1, 2023)


August 2023

AI, Ethics, and Academia The Future Trends Forum with Bryan Alexander

Open Source AI for Higher Education The Future Trends Forum with Bryan Alexander

July 2023

*Ethan Mollick’s Substack is worth subscribing to. While you may not always agree with him, Mollick (Wharton) knows a lot about AI developments.

June 2023

AI, Academia, and Equity The Future Trends Forum with Bryan Alexander

May 2023

Transformative Conversations: ChatGPT and AI Writing: “The What, The Why, and Oh My!” Gardner Institute

April 2023

March 2023

How Might Higher Ed Respond to AI? Future Trends Forum with Bryan Alexander

The AI Dilemma—Center for Humane Technology (March 2023) *Editor’s Recommendation–a Must-Watch Presentation

Webinar: Leveraging Social Annotation in the Age of AI  Hypothes.is

February 2023

ChatGPT, AI, and the Future of Higher Education: Johns Hopkins Panel Discussion

January 2023 and December 2022

ChatGPT Panel Discussion at the University at Albany Facilitated by Robert P. Griffin, PhD

Suspicion, Cheating and Bans: A.I. Hits America’s Schools (Podcast; NY Times)