Current Resources: Articles/Op-Eds/Webinars/Courses about AI & ChatGPT

Generative AI Repository


September 2023

  • Everyone Is Above Average: Is AI a Leveler, King Maker, or Escalator? Mollick argues that AI is serving as a skill leveler, significantly elevating the performance of lower-skilled workers across various fields to or above average levels, thereby narrowing the skill gap. (Ethan Mollick’s One Useful Thing; Sept. 24).
  • Want Your Students to Be Skeptical of ChatGPT? Try This. A useful exercise for exploring ChatGPT. (The Chronicle; Sept. 24)
  • Microsoft, Google Build Their Worlds around AI It’s NOT just about ChatGPT. A discussion of the built-in AI features that have come, and are coming, to word processing and other programs. (Axios; Sept. 22)
  • The Reversal Curse: LLMs Trained on “A Is B” Fail to Learn “B Is A” This paper “expose(s) a surprising failure of generalization.” For an easier-to-follow (for us non-math people) overview, see this (alarmist?) explanation: Elegant and Powerful New Result That Seriously Undermines Large Language Models. Very interesting. (Substack; ArXiv; Sept. 21)
  • If ChatGPT Can Do It, It’s Not Worth Doing A contrarian response to Ethan Mollick’s research below. Writing teacher John Warner critiques the reliance on large language models like ChatGPT for writing tasks, asserting that their ability to mimic human writing in educational and professional fields may devalue genuine learning and originality, and calls for a critical reassessment of which tasks truly require human innovation and thought. (Inside Higher Ed; Sept. 21)
  • Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity *Ed’s Recommendation In a study with Boston Consulting Group, consultants using AI, like GPT-4, showed increased productivity and quality on specific tasks but struggled on others, with two distinct AI-use patterns emerging: “Centaurs” dividing tasks and “Cyborgs” fully integrating AI. For an overview of the study, see the article “Centaurs and Cyborgs on the Jagged Frontier” by Ethan Mollick, one of the researchers. Mollick teases the piece with this pull quote: “I think we have an answer on whether AIs will reshape work.” (Harvard Business School Technology and Operational Mgt.; Sept. 18)
  • Teachers Are All In on Generative AI Note that the article focuses on how instructors (mostly K-12) are using generative AI to create teaching materials. (Wired; Sept. 15)
  • Stop Focusing on Plagiarism, Even Though ChatGPT Is Here *Ed’s Recommendation A discussion of how to create a culture of trust in the classroom, with helpful links to other resources. (Harvard Business Publishing; Sept. 14)
  • AI: Brilliant but Biased Tool for Education The author discusses how ChatGPT has raised concerns among educators, leading to debates about its impact on learning and academic integrity. In response, institutions are exploring ways to adjust their teaching methods, with some incorporating AI into assignments to encourage critical thinking, while also emphasizing the importance of recognizing biases in AI-generated information and the need for students to master these tools for a technologically advanced future. (Diverse Issues in Higher Education; Sept. 13)
  • Why Professors Are Polarized on AI *Ed’s Recommendation Explores faculty divisions over the use of AI in higher ed. While the discussion of “tribalism” may be a stretch, the piece looks at how instructors are lining up into pro- and anti-AI camps. (Inside Higher Ed; Sept. 13)
  • AI Means Professors Need to Raise Their Grading Standards *Ed’s Recommendation English professor Michael W. Clune expresses concern over the rise of AI tools like ChatGPT in producing “merely competent” student essays, and he sees these compositions as lacking in educational value due to their absence of originality and human sensibility. (Chronicle of Higher Ed; Sept. 12) 
  • So let’s say you want to use an idea produced by ChatGPT—should you give ChatGPT credit for the idea? Here is an unscientific survey of Wall Street Journal readers on the topic. (WSJ; Sept. 10)
  • Paper Retracted When Authors Caught Using ChatGPT to Write It The issue is a little more involved than the headline suggests, but it is true that the authors did not disclose their use of the LLM. The basic issue was transparency rather than any piece of incorrect information. Something to think about when using ChatGPT for editing. (The Byte; Sept. 9)
  • M.B.A. Students Vs. ChatGPT: Who Comes Up with More Innovative Ideas? Two professors at Wharton put the question to the test and discovered that ChatGPT outdid the MBA students. They found the results were “not even close.” (WSJ; Sept. 9, 2023)
  • Large-Scale Automatic Audiobook Creation Did you know? Project Gutenberg has uploaded audiobook versions of many of their titles thanks to AI tech. (Sept. 7, 2023)
  • What Will Determine AI’s Impact on Higher Education? 5 Signs to Watch *Ed’s Recommendation A must-read providing an overview of the generative AI landscape in higher ed. Despite plenty of cautions and criticisms, experts believe generative AI is here to stay, with rivals to OpenAI developing their own models. The introduction of AI in education has led to discussions about the essence of learning; some believe the focus should be on motivating students to learn rather than on preventing AI usage. (The Chronicle; Sept. 8; if the link does not work, you can find this article on the STL databases.)
  • Using LLMs Like ChatGPT to Quickly Plan Better Lessons *Ed’s Recommendation Graham Clay (a philosophy instructor currently teaching at University College Dublin and co-founder of AutomatedED) is a thoughtful generative AI adopter. In this article, he gives tips on using generative AI to “increase the quality of . . . lesson plans.” You may find his prompts useful. (AutomatedED; Sept. 8)
  • Explain Which AI You Mean The author cautions us about the way the term “AI” is being thrown around in the media and in conversations to describe processes that really should not be considered artificial intelligence—not all computer programs are related to advances in Large Language Models, much less were they designed to pass something like the Turing Test. Also, there are several types of AI, broken down broadly into generative AI and predictive AI. Yes, you need a Medium membership to read the post in its entirety, but even the first few (free) paragraphs are worth a review. (Medium; Sept. 5)
  • Embracing Weirdness: What It Means to Use AI as a Writing Tool *Ed’s Recommendation Another interesting article by Ethan Mollick (Wharton; UPenn) about how generative AI can move beyond just being a thesaurus or grammar checker. One area of focus is setting up chatbots to read and react as a specific audience in order to fully understand the rhetorical situation. Well worth the read! (One Useful Thing; Sept. 5)
  • Risks and Rewards as Higher Ed Invests in an AI Future *Ed’s Recommendation. This is especially eye-opening when one considers the investment made in SUNY Albany’s AI initiatives. (Inside Higher Ed; Sept. 5)
  • How Worried Should We Be About AI’s Threat to Humanity? Even Tech Leaders Can’t Agree A lengthy WSJ feature story that provides a snapshot of various views among AI experts. If you want to take the pulse of AI researchers, give this a read. (WSJ; Sept. 4)
  • On Copyright and AI *Ed’s Recommendation This piece, written by Jeff Jarvis, a professor at CUNY’s journalism school, looks at cases currently before the courts. Jarvis asserts that “. . .  it is hard to see how reading and learning from text and images to produce transformative works would not be fair use. I worry that if these activities — indeed, these rights — are restricted . . . precedent is set that could restrict use for us all. As a journalist, I fear that by restricting learning sets to viewing only free content, we will end up with a problem parallel to that created by the widespread use of paywalls in news: authoritative, fact-based reporting will be restricted to the privileged few who can and choose to pay for it, leaving too much of public discourse vulnerable to the misinformation, disinformation, and conspiracies available for free, without restriction.” Still, the claim is somewhat ironic, given that his post is behind a paywall. (Medium; Sept. 2)
  • College Admissions: Should AI Apply? The author discusses how AI-generated college application essays are uninspired and not likely to get anyone into Harvard. However, AI bots can be helpful for students who may feel stuck with an essay prompt. And while some institutions like Yale regard the use of AI generators as a form of plagiarism when it comes to the college essay, other schools like Virginia Tech view such programs as a way to “democratize the [college application] process.” Interesting article. (IEEE Spectrum; Sept. 1)
  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback (Scholarly Article Link) and Medium Article by Peter Xing (digesting the research). It looks as if researchers are finding ways to train Large Language Models with AI-generated feedback that “match the performance of traditional reinforcement learning from human feedback” (RLHF)—at least for summarizing text. This points to the probability that ChatGPT and other such programs will become better at producing text that human evaluators prefer. (Sept. 1, 2023)

 


August 2023


July 2023

*Ethan Mollick’s Substack is worth subscribing to. While you may not always agree with what he has to say, Mollick (Wharton) knows a lot about AI developments.


June 2023


May 2023


April 2023


March 2023


February 2023


January 2023 and December 2022


Articles written by SUNY New Paltz faculty & SUNY New Paltz Webinars and Talks: 

ChatGPT Calls for Scholarship, Not Panic by Andrew Higgins, English, Inside Higher Ed; Aug. 25, 2023

ChatGPT, Artificial Intelligence, and the Future of Writing by Glenn Geher, Psychology, Psychology Today 

With ChatGPT, We’re All Editors Now by Rachel Rigolino, English, Inside Higher Ed

ChatGPT Unleashed: Navigating the Future of AI-Generated Content on Campus (SUNY New Paltz; April 2023) *Editor’s Recommendation

Without Limits: Conversation with Author Carmen Maria Machado (April 2023)

International Webinar Facilitated by Doni Wulandana (Engineering)


Recent Webinars, Forums, TedTalks and Podcasts

PODCASTS SERIES:

Consider subscribing to this podcast series, sponsored by The New York State Association for Computers and Technologies in Education (NYSCATE). Though this group focuses on K-12 educators, college instructors will find the information useful as well. You can sign up on YouTube, Spotify, or Apple Podcasts (among other services).



AI, Ethics, and Academia (Future Trends Forum; Aug. 18, 2023)

What are the ethics of using artificial intelligence in higher education? This Future Trends Forum continues our collaborative exploration of emerging AI with a splendid guest, Donald Clark, a lifelong educational technology innovator and teacher, entrepreneur, CEO, professor, author of Artificial Intelligence for Learning, and blogger.


How can higher education grapple with artificial intelligence? The Future Trends Forum explores this question with a focus on an underdiscussed aspect: open source AI. Computer scientist Ruben R. Puentedura, widely known as the creator of the SAMR framework for understanding the intersection of teaching and tech, leads the discussion.


AI and Equity (Future Trends Forum; June 2023)

 


Unlocking the Power of AI: How Tools Like ChatGPT Can Make Teaching Easier and More Effective (Webinar; Harvard Business Publishing; SU23)



How Might Higher Education Respond to AI? (Future Trends Forum; March 2023)


The AI Dilemma—Center for Humane Technology (March 2023) *Editor’s Recommendation–a Must-Watch Presentation


From Pearson:




ChatGPT Panel Discussion: SUNY Albany (Jan. 31, 2023)


More Podcasts:

Suspicions, Cheating, and Bans: AI Hits America’s Schools Includes interviews with students. Very interesting. *Site Editor’s Recommendation (NYTimes; June 28)

On Campus Podcast – AI in Higher Education (Collaborative Institutional Training Initiative [CITI]; April 19, 2023) Focuses on potential biases and inaccuracies with AI and implications for faculty.

Bryan Alexander, Ed Tech Futurist, on AI in Higher Education (Inside Higher Ed; April 9, 2023)

ChatGPT and Good Intentions in Higher Ed An argument against using ChatGPT. (Teaching in Higher Ed; Feb. 2023)

ChatGPT: Tea for Teaching A discussion about how generative AI might be used in writing-heavy classes. (Feb. 2023)

‘Everybody is cheating’: Why This Teacher Has Adopted an Open ChatGPT Policy (NPR; Jan. 2023)