Eng160 Final Draft

Aidan Ward

10/24/2023

Heather Christy Robinson

ENG 160

The Importance of Regulatory Methods regarding Artificial Intelligence

 

Artificial intelligence could become an incredibly positive force in human progress, yet its potential negatives bring thoughts of dystopian novels to mind. Because of this paradox, and the profound consequences of regulatory inaction, regulation is needed. Although it does not solve the root of the problem, one of the most compelling methods of regulation currently on the table is copyright. Because Generative Artificial Intelligence (G.A.I.) requires copious amounts of unique, human-created inputs, the originality of generated content is called into question. This paper will first address the current status and scope of copyright, as defined by existing legislation and precedent. It will then examine the consequences of non-regulation for journalism and civil society, as well as the industry's resistance to oversight.

Copyright law is designed to protect an individual’s intellectual property, whether it is music, art, literature, or other creative works, and to shield creators from intellectual property theft. Within copyright law, however, lies a caveat titled “Fair Use,” which allows the use of copyrighted materials under specific conditions. The principal qualifier for fair use is a transformative use of the content. While this standard is vague in nature (intentionally so, to remain open to interpretation), it largely falls into one of two categories: commentary and criticism, or parody (What Is Fair Use?). The implications for G.A.I. are cloudy, as the law was clearly not written with this new technology in mind, yet it remains the most applicable regulatory mechanism in contemporary American law. This, paired with the inherently expansive nature of “transformative” use, means that legal precedent must be developed before fair use can be adequately applied to generated content. While the explicit details of this G.A.I. dilemma lie in uncharted waters, similar issues have been explored in past cases such as Authors Guild v. Google and Andy Warhol Foundation for the Visual Arts v. Goldsmith (Jassin). The former held that Google’s mass processing of books into a searchable database was fair use, allowing individuals to read small portions of books without the consent of the publisher or author. While not exactly applicable to the current state of G.A.I., it has fed a “wild west” mentality around intellectual property, incentivizing technology companies to disregard copyright claims. Contrary to this is the recent case of Andy Warhol Foundation for the Visual Arts v. Goldsmith, which established that if an original work is significantly harmed economically by a work that would otherwise qualify as fair use, fair use status does not apply. With the increasing prominence of A.I.-generated content, the oversaturation of the market would lead to an inherent devaluation of original works. This may prove a useful outlet for future litigation regarding copyright and G.A.I., but it does not directly implicate any G.A.I. firms, rendering it ineffective for now. These precedents illustrate that copyright may be a potential avenue of regulation, but it is not currently prepared to address these novel issues. Because of this, specific regulation is further required.

These issues of non-regulation hit especially close to home for journalists. Journalism is a predominantly text-based medium, meaning the proliferation of text-based generative A.I. such as ChatGPT poses an existential threat to the profession. This view is echoed throughout the journalistic community, with the effects of early automation efforts already being felt through mass layoffs. Early adopters such as Microsoft (a company whose investment in OpenAI, the creator of ChatGPT, has fueled a stock rally that nearly made it the most valuable company in the world (Karaian)) have begun implementing artificial intelligence on their websites. This change will influence the millions of Americans who use Windows products and are thereby prompted to use one of the world’s most influential news sources (O’Sullivan). The labour aspect is not the only reason for dismay at this artificial reckoning, as the quality of reporting also suffers. This new journalistic revolution has been plagued by fake news, unreliable sources, and self-reported “low quality articles” (O’Sullivan). These issues call for systemic reform, reform that is likely to be outpaced by innovation in the sector. Some news organizations are attempting to integrate artificial intelligence in a productive yet controlled manner. Such is the case with the Associated Press, a news organization that has long sat at the peak of journalistic integrity and decorum and is actively leading a movement for reform. The A.P. has established a set of internal rules governing the use of A.I., restricting it from content generation and instead treating it as unvetted source material (David), effectively treating G.A.I. as a whistleblower. While this is a net positive, it does not go far enough. A wide coalition of news organizations recognizes the unsustainability of this method of self-regulation and is pleading for transparency reforms, explicitly stating that “these technologies can threaten the sustainability of the media ecosystem” (Preserving public trust in media through unified AI regulation and practices). This begins to illustrate the potential problems of non-regulation, but darker clouds are on the horizon, threatening the stability of international democracy and cooperation.

As alluded to at the beginning of this essay, the ramifications of a free-rein development philosophy may create new, complex issues. Potential issues include both the individual-level threats to labour rights discussed above and state-level concerns involving constituent-based public policy decisions. This becomes obvious when we examine how decisions in government are made. Decisions are based on consensus, and consensus is derived from informed viewpoints constructed through independent research, most often through reliable journalism. A core tenet of civil society is the participation of the people in their government, and while state actors may have the knowledge and resources to avoid falling into this trap of accidental ignorance, the voter base that elects these actors to power may not. To avoid this trap, and to ensure a trustworthy and principled news ecosystem, we must achieve a balance between algorithmic utilization and ethical responsibility (Ahmad). Within the last year, we have seen a proliferation of low-quality content farms that tip the scales toward algorithmic dependence (Brewster). Many of these articles are plagiaristic in nature and lack a manual review process (O’Sullivan). While I contend that this unregulated space is ultimately unhealthy, others argue that we are merely in a state of development that must be explored first and regulated later.

This idea of delayed regulation is parroted by titans of the tech industry, such as the “Father of ChatGPT,” Sam Altman (Bello). In developing this technology, Altman has operated under a mentality similar to the “move fast and break things” mantra that Mark Zuckerberg once coined (Nolan). Altman has described his development goals as an “A.I. arms race” against competing startups, a sentiment echoed by his business partner Microsoft in an official memo. This negligent attitude was stated explicitly by Sam Schillace, a top technology executive at Microsoft, who wrote that it would be an “absolutely fatal error in this moment to worry about things that can be fixed later” (Weise). The rationale behind this decision is to outpace other firms attempting to harness similar technology, a methodology akin to Facebook’s during its inception. Although likely a profitable move, this approach has historically been harmful, as it was for Facebook, which suffered numerous crises due to its “move fast and break things” mantra, the Cambridge Analytica scandal being the most notable (Nolan). While we cannot undo the mistakes of our past, I believe it is our responsibility to prevent history from repeating itself, especially considering the elevated stakes that come with this new technology.

These issues are sure to change, as the dynamic landscape we find ourselves in is subject to the ebbs and flows of technological advancement alongside potential government oversight. While there is a distinct lack of government intervention at the moment, past precedents regarding copyright may prove effective in regulating this novel technology. Although current fair-use statutes, and the law at large, are unprepared for this new age of technology, potential avenues of regulation are already written into the law and may be leveraged to provide guardrails. Such guardrails have proven necessary, as the current climate projects a grim outlook for journalistic integrity and morality at many major news organizations. This technology has further proven to be a potential threat to the exercise of civil society, and we should not trust those beholden to capital to regulate away their own potential gains. As we have seen in countless instances in the past, capital left to its own devices corrupts, favouring expansion over empathy. For this reason, I hold that further regulation is of pressing concern to all individuals who interact within society at large.