Alexis Atwater  

Instructor Camilleri 

English 170  

28 November 2023 

The Threat of Artificial Intelligence 

Imagine growing up in a world where humans are no longer the most intelligent force. Instead, machines are the dominant intellectual force, doing everything that humans can do but quicker and more efficiently. This is probably easy to imagine because it is happening right now. All around you, computers and machines are becoming smarter and more human-like. Machines that mimic the intelligence of humans and perform human-like tasks are called artificial intelligence. “Artificial intelligence (AI) is a computer’s ability to do tasks commonly associated with human intelligence” (Copeland). It has taken over the world of technology due to its quick and efficient learning from data. According to Britannica, Alan Turing conducted the earliest substantial work in artificial intelligence and published his seminal paper “Computing Machinery and Intelligence” in 1950 (Copeland). Since then, there have been growing advances in what AI can do, and it is now smarter and more prominent than ever. According to the article “What Is Machine Learning?”, the artificial intelligence programs used today rely on machine learning, using data and algorithms to imitate the way humans learn, which makes the machine more intelligent and improves its accuracy as it consumes more data (“What Is Machine Learning?”). This technology is all around us today. It is becoming more prominent in the media, businesses, medical institutions, and education. The growing integration of artificial intelligence in our society poses a great threat to our population due to the threat of job displacement and wage decreases, AI-generated false media and miscommunication, algorithmic bias and discrimination, and the increasing dependency on artificially intelligent technologies, which leads to laziness and the loss of human intelligence. 
This research paper aims to bring awareness to the serious threats and consequences AI poses to our population, advocating for people to be cautious of the power it has and the damage it can do. 

Artificial intelligence poses a huge threat to the workforce. Due to AI’s ability to perform automated routine tasks quicker and more efficiently than humans, it threatens the jobs of many workers. According to a Business Insider article by Aaron Mok and Jacob Zinkula, the jobs that will be most affected by automation include tech jobs, media jobs, legal industry jobs, market research analysts, finance jobs, traders, graphic designers, accountants, and customer service agents (Mok). This demonstrates the copious number of occupations and people who will be negatively affected by this technology, showing the true threat and power that artificial intelligence has as it becomes more advanced and more widely implemented. According to a research paper called “The Impact of AI and Machine Learning on Job Displacement and Employment Opportunities” by Rudra Tiwari, artificial intelligence is expected to lead to job losses, unemployment, and a decline in the wages of workers in the affected jobs (Tiwari). This poses a threat because companies will turn to the cheaper and more efficient alternative, which is artificial intelligence, leaving many people out of work. There is already high poverty and unemployment in the world, so as this technology becomes more advanced and more widely used every day, it will only exacerbate these issues. With an increase in unemployment and poverty, the economy will change drastically, affecting everyone. Overall, artificial intelligence is a huge threat because it has significant potential to put people out of jobs and decrease workers’ wages, ultimately leaving people financially vulnerable. 

Privacy has always been a major concern in our society due to advancing technology. With artificial intelligence becoming more widely used today and its algorithms becoming more advanced, privacy concerns have skyrocketed. One reason privacy is such a major problem with artificial intelligence is that AI relies on vast amounts of personal data from its users to learn and generate answers. According to an academic journal article written by Yaou Hu, people’s personal information, such as biometric data, biographic details, and behavior logs, can be collected by AI and stored without the knowledge of the customers. This data can then be exposed to other users in unauthorized ways or even be subject to security breaches (Hu). This is a major concern because many businesses and medical institutions use artificial intelligence to generate data about their patients or clients. “If the training data contains sensitive data, like medical records, financial information, or other identifiers, there’s a risk of unintentionally generating sensitive information that violates privacy regulations across jurisdictions and puts individuals at risk” (Noss). People could then get ahold of this information and cause serious harm or damage to an individual. According to an article by Paolo Passeri, OpenAI’s ChatGPT has already had several instances where customers’ personal data, including their first and last name, email address, payment address, the last four digits of their credit card number, and chat history, was visible to other users (Passeri). This is a huge threat to people’s privacy because there is a risk of personal information getting into the wrong hands and being used in unethical ways. Another major concern about the privacy of our society is the usage of artificial intelligence by hackers to launch high-profile data breaches on companies, businesses, and other institutions. 
“With AI, hackers can find patterns in the data that help them launch more targeted attacks” (Farrelly). According to an article called “High-Profile Company Data Breaches 2023” by Jessica Farrelly, hackers can tailor their attacks to exploit specific weaknesses or trick certain individuals, making it easier for them to steal valuable information from companies (Farrelly). Artificial intelligence algorithms aid hackers in launching these attacks on companies, individuals, institutions, and more. This poses a huge threat because such companies hold very important data about their organization, the people in it, and their clients, customers, or patients. If hackers got hold of this information with the help of AI, they could do a lot of damage with it. Overall, artificial intelligence poses a huge threat to our population’s privacy because personal information can be gathered and exposed to others, and AI can aid hackers in cyber-attacks. 

Artificial intelligence not only threatens the privacy of its users; it also risks spreading misinformation and miscommunication to the public through fake images, videos, false media, and voice recordings. “AI algorithms can generate realistic-looking images, videos, and articles that appear to be from reputable sources, making it difficult for the average person to distinguish between real and fake information” (Team). This poses a threat because it can manipulate public opinion by making people think that the media they are seeing is real, which can then lead to the spread of false media and information to others. According to an article about AI and the threat of misleading voters by David Klepper and Ali Swenson, these false images, videos, and voice recordings generated by artificial intelligence algorithms are called deepfakes. Deepfakes can take the form of highly realistic images, videos, and voice recordings (Klepper). This fake content can then circulate through the media and spread fast, potentially getting someone in trouble, humiliating an individual, being used against someone in a political campaign, or manipulating the public’s opinion. If used in malicious ways, deepfake media created by artificial intelligence can cause a lot of damage to an individual, a company, an organization, or a career, or be used to scam people. Another way in which artificial intelligence poses a threat is through miscommunication by AI chatbots. People often use AI chatbots to get information about problems and questions they have. They can also be used for casual back-and-forth chats, in which AI could give you the wrong information or advice. “Humans often accept an AI system’s recommended decision even when it is wrong – a conundrum called AI overreliance. This means people could be making erroneous decisions in important real-world contexts such as medical diagnosis” (Miller). 
As noted earlier, many big companies and medical institutions use AI to help them solve crucial and complex problems. If the answers being produced are false or misleading, detrimental decisions could be made. Overall, artificial intelligence has the power to influence public opinion and the decisions people make in harmful and dangerous ways. The power that AI has to manipulate information and data can also lead to the threat of bias and discrimination in the data it produces. 

Artificial intelligence is programmed by regular people who create its algorithms. These people have the power to program these intelligent technologies to restrict or censor certain information. This poses a threat because AI technologies can have biased or discriminatory algorithms, which can lead to people making medical, employment, business, or everyday decisions in an unethical, biased, or discriminatory manner based on AI’s information. This bias and discrimination happen when the algorithm in the AI technology produces an unfair outcome that unjustifiably disadvantages a certain group. According to Vinhcent Le, a senior legal counsel of technology equity, this happens when the training data in the algorithm reflects decisions or data made by humans in the past, or reflects bias and discrimination introduced by the creator of the algorithm (Le). Many companies and public institutions use these algorithms to make decisions about employment, education, health care, investments, and the distribution of government resources. There is a strong chance that the AI technologies being used could have an underlying bias in their training data, leading to discrimination against certain people or groups. This has already happened and is still happening at many big companies. According to Le, in 2014, Amazon used a recruiting algorithm to predict which job applicants would perform best. The algorithm ended up favoring men over women because its training data reflected the historical underrepresentation of women and the dominance of men among Amazon’s employees (Le). This is very threatening because our society already struggles with discrimination in employment. Now that AI is used more by companies and implemented to make company decisions like hiring, there is still a chance of these biased and discriminatory outcomes happening. 
This can be very threatening, and even life-threatening, when hospitals and health care systems use poorly written algorithms containing biases, which could lead to certain people being denied the care they need. Overall, artificial intelligence poses a huge threat to the population because it can be trained on previous data or programmed by its makers in ways that carry underlying bias and discrimination. 

With more and more integration and usage of artificial intelligence, people are becoming dependent on these technologies to think for them, make decisions for them, create things for them, and do their homework for them. This dependency has the potential to lead to a loss of creativity and intelligence, along with an increase in laziness. Over the past year, there has been growing concern that students are using AI programs such as ChatGPT to do their homework assignments for them. Students use ChatGPT by entering an assignment from their class as a series of steps and prompts; the AI then generates an answer, and students submit this work to their teachers and get credit for it. Students nowadays don’t even have to think about the work they are doing because they can just go to an AI website and have it generate the answer for them. This prevents students from effectively learning and therefore leads to the loss of intelligence and of the ability to problem-solve and think abstractly and logically. This has a good chance of harming our society because if kids grow up relying on AI to give them answers and think for them, they will be unable to think deeply and logically for themselves. This can lead to a decrease in intelligence, and in turn a decrease in students going into professions such as medicine, law, and science that require intelligence, problem-solving, and hard work. According to a study on the link between AI and the loss of human decision-making, the use of AI leads to laziness in humans (Ahmad). “When the usage and dependency of AI are increased, this will automatically limit the human brain’s thinking capacity. This, as a result, rapidly decreases the thinking capacity of humans” (Ahmad). This poses a great concern for schools, businesses, and institutions integrating AI into their everyday work, as it may lead to a decrease in intelligent thinking in humans. 
With artificial intelligence becoming more and more used and integrated every day, there is an increasing chance of this happening. Overall, artificial intelligence poses a huge threat to the intelligence of our population and the ability to think for ourselves.  

What does this mean for the future? With these growing threats and concerns about artificial intelligence and the increasing advancement and implementation of AI programs and systems, it is important to stay alert and use them sparingly. Being aware of these concerns can help you protect your privacy and avoid the dependency that leads to intelligence decline. In terms of the biases, misinformation, and miscommunication that AI programs can produce, it is important to remember that the media you view may be false or biased content created by artificial intelligence. With advanced artificial intelligence programs becoming more prevalent, I suspect there will be talk of government regulations to ensure that algorithms are unbiased, along with better ways to tell whether media is false. Conversely, if the government begins to regulate these programs, it could be tempted to have the algorithms programmed in a biased way for political gain or other purposes of political control. 

In conclusion, artificial intelligence poses a significant threat to the present and future of our society. The more influential and prominent artificial intelligence becomes in our society, the more we have to fear. People’s jobs, privacy, opinions, and intelligence are at risk as AI becomes more powerful and more widely used. If we keep going at the pace we are today with these AI technologies, the future may look very different. It is important that we use these advanced technologies with caution and in ethical ways to avoid the significant damage they can cause. 


Works Cited 

Ahmad, Sayed Fayaz, et al. “Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness and Safety in Education.” Humanities and Social Sciences Communications, vol. 10, no. 1, 2023, https://doi.org/10.1057/s41599-023-01787-8.

Copeland, B. J. “Artificial Intelligence.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 4 Dec. 2023, www.britannica.com/technology/artificial-intelligence.

Farrelly, Jessica. “High-Profile Company Data Breaches 2023.” Electric, 10 July 2023, www.electric.ai/blog/recent-big-company-data-breaches.

Hu, Yaou, and Hyounae (Kelly) Min. “The Dark Side of Artificial Intelligence in Service: The ‘Watching-Eye’ Effect and Privacy Concerns.” International Journal of Hospitality Management, vol. 110, Feb. 2023, p. 103437, https://doi.org/10.1016/j.ijhm.2023.103437.

Klepper, David, and Ali Swenson. “AI-Generated Disinformation Poses Threat of Misleading Voters in 2024 Election.” PBS, Public Broadcasting Service, 15 May 2023, www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election.

Le, Vinhcent. “Algorithmic Bias Explained: How Automated Decision-Making Becomes Automated Discrimination.” The Greenlining Institute, 20 Dec. 2022, greenlining.org/publications/algorithmic-bias-explained/.

Miller, Katharine. “AI Overreliance Is a Problem. Are Explanations a Solution?” Stanford HAI, Stanford University, 13 Mar. 2023, hai.stanford.edu/news/ai-overreliance-problem-are-explanations-solution.

Mok, Aaron, and Jacob Zinkula. “ChatGPT May Be Coming for Our Jobs. Here Are the 10 Roles That AI Is Most Likely to Replace.” Business Insider, Business Insider, 4 Sept. 2023, www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02.

Noss, Sam. “Generative AI and Its Impact on Privacy Issues.” DataGrail, DataGrail, Inc., 8 Sept. 2023, www.datagrail.io/blog/data-privacy/generative-ai-privacy-issues/.

Passeri, Paolo. “Analyzing the Risk of Accidental Data Exposure by Generative AI.” Infosecurity Magazine, Reed Exhibitions Ltd., 16 Aug. 2023, www.infosecurity-magazine.com/blogs/accidental-data-exposure-gen-ai/.

Team, AIContentfy. “The Impact of AI on Content Authenticity.” AIContentfy, AIContentfy Oy, 1 Feb. 2023, aicontentfy.com/en/blog/impact-of-ai-on-content-authenticity.

Tiwari, Rudra. “The Impact of AI and Machine Learning on Job Displacement and Employment Opportunities.” International Journal of Scientific Research in Engineering and Management, vol. 07, no. 01, Jan. 2023, pp. 1–8, https://doi.org/10.55041/IJSREM17506.

“What Is Machine Learning?” IBM, IBM, www.ibm.com/topics/machine-learning. Accessed 4 Dec. 2023.