Ethical Constraints of AI

Author: Lily Nelsen

AI Overview:

Generative Artificial Intelligence (GAI) is taking the world by storm: as of December 12, 2024, 44% of Americans reported interacting with AI, and that number is steadily increasing. [1] With AI development moving at lightning speed, the US government has been hesitant to derail progress with limiting statutes. So what does that mean for folks who use generative AI, and for those who are opposed to it?

First, let’s go over what generative AI is. Generative Artificial Intelligence is a class of AI that uses machine learning models to generate text, images, and other forms of content. These models tend to output whatever is statistically most probable based on the information they are “fed”. [2] In addition, if an output the AI generates is incorrect, a human must intervene to correct it.


AI creators also use outside sources to train their models, such as Google queries and image searches. The use of real-life photographs and artworks is a major point of contention for people who don’t approve of their works being used to train such models.

Machine vs. Deep Learning:

A machine learning model is a computational program that, as mentioned, uses probability and requires human intervention to learn. For example, if ChatGPT says something incorrect, the person interacting with it needs to tell the algorithm that what it calculated to be true isn’t. There are quite a few distinct approaches to machine learning; some of them are listed below.

  • Neural networks [3]
  • Linear regression [4]
  • Logistic regression [5]
  • Clustering [6]
  • Decision trees [7]
  • Random forests [8]

I will not go into detail about every machine learning algorithm mentioned; however, I encourage you, the reader, to look into these learning algorithms if you are interested in the technical details of how a machine learning model works in each of these scenarios.
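To give a small taste of one of the methods listed above, here is a minimal sketch of linear regression fit by ordinary least squares, using a made-up toy dataset (the numbers are purely illustrative, not from any source in this article):

```python
# Toy dataset: hours studied vs. exam score (made-up numbers for illustration)
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
score = [52.0, 55.0, 61.0, 64.0, 70.0]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(score) / n

# Closed-form least-squares fit for: score = slope * hours + intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, score)) \
        / sum((x - mean_x) ** 2 for x in hours)
intercept = mean_y - slope * mean_x

print(f"score ≈ {slope:.2f} * hours + {intercept:.2f}")
```

The model simply finds the line that best explains the data it was “fed” — which is why, as described above, the output is only as good as the training data, and a human has to notice when a prediction is wrong.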

Deep learning models do not need that kind of human intervention. They also do not require structured, labeled datasets to inform their decisions: “The deep learning process can ingest unstructured data in its raw form (e.g., text or images), and it can automatically determine the set of features which distinguish different categories of data from one another.” [9]
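To make this slightly more concrete, here is a minimal sketch of the basic building block behind deep learning: a single artificial neuron. The weights below are hand-picked for illustration only; in a real network, many layers of these units are stacked, and the weights are learned from data rather than set by hand:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the output in (0, 1)

# Example with illustrative, hand-picked weights (a real network learns these)
print(neuron([0.5, 0.9], [1.2, -0.4], 0.1))
```

Each layer of a deep network transforms its inputs this way, which is how the model can automatically build up its own features from raw text or images.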

Struggling With AI:

Several communities have struggled with the rise of generative AI, including but not limited to: visual artists, writers of both fiction and nonfiction prose, poets, songwriters, and pretty much any creative field you can think of. In addition to folks in the humanities and arts, teachers of all subjects have been struggling to adjust from grading papers with content in mind to grading papers based on whether the student actually wrote them. Although plagiarism has been an issue for years, it used to take actual effort and convincing to cheat off of another person.

There is a rising fear in almost every career that AI will eventually become the best friend of corporations, and that people will no longer be needed in the jobs they rely on to survive, especially in the US.

Art:

AI has been a hot topic for the folks who make art to begin with. Corporations have attempted, and continue to attempt, to replace human artists with an overall “cheaper” option: generative AI models. The main reason artists are so strongly anti-AI is that many generative models are trained on copyrighted works, not volunteered works that the artists approved for use.

“It’s overnight an almost billion-dollar industry. They just took our work, and suddenly we’re seeing our names being used thousands of times, even hundreds of thousands of times,” said Karla Ortiz, an artist who filed a copyright-infringement lawsuit against Stability AI, the London-based creators of Stable Diffusion. [20] This is an issue in the art community because of the lack of financial compensation for the artists whose pieces are used, as well as the general copyright infringement.


Education:

Many students in both K-12 and higher education have used AI in their learning endeavors. However, teachers and researchers alike have been concerned about students’ ability to retain information when AI is involved. From research papers to coding samples, AI can do it all without the person using it having to do any work.

This also raises a question: if AI is trained on other people’s work and spits up answers with no original thought, does that count as plagiarism, and should it therefore result in academic misconduct being filed against the student? It is also incredibly unfair to students who choose not to use AI, or who don’t have access to the resources to do so.

“AI is only as good as the algorithms it is based on. If the data it draws from is inaccurate or biased, then the information it creates will be inaccurate or biased. Students need to learn how to evaluate and think critically about the information they come across and not just accept it at face value.” [10]


Media & Publicity:

Deep fakes and fake propaganda have been a widespread issue since generative AI started sweeping the nation. You may have even seen some on your social media feed. Ever seen the videos replicating past and current presidents’ voices playing on a Minecraft server? Yeah, that counts as a deep fake.

These AI-generated deep fakes are oftentimes pornographic in nature, and there are rarely laws to protect individuals from having their likeness used for AI generation.

How AI deepfakes threaten elections across the world in 2024


Environmental Impact:

One AI prompt can use up to 2.4 gallons of water for cooling purposes alone. This figure is only an estimate, given the lack of transparency from the data centers that house the resources for AI models, but we can still extrapolate from the knowledge we do have.
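Taking the up-to-2.4-gallon figure above at face value, a quick back-of-the-envelope calculation shows how fast this scales. The daily prompt volume below is an assumption chosen purely for illustration, not a measured number:

```python
GALLONS_PER_PROMPT = 2.4        # upper-bound per-prompt figure cited above
PROMPTS_PER_DAY = 1_000_000     # assumed daily prompt volume, purely illustrative

daily_gallons = GALLONS_PER_PROMPT * PROMPTS_PER_DAY
print(f"{daily_gallons:,.0f} gallons/day")  # 2,400,000 gallons/day
```

Even at a modest assumed volume, the worst-case water cost adds up to millions of gallons per day, which is why the lack of transparency from data centers matters so much.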

There is a counterargument to the claim that generative AI is inherently unsustainable: a Chinese company recently released DeepSeek, a large language model that requires significantly less power to run and maintain. In normal circumstances, you can simply run it on your home computer.

However, this brings us to a politically motivated issue that arises when you get information from a human-trained AI: when asked politically charged questions, the model answers based on the country it was trained in and the values of the person(s) who trained it. Considering the company that created DeepSeek is located in China, you can see the concern some US citizens may have as a result.


Privacy & Security:

AI is being used to develop malware and viruses, and to steal information through AI prompts. “As GAI (Generative Artificial Intelligence) rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks, leading to a paradox wherein the same innovations meant to safeguard digital infrastructures also enhance the arsenal available to cyber criminals.” [14]

Obviously, AI poses a threat to our cyberspace. Although the malware code an AI produces may not be perfect, it provides an easy template for your average cyber-criminal to create malware at higher speeds than ever before. Some of the most common AI-powered cyberattacks are social engineering attacks, phishing, smishing, and deep fakes, as discussed in a previous section. In some cases, ransomware has been AI-assisted, which creates a very scary, very real threat to users. [22]


Conclusion:

I know you, as the reader, likely use AI on a somewhat regular basis and think it’s a useful tool for what it is. In normal circumstances, I would agree; however, I hope these ethical constraints have demonstrated how liberally AI is being used, at a cost to our information retention, our originality, and our planet. Don’t believe me? Well, ask the AI itself. Next time you think of using ChatGPT or any other prompt-based generative AI, ask it “What are the ethical concerns of using a generative AI model such as yourself?” and see what answer it spits back up.

I did just that, and here is the answer it gave me:

Using a generative AI model like myself raises several ethical concerns, which are important to consider to ensure responsible and fair use. Some of the key concerns include:

  1. Bias and Fairness
  • Issue: AI models are trained on large datasets that may include biased or unrepresentative data. As a result, AI models can unintentionally reflect or amplify these biases, leading to unfair outcomes in areas like hiring, criminal justice, or healthcare.
  • Concern: If a model like mine is used to assist in decision-making, there’s a risk that biased outputs could perpetuate inequality or harm certain groups.
  2. Misinformation and Disinformation
  • Issue: Generative AI models can create highly convincing text, images, or other content that is indistinguishable from human-generated content.
  • Concern: This can be misused to spread false information, manipulate opinions, or deceive people. For example, generating fake news, misleading advertisements, or impersonating individuals.
  3. Privacy
  • Issue: AI models can inadvertently generate or retrieve sensitive information from the data they’ve been trained on, which may include personal data or confidential information.
  • Concern: If these models are used irresponsibly, there could be privacy violations or security risks, especially if the models are used to track or monitor individuals without their consent.
  4. Accountability
  • Issue: It can be difficult to determine who is responsible for the actions of an AI model, particularly if it is used to make critical decisions (e.g., medical diagnoses, hiring decisions).
  • Concern: If something goes wrong or harmful decisions are made, accountability can be unclear—whether it lies with the developers, the users, or the AI itself.
  5. Job Displacement and Economic Impact
  • Issue: Automation and AI technologies have the potential to replace human workers in a variety of fields, from customer service to creative industries.
  • Concern: While AI can boost productivity, it could also lead to significant job losses, economic inequality, and shifts in workforce dynamics.
  6. Autonomy and Human Control
  • Issue: As AI systems become more advanced, there is a risk that they may operate in ways that are difficult for humans to understand or control.
  • Concern: In critical areas like healthcare or military applications, overreliance on AI could undermine human oversight, leading to situations where AI makes decisions that humans cannot easily intervene with or understand.
  7. Manipulation and Ethical Use
  • Issue: Generative AI can be used to create persuasive content that influences people’s behavior, opinions, or emotions.
  • Concern: There’s a potential for AI to be used unethically for manipulation, such as creating propaganda, engaging in targeted political manipulation, or generating content that exploits people’s vulnerabilities.
  8. Environmental Impact
  • Issue: Training large AI models requires significant computational resources, which can have a large carbon footprint.
  • Concern: As AI becomes more widespread, the environmental impact of running and maintaining these models must be considered, particularly in terms of energy consumption and the sustainability of these practices.
  9. Lack of Transparency (Black Box Problem)
  • Issue: Many AI models, especially large ones, operate as “black boxes,” meaning their decision-making process is not easily understandable by humans.
  • Concern: This lack of transparency can be a significant ethical issue, particularly when AI is used in high-stakes areas like healthcare, law, or finance, where understanding how decisions are made is crucial.
  10. Intellectual Property
  • Issue: Generative models can create content that mimics or replicates existing works, potentially infringing on intellectual property rights.
  • Concern: It raises questions about the ownership of AI-generated content and whether creators should be compensated for the works that inspire AI models.

Mitigating Ethical Concerns:

To address these concerns, it’s important for organizations and developers to implement robust safeguards, such as:

  • Conducting bias audits to ensure fairness and inclusivity.
  • Creating transparency in how AI models are built and used.
  • Promoting responsible AI use through guidelines and regulations.
  • Encouraging human oversight and accountability in AI decision-making.

Ethics in AI is a rapidly evolving area, and ongoing dialogue and action will be necessary to navigate these challenges effectively. [13]

Sources:

[1] AI Statistics 2024 · AIPRM

[2] What is generative AI? – IBM Research

[3] What is a Neural Network? | IBM

[4] What is Linear Regression? | IBM

[5] What is Logistic Regression? | IBM

[6] What is Clustering? | IBM

[7] What is a Decision Tree? | IBM

[8] What is a Random Forest? | IBM

[9] What Is Machine Learning (ML)? | IBM

[10] AI in Schools: Pros and Cons | Illinois

[11] AI’s Challenging Waters | Civil & Environmental Engineering

[12] THE STATE OF DEEPFAKES

[13] ChatGPT

[14] [2405.01674] Generative AI in Cybersecurity

[15] Deep Learning vs. Machine Learning – What’s The Difference?

[16] Summary Artificial Intelligence 2024 Legislation

[17] Generative AI Regulations – What Your Business Needs To Know for 2025

[18] Ethical concerns mount as AI takes bigger decision-making role — Harvard Gazette

[19] The ethical dilemmas of AI | USC Annenberg School for Communication and Journalism

[20] Visual artists fight back against artificial intelligence companies for repurposing their work | PBS News

[21] The Dark Side of AI in Cybersecurity — AI-Generated Malware

[22] Most Common AI-Powered Cyberattacks | CrowdStrike

