Ethical Concerns of AI: What Are People Still Worried About?
July 4, 2024 - Ellie Gabel
Revolutionized is reader-supported. When you buy through links on our site, we may earn an affiliate commission. Learn more here.
Since ChatGPT’s release, more and more companies are integrating artificial intelligence into their operations or releasing their own models. The corporate world is excited about the many possibilities of automation and generative AI. However, consumers and employees are still grappling with the ethical concerns of AI.
Many fear the software will take their jobs, whether by automating their roles away or by using their creative work without permission. What do brands need to keep in mind when using or creating their own AI?
What Are the Ethical Issues of AI in 2024?
People are still worried about many aspects of AI use. Here are some of the top causes of concern in 2024.
Bias
Bias is still very much a possibility despite experts’ assertions that AI removes it. The data the software is trained on can hold preconceived ideas the developers may not recognize. Examples include underrepresenting specific groups or making judgments based on where a person lives. These factors can influence whether someone receives a job offer, a bank loan or admission to a school.
For instance, IBM has highlighted cases where AI shows high-paying positions to men more often than women and image generators depict only older men in workplace scenes. There was even an instance where Amazon had to scrap its automated hiring bot because it favored the wording on men’s applications. There are also graver problems, such as predictive policing that can reinforce racial profiling and health care algorithms that prove less accurate for Black patients.
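One simple way a team might check for this kind of disparity is to compare positive-outcome rates across groups and apply the common "four-fifths" rule of thumb. The sketch below is a minimal illustration using made-up screening data; the group labels, data and threshold are assumptions for demonstration, not a substitute for a real fairness audit.

```python
from collections import defaultdict

def selection_rates(records):
    """Share of positive outcomes per group, from (group, selected) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest selection rate to the highest.

    Values below 0.8 fail the common 'four-fifths' rule of thumb
    used as a first screen for disparate impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, was_shortlisted)
data = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = selection_rates(data)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # ~0.33 — well below 0.8, flags a disparity
```

A check like this only surfaces a symptom; finding the biased feature in the training data and fixing it is the harder part.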
Security Risks
Because AI is so new, various security risks surround it. First, people’s privacy may be in jeopardy if developers do not properly anonymize the data they use to train the program. Failing to do so may inadvertently expose personal information, or hackers may steal it from corporate databases for their own gain. Using tools to obscure the data and ensuring robust cybersecurity are critical to ethical AI use.
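As a minimal sketch of what that obscuring step can look like, the snippet below redacts two obvious identifier types from text before it enters a training set. The patterns are illustrative assumptions; a production pipeline would use a dedicated PII-detection tool that also catches names, addresses and IDs.

```python
import re

# Illustrative patterns only — real pipelines need far broader coverage.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def scrub(text):
    """Replace obvious identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Reach Jane at jane.doe@example.com or 555-010-4477."
print(scrub(record))  # Reach Jane at [EMAIL] or [PHONE].
```

Replacing identifiers with placeholder tokens, rather than deleting them, preserves sentence structure so the scrubbed text remains usable as training data.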
Other concerns include the malicious use of algorithms to launch distributed denial-of-service attacks, software with too much agency over decisions humans should make, and harmful third-party plugins and publicly available software. Enterprises should prepare for threats on several fronts: hackers wielding models of their own, someone tampering with the software to create breaches, and leaks the program can cause accidentally.
Sustainability
AI has a host of potentially positive effects on the environment. It can inspire eco-friendly changes in notably destructive sectors, make electricity use and waste reduction more efficient, predict the impact of climate change, and develop better materials. However, the energy it takes to perform those functions is currently too high, especially considering many places still use fossil fuels for power generation.
The problem of bias may contribute to ideas that prioritize profit over the environment. There are also concerns that AI use in e-commerce to speed up deliveries could increase unnecessary consumption and decrease farm biodiversity in favor of cash crops. Finally, some worry it could drive natural resource use to the point of depletion and generate mounting e-waste.
Traceability and Transparency
Two related major ethical concerns of AI use are traceability and transparency. Businesses must be honest about where they get the information they use to train their programs and be wary of AI hallucinations. Transparency about sources can help avoid lawsuits like the one the New York Times launched against OpenAI after it discovered ChatGPT was trained on paywalled content that it cited to users. Additionally, doing so ensures people can fact-check the bot’s claims rather than over-relying on it.
That ability is crucial when AI is known to make up information. There have been instances of it citing websites that do not exist, fabricating data for marketing purposes and getting basic facts simply wrong. Telling users where the knowledge came from helps sidestep legal trouble and inaccurate data.
Use in Warfare
Automation could prove highly valuable for analyzing landscape images and predicting where enemy bases or traps may be located. However, many worry about the use of autonomous weapons.
Can AI truly make the right decisions in battle, and should nations rely on software to take human life? If civilian injury occurs, who takes the blame?
Job Loss
Of course, the chief fear among the public is losing their work to AI. Many artists have had their work amalgamated into generative bots, and automation is rampant in industries from manufacturing to restaurants.
Aaron Allen & Associates — an international restaurant consulting firm — believes 82% of restaurant jobs could be automated. How will job-seekers make a living if no entry-level positions exist anymore? If work is available only to those with a college degree, earning a living could become difficult for many.
What Are the Ethical Guidelines for AI?
Anyone training an AI model must ensure the information it learns from holds as little bias as possible and is scrubbed of personally identifiable information. Having a diverse team examine the data and developers properly anonymize it helps reduce those risks. They should also enable users to see where the software’s conclusions came from.
IT teams must also ensure cybersecurity is up to snuff to ward off bad actors who want to steal the details and examine plug-ins to avoid potential breaches from maliciously designed programs. Those who use the algorithm for marketing or research must avoid over-relying on it and remember to fact-check the material it comes up with.
Ideally, enterprises will act in workers’ best interests and utilize automation alongside human staff to provide jobs while making processes easier. Doing so often makes sound business sense, as these programs mostly focus on the logical outcome without considering unique solutions. Only time will tell if innovations will make AI use more sustainable and if its presence in combat will ever be ethical.
Understanding the Ethical Concerns of AI
Employees and consumers worldwide worry about what prolific AI use could do to their experiences and roles in corporate settings. To ease those fears, brands must prioritize using these models effectively rather than focusing on quick profits. Such attention can help them dodge expensive lawsuits, reputation damage and poor hiring decisions.
Author
Ellie Gabel
Ellie Gabel is a science writer specializing in astronomy and environmental science and is the Associate Editor of Revolutionized. Ellie's love of science stems from reading Richard Dawkins books and her favorite science magazines as a child, where she fell in love with the experiments included in each edition.