AI Ethics: What It Is and Why It Matters

Written by Coursera Staff

AI ethics is a growing topic, given the rapid adoption of advances in artificial intelligence. Learn how it can help foster a world with less bias and more fairness.


As artificial intelligence (AI) becomes increasingly important to society, experts in the field have identified a need for ethical boundaries when creating and implementing new AI tools. While the UK government has devised a framework for regulating AI, it has yet to codify that framework into law. In the interim, many technology companies have adopted their own versions of AI ethics or an AI code of conduct.

AI ethics is the set of moral principles companies use to guide the responsible and fair development and use of AI. Explore what AI ethics is, why it matters, and some of the challenges and benefits of developing an AI code of conduct.

What are AI ethics?

Stakeholders, ranging from engineers to government officials, use AI ethics as guiding principles to ensure responsible development and use of artificial intelligence technology. These principles require a safe, secure, humane, and environmentally friendly approach to AI. 

A strong AI code of ethics can include avoiding bias, ensuring the privacy of users and their data, and mitigating environmental risks. The two main ways to implement AI ethics are company codes of ethics and government-led regulatory frameworks. Between them, these approaches cover global and national ethical AI issues and lay the policy groundwork for ethical AI within companies, helping to regulate AI technology.

More broadly, the discussion surrounding AI ethics has progressed beyond academic research and non-profit organisations. Today, big tech companies like IBM, Google, and Meta have assembled teams to tackle the ethical issues that arise from collecting massive amounts of data. At the same time, government and intergovernmental entities have begun to devise regulations and ethics policies based on academic research.

Stakeholders in AI ethics

Developing principles for the ethical use and development of AI requires industry actors to work together. Stakeholders must examine how social, economic, and political issues intersect with AI and determine how machines and humans can coexist harmoniously.

Each actor plays an essential role in reducing bias and risk in AI technologies.

  • Academics: Researchers and professors develop theory-based statistics, research, and ideas that can support governments, corporations, and non-profit organisations.

  • Government: Agencies and committees within the UK government and other national governments can help facilitate AI ethics within a nation. A good example is the Government Communications Headquarters’ report, Pioneering a New National Security, which explores AI’s potential in the UK and delves into ethical challenges and potential solutions.

  • Intergovernmental entities: Entities like the United Nations and the World Bank work to raise awareness and draft agreements on AI ethics globally. For example, UNESCO’s 193 member states adopted the first-ever global agreement on the Ethics of AI in November 2021 to promote human rights and dignity.

  • Non-profit organisations: Non-profit organisations like AI for Good create AI technology, partner with local non-governmental organisations and charities, and lead the conversation surrounding AI ethics.

  • Private companies: Executives at Google, Meta, and other tech companies, as well as leaders in banking, consulting, healthcare, and other private-sector industries that use AI technology, are responsible for creating ethics teams and codes of conduct. These efforts often set a standard for other companies to follow.

Why is AI ethics important?

AI ethics is vital given AI technology’s capacity to augment or replace human intelligence. When a technology’s design closely mirrors human life, the same issues that can cloud human judgement can seep into the technology.

AI projects built on biased or inaccurate data can have harmful consequences, particularly for underrepresented or marginalised groups and individuals. Furthermore, if engineers and product managers build AI algorithms and machine learning models too hastily, correcting learned biases later may prove unmanageable. It is easier to mitigate these risks by incorporating a code of ethics during the development process.

AI ethics in film and TV

Science fiction—in books, films, and television—has toyed with the notion of ethics in artificial intelligence for some time now. For example, in Spike Jonze’s 2013 film Her, a computer user falls in love with his operating system because of her seductive voice. It’s entertaining to imagine how machines could influence human lives and push the boundaries of “love,” but it also highlights the need for thoughtfulness around these developing systems.


Examples of AI ethics

It may be easiest to illustrate the ethics of artificial intelligence with real-life examples. In December 2022, the app Lensa AI used artificial intelligence to generate cool, cartoon-looking profile photos from people’s regular images. From an ethical standpoint, some criticised the app for failing to credit or adequately compensate artists who created the original digital art used to train the AI. According to The Independent, Lensa was trained on vast numbers of photos, many of which the artists never consented to contribute [1].

Another example is the generative AI model ChatGPT, which enables users to interact with it by asking questions. Trained on vast amounts of internet data, ChatGPT can answer with a poem, Python code, or a proposal. One ethical dilemma is that people use ChatGPT to win coding contests or to write essays. It also raises questions similar to those in the Lensa situation, but with text rather than images [2].

In 2023, the Writers Guild of America went on strike for 148 days, with support from UK writers. The strike was part of broader Hollywood labour disputes, alongside the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strike, in which job protections against artificial intelligence (AI) were a central issue. Established actors worried about losing control over their likenesses, lesser-known actors feared outright replacement, and writers grappled with the prospect of sharing or losing credit to generative AI (GenAI) systems.

These are just three notable examples of ethical issues in AI. As AI has grown in recent years, influencing nearly every industry and substantially impacting sectors like healthcare, AI ethics has become even more salient. How do we ensure bias-free AI? How can we mitigate risks in the future? Potential solutions are emerging, but stakeholders must act responsibly and collaboratively to create positive global outcomes.

Ethical challenges of AI

Various real-life challenges can help illustrate AI's potential downside and the need for established ethical guidelines. Below are just a few examples.

AI and bias

If AI doesn’t collect data that accurately represents the population, its decisions might be susceptible to bias. In 2018, Amazon came under fire for its AI recruiting tool, which downgraded CVs that featured anything related to “women” (such as “Women’s International Business Society”) [3]. The AI tool discriminated against women and created legal risks for the tech giant.
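
To make this kind of failure easier to spot, teams often compare outcomes across groups rather than looking only at overall accuracy. The Python sketch below illustrates that audit pattern on invented data: it is not Amazon’s system (which was never made public), and the scores, threshold, and groups are all hypothetical. The 80 per cent cut-off borrows from the “four-fifths rule” used as a heuristic in US employment guidance.

```python
# Illustrative bias audit on hypothetical CV scores. The data and the
# scoring threshold are invented; the point is the audit pattern:
# compare selection rates across groups, not just overall accuracy.

hypothetical_cvs = [
    {"group": "women", "score": 0.62},
    {"group": "women", "score": 0.55},
    {"group": "women", "score": 0.41},
    {"group": "men",   "score": 0.80},
    {"group": "men",   "score": 0.58},
    {"group": "men",   "score": 0.77},
]

THRESHOLD = 0.6  # hypothetical cut-off for advancing a CV


def selection_rate(cvs, group):
    """Fraction of a group's CVs that clear the threshold."""
    members = [cv for cv in cvs if cv["group"] == group]
    selected = [cv for cv in members if cv["score"] >= THRESHOLD]
    return len(selected) / len(members)


rate_women = selection_rate(hypothetical_cvs, "women")
rate_men = selection_rate(hypothetical_cvs, "men")

# "Four-fifths rule" heuristic: flag if one group's selection rate
# falls below 80% of the most-favoured group's rate.
ratio = min(rate_women, rate_men) / max(rate_women, rate_men)

print(f"Selection rate (women): {rate_women:.2f}")
print(f"Selection rate (men):   {rate_men:.2f}")
flag = "potential bias" if ratio < 0.8 else "within threshold"
print(f"Disparate-impact ratio: {ratio:.2f} ({flag})")
```

In practice, audits like this run on real model outputs across many more attributes, but the core idea of comparing selection rates between groups is the same.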

AI and privacy

As mentioned earlier with the Lensa AI example, AI relies on data from internet searches, social media photos and comments, online purchases, and more. While this helps to personalise the customer experience, questions arise about whether companies have obtained valid consent to access this personal information.

AI and the environment

Some AI models are large and require significant amounts of energy to train on data. Although researchers continue to devise methods for more energy-efficient AI, policymakers could do more to incorporate environmental and ethical concerns into AI-related policies.
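
To see why energy use raises ethical questions at all, a back-of-envelope estimate helps. The Python sketch below multiplies hardware power draw by training time; every input is a hypothetical placeholder rather than a figure for any real model.

```python
# Back-of-envelope estimate of training energy and emissions. Every
# input below is a hypothetical placeholder; real figures vary widely
# by model, hardware, and data centre.

num_gpus = 1_000             # hypothetical accelerator count
power_per_gpu_kw = 0.4       # hypothetical average draw per GPU, in kW
training_days = 30           # hypothetical wall-clock training time
pue = 1.2                    # power usage effectiveness (data-centre overhead)
grid_kg_co2_per_kwh = 0.4    # hypothetical grid carbon intensity

hours = training_days * 24
energy_kwh = num_gpus * power_per_gpu_kw * hours * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:  {emissions_tonnes:,.1f} tonnes CO2e")
```

Even with these modest placeholder numbers, the total reaches hundreds of megawatt-hours, which is why training efficiency and data-centre energy sourcing feature in AI ethics discussions.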

How to create more ethical AI

Creating more ethical AI requires examining the ethical implications of policy, education, and technology. Regulatory frameworks can ensure that technologies benefit society rather than harm it. Globally, governments are beginning to enforce policies for ethical AI, including how companies should deal with legal issues if bias or other harm arises. 

Anyone encountering AI should understand the risks and potential negative impacts of unethical or fake AI. Creating and disseminating accessible resources can mitigate these risks.

Using technology to detect unethical behaviour in other forms of technology may seem counterintuitive. Still, AI tools can help determine whether video, audio, or text (hate speech on Facebook, for example) is fake, and they can detect unethical data sources and bias better and more efficiently than humans can.

Keep learning

Many different groups, including researchers, government agencies, and global organisations, play essential roles in ensuring that AI development is ethical and fair. Ultimately, ethical AI means considering how AI affects people and taking steps to prevent harm. The ultimate question society must answer is this: How do we control machines that are more intelligent than we are? Lund University’s Artificial Intelligence: Ethics & Societal Challenges course explores the ethical and societal impact of AI technologies. With topics ranging from algorithmic bias and surveillance to AI in democratic versus authoritarian regimes, this course can help you learn about AI ethics and why it matters in society.

Article sources

1. The Independent. “Lensa AI: Tool that turns your photos into stunning portraits hit by growing criticism,” https://www.independent.co.uk/tech/lensa-ai-photo-portrait-app-download-how-to-b2242408.html. Accessed 29 July 2024.


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.