
Artificial Intelligence (AI) is the buzzword of our time. Just as human actions are judged against a set of standards, a machine that simulates human intelligence must undergo the same, and indeed stricter, quality checks. Modern AI systems excel in many areas, but one area that is often overlooked is ethics.
Ethics is a system of moral principles that includes ideas about right and wrong, and how people should (or should not) behave in general and specific cases. Why should a machine be ethical? Let’s look at some interesting stories where it wasn’t:
- “Ghiblification” (2025): AI image generators were trained without consent on copyrighted Studio Ghibli art, contradicting Hayao Miyazaki’s philosophy; he has famously described AI-generated art as an insult to life itself.
- Amazon’s Biased Hiring Tool (2014–2018): Amazon had to scrap an AI recruiting tool that taught itself to prefer male candidates, because it was trained on resumes submitted to the company over ten years, most of which came from men.
These examples show why ethics matters for AI. Let’s explore the moral implications of AI:
- Biased AI: AI systems can inherit human biases from their training data, which can lead to unfair or discriminatory outcomes.
- AI in law: AI-based decisions may lack transparency, neutrality, and accountability, potentially resulting in discrimination.
- Privacy Concerns: AI models are trained on enormous amounts of data, much of which is personal or highly sensitive.
- Accountability: When AI makes a wrong move, who is responsible – the developer, the deployer, or the AI itself?
- Job and Economic Impact: AI-driven automation can widen economic inequality by replacing human jobs with machines.
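The bias concern above can be made concrete with a simple fairness check. The sketch below is a minimal illustration, not a production audit: the toy hiring data, the group labels, and the `selection_rate` helper are all invented for this example. It computes the demographic parity gap, i.e. the difference in selection rates between groups, which is one common first signal that a model like Amazon’s recruiting tool may be biased.

```python
# Minimal sketch: checking demographic parity on toy hiring decisions.
# All data and names below are hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

# Toy model outputs: 1 = shortlisted, 0 = rejected,
# grouped by a protected attribute (e.g. gender).
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(rates)       # {'group_a': 0.75, 'group_b': 0.375}
print(parity_gap)  # 0.375 -- a large gap is a signal to audit the model
```

A real audit would use many more records and additional metrics (equalized odds, calibration), but even this simple check would have flagged the skew that sank Amazon’s tool.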
Given these concerns, it is clear why AI must be ethical and why this is a serious matter. Several government and non-government organizations have proposed frameworks and guidelines:
- The European Union (EU) Artificial Intelligence Act is a landmark regulation that sets standards for how AI systems must be developed, deployed, and used across all EU member states.
- UNESCO’s global Recommendation on the Ethics of Artificial Intelligence, which encourages multi-stakeholder involvement in drafting rules and regulations.
- The OECD AI Principles, which focus on human-centred values, fairness, transparency, and explainability.
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which provides technical standards and ethical frameworks for AI developers.
In conclusion, efforts have been made to make AI ethical and responsible, and they must continue proactively. As the saying goes, “Prevention is better than cure.” Establishing strong ethical standards for AI is not optional; it is necessary to ensure technology benefits humanity without causing harm.
