The “Ethics Guidelines for Trustworthy Artificial Intelligence” is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the European Commission.
According to the AI HLEG, Trustworthy AI has three components: it should be lawful, ethical, and robust (both from a technical and social perspective since AI systems can cause unintentional harm).
The Guidelines aim at setting out a framework for achieving Trustworthy AI.
Chapter I identifies the ethical principles that must be respected in the development and use of AI systems. The principles are (i) respect for human autonomy; (ii) prevention of harm; (iii) fairness; and (iv) explicability.
Chapter II lists the seven requirements that AI systems should meet in order to be considered trustworthy:
- human agency and oversight;
- technical robustness and safety: AI systems need to be resilient and secure;
- privacy and data governance;
- transparency of data, system and AI business models;
- diversity, non-discrimination and fairness: AI systems should be accessible to all, regardless of any disability;
- societal and environmental well-being to benefit all human beings, including future generations;
- accountability for AI systems and their outcomes.
Chapter III provides a concrete assessment list aimed at operationalizing the key requirements set out in Chapter II.
The final section of the document aims at spelling out some of the issues “by offering examples of beneficial opportunities that should be pursued, and critical concerns raised by AI systems that should be carefully considered.”
The Ethics Guidelines for Trustworthy Artificial Intelligence (April 8, 2019) are available at https://ec.europa.eu…