Artificial Intelligence (AI) has emerged as a powerful catalyst for technological advancement and societal change. However, its rapid growth also raises critical ethical and legal questions. To navigate these complexities, the European Commission has published Guidelines to ensure the development and deployment of trustworthy AI.
What Makes AI Trustworthy?
The Guidelines articulate three fundamental pillars that define trustworthy AI:
- Lawful: AI systems must comply with all applicable laws and regulations, ensuring that their operation is within the legal framework.
- Ethical: These systems should adhere to ethical principles and values, placing human dignity and rights at the forefront.
- Robust: AI must be resilient, not only from a technical standpoint but also within its broader social context.

Core Requirements for Trustworthy AI
The Guidelines further specify seven key requirements for AI systems to be deemed trustworthy:
- Human Agency and Oversight: It is essential that AI systems empower individuals, allowing them to make informed choices while ensuring effective oversight mechanisms. This can involve a variety of approaches, including human-in-the-loop, human-on-the-loop, and human-in-command systems.
- Technical Robustness and Safety: AI solutions should be secure, resilient, and include fallback plans for unexpected situations. They must be accurate, reliable, and reproducible to minimize potential harm.
- Privacy and Data Governance: Respect for privacy and data protection is paramount. AI systems must have robust data governance frameworks that ensure quality and integrity, along with legitimate access rights to data.
- Transparency: The algorithms, data, and business models behind AI should be transparent. Traceability mechanisms are essential, and users must be informed about the capabilities and limitations of these systems.
- Diversity, Non-discrimination, and Fairness: AI must proactively avoid biases that could lead to negative societal consequences, including discrimination against vulnerable groups. It is crucial that these systems foster diversity and inclusivity.
- Societal and Environmental Well-being: AI should contribute positively to society and the environment. This means developing sustainable solutions that consider future generations and the environmental impact of AI technologies.
- Accountability: Clear accountability mechanisms must be established for AI systems. This involves enabling audits of algorithms, data, and design processes, especially in critical applications.
A Definition of AI
The AI High-Level Expert Group (AI HLEG) has also developed a comprehensive definition of Artificial Intelligence specifically for these Guidelines. Understanding this definition is crucial for applying the Guidelines effectively in practice.
Download the Definition of AI in Your Language
BG | CS | DE | DA | EL | EN | ES | ET | FI | FR | HR | HU | IT | LT | LV | MT | NL | PL | PT | RO | SK | SL | SV
Piloting the Guidelines
An assessment list operationalizes these requirements and guides their practical implementation. On June 26, 2019, this list entered a piloting phase that invited feedback from diverse stakeholders.
Feedback was collected through multiple channels:
- An open survey for quantitative analysis, targeting those who registered for the piloting process.
- In-depth interviews with a selection of organizations from various sectors, providing qualitative insights.
- A continuous platform for feedback and best practices via the European AI Alliance.
Completion of the Piloting Phase
The piloting phase concluded on December 1, 2019. Based on the gathered feedback, the AI HLEG refined and presented the final Assessment List for Trustworthy AI (ALTAI) in July 2020. This checklist is a practical tool that translates the Ethics Guidelines into a dynamic self-assessment format, designed for developers and deployers of AI.
ALTAI is available as a prototype web-based tool and in PDF format.