Building on the work of the group of independent experts appointed in June 2018, the Commission is today launching a pilot phase to ensure that the ethical guidelines for Artificial Intelligence (AI) development and use can be implemented in practice. The Commission invites industry, research institutes and public authorities to test the detailed assessment list drafted by the High-Level Expert Group, which complements the guidelines.
Today's plans are a deliverable under the AI strategy of April 2018, which aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust.
Vice-President for the Digital Single Market Andrus Ansip said: “I welcome the work undertaken by our independent experts. The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
Commissioner for Digital Economy and Society Mariya Gabriel added: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
Artificial Intelligence (AI) can benefit a wide range of sectors, such as healthcare, energy consumption, car safety, farming, climate change and financial risk management. AI can also help to detect fraud and cybersecurity threats, and enable law enforcement authorities to fight crime more efficiently. However, AI also brings new challenges for the future of work, and raises legal and ethical questions.
The Commission is taking a three-step approach: setting out the key requirements for trustworthy AI, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric AI.
1. Seven essentials for achieving trustworthy AI
Trustworthy AI should respect all applicable laws and regulations, as well as a series of seven key requirements; specific assessment lists aim to help verify the application of each of them:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
2. Large-scale pilot with partners
In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts. In addition, members of the AI high-level expert group will help present and explain the guidelines to relevant stakeholders in Member States.
3. Building international consensus for human-centric AI
The Commission wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.
Members of the AI expert group will present their work in detail during the third Digital Day in Brussels on 9 April. Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps.
Furthermore, to ensure the ethical development of AI, the Commission will, by autumn 2019: launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.
The Commission is facilitating and enhancing cooperation on AI across the EU to boost its competitiveness and ensure trust based on EU values. Following its European strategy on AI, published in April 2018, the Commission set up the High-Level Expert Group on AI, which consists of 52 independent experts representing academia, industry, and civil society. They published a first draft of the ethics guidelines in December 2018, followed by a stakeholder consultation and meetings with representatives from Member States to gather feedback. This follows the coordinated plan with Member States to foster the development and use of AI in Europe, also presented in December 2018.
For more information
Communication: “Building trust in human-centric artificial intelligence”
AI ethics guidelines
Factsheet artificial intelligence
High-Level Expert Group on AI
European AI Alliance
Artificial Intelligence: A European Perspective
Artificial Intelligence Watch