Artificial intelligence ethics is a field of study in its own right, and one in which multiple stakeholders must collaborate to weed out bias at every stage.
Bias in AI algorithms is real, and it can lead to disastrous outcomes: racial discrimination, misread context, gender bias and more. It can adversely affect industries ranging from consumer technology and banking to insurance and education.
The European Union has just published seven guidelines for trustworthy AI, based on around 500 comments received following the publication of a draft on ethics guidelines in December 2018. A group of 52 experts was consulted in drafting these seven guidelines for responsible AI.
The seven guidelines are not quite like Isaac Asimov’s ‘Three Laws of Robotics‘, which essentially state that a robot may never harm a human being (or, through inaction, allow one to come to harm) and must always obey instructions given by humans unless they conflict with the first law.
With artificial intelligence, the net is cast wide, as there is no single AI entity. AI in its many forms is deployed across industries, and these guidelines are meant to act as a playbook both for preventing bias from being introduced during the development of AI algorithms and for letting the general public understand how those algorithms work. Data privacy and governance are also key factors in these guidelines.
The seven guidelines are as follows:
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
According to The Verge, these guidelines are not legally binding, but they could shape any future legislation the EU drafts on the matter.
The EU website states that in summer 2019 the European Commission will launch a pilot phase involving a wide range of stakeholders; those interested can register for the European AI Alliance.
“Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps,” said the EU report.