
UN CALLS FOR MORATORIUM ON ARTIFICIAL INTELLIGENCE TECH THAT THREATENS HUMAN RIGHTS

The UN called Wednesday for a moratorium on artificial intelligence systems like facial recognition technology that threaten human rights until “guardrails” are in place against violations. UN High Commissioner for Human Rights Michelle Bachelet warned that “AI technologies can have negative, even catastrophic effects if they are used without sufficient regard to how they affect people’s human rights.”

She called for assessments of how great a risk various AI technologies pose to rights to privacy and freedom of movement and expression.

She said countries should ban or heavily regulate the ones that pose the greatest threats.

But while such assessments are underway, she said that “states should place moratoriums on the use of potentially high-risk technology.”

Presenting a fresh report on the issue, she pointed to profiling and automated decision-making technologies.

She acknowledged that “the power of AI to serve people is undeniable.”

“But so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility,” she said.

“Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

Damage human lives

The report, which was called for by the UN Human Rights Council, looked at how countries and businesses have often hastily implemented AI technologies without properly evaluating how they work and what impact they will have.

The report found that AI systems are used to determine who can access public services and who is recruited for jobs, and that they affect what information people see and can share online, Bachelet said.

Faulty AI tools have led to people being unfairly denied social security benefits, while innocent people have been arrested due to flawed facial recognition.

“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real,” Bachelet said.

Discriminatory data

The report highlighted how AI systems rely on large data sets, with information about people collected, shared, merged, and often analyzed opaquely.

The data sets themselves can be faulty, discriminatory, or out of date and thus contribute to rights violations, it warned.

For instance, they can erroneously flag an individual as a likely terrorist.

The report raised particular concern about law enforcement’s increasing use of AI, including as forecasting tools.

When AI systems and algorithms draw on biased historical data, their profiling predictions reflect that bias, for instance by directing increased police deployments to communities already identified, rightly or wrongly, as high-crime zones.

Remote real-time facial recognition is also increasingly deployed by authorities across the globe, the report said, potentially allowing the unlimited tracking of individuals.

Such “remote biometric recognition technologies” should not be used in public spaces until authorities prove they comply with privacy and data protection standards and do not have significant accuracy or discriminatory issues, it said.

“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact,” Bachelet said.
