UN calls for a moratorium on the use of AI that threatens human rights

UN Human Rights Chief Michelle Bachelet has called for a moratorium on the use of AI technologies that pose a serious risk to human rights, including face scanning systems that track people in public places.

The UN High Commissioner for Human Rights also said that national governments should explicitly ban AI applications that do not comply with international human rights law.

Michelle Bachelet

Applications that should be banned include government “social assessment” systems and certain AI-based tools that divide people into groups, for example on the basis of ethnicity or gender.

The UN Human Rights Office released a report on Wednesday warning of the risks associated with artificial intelligence technology and emphasizing that while artificial intelligence can be a force for good, it can also have disastrous effects if used irresponsibly.

The report says that some countries and companies have rushed to use artificial intelligence systems that affect people’s lives, without creating adequate safeguards to prevent discrimination and other harm.

“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as the intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the UN report states.

Bachelet did not call for a complete ban on facial recognition technology, but said governments should stop real-time face scanning until they can demonstrate that the technology is accurate and meets certain privacy and data protection standards.

“The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Michelle Bachelet stressed.

The report also expressed concern about tools that attempt to determine people’s emotional and mental state by analyzing their facial expressions or body movements. According to experts, such technologies are prone to bias and misinterpretation and lack a scientific basis.

Let me remind you that we previously reported on how the Chinese authorities use AI to analyze the emotions of Uyghur prisoners, and that China has officially legalized its “Social Credit System”.

Daniel Zimmermann

Daniel Zimmermann has been writing on security and malware subjects for many years and has been working in the security industry for over 10 years. Daniel was educated at the Saarland University in Saarbrücken, Germany and currently lives in New York.
