2025-04-21
Artificial intelligence technology has a profound impact on human work and life and holds much positive significance for the protection of human rights. However, as artificial intelligence technology is continuously promoted across various fields, its application also faces human rights risks, both real and potential, which we must face squarely. How to address these human rights risks through the path of legalization is a question that must be answered in the course of artificial intelligence development.
Human rights risks in AI applications
AI may pose human rights risks in data, algorithms and other aspects, infringing on people's right to life, health, property, privacy, personal freedom and other rights and interests.
First, there are human rights risks related to data, including excessive data collection, data abuse, cross-border data transmission and so on, all of which may lead to human rights issues.
In terms of excessive data collection and abuse, some enterprises collect personal data such as location information and browsing records without users' knowledge, or transfer user data to other companies or institutions without consent for commercial activities such as advertising and market research, which violates users' right to know and right to privacy. If such data is used for illegal purposes such as fraud or malicious harassment by criminals, it will cause mental distress and property loss to users, infringing on their personal rights and property rights.
In terms of cross-border data transmission, significant differences in data protection regulations among countries leave data poorly protected during cross-border flows. In addition, certain countries may use cross-border data transmission as an opportunity to carry out intelligence collection for specific purposes, which may infringe on the privacy rights of citizens of other countries.
Second, there are human rights risks related to algorithms, including data bias, algorithm logic errors, and algorithmic black boxes.
Regarding data bias, the training data sets of AI algorithms may contain bias, leading to distorted analysis results that perpetuate or amplify existing inequalities in decision-making and affect the realization of rights for certain groups, such as gender or age discrimination in recruitment algorithms.
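To make the recruitment example concrete, here is a minimal sketch of how such bias can be measured. The dataset, its field names ("gender", "selected") and the rounding are hypothetical assumptions for illustration, not any particular platform's data or API; the point is simply that selection rates can diverge sharply across groups.

```python
# Minimal sketch: measuring selection-rate disparity in a hypothetical
# recruitment-screening dataset. Field names and records are illustrative.
from collections import defaultdict

# Hypothetical screening outcomes produced by a resume-ranking algorithm.
records = [
    {"gender": "female", "selected": True},
    {"gender": "female", "selected": False},
    {"gender": "female", "selected": False},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": True},
    {"gender": "male", "selected": False},
]

def selection_rates(rows):
    """Return the share of selected candidates per group, rounded for display."""
    totals, picked = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["gender"]] += 1
        picked[row["gender"]] += row["selected"]
    return {g: round(picked[g] / totals[g], 2) for g in totals}

rates = selection_rates(records)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'female': 0.33, 'male': 0.67}
print(f"disparate-impact ratio: {ratio:.2f}")  # values well below 1.0 suggest bias
```

If the training data already reflects historically skewed hiring decisions, an algorithm trained on it will reproduce this kind of gap at scale.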
As for algorithm logic errors, logic errors introduced during design or implementation can lead to unintended behavior or incorrect results, which can in turn infringe upon human rights. For example, algorithmic defects in autonomous driving scenarios can cause a vehicle to fail to handle sudden road conditions correctly (such as the sudden appearance of a pedestrian), causing accidents that directly affect the right to life and health of others.
With regard to algorithmic black boxes, because the inputs, outputs, and running processes of algorithms in AI or big data analytics lack transparency, knowability, and explainability, algorithmic monopolistic behavior can harm consumers' rights, and users' right to know and right to make autonomous decisions face the risk of being deprived.
Third, there are human rights issues resulting from the replacement of human decision-making.
Smart recommendation systems, through their algorithmic architectures, reshape users' paths of access to information, depriving users of their right to choose. For instance, video websites recommend related content based on users' historical behavior, e-commerce platforms place high-profit goods at the forefront, and some social platforms' "turn off recommendations" function requires multiple steps to operate; all of this gradually narrows users' range of information exposure and erodes their right to choose.
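The "high-profit goods at the forefront" pattern can be illustrated with a small sketch. The items, weights, and scoring function below are hypothetical assumptions; real recommendation systems are far more complex, but the basic tension between user relevance and platform profit is the same.

```python
# Minimal sketch: a ranking score that blends predicted relevance with
# profit margin. All data and weights here are hypothetical.
items = [
    {"name": "budget cable",  "relevance": 0.9, "margin": 0.1},
    {"name": "premium cable", "relevance": 0.6, "margin": 0.8},
]

PROFIT_WEIGHT = 0.7  # the larger this weight, the less user preference matters

def platform_score(item):
    """Score used to order results; profit can outweigh relevance."""
    return (1 - PROFIT_WEIGHT) * item["relevance"] + PROFIT_WEIGHT * item["margin"]

ranked = sorted(items, key=platform_score, reverse=True)
print([i["name"] for i in ranked])  # ['premium cable', 'budget cable']
```

Because the weighting is invisible to the user, the ordering looks like a neutral recommendation even when it is driven largely by the platform's margin, which is exactly how the right to choose is quietly narrowed.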
Moreover, while improving efficiency, AI tools may turn human decision-makers into "technological dependents," weakening professionals' critical thinking and causing decision-making processes to drift from "human-machine collaboration" to "machine dominance," such as the "dual misdiagnosis risk" in the medical field, where an AI misdiagnosis is compounded by a doctor who blindly follows it.
Human employment rights are also at risk. The World Economic Forum's Future of Jobs Report 2025 predicts that by 2030, 92 million jobs will be displaced by AI globally, while about 170 million new jobs may be created during the same period, forming a significant "job replacement tide." In addition, as employment thresholds rise, low-skilled workers will find it difficult to adapt to the new labor market, and the digital divide created by AI will expose developing countries to a higher risk of job loss than developed countries, potentially widening wealth gaps between countries.