AI recruitment must not overstep legal boundaries
2024-07-23
In corporate recruitment today, more and more job seekers find themselves facing an artificial intelligence (AI) interviewer rather than a human one. According to the "Research Report on the Development of China's Online Recruitment Market in 2023", AI video interviews already account for 31.8% of interview scenarios. Behind recruitment platforms' vigorous development and deployment of AI recruitment systems lies enormous market demand.

AI recruitment is favored for several reasons. First, efficiency: faced with thousands of resumes, AI screens candidates far faster than human reviewers can. Second, objectivity: AI can set aside subjective human biases and evaluate job seekers more neutrally. Third, low cost: AI recruitment saves substantial time, labor, and material expense.

As the technology develops, AI recruitment systems are becoming ever more capable, taking on more and more of the process, from initial resume screening to job matching to AI-conducted interviews; in practice, the latest interview systems can even pose personalized questions.

Yet while enjoying the technological dividends of AI recruitment, we should also be wary of its potential harms. The sober question that the wave of intelligent recruitment demands is this: is everything that is technically possible also legally permissible? AI recruitment is not a lawless zone; it must operate within the legal framework, a point easily overlooked amid rapid technological progress. Below, the author summarizes the legal risks that AI recruitment may pose under the current legal framework and explores ways to address them.

First, transparency. Many job seekers cannot understand why a low AI score eliminated them.
This reflects the algorithmic black-box problem. The mechanism by which an AI evaluates and decides on job seekers should be transparent and interpretable. Recruiters have a legal obligation to disclose the key elements of their algorithms and to explain them in terms job seekers can understand. For their part, job seekers have been granted a right to an explanation of algorithmic decisions, and they can take up this legal weapon to actively protect their own rights.

Second, fairness. AI may evaluate job seekers unfairly and even produce algorithmic discrimination. Such discrimination mainly arises from two sources: recruiters may deliberately build discriminatory parameters into an algorithm when designing it, so as to exclude certain groups of job seekers; or the training data or algorithm design may itself be scientifically flawed. The right to work is a fundamental human right, and AI recruitment determines whether job seekers can obtain employment, so recruitment algorithms are high-risk algorithms. Designers and users of AI recruitment systems must not only continually strengthen data training and refine their algorithms to make them more scientifically sound, but also bear a legal obligation to conduct periodic risk assessments of those algorithms, identifying and eliminating discriminatory risks through such assessments. Job seekers who have suffered unfair treatment can seek anti-discrimination remedies; existing anti-discrimination law still applies to AI recruitment.

Third, the protection of personal information and human dignity. Applying technologies such as facial recognition, speech analysis, and emotion recognition in AI interviews inevitably requires collecting a large amount of personal information from job seekers, including sensitive personal information such as facial and voiceprint data.
The collection and processing of this information is subject to strict legal restrictions, and recruiters must fulfill the obligations of personal information processors under the Personal Information Protection Law. First, job seekers' information may be collected only for clearly stated recruitment purposes and may not be used for anything else. Second, the principle of minimization applies: collect only as much personal information as the recruitment purpose requires, do not store it permanently, and destroy it within a reasonable time after recruitment concludes. Third, recruiters must inform job seekers about the collection of their personal information and obtain their consent. Fourth, they must carry out personal information protection risk assessments. Fifth, they must apply safeguards such as desensitization, de-identification, and anonymization to the information collected. Job seekers, in turn, have the statutory right to access, copy, and delete personal information collected during AI interviews.

Beyond personal information protection, AI recruitment may also implicate human dignity. Notably, under the recently enacted EU Artificial Intelligence Act, AI systems that analyze employees' emotions are prohibited as posing an unacceptable level of risk, yet emotion analysis is precisely the advanced capability many AI recruitment systems advertise. Although China has no legislation expressly prohibiting such systems, we should be cautious about the challenges AI recruitment may pose to personality rights.

Fourth, human supervision of AI. Having handed recruitment functions to AI, it is worth reflecting on what role humans should still play.
Because of AI's limitations, risks such as inaccurate algorithms and discrimination remain. Recruiters cannot simply stand by and watch; they should act as AI's supervisors. As the technology has advanced, AI's role in recruitment has gradually shifted from assisting humans toward independent, automated decision-making. Article 24 of the Personal Information Protection Law provides that where a decision is made solely through automated decision-making and has a significant impact on an individual's rights and interests, the individual has the right to refuse it. Automated recruitment decisions affect workers' right to employment and clearly fall within this provision, which means recruiters should manually review the automated decisions AI produces. AI recruitment is a product of technological progress that has improved efficiency, but it must operate within the boundaries set by law: the human recruiters behind it should remain its supervisors and the ultimate decision-makers. (New Society)
Editor: Xiong Dafei    Responsible editor: Li Xiang
Source: China.org.cn