As AI learns to pass fakes off as real, it is time to draw the boundaries of "feeding" it training data
2024-11-28
Recently, a flood of parody videos featuring a well-known entrepreneur has appeared on short-video platforms. In the clips, the "entrepreneur" cracks jokes about trending topics such as traffic jams, holiday schedule swaps, and video games, even slipping in vulgar language, which sparked heated discussion. In fact, the voiceovers were generated by netizens who fed recordings of the entrepreneur's real voice into AI software; the results were realistic enough that many people believed them to be genuine.

In only a few years, AI-generated content has gone from "obviously fake at a glance" to "deeply convincing forgery." Not long ago, apps that turned uploaded personal photos into themed posters or videos were all the rage online, raising expectations for AI in daily life. The comic face-swap video "Ant Hey Hey," for instance, once dominated trending lists. AI is also applied in fields such as weather forecasting, news broadcasting, healthcare, and mobile maps, making life more convenient.

Yet new problems have emerged. One person used their own photos together with footage of a female celebrity to create an intimate video, sparking controversy; a well-known office software product drew opposition for using users' documents as AI training text; some people's voices and faces have been learned by AI and used to generate false information for telecom fraud and rumor-mongering. While AI offers people new experiences, the accompanying risks are mounting, not only infringing on some individuals' legitimate rights and interests but also disrupting the order of cyberspace.

For example, misuse of AI voice-generation technology may infringe on an individual's personality rights. A natural person's voice is unique and distinctive, and it is an important component of their personality.
In April this year, the Beijing Internet Court ruled on the country's first case of AI-generated voice infringement, ordering the infringer to pay the plaintiff 250,000 yuan in compensation. AI-generated fake audio and video can also easily mislead the public and cause real harm, undermining the credibility of online information and the order of cyberspace. In September this year, an audio clip of the head of a certain online platform making "drunken" remarks circulated online and embroiled the platform in controversy; a police investigation later found that the audio had been forged with AI.

Regulating the development of AI technology is a widely shared public demand. Various sectors of society have offered suggestions: improve laws and regulations, clarify the permitted scope of AI generation technology, and strengthen crackdowns and penalties for abuse of the technology; technology developers should work to identify and filter false content, establishing effective screening and auditing mechanisms to prevent the technology from being misused. These after-the-fact methods of "chasing AI" are essential, but given the logic of how AI content is generated, effort is also needed at the front end: setting necessary rules for the materials "fed" to train large AI models.

AI's self-learning requires a large volume of "learning material" as its foundation, including sound, text, grammar, images, and heat-map distributions. The richer these materials, the more the AI "eats," and the more natural and realistic its generated content becomes. At present, however, netizens who upload personal biometric information such as voices and faces to AI software are almost entirely unprotected.
In addition, AI still lacks clear standards for obtaining training materials. Can materials a user uploads for a one-off entertainment activity be stored and reused repeatedly? How should the related information be used? All of this should be clearly stated. Take excessive permission requests by mobile apps as an example: in recent years, many app developers and operators have been criticized for over-collecting user data on the back end in pursuit of richer user profiles. In early 2021, the Ministry of Industry and Information Technology issued a document clarifying two basic principles of personal information protection, "informed consent" and "minimum necessity": when an app processes personal information, the user must give voluntary and explicit consent, and the app must not process personal information beyond the scope of that consent or unrelated to the service scenario.

Similarly, AI self-learning should be given clear and specific boundaries. For instance, rules of "informed consent" and "minimum necessity" should govern the purpose, storage, and use of information uploaded by users. In short, regulating the "feeding" of materials at the source is crucial; only then can AI learning and its applications develop in a more positive and beneficial direction. AI technology is booming, and the future holds boundless opportunities and possibilities. Precisely for this reason, ensuring that technology is not abused and trust is not lost matters as much as the development of the technology itself. (New Society)
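The "informed consent" and "minimum necessity" principles described above can be sketched in code as a hypothetical ingestion gate for user uploads. This is purely an illustrative assumption of how a platform might enforce the two rules; every name here (ConsentRecord, ingest_upload, the purpose labels) is invented for the sketch and does not refer to any real platform's API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: gate user uploads behind explicit consent
# ("informed consent") and strip fields unrelated to the stated
# service scenario ("minimum necessity"). All names are illustrative.

@dataclass
class ConsentRecord:
    user_id: str
    allowed_purposes: set = field(default_factory=set)  # e.g. {"entertainment"}
    allow_model_training: bool = False  # may the upload be reused as training data?

def ingest_upload(consent: ConsentRecord, data_fields: dict, purpose: str) -> dict:
    """Accept an upload only within the user's consent, keeping only
    the fields needed for the stated purpose."""
    # Informed consent: reject any use beyond what the user agreed to.
    if purpose not in consent.allowed_purposes:
        raise PermissionError(
            f"purpose '{purpose}' was not consented to by {consent.user_id}")
    if purpose == "model_training" and not consent.allow_model_training:
        raise PermissionError("upload may not be reused as training material")
    # Minimum necessity: drop fields unrelated to the service scenario.
    needed = {"entertainment": {"photo"}, "model_training": {"photo", "voice"}}
    keep = needed.get(purpose, set())
    return {k: v for k, v in data_fields.items() if k in keep}
```

Under this sketch, a photo uploaded for a one-off face-swap game passes through with unrelated fields (say, a contacts list) stripped, while any attempt to reuse the same upload for model training without separate consent raises an error, which is exactly the boundary the article argues should be stated explicitly.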
Editor: Rina | Responsible editor: Lily
Special statement: If any images or text reproduced or quoted on this site infringe your legitimate rights and interests, please contact this site, and we will promptly correct or delete them. For copyright issues and site cooperation, please contact the Outlook New Era email: lwxsd@liaowanghn.com