Zidong Taichu 2.0 Release: Exploring Another Leap Forward in General Artificial Intelligence

2023-06-26

Recently, at the Artificial Intelligence Framework Ecological Summit 2023, Xu Bo, director of the Institute of Automation of the Chinese Academy of Sciences, officially released the "Zidong Taichu" full-modal large model, demonstrating in real time its capabilities in music understanding and generation, 3D scene navigation, signal understanding, and multimodal dialogue. The full-modal "Zidong Taichu" is version 2.0, an upgrade of the hundred-billion-parameter multimodal "Zidong Taichu" 1.0. To the original three modalities of speech, image, and text, it adds video, signal, 3D point cloud, and other modal data; it breaks through key technologies such as cognition-enhanced multimodal association; and it offers full-modal understanding, generation, and association capabilities.

Moving from multimodal to full-modal cognitive ability is crucial. Starting in 2019, the Institute of Automation of the Chinese Academy of Sciences, building on its research and application of single-modal large models for speech, text, and image, turned its focus to multimodal large models and began joint research. In 2021, it officially released the hundred-billion-parameter multimodal large model "Zidong Taichu" 1.0, advancing artificial intelligence from "one specialty, one capability" toward "multiple specialties, multiple capabilities."

Xu Bo noted that humans learn and interact multimodally, and that multimodal capability is necessary to reach a higher level of intelligence; the "Zidong Taichu" large model therefore followed a multimodal technology roadmap from the beginning. "In the course of applying the 'Zidong Taichu' 1.0 model ever more widely, we discovered many new requirements."
For example, industrial intelligence involves many parameters to be processed, such as temperature, humidity, pressure, and liquid level, while medical scenarios involve large amounts of structured physical-examination data and heterogeneous medical imaging data. "In analyzing these structured and unstructured data, we realized that only by raising our handling of them from collection, statistics, and analysis to genuine understanding can we truly move toward an intelligent society, and understand and change the world on a broader and higher dimension," Xu Bo said.

Therefore, seizing on the crucial role of cognitive ability, the "Zidong Taichu" 2.0 full-modal large model has been comprehensively upgraded. In its technical architecture it achieves full-modal open access to structured and unstructured data, breaks through multimodal grouped cognitive encoding-decoding technology and cognition-enhanced multimodal association technology, and significantly improves multimodal cognitive ability.

At the conference, whose theme was integrating resources to explore the industrialization path of general artificial intelligence, Xu Bo demonstrated the "Zidong Taichu" full-modal cognitive model: given the "Moonlight Sonata," it recounted Beethoven's story; it achieved precise localization in three-dimensional scenes; and it completed scene analysis by combining images and sound. Compared with the "Zidong Taichu" 1.0 model, version 2.0 focuses on improving decision-making and judgment, achieving a leap from perception and cognition to decision-making. This means that in practical application scenarios, it will be able to create greater value for industry.
Regarding current industry applications of the "Zidong Taichu" full-modal large model, he mentioned that a series of leading, exemplary applications have been launched in fields such as neurosurgical navigation, legal consultation, multimodal medical differential diagnosis, and traffic-violation image analysis.

Editor: XiaoWanNing    Responsible editor: YingLing

Source: Xinhua News Agency
