
Sci-Tech

Global debut! Wuwen Xinqiong releases a thousand-card-scale heterogeneous-chip hybrid training platform, significantly improving computing power utilization

2024-07-08   

According to incomplete statistics, more than 100 computing power clusters at the thousand-card scale have been announced in China, and the vast majority of them have already moved, or are moving, from homogeneous to heterogeneous hardware. Yet "ecosystem silos" between chip vendors have prevented most enterprises and developers from effectively integrating and utilizing this capacity, so that even abundant hardware goes to waste. These silos have become not only the biggest obstacle to building AI-native infrastructure, but also a major cause of the "computing power shortage" currently facing the large model industry.

To build AI-native infrastructure adapted to a multi-model, multi-chip landscape, Wuwen Xinqiong's underlying solution is to provide a user-friendly computing platform that efficiently integrates heterogeneous computing resources, together with middleware supporting joint software-hardware optimization and acceleration, so that heterogeneous chips can genuinely be turned into large-scale computing power. Behind this series of research and productization advances lies the strong track record of Wuwen Xinqiong's R&D team in heterogeneous chip computing optimization and cluster system design.

Recently, a joint research team from Wuwen Xinqiong, Tsinghua University, and Shanghai Jiao Tong University released HETHUB, a heterogeneous distributed hybrid training system for large-scale models. This is the first time in the industry that mixed training across six different brands of chips has been achieved, and with a high degree of engineering completeness. Xia Lixue explained that the original motivation for engineering this technology was to integrate more heterogeneous computing power, keep pushing the upper limit of large model capabilities, and continuously reduce the deployment cost of large model applications by opening up the heterogeneous chip ecosystem.
Building AI-native infrastructure around an "M x N" ecosystem, so that no AI computing power in the world is hard to use. The large model industry is now entering the stage of large-scale industrial deployment, and diverse application scenarios are creating increasingly urgent demand for large model training. The huge market opportunity has rapidly multiplied the number of players in both the foundation model and computing chip industries. Building AI-native infrastructure in the era of large models not only gives AI developers a more universal, efficient, and convenient R&D environment, but also serves as a key cornerstone for effectively integrating computing resources and supporting the sustainable development of the AI industry.

Wuwen Xinqiong possesses top-tier AI computing optimization and computing power solution capabilities, as well as a forward-looking judgment on the industry landscape of "M models" and "N chips". It has taken the lead in building an "M x N" middle-layer ecosystem, achieving efficient and unified deployment of multiple large model algorithms on multiple chips. As of now, Infini-AI has supported more than 30 models, including the Qwen2, GLM4, Llama3, Gemma, Yi, Baichuan2, and ChatGLM3 series, as well as more than 10 kinds of computing cards, including AMD, Huawei Ascend, Biren, Cambricon, Enflame, Hygon, Iluvatar CoreX, MetaX, Moore Threads, and NVIDIA. Wuwen Xinqiong is committed to becoming a leader in AI-native infrastructure. Going forward, it will continue to push the technical limits of heterogeneous computing power optimization and cluster system design, keep expanding its upstream and downstream ecosystem partners at both the model layer and the chip layer, and jointly realize the effective connection, utilization, and integration of "M x N".
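To make the "M x N" idea concrete, here is a minimal, purely illustrative Python sketch (not Wuwen Xinqiong's actual API; all class and function names are hypothetical). The point of such a middle layer is that instead of writing M x N separate model-chip integrations, each model is expressed against a common operator interface and each chip implements that interface once, reducing the integration work to M + N adapters.

```python
# Hypothetical sketch of an "MxN" middle layer: all names are invented for
# illustration and do not correspond to any real Wuwen Xinqiong interface.

class ChipBackend:
    """One adapter per chip: implements a shared op set once."""
    def __init__(self, name):
        self.name = name

    def matmul(self, a, b):
        # Stand-in for a vendor-specific kernel; here a plain Python matmul.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

class Model:
    """One adapter per model: written only against the shared op set."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights

    def forward(self, backend, inputs):
        # The model never calls vendor APIs directly, only the common interface.
        return backend.matmul(inputs, self.weights)

# M models and N chips interoperate through the shared interface:
models = [Model("model-a", [[1, 0], [0, 1]]),
          Model("model-b", [[2, 0], [0, 2]])]
chips = [ChipBackend("chip-x"), ChipBackend("chip-y")]

# Every (model, chip) pair works without any pair-specific glue code.
results = {(m.name, c.name): m.forward(c, [[3, 4]])
           for m in models for c in chips}
```

Under this design, adding an (M+1)-th model or an (N+1)-th chip requires only one new adapter rather than a full row or column of integrations, which is the economic argument the article attributes to the "M x N" middle layer.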
We will build AI-native infrastructure that truly adapts to multiple models and multiple chips, so that no AI computing power in the world is hard to use, and help drive the application of large models across industries. "Raising the technology ceiling is not at odds with deploying and diffusing the technology; it depends on how determined we are to approach this technology," Xia Lixue said. Talking today about cutting the cost of large models ten-thousand-fold is like talking thirty years ago about bringing electricity to every household. Excellent infrastructure is exactly this kind of "magic": once the marginal cost drops past a critical point, far more people can embrace the new technology. (Lai Xin She)

Editor: Xiong Dafei    Responsible editor: Li Xiang

Source: WHB

Special statement: if any pictures or texts reproduced or quoted on this site infringe your legitimate rights and interests, please contact this site, and it will promptly correct or delete them. For copyright issues and website cooperation, please contact the Outlook New Era email: lwxsd@liaowanghn.com
