AI is imitating the human brain! Ten top papers from 2021: the brain also learns "unsupervised"

2022-01-04

Will neuroscience become the key to the "hyper-evolution" of artificial intelligence? If the brain can be simulated faithfully enough, neural networks may obtain similar, or even the same, intelligence. Recently, neuroscientist Patrick Mineault summarized and reviewed the unsupervised-learning brain models of 2021.

Today, while deep learning and deep neural networks are popular all over the world, the field remains indebted to the study of the brain. Although we have not yet worked out exactly how the brain operates, its structure demonstrably produces "intelligence", and a Matrix that can truly learn like humans may not be far away.

One of the most convincing findings in neuro-AI research is that artificial neural networks trained on ecologically relevant tasks end up matching both individual neurons and population-level signals in the brain. The classic example is the ventral stream: DNNs trained for object recognition on ImageNet. Supervised, task-optimized networks link two important forms of explanation: ecological relevance and the interpretability of neural activity.

The paper "What can 5.17 billion regression fits tell us about artificial models of the human visual system?" addresses the question of what these brain regions are for. However, as Jess Thompson points out, this is not the only form of explanation. In particular, task-optimized networks are generally considered biologically implausible, because conventional ImageNet training uses about one million labeled images. For an infant to match that training regime, it would have to receive a new supervised label roughly every 5 seconds: the equivalent of a parent pointing at a duck and saying "duck" to the child three hours a day for more than a year.
What about non-human primates and mice? The search for biologically plausible neural networks that match the brain therefore continues.

Jess Thompson's neuro-AI hypothesis space

What are the self-supervised training methods?

This year has seen great progress in unsupervised training, with self-supervised methods gradually displacing supervised ones in this line of work. The main families of methods are:

· Unsupervised learning aims to model the data distribution. One of the most commonly used techniques here is the variational autoencoder (VAE).
· Self-supervised training aims to find a good representation of the data by solving a proxy (pretext) task. Self-supervised training is now nearly universal in language models such as BERT and GPT-3.
· Contrastive learning is a special form of self-supervised learning whose proxy task is to predict whether a sample is a positive or a negative (a distractor). There are many flavors of contrastive learning: MoCo, InfoNCE, SimCLR, CPC, and so on. There are also closely related non-contrastive methods that dispense with negative samples, including BYOL and Barlow Twins.
· Multimodal learning is another special form of self-supervised training. Its goal is to predict a common subspace shared by two different modalities (such as vision, text, or audio). CLIP is a typical example.

All of these methods let us learn representations without supervision, and combining self-supervised and unsupervised methods is biologically more plausible than relying on supervision alone. With this in mind, Mineault reviewed this year's major conferences (including NeurIPS and CCN) and preprints, and compiled a summary of the unsupervised-learning brain models.

Unsupervised neural network models of the ventral visual stream

This paper was recently published in PNAS and has already been cited more than 60 times.
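The contrastive objective described above can be sketched in a few lines. The following toy InfoNCE loss is a minimal sketch in plain Python; the embeddings and temperature are made up for illustration. It scores how well an anchor embedding picks out its positive pair among negatives:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: cross-entropy of classifying the positive among
    the positive plus the negatives, using scaled similarities."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)                       # stabilize the softmax
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))  # positive sits at index 0

# Two nearby "views" of the same sample should yield a low loss...
loss_easy = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.0], [0.0, -1.0]])
# ...while a mismatched positive yields a much higher loss.
loss_hard = info_nce([1.0, 0.0], [-1.0, 0.0], [[0.9, 0.1], [0.0, -1.0]])
```

In practice the embeddings come from an encoder network and the positives from data augmentations of the same image, but the loss itself is exactly this softmax classification.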
Paper address: https://www.pnas.org/content/118/3/e2014196118

The authors found that the representations learned by unsupervised and self-supervised methods are consistent with those implemented by ventral-stream neurons (V1, V4, IT). The abstract points out that primates show remarkable object recognition. This ability is realized through the ventral visual stream, a hierarchy of interconnected brain areas. The best quantitative models of these areas are deep neural networks trained with human-annotated labels. However, these models require far more labels than an infant ever receives, so they cannot serve as models of ventral-stream development. Recent progress in unsupervised learning has largely closed this gap: the latest unsupervised networks achieve prediction accuracy in the ventral stream that equals or exceeds today's best supervised models. These results support using unsupervised learning to model brain systems and provide a powerful alternative computational theory of sensory learning.

In particular, the authors found that SimCLR and other contrastive methods explain ventral neurons almost as well as supervised methods do. This paper is strong evidence that labels are not necessary when training models of the brain.

Beyond category-supervision: Computational support for domain-general pressures guiding human visual system representation

Paper address: https://www.biorxiv.org/content/10.1101/2020.06.15.153247v3

Konkle and Alvarez asked a question similar to that of Zhuang et al.: can the ventral stream be explained by a network trained without supervision? They assessed this using functional magnetic resonance imaging (fMRI) rather than recordings of individual neurons. Their results broadly agree with Zhuang's paper, with their own distinctive angle: self-supervised models explain the fMRI data about as well as supervised ones.
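The "prediction accuracy" used to compare models against neural data typically comes from fitting model features to recorded responses and correlating predictions with measured activity. A stripped-down sketch of that scoring idea, in plain Python with a simulated neuron standing in for real recordings (the feature values and noise level are invented for illustration):

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Hypothetical model feature, one value per stimulus.
feature = [random.gauss(0.0, 1.0) for _ in range(200)]
# Simulated neuron: driven by the feature plus recording noise.
neuron = [2.0 * f + random.gauss(0.0, 0.5) for f in feature]
# "Neural predictivity" of this feature for this neuron.
score = pearson(feature, neuron)  # strongly correlated despite the noise
```

Real benchmarks fit many features to many neurons with regularized regression and cross-validation, but the reported number is ultimately a correlation of this kind.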
Your head is there to move you around: Goal-driven models of the primate dorsal pathway

Paper address: https://your-head-is-there-to-move-you-around.netlify.app/

This paper was written by neuroscientist Patrick Mineault and was published at NeurIPS 2021. As discussed above, ventral neurons are selective for shape. But the output of visual cortex feeds two pathways: the ventral stream and the dorsal stream. What is the dorsal stream for? By comparing many self-supervised 3D networks against different dorsal-stream areas, the authors found that these networks could not explain the responses of single neurons in non-human primates.

Mineault therefore devised a proxy task: a creature moving through the world must estimate its own motion parameters from the image patterns falling on its retina. The resulting network looks like the dorsal stream, both qualitatively and quantitatively. As implemented, the model is trained with supervision; from the agent's perspective, however, this is self-supervised multimodal learning: the agent learns to predict its self-motion parameters (vestibular signals, efference copy) from another modality (vision), which may well be biologically plausible.

The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning

Project home page: https://ventral-dorsal-model.netlify.app/

Both of these 2021 papers were awarded NeurIPS spotlights. Bakhtiari notes that mammalian visual systems have dorsal and ventral streams, whether in humans, non-human primates, or mice. Can a single artificial neural network account for both? By training a contrastive predictive coding (CPC) network on movie clips, Shahab Bakhtiari found that if the network contains two separate parallel pathways, it spontaneously forms dorsal-like and ventral-like streams.
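Mineault's proxy task, estimating self-motion from retinal input, can be illustrated with a toy optic-flow example. Assuming a pure forward translation that produces a radial flow field around a known focus of expansion (a deliberate simplification; the actual model is a deep 3D network trained on rendered sequences), the speed parameter can be recovered by least squares:

```python
def radial_flow(points, foe, speed):
    """Flow field of pure forward translation: vectors point away
    from the focus of expansion (FOE), scaled by eccentricity."""
    return [((x - foe[0]) * speed, (y - foe[1]) * speed)
            for x, y in points]

def estimate_speed(points, flows, foe):
    """Least-squares estimate of the speed parameter by projecting
    each flow vector onto its radial direction from the FOE."""
    num = den = 0.0
    for (x, y), (u, v) in zip(points, flows):
        dx, dy = x - foe[0], y - foe[1]
        num += dx * u + dy * v
        den += dx * dx + dy * dy
    return num / den

# A grid of retinal positions and the flow a speed of 1.5 induces.
points = [(x * 0.1, y * 0.1) for x in range(-5, 6) for y in range(-5, 6)]
flows = radial_flow(points, foe=(0.0, 0.0), speed=1.5)
recovered = estimate_speed(points, flows, (0.0, 0.0))
```

The point of the toy is only that self-motion parameters are recoverable from the flow pattern on the retina, which is what makes them a usable self-supervised training signal.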
The dorsal pathway matched the dorsal areas of the mouse brain well, while the ventral pathway matched the mouse's ventral stream. A network trained with supervision, or a network with only one pathway, cannot match the mouse brain.

Shallow unsupervised models

Researchers recently found that shallow unsupervised models best predict the neural responses of mouse visual cortex.

Paper address: https://www.biorxiv.org/content/10.1101/2021.06.16.448730v2.full.pdf

The 2021 paper by Nayebi et al. shows that while deep neural networks are excellent models of primate visual cortex, they are much less applicable to mice. The authors took mouse visual cortex data (recorded with static images) and compared it against supervised and self-supervised networks with a range of architectures. An interesting finding is that shallow networks with parallel branches explain the mouse data best, which corroborates Shahab's findings. The Nayebi team argues that the mouse's visual brain is a shallow "general-purpose" visual machine, competent at a variety of tasks, unlike the deeper networks of the primate brain, which are highly proficient at one particular task.

The Conwell team published another paper on self-supervised learning and mouse visual cortex at NeurIPS 2021, reaching the same conclusion as the previous two papers.

Paper address: https://proceedings.neurips.cc/paper/2021/file/2c29d89cc56cdb191c60db2f0bae796b-Paper.pdf

Beyond humans

The NeurIPS 2021 paper by Geirhos et al. shows that humans are very good at classifying images under distortion (noise, contrast changes, rotations, and so on).

Paper address: https://arxiv.org/pdf/2106.07411.pdf

In this paper, they found that the robustness of the new self-supervised and multimodal models on image classification tasks is now comparable to that of humans.
An important factor behind this is how much data the network is trained on: models trained with more data are more robust. The new models are less sensitive to texture and more sensitive to shape, which means they seem to take fewer shortcuts. Of course, they still make obvious mistakes.

Multimodal neural networks

Paper address: https://openreview.net/pdf?id=6dymbuga7nL

At the SVRHM workshop in 2021, the Choksi team noted that the human hippocampus contains multimodal "concept cells" (such as the famous Jennifer Aniston cells) that respond to either a textual or a pictorial representation of a concept. Interestingly, CLIP does the same. In this paper, the authors use published fMRI data to show that multimodal networks, including CLIP, best explain the hippocampus data.

Unsupervised deep learning

In a paper published in Nature Human Behaviour, Storrs et al. describe using unsupervised learning to predict human perception of gloss. They trained a PixelVAE on a set of textures and looked for correspondences between the PixelVAE and the way humans perceive object surfaces. They found that the VAE naturally disentangles different texture components, in close agreement with human perception. In addition, they found that supervised networks did not perform well on this task.

Paper address:
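The VAE machinery behind models like the PixelVAE used by Storrs et al. rests on two pieces: the reparameterization trick, which keeps the sampling step differentiable, and a KL term that pulls the latent posterior toward a standard normal prior. A minimal single-dimension sketch in plain Python (not the authors' code; the values are for illustration):

```python
import math
import random

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1). Writing the
    sample this way keeps it differentiable in mu and log_var."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)) for one latent dim."""
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

rng = random.Random(0)
z = reparameterize(0.0, 0.0, rng)      # a draw from the unit Gaussian
kl_zero = kl_to_standard_normal(0.0, 0.0)  # matched prior costs nothing
kl_pos = kl_to_standard_normal(1.0, 0.0)   # shifted posterior is penalized
```

The full training objective adds a reconstruction term (for a PixelVAE, an autoregressive pixel likelihood) to the summed KL over latent dimensions; it is this pressure on the latent space that produces the disentangled texture factors the paper reports.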

Editor: Li Ling    Responsible editor: Chen Jie

Source: NetEase News
