Jishi Editor's Note
We surveyed the 100 most-cited papers from the past three years, and here is what we found...

1. AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models
Paper link: https://academic.oup.com/nar/article/50/D1/D439/6430488
Institution: DeepMind
Citations: 1372
Topic: Using AlphaFold to massively expand the coverage of protein structure databases.
2. ColabFold: making protein folding accessible to all
Paper link: https://www.nature.com/articles/s41592-022-01488-1
Citations: 1162
Topic: An open-source and efficient protein-folding model.
3. Hierarchical Text-Conditional Image Generation with CLIP Latents
Paper link: https://arxiv.org/abs/2204.06125
Institution: OpenAI
Citations: 718
Topic: DALL·E 2, text-conditional image generation from complex prompts that left most observers in awe
4. A ConvNet for the 2020s
Paper link: https://arxiv.org/abs/2201.03545
Institution: Meta, UC Berkeley
Citations: 690
Topic: A successful modernization of CNNs at a time when Transformers were booming in computer vision
5. PaLM: Scaling Language Modeling with Pathways
Paper link: https://arxiv.org/abs/2204.02311
Institution: Google
Citations: 452
Topic: Google's mammoth 540B-parameter large language model, its new Pathways MLOps infrastructure, and how it performs
6. Highly Accurate Protein Structure Prediction with AlphaFold
Paper link: https://www.nature.com/articles/s41586-021-03819-2
Institution: DeepMind
Citations: 8965
Topic: AlphaFold, a breakthrough in protein structure prediction using deep learning
7. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Paper link: https://arxiv.org/abs/2103.14030
Institution: Microsoft
Citations: 4810
Topic: A robust, hierarchical variant of Transformers for vision
8. Learning Transferable Visual Models From Natural Language Supervision
Paper link: https://arxiv.org/abs/2103.00020
Institution: OpenAI
Citations: 3204
Topic: CLIP, which learns joint image-text representations from image-text pairs at scale in a self-supervised fashion
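The core of CLIP's training is a symmetric contrastive objective: within a batch, matching (image, text) pairs are positives and every other pairing is a negative. Below is a minimal NumPy sketch of that loss for illustration only; the function name, shapes, and default temperature are my own choices, not taken from the paper's code.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired embeddings.

    Sketch of the CLIP-style objective: row i of image_emb and row i of
    text_emb are a positive pair; all other in-batch pairings are negatives.
    """
    # L2-normalize so dot products become cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # cross-entropy with the diagonal (matching pair) as the label
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

With well-aligned pairs the diagonal dominates and the loss approaches zero; with mismatched embeddings it stays near the log of the batch size.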
9. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Paper link: https://dl.acm.org/doi/10.1145/3442188.3445922
Institution: U. Washington, Black in AI, The Aether
Citations: 1266
Topic: A famous position paper, highly critical of the trend toward ever-larger language models, highlighting their limitations and dangers
10. Emerging Properties in Self-Supervised Vision Transformers
Paper link: https://arxiv.org/pdf/2104.14294.pdf
Institution: Meta
Citations: 1219
Topic: DINO, showing how self-supervision on images leads to the emergence of a kind of proto-object segmentation in Transformers
11. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Paper link: https://arxiv.org/abs/2010.11929
Institution: Google
Citations: 11914
Topic: The first work to show that a plain Transformer can excel in computer vision
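The key move that lets a plain Transformer handle images is treating an image as a sequence: split it into fixed-size patches, flatten each patch, and linearly project it into a token embedding. A minimal NumPy sketch of that patchification step follows (function name and shapes are my own; the real ViT additionally prepends a class token and adds position embeddings before the Transformer encoder):

```python
import numpy as np

def image_to_patch_tokens(image, patch_size, proj):
    """Split an (H, W, C) image into non-overlapping patches and project
    each flattened patch with a (patch_size*patch_size*C, D) weight matrix,
    yielding a (num_patches, D) token sequence, ViT-style."""
    H, W, C = image.shape
    p = patch_size
    assert H % p == 0 and W % p == 0, "image dims must divide by patch size"
    # regroup pixels into a (H//p, W//p) grid of (p, p, C) patches
    patches = image.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    # flatten each patch into a vector and stack into a sequence
    tokens = patches.reshape(-1, p * p * C)
    return tokens @ proj  # (num_patches, D)
```

An 8x8 RGB image with 4x4 patches, for example, becomes a sequence of 4 tokens, each a linear projection of 48 pixel values.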
12. Language Models are Few-Shot Learners
Paper link: https://arxiv.org/abs/2005.14165
Institution: OpenAI
Citations: 8070
Topic: GPT-3; at this point, this paper needs no further introduction
13. YOLOv4: Optimal Speed and Accuracy of Object Detection
Paper link: https://arxiv.org/abs/2004.10934
Institution: Academia Sinica, Taiwan
Citations: 8014
Topic: Robust and fast object detection sells like hotcakes
14. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Paper link: https://arxiv.org/abs/1910.10683
Institution: Google
Citations: 5906
Topic: A rigorous study of transfer learning with Transformers, resulting in the famous T5
15. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Paper link: https://arxiv.org/abs/2006.07733
Institution: DeepMind, Imperial College
Citations: 2873
Topic: BYOL, showing that negative pairs are not even necessary for representation learning
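BYOL avoids negatives entirely: an online network predicts the output of a slowly moving target network on another augmented view of the same image, and the target's weights track the online network by exponential moving average. A minimal NumPy sketch of the two core pieces, for illustration only (names and the default decay rate are my own, not the paper's code):

```python
import numpy as np

def byol_loss(online_pred, target_proj):
    """BYOL's negative-free objective: mean squared error between
    L2-normalized online predictions and target projections of two
    augmented views of the same images. No negative pairs anywhere."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    # ||p - z||^2 for unit vectors simplifies to 2 - 2 * cos(p, z)
    return (2 - 2 * (p * z).sum(axis=1)).mean()

def ema_update(target_w, online_w, tau=0.99):
    """Target weights follow the online weights as an exponential moving
    average; together with the predictor head, this prevents collapse."""
    return tau * target_w + (1 - tau) * online_w
```

Perfectly aligned views give a loss of 0 and opposed ones give 4, the two extremes of the 2 - 2cos range.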


