Billion-Scale Pretraining with Vision Transformers for Multi-Task Visual Representations.
2022
Josh Beal
Hao-Yu Wu
Dong Huk Park
Andrew Zhai
Dmitry Kislyuk