Jai Gupta
Affiliation: Google
Research topics: Language model, Computer science, de facto, context, Machine learning
Papers: 3 | Citations: 16 | KQI: 0.00
Papers (3)
Are Pretrained Convolutions Better than Pretrained Transformers? (2021)
ACL | Meeting of the Association for Computational Linguistics
Authors: Yi Tay, Mostafa Dehghani, Jai Gupta, Vamsi Aribandi, Dara Bahri, Zhen Qin, Donald Metzler
Citations: 1
OmniNet: Omnidirectional Representations from Transformers (2021)
ICML | International Conference on Machine Learning
Authors: Yi Tay, Mostafa Dehghani, Vamsi Aribandi, Jai Gupta, Philip Pham, Zhen Qin, Dara Bahri, Da-Cheng Juan, Donald Metzler
Citations: 1
Are Pre-trained Convolutions Better than Pre-trained Transformers? (2021)
arXiv: Computation and Language
Authors: Yi Tay, Mostafa Dehghani, Jai Gupta, Dara Bahri, Vamsi Aribandi, Zhen Qin, Donald Metzler
Citations: 14