https://link.medium.com/gGrhTcYayeb