GNN) GAT Layer Implementation
GAT aggregates neighbor features using an attention aggregator, and its formulation is as follows. I followed an existing GAT Layer implementation and ran it myself. Equation (1) is a linear transformation of the lower-layer embedding $h_i^{(l)}$, where $W^{(l)}$ is its learnable weight matrix. This transformation is useful for achieving sufficient expressive power to turn the input features (one-hot vectors in our example) into high-level, dense features. Equation (2) computes a pair-wise un-normalized attention score between two neighbors.
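As a sketch of the steps above, here is a minimal single-head GAT forward pass in NumPy on a dense adjacency matrix. It assumes the standard GAT formulation: after the linear transform of Eq. (1) and the pair-wise LeakyReLU attention score of Eq. (2), scores are softmax-normalized over each node's neighborhood and used to aggregate neighbor embeddings. The function and variable names (`gat_layer`, `W`, `a`) are illustrative, not from the original implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(h, adj, W, a):
    """One single-head GAT forward pass on a dense adjacency (a sketch).

    h:   (N, F_in)   input node features
    adj: (N, N)      adjacency matrix with self-loops (nonzero = edge)
    W:   (F_in, F_out)  learnable weight matrix of Eq. (1)
    a:   (2 * F_out,)   learnable attention vector of Eq. (2)
    """
    z = h @ W                                  # Eq. (1): linear transform
    N = z.shape[0]
    # Eq. (2): e_ij = LeakyReLU(a^T [z_i || z_j]) for every node pair
    zi = np.repeat(z[:, None, :], N, axis=1)   # (N, N, F_out), row index = i
    zj = np.repeat(z[None, :, :], N, axis=0)   # (N, N, F_out), col index = j
    e = leaky_relu(np.concatenate([zi, zj], axis=-1) @ a)  # (N, N)
    # Softmax the scores over each node's neighborhood only
    e = np.where(adj > 0, e, -np.inf)          # mask non-edges
    e = e - e.max(axis=1, keepdims=True)       # numerical stability
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    # Aggregate neighbor embeddings weighted by the attention coefficients
    return alpha @ z
```

With only self-loops in `adj`, each node attends solely to itself, so the output reduces to the plain linear transform `h @ W`, which is a quick sanity check of the attention masking.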
2021.07.03