Showing results for Attention
GitHub Repo https://github.com/jadore801120/attention-is-all-you-need-pytorch

jadore801120/attention-is-all-you-need-pytorch

A PyTorch implementation of the Transformer model in "Attention is All You Need".
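
For context, the core operation of the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal PyTorch sketch of that formula (illustrative only, not code from this repo):

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # similarity, scaled for stable softmax
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)            # attention distribution over keys
    return weights @ v                             # weighted sum of values
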
GitHub Repo https://github.com/philipperemy/keras-attention

philipperemy/keras-attention

Keras Attention Layer (Luong and Bahdanau scores).
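
The two scores differ in how a decoder query is compared against encoder keys: Luong's (2015) is multiplicative, Bahdanau's (2014) is additive. A minimal PyTorch sketch of both for a single unbatched query (hypothetical shapes, not this repo's Keras API):

import torch
import torch.nn as nn

d = 64
query = torch.randn(d)       # decoder state
keys = torch.randn(10, d)    # encoder states

# Luong (multiplicative): score(q, k) = q . k
luong_scores = keys @ query                                        # (10,)

# Bahdanau (additive): score(q, k) = v^T tanh(W1 q + W2 k)
W1 = nn.Linear(d, d, bias=False)
W2 = nn.Linear(d, d, bias=False)
v = nn.Linear(d, 1, bias=False)
bahdanau_scores = v(torch.tanh(W1(query) + W2(keys))).squeeze(-1)  # (10,)

# Either way, the scores become weights over the encoder states
weights = torch.softmax(luong_scores, dim=0)
context = weights @ keys                                           # (d,)
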
GitHub Repo https://github.com/fla-org/flash-linear-attention

fla-org/flash-linear-attention

🚀 Efficient implementations for emerging model architectures
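
Linear attention, the family this repo targets, replaces softmax(QK^T)V with a feature map phi so the product reassociates as phi(Q)(phi(K)^T V), cutting the cost from O(n^2) to O(n) in sequence length. A toy non-causal sketch using phi(x) = elu(x) + 1 (the choice in Katharopoulos et al., 2020; this is not the repo's optimized kernels):

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k: (n, d_k), v: (n, d_v); phi keeps the features positive
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.T @ v                            # (d_k, d_v), computed once: O(n)
    z = q @ k.sum(dim=0, keepdim=True).T    # (n, 1) normalizer
    return (q @ kv) / (z + eps)             # (n, d_v)
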
GitHub Repo https://github.com/Dao-AILab/flash-attention

Dao-AILab/flash-attention

Fast and memory-efficient exact attention
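
FlashAttention's central trick is to process keys and values in tiles and compute the softmax online with a running maximum and normalizer, so the full n x n attention matrix is never materialized. A toy single-query sketch of that recurrence (the real speedups come from the repo's fused CUDA kernels):

import torch

def online_softmax_attention(q, K, V, block=128):
    # q: (d,), K and V: (n, d); process keys/values block by block
    m = torch.tensor(float("-inf"))  # running max of scores
    l = torch.tensor(0.0)            # running softmax normalizer
    acc = torch.zeros_like(q)        # running weighted sum of values
    scale = q.size(-1) ** -0.5
    for i in range(0, K.size(0), block):
        s = (K[i:i + block] @ q) * scale   # scores for this block
        m_new = torch.maximum(m, s.max())
        c = torch.exp(m - m_new)           # rescale old state to the new max
        p = torch.exp(s - m_new)
        l = l * c + p.sum()
        acc = acc * c + p @ V[i:i + block]
        m = m_new
    return acc / l  # equals softmax((K @ q) * scale) @ V without storing all scores
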
GitHub Repo https://github.com/cmhungsteve/Awesome-Transformer-Attention

cmhungsteve/Awesome-Transformer-Attention

A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites.
GitHub Repo https://github.com/thu-ml/SageAttention

thu-ml/SageAttention

[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention achieving a 2-5x speedup over FlashAttention without degrading end-to-end metrics on language, image, and video models.
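
The speedup comes from running the attention matmuls in low precision. As a generic illustration of that idea only (SageAttention's actual kernels do considerably more, including smoothing K and quantizing per block, which this sketch omits):

import torch

def int8_quantize(x):
    # Symmetric per-tensor INT8 quantization: x ~= scale * q
    scale = x.abs().max() / 127.0
    q = (x / scale).round().clamp(-127, 127).to(torch.int8)
    return q, scale

def quantized_attention_scores(Q, K):
    # QK^T computed in INT8 with INT32 accumulation, then dequantized and scaled
    q8, sq = int8_quantize(Q)
    k8, sk = int8_quantize(K)
    s = q8.to(torch.int32) @ k8.to(torch.int32).T
    return s.float() * (sq * sk) / Q.size(-1) ** 0.5
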
GitHub Repo https://github.com/zhang0jhon/AttentionOCR

zhang0jhon/AttentionOCR

Scene text recognition
GitHub Repo https://github.com/xmu-xiaoma666/External-Attention-pytorch

xmu-xiaoma666/External-Attention-pytorch

🍀 PyTorch implementations of various attention mechanisms, MLPs, re-parameterization, and convolution modules, helpful for understanding the corresponding papers. ⭐⭐⭐
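
One representative mechanism collected here is external attention (Guo et al., 2021), which attends against two small learned memory units shared across all samples instead of keys and values computed from the input. A rough sketch of the idea with arbitrary sizes (the repo's modules are the reference versions):

import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    def __init__(self, d_model=512, s=64):
        super().__init__()
        self.mk = nn.Linear(d_model, s, bias=False)  # external key memory
        self.mv = nn.Linear(s, d_model, bias=False)  # external value memory

    def forward(self, x):
        # x: (batch, n_tokens, d_model)
        attn = torch.softmax(self.mk(x), dim=1)                # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)   # double normalization
        return self.mv(attn)                                   # (batch, n_tokens, d_model)
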
GitHub Repo https://github.com/AMLab-Amsterdam/AttentionDeepMIL

AMLab-Amsterdam/AttentionDeepMIL

Implementation of Attention-based Deep Multiple Instance Learning in PyTorch
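
In this approach (Ilse et al., 2018), a bag of instance embeddings is pooled with learned weights a_k = softmax_k(w^T tanh(V h_k)) and the bag representation is the weighted sum. A minimal sketch of the ungated pooling layer, with hypothetical dimensions:

import torch
import torch.nn as nn

class MILAttentionPooling(nn.Module):
    # Pools a bag of instance embeddings into a single bag embedding
    def __init__(self, in_dim=500, attn_dim=128):
        super().__init__()
        self.V = nn.Linear(in_dim, attn_dim)  # projects instances
        self.w = nn.Linear(attn_dim, 1)       # scores each instance

    def forward(self, h):
        # h: (num_instances, in_dim), one bag
        scores = self.w(torch.tanh(self.V(h)))  # (num_instances, 1)
        a = torch.softmax(scores, dim=0)        # attention over instances
        return (a * h).sum(dim=0)               # (in_dim,) bag embedding

bag = torch.randn(7, 500)        # a bag of 7 instances
z = MILAttentionPooling()(bag)   # bag-level representation
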
GitHub Repo https://github.com/bojone/attention

bojone/attention

Some attention implementations.