Moozonian

arxiv.org
arxiv.org › abs › 2105.13677v5
ResT: An Efficient Transformer for Visual Recognition
This paper presents an efficient multi-scale vision Transformer, called ResT, that capably serves as a general-purpose backbone for image recognition. Unlike existing Transformer methods, which employ...
arxiv.org
arxiv.org › abs › 2305.07270v4
SSD-MonoDETR: Supervised Scale-aware Deformable Transformer for Monocular 3D Object Detection
Transformer-based methods have demonstrated superior performance for monocular 3D object detection recently, which aims at predicting 3D attributes from a single 2D image. Most existing transformer-ba...
www.reddit.com
reddit.com › r › Local...r_v4_novel_on_log_n ›
Wave Field Transformer V4 — Novel O(n log n) attention architecture, 825M model trained from scratch on 1.33B tokens. Weights on HuggingFace.
Hey everyone, I've been building a new transformer architecture from scratch called Wave Field Transformer. Instead of standard O(n²) dot-product attention, it uses FFT-based wave interferenc...
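The snippet's core idea, replacing O(n²) dot-product attention with O(n log n) FFT-based mixing, can be illustrated with a minimal sketch. This is a generic FFT token-mixing layer in the spirit of the claim (closer to FNet-style mixing), not the actual Wave Field Transformer implementation, whose details the snippet does not give:

```python
import numpy as np

def fft_mix(x: np.ndarray) -> np.ndarray:
    """Global token mixing via a 2D FFT.

    x: (seq_len, d_model) real-valued token embeddings.
    The FFT spreads every token's information across the whole
    sequence in O(n log n), instead of the O(n^2) cost of
    pairwise dot-product attention. Keeping only the real part
    returns a real-valued tensor of the same shape.
    """
    return np.real(np.fft.fft2(x))
```

In a full model, a layer like this would replace the attention sublayer and be followed by the usual feed-forward block and residual connections.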
arxiv.org
arxiv.org › abs › 2512.07806v1
Multi-view Pyramid Transformer: Look Coarser to See Broader
We propose Multi-view Pyramid Transformer (MVP), a scalable multi-view transformer architecture that directly reconstructs large 3D scenes from tens to hundreds of images in a single forward pass. Dra...
arxiv.org
arxiv.org › abs › 2501.12829v1
A Transformer-Based Deep Q-Learning Approach for Dynamic Load Balancing in Software-Defined Networks
This study proposes a novel approach for dynamic load balancing in Software-Defined Networks (SDNs) using a Transformer-based Deep Q-Network (DQN). Traditional load balancing mechanisms, such as Round...
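For context on the baseline the snippet mentions, a minimal sketch of the static Round Robin policy that such adaptive DQN approaches are typically compared against (an illustrative example, not code from the paper):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Static load balancing: assign requests to servers in fixed rotation.

    Unlike an adaptive policy, this ignores current server load,
    which is the limitation learned approaches aim to address.
    """

    def __init__(self, servers):
        self._ring = cycle(servers)  # endless iterator over the server list

    def pick(self):
        # Next server in rotation, regardless of its utilization.
        return next(self._ring)
```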
arxiv.org
arxiv.org › abs › 2203.14709v1
MSTR: Multi-Scale Transformer for End-to-End Human-Object Interaction Detection
Human-Object Interaction (HOI) detection is the task of identifying a set of triplets from an image. Recent work proposed transformer encoder-decoder architectures that successfully eliminated the ne...
github.com
github.com › aflah02 › Tra...from-Scratch-PyTorch
aflah02/Transformer-Implementation-from-Scratch-PyTorch
Custom Implementation of the famous Transformer Architecture from scratch based on the Seminal Paper Attention is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jone...
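The core operation such a from-scratch implementation builds on is the scaled dot-product attention from "Attention Is All You Need". A minimal NumPy sketch of that formula, softmax(QKᵀ/√d_k)V, not code from this repository:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v) -> (n_q, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarities, scaled
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V                            # convex combination of values
```

Multi-head attention runs several of these in parallel on learned linear projections of Q, K, and V, then concatenates the results.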
arxiv.org
arxiv.org › abs › 2407.15002v2
GET-Zero: Graph Embodiment Transformer for Zero-shot Embodiment Generalization
This paper introduces GET-Zero, a model architecture and training procedure for learning an embodiment-aware control policy that can immediately adapt to new hardware changes without retraining. To do...
arxiv.org
arxiv.org › abs › 2501.16394v1
Transformer^-1: Input-Adaptive Computation for Resource-Constrained Deployment
Addressing the resource waste caused by fixed computation paradigms in deep learning models under dynamic scenarios, this paper proposes a Transformer$^{-1}$ architecture based on the principle of dee...
arxiv.org
arxiv.org › abs › 2106.00197v2
Multilingual Speech Translation with Unified Transformer: Huawei Noah's Ark Lab at IWSLT 2021
This paper describes the system submitted to the IWSLT 2021 Multilingual Speech Translation (MultiST) task from Huawei Noah's Ark Lab. We use a unified transformer architecture for our MultiST model, ...
arxiv.org
arxiv.org › abs › 2305.19957v2
DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting
End-to-end text spotting aims to integrate scene text detection and recognition into a unified framework. Dealing with the relationship between the two sub-tasks plays a pivotal role in designing effe...
arxiv.org
arxiv.org › abs › 2309.05503v1
Long-Range Transformer Architectures for Document Understanding
Since their release, Transformers have revolutionized many fields from Natural Language Understanding to Computer Vision. Document Understanding (DU) was not left behind with first Transformer based m...
arxiv.org
arxiv.org › abs › 2409.13975v1
ProTEA: Programmable Transformer Encoder Acceleration on FPGA
Transformer neural networks (TNN) have been widely utilized on a diverse range of applications, including natural language processing (NLP), machine translation, and computer vision (CV). Their widesp...
arxiv.org
arxiv.org › abs › 2501.17088v1
Mamba-Shedder: Post-Transformer Compression for Efficient Selective Structured State Space Models
Large pre-trained models have achieved outstanding results in sequence modeling. The Transformer block and its attention mechanism have been the main drivers of the success of these models. Recently, ...
arxiv.org
arxiv.org › abs › 2109.05611v2
Levenshtein Training for Word-level Quality Estimation
We propose a novel scheme to use the Levenshtein Transformer to perform the task of word-level quality estimation. A Levenshtein Transformer is a natural fit for this task: trained to perform decoding...
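As background for the edit operations the Levenshtein Transformer is built around, the classic dynamic-programming Levenshtein distance (insertions, deletions, substitutions) can be sketched as follows; this is standard background, not the paper's method:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))  # distance from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]
```

The Levenshtein Transformer learns to apply the insertion and deletion operations directly during decoding, which is why it maps naturally onto word-level quality estimation.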
workspace.google.com
workspace.google.com › › intl › fr › industries › retail
Transforming retail with AI | Google Workspace with Gemini
Transform your retail business by improving inventory management, sales forecasting, and more with intuitive AI-powered tools in Google Workspace with Gemini.
research.google
research.google › blog › t...nguage-understanding
Transformer: A Novel Neural Network Architecture for Language Understanding
Posted by Jakob Uszkoreit, Software Engineer, Natural Language Understanding Neural networks, in particular recurrent neural networks (RNNs), are n...
github.com
github.com › parti-renai...sformer.en-marche.fr
parti-renaissance/transformer.en-marche.fr
Follow the progress of Emmanuel Macron's government (⭐ 13 | JavaScript)
arxiv.org
arxiv.org › abs › 2111.10480v6
TransMorph: Transformer for unsupervised medical image registration
In the last decade, convolutional neural networks (ConvNets) have been a major focus of research in medical image analysis. However, the performances of ConvNets may be limited by a lack of explicit c...