CoMeT: Collaborative Memory Transformer for Efficient Long Context Modeling
Abstract
The quadratic complexity and indefinitely growing key-value (KV) cache of standard Transformers pose a major barrier to long-context processing. To overcome this, we introduce the Collaborative Memory Transformer (CoMeT), a novel architecture that enables LLMs to handle arbitrarily long sequences with constant memory usage and linear time complexity. Designed as an efficient plug-in module, CoMeT can be integrated into pre-trained models with only minimal fine-tuning. It operates on sequential data chunks, using a dual-memory system to manage context: a temporary memory maintained as a FIFO queue for recent events, and a global memory with a gated update rule for long-range dependencies. These memories then act as a dynamic soft prompt for the next chunk. To enable efficient fine-tuning on extremely long contexts, we introduce a novel layer-level pipeline parallelism strategy. Our approach is highly effective: a model equipped with CoMeT and fine-tuned on 32k contexts can accurately retrieve a passkey from any position within a 1M-token sequence. On the SCROLLS benchmark, CoMeT surpasses other efficient methods and achieves performance comparable to a full-attention baseline on summarization tasks. Its practical effectiveness is further validated on real-world agent and user behavior QA tasks. The code is available at: https://anonymous.4open.science/r/comet-B00B/
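The dual-memory bookkeeping described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the summary pooling, gate formula, memory sizes, and all parameter names (`DualMemory`, `W_gate`, `soft_prompt`, etc.) are assumptions for exposition, not the paper's actual implementation, and random weights stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16          # hidden size (illustrative)
CHUNK = 8       # tokens per chunk
FIFO_SLOTS = 3  # number of recent chunk summaries kept in temporary memory

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DualMemory:
    """Hypothetical sketch of a dual-memory system in the spirit of CoMeT:
    a FIFO temporary memory for recent chunks plus a gated global memory
    for long-range dependencies."""

    def __init__(self):
        self.fifo = []                                # temporary memory (FIFO queue)
        self.global_mem = np.zeros(D)                 # global memory slot
        self.W_gate = rng.normal(size=(D, D)) * 0.1   # stand-in gate weights

    def update(self, chunk_hidden):
        # Summarize the chunk; mean pooling is a placeholder choice.
        summary = chunk_hidden.mean(axis=0)

        # Temporary memory: enqueue the newest summary, evict the oldest.
        self.fifo.append(summary)
        if len(self.fifo) > FIFO_SLOTS:
            self.fifo.pop(0)

        # Global memory: gated interpolation between old state and new summary.
        g = sigmoid(self.W_gate @ summary)            # per-dimension gate in (0, 1)
        self.global_mem = g * self.global_mem + (1.0 - g) * summary

    def soft_prompt(self):
        # Both memories are prepended to the next chunk as a dynamic soft prompt.
        return np.stack(self.fifo + [self.global_mem])

mem = DualMemory()
for _ in range(5):                        # process five sequential chunks
    mem.update(rng.normal(size=(CHUNK, D)))

prompt = mem.soft_prompt()
print(prompt.shape)                       # (FIFO_SLOTS + 1, D) = (4, 16)
```

Because the FIFO queue and the global slot have fixed sizes, the soft prompt stays constant-length regardless of how many chunks have been processed, which is what yields constant memory usage and linear time in the sequence length.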