Why Variable Sequence Length Breaks DDP Throughput

How to reproduce, measure, and fix token skew in transformer training with length bucketing and token-budget batching.

TL;DR: In transformer training, DDP can look balanced by sample count while being badly imbalanced by actual work. I built a small one-machine lab that uses a tiny transformer-like model with variable sequence lengths and four distributed ranks. The headline result was simple:

- uniform 128-token batches: 250,959 tokens/s
- variable lengths with fixed sample count: 122,006 tokens/s
- variable lengths with length bucketing: 208,668 tokens/s
- variable lengths with token-budget batching: 193,289 tokens/s

The bad case was not a kernel problem. It was a batching problem: ...

March 12, 2026 · 8 min · Duo An
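
The token-budget batching mentioned in the excerpt can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the post's code: the function name `token_budget_batches` and the 4096-token budget are placeholders. The idea is simply to pack samples into a batch until its padded cost (longest sequence times batch size) would exceed a fixed per-rank token budget, so every rank does roughly the same amount of work regardless of sequence-length skew.

```python
import random

def token_budget_batches(lengths, budget=4096):
    """Group sample indices so each batch stays under `budget` padded tokens."""
    # Sorting by length keeps similar lengths together, which limits padding waste.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches, current, max_len = [], [], 0
    for i in order:
        new_max = max(max_len, lengths[i])
        # Padded cost of the batch = longest sequence in it * number of samples.
        if current and new_max * (len(current) + 1) > budget:
            batches.append(current)
            current, max_len = [], 0
            new_max = lengths[i]
        current.append(i)
        max_len = new_max
    if current:
        batches.append(current)
    return batches

# Example: even with highly variable lengths, batches end up with comparable work.
lengths = [random.randint(16, 512) for _ in range(1000)]
for batch in token_budget_batches(lengths)[:3]:
    padded = max(lengths[i] for i in batch) * len(batch)
    print(f"{len(batch):3d} samples, ~{padded} padded tokens")
```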

Building a Truly Scalable Multimodal Data Pipeline: A Streaming-First View

Most “Scalable” Multimodal Pipelines Don’t Survive Foundation-Model Scale

A lot of multimodal pipelines claim to scale. In practice, they often depend on at least one of the following:

- global shuffles (groupBy/join/repartition),
- materializing massive intermediate datasets,
- centralized coordination that becomes a bottleneck, or
- brittle recovery logic (rerun-the-world on failure).

That works for demos. It breaks at foundation-model scale. This series is about a different design point: a streaming-first multimodal pipeline that scales linearly with data and hardware, with no global shuffle, and resumable at partition granularity. ...

December 22, 2025 · 4 min · Duo An
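
As a rough illustration of what "resumable at partition granularity" can look like, here is a minimal sketch under my own assumptions; the `state/done` marker directory and the per-record transform are placeholders, not the series' actual design. Each partition streams through independently and leaves a done-marker when it finishes, so a restart skips completed partitions instead of rerunning the world, and no step requires a global shuffle.

```python
import json
import os

DONE_DIR = "state/done"  # hypothetical marker directory, not the series' layout

def process_partition(path, out_dir):
    """Stream one partition end to end: no cross-partition shuffle or join."""
    out_path = os.path.join(out_dir, os.path.basename(path))
    with open(path) as src, open(out_path, "w") as dst:
        for line in src:  # record-by-record streaming
            record = json.loads(line)
            record["caption"] = record.get("caption", "").strip()  # stand-in transform
            dst.write(json.dumps(record) + "\n")

def run(partitions, out_dir="out"):
    os.makedirs(DONE_DIR, exist_ok=True)
    os.makedirs(out_dir, exist_ok=True)
    for path in partitions:
        marker = os.path.join(DONE_DIR, os.path.basename(path) + ".done")
        if os.path.exists(marker):  # finished before a restart: skip it
            continue
        process_partition(path, out_dir)
        open(marker, "w").close()  # commit this partition only
```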