Sarvam AI Launches 30B and 105B Models With MoE Architecture and 128K Context
Sarvam AI has announced two large language models—Sarvam 30B and Sarvam 105B—built from scratch using a Mixture of Experts (MoE) architecture. The release signals a clear push toward enterprise-grade AI…
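For readers unfamiliar with the term, a Mixture of Experts layer replaces the single dense feed-forward block in a transformer with several "expert" networks plus a small router that sends each token to only a few of them. The sketch below is a generic top-k MoE layer in PyTorch, offered purely to illustrate the technique: the class name, dimensions, and expert count are hypothetical and say nothing about how Sarvam's models are actually built.

```python
# A minimal, illustrative Mixture of Experts (MoE) layer in PyTorch.
# This is a generic sketch of the technique, NOT Sarvam AI's implementation;
# all names and sizes here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        # Pick the top-k experts per token and normalize their weights.
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token, which is why
        # MoE models activate a fraction of total parameters per token.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(ToyMoELayer()(tokens).shape)  # torch.Size([16, 512])
```

Because only the top-k experts fire for any given token, an MoE model can carry a large total parameter count while keeping per-token compute close to that of a much smaller dense model, which is the usual motivation for the architecture at these scales.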