Cohere Launches A Family Of Open Multilingual Models

Enterprise AI company Cohere has unveiled a new family of multilingual language models designed to bring powerful generative AI to everyday devices while broadening support for underrepresented languages.

The models, collectively branded Tiny Aya, are open-weight systems whose underlying parameters are publicly available for developers and researchers to inspect, adapt, and deploy. Cohere says Tiny Aya supports more than 70 languages and is optimized to run efficiently on consumer hardware such as laptops, enabling fully offline use cases where connectivity is limited or sensitive data cannot leave the device.

Developed by Cohere Labs, the company’s research arm, Tiny Aya places particular emphasis on South Asian languages, including Bengali, Hindi, Punjabi, Urdu, Gujarati, Tamil, Telugu, and Marathi. That focus reflects the needs of large, linguistically diverse populations where English-first AI tools often fall short.

At the core of the release is a 3.35-billion-parameter base model, a relatively compact architecture by current industry standards but one Cohere positions as a sweet spot between capability and efficiency. On top of this foundation, the company has introduced TinyAya-Global, an instruction-tuned version designed to follow user prompts across many languages for general-purpose applications.

Cohere has also created regional variants tailored to specific linguistic ecosystems. TinyAya-Earth targets African languages, TinyAya-Fire is tuned for South Asian languages, and TinyAya-Water focuses on Asia Pacific, West Asia, and European languages. According to the company, this regional specialization allows each model to develop deeper linguistic grounding and cultural nuance while still maintaining broad multilingual coverage.

The models were trained on a single cluster of 64 Nvidia H100 GPUs, a relatively modest setup compared with the massive supercomputers used for frontier-scale systems. Cohere argues that this more efficient training regime, combined with software optimized for on-device inference, makes Tiny Aya particularly attractive for researchers, startups, and enterprises that need capable models without hyperscale infrastructure.

Because Tiny Aya can run locally, it can power offline translation, privacy-preserving assistants, and domain-specific tools for communities that operate primarily in their native languages. In countries with patchy connectivity or high data costs, that capability could significantly expand access to AI-powered services.

The Tiny Aya models are being distributed through major AI model hubs and Cohere’s own platform, alongside training and evaluation datasets and an upcoming technical report detailing the training methodology. Together, these releases are intended to seed a broader ecosystem of multilingual, locally deployable AI applications.
