Cohere for AI, AI startup Cohere’s nonprofit research lab, this week released Aya Vision, a multimodal “open” AI model that the lab claims is best-in-class.
Aya Vision can perform tasks like writing image captions, answering questions about photos, translating text, and generating summaries in 23 major languages. Cohere, which is also making Aya Vision available for free through WhatsApp, called it “a significant step towards making technical breakthroughs accessible to researchers worldwide.”
“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. “Aya Vision aims to explicitly help close that gap.”
Aya Vision comes in two flavors: Aya Vision 32B and Aya Vision 8B. The more sophisticated of the two, Aya Vision 32B, sets a “new frontier,” Cohere said, outperforming models 2x its size, including Meta’s Llama 3.2 90B Vision, on certain visual understanding benchmarks. Meanwhile, Aya Vision 8B scores better on some evaluations than models 10x its size, according to Cohere.
Both models are available from AI dev platform Hugging Face under a Creative Commons noncommercial (CC BY-NC 4.0) license with Cohere’s acceptable use addendum. They can’t be used for commercial applications.
Cohere said that Aya Vision was trained using a “diverse pool” of English datasets, which the lab translated and used to create synthetic annotations. Annotations, also known as tags or labels, help models understand and interpret data during training. For example, annotations for an image recognition model might take the form of markings around objects, or captions describing each person, place, or object depicted in an image.
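To make the idea concrete, here is a minimal, hypothetical sketch of what one image annotation might look like, loosely modeled on the widely used COCO format. Cohere has not published its actual annotation schema, so the field names and structure below are assumptions for illustration only.

```python
# Hypothetical annotation record for one image: a whole-image caption
# plus labeled bounding boxes (loosely COCO-style). This is NOT
# Cohere's actual training-data format.

def make_annotation(image_id, caption, objects):
    """Bundle a caption and labeled bounding boxes for a single image."""
    return {
        "image_id": image_id,
        "caption": caption,  # text label describing the whole image
        "objects": [
            # each object gets a class label and an [x, y, width, height] box
            {"label": label, "bbox": bbox}
            for label, bbox in objects
        ],
    }

annotation = make_annotation(
    image_id=42,
    caption="A dog sitting next to a bicycle on a sidewalk",
    objects=[
        ("dog", [34, 50, 120, 140]),
        ("bicycle", [160, 40, 200, 180]),
    ],
)
print(annotation["caption"])       # the image-level caption
print(len(annotation["objects"]))  # number of labeled objects
```

In a synthetic-annotation pipeline like the one Cohere describes, records of this general shape would be generated (and, for Aya Vision, translated) by a model rather than written by human labelers.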
Cohere’s use of synthetic annotations — that is, annotations generated by AI — is on trend. Despite its potential downsides, rivals including OpenAI are increasingly leveraging synthetic data to train models as the well of real-world data dries up. Research firm Gartner estimates that 60% of the data used for AI and analytics projects last year was synthetically created.
According to Cohere, training Aya Vision on synthetic annotations enabled the lab to use fewer resources while achieving competitive performance.
“This showcases our critical focus on efficiency and [doing] more using less compute,” Cohere wrote in its blog. “This also enables greater support for the research community, who often have more limited access to compute resources.”
Together with Aya Vision, Cohere also released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code.
The AI industry is in the midst of what some have called an “evaluation crisis,” a consequence of the popularization of benchmarks that produce aggregate scores correlating poorly with proficiency on the tasks most AI users care about. Cohere asserts that AyaVisionBench is a step toward rectifying this, providing a “broad and challenging” framework for assessing a model’s cross-lingual and multimodal understanding.
With any luck, that’s indeed the case.
“[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face. “We make this evaluation set available to the research community to push forward multilingual multimodal evaluations.”