Cohere claims its new Aya Vision AI model is best-in-class

Cohere for AI, AI startup Cohere’s nonprofit research lab, this week released a multimodal “open” AI model, Aya Vision, which the lab claims is best-in-class.

Aya Vision can perform tasks like writing image captions, answering questions about photos, translating text, and generating summaries in 23 major languages. Cohere, which is also making Aya Vision available for free through WhatsApp, called it “a significant step towards making technical breakthroughs accessible to researchers worldwide.”

“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. “Aya Vision aims to explicitly help close that gap.”

Aya Vision comes in a couple of flavors: Aya Vision 32B and Aya Vision 8B. The more sophisticated of the two, Aya Vision 32B, sets a “new frontier,” Cohere said, outperforming models 2x its size including Meta’s Llama-3.2 90B Vision on certain visual understanding benchmarks. Meanwhile, Aya Vision 8B scores better on some evaluations than models 10x its size, according to Cohere.

Both models are available from AI dev platform Hugging Face under a Creative Commons Attribution-NonCommercial 4.0 license with Cohere’s acceptable use addendum. They can’t be used for commercial applications.

Cohere said that Aya Vision was trained using a “diverse pool” of English datasets, which the lab translated and used to create synthetic annotations. Annotations, also known as tags or labels, help models understand and interpret data during the training process. For example, annotation to train an image recognition model might take the form of markings around objects or captions referring to each person, place, or object depicted in an image.
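To make the idea concrete, a single annotated training image might be represented roughly like this. The schema below is purely illustrative; the article does not describe Cohere's actual dataset format, and the field names here are assumptions.

```python
# A hypothetical annotation record for one training image.
# (Illustrative only -- not Cohere's actual schema.)
annotation = {
    "image_id": "img_0001",
    "caption": "A cyclist rides past a red food truck.",
    "objects": [
        {"label": "person", "bbox": [34, 50, 120, 210]},   # [x, y, width, height]
        {"label": "truck",  "bbox": [150, 40, 300, 180]},
    ],
}

def labels(record):
    """Return the sorted object labels a model would learn from this record."""
    return sorted(obj["label"] for obj in record["objects"])

print(labels(annotation))  # ['person', 'truck']
```

In a multilingual pipeline like the one Cohere describes, the caption field would be machine-translated into each target language, and the resulting synthetic records used to supplement scarce non-English data.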

Cohere’s Aya Vision model can perform a range of visual understanding tasks. Image Credits: Cohere

Cohere’s use of synthetic annotations — that is, annotations generated by AI — is on trend. Despite its potential downsides, rivals including OpenAI are increasingly leveraging synthetic data to train models as the well of real-world data dries up. Research firm Gartner estimates that 60% of the data used for AI and analytics projects last year was synthetically created.

According to Cohere, training Aya Vision on synthetic annotations enabled the lab to use fewer resources while achieving competitive performance.

“This showcases our critical focus on efficiency and [doing] more using less compute,” Cohere wrote in its blog. “This also enables greater support for the research community, who often have more limited access to compute resources.”

Alongside Aya Vision, Cohere released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code.

The AI industry is in the midst of what some have called an “evaluation crisis,” a consequence of the popularization of benchmarks that produce aggregate scores correlating poorly with proficiency on the tasks most AI users care about. Cohere asserts that AyaVisionBench is a step toward rectifying this, providing a “broad and challenging” framework for assessing a model’s cross-lingual and multimodal understanding.

With any luck, that’s indeed the case.

“[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face. “We make this evaluation set available to the research community to push forward multilingual multimodal evaluations.”

Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
