Ai2’s open source Tülu 3 lets anyone play the AI post-training game

Ask anyone in the open source AI community, and they will tell you the gap between them and the big private companies is more than just computing power. Ai2 is working to fix that, first with fully open source datasets and models and now with an open and easily adapted post-training regimen to turn “raw” large language models (LLMs) into usable ones.

Contrary to what many think, “foundation” language models don’t come out of the training process ready to put to work. The pretraining process is necessary, of course, but far from sufficient. Some even believe that pretraining may soon no longer be the most important part at all.

That’s because the post-training process is increasingly being shown to be where real value can be created. That’s where the model is molded from a giant, know-it-all network, one that will as readily produce Holocaust-denial talking points as it will cookie recipes, into something more focused and useful. You generally don’t want the former!

Companies are secretive about their post-training regimens because, while everyone can scrape the web and make a model using state-of-the-art methods, making that model useful to, say, a therapist or research analyst is a completely different challenge.

Ai2 (formerly known as the Allen Institute for AI) has spoken out about the lack of openness in ostensibly “open” AI projects, like Meta’s Llama. While the model is indeed free for anyone to use and tweak, the sources and process of making the raw model and the method of training it for general use remain carefully guarded secrets. It’s not bad — but it also isn’t really “open.”

Ai2, on the other hand, is committed to being as open as it can possibly be, from exposing its data collection, curation, cleaning, and other pipelines to the exact training methods it used to produce LLMs like OLMo.

But the simple truth is that few developers have the chops to run their own LLMs to begin with, and even fewer can do post-training the way Meta, OpenAI, or Anthropic does — partly because they don’t know how, but also because it’s technically complex and time-consuming.

Fortunately, Ai2 wants to democratize this aspect of the AI ecosystem as well. That’s where Tülu 3 comes in. It’s a huge improvement over an earlier, more rudimentary post-training process (called, you guessed it, Tülu 2). In the nonprofit’s tests, this resulted in scores on par with the most advanced “open” models out there. It’s based on months of experimentation, reading, and interpreting what the big guys are hinting at, and lots of iterative training runs.

A diagram doesn’t really capture it all, but you can see the general shape of it. Image Credits: Ai2

Basically, Tülu 3 covers everything from choosing which topics you want your model to care about — for instance, downplaying multilingual capabilities but dialing up math and coding — to taking it through a long regimen of data curation, reinforcement learning, fine-tuning and preference tuning, to tweaking a bunch of other meta-parameters and training processes that I couldn’t adequately describe to you. The result is, hopefully, a far more capable model focused on the skills you need it to have.
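For a more concrete picture of what such a recipe looks like, here is a minimal, purely illustrative sketch in Python. The class, function names, and stage parameters below are hypothetical placeholders, not Ai2’s actual Tülu 3 code or API; they simply trace the sequence of steps the paragraph above describes, with the specific algorithm choices marked as assumptions.

```python
# Hypothetical sketch of a post-training recipe of the kind described above.
# None of these names come from Tülu 3 itself; they are placeholders tracing
# the stages: skill selection, data curation, supervised fine-tuning,
# preference tuning, and reinforcement learning.

from dataclasses import dataclass, field


@dataclass
class PostTrainingRecipe:
    base_model: str                                # a "raw" pretrained checkpoint
    skills: list[str] = field(default_factory=lambda: ["math", "coding"])
    sft_epochs: int = 2                            # supervised fine-tuning passes
    preference_method: str = "DPO"                 # preference-tuning algorithm (assumed)
    rl_reward: str = "verifiable"                  # reward style for the RL stage (assumed)

    def curate_data(self) -> dict[str, str]:
        # Select and filter prompt sets that emphasize the chosen skills.
        return {skill: f"curated_{skill}_prompts.jsonl" for skill in self.skills}

    def run(self) -> str:
        # Apply each stage in order and return a description of the result.
        data = self.curate_data()
        model = f"{self.base_model} + SFT({self.sft_epochs} epochs on {list(data)})"
        model = f"{model} + {self.preference_method}"
        model = f"{model} + RL({self.rl_reward} rewards)"
        return model


if __name__ == "__main__":
    recipe = PostTrainingRecipe(base_model="llama-3-8b-raw")
    print(recipe.run())  # prints the sequence of stages applied to the base model
```

The real recipe is, of course, far more involved than a single object like this; the point is only that each stage is an explicit, configurable step rather than a black box.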

The real point, though, is taking one more toy out of the private companies’ toybox. Previously, if you wanted to build a custom-trained LLM, it was very hard to avoid using a major company’s resources one way or the other, or hiring a middleman who would do the work for you. That’s not only expensive, but it also introduces risks that some companies are loath to take.

For instance, medical research and service companies: Sure, you could use OpenAI’s API, or talk to Scale or whoever to customize an in-house model, but both of those routes put sensitive user data in the hands of outside companies. If that’s unavoidable, you just have to bite the bullet. But if it isn’t? What if, for instance, a research organization released a soup-to-nuts pre- and post-training regimen that you could implement on-premises? That may well be a better alternative.

Ai2 is using this itself, which is the best endorsement one can give. Even though the test results it’s publishing today use Llama as a foundation model, it plans to put out an OLMo-based, Tülu 3-trained model soon that should offer even more improvements over the baseline and also be fully open source, tip to tail.

If you’re curious how the model performs currently, give the live demo a shot.


Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes health, sport, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
