Why Elon Musk’s AI company ‘open-sourcing’ Grok matters — and why it doesn’t

Elon Musk’s xAI released its Grok large language model as “open source” over the weekend. The billionaire clearly hopes to set his company at odds with rival OpenAI, which despite its name is not particularly open. But does releasing the code for something like Grok actually contribute to the AI development community? Yes and no.

Grok is a chatbot trained by xAI to fill the same vaguely defined role as something like ChatGPT or Claude: you ask it, it answers. This LLM, however, was given a sassy tone and extra access to Twitter data as a way of differentiating it from the rest.

As always, these systems are nearly impossible to evaluate, but the general consensus seems to be that it’s competitive with last-generation medium-size models like GPT-3.5. (Whether you decide this is impressive given the short development time frame or disappointing given the budget and bombast surrounding xAI is entirely up to you.)

At any rate, Grok is a modern and functional LLM of significant size and capability, and the more access the dev community has to the guts of such things, the better. The problem is in defining “open” in a way that does more than let a company (or billionaire) claim the moral high ground.

This isn’t the first time the terms “open” and “open source” have been questioned or abused in the AI world. And we aren’t just talking about a technical quibble, such as picking a usage license that’s not as open as another (Grok is Apache 2.0, if you’re wondering).

To begin with, AI models are unlike other software when it comes to making them “open source.”

If you’re making, say, a word processor, it’s relatively simple to make it open source: you publish all your code publicly and let the community propose improvements or make their own versions. Part of what makes open source valuable as a concept is that every aspect of the application is original or credited to its original creator — this transparency and adherence to correct attribution is not just a byproduct, but is core to the very concept of openness.

With AI, this is arguably not possible at all, because machine learning models are created through a largely unknowable process in which a tremendous amount of training data is distilled into a complex statistical representation whose structure no human really directed, or even understands. This process cannot be inspected, audited, and improved the way traditional code can — so while such a model still has immense value in one sense, it can never really be open. (The standards community hasn’t even defined what “open” means in this context, but it is actively discussing the question.)

That hasn’t stopped AI developers and companies from describing and marketing their models as “open,” a term that has lost much of its meaning in this context. Some call their model “open” if there is a public-facing interface or API. Some call it “open” if they release a paper describing the development process.

Arguably the closest an AI model can come to “open source” is when its developers release its weights, which is to say the exact attributes of the countless nodes of its neural networks, which perform vector math operations in a precise order to complete the pattern started by a user’s input. But even open-weights models like Llama 2 exclude other important data, such as the training dataset and training process, which would be necessary to recreate the model from scratch. (Some projects do go further, of course.)
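A toy sketch can make concrete what “weights” actually are. The example below is illustrative only — the dimensions are made up and vastly smaller than any real LLM — but it shows the basic idea: a model’s weights are just fixed numeric arrays, applied in a precise order to an input vector, and releasing them means publishing those arrays without the data or process that produced them.

```python
import numpy as np

# The "weights" of this toy two-layer network are just fixed matrices.
# An open-weights release publishes arrays like these -- nothing about
# how they came to hold these particular values.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # layer 1 weights
W2 = rng.normal(size=(8, 2))   # layer 2 weights

def forward(x):
    """Apply each weight matrix in order, with a nonlinearity between."""
    hidden = np.maximum(x @ W1, 0)  # ReLU activation
    return hidden @ W2

x = np.ones(4)                  # stand-in for a user's input embedding
print(forward(x).shape)         # a 2-element output vector
```

With the weights in hand you can run, fine-tune, or distill the model — but you cannot reconstruct the training run that produced them, which is the part a traditional open-source release would expose.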

All this is before even mentioning that it takes millions of dollars in computing and engineering resources to create or replicate these models, effectively restricting that work to companies with considerable resources.

So where does xAI’s Grok release fall on this spectrum?

As an open-weights model, it’s ready for anyone to download, use, modify, fine-tune, or distill. That’s good! It also appears to be among the largest models anyone can access freely this way, in terms of parameters — 314 billion — which gives curious engineers a lot to work with if they want to test how it performs after various modifications.

The size of the model comes with serious drawbacks, though: you’ll need hundreds of gigabytes of high-speed RAM to use it in this raw form. If you’re not already in possession of, say, a dozen Nvidia H100s in a six-figure AI inference rig, don’t bother clicking that download link.
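The arithmetic behind that claim is simple back-of-the-envelope math. The parameter count comes from the release; the bytes-per-parameter figures are the standard sizes for the listed numeric precisions, and the totals cover the weights alone, ignoring runtime overhead like activations:

```python
# Rough memory estimate just to hold Grok-1's 314B parameters.
PARAMS = 314e9  # 314 billion parameters

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gigabytes = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{gigabytes:,.0f} GB for weights alone")
# fp16/bf16: ~628 GB, int8: ~314 GB, int4: ~157 GB

# A single Nvidia H100 has 80 GB of memory, so even half-precision
# weights need roughly eight of them before any runtime overhead.
```

The “dozen H100s” figure in the text adds headroom on top of that minimum for inference overhead.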

And although Grok is arguably competitive with some other modern models, it’s also far, far larger than they are, meaning it requires more resources to accomplish the same thing. There’s always a hierarchy of size, efficiency, and other metrics, and it’s still valuable, but this is more raw material than final product. It’s also unclear whether this is the latest and best version of Grok, such as the clearly tuned version some users have access to via X.

Overall, it’s a good thing to release this data, but it’s not a game-changer the way some hoped it might be.

It’s also hard not to wonder why Musk is doing this. Is his nascent AI company really dedicated to open source development? Or is this just mud in the eye of OpenAI, with which Musk is currently pursuing a billionaire-level beef?

If they are really dedicated to open source development, this will be the first of many releases, and they will hopefully take the feedback of the community into account, release other crucial information, characterize the training data process, and further explain their approach. If they aren’t, and this is only done so Musk can point to it in online arguments, it’s still valuable — just not something anyone in the AI world will rely on or pay much attention to after the next few months as they play with the model.




Lisa Holden