Databricks spent $10M on new DBRX generative AI model, but it can’t beat GPT-4

If you wanted to raise the profile of your major tech company and had $10 million to spend, how would you spend it? On a Super Bowl ad? An F1 sponsorship?

You could spend it training a generative AI model. While not marketing in the traditional sense, generative models are attention grabbers — and increasingly funnels to vendors’ bread-and-butter products and services.

See Databricks’ DBRX, a new generative AI model announced today that’s akin to OpenAI’s GPT series and Google’s Gemini. Available on GitHub and the AI dev platform Hugging Face for research as well as for commercial use, base (DBRX Base) and fine-tuned (DBRX Instruct) versions of DBRX can be run and tuned on public, custom or otherwise proprietary data.
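If you want to try DBRX yourself, a minimal sketch of loading the instruct version through Hugging Face’s transformers library looks something like the following. The “databricks/dbrx-instruct” checkpoint name comes from Hugging Face rather than this article, the repo is gated behind a license acceptance, and, as discussed below, the full model demands serious multi-GPU hardware.

```python
# Minimal sketch: loading DBRX Instruct via Hugging Face transformers.
# Assumes the gated "databricks/dbrx-instruct" checkpoint and enough
# GPU memory to hold the weights (multiple 80GB cards in practice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "databricks/dbrx-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # 16-bit weights to halve memory use
    device_map="auto",           # shard layers across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "What is Databricks?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```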

“DBRX was trained to be useful and provide information on a wide variety of topics,” Naveen Rao, VP of generative AI at Databricks, told TechCrunch in an interview. “DBRX has been optimized and tuned for English language usage, but is capable of conversing and translating into a wide variety of languages, such as French, Spanish and German.”

Databricks describes DBRX as “open source” in a similar vein to “open source” models like Meta’s Llama 2 and AI startup Mistral’s models. (Whether these models truly meet the definition of open source is the subject of robust debate.)

Databricks says that it spent roughly $10 million and eight months training DBRX, which it claims (quoting from a press release) “outperform[s] all existing open source models on standard benchmarks.”

But — and here’s the marketing rub — it’s exceptionally hard to use DBRX unless you’re a Databricks customer.

That’s because, in order to run DBRX in the standard configuration, you need a server or PC with at least four Nvidia H100 GPUs. A single H100 sells for tens of thousands of dollars. That might be chump change to the average enterprise, but for many developers and solopreneurs, it’s well beyond reach.
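The arithmetic behind that four-GPU requirement is straightforward. Here’s a rough sketch, assuming DBRX’s published size of roughly 132 billion total parameters (a figure from Databricks’ announcement, not this article) stored in 16-bit precision:

```python
# Back-of-envelope memory math for serving DBRX, assuming ~132B total
# parameters (Databricks' published figure, not from this article) in
# 16-bit precision. MoE models must hold ALL experts in memory, even
# though only a fraction of the parameters are active per token.
total_params = 132e9        # total parameters, all experts included
bytes_per_param = 2         # bf16/fp16 = 2 bytes each
h100_memory_gb = 80         # one Nvidia H100 carries 80 GB of HBM

weights_gb = total_params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")                    # ~264 GB
print(f"H100s for weights: {weights_gb / h100_memory_gb:.1f}")   # ~3.3
# Activations and the KV cache push the real requirement to four cards.
```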

And there’s fine print to boot. Databricks says that companies with more than 700 million active users will face “certain restrictions” comparable to Meta’s for Llama 2, and that all users will have to agree to terms ensuring that they use DBRX “responsibly.” (Databricks hadn’t volunteered those terms’ specifics as of publication time.)

Databricks pitches its managed Mosaic AI Foundation Model product as the answer to these roadblocks; in addition to running DBRX and other models, it provides a training stack for fine-tuning DBRX on custom data. Customers can privately host DBRX using Databricks’ Model Serving offering, Rao suggested, or they can work with Databricks to deploy DBRX on the hardware of their choosing.
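In practice, serving through Databricks means calling a hosted endpoint. Here’s a hypothetical sketch assuming Databricks’ OpenAI-compatible serving interface; the workspace URL, token and “databricks-dbrx-instruct” endpoint name are illustrative placeholders, not details confirmed by this article.

```python
# Hypothetical sketch: querying a DBRX endpoint hosted on Databricks
# Model Serving, assuming its OpenAI-compatible chat interface. The
# workspace URL, token and endpoint name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-workspace>.cloud.databricks.com/serving-endpoints",
    api_key="<DATABRICKS_TOKEN>",  # a Databricks personal access token
)

response = client.chat.completions.create(
    model="databricks-dbrx-instruct",  # assumed endpoint name
    messages=[{"role": "user", "content": "Summarize our Q3 sales notes."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```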

Rao added:

We’re focused on making the Databricks platform the best choice for customized model building, so ultimately the benefit to Databricks is more users on our platform. DBRX is a demonstration of our best-in-class pre-training and tuning platform, which customers can use to build their own models from scratch. It’s an easy way for customers to get started with the Databricks Mosaic AI generative AI tools. And DBRX is highly capable out-of-the-box and can be tuned for excellent performance on specific tasks at better economics than large, closed models.

Databricks claims DBRX runs up to 2x faster than Llama 2, in part thanks to its mixture of experts (MoE) architecture. MoE, an approach DBRX shares with Mistral’s newer models and Google’s recently announced Gemini 1.5 Pro, breaks data processing tasks into multiple subtasks and delegates them to smaller, specialized “expert” models.

Most MoE models have eight experts. DBRX has 16, which Databricks says improves quality.
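To make the routing idea concrete, here’s a toy sketch of top-k expert selection in PyTorch. It is illustrative, not DBRX’s actual code; the 16-experts, 4-active-per-token configuration reflects Databricks’ technical write-up rather than this article, and the loop is deliberately naive for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy mixture-of-experts layer (illustrative, not DBRX's implementation).
# DBRX reportedly routes each token to 4 of its 16 experts; those figures
# come from Databricks' technical write-up, not this article.
NUM_EXPERTS, TOP_K, DIM = 16, 4, 64

experts = nn.ModuleList(nn.Linear(DIM, DIM) for _ in range(NUM_EXPERTS))
router = nn.Linear(DIM, NUM_EXPERTS)  # scores each token against each expert

def moe_layer(x):                     # x: (num_tokens, DIM)
    scores = router(x)                               # (num_tokens, NUM_EXPERTS)
    weights, chosen = scores.topk(TOP_K, dim=-1)     # best 4 experts per token
    weights = F.softmax(weights, dim=-1)             # normalize their votes
    out = torch.zeros_like(x)
    for t in range(x.size(0)):                       # naive loops, for clarity
        for slot in range(TOP_K):
            e = chosen[t, slot].item()
            out[t] += weights[t, slot] * experts[e](x[t])
    return out

print(moe_layer(torch.randn(3, DIM)).shape)  # torch.Size([3, 64])
```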

Quality is relative, however.

While Databricks claims that DBRX outperforms Llama 2 and Mistral’s models on certain language understanding, programming, math and logic benchmarks, DBRX falls short of arguably the leading generative AI model, OpenAI’s GPT-4, in most areas outside of niche use cases like database programming language generation.

Rao admits that DBRX has other limitations as well, namely that it, like all other generative AI models, can fall victim to “hallucinating” answers to queries despite Databricks’ work in safety testing and red teaming. Because the model was simply trained to associate words or phrases with certain concepts, if those associations aren’t totally accurate, its responses won’t always be accurate.

Also, DBRX is not multimodal, unlike some more recent flagship generative AI models including Gemini. (It can only process and generate text, not images.) And we don’t know exactly what sources of data were used to train it; Rao would only reveal that no Databricks customer data was used in training DBRX.

“We trained DBRX on a large set of data from a diverse range of sources,” he added. “We used open data sets that the community knows, loves and uses every day.”

I asked Rao whether any of the DBRX training data sets were copyrighted or licensed, or show obvious signs of biases (e.g. racial biases), but he didn’t answer directly, saying only, “We’ve been careful about the data used, and conducted red teaming exercises to improve the model’s weaknesses.” Generative AI models have a tendency to regurgitate training data, a major concern for commercial users of models trained on unlicensed, copyrighted or very clearly biased data. In the worst-case scenario, a user could end up on the ethical and legal hook for unwittingly incorporating IP-infringing or biased work from a model into their projects.

Some companies training and releasing generative AI models offer policies covering the legal fees arising from possible infringement. Databricks doesn’t at present — Rao says that the company’s “exploring scenarios” under which it might.

Given this and the other aspects in which DBRX misses the mark, the model seems like a tough sell to anyone but current or would-be Databricks customers. Databricks’ rivals in generative AI, including OpenAI, offer equally if not more compelling technologies at very competitive pricing. And plenty of generative AI models come closer to the commonly understood definition of open source than DBRX.

Rao promises that Databricks will continue to refine DBRX and release new versions as the company’s Mosaic Labs R&D team — the team behind DBRX — investigates new generative AI avenues.

“DBRX is pushing the open source model space forward and challenging future models to be built even more efficiently,” he said. “We’ll be releasing variants as we apply techniques to improve output quality in terms of reliability, safety and bias … We see the open model as a platform on which our customers can build custom capabilities with our tools.”

Judging by where DBRX now stands relative to its peers, it’s an exceptionally long road ahead.


