Unbabel among first AI startups to win millions of GPU training hours on EU supercomputers

The European Union has announced the winners of a “Large AI Grand Challenge” it kicked off earlier this year in a bid to accelerate the pace of homegrown innovation by large-scale AI model makers.

Four startups will share €1 million in prize money and — perhaps more importantly — eight million GPU hours to train their models on a couple of the bloc’s high performance computing (HPC) supercomputers over the next 12 months. The Commission reckons this will enable them to shrink model training times “from years to weeks”, as its PR puts it.

The four winning startups are, in alphabetical order: French fintech Lingua Custodia, which does financial document processing using natural language processing (NLP); Belgian startup Textgain, which also uses NLP for text processing but focuses on analysis of unstructured data, such as monitoring social media chatter for hate speech; Latvian startup Tilde, another language specialist, which focuses on Balto-Slavic languages, offering machine translation and AI-powered chatbots in the target tongues; and Portugal’s Unbabel, which has historically blended machine translation with the expertise of native human speakers, applying AI to customer service and productivity use cases for enterprise customers.

The Commission said the AI Challenge received a total of 94 proposals.

Unbabel likely has the highest profile of the four winners. The Y Combinator-backed translation business has been around for the best part of a decade and raised close to $100M over its run, per Crunchbase.

Whether Unbabel needs an extra quarter of a million euros, or even 2 million freebie GPU training hours, is up for debate. But even veteran AI startups may feel every little helps, given the fast-paced developments in generative AI over the past year and a half or so.

At the end of the training period, the EU expects all the winners to release their developed models under an open-source license for non-commercial use or publish their research findings. 

EU supercomputers to support AI startups

The EU unveiled a plan to expand startup access to the bloc’s supercomputing hardware in President Ursula von der Leyen’s State of the Union address last fall, saying at the time that it wanted “ethical and responsible AI startups” to be first in line to tap computational support.

The European High Performance Computing Joint Undertaking (aka EuroHPC JU), to give the bloc’s supercomputer initiative its full name, currently has eight operational supercomputers (with a ninth procured). Two of these will provide the allocation of eight million GPU hours to the four winners: Finland-based Lumi and Italy-based Leonardo, both of which are pre-exascale HPC supercomputers.

A fifth startup, Spain-based Multiverse Computing, which is working to improve the energy efficiency and speed of large language models using “quantum-inspired tensor networks”, just missed out on any prize money but gets a consolation: an allocation of 800,000 computational hours on another of the supercomputers, Spain’s (pre-exascale) MareNostrum 5.

This handful of European startups building large-scale AI models won’t be the first to get a taste of what HPC hardware can do. French general purpose AI model maker Mistral was a participant in an early pilot phase of the supercomputing provision last summer, using Leonardo to “run a few small experiments”, as co-founder and CEO Arthur Mensch told TechCrunch back in December, though he said it had not been used for model training at that point.

The EuroHPC JU has also historically provided some capacity to commercial players. However, demand for the supercomputers typically far outstrips supply, so the AI startups are essentially getting bumped to the front of the queue.

EU policymakers have also recognized there’s a need to reconfigure and retool the HPC infrastructure for the generative AI age. That’s why, back in January, the Commission announced a package of “AI innovation” measures that included proposals for upgrading the supercomputers and building out a support layer to improve accessibility, so that AI startups can more easily tap the infrastructure.




