OpenAI CEO Sam Altman said that the company was forced to stagger the rollout of its newest model, GPT-4.5, because OpenAI is “out of GPUs.”
In a post on X, Altman said that GPT-4.5, which he described as “giant” and “expensive,” will require “tens of thousands” more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to ChatGPT Pro subscribers starting Thursday, followed by ChatGPT Plus customers next week.
Perhaps in part due to its enormous size, GPT-4.5 is wildly expensive. OpenAI is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model. That’s 30x the input cost and 15x the output cost of OpenAI’s workhorse GPT-4o model.
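As a quick sanity check on those multiples, the arithmetic can be sketched in a few lines. The GPT-4o rates used here ($2.50 per million input tokens, $10 per million output tokens) are assumptions drawn from OpenAI's public pricing at the time, not figures stated in this article:

```python
# Per-million-token API prices in USD.
# GPT-4.5 rates are from the article; GPT-4o rates are assumed
# from OpenAI's published pricing.
GPT_45 = {"input": 75.00, "output": 150.00}
GPT_4O = {"input": 2.50, "output": 10.00}  # assumption

input_multiple = GPT_45["input"] / GPT_4O["input"]
output_multiple = GPT_45["output"] / GPT_4O["output"]

print(f"Input cost: {input_multiple:.0f}x GPT-4o")   # 30x
print(f"Output cost: {output_multiple:.0f}x GPT-4o")  # 15x
```

Under those assumed GPT-4o rates, the 30x input and 15x output figures check out.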
GPT 4.5 pricing is unhinged. If this doesn’t have enormous models smell, I will be disappointed
— Casper Hansen (@casper_hansen_) February 27, 2025
“We’ve been growing a lot and are out of GPUs,” Altman wrote. “We will add tens of thousands of GPUs next week and roll it out to the Plus tier then […] This isn’t how we want to operate, but it’s hard to perfectly predict growth surges that lead to GPU shortages.”
Altman has previously said that a lack of computing capacity is delaying the company’s products. OpenAI hopes to address this in the coming years by developing its own AI chips and by building a massive network of data centers.