OpenAI's substantial investments in compute infrastructure are raising eyebrows, with estimates suggesting a commitment of $1.4 trillion over the next eight years. Spread evenly, that works out to roughly $175 billion a year, dwarfing the company's current annual revenue of approximately $13 billion. The costs cover the AI infrastructure, chiefly advanced chips and servers, needed both to train complex models and to generate responses for users.
In 2024, OpenAI's compute spending was divided between research and development ($5 billion) and inference ($2 billion). A significant portion of the R&D compute was allocated to experimental training runs and unreleased models, rather than the final training of models like GPT-4.5 and GPT-4o. To manage these expenses, OpenAI is exploring various strategies, including expanding paid versions of ChatGPT, offering data centre access to other companies, and developing hardware devices.
To support its infrastructure needs, OpenAI has agreements with several major vendors, including Broadcom, Oracle, Microsoft, Nvidia, AMD, Amazon AWS, and CoreWeave. These partnerships involve substantial financial commitments: Oracle, for example, is set to receive $60 billion annually for five years for cloud infrastructure. OpenAI projects a gross profit margin of 48% in 2025, rising to 70% by 2029, and hitting that trajectory will be crucial to sustaining such high levels of investment.