- Semiconductor Revenue to Total $717 Billion in 2025.
- Semiconductor Revenue to Grow 19% in 2024.
- GPU Revenue to Grow 27% in 2025.
Table 1: Semiconductor Revenue Forecast, Worldwide, 2023-2025 (Billions of U.S. Dollars)

| | 2023 | 2024 | 2025 |
|---|---|---|---|
| Revenue | 530.0 | 629.8 | 716.7 |
| Growth (%) | -11.7 | 18.8 | 13.8 |
Source: Gartner (October 2024)
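As a quick sanity check, the growth rates in Table 1 can be recomputed from the revenue figures themselves (the 2022 base implied by the -11.7% decline is not given, so only the 2024 and 2025 rates are checked here):

```python
# Worldwide semiconductor revenue forecast, billions of USD (from Table 1).
revenue = {2023: 530.0, 2024: 629.8, 2025: 716.7}

def growth_pct(prev: float, curr: float) -> float:
    """Year-over-year growth as a percentage, rounded to one decimal."""
    return round((curr / prev - 1) * 100, 1)

print(growth_pct(revenue[2023], revenue[2024]))  # 18.8, matching the table
print(growth_pct(revenue[2024], revenue[2025]))  # 13.8, matching the table
```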
In the near term, the memory market and graphics processing units (GPUs) will bolster semiconductor revenue globally.
The worldwide memory market is forecast to grow 20.5% in 2025, to total $196.3 billion. Sustained undersupply will drive NAND prices up 60% in 2024, but they are poised to decline by 3% in 2025. Despite the softer pricing landscape, NAND flash revenue is forecast to total $75.5 billion in 2025, up 12% from 2024.
DRAM revenue will rebound on an improving supply-demand balance, unprecedented high-bandwidth memory (HBM) production and rising demand, and increasing double data rate 5 (DDR5) prices. Overall, DRAM revenue is expected to total $115.6 billion in 2025, up from $90.1 billion in 2024.
AI Impact on Semiconductors
Since 2023, GPUs have dominated the training and development of AI models. GPU revenue is projected to total $51 billion in 2025, an increase of 27%. “However, the market is now shifting to a return on investment (ROI) phase where inference revenues need to grow to multiples of training investments,” said George Brocklehurst, VP Analyst at Gartner.
Among the resulting shifts is a steep increase in demand for HBM, a high-performance memory solution for AI servers. “Vendors are investing significantly in HBM production and packaging to match next-generation GPU/AI accelerator memory requirements,” said Brocklehurst.
HBM revenue is expected to increase by more than 284% in 2024 and 70% in 2025, reaching $12.3 billion and $21 billion, respectively. Gartner analysts predict that by 2026, more than 40% of HBM chips will serve AI inference workloads, up from less than 30% today, mainly due to increased inference deployments and limited repurposing of training GPUs.
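The HBM figures above hang together arithmetically; a short check using only the dollar values and growth rates stated in the text (the implied 2023 base is a derived number, not one given in the source):

```python
# HBM revenue forecast, billions of USD (figures from the text).
hbm_2024, hbm_2025 = 12.3, 21.0

# 2025 growth implied by the two revenue totals.
growth_2025 = (hbm_2025 / hbm_2024 - 1) * 100
print(round(growth_2025))  # ~71, consistent with the stated 70% growth

# 2023 base implied by 284% growth in 2024 (derived, not stated in the text).
implied_2023 = hbm_2024 / (1 + 2.84)
print(round(implied_2023, 1))  # ~3.2 billion
```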