Investment in artificial intelligence infrastructure continues to grow at a record pace. Yet, for better or worse, its impact on the U.S. economy remains limited: AI investment's contribution to U.S. GDP growth rose from 0.44 to 0.59 percentage points after the introduction of ChatGPT. That figure looks modest next to previous economic booms, such as the dot-com era or the Japanese commercial real estate bubble of the 1980s.
By 2030, AI investment is expected to run at an annual rate of $7 trillion, with nearly half of that amount allocated to Nvidia stock. At the same time, the sector's growing debt burden, already approaching $200 billion, is worth watching. The rising popularity of circular financing schemes, together with companies' contingent liabilities, creates additional risk for creditors and may undermine the sector's sustainability.
Preliminary estimates suggest that current investment in AI infrastructure has only a moderate impact on overall U.S. GDP growth. Even at record levels, AI-driven GDP growth does not exceed 0.6 percentage points annually, well below the effects of previous technological or resource booms. Should investment continue at its current pace, AI's contribution to GDP could rise to 1.3 percentage points; however, rising borrowing and corporate debt burdens remain risks to long-term economic growth.
On the technical front, the growth of AI infrastructure runs into physical limits. Modern computing accelerators have reached a stage where the bottleneck is no longer memory capacity but memory bandwidth. Although GPUs and accelerators have improved markedly in raw performance in recent years (a rise that has made their manufacturers fixtures of stock screeners), the attached memory remains the weak link, limiting the scalability of large language models. As AI workloads shift toward inference, memory demand grows, making memory scarcity the key factor in how many clients a system can serve simultaneously.
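A rough back-of-envelope sketch illustrates why bandwidth, not compute, caps serving capacity in this regime. All figures here (the 2,000 GB/s bandwidth, the 140 GB model) are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope: memory-bandwidth-bound LLM inference throughput.
# All numbers below are illustrative assumptions, not vendor specs.

def tokens_per_second(bandwidth_gb_s: float, model_gb: float) -> float:
    """In the bandwidth-bound regime, generating each token requires
    streaming roughly the full model weights from memory once, so
    total throughput is approximately bandwidth / model size."""
    return bandwidth_gb_s / model_gb

bw = 2000.0     # hypothetical accelerator memory bandwidth, GB/s
model = 140.0   # hypothetical weight footprint, GB (e.g. 70B params in fp16)

tps = tokens_per_second(bw, model)
print(f"~{tps:.0f} tokens/s total, shared across all concurrent users")
# Adding compute FLOPs changes nothing in this regime; only more
# bandwidth (or batching that amortizes weight reads) raises throughput.
```

The design point this captures: a faster chip with the same memory system serves no more users, which is why the text treats memory, not compute, as the scaling constraint.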
Even the introduction of HBM4 memory by AMD and Nvidia only partially solves the problem, since limits on channel count and chip-stack height remain. Developers are trying to overcome these barriers with in-memory computing chips, multi-chip designs, and next-generation interfaces such as UCIe, which raise bandwidth and reduce latency. HBM4 is expected to deliver a 1.5x bandwidth increase over HBM3E, up to 2 TB/s. By 2027, HBM4E and the CXL interface are expected to further improve data-exchange efficiency and reduce power consumption by 20-30%.
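To see how the quoted 1.5x uplift compounds at the package level, a small sketch follows. It assumes the 2 TB/s figure is per stack and that an accelerator carries 6 or 8 stacks; both are assumptions of this sketch, not claims from the text:

```python
# Sketch: compounding the quoted 1.5x HBM4-over-HBM3E uplift across
# multiple stacks. Per-stack interpretation and stack counts are
# illustrative assumptions, not vendor specs.

HBM4_PER_STACK_TBS = 2.0   # 2 TB/s figure from the text, assumed per stack
UPLIFT = 1.5               # HBM4 vs HBM3E ratio quoted in the text

# Implied HBM3E baseline: 2.0 / 1.5 ≈ 1.33 TB/s per stack.
hbm3e_per_stack = HBM4_PER_STACK_TBS / UPLIFT

for stacks in (6, 8):      # hypothetical stacks per accelerator package
    print(f"{stacks} stacks: HBM3E ~{stacks * hbm3e_per_stack:.1f} TB/s, "
          f"HBM4 ~{stacks * HBM4_PER_STACK_TBS:.1f} TB/s")
```

Even at the package level the ratio stays fixed at 1.5x, which is why the text calls HBM4 only a partial fix: the channel-count and stack-height ceilings cap the multiplier itself.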
Amid these technological limits, investors are already showing caution. Growing corporate debt and the high capital intensity of projects are fueling skepticism among institutional funds, and that skepticism shows up in financial markets: shares of AI-related companies remain volatile, and technology indices react sharply to negative news about major projects and resource shortages. Given the long-horizon profitability forecasts and the risks to lenders, investors should watch closely how rapid capitalization growth squares with the limits of the production infrastructure.
The AI boom, then, is constrained not only by investors' economic caution but also by real technological barriers. Further scaling of computing platforms will depend on faster memory and advanced architectural solutions, and both economic growth and corporate profits will hinge on how effectively these bottlenecks are overcome.