What Happened?
China has rapidly built the largest power grid in history: its generating capacity is more than double that of the U.S., and since 2010 it has added more electricity production than the rest of the world combined. This build-out, anchored by coal, nuclear, hydro, wind, and solar, is now being weaponized for AI. Remote regions such as Inner Mongolia (Ulanqab and Horinger) have been designated as key hubs under the “East Data, West Computing” initiative, attracting more than 100 data centers with access to power priced as low as 3 cents per kilowatt-hour, less than half typical U.S. data-center rates.
These hubs benefit from priority permitting, land access, and subsidies that in some cases cover up to half of electricity bills. China is also constructing ultrahigh-voltage transmission lines to move cheap western power east and knitting data centers into a national “compute pool,” or “national cloud,” targeted for 2028. At the same time, Chinese tech firms such as Huawei, Alibaba, and Baidu are compensating for weaker domestic chips by bundling hundreds of local processors into large clustered systems, an approach made viable by abundant, cheap power even though it is energy-intensive and operationally complex. U.S. export controls continue to constrain China’s access to top-end semiconductors, though President Trump has recently eased some restrictions by allowing Nvidia’s H200 chips to be shipped to China, narrowing the chip gap but not eliminating it.
Why It Matters?
For investors, this is a structural shift in the AI infrastructure race. The U.S. retains a lead in cutting-edge models and chips, but faces an emerging “electron gap”: grid constraints and lengthy permitting threaten to limit available data-center power just as AI workloads explode. China, by contrast, is pairing state-directed capital with cheap energy to build compute capacity at scale, lowering unit costs for AI training and inference—even when using second-tier chips. This dynamic could compress cost structures for Chinese AI players (e.g., DeepSeek) and improve their competitiveness against U.S. cloud and model providers over time.
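The unit-cost argument above can be made concrete with rough arithmetic. In the sketch below, the cluster size, utilization rate, PUE overhead, and U.S. power price are illustrative assumptions, not figures from this article; only the roughly 3-cent-per-kilowatt-hour hub rate is sourced above.

```python
# Illustrative sketch of how power price shifts an AI cluster's annual
# electricity bill. All parameters except the ~3 cents/kWh China-hub rate
# are assumptions chosen for illustration.

HOURS_PER_YEAR = 8760

def annual_power_cost(cluster_mw, price_per_kwh, utilization=0.8, pue=1.3):
    """Annual electricity cost in dollars for a data-center cluster.

    cluster_mw    -- IT load in megawatts (assumed)
    price_per_kwh -- electricity price in $/kWh
    utilization   -- average fraction of peak load drawn (assumed)
    pue           -- power usage effectiveness, facility overhead (assumed)
    """
    kwh = cluster_mw * 1000 * HOURS_PER_YEAR * utilization * pue
    return kwh * price_per_kwh

china_hub = annual_power_cost(100, 0.03)   # ~3 cents/kWh, per the article
us_typical = annual_power_cost(100, 0.07)  # assumed "more than double" U.S. rate

print(f"100 MW cluster, China hub: ${china_hub / 1e6:.0f}M/yr")
print(f"100 MW cluster, U.S. rate: ${us_typical / 1e6:.0f}M/yr")
print(f"Annual savings:            ${(us_typical - china_hub) / 1e6:.0f}M/yr")
```

Under these assumptions, a 100 MW cluster pays on the order of $27M a year at hub rates versus roughly $64M at the assumed U.S. price, a gap that compounds across every training run and inference fleet.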
However, China’s strategy is debt-heavy and potentially prone to overbuild: State Grid’s liabilities have surged to around $450 billion, and there are early signs of oversupply in some locations. Still, from a strategic and geopolitical standpoint, abundant power keeps China “in the game” despite semiconductor constraints and gives its AI ecosystem time and headroom to evolve. For U.S. hyperscalers, constraints on grid expansion, higher power prices, and permitting friction may translate into rising capex, slower deployment, and pressure to vertically integrate into power generation.
What’s Next?
Looking ahead to 2030 and beyond, China is expected to hold hundreds of gigawatts of spare capacity—far exceeding projected global data-center power demand—while U.S. data centers may face a 44-gigawatt shortfall in the next three years if upgrades lag. This divergence sets up a multi-year competition in which power availability becomes as important as chips and algorithms in determining AI leadership. Key variables for investors to watch include: the pace and scale of China’s grid and data-center build-out; whether its AI systems meaningfully close the performance gap using bundled domestic chips and selective access to Nvidia hardware; and how aggressively the U.S. responds with grid investments, regulatory reform, and on-site generation for hyperscalers. Over time, sustained cheap power may enable China to narrow the AI capability gap even without full access to leading-edge chips, particularly if the “AI race” extends over decades. If that happens, China’s current power advantage could become a durable moat for its AI and cloud providers—provided debt, overcapacity, and geopolitical tensions don’t derail the strategy first.