The U.S. House shelved the GAIN AI Act, blocking a rule that would have forced AMD and Nvidia to put U.S. buyers ahead of China for advanced GPUs, though Beijing's own limits blunt the impact.
OpenAI's Sam Altman announced in an internal memo that the company is in 'Code Red' status, putting every other project on the back burner in favor of ChatGPT.
Nvidia has released a paper describing TiDAR, a decoding method that merges two historically separate approaches to accelerating language model inference.
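The blurb does not spell out the mechanics, but one common way to merge a cheap parallel drafter with autoregressive decoding is a draft-and-verify loop. The sketch below is a generic illustration with toy stand-in "models" (draft_propose and target_next are invented for this example); it is not the TiDAR algorithm itself.

```python
# Generic draft-and-verify decoding sketch: a cheap drafter proposes a block of tokens
# in parallel, and the target model accepts the longest prefix it agrees with.
import random

VOCAB = list(range(100))

def draft_propose(prefix, k):
    """Hypothetical cheap drafter: proposes k tokens in one shot (here, pseudo-randomly)."""
    rng = random.Random(sum(prefix) + len(prefix))
    return [rng.choice(VOCAB) for _ in range(k)]

def target_next(prefix):
    """Hypothetical target model: a deterministic toy rule standing in for greedy decoding."""
    return (sum(prefix) * 31 + 7) % len(VOCAB)

def decode(prompt, max_new=16, block=4):
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        draft = draft_propose(out, block)      # parallel draft of `block` tokens
        for tok in draft:                      # verify left to right against the target
            if target_next(out) == tok:        # target agrees -> accept the drafted token
                out.append(tok)
            else:
                break                          # first disagreement ends the accepted prefix
        out.append(target_next(out))           # always emit one token the target chose itself
    return out[:len(prompt) + max_new]

print(decode([1, 2, 3]))
```

The speedup in such schemes comes from accepting several drafted tokens per verification step while the output stays faithful to the verifier's own decoding.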
Major insurers are moving to ring-fence their exposure to artificial intelligence failures after a run of costly, highly public incidents heightened concerns about systemic, correlated losses.
Elon Musk argues that terawatt-scale AI computing will soon be impossible to power and cool on Earth and must move to orbit. But despite abundant solar energy and radiative cooling in GEO, launch mass, radiation hardening, and networking challenges make such space-based data centers a distant dream.
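For a rough sense of the cooling side of that argument: a purely radiative heat sink is bounded by the Stefan-Boltzmann law, so radiator area grows linearly with the power to be rejected. The sketch below uses assumed, illustrative values (1 GW of waste heat, a 300 K radiator, emissivity 0.9) rather than any figure Musk cited.

```python
# Back-of-the-envelope radiator sizing for an orbital data center (illustrative assumptions).
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 * K^4)
p_waste = 1e9          # assumed waste heat to reject: 1 GW
t_radiator = 300.0     # assumed radiator temperature, K
emissivity = 0.9       # assumed surface emissivity

# One-sided radiated flux, ignoring absorbed sunlight and view-factor losses.
flux = emissivity * SIGMA * t_radiator**4      # roughly 413 W per m^2
area_m2 = p_waste / flux                       # roughly 2.4e6 m^2, i.e. about 2.4 km^2
print(f"flux: {flux:.0f} W/m^2, radiator area for 1 GW: {area_m2 / 1e6:.1f} km^2")
```

Scaling that to the terawatt figures in the argument multiplies the radiator area by roughly a thousand, which is the kind of launch-mass problem the second sentence alludes to.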
Anthropic, Microsoft, and Nvidia have struck a joint partnership to run Claude on Microsoft's Azure servers using Nvidia hardware. Anthropic will commit $30 billion to Azure compute as part of the move, with the option to contract up to an additional gigawatt of capacity. The deal could help all three companies meet their existing commitments, but it also further inflates the ballooning AI bubble.
Elon Musk claims Tesla may need 100–200 billion AI chips per year, a volume far beyond what TSMC and Samsung can supply, which is why he is considering building Tesla's own fab rather than betting that existing foundries can scale fast enough.
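To see why such a volume dwarfs existing foundry capacity, a rough wafer calculation helps; every number below is an illustrative assumption (die size, yield, and total wafer output are not sourced figures).

```python
# Rough wafer-supply math (all figures are illustrative assumptions, not sourced data).
good_dies_per_wafer = 60        # assumed good dies per 300 mm wafer for a large AI chip
foundry_wafers_per_year = 16e6  # assumed total 300 mm-equivalent output across all nodes

supply_ceiling = good_dies_per_wafer * foundry_wafers_per_year   # ~1 billion chips per year
demand_low, demand_high = 100e9, 200e9                            # the claimed 100-200 billion

print(f"upper-bound supply: ~{supply_ceiling / 1e9:.1f} billion chips per year")
print(f"claimed demand is roughly {demand_low / supply_ceiling:.0f}x "
      f"to {demand_high / supply_ceiling:.0f}x that ceiling")
```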
Nvidia's Vera Rubin platform may upend the AI-server market: Nvidia will ship partners fully built L10 compute trays that integrate all compute, power, and cooling hardware, leaving OEMs and ODMs to handle only rack-level integration. That shift hands Nvidia the core server design, and with it much of the value and the margins.
A J.P. Morgan report says the AI industry needs to generate at least $650 billion in annual revenue for investors to earn a 10% return on all the money projected to flow into it through 2030.
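The headline number is easiest to read as a return-on-capital calculation. The sketch below assumes, purely for illustration, a cumulative capital base of $5 trillion through 2030; the report's own inputs may differ.

```python
# Simple return-on-capital illustration (the capital base is an assumption, not the report's figure).
capex = 5.0e12            # assumed cumulative AI infrastructure spend through 2030, USD
required_return = 0.10    # the 10% hurdle rate cited in the report

profit_needed = capex * required_return
print(f"annual profit needed for a 10% return: ${profit_needed / 1e9:.0f}B")
# Revenue has to exceed that profit figure once power, operations, and chip depreciation
# are paid for, which is how annual-revenue requirements in the $650B range arise.
```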
OpenAI has asked the Trump administration to expand a major CHIPS Act tax credit to support the build-out of AI infrastructure, including servers, data centers, and power systems.
After President Trump and U.S. Treasury Secretary Scott Bessent reiterated that Nvidia will be barred from selling its most advanced chips to China, Jensen Huang has confirmed that there are currently no active discussions on the matter. Nvidia also needs authorization from Beijing itself before it can sell GPUs to Chinese firms.
Google Cloud has launched new Axion CPU and Ironwood TPU instances that pair Arm-based general-purpose computing with Google's seventh-generation TPUs for AI acceleration. Ironwood pods with up to 9,216 chips and 42.5 FP8 ExaFLOPS per pod vastly surpass Nvidia's GB300 systems and form the foundation of Google's AI Hypercomputer for large-scale model training and inference.
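The pod-level spec implies the per-chip figure directly, which is a useful sanity check when comparing against rival accelerators.

```python
# Per-chip throughput implied by the stated pod-level Ironwood figures.
pod_exaflops_fp8 = 42.5          # FP8 ExaFLOPS per pod, as announced
chips_per_pod = 9216

per_chip_pflops = pod_exaflops_fp8 * 1000 / chips_per_pod   # ExaFLOPS -> PetaFLOPS
print(f"~{per_chip_pflops:.1f} FP8 PetaFLOPS per Ironwood chip")
```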
Nvidia CEO Jensen Huang said that China will win the AI race because of its abundant power and because the U.S. is losing the chance for its hardware to become the standard tool of Chinese AI developers.
U.S. Treasury Secretary Scott Bessent has highlighted a potential future where China could get its hands on Nvidia's current-gen flagship AI chips... when they're no longer the current-gen flagships. Washington is considering letting Nvidia sell Blackwell chips to China once they're outdated by at least a year or two.