Arm Is Now Making Its Own Chips

The Hot Take: Arm wants a piece of that AI cash pie for sure. I'm wondering how its licensing partners are going to take this.

The chip design firm says Meta, OpenAI, Cerebras, and Cloudflare are among the first customers of its new artificial intelligence hardware.

Read the full article

US senators want to suspend Nvidia AI chip export licenses to China and its intermediaries — bipartisan letter to Commerce Dept says that Huang's claims of no chip diversion 'were contradicted by reporting available'

The Hot Take: Uh oh, the AI king looks to be in trouble.

U.S. senators Elizabeth Warren (D-Mass.) and Jim Banks (R-Ind.) told Commerce Secretary Howard Lutnick that he should suspend all active export licenses to China for Nvidia AI chips, saying that Nvidia's most advanced AI GPUs are being diverted into the country despite Jensen Huang's assurances.

Read the full article

Wine 11 Rewrites How Linux Runs Windows Games At the Kernel Level

The Hot Take: Linux is coming for Windows gamers for sure!

Linux gamers are seeing massive performance gains with Wine's new NTSYNC support, "which is a feature that has been years in the making and rewrites how Wine handles one of the most performance-sensitive operations in modern gaming," reports XDA Developers. Not every game will see a night-and-day difference, but for the games that do benefit from these changes, "the improvements range from noticeable to absurd." Combined with improvements to Wayland, graphics, and compatibility, as well as a major WoW64 architecture overhaul, the release looks less like an incremental update and more like one of Wine's most important upgrades in years.

From the report: The numbers are wild. In developer benchmarks, Dirt 3 went from 110.6 FPS to 860.7 FPS, which is an impressive 678% improvement. Resident Evil 2 jumped from 26 FPS to 77 FPS. Call of Juarez went from 99.8 FPS to 224.1 FPS. Tiny Tina's Wonderlands saw gains from 130 FPS to 360 FPS. And Call of Duty: Black Ops I is now actually playable on Linux, too. Those benchmarks compare Wine NTSYNC against upstream vanilla Wine, which means there's no fsync or esync either. Gamers who use fsync are not going to see such a leap in performance in most games. The games that benefit most from NTSYNC are the ones that were struggling before, such as titles with heavy multi-threaded workloads where the synchronization overhead was a genuine bottleneck. For those games, the difference is night and day.

And unlike fsync, NTSYNC is in the mainline kernel, meaning you don't need any custom patches or out-of-tree modules for it to work. Any distro shipping kernel 6.14 or later, which at this point includes Fedora 42, Ubuntu 25.04, and more recent releases, will support it. Valve has already added the NTSYNC kernel driver to SteamOS 3.7.20 beta, loading the module by default, and an unofficial Proton fork, Proton GE, already has it enabled. When Valve's official Proton rebases on Wine 11, every Steam Deck owner gets this for free.

All of this is what makes NTSYNC such a big deal, as it's not simply a run-of-the-mill performance patch. Instead, it's something much bigger: this is the first time Wine's synchronization has been correct at the kernel level, implemented in the mainline Linux kernel, and available to everyone without jumping through hoops. Read more of this story at Slashdot.
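If you want to know whether your own box meets those requirements — a 6.14+ kernel with the ntsync driver loaded — a minimal heuristic check is sketched below in Python. It assumes the mainline driver exposes itself at /dev/ntsync (the device the driver registers when loaded); this is a quick probe, not Wine's own detection logic.

```python
import os
import platform

def ntsync_available():
    """Heuristic check for Wine NTSYNC support on this machine.

    Two conditions: kernel release is 6.14 or newer (where the ntsync
    driver was mainlined), and the /dev/ntsync character device exists
    (i.e. the module is actually loaded).
    """
    release = platform.release()               # e.g. "6.14.0-22-generic"
    major, minor = (int(p) for p in release.split(".")[:2])
    kernel_ok = (major, minor) >= (6, 14)
    device_ok = os.path.exists("/dev/ntsync")  # driver loaded and exposed?
    return kernel_ok, device_ok

if __name__ == "__main__":
    kernel_ok, device_ok = ntsync_available()
    print(f"kernel new enough: {kernel_ok}, /dev/ntsync present: {device_ok}")
```

If the kernel is new enough but the device is missing, the module may simply not be loaded by default on your distro, which is exactly what Valve changed in the SteamOS 3.7.20 beta.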

Read the full article

Elon Musk Announces $20B 'Terafab' Chip Plant in Texas To Supply His Companies

The Hot Take: US domestic chip manufacturing appears to be exploding. That's an insane goal, but too bad it's just for his companies.

"Billionaire Elon Musk has announced plans to build a $20 billion chip plant in Austin, Texas" reports a local news station: Musk announced on Saturday night during a livestream on his social media platform X that the plant, called "Terafab," will be built near Tesla's campus and gigafactory in eastern Travis County. The long-anticipated project is a joint venture between Musk-owned properties Tesla, SpaceX and xAI... The Terafab plant is expected to begin production in 2027.

Musk "has said the semiconductor industry is moving too slow to keep up with the supply of chips he expects to need," writes Bloomberg — quoting Musk as saying "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." Musk detailed some specific plans, including producing chips that can support 100 to 200 gigawatts a year of computing power on Earth, and chips that can support a terawatt in space, but gave no timelines for the facility or its output... The facility is expected to make two types of chips, one of which will be optimized for edge and inference, primarily for his vehicle, robotaxi and Optimus humanoid robots. The other will be a high-power chip, designed for space that could be used by SpaceX and xAI... Musk said he expects xAI to use the vast majority of the chips.

During the presentation, Musk also unveiled a speculative rendering of a future "mini" AI data center satellite, one piece of a much larger satellite system that he wants SpaceX to build to do complex computing in space. In January, SpaceX requested a license from the Federal Communications Commission to launch one million data center satellites into orbit around Earth. Musk said that the mini satellite he revealed would have the capacity for 100 kilowatts of power. "We expect future satellites to probably go to the megawatt range," Musk said.
Raising money to build and launch AI data centers in space is one of the driving forces behind SpaceX's planned IPO later this year. SpaceX is expected to raise as much as $50 billion in a record-setting IPO this summer which could value it at more than $1.75 trillion, Bloomberg News reported earlier.

Read the full article

Microsoft and Nvidia launch AI partnership to speed up nuclear power plant permitting and construction — simulation tools and generative models could hasten historically lengthy processes

The Hot Take: The Green New Deal agenda doesn't fit with AI replacing the plebes, for sure. So they push us toward solar and wind while lining up viable power options for the bots?

Microsoft and Nvidia are joining forces to accelerate the construction of nuclear power plants for power-hungry AI data centers. The partnership combines generative AI, digital twin simulation, and Nvidia's Omniverse platform to streamline the nuclear lifecycle from permitting through operations.

Read the full article

Nvidia admits one GPU to rule them all was a fairy tale

The Hot Take: Nvidia is starting to feel the heat of competition and watching those dollars evaporate as customers try other vendors.

Nvidia is preparing to launch a new chip designed to speed up AI responses, breaking with its long-running habit of flogging the same processor for every job. Nvidia chief executive Jensen Huang is expected to unveil a chip focused on "inference", meaning running models rather than training them. According to people familiar with the plans for GTC next week, the chip is the first new product to emerge from December's $20bn deal to hire the founders of Groq, a start-up building "language processing units" tuned for high-speed answers to complex AI queries. Three months after that deal, Nvidia is expected to debut a Groq-based LPU to sit alongside its forthcoming flagship Vera Rubin graphics processing unit. It is part of a product family meant to head off challengers and meet new kinds of AI applications.

The move lands as the world's most valuable company gets grief from start-ups and customers, such as Google, all busy cooking up their own AI chips. This week, Meta announced a new family of four inference-focused processors. One Silicon Valley venture investor said: "We are entering an interesting phase that is not 'Nvidia dominant'." For the past three years, Nvidia's $4.5tn market capitalisation has been built on its GPUs, which have become the backbone of generative AI. They train models such as the ones behind OpenAI's ChatGPT. Huang has insisted that a single system can handle training and then run the chatbots and coding tools built on top. Big Tech has spent hundreds of billions deploying these boxes while funding their own specialised silicon. But the growing sophistication of AI tools, including "agentic" coding systems, is pushing Huang to ditch the mantra that one GPU fits every workload.

The Groq deal was worth about $20bn, according to people familiar with the transaction, making it one of the biggest deals in Nvidia's 33-year history. It includes licensing and the hiring of key talent, including Groq founder and former Google chip executive Jonathan Ross. Groq, which had been working with Samsung to manufacture its products, previously bragged that its LPUs were faster and more efficient than Nvidia's GPUs for inference. Nvidia clearly listened.

Nvidia's flagship Blackwell and Rubin systems lean on high-bandwidth memory to cope with the massive data loads that AI models fling around. But HBM is expensive and in increasingly short supply as SK Hynix and Micron struggle to keep up with demand. The Groq-style chip will use SRAM rather than the DRAM used for HBM, according to people familiar with Nvidia's plans, because SRAM is more available and better suited to speeding up AI "reasoning" tasks. Bank of America reckons that by 2030, inference will account for 75 per cent of AI data centre spending, up from about 50 per cent last year, and it expects a "broadened AI portfolio" at GTC.
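The SRAM-versus-HBM point is easiest to see with a back-of-the-envelope roofline: single-stream decoding re-reads the model weights for every generated token, so the token rate is capped by memory bandwidth. The numbers below are illustrative assumptions only (a 70B-parameter fp16 model, nominal bandwidth figures), not specifications for any Nvidia or Groq part:

```python
def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Bandwidth-bound ceiling on single-stream decode speed: one full
    weight read per generated token (ignores KV cache, batching, overlap)."""
    return bandwidth_bytes_per_sec / model_bytes

MODEL_BYTES = 70e9 * 2  # 70B parameters at 2 bytes each (fp16/bf16)

# Hypothetical bandwidth figures for comparison:
hbm_cap = max_tokens_per_sec(MODEL_BYTES, 8e12)    # ~8 TB/s HBM-class stack
sram_cap = max_tokens_per_sec(MODEL_BYTES, 80e12)  # 10x-wider on-chip SRAM fabric

print(f"HBM-bound ceiling:  {hbm_cap:.0f} tokens/s")
print(f"SRAM-bound ceiling: {sram_cap:.0f} tokens/s")
```

In this toy model the wider on-chip memory buys an order of magnitude in decode speed, which is the Groq pitch in a nutshell. In practice SRAM capacity per chip is tiny, so LPU-style designs shard the weights across many chips to get both the capacity and the aggregate bandwidth.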

Read the full article

Intel introduces its Binary Optimization Tool, aiming to fundamentally redefine x86 performance

The Hot Take: Intel doing what it's great at with its CPUs: software optimizations.

With the introduction of the new Binary Optimization Tool (BOT), Intel is taking a significantly different approach to boosting the performance of modern processors than in the past. While traditional optimizations rely heavily on developers and are determined during the software compilation process, Intel is now focusing on a post-compilation optimization layer based directly on […]

Read the full article

Canonical Joins Rust Foundation

The Hot Take: Linux appears to be getting more Rust by the day.

BrianFagioli writes: Canonical has joined the Rust Foundation as a Gold Member, signaling a deeper investment in the Rust programming language and its role in modern infrastructure. The company already maintains an up-to-date Rust toolchain for Ubuntu and has begun integrating Rust into parts of its stack, citing memory safety and reliability as key drivers. By joining at a higher tier, Canonical is not just adopting Rust but also stepping closer to its governance and long-term direction.

The move also highlights ongoing tensions in Rust's ecosystem. While Rust can reduce entire classes of bugs, it often depends heavily on external crates, which can introduce complexity and auditing challenges, especially in enterprise environments. Canonical appears aware of that tradeoff and is positioning itself to influence how the ecosystem evolves, as Rust continues to gain traction across Linux and beyond.

"As the publisher of Ubuntu, we understand the critical role systems software plays in modern infrastructure, and we see Rust as one of the most important tools for building it securely and reliably. Joining the Rust Foundation at the Gold level allows us to engage more directly in language and ecosystem governance, while continuing to improve the developer experience for Rust on Ubuntu," said Jon Seager, VP Engineering at Canonical. "Of particular interest to Canonical is the security story behind the Rust package registry, crates.io, and minimizing the number of potentially unknown dependencies required to implement core concerns such as async support, HTTP handling, and cryptography -- especially in regulated environments."

Read the full article