Microsoft: Can't wait for ARM to power MOST of our cloud data centers! Take that, Intel! Ha! Ha!

Redmond fires bullet into WinTel beast's belly

Pic Microsoft today signaled more than half of its cloud data center capacity is set to be powered by 64-bit ARM servers.

In a briefing to techies at this year's OCP Summit in Silicon Valley, top Azure engineer Leendert van Doorn flashed up the above slide: it lays out Redmond's desire, over the next few years, to see the majority of its Bing web search and indexing, Azure cloud platform and database services, online storage, machine learning, and other features, powered by Windows Server on beefy ARM chips. Earlier today, Van Doorn said:

We feel ARM servers represent a real opportunity and some Microsoft cloud services already have future deployment plans on ARM servers ... We found that [ARM servers] provide the most value for our cloud services, specifically our internal cloud applications such as search and indexing, storage, databases, big data and machine learning. These workloads all benefit from high-throughput computing.

This comes as it was revealed today Microsoft has ported Windows Server 2016, plus language runtimes and middleware, to Qualcomm's 64-bit ARMv8-compatible 10nm FinFET Centriq 2400 system-on-chip, and to Cavium's 14nm FinFET 64-bit ARMv8-compatible ThunderX2 processor.

This operating system build is for Microsoft's internal use only: Redmond is evaluating the port on the rival ARM server platforms, pitting Qualcomm against Cavium for the Windows giant's love. The server software and hardware are being tested with non-production Bing and cloud services workloads.

Van Doorn said the ARM64 Windows Server port won't be used to run virtual machines in Azure, and a public release of the operating system is unlikely to appear in the near future simply because there isn't, yet, enough demand from enterprises. So for now, the ARMv8-compiled Windows Server will be used for internal testing, ahead of any deployment within Microsoft's data centers to provide cloud services.

We snapped a pic of the operating system running on Qualcomm's hardware in Microsoft's OCP Summit booth. It shows a Bing AI training example consuming pretty much all the compute capacity of the Centriq SoC's 48 cores. The OS build number is 15033, compiled on February 12.

Van Doorn added that modern Windows – both the Server and client flavors – is now built from a single source code base, dubbed OneCore, targeting Intel x86 and ARM-compatible machines and devices. He claimed not a single line of code is different between the ARM64 Windows Server 2016 build for Qualcomm and Cavium's processors. That's because the chipsets and motherboards adhere to strict open standards, meaning peripherals and other hardware should be automatically discoverable and programmable by the operating system using a generic ARM ACPI driver, rather than requiring drivers for specific chipsets.

This has helped Microsoft port Windows Server to 64-bit ARM: goodbye all the legacy crap no one needs, from fax drivers to support for every NIC under the sun, and mysterious fixed hardware registers. Instead, a uniform abstraction to access the underlying ARM server hardware is used by the operating system to boot up on any compliant machine, whether it's from Qualcomm, Cavium or another ARM player.
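
To make that concrete, here's a toy sketch – ours, not Microsoft's code – of what firmware-described hardware buys an operating system: the CPUs in a compliant box can be enumerated straight from the standard ACPI MADT table, with no board-specific driver in sight. The script below assumes a Linux-style /sys/firmware/acpi/tables layout (run it as root); Windows gets at the same tables through its own firmware interfaces.

```python
#!/usr/bin/env python3
"""Toy MADT parser: count the CPUs a machine's ACPI firmware describes.

Illustrative only -- this is not Microsoft's code. It reads the MADT
("APIC") table exposed by Linux under /sys/firmware/acpi/tables and
tallies the per-CPU entries: the same table-driven discovery an
ACPI-aware OS relies on instead of board-specific drivers.
"""
import struct

MADT_PATH = "/sys/firmware/acpi/tables/APIC"   # assumption: Linux sysfs path
HEADER_LEN = 36                                # standard ACPI table header
MADT_FIXED = HEADER_LEN + 8                    # + local intc address + flags

# Interrupt-controller structure types defined by the ACPI spec
LOCAL_APIC = 0x00   # x86 processor local APIC entry
GICC       = 0x0B   # ARM GIC CPU interface entry, one per CPU interface

def count_cpu_entries(path=MADT_PATH):
    with open(path, "rb") as f:
        madt = f.read()
    sig, length = struct.unpack_from("<4sI", madt, 0)   # header: signature, length
    assert sig == b"APIC", "not a MADT table"
    counts = {LOCAL_APIC: 0, GICC: 0}
    off = MADT_FIXED
    while off + 2 <= length:
        etype, elen = madt[off], madt[off + 1]          # every entry: type, length
        if etype in counts:
            counts[etype] += 1
        off += max(elen, 2)                             # guard against bogus zero lengths
    return counts

if __name__ == "__main__":
    c = count_cpu_entries()
    print(f"x86 local APIC entries: {c[LOCAL_APIC]}, ARM GICC entries: {c[GICC]}")
```

On an x86 Xeon box the tally shows local APIC entries; on an ARM server such as the Centriq or ThunderX2 it shows GICC entries instead – same table format, same parsing code, which is roughly the point Van Doorn was making about a single Windows build booting on either vendor's silicon.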

Engineers who have experienced ARM SoCs in the embedded world will know the pain of hunting for documentation and driver source. This standardized hardware model, plus the single-thread performance modern ARM server CPUs can now deliver, has led Microsoft to throw its weight behind the architecture for the data center.

That, and the fact that it wants Intel – which dominates the global server compute market, and supplies Microsoft's data centers with chips – to slash its prices. Qualcomm and Cavium's parts will compete against Intel Xeons not just on performance per watt, but also on price. Microsoft has found it is relatively easy to ask ARM system-on-chip designers to tweak their blueprints to suit its workloads, allowing it to maximize efficiency and throughput, whereas with Intel, it faced a take-it-or-leave-it situation.

"We operate in a highly competitive market and take all competitors seriously," Intel spokesperson Steve Gabriel told The Register a few hours ago. "We are confident that Xeon processors will continue to deliver the highest performance and lowest total cost of ownership for our cloud customers.

"However, we understand the desire of our customers to evaluate other product offerings."

RIP, Microsoft and Intel's Wintel Alliance. It's now WinComm... and WinAvium. Previously, Google signaled that it, too, wanted to end Intel's stranglehold in the data center, and was considering ARM and POWER processors. Miraculously, Google suddenly got exclusive early access to Intel's Skylake Xeons for its data centers. And Microsoft went with ARM for at least half of its future data center capacity. Clearly, some drama went down there.

Also at this year's Open Compute Project Summit

If you're not aware, the OCP works like this: internet giants including Facebook, Microsoft and Google offer blueprints and specifications of their servers to the world and its dog. These are the systems they have highly optimized for handling billions of users. And, like Facebook et al, you're free to take these blueprints to Foxconn and other electronics factories to crank out your own customized machines, which is supposed to work out better and cheaper than buying off-the-shelf boxes from the likes of Dell and Lenovo.

Unfortunately, it's not quite that simple, for various reasons. If you're not buying in the quantities that Amazon, Facebook and Google are, you may not get the super deal you're expecting. And the Open Compute specifications may not be suitable for your traditional data center: the machines may not physically fit in your racks, you may not be able to get the right power to them or cool them properly, and so on.

Still, it's not impossible to buy and use OCP gear if you're not a hyper-scaler – there are financial institutions that are using them, for instance – and you can learn a lot from the designs. It's kinda like a Silicon Valley show'n'tell.

Here's some OCP-related news from Tuesday's summit in Santa Clara, California:

  • Facebook refreshed its OCP server designs to include a Bryce Canyon storage chassis, Big Basin GPU server, Tioga Pass compute box, and Yosemite v2 and Twin Lakes compute hardware.
  • Nvidia and Microsoft showed off the HGX-1, a GPU-powered accelerator for artificial intelligence workloads.
  • AMD and Intel are working with Microsoft to get their latest processor silicon – Naples and Skylake, respectively – specified in Redmond's Project Olympus open server hardware blueprints.

Finally, Timothy Prickett Morgan, co-editor of our sister website The Next Platform, has an analysis of today's news over here on TNP. ®
