Intel today launched a barrage of new products for the data center, tackling almost every enterprise workload out there. The company's diverse range of products highlights how today's data center is about more than just processors, with network controllers, customizable FPGAs, and edge device processors all part of the offering.
The star of the show is the new Cascade Lake Xeons. These were first announced last November, and at the time a dual-die chip with 48 cores, 96 threads, and 12 DDR4-2933 memory channels was going to be the top-spec part. But Intel has gone even further than initially planned with the new Xeon Platinum 9200 range: the top-spec part, the Platinum 9282, pairs two 28-core dies for a total of 56 cores and 112 threads. It has a base frequency of 2.6GHz, a 3.8GHz turbo, 77MB of level 3 cache, 40 lanes of PCIe 3.0 expansion, and a 400W power draw.
The new dual-die chips are dubbed "Advanced Performance" (AP) and slot in above the Xeon SP ("Scalable Processor") range. They'll be supported in two-socket configurations for a total of four dies, 24 memory channels, and 112 cores/224 threads. Intel doesn't plan to sell these as bare chips; instead, the company is going to sell motherboard-plus-processor packages to OEMs. The OEMs are then responsible for adding liquid or air cooling, deciding how densely they want to pack the motherboards, and so on. As such, there's no price for these chips, though we imagine it'll be somewhere north of "expensive."
In addition to these new AP parts, Intel is offering a full refresh of the Xeon SP line. The full Cascade Lake SP range includes some 60 different variants, offering different combinations of core count, frequency, level 3 cache, power dissipation, and socket count. At the top end are the Xeon Platinum 8280, 8280M, and 8280L. All three of these have the same basic parameters: 28 cores/56 threads, 2.7/4.0GHz base/turbo, 38.5MB L3, and 205W power. They differ in the amount of memory they support: the bare 8280 supports 1.5TB, the M bumps that up to 2TB, and the L goes up to 4.5TB. The base model comes in at $10,009, with the high-memory variants costing more still.
Across the full range, a number of other suffixes pop up too; N, V, and S are aimed at specific workloads (networking, virtualization, and search, respectively), and T is designed for long-life/reduced-thermal loads. Finally, a few models have a Y suffix. This denotes that they have a feature called "Speed Select," which allows applications to be pinned to the cores with the best thermal headroom and highest possible clock speeds.
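The pinning itself goes through the operating system's ordinary CPU-affinity interface. A minimal Linux sketch in Python, where the favored-core set is simply passed in as an assumption (in practice, Speed Select reports the high-priority cores through platform firmware, not through this code):

```python
import os

def pin_to_cores(pid, cores):
    """Restrict a process to the given set of CPU cores (Linux only).

    With Speed Select, `cores` would be the high-priority cores
    reported by the platform; here it is a hypothetical input.
    """
    os.sched_setaffinity(pid, cores)
    # Return the affinity mask actually in effect.
    return os.sched_getaffinity(pid)

# Pin the current process (pid 0 means "self") to core 0.
allowed = pin_to_cores(0, {0})
```

The same call works for any process the caller has permission to modify, which is how a management daemon, rather than the application itself, would typically apply the placement.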
Cascade Lake itself is an incremental revision of the Skylake SP architecture. The basic parameters (up to 28 cores/56 threads per die, 1MB of level 2 cache per core, up to 38.5MB of shared level 3 cache, up to 48 PCIe 3.0 lanes, six DDR4 memory channels, and AVX-512 support) remain the same, but the details show improvement. The chips support DDR4-2933, up from DDR4-2666, and the standard memory capacity is now 1.5TB instead of 768GB. Their AVX-512 support has been extended to include an extension called VNNI ("vector neural network instructions") aimed at accelerating machine-learning workloads. They also include (largely unspecified) fixes for most variants of the Spectre and Meltdown attacks.
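VNNI's headline addition is a fused instruction, VPDPBUSD, that multiplies four pairs of unsigned 8-bit and signed 8-bit values and accumulates the products into a 32-bit lane in a single step, replacing a three-instruction AVX-512 sequence in int8 inference inner loops. A rough scalar emulation of one lane's semantics, for illustration only:

```python
import numpy as np

def vpdpbusd_lane(acc, u8x4, s8x4):
    """Emulate one 32-bit lane of VNNI's VPDPBUSD instruction:
    accumulate four unsigned8 x signed8 products into a signed
    32-bit accumulator. The real 512-bit instruction does this
    for 16 such lanes at once."""
    prods = u8x4.astype(np.int32) * s8x4.astype(np.int32)
    return acc + int(prods.sum())

result = vpdpbusd_lane(10,
                       np.array([1, 2, 3, 4], dtype=np.uint8),
                       np.array([1, -1, 2, 2], dtype=np.int8))
# 10 + (1 - 2 + 6 + 8) = 23
```

Widening the products to 32 bits before summing is the point of the instruction: it avoids the intermediate 16-bit saturation that the older multi-instruction sequence had to manage.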
The other big thing that Cascade Lake brings beyond Skylake is support for Optane memory. Much of the Xeon SP range (though oddly, not the Xeon AP processors) can use Optane DIMMs built to the DDR4-T standard. Optane (also known as 3D XPoint) is a non-volatile solid-state memory technology developed by Intel and Micron. Its promise is to offer density comparable to flash, random access performance within an order of magnitude or two of DDR RAM, and enough write endurance that it can be used in memory-type workloads without failing prematurely. It does all this at a price considerably lower than DDR4.
Intel has been talking about using Optane DIMMs for memory-like tasks for some time, but only today is it finally launching, as Optane DC Persistent Memory. Systems can't use Optane exclusively (they'll need some conventional DDR4 as well), but by using the combination they can be readily equipped with huge quantities of memory, using 128, 256, or 512GB Optane DIMMs.
Applications unaware of non-volatile memory can use the Optane and DDR4 as a single large pool of memory. Behind the scenes, the DDR4 will cache the Optane, and the overall effect will simply be a machine with an awful lot of memory that's a little slower than regular memory. Alternatively, applications can be written to explicitly use non-volatile memory, gaining direct access to the Optane and using it as a kind of huge, randomly accessible, high-speed disk.
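In the explicit style, an application typically memory-maps a file that lives on the persistent DIMMs and then uses ordinary loads and stores. A minimal sketch of that pattern using Python's mmap; an ordinary temporary file stands in for a file on a DAX-mounted Optane filesystem, and real code would use a persistent-memory library such as PMDK to get precise flushing and ordering guarantees:

```python
import mmap
import os
import tempfile

# Stand-in backing file; on real hardware this would be a file on a
# DAX-mounted filesystem backed by Optane DIMMs.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

with mmap.mmap(fd, 4096) as pmem:
    # Ordinary stores go straight into the mapped region...
    pmem[0:5] = b"hello"
    # ...and a flush pushes them toward the persistence domain.
    pmem.flush()

# After remapping (or, with real persistent memory, a reboot),
# the data is still there.
with open(path, "rb") as f:
    stored = f.read(5)

os.close(fd)
os.remove(path)
```

The caching ("Memory Mode") path, by contrast, needs no application changes at all, which is the trade-off the two modes represent: transparency versus explicit control over what persists.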
To allay any concerns about endurance, Intel is offering a three-year warranty for Optane DC Persistent Memory, even for parts that have been running at their peak write performance for the entire three years.
Intel also announced some refreshes to its Xeon D systems-on-chips. In 2015, Intel launched the Broadwell-based Xeon D 1500 line; last year, these were joined by the Skylake SP-based Xeon D 2100 line, which offered a big increase in performance and memory capacity but at much higher power draws.
Today comes the Xeon D 1600 line, direct replacements for the 1500 parts. Surprisingly, these new 1600 parts continue to use the same Broadwell architecture as their predecessors; they're aimed at the same kinds of storage and networking workloads, with two to eight cores and up to 16 threads, up to 128GB of RAM, and power draws between 27 and 65W.
As well as the processor cores, they include (depending on which exact model you look at) four 10GbE Ethernet controllers, Intel QuickAssist Technology acceleration of compression and encryption workloads, six SATA 3 channels, four each of USB 3.0 and 2.0 ports, 24 lanes of PCIe 3.0, and eight lanes of PCIe 2.0.
Announced today but coming in the third quarter is a new Intel Ethernet controller. The 800 series, codenamed Columbiaville, will support 100Gb Ethernet. These controllers are rather more programmable than your typical Ethernet controller, with customizable, software-controlled packet parsing occurring within the Ethernet controller itself. This means the chip can send a packet for further processing, reroute it to a different destination, or do whatever an application needs, all without involving the host processor at all. The controllers also support application-defined queues and rate limits, so complex application-specific prioritization can be enforced.
For its final data center offering, Intel announced the Agilex FPGA (field-programmable gate array, a chip whose internal wiring can be reconfigured on the fly), built using the company's 10nm process. These chips offer up to 40TFLOPS of number-crunching performance and enable developers to build a range of application-specific accelerators. The FPGAs will sport a range of optional capabilities, such as four ARM Cortex-A53 cores; PCIe generation 4 or 5; support for DDR4, DDR5, and Optane DC Persistent Memory, with an option for HBM high-bandwidth memory mounted on-chip; and cache-coherent interconnects to attach them to Xeon SP chips.
For machine-learning workloads, they'll support a range of low-precision integer and floating-point formats. Further customization will come from the ability to work with Intel to directly embed custom chiplets into the FPGAs.
Over the past few years, FPGAs have become increasingly popular, especially in the cloud data centers operated by the likes of Microsoft, Google, and Amazon, as they offer a useful midpoint between the great flexibility of software-based computation and the great performance of hardware-based acceleration. They provide flexible acceleration of things like networking, encryption, and machine-learning workloads, in a manner that's readily upgraded and altered to adapt to new algorithmic requirements and models.
Intel plans to have these available from the third quarter.