Intel has announced the next family of Xeon processors that it plans to ship within the first half of next year. The new parts represent a substantial upgrade over current Xeon chips, with up to 48 cores and 12 DDR4 memory channels per socket, supporting up to two sockets.
These processors will likely be the top-end Cascade Lake processors; Intel is branding them "Cascade Lake Advanced Performance," with a higher level of performance than the Xeon Scalable Processors (SP) below them. The current Xeon SP chips use a monolithic die, with up to 28 cores and 56 threads. Cascade Lake AP will instead be a multi-chip processor with multiple dies contained within a single package. AMD is using a similar approach for its comparable products; the Epyc processors use four dies in each package, with each die having eight cores.
The switch to a multi-chip design is likely driven by necessity: as dies get bigger and bigger, it becomes more and more likely that they will contain a defect. Using multiple smaller dies helps avoid these defects. Because Intel's 10nm manufacturing process is not yet good enough for mass-market production, the new Xeons will continue to use a version of the company's 14nm process. Intel hasn't yet revealed what the topology within each package will be, so the exact distribution of those cores and memory channels between chips is as yet unknown. The large number of memory channels will demand an enormous socket, currently believed to be a 5,903-pin connector.
Intel, notably, is listing only a core count for these processors, instead of the usual core count/thread count combination. It isn't clear whether that means the new processors won't have hyperthreading at all, or whether the company is preferring to emphasize physical cores and avoid some of the security concerns that hyperthreading can present in certain usage scenarios. Cascade Lake silicon will contain fixes for many variants of the Spectre and Meltdown attacks.
Overall, the company is claiming about a 20 percent performance improvement over the current Xeon SPs and 240 percent over AMD's Epyc, with bigger gains coming in workloads that are particularly memory-bandwidth intensive. The new processors will include a number of new AVX512 instructions designed to boost the performance of running neural networks; Intel reckons that this will improve the performance of image-matching algorithms by as much as 17 times over the current Xeon SP family. The fine print for the performance comparisons notes that hyperthreading/simultaneous multithreading is disabled on both the Xeon SP and Epyc systems.
At the other end of the performance spectrum, Intel said that its latest crop of Xeon E-2100 processors is shipping today. These are single-socket chips intended for small servers, offering up to 6 cores and 12 threads per chip. Functionally, they are Xeon-branded versions of the mainstream Core processors, with the only notable differences being that they support ECC memory and use a server variant of the chipset.