Broadwell-EP hosts, any specs?
Perhaps the most exciting thing about this is that we should be able to use new instructions like TSX-NI on these new servers. Is this functionality enabled on these hosts, and can I rely on the availability of these instructions for my applications?
More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.
Draco
---
Edit: I can see that ADX and TSX-NI (hle and rtm flags) are enabled in /proc/cpuinfo. Any official word on new instruction availability, though?
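In case anyone else wants to confirm this from code rather than by grepping /proc/cpuinfo, a minimal sketch along these lines should work (assuming GCC or Clang on x86-64; the bit positions come from CPUID leaf 7, subleaf 0, and nothing here is Linode-specific):

```c
/* Minimal sketch: confirm the cpuinfo flags via CPUID leaf 7, subleaf 0.
   Assumes GCC or Clang on x86-64. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Structured extended feature flags live in leaf 7, subleaf 0. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }

    /* EBX bit 4 = HLE, bit 11 = RTM, bit 19 = ADX. */
    printf("HLE: %s\n", (ebx & (1u << 4))  ? "yes" : "no");
    printf("RTM: %s\n", (ebx & (1u << 11)) ? "yes" : "no");
    printf("ADX: %s\n", (ebx & (1u << 19)) ? "yes" : "no");
    return 0;
}
```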
5 Replies
@bwDraco:
More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.
Linode does not release this kind of information, and I doubt they ever will.
@bwDraco:
I can see that ADX and TSX-NI are enabled in /proc/cpuinfo. Any official word on new instruction availability, though?
It's unlikely that Linode will make any commitment, because 1) not everybody is on a 2697v4 host, and 2) the instructions may need to be disabled at some point for security or stability reasons. For example, AVX was disabled in Xen for a long time due to issues with the registers not being saved properly during context switches. My recommendation would be to verify that the feature is available at program start and use it if it is, but have fallbacks if it isn't. Make sure you do the full feature test recommended by Intel. (I say this because AVX's feature test had two parts, and many programs only did the first part, which passed even when Xen had disabled AVX, because Xen only caused the second part to fail; this went on to cause loads of problems.) I realize that TSX is a bit different from AVX, but stuff does happen, like TSX being broken in the entire Haswell line and needing to be disabled in a microcode update.
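To illustrate what I mean by the full test, here's a rough sketch of the two-part AVX check, assuming GCC or Clang on x86-64 (the helper names are just illustrative, not from any particular library):

```c
/* Rough sketch of the two-part AVX check, assuming GCC or Clang on x86-64.
   Part 1 is the CPUID feature bit; part 2 confirms that the OS (or
   hypervisor) actually saves the YMM state on context switches, which is
   the part Xen was failing. */
#include <cpuid.h>
#include <stdio.h>

static unsigned long long read_xcr0(void)
{
    unsigned int lo, hi;
    /* XGETBV with ECX = 0 reads XCR0, which describes the register state
       the OS saves and restores. Only safe once OSXSAVE is confirmed. */
    __asm__ volatile ("xgetbv" : "=a"(lo), "=d"(hi) : "c"(0));
    return ((unsigned long long)hi << 32) | lo;
}

static int avx_usable(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;

    /* Part 1: the CPU advertises AVX and OSXSAVE (so XGETBV is usable). */
    if (!((ecx >> 28) & 1) || !((ecx >> 27) & 1))
        return 0;

    /* Part 2: XCR0 bits 1 and 2 set, i.e. XMM and YMM state are saved. */
    return (read_xcr0() & 0x6) == 0x6;
}

int main(void)
{
    printf("AVX usable: %s\n", avx_usable() ? "yes" : "no");
    return 0;
}
```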
@dwfreed:
@bwDraco: More significantly, are there any specs on these new hosts? What about the typical number of Linodes run on each host at common sizes (2 GB, 4 GB, 8 GB)? I would be very interested to see what Linode's latest hardware looks like and how it's being used.
Linode does not release this kind of information, and I doubt they ever will.
Well, many years back, on the old Xen servers with 8 shared cores per instance, Linode said that they were running about 40 Linode 1 GB instances on a single host. What I'm most curious about is the level of CPU contention typical of these new hosts and therefore how predictable CPU performance is. Some of the biggest players in the cloud space tend to guarantee that their cores can deliver full performance at all times except possibly for the low end (e.g. "shared-core" instances). I'm wondering whether that's improved with the new hosts, which have significantly more cores than before. The rest of the technical details are not a big deal.
When I first signed up for Linode, the servers were 2S/16C/32T; they're apparently now 2S/36C/72T. With these new servers, it certainly looks like they could pack 80 to 100 2 GB Linodes onto a single host without causing an excessive amount of contention.
I recognize that the cloud services market is more competitive than ever, so Linode keeping its cards close to the chest with respect to host hardware is probably for the best.
Draco
@bwDraco:
Well, many years back, on the old Xen servers with 8 shared cores per instance, Linode said that they were running about 40 Linode 1 GB instances on a single host. What I'm most curious about is the level of CPU contention typical of these new hosts and therefore how predictable CPU performance is.
They stopped releasing that information a long long time ago (easily 5 years ago).
@bwDraco:
Some of the biggest players in the cloud space tend to guarantee that their cores can deliver full performance at all times except possibly for the low end (e.g. "shared-core" instances).
You mean like Azure, Google Compute Engine, SoftLayer, or Amazon, none of which actually do that? Sure, Amazon has the "ECU", but that's based on a unit of measure so old that it's completely useless as an indicator of actual resource availability in modern systems and applications.
With respect to the new instructions, it definitely looks best to verify at runtime that they are actually usable before relying on them.
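To be concrete about the pattern I'm planning on, here's a rough sketch of transactional code with a conventional locked fallback, assuming GCC or Clang with -mrtm on x86-64; the lock, counter, and rtm_available flag are placeholders rather than code from a real application:

```c
/* Rough sketch: use RTM when it's there, fall back to a plain lock when it
   isn't or when a transaction aborts. Build with -mrtm. The lock, counter,
   and rtm_available flag are placeholders, not from a real application. */
#include <immintrin.h>
#include <stdatomic.h>

static atomic_int fallback_lock;   /* 0 = free, 1 = held */
static long counter;
int rtm_available;                 /* set once at startup via a CPUID check */

static void lock_acquire(void) { while (atomic_exchange(&fallback_lock, 1)) ; }
static void lock_release(void) { atomic_store(&fallback_lock, 0); }

void increment(void)
{
    if (rtm_available) {
        if (_xbegin() == _XBEGIN_STARTED) {
            /* Subscribe to the lock: abort if someone already holds it, and
               get aborted automatically if someone takes it while we run. */
            if (atomic_load(&fallback_lock) != 0)
                _xabort(0xff);
            counter++;             /* transactional path */
            _xend();
            return;
        }
        /* Transaction aborted (conflict, capacity, etc.): fall through. */
    }
    lock_acquire();
    counter++;                     /* conventional locked path */
    lock_release();
}
```

The important detail is reading the lock inside the transaction: if another thread holds (or takes) the fallback lock, the transaction aborts instead of racing with it.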
Draco