Category: Benchmarks

Hyper-V Disk Speed Musings


While testing the Intel 905P and gathering its benchmark numbers, I went ahead and installed Hyper-V, set it up correctly, created a Windows Server 2016 VM, and ran the same CrystalDiskMark benchmarks again, both on the Hyper-V host and inside the VM.

I only had the one VM running, and the host was doing nothing other than hosting while I ran the benchmark.

At a queue depth of 32, sequential reads and writes are where they should be.

Moving on to random reads and writes at queue depth 32, we can start to see a penalty.  Just wait.

At a queue depth of 1, sequential reads and writes are looking not too shabby.  The VM itself sees a reduction in read speed, but nothing too bad.

And here we get to the interesting part.  Look at the random reads and writes from inside the VM.  I ran this test multiple times and got nearly identical numbers every go.   From a VM, raw disk performance takes a tremendous, nay, massive hit.

I went so far as to reboot the host, rebuild the mirror, and ran the numbers again.  Each time was basically the same.  I made sure the virtual disk on the VM was not throttled, and it was a Gen 2 VM.

Quite odd.  I’m hoping to soon be able to test whether the auto-expanding (dynamic) VHDX setup was the culprit.  Stay tuned.

Intel 905P SSD Benchmarks


Today I’ve got some benchmarks on the Intel 905P SSD 960GB card.

For reference, this is a PCIe add-in card (x4 lanes) that uses Intel/Micron 3D XPoint (Optane) memory instead of the usual NAND flash.  The promise is very high random performance at low queue depths.

For these benchmarks the server config is:

  • Dell R740XD containing two Xeon Gold 6144 CPUs (3.5 GHz)
  • Windows Server 2016 Standard fully updated, full GUI
  • For tests with two cards, each card was placed in a PCIe slot on different CPUs
  • CrystalDiskMark version 5.2.0.
  • I ran CrystalDiskMark three times for each test, but since the results were so close to one another each time, I’m only going to show the results of the first run.
  • I’m showing the data that includes the IOPS count, as I find that more informative.

I’m also including the following drives for comparison’s sake:

  • Intel P4800X 750GB
  • Intel P4510 4TB
  • Micron 9200 MAX 1.6TB
  • Samsung 850 Pro 2TB SATA drive (note: this was in a Dell R720 server attached to a PERC H710P RAID controller set to no read ahead, write through, and use disk cache.  Those settings give SSDs on that controller the fastest performance they can get.)

For more detail on just how impressive 3D XPoint is compared to normal NAND, I highly recommend a stop by AnandTech, as they go into much greater depth than I do.

To the charts!

If you look at sequential reads and writes at a queue depth of 32, you can see just how much of an advantage PCIe-based SSDs have over their SATA ancestors.  On the PCIe side of things, the P4510 is king.

The same is pretty much the case for random reads and writes at this high queue depth.

At the lower queue depth of 1, we start to see Optane come into its own on the random read side.  On the write side it sits above the Micron, yet is beaten by the P4510.

Now we come to random reads at a queue depth of 1.  The 905P trounces everything; nothing else even comes close for random reads.

Writes are a different story, but the 905P is still no slouch.

This being a server, however, where uptime is king and redundancy is quite necessary, what happens if we mirror the drives?  Sadly, the Dell server does not yet support Intel VROC, so I went with a software mirror inside Windows.  Additionally, I added benchmarks of the Samsung 850 Pro 2TB in RAID 1 (2 drives) and RAID 10 (4 drives).  Settings on the RAID controller are the same as above.
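One reason a mirror hurts writes: in RAID 1, every write has to land on both drives, so each write effectively completes at the speed of the slower copy. Here is a toy model of that effect; the latency numbers are purely hypothetical, not measured from these drives:

```python
import random

random.seed(42)  # make the toy model deterministic

def write_latency_us() -> float:
    """Hypothetical per-write latency sample, in microseconds."""
    return random.uniform(8.0, 20.0)

N = 100_000
# Single drive: each write waits on one device.
single = [write_latency_us() for _ in range(N)]
# Mirror: each write waits on the slower of the two copies.
mirrored = [max(write_latency_us(), write_latency_us()) for _ in range(N)]

print(f"single drive mean write latency: {sum(single) / N:.1f} us")
print(f"mirrored     mean write latency: {sum(mirrored) / N:.1f} us")
```

This only captures the "wait for both copies" effect; a software mirror in Windows also adds its own overhead on top, so the penalties measured in the charts are larger than this model alone would suggest.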

First, we can see that at a queue depth of 32, random reads take a real hit, but random writes take a substantial, if not massive, one. On the 905P we go from 222,147 random write IOPS on a single drive down to 107,525 mirrored.  The P4510 goes from 239,401 down to 139,499.   That is a large penalty for redundancy.
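Running the arithmetic on those IOPS figures (just the numbers quoted above, nothing assumed):

```python
def mirror_penalty_pct(single_iops: int, mirrored_iops: int) -> float:
    """Percentage drop going from a single drive to a software mirror."""
    return (single_iops - mirrored_iops) / single_iops * 100

# QD32 random write IOPS, single drive vs mirrored, from the results above.
print(f"905P : {mirror_penalty_pct(222147, 107525):.1f}% drop")  # ~51.6%
print(f"P4510: {mirror_penalty_pct(239401, 139499):.1f}% drop")  # ~41.7%
```

Roughly half the random write IOPS gone on the 905P, and a bit over 40% on the P4510.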

The long and short of it is that redundancy costs in raw performance.  That being said, your data is redundant, ’nuff said.

All told, if you have a workload that can benefit from low queue depth reads, the 905P is one killer SSD.


***DISCLAIMER*** I am not responsible for this breaking or damaging any of your stuff.  Copyrights belong to their original owners***


Hyper-V CPU Musings

Hyper-V CPU Musings

Recently I had the opportunity to test a few CPU core configurations on an unused host.

My goal here is to see whether a CPU virtualization penalty exists, and secondarily, what effect hyperthreading has on CPU performance in a single-VM setting.

Specs of the host:  Dual Xeon Gold 6144, 512GB RAM, SSD

Hyper-V version:  Windows Server 2016 (Long-Term Servicing Branch)

To start, here are the Cinebench R15 scores before Hyper-V was installed:

Hyperthreading enabled scored 3427, whereas hyperthreading disabled scored 2680:


Next, I installed Hyper-V and built a VM running Windows Server 2012 R2 (full GUI), fully updated.

First test, hyperthreading disabled, VM has 16 cores assigned:

Nice!!  Only 2 points off of the physical score.

Now, enable hyperthreading at the host level.  VM still has 16 cores assigned:

Ouch, 1021 points lower (-38%).  Keep in mind all we did was enable hyperthreading on the host.  A 38% penalty from that setting alone.

Next test, assign 32 cores to the same VM:

Above 3000 again.  191 points (-5%) off the physical install benchmark above.

And just because, 24 cores assigned to the VM:

Here we have 511 points off the physical host (-15%).
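Pulling the runs together: the exact VM scores live in the screenshots, so the values below are reconstructed from the point deltas quoted above and are therefore approximate.

```python
physical_ht_off = 2680  # bare metal, hyperthreading disabled
physical_ht_on  = 3427  # bare metal, hyperthreading enabled

# VM scores reconstructed from the deltas quoted in the text.
vm_ht_off_16 = physical_ht_off - 2    # "only 2 points off of the physical"
vm_ht_on_16  = vm_ht_off_16 - 1021    # same VM after enabling host HT
vm_ht_on_32  = physical_ht_on - 191   # all logical cores assigned
vm_ht_on_24  = physical_ht_on - 511

print(f"HT-enable penalty, 16 vCPUs: {1021 / vm_ht_off_16 * 100:.1f}%")   # ~38.1%
print(f"32 vCPUs vs physical:        {191 / physical_ht_on * 100:.1f}%")  # ~5.6%
print(f"24 vCPUs vs physical:        {511 / physical_ht_on * 100:.1f}%")  # ~14.9%
```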


What did I learn?  With hyperthreading disabled, there is a virtualization penalty, but it barely registers.

It’s when hyperthreading is enabled that one has to be careful.  That large of a hit (-38%) is interesting to say the least.

That being said, with hyperthreading enabled and all available logical cores assigned to the VM, the result wasn’t too shabby.


Upgrading to a current gen Xeon… worth it?


Recently I had a chance to introduce a new server into our environment, spec’d out with two CPUs from Intel’s Xeon Gold family with the same number of cores as the server it was replacing.  Could there be an increase in CPU performance?

Let’s find out, using a quick and easy way to measure:  Cinebench R15

The old server (Dell R720XD) running two Intel Xeon E5-2667 v2 processors (Ivy Bridge, 8 cores each):

The new server (Dell R740XD) running two Intel Xeon Gold 6144 processors (Skylake-SP, 8 cores each):

BIOS and power management settings were set to maximum performance (no power savings).

So first, 16 cores of Xeon E5-2667 v2 (hyperthreading enabled):

A score of 2464.  No slouch.

Next, 16 cores of Xeon Gold 6144 (hyperthreading enabled):

Wow, 3427!  Almost 1000 points higher, which, if my math is right, is nearly 40% faster.
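Checking that math against the two scores:

```python
old_score = 2464  # 2x Xeon E5-2667 v2, hyperthreading enabled
new_score = 3427  # 2x Xeon Gold 6144, hyperthreading enabled

points_gained = new_score - old_score
uplift_pct = points_gained / old_score * 100
print(f"{points_gained} points higher, a {uplift_pct:.1f}% uplift")  # 963 points, ~39.1%
```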

Keep in mind this is one benchmark, YMMV, but it’s an easy way to show the difference you can gain across three generations of CPUs.
