
Hyper-V Disk Speed Musings

While I was testing out the Intel 905P and getting the benchmark numbers, I also went ahead and installed Hyper-V, set it up correctly, created a Windows Server 2016 VM, and decided to run the same CrystalDiskMark benchmarks again, both on the Hyper-V host and inside the VM.

I only had the one VM running, and the host was not doing anything other than hosting whilst I was running the benchmarks.

At a queue depth of 32, sequential reads and writes are where they should be.

On to queue depth 32 random reads and writes, where we can start to see a penalty. Just wait.

At a queue depth of 1, sequential reads and writes are looking not too shabby. The VM itself sees a reduction in read performance, but it's not too bad.

And here we get to the interesting part. Look at the random reads and writes from inside the VM. I ran this test multiple times and got nearly the same numbers on every go. From a VM, raw disk performance takes a tremendous, nay, massive hit.

I went so far as to reboot the host, rebuild the mirror, and run the numbers again. The results were basically the same each time. I made sure the virtual disk on the VM was not throttled, and that it was a Gen 2 VM.

Quite odd. I'm hoping to be able to test soon whether the auto-expanding (dynamic) VHDX setup was the culprit. Stay tuned.
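
If you want to rule out the same suspects on your own host, here is a minimal sketch that shells out to the stock Hyper-V PowerShell cmdlets to check whether a VHDX is dynamically expanding and whether a Storage QoS throttle is set. It assumes Python is available on the Hyper-V host; the VM name and VHDX path are hypothetical placeholders.

```python
# Minimal sketch: query Hyper-V for VHD type and Storage QoS settings.
# Run elevated on the Hyper-V host. 'TestVM' and the VHDX path are
# hypothetical placeholders -- substitute your own.
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# 'Dynamic' means the auto-expanding VHDX format suspected above;
# 'Fixed' means fully pre-allocated.
print(ps(r"(Get-VHD -Path 'D:\VMs\TestVM\disk.vhdx').VhdType"))

# A MaximumIOPS of 0 means no Storage QoS throttle is applied.
print(ps("Get-VMHardDiskDrive -VMName 'TestVM' | "
         "Select-Object -ExpandProperty MaximumIOPS"))
```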

Intel 905P SSD Benchmarks

Today I’ve got some benchmarks on the Intel 905P SSD 960GB card.

For reference, this is a PCIe add-in card (x4 lanes) that uses Intel/Micron 3D XPoint (a.k.a. Optane) memory instead of the usual NAND flash. The promise is really high random performance at low queue depths.
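
As a quick back-of-the-envelope on why queue depth matters so much here: at a queue depth of 1 there is only ever one I/O in flight, so IOPS is simply the reciprocal of the average completion latency. The latencies below are round illustrative numbers of my own, not measurements from these drives.

```python
# At queue depth 1 only one I/O is ever outstanding, so IOPS is the
# reciprocal of the average completion latency.
def qd1_iops(latency_us: float) -> float:
    return 1_000_000 / latency_us

# Round, illustrative latencies (assumptions, not measured here):
# ~10 us for 3D XPoint-class media vs ~80 us for a typical NAND read.
for name, lat_us in [("3D XPoint-ish", 10), ("NAND-ish", 80)]:
    print(f"{name}: ~{qd1_iops(lat_us):,.0f} IOPS at QD1")
```

Deeper queues let NAND hide that latency by overlapping many requests, which is why the gap shows up at QD1 rather than QD32.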

For these benchmarks the server config is:

  • Dell R740XD containing two Xeon Gold 6144 CPUs (3.5GHz)
  • Windows Server 2016 Standard, fully updated, full GUI
  • For tests with two cards, each card was placed in a PCIe slot attached to a different CPU
  • CrystalDiskMark version 5.2.0
  • I ran CrystalDiskMark three times for each test, but since the results of each run were so very, very close to each other, I'm only going to show the results of the first run.
  • I'm showing the data that includes the IOPS count, as I find it more informative (a quick IOPS-to-MB/s conversion sketch follows this list).
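
If you prefer thinking in MB/s, converting between the two is simple arithmetic. A quick sketch, assuming the 4KiB block size CrystalDiskMark uses for its random tests and decimal megabytes (both assumptions on my part):

```python
# Convert between IOPS and MB/s for a fixed block size.
# Assumes 4KiB random-test blocks and 1 MB = 1,000,000 bytes.
BLOCK_BYTES = 4096

def iops_to_mbps(iops: float, block: int = BLOCK_BYTES) -> float:
    return iops * block / 1_000_000

def mbps_to_iops(mbps: float, block: int = BLOCK_BYTES) -> float:
    return mbps * 1_000_000 / block

# e.g. the single-drive 905P random-write figure quoted later:
print(f"{iops_to_mbps(222_147):.0f} MB/s")  # ~910 MB/s of 4KiB writes
```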

I’m also including the following drives for comparison’s sake:

  • Intel 4800X 750GB
  • Intel P4510 4TB
  • Micron 9200 MAX 1.6TB
  • Samsung 850Pro 2TB SATA drive (please note: this was in a Dell R720 server hooked to a PERC H710P RAID controller set to No Read Ahead, Write Through, Use Cache; this gives SSDs on that controller the fastest performance they can get).

For more details on just how awesome 3D XPoint is compared to normal NAND, I highly recommend a stop by AnandTech, as they go into much greater detail than I do.

To the charts!

If you look at sequential reads and writes at a queue depth of 32, you can see just how much of an advantage PCIe-based SSDs have over their SATA ancestors. On the PCIe side of things, the P4510 is the king.

The same is pretty much the case for random reads and writes at this high queue depth.

At the lower queue depth of 1, we start seeing Optane come into its own on the random read side. On writes, it sits above the Micron yet is beaten by the P4510.

Now we come to random reads at a queue depth of 1. The 905P trounces everything. Nothing even comes close for random reads.

Writes are a different story, but it's still no slouch.

This being a server, however, where uptime is king and redundancy is quite necessary, what happens if we mirror the drives? Sadly, the Dell server does not support Intel VROC yet, so I went with a software mirror inside of Windows. Additionally, I added some benchmarks of the Samsung 850Pro 2TB in RAID 1 (two drives) and RAID 10 (four drives). Settings on the RAID controller are the same as above.
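
I won't claim this is exactly how the mirror above was built, but if you want to script a two-way software mirror in Windows, Storage Spaces is one way to do it. A hedged sketch; the pool and virtual-disk names are hypothetical placeholders:

```python
# Hedged sketch: create a two-way Storage Spaces mirror from Python.
# One of several ways to build a Windows software mirror; 'OptanePool'
# and 'OptaneMirror' are hypothetical placeholder names.
import subprocess

script = r"""
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'OptanePool' `
    -StorageSubSystemFriendlyName 'Windows Storage*' `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'OptanePool' `
    -FriendlyName 'OptaneMirror' -ResiliencySettingName Mirror `
    -NumberOfDataCopies 2 -UseMaximumSize
"""
subprocess.run(["powershell", "-NoProfile", "-Command", script], check=True)
```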

First, we can see that at a queue depth of 32, random reads are affected quite a bit, but random writes take the truly substantial, if not massive, hit. On the 905P we go from a single-drive random write of 222147 IOPS down to 107525. The P4510 goes from 239401 down to 139499. That is a large penalty for redundancy.
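
Put as percentages, since the charts make the drop easy to eyeball but not to quantify, simple arithmetic on the numbers above gives:

```python
# Mirror penalty as a percentage drop, from the IOPS figures above.
def penalty_pct(single_iops: int, mirrored_iops: int) -> float:
    return 100 * (single_iops - mirrored_iops) / single_iops

print(f"905P:  {penalty_pct(222_147, 107_525):.1f}% drop")  # ~51.6%
print(f"P4510: {penalty_pct(239_401, 139_499):.1f}% drop")  # ~41.7%
```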

The long and short of it is that redundancy costs you raw performance. That being said, your data is redundant; ’nuff said.

All told, if you have a workload that can benefit from low queue depth reads, the 905P is one killer SSD.

 

***DISCLAIMER*** I am not responsible for this breaking or damaging any of your stuff.  Copyrights belong to their original owners***