Industrialising storage benchmarks with Hosted Private Cloud from OVHcloud

Benchmarking is a much-debated topic in the computing industry, owing to the countless methods and approaches available. Rather than listing them all, this blog post provides an insight into the way we benchmark the different kinds of storage available under our Hosted Private Cloud solutions.

Before discussing this in more detail, we should understand why we benchmark in the first place. The number one reason is that we need to assess the impact on our customers before putting anything into production. This could be anything from a new storage platform, a new disk model or a kernel patch to a firmware update, and any number of other use cases.

Short story

As we continue to improve our storage infrastructures and introduce new storage technologies, we regularly enhance our benchmarking methodologies and tools. With thousands of hardware references and an equal number of configurations, the number of benchmarks we need to run grows combinatorially. This is why it is vital to industrialise the process.

In an effort to increase transparency we chose to conduct our benchmarks from the point of view of the end user. This means that all the orchestration which we describe in this blog post can be done by any of our Hosted Private Cloud customers.

Tools & Orchestration

FIO, vdbench, IOMeter (and even dd!) are just some of the widely used and proven tools for benchmarking storage. They give you an overview of raw performance for a given storage type. But what if you want to benchmark a whole infrastructure end-to-end? This could include 1, 10, 100, 1000 VMs or more, with multiple disk/RAID settings, multiple disk sizes and multiple block sizes, all with different generic workloads consisting of a certain percentage of reads/writes, sequential or random, and driven by a given number of threads. You would also need to use workloads which match your production workloads. In fact, the combinations are endless.
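To give an idea of what the raw-tool level looks like, here is a minimal FIO job file sketching one such generic workload. This is a hedged illustration: the file path, sizes and section names are assumptions for the example, not one of our actual profiles.

```ini
; Hypothetical FIO job file: 16k random reads, direct I/O,
; 4 outstanding I/Os, running for a fixed 120 s
[global]
ioengine=libaio
direct=1
time_based
runtime=120

[generic-randread-16k]
filename=/mnt/bench/testfile
size=10g
rw=randread
bs=16k
iodepth=4
```

Running such a job with fio reports IOPS, bandwidth and latency for this single pattern on a single host; the hard part, as described below, is orchestrating hundreds of such runs across a fleet of VMs.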

With all this in mind, for our first iteration, we started to use HCIbench to automate our benchmarks.

Hyper-converged infrastructure benchmark

HCIBench

HCIBench (https://flings.vmware.com/hcibench) is a free and open-source benchmark automation tool. It defines itself as a “Hyper-converged Infrastructure Benchmark”. It is essentially an automation wrapper around the popular and proven open-source benchmark tools Vdbench and FIO, making it easier to automate testing across an HCI cluster. HCIBench aims to simplify and accelerate customer POC performance testing in a consistent and controlled way. The tool fully automates the end-to-end process of deploying test VMs, coordinating workload runs, aggregating test results, analysing performance and collecting the data necessary for troubleshooting.

HCIBench Appliance

Installing HCIbench is as easy as deploying an OVA (Open Virtual Appliance) in your VMware infrastructure:

Once HCIbench is set up, just point your browser to https://IP:8483 and you’re ready to start your benchmarks:

Once you have entered your vCenter credentials and some additional information (datacenter, cluster, network, datastore, etc.), you’ll be able to configure the guest VM settings:

  • Number of VMs to deploy
  • Number of disks for each VM
  • Disk size
  • Benchmark tool (FIO or vdbench)
  • I/O parameter file (see details below)
  • Duration
  • And more …

HCIbench then takes charge of all VM deployment and recycling operations and executes your benchmark. The results are available in various forms, from Grafana dashboards to Excel files, or simply flat text files for further external parsing.

Workload parameter files: modelling I/Os

Workload parameter files (using vdbench here) are at the heart of all benchmark operations. They describe the I/O model you want to run against a given storage endpoint: percentage of reads/writes, random/sequential access, block size, thread count and many other options.

We chose three different approaches to evaluate our storage platforms: generic workloads, application workloads and production workloads.

“Generic” workloads

By “generic” workloads we mean all workloads that look like “ONLY RANDOM READS” or “ONLY SEQUENTIAL WRITES”. They allow us to check how a storage type reacts in linear cases and how it performs in “extreme” cases.

Sample of a “generic” vdbench workload parameter file

root@photon-HCIBench [ /opt/automation/vdbench-param-files ]# cat GENERIC-ONLY-READS-RANDOM-16K-vdb-1vmdk-80ws-16k-100rdpct-100randompct-4threads
*SD:    Storage Definition
*WD:    Workload Definition
*RD:    Run Definition
sd=sd1,lun=/dev/sda,openflags=o_direct,hitarea=0,range=(0,80),threads=4
wd=wd1,sd=sd1,xfersize=16k,rdpct=100,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=120,interval=1

“Application” workloads

By “application” workloads we mean workloads that match common production use cases, such as “DATABASE WORKLOAD”, “VDI WORKLOAD”, “BACKUP WORKLOAD”, etc. With these benchmarks we can emulate a typical workload and verify the areas in which a given storage type excels.

Sample of an “Application” vdbench workload parameter file

root@photon-HCIBench [ /opt/automation/vdbench-param-files ]# cat OLTP-SQL-Oracle-Exchange-vdb-1vmdk-80ws-16k-100random-70rdpct-4threads
*SD:    Storage Definition
*WD:    Workload Definition
*RD:    Run Definition
sd=sd1,lun=/dev/sda,openflags=o_direct,hitarea=0,range=(0,80),threads=4
wd=wd1,sd=sd1,xfersize=16k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=120,interval=1

“Production” workloads

Finally, another approach we are working on is the ability to “record” a production workload and “replay” it on another storage endpoint, to evaluate how the target storage performs with your production workload without the need to run your real production on it. The trick here is to use a mix of three tools, blktrace, btrecord and btreplay, to track and trace low-level I/O calls and then replay those traces on another storage platform.
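The flow these three tools enable can be sketched roughly as follows. This is a hedged illustration only: device names, directories and option values are assumptions for the example, all three tools require root on real block devices, and replaying writes is destructive.

```shell
# Hypothetical sketch of the record/replay flow (illustrative values only)

# 1. Record 10 minutes of production I/O from the source device
blktrace -d /dev/sda -o sda -D /tmp/traces -w 600

# 2. Convert the raw per-CPU traces into btreplay "bunch" files
btrecord -d /tmp/traces -D /tmp/replays sda

# 3. Replay the recorded I/O stream against the target platform
#    (-M remaps the recorded device to a target device; -W enables
#    replaying writes, which is destructive -- scratch devices only!)
btreplay -d /tmp/replays -M /tmp/device.map -W sda
```

The appeal of this approach is that the replayed stream preserves the timing and mix of the original production I/O, rather than approximating it with a synthetic model.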

We’ll share this feature with you in a future blog post. Stay tuned!

Industrialisation of HCIbench runs with Rundeck scheduler

Rundeck

As we’ve seen, in a few clicks we’re able to define and launch a benchmark scenario under a specific workload. Deployment and recycling of the probing VMs are fully automated. What if, next, we want to iterate through multiple scenarios, as part of the full validation of a new storage platform, for example? This is where we started to use Rundeck (http://www.rundeck.com), a free and open-source runbook automation scheduler, in front of HCIbench. The idea is to be able to create complete collections of benchmark scenarios.


The first step was to understand how HCIbench works under the hood, so we could control it via the Rundeck scheduler. HCIbench is designed to be used via a web interface, but all the mechanics behind it are handled by clean, separate scripts (start/stop/kill). All the benchmark settings are stored in a clean, flat configuration file, which was easy to convert into a template.

Templating HCIbench configuration file

root@photon-HCIBench [ /opt/automation/conf ]# cat template.yaml
vc: '<VCENTER_HOSTIP>'
vc_username: '<USERNAME>'
vc_password: '<PASSWORD>'
datacenter_name: '<DATACENTER>'
cluster_name: '<CLUSTER>'
number_vm: '<NUMBERVM>'
testing_duration: '<DURATION>'
datastore_name:
- '<DATASTORE>'
output_path: '<OUTPUT_PATH>'
...

Bench “root job”

A Rundeck job consists of a sequence of steps which are executed on a defined list of nodes. In our context, the nodes are VMs running HCIbench.

What we call the bench “root job” is the Rundeck job that serves as the main bench entry point. Its generic role is to be called by other jobs and to launch one specific bench.

The options (parameters) of this job are all the items from the HCIbench configuration template (see above).

The workflow for this job is as follows:

  • Parse the job options
  • Connect to the HCIbench VM over SSH
  • Populate the configuration template with the corresponding job options
  • Launch HCIbench
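The populate-and-launch steps can be sketched in shell, for instance with sed. The hostnames, file paths and option values below are illustrative assumptions, not HCIbench's real layout:

```shell
# Hypothetical sketch of the root job's workflow (illustrative values)

# 1. Populate the configuration template (reduced here to a few keys)
#    with the values received as Rundeck job options
cat > /tmp/template.yaml <<'EOF'
vc: '<VCENTER_HOSTIP>'
vc_username: '<USERNAME>'
number_vm: '<NUMBERVM>'
testing_duration: '<DURATION>'
EOF

sed -e "s/<VCENTER_HOSTIP>/10.0.0.10/" \
    -e "s/<USERNAME>/bench-user/" \
    -e "s/<NUMBERVM>/20/" \
    -e "s/<DURATION>/3600/" \
    /tmp/template.yaml > /tmp/perf-conf.yaml

# 2. Push the configuration to the HCIbench appliance and start the bench
#    (commented out here: requires a reachable HCIbench VM)
# scp /tmp/perf-conf.yaml root@hcibench:/opt/automation/conf/perf-conf.yaml
# ssh root@hcibench "<run the HCIbench start script>"

cat /tmp/perf-conf.yaml
```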

Bench jobs

Secondly, we have “bench jobs”. Through the Rundeck API we created a job for each workload, so that benches can be launched individually or in groups, as required. Each of these “bench jobs” calls the “root job” described above with the corresponding bench parameters.
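As a sketch, such a bench job could be declared in Rundeck's YAML job format roughly as follows. The job names, groups and option names are illustrative assumptions, not our actual definitions:

```yaml
# Hypothetical "bench job": wraps one workload, delegates to the root job
- name: bench-GENERIC-ONLY-READS-RANDOM-16K
  group: benchmarks
  description: Generic 16k random-read bench
  sequence:
    keepgoing: false
    strategy: node-first
    commands:
      - jobref:
          group: benchmarks
          name: bench-root-job
          # pass this workload's parameters down to the root job's options
          args: >-
            -NUMBERVM ${option.NUMBERVM}
            -DURATION ${option.DURATION}
            -PARAMFILE GENERIC-ONLY-READS-RANDOM-16K-vdb-1vmdk-80ws-16k-100rdpct-100randompct-4threads
```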


“Super” jobs

Finally, we have “super jobs”. These are collections of jobs, whose workflows are series of bench job calls. We use the cascading options mechanism to pass options down through the jobs. In the example below, we bench a vSAN cluster through a complete panel of I/O models.
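As an illustrative sketch (again with hypothetical job and option names), such a collection could look like this in Rundeck's YAML job format, with the options declared once at the top and cascading down to every bench job:

```yaml
# Hypothetical "super job": a full I/O-model panel run as one workflow
- name: bench-VSAN-FULL-PANEL
  group: benchmarks
  options:
    - name: NUMBERVM
      value: '20'
    - name: DURATION
      value: '3600'
  sequence:
    keepgoing: true
    strategy: node-first
    commands:
      - jobref:
          group: benchmarks
          name: bench-GENERIC-ONLY-READS-RANDOM-16K
          args: -NUMBERVM ${option.NUMBERVM} -DURATION ${option.DURATION}
      - jobref:
          group: benchmarks
          name: bench-GENERIC-ONLY-WRITES-SEQUENTIAL-256K
          args: -NUMBERVM ${option.NUMBERVM} -DURATION ${option.DURATION}
      - jobref:
          group: benchmarks
          name: bench-OLTP-SQL-Oracle-Exchange
          args: -NUMBERVM ${option.NUMBERVM} -DURATION ${option.DURATION}
```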

Another interesting benefit of using Rundeck as an HCIbench scheduler is the ability to keep all the logs from the HCIbench VMs, and the timing of each benchmark, in one place. This makes it easy to browse and search across all our benchmarks, or to target a specific behaviour shown by the graphs.

Results & use cases

vSAN

Integrating vSAN into our Hosted Private Cloud product was a typical benchmark project, where we needed not only to check how the whole platform performed in every area, but also to refine the design of the platform itself. On one side we evaluated hardware designs with multiple disk references, and on the other we improved the software design by evaluating various vSAN disk-group and cache configurations.

New kernel impact on storage arrays

Another interesting use case is evaluating the impact of a new kernel on our OmniOS (http://www.omniosce.org) based storage arrays. OmniOS is a free, open-source operating system based on OpenSolaris which integrates some great technologies such as ZFS, DTrace, Crossbow, SMF, Bhyve, KVM and Linux zone support. This case showed not only slightly better performance but also a great improvement in I/O handling.

Indeed, across many different benchmarks, the new kernel (r151022) showed far more stable and linear I/O. This bench also confirmed the ZFS/NFS fixes included in this kernel, which resolve latency issues during ZFS snapshot send/receive.

Industrialising our benchmarks has enabled us to monitor the performance of our storage. First of all, because we created them with our users in mind, we’re aligned with what our customers actually get. In addition, benchmarks give us insight when troubleshooting storage issues that are very specific and/or only visible from within VMs. We plan to extend this so we can check how the compute side performs (CPU/RAM/…). Finally, we’re now focusing on the record/replay workload feature, which will allow our users to predict how their production workload will perform on “XYZ” platforms without having to actually run their production environment on them. We will detail this in a future blog post. Stay tuned!


Francois joined OVHcloud in 2007 and specialised as a storage engineer. He designed, deployed and operated the first waves of ZFS-based storage servers within OVHcloud, including High Availability clusters. Since 2017, Francois has focused his expertise on performance engineering and R&D within the Private Cloud team.