As mentioned in a previous article, Tintri VMstore supports the SMB 3.0 extensions that allow for offloaded data transfers within a VMstore appliance. Tintri recently announced support for Copy Offload, or ODX, between VMstore appliances, something we’ve been calling Remote ODX.

ODX Recap

Without ODX in place, a Hyper-V host making a copy of a large virtual disk (when deploying from a template, for example) would need to read all of the bytes of the source vDisk over the network and write all of those bytes back to the destination. That carries both a network bandwidth cost and a compute-side cost, and it can take a considerable amount of time, which can have a business cost of its own.

With ODX in place, the Hyper-V host opens the source and destination vDisks and issues a command to the storage to copy the data internally. This saves compute and bandwidth resources, and in the case of the Tintri VMstore the copy happens almost instantly, saving provisioning time. That’s a slight oversimplification of the actual exchange between the Hyper-V host and the storage, but this MSDN article goes into the gory details and shows that it isn’t much more complicated.
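
To make that concrete, here’s a minimal host-side sketch (the UNC paths are hypothetical). Notice that nothing in it mentions ODX at all: the application simply asks Windows to copy a file, and when both ends sit on ODX-capable storage the copy engine underneath swaps its buffered read/write loop for offloaded copy requests to the storage.

```c
#include <windows.h>
#include <stdio.h>

/* Progress callback driven by the Windows copy engine. When the copy is
 * offloaded to the storage, this tends to jump to completion almost
 * immediately rather than ticking up slowly. */
static DWORD CALLBACK CopyProgress(
    LARGE_INTEGER totalFileSize, LARGE_INTEGER totalBytesTransferred,
    LARGE_INTEGER streamSize, LARGE_INTEGER streamBytesTransferred,
    DWORD streamNumber, DWORD callbackReason,
    HANDLE sourceFile, HANDLE destinationFile, LPVOID context)
{
    printf("%lld of %lld bytes copied\n",
           (long long)totalBytesTransferred.QuadPart,
           (long long)totalFileSize.QuadPart);
    return PROGRESS_CONTINUE;
}

int wmain(void)
{
    /* Hypothetical UNC paths; both shares live on VMstore appliances. */
    LPCWSTR src = L"\\\\vmstore-a\\templates\\win2016-template.vhdx";
    LPCWSTR dst = L"\\\\vmstore-b\\vms\\new-vm\\disk0.vhdx";

    /* A plain CopyFileExW call; there is no ODX-specific code on the host. */
    if (!CopyFileExW(src, dst, CopyProgress, NULL, NULL,
                     COPY_FILE_FAIL_IF_EXISTS)) {
        fprintf(stderr, "Copy failed: error %lu\n", GetLastError());
        return 1;
    }

    printf("Copy complete\n");
    return 0;
}
```

The only visible difference from a normal copy is speed: the host never has to touch the data itself.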

Remote ODX

The press release above mistakenly announced support for ODX in an imminent release of the VMstore firmware. ODX has in fact been available in the product for nearly two years now; what it meant to announce was support for Remote ODX. Whereas ODX support used to be limited to offloaded transfers within a single appliance, Remote ODX allows those same data transfers to be offloaded between multiple Tintri VMstore appliances as well.

The mechanics from the compute side are still the same: the Hyper-V host opens a source and a destination vDisk (or any large file) and issues a command to have the data transfer offloaded to the storage. In the Remote ODX case, the two storage appliances negotiate between themselves and copy the file without the compute node having to read and then write every byte of data.

Here are some of the details around that:

  • There are still almost no compute resources used for the transfer, leaving them available for running applications and VMs.
  • The data is still copied across the network, but only once. In a traditional copy, the Hyper-V host reads the whole file over the network and then writes the whole file back over the network to the destination. With Remote ODX, one VMstore sends the data directly to the second VMstore, halving the number of bytes sent over the wire (there’s a worked example after this list).
  • Under the covers, Remote ODX makes use of the ReplicateVM technology that’s been built into Tintri appliances for years. This means that compression and deduplication are applied to the copy too, further reducing the time and bandwidth needed to make the copy.
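
To put rough numbers on the second point above: copying a 100 GB vDisk the traditional way means reading roughly 100 GB from the source over the network and then writing roughly 100 GB back out to the destination, so about 200 GB crosses the wire and every byte of it passes through the Hyper-V host. With Remote ODX the same copy puts at most 100 GB on the wire, flowing directly from one VMstore to the other, and usually much less once compression and deduplication have done their work.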

To illustrate this further, consider the case without ODX, shown below. The blue arrows represent minor metadata operations (file opens and closes, for example) and the red arrows represent the bulk data transfer of the vDisk contents.

[Figure: remote-no-odx — copying a vDisk without ODX]

As you can see, all of the data being transferred has to pass through the Hyper-V compute node and cross the network twice.

In the case of Remote ODX, as shown in the next diagram, the host still performs some minor metadata operations, but the bulk of the transfer traverses the network once and the Hyper-V compute node doesn’t need to get involved.

[Figure: remote-odx — copying a vDisk with Remote ODX]

Configuring Remote ODX

No assembly required. Batteries included. Existing customers should see an update available for download once the team has given it the green light to go GA. If both VMstores are running 4.3.0 or later, you have all you need to enjoy Remote ODX.
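
One host-side note that isn’t Tintri-specific, and isn’t required by anything above, so treat it as an aside: Windows has a global switch for copy offload, the FilterSupportedFeaturesMode value under HKLM\SYSTEM\CurrentControlSet\Control\FileSystem, where 0 (the default) allows offload and 1 disables it. If ODX doesn’t seem to be kicking in, it’s worth a quick check. Here’s a small sketch of reading it from code:

```c
#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    DWORD mode = 0;
    DWORD size = sizeof(mode);

    /* FilterSupportedFeaturesMode: 0 means copy offload (ODX) is allowed,
     * 1 means it has been turned off on this host. If the value is absent,
     * the default (offload allowed) applies. */
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\FileSystem",
        L"FilterSupportedFeaturesMode",
        RRF_RT_REG_DWORD, NULL, &mode, &size);

    if (rc == ERROR_FILE_NOT_FOUND) {
        printf("Value not set; ODX is enabled by default\n");
        return 0;
    }
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegGetValueW failed: %ld\n", (long)rc);
        return 1;
    }

    printf("ODX is %s on this host\n", mode == 0 ? "enabled" : "disabled");
    return 0;
}
```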

Caveat

Remote ODX sits behind the Windows CopyFile API, which is used by most modern Windows applications and services. As a result, they can all make use of Remote ODX without change. This even includes using Hyper-V storage migration to migrate a VM from one Tintri VMstore to another.

However, migrating a live VM requires more careful attention from the hypervisor. If two VMstores are copying a vDisk using ODX and the VM writes to a block on that same vDisk, the Hyper-V host isn’t able to complete that write until the copy has finished. And live storage migration isn’t live if writes from the guest have to be halted for a minute or so. As a result, for live storage migration the Hyper-V host won’t try to use copy offload to transfer the whole file, which gives it the control it needs to move the storage without interrupting the guest.

There’s a simple solution to this though: take a Hyper-V checkpoint on the VM before starting the live storage migration. With a checkpoint in place, the hypervisor can guarantee that no writes will be applied to the base vDisk, which allows a full-file ODX transfer to happen.

This doesn’t happen by default before a storage migration, however, so you’ll want to take the checkpoint first, perform the migration, and then remove the checkpoint.
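
If you script those steps, they map onto the stock Hyper-V PowerShell cmdlets rather than anything Tintri-specific: Checkpoint-VM to take the checkpoint, Move-VMStorage to perform the migration, and Remove-VMSnapshot (surfaced as Remove-VMCheckpoint on newer builds) to merge the checkpoint away once the move has finished.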

Summary

Whether you call it Offloaded Data Transfers, ODX, or Copy Offload, the fewer resources taken away from your compute the better. In this age of Private Cloud, Software-Defined Everything and Everything-as-a-Service, the more quickly and efficiently you can move resources around, the more agile you can be. Local and Remote ODX are a piece of that puzzle that you should be able to take for granted.

 
