Microsoft DPM 2016 Support for Tintri VMstore

As of firmware version 4.4.2, Tintri VMstore has been validated for use with Microsoft's Data Protection Manager (DPM) 2016.

Configuration Changes

Not much is needed to get DPM 2016 working against your Hyper-V VMs on a Tintri VMstore. We like simplicity. However, DPM differs slightly from other data protection applications such as Veeam or Commvault: with DPM 2016, the Hyper-V hosts themselves need a higher set of privileges when performing I/O against the storage. I'll go into more detail below.

Simply grant the Hyper-V hosts the Super Admin role on the VMstore(s) and everything else is taken care of. We generally recommend creating an Active Directory group containing the Hyper-V host computer accounts, and then granting that group the Super Admin role. This makes ongoing management and auditing much more straightforward.
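As a sketch, creating that group with the Active Directory PowerShell module might look like the following. The group name, OU path and host names are examples only; adjust them for your environment.

```powershell
# Requires the ActiveDirectory module (part of RSAT).
Import-Module ActiveDirectory

# Create a security group to hold the Hyper-V host computer accounts.
New-ADGroup -Name "Hyper-V Hosts" -GroupScope Global -GroupCategory Security `
    -Path "OU=Groups,DC=contoso,DC=local"

# Add each host's computer account (note the trailing '$' on computer accounts).
Add-ADGroupMember -Identity "Hyper-V Hosts" -Members 'HV01$', 'HV02$'
```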


Above: An Active Directory group containing Hyper-V host computer objects.


Above: Granting the Hyper-V Hosts group Super Admin role on VMstore.

You may also want to purge cached tickets (these contain group membership information) on your Hyper-V hosts by running the following command:

klist -li 0x3e7 purge

The next time the Hyper-V host tries to connect (such as when DPM attempts a backup), it will request a new Kerberos ticket containing the updated group membership information.
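With more than a handful of hosts, the purge can be pushed out remotely. A minimal sketch, assuming WinRM is enabled and using example host names:

```powershell
# Purge the cached Kerberos tickets for the SYSTEM logon session (0x3e7)
# on each Hyper-V host.
$hvHosts = 'HV01', 'HV02'
Invoke-Command -ComputerName $hvHosts -ScriptBlock {
    klist -li 0x3e7 purge
}
```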

What Does This Do?

In the case of products like Veeam, this step isn’t necessary. What is necessary in the Veeam case is to grant the Super Admin role to the service accounts that Veeam uses for VM I/O. So in a way, the process is the same, albeit with all of the Hyper-V computer accounts instead of one or two Active Directory service accounts.

The reason this is required in both cases is that to perform backups and restores, the user (service account or Hyper-V host computer account) performing the backup needs the SeBackupPrivilege and SeRestorePrivilege privileges. These are discussed in detail in this Microsoft TechNet article. By assigning the Super Admin role, these privileges are inherited by the Hyper-V host accounts used by DPM.

What About DPM 2012r2?

Unfortunately, Microsoft DPM 2012r2 assumes that all SMB storage is a Windows Server running the DPM agent. This isn't the case for third-party storage vendors, so DPM 2012r2 currently won't work. Should Microsoft remove this limitation in the future, it's not difficult to imagine support for 2012r2 being validated too.

[Backup image by GotCredit and used unmodified under CC 2.0]


DIY Self-Service Portal with SCSM

Extending our series on automation and orchestration, we're going to look at building a web portal that allows us to safely delegate tasks to other teams or individual users. We'll use System Center Service Manager, given that many organisations already have a license, but this approach could be taken with countless other tools.

Here’s a screenshot of what we’ll end up with:


We’re essentially going to be walking through the items in the Service Catalog Checklist inside the Service Manager Console application:


First, we'll create a management pack, which gives us a container for the rest of the components we'll create. We do this so we can more easily port everything between Service Manager instances (think development vs production, for example). Under Administration, right-click Management Packs and select Create Management Pack. It just needs a name and a description:


We’ll select this management pack whenever prompted for a management pack in subsequent components and that will keep them all neatly together.

Next, we'll connect System Center Service Manager to System Center Orchestrator. This connector will allow you to call Orchestrator runbooks from inside Service Manager workflows. If you haven't finished all of your runbooks (we'll be adding some as part of this series), don't worry. We can always add more and sync Orchestrator and Service Manager afterwards.

We create and label the connector:


We then point it at the System Center Orchestrator web service (on TCP port 81 by default), giving the credentials of a user who is allowed to run Orchestrator runbooks. Note that the Orchestrator Web Service URL has a path of Orchestrator2012/Orchestrator.svc even if you’re running Orchestrator 2016 (as I am below).


You can also provide access to runbook information (optional) by including the Web Console URL too. This is on port 82 by default.
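Putting the defaults together, the two URLs might look something like this (the server name here is a hypothetical example):

```
http://scorch01.contoso.local:81/Orchestrator2012/Orchestrator.svc   <- web service
http://scorch01.contoso.local:82                                     <- web console
```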



In order to call our runbooks, we need to define a Runbook Activity Template. Select our management pack and make sure that the Class is set to Runbook Automation Activity.


This will open the template form dialogue box, where we set a few parameters…


…and, most importantly, the runbook that we wish to execute. In our case, we're selecting a runbook that has no input parameters (we'll cover the runbook itself in the next article in the series). If you had parameters to pass in, you would map them here:


To expose this to our end users, we create a Service Request template. What we add here will be visible as task items that users can request. In our case, we’re going to create one to allow our end users to request the ability to restore files from the previous day’s snapshot:


In the template form for this SR, hit the '+' sign to add an activity and select the runbook automation template that we created earlier. It's possible to add other activities here too, such as approvals, for cases where that might be necessary.


This covers all of the plumbing for the backend connectivity to the automation and orchestration infrastructure. Next we need to package this all up into something consumable by the users of the self-service portal in Service Manager.

To do this, we package up the individual activities as Request Offerings. Each of these will be a clickable button to allow users to trigger an activity. We’ll also create a Service Offering to group these activities together in one place for a given set of users. There’s a screenshot of the finished product at the end of this page.

The Request Offering should use the File Level Restore template that we created earlier and should be added to our management pack. If we had any inputs to our orchestration runbook and template, we would also configure user prompts and inputs here:


Our Service Offering is the collection of Request Offerings we plan to publish together. Create it with the relevant labels, icons and knowledge articles, and as with everything else, add it to our management pack.


We also select all of the Request Offerings we want to include as part of this Service Offering. At the moment, we only have the one Request Offering we created above, but if you had already created others, you could add/include them all here:


And here’s a screenshot of how the finished product looks in the Service Manager Self-Service portal:


In the next article, we’ll go through the details of the orchestration workflow. It will use the template that we created in an earlier series with a few minor changes. At that point, it’s a case of replicating the workflow in this article and the automation/orchestration piece, and you can add any number of self-service tasks to make your end users more self-sufficient.

All of this could apply equally to placing useful functionality in the hands of helpdesk staff to delegate common tasks as you scale.

[Self-Serve image by JP and used unmodified under SA2.0]

Failover Cluster with Shared Virtual Disk beta

Failover Cluster with Shared Virtual Disk beta

In our last installment, we took a look at deploying a set of VMs with shared storage in the form of shared virtual disks. This week, we’ll go into the guests and bring the disks online and deploy Failover Cluster.

Preparing The Disks

When we attach the shared virtual disks along with our Windows template vDisk, the shared disks will start out offline and unformatted. We’ll rectify that here. You can use the Disk Management applet in Computer Management to do this, but we’ll present it here as a set of DISKPART.EXE commands, which are easy to copy/paste and apply in a script somewhere.

No matter which approach you take, the process is the same and comes down to these steps:

  1. On the primary:
    1. Online the shared disks
    2. Explicitly assign a drive letter (optional)
    3. Format the disks
  2. On the secondary:
    1. Online the shared disks
    2. Explicitly assign a drive letter (optional)

Add the following commands to a text file (init-disks-primary.txt for example):

select disk 1
online disk
attributes disk clear readonly
create partition primary
select partition 1
format fs=NTFS LABEL="DATA"
assign letter=S

select disk 2
online disk
attributes disk clear readonly
create partition primary
select partition 1
format fs=NTFS LABEL="LOGS"
assign letter=L

These DISKPART.EXE commands bring each disk online, make it writable, then create a partition and format it. We also explicitly assign drive letters (S for SQL data and L for logs); the latter isn't required.

To run these commands in batch, simply redirect the contents of this file to stdin for DISKPART.EXE, which might look like this:

DISKPART.EXE < init-disks-primary.txt
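If you'd rather stay in PowerShell, the primary-node preparation can also be sketched with the storage cmdlets. Disk numbers, letters and labels match the DISKPART example above; note that this initialises the disks as GPT rather than DISKPART's MBR default.

```powershell
# Online each shared disk, clear read-only, then partition and format it.
foreach ($d in @( @{ Number = 1; Letter = 'S'; Label = 'DATA' },
                  @{ Number = 2; Letter = 'L'; Label = 'LOGS' } )) {
    Set-Disk -Number $d.Number -IsOffline $false
    Set-Disk -Number $d.Number -IsReadOnly $false
    Initialize-Disk -Number $d.Number -PartitionStyle GPT
    New-Partition -DiskNumber $d.Number -UseMaximumSize -DriveLetter $d.Letter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel $d.Label
}
```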

We do this on the primary. The secondary is a very similar process, but we don’t need to create a partition or format it (the primary will have already done so). For the secondary, add the following DISKPART.EXE commands to a file called init-disks-secondary.txt:

select disk 1
online disk
attributes disk clear readonly
select partition 1
assign letter=S

select disk 2
online disk
attributes disk clear readonly
select partition 1
assign letter=L

And run those in a similar fashion:

DISKPART.EXE < init-disks-secondary.txt

At this point, we have both disks available to both guests. It's important that we don't try to write anything to the disks yet; we need to deploy Failover Cluster to control which VM has write access to the disks (the active node in the cluster). If you take a look at the Disk Management applet in Computer Management, you ought to see something similar to this:


Installing and configuring Failover Cluster on both nodes is straightforward through PowerShell. First, we install the Failover Cluster role on both VMs:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
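Assuming WinRM is available, the feature can be installed on both nodes in one go (node names match the example that follows):

```powershell
# Install the Failover Clustering feature on both nodes remotely.
Invoke-Command -ComputerName 'RSVD-SC1', 'RSVD-SC2' -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}
```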

And then create a new cluster from the primary node and add the secondary and the shared virtual disks. To deploy the cluster, we’ll need the following information:

  1. A unique name for the cluster (SQL-SC in the example below)
  2. A shared IP address for clients to access the cluster
  3. The names of the nodes/VMs in the cluster (RSVD-SC1 and RSVD-SC2 in the example below)

Given those, we can create the cluster:

New-Cluster -Name SQL-SC -Node RSVD-SC1 -StaticAddress

Add the second node to the cluster:

Add-ClusterNode -Cluster SQL-SC -Name RSVD-SC2

And finally add all of the shared storage to the cluster:

Get-ClusterAvailableDisk -Cluster SQL-SC | Add-ClusterDisk -Cluster SQL-SC
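As a quick sanity check before moving on, both nodes should report as Up and the shared disks should appear as cluster resources:

```powershell
# Verify node state and the clustered disks.
Get-ClusterNode -Cluster SQL-SC | Select-Object Name, State
Get-ClusterResource -Cluster SQL-SC |
    Where-Object { $_.ResourceType -eq 'Physical Disk' }
```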

At this point, you should have a two node cluster and Failover Cluster Manager should show the two shared virtual disks as cluster shared storage, which might look something like this:


With the cluster running, simply use the Microsoft SQL Server installer to install a clustered instance of SQL and point it at the shared storage. You can then point your SQL clients at the cluster name given above and live happily ever after.

[Serpens Cluster image by Robert Sullivan and used unmodified under Public Domain]

Hyper-V Shared Virtual Disks Beta

We’ve been busy here at Tintri HQ. One of the things that has taken up a chunk of my time is taking a look at some functionality that is currently in beta. Specifically Shared Virtual Disks for Hyper-V.


Shared Virtual Disks allow you to have multiple virtual machines share a common virtual disk (often several) that each of the VMs can access. The primary use case is highly available clustered applications that require shared storage and quorum, much in the same way that a Tintri VMstore has two controllers for redundancy and common storage between the two.

These clustered VMs are configured as active-passive sets where the active VM is the one that currently has the ability to do I/O to the storage.

More Information

Shared Virtual Disk support over SMB sounds pretty simple, but there's a lot to it. It involves taking SCSI commands, such as those used to manage SCSI reservations on shared storage, and tunnelling them within SMB commands. If you're really interested in how this works, you can find out more in the MS-RSVD specification.

A big shout out to our friends at Microsoft who have been working with us for quite some time on this and helping to make sure that any ambiguities in the spec were cleared up.


Deployment of clustered VMs is a little more involved than a standalone VM. The way we've been doing it is to take a freshly sysprepped Windows 2012r2 VHDX file, create two VMs from it, then create the shared disks and add them to both VMs. From there, we can set them up as a Microsoft Failover Cluster and install clustered SQL.

We’ll use standard Hyper-V PowerShell cmdlets to demonstrate the process.

First, creation of the two VMs. Note that they have no disks attached yet.

$mastername = "SQL-1"
$slavename = "SQL-2"
$smbpath = "\\\VMs"
$masterpath = "$smbpath\$mastername"
$slavepath = "$smbpath\$slavename"
$master = New-VM -Name $mastername -Path $masterpath -MemoryStartupBytes 8GB -Generation 2 -NoVHD
$slave = New-VM -Name $slavename -Path $slavepath -MemoryStartupBytes 8GB -Generation 2 -NoVHD

Next, we copy the operating system vDisk to each VM’s folder and add it to the VM.

$vmtemplate = "\\\Templates\Windows Core 2012r2.vhdx"
copy $vmtemplate "$masterpath\$mastername.vhdx"
Add-VMHardDiskDrive -VM $master -Path "$masterpath\$mastername.vhdx" 
copy $vmtemplate "$slavepath\$slavename.vhdx"
Add-VMHardDiskDrive -VM $slave -Path "$slavepath\$slavename.vhdx"

You'll notice that because the template is on the same storage appliance as the VMs are being deployed to, the copy of the vDisk is nearly instant. This is due to local copy offload, or ODX, which is something we've covered before. If the template were on one VMstore and being deployed to another, distributed copy offload would kick in, making the copy far quicker than usual.

At this point, our VMs are pretty much standard. They have a single disk from a common golden image.

Next, we need to create the shared disks and attach them. In this case, we intend to have two shared disks for our SQL 2014 instance: one for the data and one for logs. Here's how:

New-VHD -Path "$masterpath\$mastername-data.vhdx" -Fixed -SizeBytes 100GB
New-VHD -Path "$masterpath\$mastername-logs.vhdx" -Fixed -SizeBytes 25GB

Add-VMHardDiskDrive -VM $master -Path "$masterpath\$mastername-data.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VM $master -Path "$masterpath\$mastername-logs.vhdx" -SupportPersistentReservations

Add-VMHardDiskDrive -VM $slave -Path "$masterpath\$mastername-data.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VM $slave -Path "$masterpath\$mastername-logs.vhdx" -SupportPersistentReservations

Things to note here:

  1. The -SupportPersistentReservations option. This is where the magic happens, allowing these two VMs to share the same vDisk.
  2. We've placed the two shared vDisks under the same directory as the master and simply pointed the slave at them, but this is arbitrary. These shared disks could be in their own directory. It is important to keep them all somewhat local to each other though.

At this point, we have two virtual machines that both have common, shared storage over SMB3.

This is how it looks in Hyper-V Manager once deployed:


In a follow-up article, we’ll look at provisioning this shared storage and setting up the Failover Cluster.


As mentioned, this functionality is still in beta and not yet generally available. When the initial support is released, it will come with some temporary limitations. The focus has been put on data integrity above everything else, so to limit the development and test scope and allow more resources to prove out the data integrity side of things, the use case for Shared Virtual Disks over SMB is currently limited to:

  • Windows 2012r2 Hosts
  • Windows 2012r2 Guests
  • SQL 2014 as a clustered guest application
  • Shared disks must be fixed VHDX
  • Snapshots and replication are currently not supported

More specific detail around these use cases will accompany the release, but we’re always interested to hear your thoughts and requirements around additional use cases and functionality.

[High Availability image by flattop341 and used unmodified under CC2.0]

Enterprise Cloud and All-You-Can-Eat Buffets

What does Enterprise Cloud have in common with an all-you-can-eat buffet?

Delegation and Self-Service

As well as automating tasks to run on a regular schedule to avoid manual handling, many tasks are being automated so that their execution can be delegated to other humans.

Consider the case where you have 10 VDI desktops deployed. As common tasks come up, such as restoring files from snapshots, diagnosing performance issues or provisioning new desktops, it's easy to jump in and take care of matters by hand. Take that number to 1,000 and you're likely to start seeing issues maintaining them by hand as you scale. Get to 10,000 or more and it's an entirely different class of problem.

This doesn’t just apply to VDI — DevOps deployments and Enterprise server farms are seeing the same kinds of challenges as they scale too.

In order to scale past a few systems, you need to start to delegate some number of tasks to someone else. Whether that be a helpdesk team of some kind, or a developer or application owner, or even potentially the end user of a VDI desktop.

However, delegation and self-service are not just a case of dumping a bunch of tech in front of folks and wishing them luck. In most cases, these folks won’t have the technical domain knowledge required to safely manage their portion of infrastructure. We need to identify the tasks that they need to be able to perform and package those up safely and succinctly.


Consider a restaurant with an all-you-can-eat buffet. One of the nice ones — we’re professionals here. Those buffets don’t have a pile of raw ingredients, knives and hotplates, yet they’re most definitely still self-service.

You’re given a selection of dishes to choose from. They’ve all been properly prepared and safely presented, so that you don’t need to worry about the food preparation yourself. There is the possibility of making some bad decisions (roast beef and custard), but you can’t really go far enough to actually do yourself any great harm.

They do this to scale. More patrons with lower overhead costs, such as staff.

DIY Self-Service

As we deploy some kind of delegation or self-service infrastructure, we need to:

  1. Come up with a menu of tasks that we wish to allow others to perform,
  2. Work out the safety constraints around putting them in the hands of others, and
  3. Probably still have staff to pour the bottomless mimosas instead of simply a tap.

We introduced these ideas in previous series of articles. In particular, #1 is a case of listing and defining one or more business problems, as we saw in the automation series. For example, users who accidentally delete or lose an important file might need a way to retrieve files from a snapshot from a few days ago. #2 above refers to taking and validating very limited user input. In the restore example above, we'd probably only allow the user to specify the day that contains the snapshot they're looking for and maybe the name of their VM.
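As an illustration of #2, the validation wrapped around that restore request could be as simple as the following sketch. The function name and the seven-day window are made up for the example:

```powershell
# Validate the two user-supplied inputs before handing them to a runbook.
function Test-RestoreRequest {
    param(
        [string]   $VMName,
        [datetime] $SnapshotDay
    )
    # Only allow simple VM names: no paths, wildcards or injection attempts.
    if ($VMName -notmatch '^[A-Za-z0-9-]{1,64}$') {
        throw "Invalid VM name."
    }
    # Only allow snapshots from the last 7 days (example policy).
    $age = (Get-Date) - $SnapshotDay
    if ($age.TotalDays -lt 0 -or $age.TotalDays -gt 7) {
        throw "Snapshot day must be within the last 7 days."
    }
    return $true
}
```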

Public Cloud

Self-service and autonomy are among the things that Public Cloud has brought to the table at a generic level. By understanding the specifics of your own Enterprise, you can not only meet but exceed that Public Cloud agility within your own data centre. This can also be extended to seamlessly include Public Cloud for the hybrid case.

Next Steps

As with each of these series, we’re starting here with a high level overview and will follow that up with an illustrative example over the coming articles. We’ll build on what we’ve learned in those previous series and we’ll again use the common System Center suite to get some hands-on experience. As always, the concepts and workflow apply quite well to tools other than System Center too.

To summarise, delegation and self-service are essential for most organisations as they scale. When used to safely give other groups autonomy, they can save you and your team significant time.

[Buffet picture by Kenming Wang and used unmodified under SA2.0]


A Very Particular Set Of Skills

Has anybody not heard of the recent ransomware attack known as WannaCry? No? Good. Hopefully you’re only aware of it through news articles, but for far too many folks, this is not the case.

We all keep our patches up to date and we all use various levels of protection to limit the attack surface and potential spread of these kinds of attacks.

Unfortunately, and for various reasons, these kinds of attacks can still wreak havoc.

When this does happen, it doesn’t have to ruin your year.

For an individual virtual machine affected by this, simply:

  1. Revert your affected VM back to a previous snapshot using SyncVM
  2. Start the VM disconnected from the network
  3. Apply any updates to close the exploited security hole
  4. Reconnect to the network
  5. Don’t pay the ransom

In cases where there are a very large number of affected VMs, a lot of this process can be automated.
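As a rough sketch of what that automation might look like with the Tintri Automation Toolkit: the cmdlet and property names below should be verified against your toolkit version, and Get-AffectedVMs is a hypothetical helper returning the names of the compromised VMs.

```powershell
# Sketch only: verify cmdlet names, parameters and snapshot property names
# against your version of the Tintri Automation Toolkit.
Connect-TintriServer -Server 'vmstore01.contoso.local'

$infectionTime = Get-Date '2017-05-12'   # example: when the outbreak began

foreach ($name in Get-AffectedVMs) {     # hypothetical helper
    $vm = Get-TintriVM -Name $name
    # Pick the most recent snapshot taken before the infection.
    $snap = Get-TintriVMSnapshot -VM $vm |
        Where-Object { $_.CreateDate -lt $infectionTime } |
        Sort-Object CreateDate -Descending |
        Select-Object -First 1
    Restore-TintriVM -VM $vm -Snapshot $snap
}
```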

To misappropriate and misquote the famous speech from the movie Taken, our customers have a very particular set of skills. Skills we’ve been assisting them with over a long career.

[Ransom Note image by Sheila Sund and used unmodified under CC BY 2.0]

Enterprise Cloud Orchestration recap

This brief article hopes to summarise and collect the recent set of articles published around orchestration in the Enterprise Cloud.

  1. In our first article, we gave an overview of orchestration in the context of the larger automation umbrella and looked at it as a way to simplify the safe execution of automated tasks.
  2. Part two in the series looked at orchestration workflows (runbooks in System Center speak), using System Center Orchestrator 2016 as an example.
  3. Article #3 looked at a Microsoft PowerShell template for calling complex PowerShell functionality from within a System Center Orchestrator runbook.
  4. In our next article, number four, we looked at the use-case specific code. Our example used Tintri SyncVM to perform some innovative and efficient data-copy management for our Test/Dev folks.
  5. Finally, article five in the series pulled it all together and allowed us to execute the orchestration runbook, and our PowerShell activity, and see the results.

This series extended upon our automation series to take a business problem and create an agile and automated solution suitable for safely delegating to folks outside our core infrastructure group. This could also be scheduled for regular execution within Orchestrator.

Keep your eye out for the next series, which will look at putting this in the hands of the end user through a simple self-service portal.

[La grande salle de la Philharmonie de Paris image by Jean-Pierre Dalbera and used unmodified under CC2.0]