In our previous articles, we created a high-level design for a solution to a business problem, and we automated one part of it, all using tools that come with a Tintri VM-aware Storage appliance.

Our code currently uses SyncVM to synchronise vDisks from a production SQL VM to a reporting VM. In this article, we’ll build on that a little so that it performs that task over a number of similar VMs.


We have code to sync vDisks between our sqlprod1 VM and our sqlrep1 VM. In our example, there are two other prod/rep VM pairs that we want to apply the same process to. We could just copy and paste the existing code a few times into the same script and be done with it. However, if we need to update that process later, we'd then need to remember to change it in three places, and adding more VMs to the list only makes that problem worse.

Instead, we'll package up the code we finished the last article with and call it for each of the VM pairs we want. This packaging uses PowerShell functions. I'm going to take our previous code and simply wrap it in a function. Here's the code, and we'll walk through the changes afterwards:

function Sync-ProdRep {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$prodname,
        [Parameter(Mandatory=$true)]
        [string]$reportname
    )

    # Connect to our VMstore
    $vmstore = ""
    Connect-TintriServer -UseCurrentUserCredentials $vmstore

    # Retrieve a VM object for both VMs by name
    $report = Get-TintriVM -Name $reportname
    $prod = Get-TintriVM -Name $prodname

    # Take a snapshot of our production VM and, using the
    # returned snapshot ID, retrieve the snapshot object
    $snapshotId = New-TintriVMSnapshot `
       -SnapshotDescription "Reporting snapshot" `
       -VM $prod `
       -SnapshotConsistency CRASH_CONSISTENT
    $snapshot = Get-TintriVMSnapshot -VM $prod `
       -SnapshotId $snapshotId

    # Use SyncVM's vDisk sync to synchronise the data and
    # log vDisks from the prod snapshot to the reporting VM
    $result = Sync-TintriVDisk -VM $report `
       -SourceSnapshot $snapshot
}

There's a lot of text there, but very little of it has changed from what we created in the previous article. We'll look specifically at what we've changed and why. There are plenty of PowerShell resources out there and I don't want to duplicate them here, but I'll describe a few things that are pertinent.

  1. We’ve removed the lines where we set the $prodname and $reportname variables to the names of the production and reporting VM.
  2. We’ve wrapped the whole code block in a starting and ending set of curly braces ({}) and indented each line to make it clearer to read.
  3. We’ve defined that code block as a function that we have called Sync-ProdRep. As we’ll see, this will make it trivial for us to call this code block over and over again.
  4. There’s a new line with all of that CmdletBinding() stuff in it. What this does is allow us to pass in some information when we call Sync-ProdRep and have it automatically placed into variables for us. This may seem a little unclear, but it should become clearer very shortly.
  5. We’ve got some new lines that start with [Parameter…] and end with the variable names we removed (see item #1 in this list). This, combined with item #4, defines prodname and reportname as parameters that can be passed to Sync-ProdRep to tell it which production VM and which reporting VM to use for our production-to-reporting synchronisation.
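If that CmdletBinding/parameter pattern is new to you, here it is in isolation, stripped of any Tintri specifics. Greet-Person is a made-up function used purely for illustration:

```powershell
function Greet-Person {
    [CmdletBinding()]
    param(
        # Mandatory: PowerShell will prompt for this if it's omitted
        [Parameter(Mandatory=$true)]
        [string]$Name
    )
    Write-Output "Hello, $Name"
}

Greet-Person -Name "Alex"
```

The same mechanism is what lets our Sync-ProdRep function receive the prodname and reportname values each time we call it.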

That may be a lot to take in if automation and PowerShell are new to you. What it means overall is that to execute the sync code within our script, we can simply add some lines to the bottom of the script (after the closing brace) to call our Sync-ProdRep code, telling it which VMs to operate on. Like this:

Sync-ProdRep -ProdName "sqlprod1" -ReportName "sqlrep1"
Sync-ProdRep -ProdName "sqlprod2" -ReportName "sqlrep2"
Sync-ProdRep -ProdName "sqlprod3" -ReportName "sqlrep3"

Not bad. We’ve abstracted our synchronisation code into its own function. If we need to update or change it (we will be), we do that in one single place. If we want to change which VMs we use, or how we define the list (we will be), that’s also in a single place.
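To show what "how we define the list" could look like, here's a sketch (not part of the script so far) that holds the VM pairs in a single array and loops over it. The pair data is just our example names:

```powershell
# Define every prod/rep pairing in one place
$vmPairs = @(
    @{ Prod = "sqlprod1"; Report = "sqlrep1" },
    @{ Prod = "sqlprod2"; Report = "sqlrep2" },
    @{ Prod = "sqlprod3"; Report = "sqlrep3" }
)

# Call our function once per pair
foreach ($pair in $vmPairs) {
    Sync-ProdRep -ProdName $pair.Prod -ReportName $pair.Report
}
```

Adding a fourth pair is then a one-line change to the array, with no new calls to maintain.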

Are We There Yet?

We’ve done a lot to go from a business issue to a solution design and now have some code that seems like it will do the job we need.

There is still some more for us to do, but we’re getting there. I promise.

Next, we’ll look at improving the maintainability of our automation, and we’ll spend some time covering error handling and diagnostics. Currently we aren’t handling any error conditions or doing any logging at all, and that’s bad.
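As a taste of where that's headed, a minimal sketch of PowerShell's try/catch around one of our existing calls might look like this (the exact handling we adopt is for the next article):

```powershell
try {
    # -ErrorAction Stop turns a failure into a terminating
    # error so the catch block below actually fires
    $prod = Get-TintriVM -Name $prodname -ErrorAction Stop
}
catch {
    # Log the failure and give up on this VM pair
    Write-Error "Failed to retrieve VM '$prodname': $_"
    return
}
```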

[Machinery image created by bradleyolin and used unmodified under CC 2.0]

