Enterprise Cloud and All-You-Can-Eat Buffets

What does Enterprise Cloud have in common with an all-you-can-eat buffet?

Delegation and Self-Service

As well as automating tasks to run on a regular schedule and avoid manual handling, many tasks are being automated so that their execution can be delegated to other people.

Consider the case where you have 10 VDI desktops deployed. As common tasks come up, such as restoring files from snapshots, diagnosing performance issues or provisioning new desktops, it’s easy to jump in and take care of matters by hand. Take that number to 1000 and you’re likely going to start to see issues maintaining those by hand as you scale. Get to 10,000 or more and it’s an entirely different class of problem.

This doesn’t just apply to VDI — DevOps deployments and Enterprise server farms are seeing the same kinds of challenges as they scale too.

In order to scale past a few systems, you need to start delegating some tasks to someone else: a helpdesk team of some kind, a developer or application owner, or even potentially the end user of a VDI desktop.

However, delegation and self-service are not just a case of dumping a bunch of tech in front of folks and wishing them luck. In most cases, these folks won’t have the technical domain knowledge required to safely manage their portion of infrastructure. We need to identify the tasks that they need to be able to perform and package those up safely and succinctly.


Consider a restaurant with an all-you-can-eat buffet. One of the nice ones — we’re professionals here. Those buffets don’t have a pile of raw ingredients, knives and hotplates, yet they’re most definitely still self-service.

You’re given a selection of dishes to choose from. They’ve all been properly prepared and safely presented, so that you don’t need to worry about the food preparation yourself. There is the possibility of making some bad decisions (roast beef and custard), but you can’t really go far enough to actually do yourself any great harm.

They do this to scale: more patrons with lower overheads, such as staffing costs.

DIY Self-Service

As we deploy some kind of delegation or self-service infrastructure, we need to:

  1. Come up with a menu of tasks that we wish to allow others to perform,
  2. Work out the safety constraints around putting them in the hands of others, and
  3. Accept that we'll probably still need some staff, to pour the bottomless mimosas instead of simply installing a tap.

We introduced both of these in previous series of articles. In particular, #1 is a case of listing and defining one or more business problems, as we saw in the automation series. For example, users who accidentally delete or lose an important file might need a way to retrieve files from a snapshot taken a few days ago. #2 refers to accepting and validating very limited user input. In the restore example above, we'd probably only allow the user to specify the day of the snapshot they're looking for, and maybe the name of their VM.
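To make #2 concrete, here's a minimal sketch of that constrained restore task in PowerShell. The function and parameter names are our own invention for illustration, not part of any product API:

```powershell
# Hypothetical delegated restore task. The only two inputs a user can
# supply are tightly validated before anything else happens.
function Request-FileRestore {
    param(
        # Only allow snapshots from the last week
        [Parameter(Mandatory)]
        [ValidateRange(1, 7)]
        [int]$DaysAgo,

        # Only allow plain VM names: no paths, wildcards or spaces
        [Parameter(Mandatory)]
        [ValidatePattern('^[A-Za-z0-9-]{1,32}$')]
        [string]$VMName
    )
    # The real work (locating the snapshot, mounting it, copying files)
    # would go here. The validation above is the delegation safety net.
    "Restoring $VMName from $DaysAgo day(s) ago"
}
```

If a user passes a day out of range or a suspicious VM name, PowerShell rejects the call before our code runs at all.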

Public Cloud

Self-service and autonomy are among the things that Public Cloud has brought to the table at a generic level. By understanding the specifics of your own Enterprise, you can not only meet but exceed that Public Cloud agility within your own data centre. This can also be extended seamlessly to include Public Cloud for the hybrid case.

Next Steps

As with each of these series, we’re starting here with a high level overview and will follow that up with an illustrative example over the coming articles. We’ll build on what we’ve learned in those previous series and we’ll again use the common System Center suite to get some hands-on experience. As always, the concepts and workflow apply quite well to tools other than System Center too.

To summarise, delegation and self-service are essential for most organisations as they scale. Used to safely give other groups autonomy, they can save you and your team significant time and effort.

[Buffet picture by Kenming Wang and used unmodified under SA2.0]



Orchestration For Enterprise Cloud

In our last series, we looked at taking a business problem or need and turning it into a piece of automated code. We started by breaking the problem down into smaller pieces, then put together a piece of automation to demonstrate the overall solution, and then made it more modular and added some error handling.

In this series, we’re going to extend upon this and integrate our automation into an orchestration framework. This approach will apply to any orchestration framework, but we’ll use System Center Orchestrator 2016 in our examples.

Why Orchestration?

Primarily for delegation of tasks. We may have written a script that we can run to perform some mundane task, but for us to be able to successfully scale, we need to start putting some of these tasks into the hands of others.

As a dependency for delegation, we also want to use automation and orchestration as a way to guarantee us consistent results and general damage prevention. Sure, we could just allow everyone access to SCVMM or vCenter to manage their virtual machines, but that’s a recipe for disaster. Orchestration gives us a way to safely grant controlled access to limited sets of functionality to make other groups or teams more self-sufficient.

The Process

Much like in our earlier automation series, we want to start by defining a business problem and breaking it down into smaller tasks from there. We also want to extend this to include the safe delegation of the task. That means thinking carefully about the input we'll accept from the user, the information we'll give back to the user, and the kind of diagnostic information we'll need to collect so that if something goes wrong, we can take a look later.

The fictitious, but relevant, business problem that we’re going to solve in this series is a common DevOps problem:

Developers want to be able to test their code against real, live production data to ensure realistic test results.

The old approach would be to have someone dump a copy of the production application database and restore the dump to each of the developers’ own database instances. This is expensive in time and capacity and can adversely impact performance. It’s also error-prone.

Instead, we’ll look at making use of Tintri’s SyncVM technology to use space-efficient clones to be able to nearly-instantly make production data available to all developers. We’ll do this with some PowerShell and a runbook in System Center Orchestrator.

We can then either schedule the runbook to be executed nightly, or we can make the runbook available to the helpdesk folks, who can safely execute the runbook from the Orchestrator Web Console. [Later, we’ll look at another series that shows us how to make this available to the developers themselves — probably through a Service Manager self-service portal — but let’s not get too far ahead of ourselves]

Core Functionality

Our production VM and developer VMs all have three virtual disks:

  1. A system drive that contains the operating system, any tools or applications that the developer needs, and the application code either in production or in development.
  2. A disk to contain database transaction logs.
  3. A disk to contain database data files.

In our workflow, we’ll want to use SyncVM to take vDisks #2 and #3 from a snapshot of the production VM, and attach them to the corresponding disk slots on the developer’s VM. We want this process to not touch vDisk #1, which contains the developer’s code and development tools.


For us to use SyncVM (as we saw previously), we need to pass in some information to the Tintri Automation Toolkit about what to sync. Looking at previous similar code, we probably need to know the following:

  • The Tintri VMstore to connect to.
  • The production VM and the snapshot of it that we wish to sync.
  • The set of disks to sync.
  • The destination developer VM.

In order to limit potential accidents, we probably want to limit how much of this comes from the person we’re delegating this to. In our example, we’ll assume a single VMstore and a single known production VM. We’ve also established earlier that there is a consistent and common pattern for the sets of disks to sync (always vDisk #2 and #3). The only parameter where there should be any user input is the destination VM. The rest can all be handled consistently and reliably within our automation.
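As a sketch (with hypothetical VM and VMstore names), the delegated entry point might expose just that one parameter and keep everything else fixed:

```powershell
# Hypothetical wrapper: the destination VM is the only user input, and it
# is checked against a known list of developer VMs. The VMstore hostname,
# production VM and disk set are fixed inside the automation.
function Get-SyncRequest {
    param(
        [Parameter(Mandatory)]
        [ValidateSet('dev-vm-01', 'dev-vm-02', 'dev-vm-03')]
        [string]$DestinationVM
    )
    [pscustomobject]@{
        VMstore      = 'vmstore01.example.com'  # single known VMstore
        ProductionVM = 'sqlprod1'               # single known production VM
        Disks        = 2, 3                     # always vDisks #2 and #3
        Destination  = $DestinationVM
    }
}
```

Anything not in the known list is rejected before the automation does any real work.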


After this has been executed, we’ll need to let the user know if it succeeded or failed, along with any helpful information they’ll need. We’ll also want to track a lot of detailed information about our automation so that if an issue arises, we have some hope of resolving it.
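A simple way to collect that diagnostic trail is to log every step with a timestamp. Here's a minimal sketch; the function name and log path are our own, for illustration:

```powershell
# Hypothetical logging helper: each runbook step appends a timestamped
# line to a log file, giving us something to examine when a run fails.
function Write-RunbookLog {
    param(
        [Parameter(Mandatory)]
        [string]$Message,

        [string]$Path = (Join-Path ([IO.Path]::GetTempPath()) 'runbook.log')
    )
    $line = '{0:u} {1}' -f (Get-Date), $Message
    Add-Content -Path $Path -Value $line
}
```

Wrapping the runbook body in try/catch and logging the inputs, each major step and any exception text usually gives later troubleshooting enough to go on.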


So far, we have again defined a business need or business problem (synchronisation of production data to developer VMs), defined the set of inputs we’ll need and where those will come from, and we’ve defined the outputs.

In the next instalment, we'll start to get our hands dirty with System Center Orchestrator, followed by PowerShell and the Tintri Automation Toolkit for PowerShell.

[Orchestra image by Sean MacEntee and used unmodified under CC2.0]

Automation and Private Cloud part IV

Over the past three articles in this series, we’ve defined a business problem and broken it down into components in order to create a simple high-level design, we’ve written some initial code to perform some of the fundamental operations, and using that initial design and some abstraction, we’ve incorporated some of the rest of the design objectives.

We now have automation code that probably works just fine in the usual happy case. But we're not done yet. The article after this will help us take care of issues as they arise; in this one, we'll look at making our automation more flexible, portable and cloud-like.

Detached Configuration

In our last article, we ended up calling our Sync-ProdRep function with the names of a production and a reporting VM to process. It looked like this in our script:

Sync-ProdRep -ProdName "sqlprod1" -ReportName "sqlrep1"
Sync-ProdRep -ProdName "sqlprod2" -ReportName "sqlrep2"
Sync-ProdRep -ProdName "sqlprod3" -ReportName "sqlrep3"

This is fine given that we have only a handful of pairs. But what about cases where we have many more? Or if we want to allow others to use our automation script? In the latter case, the instructions will be to open the script in a text editor and change all of the VM names.

Here, we’ll create a simple configuration file that contains only the host mappings, and we’ll teach our script how to parse it and call the Sync-ProdRep function for each VM pair. It may sound complicated, but we’ll create the file in JSON format and use the built-in JSON support that PowerShell already has.

First create a file called host-mappings.json in a text editor and add this as the contents:

{
  "mappings": [
    { "prod": "sqlprod1", "report": "sqlrep1" },
    { "prod": "sqlprod2", "report": "sqlrep2" },
    { "prod": "sqlprod3", "report": "sqlrep3" }
  ]
}

The JSON format is well-documented across the web, but what we have is a JSON object called “mappings” that is an array (a set) of production and reporting VM name mappings.

Now we’ll replace all three Sync-ProdRep lines above with this little PowerShell excerpt:

$mappings = ConvertFrom-Json "$(Get-Content 'host-mappings.json')"
$mappings.mappings | `
   Foreach-Object { Sync-ProdRep -ProdName $_.prod -ReportName $_.report }

This may seem daunting, but there’s nothing here that you won’t find in any of the standard PowerShell corners of the interwebs. Here’s a breakdown:

  1. We’re using Get-Content to read our JSON file into a string.
  2. ConvertFrom-Json is turning our JSON text into a PowerShell object that we’re storing in a variable called $mappings.
  3. The $mappings object contains an array called ‘mappings’ (see line 2 of our JSON file).
  4. We’re using Foreach-Object to take each item in that array (each item is the name of a production and the name of a reporting VM) and pass each in to Sync-ProdRep as a production and a reporting VM name.

This achieves exactly the same thing as the code we ended the last article with. The difference is that to add, remove or modify the list of VM pairs we process, we simply modify the JSON file and not the code itself.


This kind of abstraction allows us to grant control over different parts of the process to different users. In its simplest form, that JSON file could be modified by someone without any PowerShell experience and our code would work unchanged.

Extend that with another business case and the JSON file could be automatically generated from a list of VMs from SCVMM that have particular tags for example.

Or it could be created carefully by some automation behind a System Center (or other) self-service portal. The synchronisation automation code doesn’t need to change as the business needs grow over time.


You’ll notice that the Tintri VMstore hostname is still a hard-coded string in our Sync-ProdRep function. If we had VMs spread across multiple VM-aware storage appliances, this wouldn’t work. How would you move that hostname into the JSON configuration file and pass it into the Sync-ProdRep function?
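As a hint, one possible shape for that extended configuration file might look like this (the "vmstore" field name is our own choice):

```json
{
  "vmstore": "vmstore01.vmlevel.com",
  "mappings": [
    { "prod": "sqlprod1", "report": "sqlrep1" },
    { "prod": "sqlprod2", "report": "sqlrep2" },
    { "prod": "sqlprod3", "report": "sqlrep3" }
  ]
}
```

The calling code could then read $mappings.vmstore once and hand it to Sync-ProdRep as an extra parameter.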


Here’s a snapshot of where our automation code is at this point:

function Sync-ProdRep {
    param(
        [string]$ProdName,
        [string]$ReportName
    )

    # Connect to our VMstore
    $vmstore = "vmstore01.vmlevel.com"
    Connect-TintriServer -UseCurrentUserCredentials $vmstore

    # Retrieve a VM object for both VMs by name
    $report = Get-TintriVM -Name $ReportName
    $prod = Get-TintriVM -Name $ProdName

    # Take a snapshot of our production VM and, using the
    # returned snapshot ID, retrieve the snapshot object
    $snapshotId = New-TintriVMSnapshot `
       -SnapshotDescription "Reporting snapshot" `
       -VM $prod `
       -SnapshotConsistency CRASH_CONSISTENT
    $snapshot = Get-TintriVMSnapshot -VM $prod `
       -SnapshotId $snapshotId

    # Use SyncVM's vDisk sync to synchronise the data and
    # log vDisks from the prod snapshot to the reporting VM
    $result = Sync-TintriVDisk -VM $report `
       -SourceSnapshot $snapshot
}

$mappings = ConvertFrom-Json "$(Get-Content 'host-mappings.json')"
$mappings.mappings | `
   Foreach-Object { Sync-ProdRep -ProdName $_.prod -ReportName $_.report }


[Image created by Wolfgang Maslo and used unmodified under CC 2.0]

Happy Anniversary!

December 13th 2016 marks the two-year anniversary since the first official GA release of Hyper-V support for Tintri’s VM-aware storage platforms. I thought I’d take a little time to cover some of the history and some of the achievements along the way.

Tintri VMstore was designed from the ground up to be a VM-aware storage platform for Enterprise and Cloud Service Providers. For the first few years, the NFSv3 protocol implementation and vCenter management stack were the primary points of contact between the VMstore and customers’ virtualisation ecosystems.

Early in July 2013, we started the process of adding the necessary components for supporting Hyper-V customers: SMB 3.0+ support, Active Directory integration, an SMI-S provider and Hyper-V WMI support. All built on top of the same VM-aware storage platform that had already been serving our VMware customers in mission-critical applications.

A year and five months later (after a few months of alpha and beta), we brought our Hyper-V support to the world in a GA release.

It was a fairly basic implementation. Some VMware functionality didn’t exist with Hyper-V and we learned a lot about how we could best fit into a Hyper-V environment.

Since then, we’ve been busy making sure that innovative new Tintri functionality is exposed to Hyper-V customers as quickly as is practical, including some of the following:

  • SyncVM, including File-Level Recovery
  • Quality of Service
  • Tintri VM Scale Out
  • VM consistent snapshot integration with Hyper-V 2012r2 and 2016
  • Tintri VM Analytics

We’ve also added a great deal of awesome Hyper-V specific functionality — some of which doesn’t yet exist in the VMware ecosystem — plus a plethora of small improvements and fixes.

None of this would be possible without all of the folks involved. We have a dedicated team of talented engineers developing, testing and supporting our Hyper-V featureset. They’re not alone though. Our entire Engineering organisation is dedicated to delivering the best possible product suite to all of our customers and right from that very first day in July 2013, the whole team has been there to help.

Our field teams and customer base have also been invaluable. There have been countless long discussions around new features and improving current functionality. Thanks, and keep those letters coming, folks.

We aren’t done yet though. We’ve come a long way over the past two years, and there’s so much more that we want to do. The whole company is constantly looking for ways to continue to innovate and help make customers’ lives easier — no matter which hypervisor(s) they use.

Watch this space.

[Image courtesy of David Joyce under the Creative Commons 2.0 license]


Tintri and Microsoft SCVMM

You may have seen this press release leading up to Microsoft Ignite 2016. Or perhaps you stopped by our booth and heard it from the horse’s mouth (I’m using artistic license — I get called an ass a lot). Either way, we’re extending our SCVMM integration.

Right from the time that our Hyper-V support left the nest for that first flight 2 years ago, it had some integration with Microsoft’s System Center Virtual Machine Manager (SCVMM to its friends). With an SMI-S provider built into Tintri VMstore, creating shares, assigning them to Hyper-V hosts and clusters, and managing permissions is a no-brainer from within SCVMM.

So what’s new?

Back at the Tintri Factory, we’ve put together an SCVMM client plug-in that provides access to key per-VM functionality in a handful of unobtrusive buttons.


Once installed, select a VM and snapshot, clone or replicate it, or take a look at its vital signs:


Simple and straightforward.

This epic plug-in is available as a tech preview from the Tintri Support Portal. Take a look.


[Epic plug image by Epic Fireworks under CC 2.0]