Working with an Azure VM, I ran into an issue while trying to attach a disk. The command I tried was simply:

$ az vm disk attach --vm-name azureuw-ap009 -g azureuw-ap009 --new --size-gb 10 --disk newDisk
Addition of a managed disk to a VM with blob based disks is not supported.

The natural question arises: what is the difference between these two types of disks?

What is the difference between managed and unmanaged disks in Azure?

A quick search for the question above leads to a discussion on a Microsoft forum, where users explain that in the case of an unmanaged disk (which is the same as "blob based"), the Azure user has to take care of the storage account for the disk themselves. We can check what kind of settings we have for our Azure storage account. The command to get them from the command line is:

$ az storage account show --resource-group azureuw-ap009 --ids "/subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/hpc-test/providers/Microsoft.Storage/storageAccounts/hpcxdmod"
{
  "accessTier": null,
  "creationTime": "2017-05-24T07:39:25.469638+00:00",
  "customDomain": null,
  "enableHttpsTrafficOnly": false,
  "encryption": {
    "keySource": "Microsoft.Storage",
    "keyVaultProperties": null,
    "services": {
      "blob": {
        "enabled": true,
        "lastEnabledTime": "2017-12-15T12:56:14.109865+00:00"
      },
      "file": null,
      "queue": null,
      "table": null
    }
  },
  "id": "/subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/hpc-test/providers/Microsoft.Storage/storageAccounts/hpcxdmod",
  "identity": null,
  "kind": "Storage",
  "lastGeoFailoverTime": null,
  "location": "westeurope",
  "name": "hpcxdmod",
  "networkAcls": {
    "bypass": "AzureServices",
    "defaultAction": "Allow",
    "ipRules": [],
    "virtualNetworkRules": []
  },
  "primaryEndpoints": {
    "blob": "https://hpcxdmod.blob.core.windows.net/",
    "file": "https://hpcxdmod.file.core.windows.net/",
    "queue": "https://hpcxdmod.queue.core.windows.net/",
    "table": "https://hpcxdmod.table.core.windows.net/"
  },
  "primaryLocation": "westeurope",
  "provisioningState": "Succeeded",
  "resourceGroup": "hpc-test",
  "secondaryEndpoints": null,
  "secondaryLocation": null,
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "statusOfPrimary": "available",
  "statusOfSecondary": null,
  "tags": {},
  "type": "Microsoft.Storage/storageAccounts"
}

Let's analyse this output. For blob storage, accessTier should be "Hot" or "Cool"; however, for the account of interest it's null. Working with storage accounts, one learns the reason: the same data structure is used to describe two kinds of storage accounts, "General-purpose Storage Accounts" and "Blob Storage Accounts". It's confusing, since, as we know from the error message returned by the az vm disk attach command, our VM disk is "blob based", but that doesn't mean the account used to manage it is of the Blob kind. We can even try to set this property from the command line:

$ az storage account update --ids "/subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/azureuw-ap009/providers/Microsoft.Storage/storageAccounts/azureuwap009diag348" --access-tier Hot
Account property accessTier is required for the request.

This error message can be very misleading; you will also see it when you try to use the --access-tier option while creating a general-purpose storage account. There is an open issue about this in the azure-cli GitHub repository [1].
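Out of curiosity, this is where the option actually does apply. A minimal sketch, assuming hypothetical resource group and account names: the tier can be set for a Blob storage account, while the same flag on a general-purpose account produces exactly the error above.

# Works: a Blob storage account has an access tier (names are examples)
$ az storage account create -g myResourceGroup -n myblobaccount \
    --kind BlobStorage --sku Standard_LRS --access-tier Hot

# Fails with the error above: a general-purpose account has no access tier
$ az storage account create -g myResourceGroup -n mygeneralaccount \
    --kind Storage --sku Standard_LRS --access-tier Hot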

Next we see the creationTime attribute, whose name is self-explanatory. Another nulled attribute, customDomain, is used to configure a different endpoint domain name, for example a subdomain of your company domain. Appropriate DNS configuration (a CNAME record, as for an HTTP virtual host) and an update of this attribute are required to make your storage services accessible under a dedicated hostname; for a VM disk backend you probably won't care about that.

The enableHttpsTrafficOnly attribute also matters more for storage that is used directly, especially if you are going to connect to it over the internet. It's still interesting that it isn't set by default for the VM drive. Is our data encrypted between the VM and the blob while travelling across the Microsoft datacenter? The answer comes with the next attribute, where we see that encryption is enabled for the blob service. If you know how this works, whether it is HTTPS or a higher layer, please leave a comment under the article; personally I'm not sure how the connection between the VM hypervisor (the iSCSI initiator) and the backend storage service is secured.

Then we have a lot of basic information about the account, like id, name, kind or location, that I'm not going to discuss. Quite an interesting one is networkAcls, which can be used to configure additional security restrictions; another option of special importance for directly accessed, internet-facing storage.

As you can see, even though the blob is used as a VM backend, it still has a blob endpoint address. The properties for the secondary location and endpoints are set to null because of our SKU, which is Standard_LRS (locally redundant storage). In general, your storage account can be configured in three different ways in terms of redundancy: local, zone or geo; the third option also offers the possibility to read from the geo-replicated copy, as a fourth redundancy variant. You can read more about the implementation details of those services on Microsoft's web pages [2].

The first part of the SKU refers to the type of hard drives Azure will use to store your data; to make a long story short, Standard means spinning drives and Premium means SSDs.
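As an illustration, here is a minimal sketch of how the drive type and the redundancy level combine into the SKU name when creating a storage account (the resource group and account names are hypothetical):

# Locally redundant, spinning drives
$ az storage account create -g myResourceGroup -n mylrsaccount --sku Standard_LRS

# Geo-redundant with read access to the geo-replicated copy
$ az storage account create -g myResourceGroup -n myragrsaccount --sku Standard_RAGRS

# Locally redundant SSD storage
$ az storage account create -g myResourceGroup -n mypremiumaccount --sku Premium_LRS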

OK, so now we know the configuration options provided by the storage account abstraction. Should we create an account and a disk, and then somehow attach it to our VM? In fact, the error message from az didn't suggest the easiest option. We can still add an empty drive to our VM with one single command:

$ az vm unmanaged-disk attach -g AZUREUW-AP009 --vm-name azureuw-ap009 --size 30 --new

When the command execution finishes, it displays the whole VM description in JSON format. If you missed it, you can still check the storageProfile of the VM:

$ az vm show --name azureuw-ap009 -g AZUREUW-AP009 --query 'storageProfile'
{
  "dataDisks": [
    {
      "caching": "ReadWrite",
      "createOption": "Empty",
      "diskSizeGb": 1024,
      "image": null,
      "lun": 0,
      "managedDisk": null,
      "name": "azureuw-ap009-20170918-103201",
      "vhd": {
        "uri": "https://azureuwap009.blob.core.windows.net/vhds/azureuw-ap009-20170918-103201.vhd"
      }
    }
  ],
  "imageReference": null,
  "osDisk": {
    "caching": "ReadWrite",
    "createOption": "Attach",
    "diskSizeGb": 30,
    "encryptionSettings": null,
    "image": null,
    "managedDisk": null,
    "name": "azureuw-ap009",
    "osType": "Linux",
    "vhd": {
      "uri": "https://azureuwap009.blob.core.windows.net/vhds/AZUREUW-TEST-os-6050.vhd"
    }
  }
}

As you can see, our new drive is attached as dataDisks lun 0. From the attributes we know that it was created as an empty drive, that it's not managed, and we have its VHD address. Checking the managedDisk attribute of the osDisk, we also see the reason for the failure of the first az vm disk attach command: the osDisk is not managed. Two questions arise: if it is so easy to add an unmanaged drive, what are managed drives for? And since Microsoft encourages the use of managed drives, why is the osDisk unmanaged? The VM was created some time ago, so let's create a new one just to check whether the second complaint is still valid:

$ az vm create -g AZUREUW-AP009 --image RedHat:RHEL:6.8:6.8.2017051119 -n Test
$ az vm show --name Test -g AZUREUW-AP009 --query 'storageProfile'
{
  "dataDisks": [],
  "imageReference": {
    "id": null,
    "offer": "RHEL",
    "publisher": "RedHat",
    "sku": "6.8",
    "version": "6.8.2017051119"
  },
  "osDisk": {
    "caching": "ReadWrite",
    "createOption": "FromImage",
    "diskSizeGb": 32,
    "encryptionSettings": null,
    "image": null,
    "managedDisk": {
      "id": "/subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/AZUREUW-AP009/providers/Microsoft.Compute/disks/Test_OsDisk_1_f911fb099aef43c88fdc0164749003f5",
      "resourceGroup": "AZUREUW-AP009",
      "storageAccountType": "Premium_LRS"
    },
    "name": "Test_OsDisk_1_f911fb099aef43c88fdc0164749003f5",
    "osType": "Linux",
    "vhd": null
  }
}

So it's not: as you can see, by default we were assigned 32 GB of Premium_LRS (so not the cheapest option by default). Looking at this JSON output, you can see that a managed disk is treated more like a part of the VM specification than a separate storage service loosely related to the computing capabilities of the VM.
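With managed disks, the original goal from the beginning of this post becomes a two-liner. A minimal sketch (the disk name is hypothetical; it works here only because the new VM's osDisk is managed):

# Create a standalone managed disk - no storage account to take care of
$ az disk create -g AZUREUW-AP009 -n newManagedDisk --size-gb 10 --sku Standard_LRS

# Attach it to a VM whose disks are managed
$ az vm disk attach -g AZUREUW-AP009 --vm-name Test --disk newManagedDisk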

One can still create the osDisk as unmanaged by adding the --use-unmanaged-disk option to az vm create.
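For example (a hypothetical VM name, reusing the image from above):

$ az vm create -g AZUREUW-AP009 -n TestUnmanaged \
    --image RedHat:RHEL:6.8:6.8.2017051119 --use-unmanaged-disk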

Let's think about this very question: what kind of management is required for an "unmanaged" disk? To answer it, we have to understand what a storage account really is. It's an SKU, which stands for Stock Keeping Unit: a service one buys and pays for, so you're not charged for the single page blob that holds your VM hard drive. You can put multiple page blobs into the same storage account, you can even host other storage services in the same general-purpose account, and you'll be charged for all of it in total. The calculation of the final price may be really complicated, since it's driven by multiple factors (including the type of objects you store :)). Finally, there are certain limitations on single storage account capacity [3], and if you decide to use one storage account for multiple hard drives, and potentially multiple VMs, this is exactly the management you have to deal with for your "unmanaged" devices.
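As an illustration of that sharing, here is a minimal sketch of inspecting which VHD blobs live in a given account (the account and container names follow the vhds convention seen in the output above; treat them as examples, and note that the command needs an account key or connection string to authenticate):

# List the VHD page blobs sharing a single storage account
$ az storage blob list --account-name azureuwap009 --container-name vhds \
    -o table --query '[].{name:name, size:properties.contentLength}'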

There are also other practical differences. Just by checking the prices you'll find that managed disks support smaller volumes: 32 GB for managed versus 128 GB for unmanaged disks (with corresponding IOPS/bandwidth capabilities). The other big difference is that, at the moment of writing this post, managed disks are supported only with local replication (LRS); according to the Microsoft managed disks FAQ, other replication options should be added by the end of 2018. The only storage setting you can change for managed drives is the Standard or Premium tier.
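Switching that tier should be a single update call; a minimal sketch, with the assumption (worth verifying) that the disk must be detached or its VM deallocated for the change to take effect:

# Move a managed disk between Standard and Premium
# (assumption: the owning VM has to be deallocated first)
$ az disk update -g AZUREUW-AP009 -n Test_OsDisk_1_f911fb099aef43c88fdc0164749003f5 \
    --sku Standard_LRS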

If you have read my article describing how to work with azure-cli for basic IaaS configuration, you probably remember that removal of the disks after deleting the VM object was quite tricky. It's quite similar for managed drives: to completely remove all resources, we can search by the managedBy attribute value.

$ az vm delete -g AZUREUW-AP009 --name Test
Are you sure you want to perform this operation? (y/n): y
{
  "endTime": "2017-12-17T22:47:56.415632+00:00",
  "error": null,
  "name": "16a244f8-feb5-4f46-8809-f0b751d97529",
  "startTime": "2017-12-17T22:46:05.384069+00:00",
  "status": "Succeeded"
}
$ az resource list --resource-type "Microsoft.Compute/disks" -o table --query [*].[name,managedBy] | grep Test
Test_OsDisk_1_f911fb099aef43c88fdc0164749003f5  /subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/AZUREUW-AP009/providers/Microsoft.Compute/virtualMachines/Test
testAdd                                         /subscriptions/d89b80cd-7f8b-4410-8a3e-bf75107c800e/resourceGroups/AZUREUW-AP009/providers/Microsoft.Compute/virtualMachines/Test
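To actually get rid of the leftover OS disk, one more call is needed; a minimal sketch (the --yes flag skips the confirmation prompt):

# Delete the orphaned managed OS disk left behind by the VM
$ az disk delete -g AZUREUW-AP009 -n Test_OsDisk_1_f911fb099aef43c88fdc0164749003f5 --yes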

I hope this helped you understand the conceptual difference between managed and unmanaged disks and its practical implications for your Azure deployments (:

[1] https://github.com/Azure/azure-cli/issues/5115

[2] https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy

[3] https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets