Hyper-V Replica Capacity Planner

Windows Server 2012 introduced a new feature for Hyper-V – the Hyper-V Replica. As discussed here, Hyper-V Replica allows you to replicate virtual machines to a secondary or offsite data centre for easy disaster recovery.

Microsoft have now released a capacity planner which will be especially useful if you are working with a third party, as it helps you keep the cost of offsite hardware to a minimum.

You can download the capacity planner from here >> http://www.microsoft.com/en-eg/download/details.aspx?id=39057


Overview of SMB 3.0 in Windows Server 2012

History of the SMB Protocol

SMB (Server Message Block), also known as CIFS (Common Internet File System), is a network protocol (it runs directly over TCP port 445, or over NetBIOS using ports 137 & 138 on UDP and 137 & 139 on TCP). The SMB protocol uses a client-server approach – a client makes a request and the server responds.

SMB 1.0 was first introduced into Windows to support the very early network operating systems from Microsoft. The original protocol didn't change that much until the introduction of Windows Vista and Windows Server 2008, at which point Microsoft released SMB v2.0. Some of the new features included at the time were larger buffer sizes and increased scalability (the number of open file handles and the number of shares a server can advertise, and more).

With the release of Windows 7 and Windows Server 2008 R2 along came version 2.1 of the protocol, which included a small number of enhancements and a new feature for the opportunistic locking of files.

When Server 2012 first started to be talked about the SMB protocol was referred to as version 2.2, but Microsoft have since promoted it to a major version, 3.0, due to the huge number of changes that have been included.

What’s new in the SMB protocol with version 3.0?

As mentioned above, the list of new features in this version of SMB is impressive; it includes:

Continue reading

Hyper-V & SMB Direct (RDMA)

Windows Server 2012 brings an update to SMB with so many new features in it that Microsoft have bumped it up an entire version and called it SMB 3.0. Many of the new features included are there to directly improve your experience with Hyper-V.

Hyper-V now supports using SMB storage for your virtual machines, which opens up a whole new range of deployment scenarios. This will benefit not only large corporates but also smaller customers, making highly available virtual infrastructure achievable without the cost of dedicated SANs and the complexity of fibre channel and iSCSI LUNs.

One of these new SMB 3.0 features is SMB Direct, which makes use of RDMA (Remote Direct Memory Access). RDMA allows computers on the network to send and receive data without using processor time, interrupting the OS or involving the cache. This obviously aids VM density – you will be able to run more VMs on your host machine as the processor won't be so tied up with network operations – and it also allows data transfer with very high throughput and ultra-low latency.

RDMA works by offloading the protocol to the NIC (you need to make sure you purchase an RDMA NIC – both servers will need an RDMA-compatible NIC). If this hardware is in place it makes it possible for one computer to directly read data from and write data to another computer's memory.

As mentioned above you need to have the correct hardware in place, and that involves having the right NIC, which is sometimes known as an R-NIC. There are currently three different types available from various manufacturers: iWARP, RoCE and InfiniBand.

Setting up your server infrastructure to support this could not be simpler – you don't need to do anything! When two computers start to talk they make a standard connection via TCP; once the connection is established they share information about what they are capable of doing (data transfer also begins at the same time so there is no overhead or latency). Once both computers have decided they are both capable of running SMB 3.0 and have RDMA-capable hardware, they will seamlessly switch.
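If you want to check whether your NICs support RDMA and whether SMB Direct is actually in play, the in-box SMB and network adapter PowerShell cmdlets in Windows Server 2012 will show you. A quick sketch (run it on both ends; the output is informational only):

    # List the physical NICs on this server that report RDMA capability
    Get-NetAdapterRdma

    # What the SMB client sees for each local interface
    # (RDMA Capable should read True for an R-NIC)
    Get-SmbClientNetworkInterface

    # After pushing some traffic to an SMB 3.0 share, confirm the dialect
    # in use and whether multichannel has paired up RDMA-capable interfaces
    Get-SmbConnection
    Get-SmbMultichannelConnection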

Using some of the new NICs available from vendors such as Mellanox (the ConnectX-3) you are able to get incredible speeds – up to 56Gb/s when using InfiniBand! These are amazing speeds, but when you start to pair this with SMB 3.0's new multichannel feature and Windows Server 2012's network teaming capabilities the speeds possible really are incredible. Jose Barreto, who works on the file server team at Microsoft, has a blog post on how to configure the Mellanox.

Microsoft presented some stats on how SMB data storage with RDMA performs which are definitely worth having a look at.

I would be very interested to hear from people who have started to play with this technology and see how you are finding it in a real world environment.

RDMA-compatible NICs are definitely something to add to your shopping list next time you are purchasing server infrastructure.

Windows Server 2012 Coming September 4th!

Yesterday we found out that Windows Server 2012 has gone RTM, with an official blog post from the Windows Server engineering team! September the 4th (there will be a virtual launch event) is the day it will be available for you to purchase and start upgrading your servers; volume licence customers will be able to get hold of a copy 'in the next couple of weeks'.

Microsoft announced their pricing a few weeks ago – there will be the usual Datacenter and Standard editions (no Enterprise edition this time) as well as two editions aimed at smaller business customers, known as Essentials and Foundation.

To get hold of the client version – 'Windows 8' – you are going to have to wait a little longer, as that will be made available on October the 26th.

CSVs (v2) in Server 2012

CSVs (Cluster Shared Volumes) were introduced in Windows Server 2008 R2 for use with Hyper-V, which suddenly allowed you to migrate virtual machines from one node to another. According to Microsoft the idea of the CSV went from design board to being production ready in around 12 months, which is why when it first appeared it was only supported for use with Hyper-V – when you enabled CSVs in 2008 R2 you got the message below:

[Screenshot: the CSV support warning shown when enabling Cluster Shared Volumes in Windows Server 2008 R2]

I don't want to go into why to use CSVs as that has been covered many times over the past few years, but as a high level overview a CSV is a clustered file system that allows multiple nodes in a cluster to have simultaneous access to a LUN. CSVs became popular because not only did they offer a huge improvement in fault tolerance but they also allowed the VM to become the smallest unit of failover. Before CSVs, if you wanted to move a single VM from one machine to another each VM would need to be on its own LUN – in other words the LUN itself was the smallest unit of failover.

CSV provides I/O fault tolerance – a CSV is able to transparently handle a node, network or HBA failure. This is achieved because the application's file handle is virtualized before being handed to NTFS; if there is then a problem, the I/O can be queued by the CSVFS filter until the problem has been resolved. As an example of how this works: you have a fibre connection to your SAN and VMs running on a node utilizing CSVs, and you accidentally disconnect the fibre. The I/O will be paused, redirected I/O will be established via the coordinator node, and I/O can then be resumed via this new path until you reconnect the fibre and everything is OK again. Without CSVs you would instantly have problems with the VMs – you would have downtime!
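You can see this behaviour for yourself with the failover clustering cmdlets in Windows Server 2012 – Get-ClusterSharedVolumeState shows, per node, whether a CSV is in direct or redirected mode and why. A small sketch, with the CSV name being just an example:

    Import-Module FailoverClusters

    # For each node, show whether the CSV is in Direct, File System
    # Redirected or Block Redirected mode, and the reason for any redirection
    Get-ClusterSharedVolumeState -Name "Cluster Disk 1" |
        Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason -AutoSize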

Now with Server 2012 Microsoft have gone back to the drawing board and completely rewritten CSVs from scratch to build upon their success. CSVs in Windows Server 2012 are now known as CSV v2 (imaginative!!)

So why do we need v2 CSVs and what improvements do they bring?

Even more fault tolerance and resiliency for high availability built directly in

  • In CSV v2, SMB Multichannel is used to detect which network should be used as the CSV network. If there are multiple networks available, Windows Server will use them to stream I/O to your SAN over multiple channels at once.
  • CSV Block Cache – this allows Windows to provide a cache for un-buffered I/O. The cache runs at the block level, which allows caching of data right inside the VHD file. There have been caching systems like this on the market for a while, but they have always been hardware based using SSDs; this is different because Windows Server 2012 has it built in and it utilizes system RAM (you need to factor this in when looking at your hardware, although by default the cache size is 512MB as Microsoft have found this gives the optimal level of performance for the minimum cost). This can dramatically improve a VDI deployment on Hyper-V (I plan on doing a demo of this in a future post – see the PowerShell sketch after this list for how to turn it on).
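Here is a rough sketch of turning the block cache on with PowerShell. The property names below – SharedVolumeBlockCacheSizeInMB on the cluster and CsvEnableBlockCache on the individual volume – are how I understand the 2012 release exposes it, so treat them as assumptions and check against your own cluster:

    Import-Module FailoverClusters

    # Reserve 512 MB of host RAM for the CSV block cache (cluster-wide setting)
    (Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

    # Enable the cache on an individual CSV (assumed private property name)
    Get-ClusterSharedVolume -Name "Cluster Disk 1" |
        Set-ClusterParameter -Name CsvEnableBlockCache -Value 1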

Less time spent in redirected I/O mode

  • Hugely improved backup interoperability. In 2008 R2 (CSV 1.0), when you started a backup the CSV would be moved to the node that started the backup and it would have to be accessed by the other nodes in redirected mode for the duration of the backup. Your backup software would have to understand CSVs and be able to work with the CSV APIs – not all backup software was updated to support this. With Server 2012 Microsoft have worked far more closely with vendors to make their software CSV backup aware. With Windows Server 2012 you are only in redirected I/O mode for the short time the VSS snapshot is taken – for the rest of the time your nodes access the disk in direct mode.
  • You can have parallel backups running on the same or different CSV volumes and cluster nodes.

CSVs were originally only supported for Hyper-V workloads because Microsoft had to work out which file system APIs etc. they needed to optimize to work with Hyper-V, and they did not have time to do this for anything more than Hyper-V.

In Windows Server 2012 far more workloads are supported, including the file server workload, which opens up a whole range of possibilities for fault tolerance with your file servers! This is achieved by having multiple levels of CSV I/O redirection. There is the original 'file system' redirection and two new levels: file level and block level redirection.

Multiple subnets are now supported.

CSVs are enabled by default, unlike Windows Server 2008 R2 where you had to enable CSVs before you were able to go and assign a disk as a CSV volume. You simply right click on the disk and click 'Enable Cluster Shared Volume', and unlike Windows Server 2008 R2 there is no separate area for the CSV disks – they all show in the same window.

[Screenshot: adding a disk to Cluster Shared Volumes in Failover Cluster Manager]
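If you prefer PowerShell to the Failover Cluster Manager GUI, adding an available disk to Cluster Shared Volumes is a one-liner – the disk name below is just whatever your spare cluster disk happens to be called:

    Import-Module FailoverClusters

    # Add an available cluster disk to Cluster Shared Volumes, then list all CSVs
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    Get-ClusterSharedVolume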

In CSV 1.0 custom reparse points were used; in 2012 standard mount points are used instead, which makes things far more readable and easy to understand. For example, you now use C:\ClusterStorage\Volume1 when setting up a performance counter or monitoring free space etc., instead of the disk GUID which was not easily understood.

The external authentication dependencies have been removed. The dependency on AD has been removed, being replaced by local user accounts which exist on each node of the cluster (they are kept synchronized).

You may have noticed a slow response when you try to browse the Cluster Storage folder from any node other than the coordinator node – this change stops that from happening, and it also means there is no longer a need for a domain controller to be up and running before you can access the CSV volumes.

A mini-filter driver is no longer used; it has been replaced by a CSV proxy file system. If you look in Disk Management you will see that the disks now show as CSVFS formatted (although it is still NTFS when you pull back the covers). This was required because, now that CSV supports more than Hyper-V, applications need to know what they are running on. This allows an application to detect that it is on a CSV volume, which will be useful if a particular piece of software is not supported on CSV – a hard block can be coded into it. This new approach is also better than the mini-filter because the mini-filter driver sat above the file system and intercepted I/O, which meant it bypassed things such as AV. With the new file system approach you will be able to attach to this file system just as you would with NTFS.

[Screenshot: CSVFS-formatted volumes shown in Disk Management]

CSV v2 also supports the huge number of improvements made in the file server area, such as:

  • BitLocker is supported on CSV volumes.
  • Full support for Offloaded Data Transfer (ODX).
  • Defrag improvements and the Check Disk 'Spot Fix' feature.
  • Support for Storage Spaces.

That was just a very quick overview of some of the new features and improvements made for Cluster Shared Volumes in 2012. Next I’ll look at how VMware’s offering compares to CSV v2!

Hyper-V Replica in Windows Server 2012 – Amazing!!

UPDATE – Have a look here for some of the new features that Windows Server 2012 R2 will bring to Hyper-V Replica (it's even more amazing!)

Hyper-V Replica is one of the most highly anticipated features of Windows Server 2012. With it comes a whole new range of DR possibilities – something that would previously not have been possible, or would have taken a large amount of money to achieve, is now free and in the box!

The basic concept of Hyper-V Replica is, as the name suggests, to replicate VMs from one site to another (or from one live server to a backup server on the same site). Some of the possibilities that come to mind are the ability to replicate branch office VMs back to the main office location, or from a main office up into the cloud, to easily and very quickly recover in a DR situation.

How does replication work?

I have heard people, when describing Hyper-V Replica, say 'we can already do this with DFS' – you can't! DFS will only replicate a file when it has been closed and is no longer in use (also, Microsoft does not support using DFS to replicate VHDs/VHDXs for this purpose even if you turn the VM off).

Hyper-V Replica is able to replicate the files even when they are in use in your production environment. Replication is achieved by an initial replica (IR) of your data being replicated from the primary server to the replica server (this can either be sent over the wire, or a copy can be placed on physical media, taken to the backup server and then copied onto it). Once you have the initial copy in place, the primary server makes use of a change tracking module which keeps track of the write operations that happen in the virtual machine.

Every 5 minutes (this is not configurable at present) a delta replication will take place. The log file being used is frozen, a new log file is created to continue tracking changes, and the original log file is sent to the replica server (provided the last log file was acknowledged as being received). The changes can be seen by looking in the Hyper-V Replication Log (*.hrl) file that is located in the same directory as the virtual disk it is associated with.

Types of delta replicas

There are a few options for the delta replicas. In the simplest case you will have selected 'Store only the latest point for recovery', in which case all of the replication log data is merged into the VHD file that was initially replicated to the Replica server.

The second possibility is that you have chosen to store multiple recovery points, in which case the log files received every 5 minutes are stored, and every 1 hour / 12 log files (again this is not configurable) a snapshot is created to which the log files are written. The number of snapshots is determined by the number of recovery points you opted to keep when replication was configured. Once the limit is reached, a merge is initiated which merges the oldest snapshot into the base replica VHD.

The third possibility allows for application-consistent snapshots to be created. Application-consistent recovery points are created by using the Volume Shadow Copy Service in the virtual machine to create the snapshots. The log files are sent every 5 minutes as with the two examples above, but as the 12th log arrives a snapshot is created (as above) and that snapshot will be app consistent (if you chose an app-consistent snapshot every 2 hours, every other snapshot would be app consistent, and so on).
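The three options map onto parameters of the Enable-VMReplication cmdlet. A hedged sketch – the VM name, Replica server name and port are placeholders, I'm assuming Kerberos authentication, and you would obviously run only one of the three variants for a given VM:

    # Option 1: store only the latest recovery point
    Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.contoso.local" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos

    # Option 2: keep 4 additional hourly recovery points
    Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.contoso.local" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 4

    # Option 3: as option 2, but take a VSS (app-consistent) snapshot every 2 hours
    Enable-VMReplication -VMName "SQL01" -ReplicaServerName "replica.contoso.local" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 4 `
        -VSSSnapshotFrequency 2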

If at any time on the Primary server a new log cannot be created, changes continue to be tracked in the existing log and an error is registered. Replication will be suspended and a warning is reflected in the Replication Health report for the virtual machine.
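Replication state and health can also be checked from PowerShell with the in-box cmdlets (the VM name is a placeholder):

    # Replication state and health for every replicated VM on this host
    Get-VMReplication

    # More detail for one VM: pending replication size, average latency,
    # errors and the last successful replication time
    Measure-VMReplication -VMName "SQL01" | Format-List *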

Clustered Replica Servers

If your Replica server is part of a cluster you may want to move the replica VM from one node to another (or it may move automatically, for example by use of VMM or a failover). To keep track of where the replica VM is, the VMMS (Virtual Machine Management Service) uses a new object called the Hyper-V Replica Broker.

Hyper-V Replica Communications


Communications Architecture

Replica communications are achieved by the use of the 'Hyper-V Replica Network transport layer'. This transport layer is responsible for authorizing access to a Replica server as well as authenticating the Primary and Replica servers. It also provides the ability to encrypt (if you are using a certificate), compress and throttle (with the use of QoS) the data that is sent by the primary server.

The first connection to be made between the Primary server and the Replica server is the 'control channel'. The Hyper-V Replica Network Service checks to see if a control channel already exists – if it does it will use it; if not it will create the connection and then transmit a control message containing a list of the files that will be sent from the Primary server to the Replica server (this is used if a data transfer is cancelled mid-way through). The Hyper-V Replica Network Service on the Replica server forwards the message to the Hyper-V Replica Replication Engine, which then sends back a response containing information about which, if any, of the files already exist, within a timeout interval of 120 seconds.

Once the control message has been acknowledged as received by the Replica server, data transfer can begin. This data transfer is done over a different connection to the control channel – called the 'data channel'. The files to be transmitted will be either for an initial replication or for a delta replication. The Hyper-V Replica Network Service layer chunks the data into 2 MB chunks and compresses it. Once the data chunks have been received by the Replica server they are decompressed (and decrypted if encryption is in use) and put back together before being saved to the storage location specified for the replica virtual machine.

Hyper-V Replica handles virtual machine migrations from one host to another, and even storage migrations, during a data transfer. If a migration of a virtual machine takes place while data transfer is in progress, the Hyper-V Replica Network Service closes any open connections and will automatically re-establish the connection with the Replica server once the migration is complete. The control message is used to do a comparison to see which files were missed due to the cancelled connection. The exact same procedure is used if a storage migration is carried out during a data transfer.

Configuring Hyper-V Replica

Hardware Requirements:
This is fairly simple – all you need is two servers capable of running the Hyper-V role. The replica site is completely hardware and storage agnostic.

Software Requirements:
Again there is not much to this – obviously Windows Server 2012 is required, and if you want to encrypt the data during transmission (definitely recommended if you are replicating offsite to a DR centre, for example) you will need a certificate, which can either be self-signed or provided by your PKI infrastructure.

There are two possibilities for the Replica server – either stand alone or a failover cluster.

To configure a standalone Replica server:

  1. Right click on the Hyper-V server in Hyper-V Manager and select 'Hyper-V Settings'.
  2. Click on 'Enable this computer as a Replica server.' You will need to do this on both the primary and Replica servers.
  3. Next you have a couple of options for authentication: you can use Kerberos or an SSL certificate. To further enhance security you can select the servers you want to allow replication from – you can allow any server, or you can be more restrictive and specify servers by wildcard (e.g. *.contoso.local) or by actual server name (e.g. MANHYP01.contoso.local). A scripted equivalent is sketched below the screenshot.

Replication Configuration
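The same Replica server configuration can be scripted. A minimal sketch assuming Kerberos authentication on the default HTTP port, with the storage path as a placeholder; the firewall rule display name is what the in-box rule is called on my install, so double-check it on yours:

    # On the Replica server: accept replication over Kerberos/HTTP (port 80)
    # from any authenticated server, storing replica files on D:\Replica
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation "D:\Replica"

    # Allow the inbound listener through Windows Firewall
    Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"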

  

To configure clustered Replica servers:

  1. Install and configure your failover cluster as you normally would and ensure you introduce enough nodes into the cluster to meet the demand.
  2. Once you have the cluster in place and configured as required, right click on your cluster and go to 'Configure Role'.

High Availability Wizard

  3. You will need to specify a NetBIOS name for the broker service that you will use as the Client Access Point when configuring the VMs for replication. This will create a computer object in AD for you (a PowerShell sketch of the same broker setup follows below).

AD Computer Object
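If you would rather script the broker, the FailoverClusters module can create the client access point and the broker resource for you. This is a rough sketch – the group and resource names are my own, and the 'Virtual Machine Replication Broker' resource type name is an assumption based on my lab, so verify it with Get-ClusterResourceType first:

    Import-Module FailoverClusters

    # Create a role/group with a client access point (computer object + IP address)
    Add-ClusterServerRole -Name "HyperVReplica"

    # Add the broker resource to that group and make it depend on the access point
    Add-ClusterResource -Name "Replica Broker" -Group "HyperVReplica" `
        -ResourceType "Virtual Machine Replication Broker"
    Add-ClusterResourceDependency "Replica Broker" "HyperVReplica"

    # Bring the role online
    Start-ClusterGroup "HyperVReplica"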

  4. Next, right click the Replica Broker you created and click on 'Replication Settings'.

Replication Settings

  5. You will then see the same configuration wizard as in the standalone configuration. You can select Kerberos (HTTP) or certificate-based authentication (this depends on whether the remote cluster is part of the same domain or has a trust in place – if not you will need to use the certificate-based approach). As before, you can also select the servers that are allowed to replicate, by server name or wildcard, and you can select the security tag to use.

Authentication

  6. Once the server is configured for replication you can then enable replication on a per virtual machine basis. Instead of selecting a physical server to replicate to, you need to select the Client Access Point – e.g. in my case 'HyperVReplica'.

Replica Server

Replication from this point on works in exactly the same way as described earlier with the log files being transmitted every 5 minutes. The newly created virtual machine on the Replica server will be made highly available.
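The per-VM step can also be scripted from the primary host – the Client Access Point name matches the broker created above and everything else is a placeholder:

    # Point the VM at the broker's Client Access Point and enable replication
    Enable-VMReplication -VMName "FS01" -ReplicaServerName "HyperVReplica.contoso.local" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos

    # Kick off the initial replication over the network
    Start-VMInitialReplication -VMName "FS01"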

Folder Structure for Hyper-V Replica

The standard folder structure you are used to with Hyper-V is created, with the addition of a folder called 'Hyper-V Replica' containing several subfolders as seen below (the Snapshots folder is only created if recovery history is enabled), with each of the virtual machines being identified by its GUID.


Storage Paths

Networking with Hyper-V Replica

In a real world situation you would most likely be replicating your virtual machines off site to another office or to a partner's DR facility over a WAN connection. The network addressing scheme will obviously be different at this site and will cause problems for your users trying to access your servers. Microsoft has thought about this and has included the ability to configure different network settings at the Replica site.


Replica Network

To configure this you need to modify the virtual machine properties of each machine, and each of the virtual adaptors connected to the machine. This is only available on synthetic network adaptors – you can't set this for legacy adaptors. The only other pre-requisite is that your virtual machine must be running one of the following OSs: Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 SP2 (or higher), Windows 7, Vista SP2 (or higher), or Windows XP SP2 (or higher). The latest Windows Server 2012 Integration Services must be installed in the virtual machine.
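The failover TCP/IP settings can also be set per VM from PowerShell – the addresses below are examples for an assumed subnet at the Replica site:

    # Give the replica copy of the VM an address that is valid at the DR site;
    # it is injected into the guest when the VM fails over
    Set-VMNetworkAdapterFailoverConfiguration -VMName "FS01" `
        -IPv4Address 192.168.50.10 `
        -IPv4SubnetMask 255.255.255.0 `
        -IPv4DefaultGateway 192.168.50.1 `
        -IPv4PreferredDNSServer 192.168.50.5

    # Review what is currently configured
    Get-VMNetworkAdapterFailoverConfiguration -VMName "FS01"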

Using Hyper-V Replica

Once you have all this in place and you are successfully replicating your VMs to another standalone server or cluster, you have a few ways to move over to your replicated VMs.

Planned Failover – A planned failover allows you to fail over to your replica VMs in a planned and controlled manner. This can be used if you have prior warning of an event that you know is going to cause potential problems for your primary datacenter, such as a power outage or natural disaster.

In a planned failover reverse replication must be enabled (this is checked as a pre-requisite) so that when you fail back your primary VMs are up to date. The second pre-requisite for a planned failover is that the VMs must be shut down prior to the failover taking place. Because of this a planned failover does require a small amount of downtime, but no data will be lost.
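A planned failover is driven from both ends. A rough sketch of the sequence with placeholder names – shut the VM down, prepare it for failover on the primary, fail over on the replica, reverse replication and start the VM:

    # On the primary server
    Stop-VM -Name "FS01"
    Start-VMFailover -VMName "FS01" -Prepare

    # On the replica server
    Start-VMFailover -VMName "FS01"
    Set-VMReplication -VMName "FS01" -Reverse
    Start-VM -Name "FS01"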

Test Failover – A test failover is a good way to test your DR plan. When you initiate a test failover a new virtual machine is created on the Replica server with the name '<your VM name> – Test'. This VM is added to a different network (you can specify a test failover network in the VM properties) so it will not affect your live production environment. You can add a few test workstations to this test network and check everything works as required.


Test Failover Network

This type of failover does not require any downtime of your live production machines and so can safely be carried out during the working day.
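A test failover can likewise be started and cleaned up from the Replica server with a couple of cmdlets (the VM name is a placeholder):

    # Create the '<VM name> - Test' copy from the latest recovery point
    Start-VMFailover -VMName "FS01" -AsTest

    # ...do your testing on the isolated network, then remove the test VM
    Stop-VMFailover -VMName "FS01"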

The final failover is the unplanned failover – the one no-one wants!

Unplanned Failover – An unplanned failover is, as the name suggests, unplanned. This can happen if you have a hardware problem in your main datacenter, or an environmental problem – a failed generator during a power outage or a failed air-conditioning unit (from experience!) with no redundancy. This allows you to bring up your replica VMs and get your users up and running very quickly. When your primary datacenter is up and running again you can simply replicate the VMs back and get everything back to how it was.

Although this is a great addition to a DR policy, by no means is it a replacement for your backup routine! You MUST continue to perform your backups as you do now.

Which Windows Server edition is right for you?

Microsoft have recently announced their new licensing model for Windows Server 2012, with changes to the SKUs available.

As you can imagine, Microsoft have a highly 'cloud optimized' version of Server 2012 coming in the form of the Datacenter edition, which as with the 2008 versions is licensed per processor (around $4,809 Open NL pricing) and, again as with 2008, allows you to run unlimited virtual instances.

Then comes a change, as there is no longer an 'Enterprise' version; instead you jump straight down to the Standard edition (Open NL price of over $882), which Microsoft are aiming at 'low density or non-virtualized environments'. This allows for a full Windows Server installation plus two virtual instances.

You then have two further editions aimed at smaller businesses, 'Essentials' and 'Foundation'. These versions have no virtual instance rights and both have a user limit: 25 users for Essentials (with an Open NL price of $425) and 15 for Foundation (Foundation has no published pricing as it is only available through OEMs).

It is no great surprise to see the licensing model heavily geared around Hyper-V and both public and private cloud. More information can be found on the Microsoft licensing website for Server 2012 >> http://www.microsoft.com/en-us/server-cloud/windows-server/2012-editions.aspx


Server 2012 License Overview