Overview of SMB 3.0 in Windows Server 2012

History of the SMB Protocol

SMB (Server Message Block), also known as CIFS (Common Internet File System), is a network protocol (it uses TCP port 445 directly, or runs over NetBIOS using UDP ports 137 & 138 and TCP ports 137 & 139). The SMB protocol uses a client-server approach – a client makes a request and the server responds.

SMB 1.0 was first introduced into Windows to support the very early network operating systems from Microsoft. The original protocol didn’t change much until the introduction of Windows Vista and Windows Server 2008, at which point Microsoft released SMB 2.0. Some of the new features included at the time were larger buffer sizes and increased scalability (the number of open file handles, the number of shares a server can advertise, and more).

With the release of Windows 7 and Windows Server 2008 R2 came version 2.1 of the protocol, which included a small number of enhancements and a new model for the opportunistic locking of files.

When Server 2012 first started to be talked about, the SMB protocol was called version 2.2, but Microsoft have since promoted it to a major version – 3.0 – due to the huge number of changes that have been included.

What’s new in the SMB protocol with version 3.0?

As mentioned above, the list of new features in the new version of SMB is impressive.


Hyper-V & SMB Direct (RDMA)

Windows Server 2012 brings an update to SMB with so many new features that Microsoft have bumped it up an entire version and called it SMB 3.0. Many of the new features included are there to directly improve your experience with Hyper-V.

Hyper-V now supports using SMB storage for your virtual machines, which opens up a whole new range of deployment scenarios. This will benefit not only large corporates but will also make highly available virtual infrastructures available to smaller customers, without the cost of dedicated SANs and the complexity of fibre and iSCSI LUNs.

One of these new SMB 3.0 features is SMB Direct, which makes use of RDMA (Remote Direct Memory Access). RDMA allows computers on the network to send and receive data without using processor time, interrupting the OS or touching the cache. This obviously aids VM density – you will be able to have more VMs on your host machine as the processor won’t be so tied up with network operations – but it also allows for data transfer with very high throughput and ultra-low latency.

RDMA works by using a protocol on the NIC (you need to make sure you purchase an RDMA NIC – both servers will need an RDMA-compatible NIC). If this hardware is in place, it makes it possible for one computer to read data from, and write data directly to, another computer’s memory.

As mentioned above you need to have the correct hardware in place, and that involves having the right NICs, which are sometimes known as R-NICs. There are currently three different types available from various manufacturers: iWARP, RoCE and InfiniBand.

Setting up your server infrastructure to support this could not be simpler – you don’t need to do anything! When two computers start to talk they make a standard connection via TCP. Once the connection is established they share information about what they are capable of doing (data transfer also begins at the same time, so there is no overhead or latency). Once both computers have decided they are both capable of running SMB 3.0 and have RDMA-capable hardware, they will seamlessly switch.
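
If you want to check that your NICs are RDMA-capable and that SMB Direct has actually kicked in, a few PowerShell cmdlets from the NetAdapter and SmbShare modules will tell you; a quick sketch:

# Check whether the installed NICs advertise RDMA capability
Get-NetAdapterRdma

# On the SMB client, see which interfaces report as RDMA capable
Get-SmbClientNetworkInterface

# While a transfer is running, confirm the connection is using multichannel/RDMA
Get-SmbMultichannelConnection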

Using some of the new NICs that are available from vendors such as Mellanox (the ConnectX-3) you are able to get incredible speeds of up to 56Gb/s when using InfiniBand! These are amazing speeds, but when you start to pair this with SMB 3.0’s new multichannel feature and Windows Server 2012’s network teaming capabilities, the speeds possible really are incredible. Jose Barreto, who works on the file server team at Microsoft, has a blog post on how to configure the Mellanox.

Microsoft presented some stats on how SMB data storage with RDMA performs, which are definitely worth having a look at.

I would be very interested to hear from people who have started to play with this technology and see how you are finding it in a real-world environment.

RDMA-compatible NICs are definitely something to add to your shopping list next time you are purchasing server infrastructure.

Windows Server 2012 Coming September 4th!

Yesterday we found out that Windows Server 2012 has gone RTM, with an official blog post from the Windows Server engineering team! September the 4th (there will be a virtual launch event) is the day when it will be available for you to purchase and start upgrading your servers; volume licence customers will be able to get hold of a copy ‘in the next couple of weeks’.

Microsoft announced their pricing a few weeks ago – there will be the usual Datacentre & Standard editions (no Enterprise edition this time) as well as two editions aimed at smaller business customers, known as Essentials and Foundation.

To get hold of the client version – Windows 8 – you are going to have to wait a little longer, as that will be made available on October the 26th.

CSVs (v2) in Server 2012

CSVs (Cluster Shared Volumes) were introduced in Windows Server 2008 R2 for use with Hyper-V, which suddenly allowed you to migrate virtual machines from one node to another. According to Microsoft the idea of the CSV went from design board to being production ready in around 12 months, which is why when it first appeared it was only supported for use with Hyper-V – when you enabled CSVs in 2008 R2 you got the message below:

[Image: CSV support warning shown when enabling CSVs in Windows Server 2008 R2]

I don’t want to go into why to use CSVs as that has been covered many times over the past few years, but as a high-level overview a CSV is a clustered file system that allows multiple nodes in a cluster to have simultaneous access to a LUN. CSVs became popular because not only did they offer a huge improvement in fault tolerance, they also allowed the VM to become the smallest unit of failover. Before CSVs, if you wanted to move a single VM from one machine to another, each VM would need to be on its own LUN – in other words the LUN itself was the smallest unit of failover.

CSV provides I/O fault tolerance – a CSV is able to transparently handle a node, network or HBA failure. This is achieved because the application’s handle is virtualized before being handed to NTFS; if there is then a problem, the I/O can be held on the CSVFS filter until the problem has been resolved. As an example of how this works: if you have VMs running on a node utilizing CSVs over a fibre connection to your SAN and you accidentally disconnect the fibre, the I/O will be paused, redirected I/O will be established via the coordinator node, and I/O can then be resumed via this new path until you reconnect the fibre and everything is OK again. Without CSVs you would instantly have problems with the VMs – you would have downtime!

Now with Server 2012 Microsoft has gone back to the drawing board and completely rewritten CSVs from scratch to build upon their success. CSVs in Windows Server 2012 are now known as CSVs v2 (imaginative!!)

So why do we need v2 CSVs and what improvements do they bring?

Even more fault tolerance and resiliency for high availability built directly in

  • In CSV v2, SMB Multichannel is used to detect which network should be used as the CSV network. If there are multiple networks available, Windows Server will use these channels to stream I/O to your SAN over multiple channels at once.
  • CSV Block Cache – This allows Windows to provide a cache for unbuffered I/O. The cache runs at the block level, which allows caching of data right inside the VHD file. There have been caching systems like this on the market for a while, but they have always been hardware based with SSDs; this is different, as Windows Server 2012 has it built in and it utilizes system RAM. You need to factor this in when looking at your hardware, although by default the cache is 512MB, as Microsoft have found this gives the optimal level of performance for the minimum cost. This can dramatically improve a VDI deployment on Hyper-V (I plan on doing a demo of this in a future post; see the sketch just after this list for how to turn it on.)
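
In Windows Server 2012 the block cache is configured in two steps – a cluster-wide cache size plus a per-disk switch. A minimal PowerShell sketch, assuming a CSV disk resource named ‘Cluster Disk 1’ (adjust for your own names):

# Set the cluster-wide CSV block cache size, in MB (a cluster common property)
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Enable the cache on an individual CSV disk resource (a private property)
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1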

Less time spent in redirected I/O mode

  • Hugely improved backup interoperability. In 2008 R2 (CSV 1.0), when you started a backup the CSV would be moved to the node that started the backup, and it would have to be accessed by the other nodes in a redirected mode for the duration of the backup. Your backup software would have to understand CSVs and be able to work with the CSV APIs – not all backup software was updated to support this. With Server 2012 Microsoft have worked far more closely with vendors to make their software CSV-backup aware. With Windows Server 2012 you are in redirected I/O mode only for the short time the VSS snapshot is taken – for the rest of the time your nodes access the disk in direct mode.
  • You can have parallel backups running on the same or different CSV volumes and cluster nodes.

CSVs were originally only supported for Hyper-V workloads because Microsoft had to work out which file system APIs etc. they needed to optimize to work with Hyper-V, and they did not have time to do this for more than Hyper-V.

In Windows Server 2012 far more workloads are supported, including the file server workload, which opens up a whole range of possibilities for fault tolerance with your file servers! This is achieved by having multiple levels of CSV I/O redirection. There is the original ‘file system’ redirection and two new levels: file-level and block-level redirection.

Multi-subnet clusters are now supported.

CSVs are enabled by default, unlike in Windows Server 2008 R2 where you had to enable CSVs before you were able to go and assign a disk as a CSV volume. You simply right-click on the disk and click ‘Enable Cluster Shared Volume’, and unlike in Windows Server 2008 R2 there is no separate area for the CSV disks – they all show in the same window.
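
If you prefer PowerShell, the FailoverClusters module can do the same job; a minimal sketch, assuming an available cluster disk named ‘Cluster Disk 1’:

# Turn an available cluster disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# List the CSVs in the cluster
Get-ClusterSharedVolume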

[Image: Enabling a Cluster Shared Volume in Failover Cluster Manager]

In CSV 1.0 custom reparse points were used; in 2012 standard mount points are used instead, which makes things far more readable and easy to understand. E.g. you now use C:\ClusterStorage\Volume1 when trying to set up a performance counter or trying to monitor free space etc., instead of the disk GUID, which was not easily understood.

External authentication dependencies have been removed. The dependency on AD has been removed, being replaced by local user accounts which exist on each node of the cluster (they are synchronized).

You may have noticed a slow response when you try to browse the Cluster Volume folder from any node other than the coordinator node – this change stops that from happening, and it also means that a domain controller no longer needs to be up and running before you can access the CSV volumes.

A mini-filter driver is no longer used; it has been replaced by a CSV proxy file system. If you look in Disk Management you will see that the disks now show as CSVFS formatted (although it is still NTFS when you pull back the covers). This was required because, now that CSV supports more than Hyper-V, applications need to know what they are running on. This allows an application to detect that it’s on a CSV volume, which is useful if a particular piece of software is not supported on CSV – it can have a hard block coded into it. This new approach is also better than the mini-filter, as the mini-filter driver sat above the file system and intercepted I/O, which meant it bypassed things such as AV. With the new file system approach you are able to attach to this file system as you would with NTFS.
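
You can see the CSVFS proxy file system from PowerShell as well; a quick sketch using the WMI volume class (just one way to surface it):

# CSV volumes report CSVFS as their file system, even though NTFS is underneath
Get-WmiObject -Class Win32_Volume | Select-Object Name, Label, FileSystem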

[Image: CSV disks showing as CSVFS formatted in Disk Management]

Supports the huge number of improvements made in the file server area, such as:

  • BitLocker is supported on CSV volumes.
  • Full support for Offloaded Data Transfer (ODX).
  • Defrag improvements and the CHKDSK Spot Fix feature.
  • Support for Storage Spaces.

That was just a very quick overview of some of the new features and improvements made for Cluster Shared Volumes in 2012. Next I’ll look at how VMware’s offering compares to CSV v2!

Hyper-V Replica in Windows Server 2012 – Amazing!!

UPDATE – Have a look here for some of the new features that Windows Server 2012 R2 will bring to Hyper-V Replica (it’s even more amazing!)

Hyper-V Replica is one of the most highly anticipated features of Windows Server 2012. With it comes a whole new range of DR possibilities – something that would previously not have been possible, or would have taken a large amount of money to achieve, is now free and in the box!

The basic concept of Hyper-V Replica is, as the name suggests, to replicate VMs from one site to another (or from one live server to a backup server on the same site). Some of the possibilities that come to mind are the ability to replicate branch office VMs back to the main office location, or from a main office up into the cloud, to easily and very quickly recover in a DR situation.

How does replication work?

I have heard people, when describing Hyper-V Replica, say ‘we can already do this with DFS’ – you can’t! DFS will only replicate a file when it has been closed and is no longer in use (also, Microsoft does not support using DFS to replicate VHDs/VHDXs for this purpose even if you turn the VM off).

Hyper-V Replica is able to replicate the files even while they are in use in your production environment. Replication is achieved by an initial replica (IR) of your data being sent from the primary server to the replica server (this can either be over the wire, or a copy can be made to physical media, taken to the backup server and then copied onto it). Once you have the initial copy in place, the primary server makes use of a change tracking module which keeps track of the write operations that happen in the virtual machine.

Every 5 minutes (this is not configurable at present) a delta replication will take place. The log file being used is frozen, a new log file is created to continue tracking changes, and the original log file is sent to the replica server (provided the last log file was acknowledged as being received). The changes can be seen by looking at the Hyper-V Replication Log (*.hrl) file, which is located in the same directory as the VHD it is associated with.
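
Once replication is up and running, the Hyper-V PowerShell module makes it easy to keep an eye on; a quick sketch, with the VM name just an example:

# Show the replication relationship and its settings for a VM
Get-VMReplication -VMName "VM01"

# Show replication health, including last replication time and pending data
Measure-VMReplication -VMName "VM01"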

Types of delta replicas

There are a few options for the delta replicas. In the simplest case you will have selected ‘Store only the latest point for recovery’, in which case all of the replication log data is merged into the VHD file that was initially replicated to the Replica server.

The second possibility is that you have chosen to store multiple recovery points, in which case the log files received every 5 minutes are stored, and every 1 hour / 12 log files (again, this is not configurable) a snapshot is created to which the log files are written. The number of snapshots is determined by the number of recovery points you opted to keep when replication was configured. Once the limit is reached, a merge is initiated which merges the oldest snapshot into the base replica VHD.

The third possibility allows for an application-consistent snapshot to be created. Application-consistent recovery points are created by using the Volume Shadow Copy Service in the virtual machine to create snapshots. The log files are sent every 5 minutes as with the two examples above, but as the 12th log arrives the log files create a snapshot (as above) and the snapshot will be application-consistent (if you chose an app-consistent snapshot every 2 hours, every other snapshot would be app-consistent, etc.)
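
These recovery point options map onto parameters of Set-VMReplication; a minimal sketch of the third scenario (the VM name is an example):

# Keep 4 recovery points and take a VSS (application-consistent) snapshot every 2 hours
Set-VMReplication -VMName "VM01" -RecoveryHistory 4 -VSSSnapshotFrequencyHour 2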

If at any time on the Primary Server a new log cannot be created, changes continue to be tracked in the existing log and an error is registered. Replication will be suspended and a Warning is reflected in the Replication Health Report for the virtual machine.

Clustered Replica Servers

If your replica server is part of a cluster you may want to move the replica VM from one node to another (or it may move automatically, for example via VMM). To keep track of where the replica VM is, the VMMS (Virtual Machine Management Service) uses a new object called the Hyper-V Replica Broker.

Hyper-V Replica Communications

[Image: Hyper-V Replica communications architecture]

Replica communication is achieved by the use of the ‘Hyper-V Replica Network transport layer’. This transport layer is responsible for authorizing access to a Replica server as well as authenticating the Primary and Replica servers. It also provides the ability to encrypt (if you are using a certificate), compress and throttle (with the use of QoS) the data that is sent by the primary server.

The first connection to be made between the Primary server and the Replica server is the ‘control channel’. The Hyper-V Replica Network Service checks to see if a control channel exists – if it does it will use it; if not it will create the connection. It then transmits a control message which contains a list of the files that will be sent from the Primary server to the Replica server (this is used if a data transfer is cancelled mid-way through). The Hyper-V Replica Network Service on the Replica server forwards the package to the Hyper-V Replica Replication Engine, which then sends a response back, within a timeout interval of 120 seconds, containing information about which, if any, of the files already exist.

Once the control message has been acknowledged as received by the Replica server, data transfer can begin. This data transfer is done over a different connection to the control channel – called the ‘data channel’. The files to be transmitted will be either for an initial replication or for a delta replication. The Hyper-V Replica Network Service layer chunks the data into 2 Mb chunks and compresses it. Once the data chunks have been received by the Replica server they are decompressed and put back together before being saved to the location specified for the replica virtual machine.

Hyper-V Replica handles virtual machine migrations from one host to another, and even storage migrations, during a data transfer. If migration of a virtual machine takes place while a data transfer is in progress, the Hyper-V Replica Network Service closes any open connections and will automatically re-establish the connection with the Replica server once the migration is complete. The control message is used to do a comparison to see which files were missed due to the cancelled connection. The exact same procedure is used if a storage migration is carried out during a data transfer.

Configuring Hyper-V Replica

Hardware Requirements:
This is fairly simple – all you need is two servers capable of running the Hyper-V role. The replica site is completely hardware and storage agnostic.

Software Requirements:
Again there is not much to this – obviously Windows Server 2012 is required, and if you want to encrypt the data during transmission (definitely recommended if you are replicating offsite to a DR centre, for example) you will need a certificate, which can either be self-signed or provided by your PKI.

There are two possibilities for the Replica server – either stand alone or a failover cluster.

To configure a standalone Replica server:

  1. Right-click on the Hyper-V server in Hyper-V Manager and select ‘Hyper-V Settings’.
  2. Click on ‘Enable this computer as a Replica server.’ You will need to do this on both the primary and Replica servers.
  3. Next you have a couple of options for authentication: you can use Kerberos or an SSL certificate. To further enhance security you can select the servers you want to allow replication from. You can allow any server, or you can be more restrictive and specify servers by wildcard, e.g. *.contoso.local, or by actual server name, e.g. MANHYP01.contoso.local.
[Image: Replication Configuration]
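
The same standalone configuration can be scripted; a minimal sketch, run on the Replica server, assuming Kerberos authentication and a contoso.local domain (the names and paths are examples):

# Enable the server as a Replica server using Kerberos (HTTP) authentication
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $false

# Only allow replication from servers matching a wildcard, storing replicas on D:
New-VMReplicationAuthorizationEntry -AllowedPrimaryServer "*.contoso.local" -ReplicaStorageLocation "D:\Replicas" -TrustGroup "Default"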


To configure clustered Replica servers:

  1. Install and configure your failover cluster as you normally would, ensuring you introduce enough nodes into the cluster to meet the demand.
  2. Once you have the cluster in place and configured as required, right-click on your cluster and go to ‘Configure Role’.
[Image: High Availability Wizard]

  3. You will need to specify a NetBIOS name for the broker service that you will use as the Client Access Point when configuring the VMs for replication. This will create a computer object in AD for you.
[Image: AD Computer Object]

  4. Next, right-click the Replica Broker you created and click on ‘Replication Settings’.
[Image: Replication Settings]

  5. You will then see the same configuration options as in the standalone configuration. You can select Kerberos (HTTP) or certificate-based authentication (this depends on whether the remote cluster is part of the same domain or has a trust in place – if not, you will need to use the certificate-based approach). As before, you can also select the servers that are allowed to replicate, by server name or wildcard, and you can select the trust group to use.
[Image: Authentication]

  6. Once the server is configured for replication you can then enable replication on a per-virtual-machine basis. Instead of selecting a physical server to replicate to, you need to select the Client Access Point – e.g. in my case ‘HyperVReplica’.
[Image: Replica Server]

Replication from this point on works in exactly the same way as described earlier, with the log files being transmitted every 5 minutes. The newly created virtual machine on the Replica server will be made highly available.
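
Enabling replication for an individual VM can also be scripted; a minimal sketch, using my lab’s Client Access Point and an example VM name:

# Point the VM at the Replica Broker's Client Access Point and kick off the initial replication
Enable-VMReplication -VMName "VM01" -ReplicaServerName "HyperVReplica.contoso.local" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "VM01"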

Folder Structure for Hyper-V Replica

The standard folder structure you are used to with Hyper-V is created, with the addition of a folder called ‘Hyper-V Replica’ containing several subfolders, as seen below (the Snapshots folder is only created if recovery history is enabled), with each of the virtual machines being identified by its GUID.

[Image: Storage Paths]

Networking with Hyper-V Replica

In a real-world situation you would most likely be replicating your virtual machines off site, to another office or to a partner’s DR facility, over a WAN connection. The network addressing scheme will obviously be different at this site and will cause problems for your users trying to access your servers. Microsoft has thought about this and has included the ability to configure different network settings at the Replica site.

[Image: Replica Network]

To configure this you need to modify the virtual machine properties of each machine, and each of the virtual adaptors connected to the machine. This is only available on synthetic network adaptors – you can’t set this for legacy adaptors. The only other prerequisite for this to work is that your virtual machine must be running one of the following OSes: Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 SP2 (or higher), Windows 7, Vista SP2 (or higher), or Windows XP SP2 (or higher). The latest Windows Server 2012 Integration Services must be installed in the virtual machine.
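
The failover IP settings can be scripted with the Hyper-V module; a hedged sketch, with the VM name and addressing purely as examples:

# Inject the IP configuration the VM should use once it fails over to the Replica site
Set-VMNetworkAdapterFailoverConfiguration -VMName "VM01" -IPv4Address "192.168.50.10" -IPv4SubnetMask "255.255.255.0" -IPv4DefaultGateway "192.168.50.1" -IPv4PreferredDNSServer "192.168.50.1"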

Using Hyper-V Replica

Once you have all this in place and you are successfully replicating your VMs to another standalone or clustered server, you have a few ways to move over to your replicated VMs.

Planned Failover – A planned failover allows you to fail over to your Replica VMs in a planned and controlled manner. This can be used if you have prior warning of an event that you know is going to cause potential problems to your primary datacenter, such as a power outage or natural disaster.

In a planned failover, reverse replication must be enabled (this is checked as a prerequisite) so that when you fail back, your Primary VMs are up to date. The second prerequisite of a planned failover is that the VMs must be shut down prior to the failover taking place. Because of this a planned failover does require a small amount of downtime, but no data will be lost.

Test Failover – A test failover is a good way to test your DR plan. When you initiate a test failover a new virtual machine is created on the replica server with the name ‘<your VM name> – Test’. This VM is added to a different network (you can specify a test failover network in the VM properties) so it will not affect your live production environment. You can add a few test workstations to this test network and check everything works as required.

[Image: Test Failover Network]

This type of failover does not require any downtime of your live production machines and so can safely be carried out during the working day.

The final failover is the unplanned failover – the one no one wants!

Unplanned Failover – An unplanned failover is as the name suggests. This can happen if you have a hardware problem in your main datacenter, or an environmental problem – a failed generator during a power outage or a failed air-conditioning unit (from experience!) – and no redundancy. This allows you to bring up your replica VMs and get your users up and running very quickly. When your primary datacenter is up and running again you can simply replicate the VMs back and get everything back to how it was.
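
All three failover types are also exposed in PowerShell; a simplified sketch (the VM name is an example, and for a planned failover the VM must already be shut down):

# Test failover – run on the Replica server; creates the '<VM name> - Test' copy
Start-VMFailover -VMName "VM01" -AsTest
# ...and remove the test VM again once you are done
Stop-VMFailover -VMName "VM01"

# Planned failover – prepare the primary, then fail over and reverse on the Replica
Start-VMFailover -VMName "VM01" -Prepare     # on the primary server
Start-VMFailover -VMName "VM01"              # on the Replica server
Set-VMReplication -VMName "VM01" -Reverse    # start replicating back
Start-VM -Name "VM01"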

Although this is a great addition to a DR policy, by no means is it a replacement for your backup routine! You MUST continue to perform your backups as you do now.

NIC Teaming in Server 2012

Having spent the last few days battling with the HP Network Configuration Utility (HP NCU), Microsoft’s decision to do away with the need for third-party NIC teaming software is a very welcome addition to Server 2012.

Microsoft have never officially supported NIC teaming for Hyper-V (if you had problems and needed to talk to Microsoft support, more than likely you would have had to dissolve the team before you could progress your support call very far). I have seen many network-related Hyper-V issues that have boiled down to either a NIC driver or a third-party teaming utility not playing nicely.

Built-in NIC teaming has been a long-requested feature; VMware have had NIC teaming for a while, so it’s no great surprise Microsoft have decided to include it. A great feature of this teaming solution is that you are able to take NICs from any manufacturer and team them, e.g. I could team an Intel and a Broadcom card into a single team. This gives you some great redundancy advantages – if you upgrade the Intel driver and it stops working, no problem: the Broadcom can keep the team up and running.

Creating a team is simple – just go to Server Manager, locate the teaming link under the server properties and follow the very easy wizard.

Next just select the NIC ports you want to be included in the team. You can have a whopping 32 ports per team – using LBFO I could have 32 x 10Gb ports (although that would come in at a hefty price), and even 32 x 1Gb ports allows for some extreme bandwidth! This will be very useful for networks such as the live migration network, especially as you can now have multiple simultaneous migrations happening at once.

Next give your new team a name and off you go. There are a few additional options you can set, such as:

Team Mode:

Static Teaming: This mode is supported by most server-class switches. As the name suggests, this is a manual configuration on the switch and the server to form the team.

Switch Independent: You don’t need to tell the switch anything, and you don’t have to connect to different switches – although you can (and should) for better redundancy.

LACP (Link Aggregation Control Protocol): This will dynamically identify the links between the server and the switch, allowing for the automatic creation of a team. LACP can also expand or reduce the number of NICs in the team.

Load Balancing Mode:

Hyper-V Port: The switch will balance the traffic on multiple links, based on the destination MAC address for the virtual machine.

Address Hash: This is a simple algorithmic approach. Based on components of the packet it creates a hash, then sends packets with that hash to one of the available NICs.

Stand-by Adaptor: The name says it all – you can have an active/active or active/passive set-up by selecting the NIC you want to be waiting in the wings in case one of the active ports runs into problems.

Once the team has been created you can easily add or remove ports as required. You will see your newly created team in the adaptor settings of the ‘Network & Sharing Center’ as you would any other adaptor, and this is where you can set your IP addressing requirements.

Obviously all of this can also be achieved with PowerShell using the ‘NetLbfo’ module:

PS C:\Users\Administrator> Get-Command -Module NetLbfo

CommandType     Name
-----------     ----
Function        Add-NetLbfoTeamMember
Function        Add-NetLbfoTeamNic
Function        Get-NetLbfoTeam
Function        Get-NetLbfoTeamMember
Function        Get-NetLbfoTeamNic
Function        New-NetLbfoTeam
Function        Remove-NetLbfoTeam
Function        Remove-NetLbfoTeamMember
Function        Remove-NetLbfoTeamNic
Function        Rename-NetLbfoTeam
Function        Set-NetLbfoTeam
Function        Set-NetLbfoTeamMember
Function        Set-NetLbfoTeamNic

To create your team:
New-NetLbfoTeam -Name "ProductionTeam1" -TeamMembers LAN04,LAN02,LAN03 -TeamingMode Static

PS C:\Users\Administrator> Get-NetLbfoTeam

Name                   : Production Team 1
Members                : {Ethernet 4, Ethernet 3, Ethernet 2}
TeamNics               : Production Team 1
TeamingMode            : SwitchIndependent
LoadBalancingAlgorithm : TransportPorts
Status                 : Down
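
Day-to-day team changes are just as simple; a quick sketch of a couple of common operations (the team and adaptor names are examples):

# Add another port to an existing team
Add-NetLbfoTeamMember -Name "LAN05" -Team "ProductionTeam1"

# Park one member as a hot standby (active/passive)
Set-NetLbfoTeamMember -Name "LAN04" -AdministrativeMode Standby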

One VERY cool feature is that all of this will also work within a virtual machine! This means a VM will be able to be connected to more than one virtual switch, providing great redundancy all the way through the network layer, from physical switch through to VM.

I think this is a great new feature for Windows Server 2012 and I’m sure many people will be recreating their teams with this ASAP and uninstalling their 3rd party vendor applications.

RemoteFX – Windows Server 2008 R2 vs. Windows Server 2012

Although RDP was great in a LAN environment, it was not always best suited to the WAN. Thankfully RemoteFX has seen huge improvements in Windows Server 2012 – a noteworthy point for both Citrix HDX and VMware PCoIP.

As a quick reference, some of the improvements are:

  • Support for 10 touch points in a remote session and pressure sensitivity
  • Full support for Microsoft Lync
  • True Single Sign-on
  • Complete USB redirection – ANY USB device can be redirected to the remote session (scanners, printers, cameras etc.) and it will be secure, which means no one else will be able to access the USB devices you are redirecting
  • And the best feature – in my opinion – is RemoteFX Media Remoting

There has been a change in the protocol used for some of the content transmitted, if RemoteFX believes it is necessary. This is really useful when you are transmitting video. Traditionally RDP would transmit the video over TCP and would then re-transmit any dropped packets. However, this is pointless when you are watching a fast-moving video, as by the time the packets have been re-transmitted they are no longer required – making UDP a perfect alternative. As with all the new features, this is controllable via GPOs, so you have complete control over what RemoteFX is doing.

RemoteFX will now look at what it needs to transmit and optimize accordingly. For example, RemoteFX will send text using a new codec that transmits the data very quickly using very little bandwidth. Images will be sent as a base JPEG and then progressive rendering will build up the image (think old-fashioned web browsing). Video will be re-encoded as H.264 and transmitted to the endpoint, which will then decode it. If the endpoint is capable of playing video, the server will not decompress the video and then transmit it; instead it will transmit the compressed video and allow the endpoint to decode it.

All of this is adaptive depending on the bandwidth available, so if bandwidth is tight the host will use more CPU and begin to compress more, to help get the content to the endpoint as quickly as possible.

No GPU needed! Microsoft have developed a software GPU, so there will no longer be a need to purchase an expensive graphics card for your server for a basic Aero experience (you will still put a GPU in for CAD/CAM applications etc.). Microsoft have done this because in Windows 8 you will not be able to turn the Aero interface off. The new software GPU will still have full DirectX support.

And lastly there will be a Windows 8 Metro app (obviously!) – this will be great for people who are using this on their tablets over a 3G connection!

All of the features mentioned above are available for both physical and virtual hosts!

References:

http://technet.microsoft.com/en-us/edge/technet-radio-it-time-wans-lans-and-remotefx-in-windows-server-8-beta