History of the SMB Protocol
SMB (Server Message Block), also known as CIFS (Common Internet File System), is a network protocol (it runs directly over TCP port 445, or over NetBIOS using ports 137 & 138 on UDP and 137 & 139 on TCP). The SMB protocol uses a client-server approach – a client makes a request and the server responds.
SMB 1.0 was first introduced into Windows to support Microsoft's very early network operating systems. The original protocol didn't change much until the introduction of Windows Vista and Windows Server 2008, at which point Microsoft released SMB 2.0. Some of the new features included at the time were larger buffer sizes and increased scalability (a greater number of open file handles and of shares a server can advertise, and more).
With the release of Windows 7 and Windows Server 2008 R2 came version 2.1 of the protocol, which included a small number of enhancements and a new mechanism for the opportunistic locking of files.
When Server 2012 was first being talked about, the SMB protocol was referred to as version 2.2, but Microsoft has since promoted it to a major version, 3.0, due to the large number of changes included.
What’s new in the SMB protocol with version 3.0?
As mentioned above, the list of new features in this version of SMB is impressive; it includes:
SMB Transparent Failover:
SMB Transparent Failover allows a file server to be continuously available. This is especially important when you consider some of the new technologies and features that are available in products such as Windows Server 2012 and Microsoft SQL Server 2012. Features such as the ability to store your virtual machines or locate SQL database files on an SMB file share rely on the storage being very highly available, which would not be possible for a file server without these SMB 3.0 features. For example, a virtual machine that lost its connection to its storage would crash, and if your SQL server lost its storage access the database would go into an offline state.
With transparent failover you will have zero downtime (just a small I/O delay while the client fails over). If you were in the middle of saving a huge Excel file or moving a 30GB file and the file share you were using suddenly became unavailable, all you would notice is a slight pause – no Excel warning messages, no failed file transfer. This transparent failover feature works for both planned events (e.g. you need to move all of your users to another server while you reboot to install Windows patches) and unplanned events (e.g. someone accidentally powers off one of your live file servers).
When you fail over to a different file server, all of your state (the handles, locks & attributes) is also seamlessly moved over to the new server. This is easy to do in a planned failover situation (think live migration, but with handles and state); however, in an unplanned situation where the server is there one second and gone the next, it is not quite so simple. A client machine will not realise what has happened and would traditionally have to wait for several TCP timeouts before concluding the server was no longer accessible. With transparent failover, however, the cluster is able to tell the client there was a problem by using the SMB 'witness protocol'.
This does obviously rely on both the client and the server supporting SMB 3.0 features, i.e. Windows Server 2012 and Windows 8.
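As a sketch of what this looks like in practice, a share can be created as continuously available on a clustered file server with the New-SmbShare cmdlet (the share name, path and account below are hypothetical examples):

```powershell
# Create a share with continuous availability (transparent failover) enabled.
# Share name, path and access group are illustrative only.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMs" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\Hyper-V-Hosts"

# Check the setting on an existing share:
Get-SmbShare -Name "VMStore" | Select-Object Name, ContinuouslyAvailable
```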
SMB Scale-Out
SMB Scale-Out is the SMB feature that allows you to create file shares that allow simultaneous access to files through any file server in the file server cluster. These file servers are known as 'Scale-Out File Servers' (SOFS). The Scale-Out File Server is a new resource type for file server clustering that allows you to create true active-active file servers.
This new active-active architecture allows your file servers to become very highly available and network load balanced (by using DNS round robin). This is obviously a benefit for your users, but its main purpose is to support virtual machine storage that may well be sitting on a network share. As the demand on the file server grows you can simply add another server to the cluster, adding resources and bandwidth to your file server services.
There are a few requirements to get this running; for example, your file servers will need to use CSV v2 (Cluster Shared Volumes) for your storage, and you will need to be prepared to lose a few features, such as the ability to set user quotas.
All of your file servers are considered clones ('Scale Out File Server Cluster Clone Resources'), with one of the nodes in the cluster being the 'Scale Out File Server Cluster Clone Resources Leader' – the leader is the node on which the cluster resource shows as online in your cluster. The leader is similar to the coordinator node for CSVs (Cluster Shared Volumes).
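For illustration, the Scale-Out File Server role can be added to an existing failover cluster and a share created on a CSV path with PowerShell (the role and share names below are hypothetical examples):

```powershell
# Add the Scale-Out File Server role to the current failover cluster.
# "SOFS01" is an illustrative role name.
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Shares for a SOFS live on a Cluster Shared Volume path, for example:
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\Shares\VMs" `
    -ContinuouslyAvailable $true
```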
SMB Direct
SMB Direct uses RDMA (Remote Direct Memory Access), which allows computers on the network to send and receive data without using processor time, interrupting the OS, or copying through caches. This allows for very high throughput with ultra-low latency by letting the NICs themselves do the work.
I have looked at this in more detail here.
SMB Multichannel
SMB Multichannel allows multiple connections to be established for a data transfer. Traditionally, if you wanted to transfer a 50GB file from one server to another using a server with 2x 1Gb network adapters (not teamed), then a single SMB session would be established using one of the NICs and a single CPU core. This would be slow and would more than likely max out the CPU core being used. It was possible, using RSS (Receive Side Scaling) capable NICs, to have that single SMB session spread over multiple cores – but you still only had one session.
The second possibility is to team the NICs on that server to give you 1x 2Gb (using the 2x 1Gb NICs) plus RSS. You will have increased the possible bandwidth, but you still only have one session going over the 2Gb pipe. That is where SMB Multichannel comes in.
If the NIC is RSS-capable then the server will create multiple SMB connections, allowing for simultaneous data transfer of that 50GB file, and with RSS you avoid the potential CPU bottleneck. If the server is using teamed NICs you have the additional advantage that those multiple sessions get the increased bandwidth – SMB Multichannel is an addition to, NOT a replacement for, NIC teaming.
SMB Multichannel is enabled by default; there is no need to turn anything on. In fact, you have to explicitly turn it off, which you can do via PowerShell:
Server Side – Set-SmbServerConfiguration -EnableMultiChannel $false
Client side – Set-SmbClientConfiguration -EnableMultiChannel $false
If you do disable multichannel for any reason, you will also be disabling SMB Direct, as SMB Multichannel is what is used to detect the RDMA capability of a NIC.
The third option is to use RDMA-capable NICs (as a side note, you should not team RDMA NICs – if you do, the RDMA capabilities will not be available). SMB will detect that the NIC is RDMA capable and will create multiple RDMA connections for the session (two per interface). This gives the best experience, as you will be able to take advantage of the very high throughput and ultra-low latency offered by RDMA NICs. As you can't team them, you will need to use multiple RDMA NICs to achieve fault tolerance.
There are very few requirements to get this working – both computers must be running Windows Server 2012 or Windows 8, you need multiple network connections, and the NICs must either support RSS or RDMA, or be teamed.
You can verify whether your computer & NICs are capable of using SMB Multichannel, and view the rest of the SMB configuration, using PowerShell:
Get-SmbClientConfiguration / Get-SmbServerConfiguration – This will list all of your SMB configuration
Get-SmbClientNetworkInterface / Get-SmbServerNetworkInterface – This will show your network interfaces and their RSS & RDMA capabilities.
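Once a transfer is actually running, you can also inspect the individual connections multichannel has opened. A quick sketch, assuming a Windows 8 / Server 2012 client:

```powershell
# List the individual SMB connections multichannel has established
# (run on the client while a transfer to the server is in progress):
Get-SmbMultichannelConnection

# Force SMB to re-scan the available interfaces, e.g. after adding a NIC:
Update-SmbMultichannelConnection
```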
SMB Encryption
Although you have long been able to encrypt data while it sits on your storage using third-party products, and more recently with Microsoft's BitLocker technology, until now there has been no built-in way to encrypt the data while it is on the wire, in transit.
SMB 3.0 includes a way to encrypt the data, either per server or even per share, which makes it very flexible. All of this is available with hardly any overhead, especially if you are using a compatible processor (for example an Intel Core i5 or i7 with AES instruction support), which allows the encryption operation to be offloaded.
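As a sketch, encryption can be required for a single share or for the whole server via PowerShell (the share name below is a hypothetical example):

```powershell
# Require encryption on a single share ("Finance" is illustrative):
Set-SmbShare -Name "Finance" -EncryptData $true

# Or require encryption for all SMB traffic to this server:
Set-SmbServerConfiguration -EncryptData $true
```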
VSS Support for SMB File Shares
Until SMB 3.0, VSS only worked when using block storage. With Windows Server 2012, however, a new VSS provider has been included (the 'File Share Shadow Copy Provider') along with a new VSS agent (the 'File Share Shadow Copy Agent'). These two VSS components allow a VSS-aware application (vendors will need to update their applications to support share names) to perform a standard VSS snapshot. This will be very useful when you are hosting your virtual machines on an SMB share. A very detailed TechNet blog post on this can be found here.
This was just a very high-level overview of the new features. I am slowly starting to put together posts for each of the features in more detail.